
Both are severely underused for sure. But it didn't help that for a long time open source MILP solvers were pretty mediocre.

HiGHS didn't exist, SCIP was "non-commercial", CBC was OK but has had development struggles for a while, and GLPK was never even remotely close to the commercial offerings.

I think if something like Gurobi or Hexaly were open source, you'd see a lot more use since both their capabilities and performance are way ahead of the open source solutions, but it was the commercial revenue that made them possible in the first place.

Using CP-SAT from Google OR-Tools as a fake MILP solver by scaling the real (continuous) variables is pretty funny though, and it works unreasonably well (especially if the problem is highly combinatorial, since there's a SAT solver powering the whole thing).
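
As a rough sketch of that trick (toy numbers, made-up problem; CP-SAT only handles integers, so the "continuous" variables are stored pre-multiplied by a fixed scale factor):

    from ortools.sat.python import cp_model

    SCALE = 1000  # three decimal digits of precision for the "continuous" variables

    model = cp_model.CpModel()
    # x and y are conceptually continuous in [0, 10]; store them as x*SCALE, y*SCALE.
    x = model.NewIntVar(0, 10 * SCALE, "x")
    y = model.NewIntVar(0, 10 * SCALE, "y")

    # x + 2y <= 14 becomes (x*SCALE) + 2*(y*SCALE) <= 14*SCALE after scaling.
    model.Add(x + 2 * y <= 14 * SCALE)
    model.Add(3 * x - y >= 0)

    model.Maximize(3 * x + 4 * y)  # objective comes back multiplied by SCALE

    solver = cp_model.CpSolver()
    status = solver.Solve(model)
    if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        print("x =", solver.Value(x) / SCALE, "y =", solver.Value(y) / SCALE)
        print("obj =", solver.ObjectiveValue() / SCALE)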



SCIP going Apache definitely improved the landscape, but Couenne (global MINLP), Bonmin (local MINLP), and IPOPT (local NLP, but e.g. [1] gets you MINLP) are solid and have been around for a long time. And anecdotally, I've seen a lot more issues with SCIP (presolvers and tolerances, mostly) than with other solvers. Still, it's replaced Couenne in my toolbox, and Minotaur has replaced Bonmin, but IPOPT remains king of its domain.

1. E.g. https://en.wikipedia.org/wiki/Randomized_rounding.


Many thanks to you and the parent comment for providing names to search when looking for implementations.

A basic question, before searching these: are they "input compatible"? I mean, can a problem be formulated once and then be solved by a variety of solvers? Or does each one of them use its own input language?


For MILP there isn't one single standard, but multiple competing solutions.

Nearly every solver supports the MPS format, but that's a really old format straight from the era of punch cards, and it sucks.

Many solvers support the nl format, which is a low-level format spat out by the AMPL tool (commercial software for mathematical modeling).

Many solvers support the CPLEX LP format, which is a nice human-readable and -writable format.
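
For a taste, a tiny model in LP format might look something like this (made-up toy problem):

    \ Maximize 3x + 2y subject to two constraints, with x integer
    Maximize
     obj: 3 x + 2 y
    Subject To
     c1: x + y <= 4
     c2: x + 3 y <= 6
    Bounds
     0 <= x <= 10
     0 <= y <= 10
    General
     x
    End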

Google OR-Tools includes an API for mathematical modeling that supports the relevant open source MIP solvers, its own CP solver, and (I think) Gurobi. There are Python and Julia packages that try to do the same (though rather than calling the solver APIs directly, they usually spit out a problem in nl format).
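
A sketch of what that OR-Tools modeling API looks like from Python, using the classic pywraplp wrapper (the toy model here is made up):

    from ortools.linear_solver import pywraplp

    # Create a solver backed by CBC; other registered backends include
    # "SCIP", "GLOP", and "GUROBI" if you have a license.
    solver = pywraplp.Solver.CreateSolver("CBC")

    x = solver.NumVar(0.0, 10.0, "x")  # continuous
    y = solver.IntVar(0, 5, "y")       # integer

    solver.Add(x + 2 * y <= 14)
    solver.Maximize(3 * x + 4 * y)

    if solver.Solve() == pywraplp.Solver.OPTIMAL:
        print(x.solution_value(), y.solution_value())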

MiniZinc supports various open source MILP solvers plus various CP solvers. Very nice language, very high level.

For MINLP the only standard I know of is OSiL, but support for it is spotty; it's mostly supported by open source tools, I think.


This is a good list --- would also add Pyomo. There's plenty of nuance to algebraic modeling languages like Pyomo and JuMP, but at base you're just writing mathematical expressions in Python for Pyomo (or in Julia for JuMP) to parse and transform into the target format. E.g. taking the objective from the Weapon Target Assignment problem (https://en.wikipedia.org/wiki/Weapon_target_assignment_probl...):

    # Expected total value of surviving targets: for each target j, its value
    # times the product over weapon types i of (survival rate)^(assignment variable).
    # `prod` is presumably pyomo.environ.prod (math.prod over Pyomo expressions
    # would also work).
    def objective(model: WtaModel) -> SumExpression:
        return sum(
            model.target_values[t_j]
            * prod(
                model.survival_rates[w_i][t_j]
                ** model.target_selected[w_i, t_j]
                for w_i in model.weapon_types
            )
            for t_j in model.targets
        )
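
Then, roughly (assuming the rest of the model is built), you just pick a backend and Pyomo handles the translation:

    # Hypothetical usage: Pyomo hands the parsed expressions to whichever
    # installed solver you name; this objective is nonlinear, so something
    # like Couenne or Ipopt rather than a pure MILP solver.
    from pyomo.environ import SolverFactory
    SolverFactory("couenne").solve(model)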


Also GAMS


JuMP [1] in Julia is one way to get that - it uses a common DSL that gets translated to work with any of a variety [2] of solvers.

[1] https://jump.dev/ [2] https://jump.dev/JuMP.jl/stable/installation/#Supported-solv...


Didn't know about Randomized rounding. Is there any solver with built-in support for that? To turn a strong NLP solver into a fast but approximate MINLP solver?


Not necessarily randomized rounding in particular, but many solvers use rounding methods internally e.g. as part of a feasibility pump. Minotaur and SCIP definitely do this.
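
For a self-contained sketch of the basic randomized-rounding idea (a made-up covering relaxation solved with scipy's linprog, not how any particular solver's feasibility pump works):

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)

    # Toy covering relaxation: minimize x1 + x2 + x3 with x1 + x2 >= 1 and
    # x2 + x3 >= 1, 0 <= x <= 1 (linprog wants <= constraints, so negate).
    c = np.ones(3)
    A_ub = -np.array([[1.0, 1.0, 0.0],
                      [0.0, 1.0, 1.0]])
    b_ub = -np.ones(2)

    relaxed = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 3)

    # Randomized rounding: fix each 0/1 variable to 1 with probability equal
    # to its fractional LP value, then check feasibility (repair/retry if not).
    x_int = (rng.random(3) < relaxed.x).astype(int)
    feasible = np.all(A_ub @ x_int <= b_ub)
    print(relaxed.x, x_int, feasible)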



