Hi, I've tried some scipy optimization routines and they work great! But I wondered: why, historically, was the "greater than or equal" type chosen for inequality constraints? This is inconsistent with the classical formulation of non-linear programming problems. Thanks!

_______________________________________________
SciPy-User mailing list
[hidden email]
https://mail.python.org/mailman/listinfo/scipy-user
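For readers following along, the convention in question is scipy's `'ineq'` constraint type, which is interpreted as fun(x) >= 0. A minimal sketch (the toy problem here is my own, not from the thread):

```python
# scipy.optimize.minimize interprets a constraint of type 'ineq' as
# fun(x) >= 0 -- the "greater than or equal" convention being discussed.
from scipy.optimize import minimize

# Minimize (x - 1)^2 subject to x >= 2, expressed as x - 2 >= 0.
res = minimize(
    lambda x: (x[0] - 1.0) ** 2,
    x0=[0.0],
    method="SLSQP",
    constraints=[{"type": "ineq", "fun": lambda x: x[0] - 2.0}],
)
print(res.x)  # the constrained minimizer sits on the boundary x = 2
```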
I would assume that it's because the "or equal to" option allows greater flexibility: if that criterion is allowed by the problem, and the algorithm can find such a solution (e.g., by checking all such corner points), then that's better than not even providing the option, yes? And if you try to "rig" the strict-inequality approach by allowing a little extra room around the corner, then the exact solution might not be found, yes?

Indeed, I'm no expert, but I did take a course in this, and IIRC, if your problem allows for equality, then you _must_ separately check all the corners, yes? (In other words, what you state about the "classical formulation" is not what I was taught: I was taught that the specifics of the problem dictate whether any given inequality should be strict or "weak.")

DLG

On Thu, May 25, 2017 at 1:49 PM Kirill Balunov <[hidden email]> wrote:
I'm sorry, perhaps I should have formulated the question more clearly. David, you are totally right. What I mean by "classical" is the "less than or equal" type. Of course it's a question of a sign, but still...

-gdg

2017-05-26 0:07 GMT+03:00 David Goldsmith <[hidden email]>:
Ah, yes, that convention I am familiar with; maybe it is to accommodate the "inflexibility" of less numerate potential users (who may be fixated, e.g., on wanting to maximize profit or yield)? Of course, at some point such people may want to minimize something, so hopefully they have someone around to tell them to simply multiply by negative one. ;-)

DLG

On Thu, May 25, 2017 at 2:29 PM Kirill Balunov <[hidden email]> wrote:
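The "multiply by negative one" trick above can be sketched as follows (the toy objective is my own example):

```python
# scipy only minimizes, so to maximize f(x) = -(x - 3)^2 we minimize
# its negation; the maximizer of f is the minimizer of -f.
from scipy.optimize import minimize_scalar

f = lambda x: -(x - 3.0) ** 2           # function to maximize; peak at x = 3
res = minimize_scalar(lambda x: -f(x))  # minimize the negation instead
print(res.x)  # approximately 3.0
```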
No, no, the constraints only affect the feasible set of the problem; min or max depends on the sign of the objective function. From a mathematical point of view, the problem is that the KKT conditions are derived for the standard formulation (with "less than ...") of NLP.

Cheers,
-gdg

2017-05-26 0:36 GMT+03:00 David Goldsmith <[hidden email]>:
The KKT reference exceeds my numeracy... Anyway, I doubt this is the case, but if it's really a problem, you can always write wrappers to automate the desired transformations, yes?

DLG

On Thu, May 25, 2017 at 2:53 PM Kirill Balunov <[hidden email]> wrote:
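The wrapper David suggests might look like this; the helper name `as_scipy_ineq` is hypothetical, not a scipy API, and the problem is a made-up illustration:

```python
# Sketch of a wrapper that converts a constraint written in the
# classical form g(x) <= 0 into scipy's fun(x) >= 0 convention,
# simply by negating it.
from scipy.optimize import minimize

def as_scipy_ineq(g):
    """Turn a classical constraint g(x) <= 0 into a scipy 'ineq' dict."""
    return {"type": "ineq", "fun": lambda x, g=g: -g(x)}

# Classical form: x - 2 <= 0, i.e. x <= 2. Minimize (x - 3)^2 over it.
res = minimize(
    lambda x: (x[0] - 3.0) ** 2,
    x0=[0.0],
    method="SLSQP",
    constraints=[as_scipy_ineq(lambda x: x[0] - 2.0)],
)
print(res.x)  # the minimizer is clipped to the boundary x = 2
```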
David, the KKT conditions are first-order necessary conditions which are applicable if some assumptions are satisfied (regularity, continuity, ...). The points which satisfy KKT are said to be stationary (or candidate) points (min, max, or inflection). Internally these conditions use the gradient of the Lagrange function f(x) + lambda^T * h(x) + mu^T * g(x) (this is the classical notation), where h(x) are the equality and g(x) the inequality (g(x) <= 0) constraints, respectively. There is also a restriction, among others, for `mu` to be non-negative. This is the classical formulation of NLP (which looks for a minimum). Of course mathematics is simply a human game with some rules, so you can choose from four cases (for the "inequality term"):

1) + mu^T * g(x), mu >= 0, g(x) <= 0 (classical)
2) - mu^T * g(x), mu >= 0, g(x) >= 0
3) + mu^T * g(x), mu <= 0, g(x) >= 0 (awkward)
4) - mu^T * g(x), mu <= 0, g(x) <= 0 (awkward)

The last two are awkward because they roughly break the duality principle (the related maximization problem, but in the Lagrange multipliers). The first two look good, but the second one is very unusual. That is why I ask about the historical reasons: why was this form chosen?

-gdg

2017-05-26 1:03 GMT+03:00 David Goldsmith <[hidden email]>:
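A small numeric sanity check of conventions (1) and (2), on a toy problem of my own choosing (min x^2 subject to x >= 1, active at x* = 1), shows they yield the same non-negative multiplier:

```python
# Stationarity of the Lagrangian at the active constraint x* = 1
# for f(x) = x^2, under two of the sign conventions above.
x_star = 1.0
df = 2.0 * x_star   # f'(x*) for f(x) = x^2

# (1) classical: g(x) = 1 - x <= 0, L = f + mu*g, so f' + mu*g' = 0.
mu1 = -df / (-1.0)  # g'(x) = -1  ->  mu1 = 2
# (2) scipy-style: g(x) = x - 1 >= 0, L = f - mu*g, so f' - mu*g' = 0.
mu2 = df / 1.0      # g'(x) = +1  ->  mu2 = 2

print(mu1, mu2)  # both multipliers agree and are non-negative
```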
I'm sorry, I take it that this is because the only solver for constrained optimization in scipy is SLSQP? But it is not a good idea to write about a particular case as if it were the general one. In the documentation: "In general, the optimization problems are of the form: ..." Which is not true in general :)

-gdg

2017-05-26 14:18 GMT+03:00 Kirill Balunov <[hidden email]>:
It looks like it could simply be a typo: could you be troubled to file a ticket? Worst case: someone will explain, in that record, why it isn't a typo; best case: it will get fixed. Thanks for your conscientiousness.

DLG

On Fri, May 26, 2017 at 4:40 AM Kirill Balunov <[hidden email]> wrote: