Nonlinear Programming¶
Introduction¶
Compared with the linear and quadratic expressions and objectives we have discussed in the previous sections, nonlinear programming is more general and can handle a wider range of problems. In this section, we will introduce how to use PyOptInterface to formulate and solve structured nonlinear programming problems.
Note
Before trying out the code snippets, please ensure that you have installed PyOptInterface with the correct dependencies via pip install pyoptinterface[nlp], together with a nonlinear solver (IPOPT), as described in the Getting Started section.
Structures in nonlinear programming¶
Although nonlinear programming allows arbitrary nonlinear functions to appear in constraints and objective functions, most nonlinear terms in practical problems can be categorized into a few common structures.
Generally, a nonlinear function mapping from \(\mathbb{R}^n\) to \(\mathbb{R}^m\) can be represented as a parameterized function \(f(x, p)\), where \(x \in \mathbb{R}^n\) is the input variable and \(p \in \mathbb{R}^k\) is a set of parameters. The same function can be applied to multiple groups of variables and parameters.
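As a plain-Python illustration of this idea (not PyOptInterface API; the function name f_demo is made up for this sketch), the same parameterized function can be evaluated over several groups of inputs and parameters:

```python
import math

# Plain-Python illustration: one parameterized function f(x, p) = p * e^x
# applied to several (x, p) groups.
def f_demo(x, p):
    return p * math.exp(x)

groups = [(0.0, 1.0), (0.0, 2.0), (1.0, 1.0)]
values = [f_demo(x, p) for x, p in groups]
```

In an optimization model, this corresponds to registering the function once and instantiating it many times with different variables and parameters.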
In PyOptInterface, a nonlinear function can be declared as a normal Python function that accepts two arguments: vars and params. The vars argument is a “magic” object that maps variable names to their values, and the params argument is another “magic” object that maps parameter names to their values. The function should return its result, which can be a scalar or a list of outputs. Of course, you can omit the params argument if the function has no parameters.
To make things easier, we start from a very simple bivariate function \(f(x, y, p) = p e^x y^2\). The function has two variables \(x\) and \(y\) and one parameter \(p\). It can be implemented as follows:
from pyoptinterface import nlfunc, ipopt
model = ipopt.Model()
def f(vars, params):
    p = params.p
    result = p * nlfunc.exp(vars.x)
    result *= vars.y ** 2
    return result
reg_f = model.register_function(f)
In the code snippet above, we first import the nlfunc module, which contains a set of nonlinear functions that can be used in PyOptInterface. Then, we create an ipopt.Model object to represent the optimization problem.
Next, we define the function f, which accepts two arguments: vars and params. Inside the function, we extract the values of variables and parameters by accessing the corresponding attributes of the vars and params objects. The nlfunc.exp function is used to calculate the exponential. Finally, we return the result.
Finally, we register the function f by calling the register_function method of the model object. The register_function method returns an abstract FunctionIndex object that can be used in constraints and objectives.
The magic happens inside register_function, where function tracing captures the variables and parameters used in the function together with its whole computational graph. This allows PyOptInterface to automatically differentiate the function and compute its gradients and Hessians.
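To illustrate the idea of tracing, a toy sketch follows. It is not PyOptInterface's actual implementation (all class and function names here are hypothetical): placeholder nodes overload operators so that calling the user's function builds an expression graph instead of computing numbers.

```python
# Toy sketch of function tracing (illustrative only; a real tracer also
# handles parameters, many more operators, and differentiation).
class Node:
    def __init__(self, op, args=()):
        self.op, self.args = op, args

    def __mul__(self, other):
        return Node("mul", (self, other))

    def __pow__(self, n):
        return Node("pow", (self, n))

class _Names:
    def __init__(self, **kw):
        self.__dict__.update(kw)

def trace(fn, var_names):
    # Call fn with placeholder nodes instead of numbers; the returned
    # Node is the root of the captured computational graph.
    vars = _Names(**{n: Node("var:" + n) for n in var_names})
    return fn(vars)

graph = trace(lambda v: v.x * v.y ** 2, ["x", "y"])
# graph is a "mul" node whose second argument is the "pow" node for y ** 2
```

Once the graph is captured, derivatives can be computed symbolically or via automatic differentiation over the recorded operations.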
It is worth mentioning that f is also a plain Python function that can be called directly. For example, we can evaluate the function at a specific point by calling f with concrete values of the variables and parameters:
import types, math
vars = types.SimpleNamespace(x=1.0, y=2.0)
params = types.SimpleNamespace(p=3.0)
result = f(vars, params)
assert result == 3.0 * math.exp(1.0) * 2.0 ** 2
We also provide nlfunc.Vars and nlfunc.Params as a convenient way to create a group of variables and parameters. In fact, they are wrappers around types.SimpleNamespace:
vars = nlfunc.Vars(x=1.0, y=2.0)
params = nlfunc.Params(p=3.0)
result = f(vars, params)
assert result == 3.0 * math.exp(1.0) * 2.0 ** 2
If the function has no parameters, we can omit the params argument in the function definition. For example, the following code snippet defines a function \(g(x, y) = x^4 + y^4\):
def g(vars):
    return vars.x ** 4 + vars.y ** 4
reg_g = model.register_function(g)
In the code snippet above, we define a function g that accepts only one argument, vars. The function calculates the sum of the fourth powers of the two variables \(x\) and \(y\). We register g in the same way as we did for f.
The result of a nonlinear function is not restricted to a scalar value: it can also be a vector, returned as a Python list of results. For example, the following code snippet defines a function \(h(x, y) = [\cos(x + y^2), \sin(x^2 - y)]\) mapping from \(\mathbb{R}^2\) to \(\mathbb{R}^2\):
def h(vars):
    z1 = nlfunc.cos(vars.x + vars.y ** 2)
    z2 = nlfunc.sin(vars.x ** 2 - vars.y)
    return [z1, z2]
reg_h = model.register_function(h)
PyOptInterface currently supports the following nonlinear operators: sin, cos, tan, asin, acos, atan, abs, sqrt, exp, log, log10, and pow.
We can also use nlfunc.ifelse to implement the conditional operator. For example, the following code snippet defines an absolute value function \(f(x) = |x|\):
def f(vars):
    return nlfunc.ifelse(vars.x >= 0, vars.x, -vars.x)
nlfunc.ifelse accepts three arguments: a condition, a value when the condition is true, and a value when the condition is false. The function returns the second argument if the condition is true; otherwise, it returns the third argument.
The condition must be a boolean expression, which can be obtained by comparing two expressions using the comparison operators ==, !=, <, <=, >, and >=.
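In plain Python terms, the semantics of the conditional can be sketched as follows (a stand-in for illustration, not the nlfunc implementation):

```python
def ifelse(cond, value_if_true, value_if_false):
    # Plain-Python stand-in for the conditional semantics described above
    return value_if_true if cond else value_if_false

def abs_fn(x):
    # Same shape as the absolute value example: |x| = ifelse(x >= 0, x, -x)
    return ifelse(x >= 0, x, -x)
```

Note that inside a registered function, the condition is a traced expression rather than a Python boolean, which is why nlfunc.ifelse is needed instead of a plain if statement.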
Nonlinear constraints and objectives¶
After defining and registering nonlinear functions, we can use them in constraints and objectives. For example, the following code snippet defines a nonlinear programming problem:
def obj(vars):
    return nlfunc.exp(vars.x) + nlfunc.exp(vars.y)

def con(vars):
    x, y = vars.x, vars.y
    return [x ** 4 + y ** 4, -x * y]
obj_f = model.register_function(obj)
con_f = model.register_function(con)
x = model.add_variable(lb=0.0)
y = model.add_variable(lb=0.0)
model.add_nl_constraint(con_f, nlfunc.Vars(x=x, y=y), ub=[4.0, -1.0])
model.add_nl_objective(obj_f, nlfunc.Vars(x=x, y=y))
model.optimize()
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
Ipopt is released as open source code under the Eclipse Public License (EPL).
For more information visit http://projects.coin-or.org/Ipopt
This version of Ipopt was compiled from source code available at
https://github.com/IDAES/Ipopt as part of the Institute for the Design of
Advanced Energy Systems Process Systems Engineering Framework (IDAES PSE
Framework) Copyright (c) 2018-2019. See https://github.com/IDAES/idaes-pse.
This version of Ipopt was compiled using HSL, a collection of Fortran codes
for large-scale scientific computation. All technical papers, sales and
publicity material resulting from use of the HSL codes within IPOPT must
contain the following acknowledgement:
HSL, a collection of Fortran codes for large-scale scientific
computation. See http://www.hsl.rl.ac.uk.
******************************************************************************
This is Ipopt version 3.13.2, running with linear solver ma27.
Number of nonzeros in equality constraint Jacobian...: 0
Number of nonzeros in inequality constraint Jacobian.: 4
Number of nonzeros in Lagrangian Hessian.............: 3
Total number of variables............................: 2
variables with only lower bounds: 2
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 0
Total number of inequality constraints...............: 2
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 2
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 2.0201003e+00 1.00e+00 5.42e-05 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 2.0221048e+00 1.00e+00 9.80e+01 -1.0 1.01e+00 - 9.03e-01 9.82e-03h 1
2 2.8874042e+00 8.65e-01 1.08e+02 -1.0 4.56e+01 - 3.06e-04 7.81e-03h 8
3 3.8770898e+00 5.62e-01 2.99e+02 -1.0 1.18e+00 2.0 1.00e+00 2.50e-01h 3
4 5.5678044e+00 0.00e+00 1.11e+01 -1.0 1.24e+00 - 9.72e-01 1.00e+00H 1
5 5.4931736e+00 0.00e+00 9.32e+00 -1.0 2.80e-01 1.5 1.00e+00 1.00e+00f 1
6 5.4953247e+00 0.00e+00 2.96e-02 -1.0 5.50e-03 - 1.00e+00 1.00e+00h 1
7 5.4427137e+00 0.00e+00 3.36e-03 -2.5 7.95e-02 - 9.84e-01 1.00e+00f 1
8 5.4366982e+00 0.00e+00 4.18e-06 -3.8 7.75e-03 - 1.00e+00 1.00e+00h 1
9 5.4365655e+00 0.00e+00 1.91e-08 -5.7 1.81e-04 - 1.00e+00 1.00e+00h 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
10 5.4365636e+00 0.00e+00 3.26e-12 -8.6 2.72e-06 - 1.00e+00 1.00e+00h 1
Number of Iterations....: 10
(scaled) (unscaled)
Objective...............: 5.4365636322427102e+00 5.4365636322427102e+00
Dual infeasibility......: 3.2572901920675866e-12 3.2572901920675866e-12
Constraint violation....: 0.0000000000000000e+00 0.0000000000000000e+00
Complementarity.........: 2.5071229771986031e-09 2.5071229771986031e-09
Overall NLP error.......: 2.5071229771986031e-09 2.5071229771986031e-09
Number of objective function evaluations = 22
Number of objective gradient evaluations = 11
Number of equality constraint evaluations = 0
Number of inequality constraint evaluations = 26
Number of equality constraint Jacobian evaluations = 0
Number of inequality constraint Jacobian evaluations = 11
Number of Lagrangian Hessian evaluations = 10
Total CPU secs in IPOPT (w/o function evaluations) = 0.008
Total CPU secs in NLP function evaluations = 0.001
EXIT: Optimal Solution Found.
x_value = model.get_value(x)
y_value = model.get_value(y)
print(f"x = {x_value}, y = {y_value}")
x = 0.9999999954612174, y = 0.9999999954612174
A nonlinear constraint can be declared by calling the add_nl_constraint method of the Model object. The method accepts the registered function, the variables/parameters of the constraint, and the bounds of the constraint. The variables of the constraint should be constructed by nlfunc.Vars, and the parameters should be constructed by nlfunc.Params.
The bounds can be specified as a scalar or as a list with the same length as the number of outputs of the function. If it is a scalar, the same bound is applied to all outputs; if it is a list, each output has its own bound.
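The scalar-versus-list convention can be sketched in plain Python (normalize_bounds is a hypothetical helper written for illustration, not part of the PyOptInterface API):

```python
def normalize_bounds(bound, n_outputs):
    # A scalar bound applies to every output; a list must match the
    # number of outputs (hypothetical helper for illustration).
    if isinstance(bound, (int, float)):
        return [float(bound)] * n_outputs
    if len(bound) != n_outputs:
        raise ValueError("bound list must match the number of outputs")
    return [float(b) for b in bound]
```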
You can declare eq for equality constraints, lb for lower bounds, and ub for upper bounds. lb and ub can be declared simultaneously to create a two-sided constraint.
- model.add_nl_constraint(registered_function, vars[, params=None, eq=None, lb=None, ub=None, name=""])¶
add a nonlinear constraint to the model
- Parameters:
registered_function – the FunctionIndex returned by register_function
vars – the variables of the constraint, must be constructed by nlfunc.Vars
params – the parameters of the constraint, must be constructed by nlfunc.Params, optional
eq – the equality value of the constraint, optional
lb – the lower bound of the constraint, optional
ub – the upper bound of the constraint, optional
name (str) – the name of the constraint, optional
- Returns:
the handle of the constraint
A nonlinear objective can be declared by calling the add_nl_objective method of the Model object. The method accepts the registered function and the variables/parameters of the objective.
It is noteworthy that add_nl_objective only adds a nonlinear term to the objective; it can be called multiple times to construct the objective as a sum of nonlinear terms.
The nonlinear function passed to add_nl_objective should return a scalar value, and PyOptInterface will throw an error if the function returns multiple values.
- model.add_nl_objective(registered_function, vars[, params=None, name=""])¶
add a nonlinear objective term to the model
- Parameters:
registered_function – the FunctionIndex returned by register_function
vars – the variables of the objective, must be constructed by nlfunc.Vars
params – the parameters of the objective, must be constructed by nlfunc.Params, optional
name (str) – the name of the objective, optional
We will use the well-known Rosenbrock function as another example:
from pyoptinterface import nlfunc, ipopt
model = ipopt.Model()
x = model.add_variable()
y = model.add_variable()
def rosenbrock(vars):
    x, y = vars.x, vars.y
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2
rosenbrock_f = model.register_function(rosenbrock)
model.add_nl_objective(rosenbrock_f, nlfunc.Vars(x=x, y=y))
model.optimize()
This is Ipopt version 3.13.2, running with linear solver ma27.
Number of nonzeros in equality constraint Jacobian...: 0
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 3
Total number of variables............................: 2
variables with only lower bounds: 0
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 0
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 1.0000000e+00 0.00e+00 2.00e+00 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 9.5312500e-01 0.00e+00 1.25e+01 -1.0 1.00e+00 - 1.00e+00 2.50e-01f 3
2 4.8320569e-01 0.00e+00 1.01e+00 -1.0 9.03e-02 - 1.00e+00 1.00e+00f 1
3 4.5708829e-01 0.00e+00 9.53e+00 -1.0 4.29e-01 - 1.00e+00 5.00e-01f 2
4 1.8894205e-01 0.00e+00 4.15e-01 -1.0 9.51e-02 - 1.00e+00 1.00e+00f 1
5 1.3918726e-01 0.00e+00 6.51e+00 -1.7 3.49e-01 - 1.00e+00 5.00e-01f 2
6 5.4940990e-02 0.00e+00 4.51e-01 -1.7 9.29e-02 - 1.00e+00 1.00e+00f 1
7 2.9144630e-02 0.00e+00 2.27e+00 -1.7 2.49e-01 - 1.00e+00 5.00e-01f 2
8 9.8586451e-03 0.00e+00 1.15e+00 -1.7 1.10e-01 - 1.00e+00 1.00e+00f 1
9 2.3237475e-03 0.00e+00 1.00e+00 -1.7 1.00e-01 - 1.00e+00 1.00e+00f 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
10 2.3797236e-04 0.00e+00 2.19e-01 -1.7 5.09e-02 - 1.00e+00 1.00e+00f 1
11 4.9267371e-06 0.00e+00 5.95e-02 -1.7 2.53e-02 - 1.00e+00 1.00e+00f 1
12 2.8189505e-09 0.00e+00 8.31e-04 -2.5 3.20e-03 - 1.00e+00 1.00e+00f 1
13 1.0095040e-15 0.00e+00 8.68e-07 -5.7 9.78e-05 - 1.00e+00 1.00e+00f 1
14 1.3288608e-28 0.00e+00 2.02e-13 -8.6 4.65e-08 - 1.00e+00 1.00e+00f 1
Number of Iterations....: 14
(scaled) (unscaled)
Objective...............: 1.3288608467480825e-28 1.3288608467480825e-28
Dual infeasibility......: 2.0183854587685121e-13 2.0183854587685121e-13
Constraint violation....: 0.0000000000000000e+00 0.0000000000000000e+00
Complementarity.........: 0.0000000000000000e+00 0.0000000000000000e+00
Overall NLP error.......: 2.0183854587685121e-13 2.0183854587685121e-13
Number of objective function evaluations = 36
Number of objective gradient evaluations = 15
Number of equality constraint evaluations = 0
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 0
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 14
Total CPU secs in IPOPT (w/o function evaluations) = 0.004
Total CPU secs in NLP function evaluations = 0.000
EXIT: Optimal Solution Found.
x_value = model.get_value(x)
y_value = model.get_value(y)
print(f"x = {x_value}, y = {y_value}")
x = 0.9999999999999899, y = 0.9999999999999792
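As a quick sanity check on this result, the Rosenbrock function attains its global minimum 0 at \((1, 1)\), which we can verify with a plain-float version of the same expression:

```python
def rosenbrock_xy(x, y):
    # Same expression as the registered function above, on plain floats
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

# Analytic minimum: value 0 at (1, 1)
assert rosenbrock_xy(1.0, 1.0) == 0.0

# The optimizer's answer is within floating-point tolerance of the minimum
assert abs(rosenbrock_xy(0.9999999999999899, 0.9999999999999792)) < 1e-20
```

This matches the final objective value of about 1.33e-28 reported by Ipopt.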
Mixing nonlinear and linear/quadratic constraints together¶
Introducing nonlinear functions does not prevent you from using the add_linear_constraint and add_quadratic_constraint methods. You can add both nonlinear constraints and linear/quadratic constraints to the same optimization problem.
We will use the clnlbeam problem as an example:
from pyoptinterface import nlfunc, ipopt
import pyoptinterface as poi
model = ipopt.Model()
N = 1000
h = 1 / N
alpha = 350
t = model.add_variables(range(N+1), lb=-1.0, ub=1.0)
x = model.add_variables(range(N+1), lb=-0.05, ub=0.05)
u = model.add_variables(range(N+1))
def obj(vars):
    return 0.5 * h * (vars.u2**2 + vars.u1**2) + 0.5 * alpha * h * (nlfunc.cos(vars.t2) + nlfunc.cos(vars.t1))

obj_f = model.register_function(obj)
for i in range(N):
    model.add_nl_objective(obj_f, nlfunc.Vars(t1=t[i], t2=t[i+1], u1=u[i], u2=u[i+1]))

def con(vars):
    return vars.x2 - vars.x1 - 0.5 * h * (nlfunc.sin(vars.t2) + nlfunc.sin(vars.t1))

con_f = model.register_function(con)
for i in range(N):
    model.add_nl_constraint(con_f, nlfunc.Vars(t1=t[i], t2=t[i+1], x1=x[i], x2=x[i+1]), eq=0.0)

for i in range(N):
    model.add_linear_constraint(t[i+1] - t[i] - 0.5 * h * u[i+1] - 0.5 * h * u[i], poi.Eq, 0.0)
model.optimize()
This is Ipopt version 3.13.2, running with linear solver ma27.
Number of nonzeros in equality constraint Jacobian...: 8000
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 2002
Total number of variables............................: 3003
variables with only lower bounds: 0
variables with lower and upper bounds: 2002
variables with only upper bounds: 0
Total number of equality constraints.................: 2000
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 3.5000000e+02 0.00e+00 0.00e+00 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 3.5000000e+02 0.00e+00 0.00e+00 -1.7 0.00e+00 - 1.00e+00 1.00e+00 0
2 3.5000000e+02 0.00e+00 0.00e+00 -3.8 0.00e+00 -2.0 1.00e+00 1.00e+00T 0
3 3.5000000e+02 0.00e+00 0.00e+00 -5.7 0.00e+00 0.2 1.00e+00 1.00e+00T 0
4 3.5000000e+02 0.00e+00 0.00e+00 -8.6 0.00e+00 -0.2 1.00e+00 1.00e+00T 0
Number of Iterations....: 4
(scaled) (unscaled)
Objective...............: 3.5000000000000318e+02 3.5000000000000318e+02
Dual infeasibility......: 0.0000000000000000e+00 0.0000000000000000e+00
Constraint violation....: 0.0000000000000000e+00 0.0000000000000000e+00
Complementarity.........: 2.5059035596802450e-09 2.5059035596802450e-09
Overall NLP error.......: 2.5059035596802450e-09 2.5059035596802450e-09
Number of objective function evaluations = 5
Number of objective gradient evaluations = 5
Number of equality constraint evaluations = 5
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 5
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 4
Total CPU secs in IPOPT (w/o function evaluations) = 0.025
Total CPU secs in NLP function evaluations = 0.001
EXIT: Optimal Solution Found.
objective_value = model.get_model_attribute(poi.ModelAttribute.ObjectiveValue)
print(f"Objective value: {objective_value}")
Objective value: 350.0000000000032
More complex examples¶
In practice, nonlinear constraints often share the same structure but with different parameters. We encourage you to read our optimal power flow and optimal control examples to learn how to construct more complex nonlinear programming problems.