Nonlinear Programming

Introduction

Compared with the linear and quadratic constraints and objectives discussed in the previous sections, nonlinear programming is more general and can model a wider range of problems. In this section, we introduce how to use PyOptInterface to formulate and solve general nonlinear programming problems.

Note

Before trying out the code snippets, please ensure that you have installed PyOptInterface with the required dependencies via pip install pyoptinterface[nlp], together with a solver that supports nonlinear programming (IPOPT, COPT, or Gurobi), as described in the Getting Started section.

Construct nonlinear expressions

Nonlinear expressions are more complex than linear and quadratic expressions: they can include various nonlinear functions such as trigonometric, exponential, and logarithmic functions. In PyOptInterface, we must open an nl.graph() context to construct nonlinear expressions. The nl.graph() context traces the computational graph of the nonlinear expression, which allows PyOptInterface to differentiate the expression automatically and compute its gradients and Hessians.

import pyoptinterface as poi
from pyoptinterface import nl, ipopt

model = ipopt.Model()

x = model.add_variable(name="x")
y = model.add_variable(name="y")

with nl.graph():
    z = nl.exp(x) * nl.pow(y, 2)
    model.add_nl_constraint(z <= 10.0)

In the code snippet above, we first import the nl module, which contains the nonlinear programming utilities including a set of nonlinear functions that can be used in PyOptInterface. Then, we create an ipopt.Model object to represent the optimization problem.

We then add two variables x and y to the model. After that, we enter the nl.graph() context, where we can construct nonlinear expressions using the functions provided by the nl module. In this case, we define a nonlinear expression z = exp(x) * pow(y, 2) and add a nonlinear constraint z <= 10.0 to the model.
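For intuition, the constraint value can be checked numerically at a candidate point. The plain-Python sketch below evaluates the same expression exp(x) * y^2 that the nl code traces (using the math module instead of nl, since we only need a numeric value here):

```python
import math

# Evaluate exp(x) * y**2 at a candidate point, mirroring
# the traced expression nl.exp(x) * nl.pow(y, 2).
x, y = 0.0, 3.0
z = math.exp(x) * y ** 2
print(z, z <= 10.0)  # 9.0 True -> this point satisfies z <= 10
```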

In the nl.graph() context, we can use various nonlinear functions provided by the nl module to construct nonlinear expressions. The functions are designed to be similar to the mathematical notation, making it easy to read and write nonlinear expressions.

PyOptInterface currently supports the following nonlinear operators:

Unary functions

Operator     | Example
------------ | ----------------
- (negation) | y = -x
sin          | y = nl.sin(x)
cos          | y = nl.cos(x)
tan          | y = nl.tan(x)
asin         | y = nl.asin(x)
acos         | y = nl.acos(x)
atan         | y = nl.atan(x)
abs          | y = nl.abs(x)
sqrt         | y = nl.sqrt(x)
exp          | y = nl.exp(x)
log          | y = nl.log(x)
log10        | y = nl.log10(x)

Binary functions

Operator | Example
-------- | -----------------
+        | z = x + y
-        | z = x - y
*        | z = x * y
/        | z = x / y
**       | z = x ** y
pow      | z = nl.pow(x, y)

We can also use nl.ifelse to implement the conditional operator. For example, the following code snippet defines an absolute value function \(f(x) = |x|\):

def f(x):
    return nl.ifelse(x >= 0, x, -x)

with nl.graph():
    y = f(x)
    model.add_nl_constraint(y <= 2)

nl.ifelse accepts three arguments: a condition, the value when the condition is true, and the value when the condition is false. The function returns the second argument if the condition is true; otherwise, it returns the third argument. The condition must be a boolean expression, which can be obtained by comparing two expressions using the comparison operators ==, !=, <, <=, >, and >=.
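nl.ifelse calls can also be nested to express piecewise functions with more than two branches. As an illustration, the plain-Python function below (a hypothetical clip function, not part of the nl module) evaluates the branch logic directly; inside an nl.graph() context, the same logic would be written as nested nl.ifelse calls:

```python
def clip(x, lo=-1.0, hi=1.0):
    # Plain-Python reference for the piecewise logic; the nl version
    # would read: nl.ifelse(x < lo, lo, nl.ifelse(x > hi, hi, x))
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

print(clip(-3.0), clip(0.5), clip(2.0))  # -1.0 0.5 1.0
```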

Note that some nonlinear expressions can only be constructed inside the nl.graph() context and are prohibited outside it. For example, you can construct \((x+2)^3\) in the following way:

with nl.graph():
    z = (x + 2) ** 3

# Illegal outside nl.graph context because it is a nonlinear expression
# z = (x + 2) ** 3  # This will raise an error

Nonlinear constraints and objectives

After constructing nonlinear expressions, we can use them in constraints and objectives. For example, the following code snippet defines a nonlinear programming problem.

model = ipopt.Model()
x = model.add_variable(lb=0.0)
y = model.add_variable(lb=0.0)

with nl.graph():
    model.add_nl_constraint(x ** 4 + y ** 4 <= 4.0)
    model.add_nl_constraint(x * y >= 1.0)

    model.add_nl_objective(nl.exp(x) + nl.exp(y))

model.optimize()
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit http://projects.coin-or.org/Ipopt

This version of Ipopt was compiled from source code available at
    https://github.com/IDAES/Ipopt as part of the Institute for the Design of
    Advanced Energy Systems Process Systems Engineering Framework (IDAES PSE
    Framework) Copyright (c) 2018-2019. See https://github.com/IDAES/idaes-pse.

This version of Ipopt was compiled using HSL, a collection of Fortran codes
    for large-scale scientific computation.  All technical papers, sales and
    publicity material resulting from use of the HSL codes within IPOPT must
    contain the following acknowledgement:
        HSL, a collection of Fortran codes for large-scale scientific
        computation. See http://www.hsl.rl.ac.uk.
******************************************************************************

This is Ipopt version 3.13.2, running with linear solver ma27.

Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        4
Number of nonzeros in Lagrangian Hessian.............:        3

Total number of variables............................:        2
                     variables with only lower bounds:        2
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        2
        inequality constraints with only lower bounds:        1
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        1

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  2.0201003e+00 1.00e+00 5.42e-05  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  2.0221048e+00 1.00e+00 9.80e+01  -1.0 1.01e+00    -  9.03e-01 9.82e-03h  1
   2  2.8874042e+00 8.65e-01 1.08e+02  -1.0 4.56e+01    -  3.06e-04 7.81e-03h  8
   3  3.8770898e+00 5.62e-01 2.99e+02  -1.0 1.18e+00   2.0 1.00e+00 2.50e-01h  3
   4  5.5678044e+00 0.00e+00 1.11e+01  -1.0 1.24e+00    -  9.72e-01 1.00e+00H  1
   5  5.4931736e+00 0.00e+00 9.32e+00  -1.0 2.80e-01   1.5 1.00e+00 1.00e+00f  1
   6  5.4953247e+00 0.00e+00 2.96e-02  -1.0 5.50e-03    -  1.00e+00 1.00e+00h  1
   7  5.4427137e+00 0.00e+00 3.36e-03  -2.5 7.95e-02    -  9.84e-01 1.00e+00f  1
   8  5.4366982e+00 0.00e+00 4.18e-06  -3.8 7.75e-03    -  1.00e+00 1.00e+00h  1
   9  5.4365655e+00 0.00e+00 1.91e-08  -5.7 1.81e-04    -  1.00e+00 1.00e+00h  1
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
  10  5.4365636e+00 0.00e+00 3.26e-12  -8.6 2.72e-06    -  1.00e+00 1.00e+00h  1

Number of Iterations....: 10

                                   (scaled)                 (unscaled)
Objective...............:   5.4365636322427102e+00    5.4365636322427102e+00
Dual infeasibility......:   3.2572901920675866e-12    3.2572901920675866e-12
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   2.5071229771986031e-09    2.5071229771986031e-09
Overall NLP error.......:   2.5071229771986031e-09    2.5071229771986031e-09


Number of objective function evaluations             = 22
Number of objective gradient evaluations             = 11
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 26
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 11
Number of Lagrangian Hessian evaluations             = 10
Total CPU secs in IPOPT (w/o function evaluations)   =      0.003
Total CPU secs in NLP function evaluations           =      0.000

EXIT: Optimal Solution Found.
x_value = model.get_value(x)
y_value = model.get_value(y)

print(f"x = {x_value}, y = {y_value}")
x = 0.9999999954612174, y = 0.9999999954612174
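The reported optimum can be verified by hand: at x = y = 1 both constraints are satisfied (1⁴ + 1⁴ = 2 ≤ 4, and 1 · 1 = 1 ≥ 1 is active), and the objective equals e + e = 2e ≈ 5.43656, matching the value Ipopt reports. A quick plain-Python check:

```python
import math

x, y = 1.0, 1.0
assert x ** 4 + y ** 4 <= 4.0  # first constraint holds with slack
assert x * y >= 1.0            # second constraint is active
obj = math.exp(x) + math.exp(y)
print(obj)  # 2e = 5.43656...
```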

Nonlinear constraints are declared by calling the add_nl_constraint method. Like linear and quadratic constraints, you can specify the sense and right-hand side of the constraint, use an interval of values to represent a two-sided constraint, or use a comparison operator such as <=, ==, or >= to create the constraint.

# One-sided nonlinear constraint
with nl.graph():
    model.add_nl_constraint(x ** 2 + y ** 2 <= 1.0)
    # equivalent to
    model.add_nl_constraint(x ** 2 + y ** 2, poi.Leq, 1.0)

# Two-sided nonlinear constraint
with nl.graph():
    model.add_nl_constraint(x ** 2 + y ** 2, (1.0, 2.0))
    # equivalent to
    model.add_nl_constraint(x ** 2 + y ** 2, poi.Leq, 2.0)
    model.add_nl_constraint(x ** 2 + y ** 2, poi.Geq, 1.0)

Similarly, a nonlinear objective can be declared by calling the add_nl_objective method. Note that add_nl_objective only adds a nonlinear term to the objective; it can be called multiple times to build the objective as a sum of nonlinear terms.

# Nonlinear objective
with nl.graph():
    model.add_nl_objective(nl.sin(x) + nl.cos(y))

Because nonlinear expressions are captured by the enclosing nl.graph() context, add_nl_constraint and add_nl_objective must be called inside the nl.graph() context in which their expressions were constructed. Calling them outside a context raises an error.

Finally, we will use the well-known Rosenbrock function as another example:

from pyoptinterface import nl, ipopt

model = ipopt.Model()

x = model.add_variable()
y = model.add_variable()

with nl.graph():
    model.add_nl_objective((1 - x) ** 2 + 100 * (y - x ** 2) ** 2)

model.optimize()
This is Ipopt version 3.13.2, running with linear solver ma27.

Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        0
Number of nonzeros in Lagrangian Hessian.............:        3

Total number of variables............................:        2
                     variables with only lower bounds:        0
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        0
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  1.0000000e+00 0.00e+00 2.00e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  9.5312500e-01 0.00e+00 1.25e+01  -1.0 1.00e+00    -  1.00e+00 2.50e-01f  3
   2  4.8320569e-01 0.00e+00 1.01e+00  -1.0 9.03e-02    -  1.00e+00 1.00e+00f  1
   3  4.5708829e-01 0.00e+00 9.53e+00  -1.0 4.29e-01    -  1.00e+00 5.00e-01f  2
   4  1.8894205e-01 0.00e+00 4.15e-01  -1.0 9.51e-02    -  1.00e+00 1.00e+00f  1
   5  1.3918726e-01 0.00e+00 6.51e+00  -1.7 3.49e-01    -  1.00e+00 5.00e-01f  2
   6  5.4940990e-02 0.00e+00 4.51e-01  -1.7 9.29e-02    -  1.00e+00 1.00e+00f  1
   7  2.9144630e-02 0.00e+00 2.27e+00  -1.7 2.49e-01    -  1.00e+00 5.00e-01f  2
   8  9.8586451e-03 0.00e+00 1.15e+00  -1.7 1.10e-01    -  1.00e+00 1.00e+00f  1
   9  2.3237475e-03 0.00e+00 1.00e+00  -1.7 1.00e-01    -  1.00e+00 1.00e+00f  1
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
  10  2.3797236e-04 0.00e+00 2.19e-01  -1.7 5.09e-02    -  1.00e+00 1.00e+00f  1
  11  4.9267371e-06 0.00e+00 5.95e-02  -1.7 2.53e-02    -  1.00e+00 1.00e+00f  1
  12  2.8189505e-09 0.00e+00 8.31e-04  -2.5 3.20e-03    -  1.00e+00 1.00e+00f  1
  13  9.9920072e-16 0.00e+00 8.68e-07  -5.7 9.78e-05    -  1.00e+00 1.00e+00f  1
  14  0.0000000e+00 0.00e+00 1.57e-13  -8.6 4.65e-08    -  1.00e+00 1.00e+00f  1

Number of Iterations....: 14

                                   (scaled)                 (unscaled)
Objective...............:   0.0000000000000000e+00    0.0000000000000000e+00
Dual infeasibility......:   1.5676349107707210e-13    1.5676349107707210e-13
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   0.0000000000000000e+00    0.0000000000000000e+00
Overall NLP error.......:   1.5676349107707210e-13    1.5676349107707210e-13


Number of objective function evaluations             = 36
Number of objective gradient evaluations             = 15
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 0
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations             = 14
Total CPU secs in IPOPT (w/o function evaluations)   =      0.004
Total CPU secs in NLP function evaluations           =      0.000

EXIT: Optimal Solution Found.
x_value = model.get_value(x)
y_value = model.get_value(y)

print(f"x = {x_value}, y = {y_value}")
x = 0.9999999999999897, y = 0.9999999999999789
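This matches the known analytic solution: the Rosenbrock function f(x, y) = (1 - x)² + 100(y - x²)² is a sum of squares, hence non-negative, and vanishes exactly at (1, 1). A quick plain-Python check:

```python
def rosenbrock(x, y):
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

print(rosenbrock(1.0, 1.0))  # 0.0 -> the global minimum
print(rosenbrock(0.0, 0.0))  # 1.0 -> other points give positive values
```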

Mixing nonlinear and linear/quadratic constraints together

Introducing nonlinear functions does not prevent you from using the add_linear_constraint and add_quadratic_constraint methods: you can mix nonlinear and linear/quadratic constraints in the same optimization problem.

We will use the clnlbeam problem as an example:

from pyoptinterface import nl, ipopt
import pyoptinterface as poi

model = ipopt.Model()

N = 1000
h = 1 / N
alpha = 350

t = model.add_m_variables(N + 1, lb=-1.0, ub=1.0)
x = model.add_m_variables(N + 1, lb=-0.05, ub=0.05)
u = model.add_m_variables(N + 1)

for i in range(N):
    with nl.graph():
        model.add_nl_objective(
            0.5 * h * (u[i] * u[i] + u[i + 1] * u[i + 1])
            + 0.5 * alpha * h * (nl.cos(t[i]) + nl.cos(t[i + 1]))
        )
        model.add_nl_constraint(
            x[i + 1] - x[i] - 0.5 * h * (nl.sin(t[i]) + nl.sin(t[i + 1])) == 0.0
        )
        model.add_linear_constraint(
            t[i + 1] - t[i] - 0.5 * h * u[i + 1] - 0.5 * h * u[i] == 0.0
        )

model.optimize()
This is Ipopt version 3.13.2, running with linear solver ma27.

Number of nonzeros in equality constraint Jacobian...:     8000
Number of nonzeros in inequality constraint Jacobian.:        0
Number of nonzeros in Lagrangian Hessian.............:     3002

Total number of variables............................:     3003
                     variables with only lower bounds:        0
                variables with lower and upper bounds:     2002
                     variables with only upper bounds:        0
Total number of equality constraints.................:     2000
Total number of inequality constraints...............:        0
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  3.5000000e+02 0.00e+00 0.00e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  3.5000000e+02 0.00e+00 0.00e+00  -1.7 0.00e+00    -  1.00e+00 1.00e+00   0
   2  3.5000000e+02 0.00e+00 0.00e+00  -3.8 0.00e+00  -2.0 1.00e+00 1.00e+00T  0
   3  3.5000000e+02 0.00e+00 0.00e+00  -5.7 0.00e+00   0.2 1.00e+00 1.00e+00T  0
   4  3.5000000e+02 0.00e+00 0.00e+00  -8.6 0.00e+00  -0.2 1.00e+00 1.00e+00T  0
Number of Iterations....: 4

                                   (scaled)                 (unscaled)
Objective...............:   3.5000000000000318e+02    3.5000000000000318e+02
Dual infeasibility......:   0.0000000000000000e+00    0.0000000000000000e+00
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   2.5059035596802450e-09    2.5059035596802450e-09
Overall NLP error.......:   2.5059035596802450e-09    2.5059035596802450e-09


Number of objective function evaluations             = 5
Number of objective gradient evaluations             = 5
Number of equality constraint evaluations            = 5
Number of inequality constraint evaluations          = 0
Number of equality constraint Jacobian evaluations   = 5
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations             = 4
Total CPU secs in IPOPT (w/o function evaluations)   =      0.027
Total CPU secs in NLP function evaluations           =      0.001

EXIT: Optimal Solution Found.
objective_value = model.get_model_attribute(poi.ModelAttribute.ObjectiveValue)

print(f"Objective value: {objective_value}")
Objective value: 350.0000000000032
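The objective value can be sanity-checked by hand. At the trivial solution t = u = 0, each of the N elemental objectives reduces to 0.5 · alpha · h · (cos 0 + cos 0) = alpha · h, so the total objective is N · alpha · h = alpha = 350, matching the solver output. A plain-Python check of this arithmetic:

```python
import math

N = 1000
h = 1 / N
alpha = 350

# Elemental objective at t = u = 0: the u-terms vanish and cos(0) = 1.
obj = sum(
    0.5 * h * (0.0 + 0.0) + 0.5 * alpha * h * (math.cos(0.0) + math.cos(0.0))
    for _ in range(N)
)
print(obj)  # 350.0 (up to floating-point rounding)
```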

How to use nl.graph to capture similar structures

You might be wondering where to place the nl.graph() context manager in your code. The key is to ensure that nonlinear constraints and objectives sharing the same structure are constructed in nl.graph() contexts that trace the same pattern.

As a rule of thumb, nl.graph should be used inside the for loop rather than outside it, so that each graph has the same pattern and PyOptInterface can recognize the repeated structure to accelerate automatic differentiation. This is particularly important when a large number of nonlinear constraints or objectives share the same structure, as it can significantly improve performance, as discussed in our research paper Accelerating Optimal Power Flow with Structure-aware Automatic Differentiation and Code Generation.

In the clnlbeam example above, we have placed the nl.graph() context inside the for loop, which allows us to capture the structure of the nonlinear constraints and objectives for each iteration.

More complex examples

In practice, nonlinear constraints often share the same structure but with different parameters. We encourage you to read our optimal power flow and optimal control examples to learn how to construct more complex nonlinear programming problems.