Matrix Modeling

In the previous section on containers, we introduced the tupledict container for storing and manipulating multidimensional data.

However, thanks to the Bring Your Own Container (BYOC) principle, variables and constraints in PyOptInterface are plain Python objects that can be stored directly in a Numpy ndarray as a multidimensional array, so you automatically get Numpy features such as fancy indexing.
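As a quick standalone illustration (using placeholder strings rather than real PyOptInterface variables), any Python object can live in an object-dtype ndarray and be sliced or fancy-indexed like numeric data:

```python
import numpy as np

# Store arbitrary Python objects (here, plain strings standing in for
# solver variable handles) in an object-dtype ndarray.
grid = np.empty((3, 3), dtype=object)
for i in range(3):
    for j in range(3):
        grid[i, j] = f"v_{i}_{j}"

row = grid[1, :]                             # slicing: a full row
corners = grid[[0, 0, 2, 2], [0, 2, 0, 2]]   # fancy indexing
print(list(row))      # ['v_1_0', 'v_1_1', 'v_1_2']
print(list(corners))  # ['v_0_0', 'v_0_2', 'v_2_0', 'v_2_2']
```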

N-queens problem

We will use the N-queens problem as an example to show how to use a Numpy ndarray as a container for 2-dimensional variables and to construct the optimization model.

Firstly, we import the necessary modules:

import numpy as np
import pyoptinterface as poi
from pyoptinterface import highs

model = highs.Model()

Then we create a 2-dimensional variable x with shape \(N \times N\) to represent the placement of queens. Each element of x is a binary variable that indicates whether a queen is placed at the corresponding position. We use object as the dtype of x so that it can store the variable objects. The following code snippet creates the variables:

N = 8

x = np.empty((N, N), dtype=object)
for i in range(N):
    for j in range(N):
        x[i, j] = model.add_variable(domain=poi.VariableDomain.Binary)
Running HiGHS 1.9.0 (git hash: 66f735e): Copyright (c) 2024 HiGHS under MIT licence terms

Next, we add constraints to ensure that each row and each column has exactly one queen, and that each diagonal has at most one queen.

for i in range(N):
    # Row and column
    model.add_linear_constraint(poi.quicksum(x[i, :]), poi.Eq, 1.0)
    model.add_linear_constraint(poi.quicksum(x[:, i]), poi.Eq, 1.0)
for i in range(-N+1, N):
    # Diagonal
    model.add_linear_constraint(poi.quicksum(x.diagonal(i)), poi.Leq, 1.0)
    # Anti-diagonal
    model.add_linear_constraint(poi.quicksum(np.fliplr(x).diagonal(i)), poi.Leq, 1.0)
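To see why the offsets \(-N+1\) through \(N-1\) enumerate every diagonal exactly once, here is a small standalone numpy check (independent of the model) that labels each board cell and collects all diagonals and anti-diagonals:

```python
import numpy as np

N = 8
idx = np.arange(N * N).reshape(N, N)  # unique label per board cell

diag_cells = np.concatenate([idx.diagonal(k) for k in range(-N + 1, N)])
anti_cells = np.concatenate([np.fliplr(idx).diagonal(k) for k in range(-N + 1, N)])

# Every cell appears on exactly one diagonal and one anti-diagonal,
# so the 2*(2N-1) constraints cover the whole board without overlap.
assert sorted(diag_cells.tolist()) == list(range(N * N))
assert sorted(anti_cells.tolist()) == list(range(N * N))
print("each cell lies on exactly one diagonal and one anti-diagonal")
```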

Finally, we solve the model.

model.optimize()

print("Termination status:", model.get_model_attribute(poi.ModelAttribute.TerminationStatus))
Coefficient ranges:
  Matrix [1e+00, 1e+00]
  Cost   [0e+00, 0e+00]
  Bound  [1e+00, 1e+00]
  RHS    [1e+00, 1e+00]
Presolving model
42 rows, 64 cols, 252 nonzeros  0s
42 rows, 64 cols, 270 nonzeros  0s
Objective function is integral with scale 1

Solving MIP model with:
   42 rows
   64 cols (64 binary, 0 integer, 0 implied int., 0 continuous)
   270 nonzeros

Src: B => Branching; C => Central rounding; F => Feasibility pump; H => Heuristic; L => Sub-MIP;
     P => Empty MIP; R => Randomized rounding; S => Solve LP; T => Evaluate node; U => Unbounded;
     z => Trivial zero; l => Trivial lower; u => Trivial upper; p => Trivial point

        Nodes      |    B&B Tree     |            Objective Bounds              |  Dynamic Constraints |       Work      
Src  Proc. InQueue |  Leaves   Expl. | BestBound       BestSol              Gap |   Cuts   InLp Confl. | LpIters     Time

         0       0         0   0.00%   0               inf                  inf        0      0      0         0     0.0s
         0       0         0   0.00%   0               inf                  inf        0      0      6        29     0.0s
 L       0       0         0 100.00%   0               0                  0.00%     1564     28     84       213     0.1s
         1       0         1 100.00%   0               0                  0.00%     1564     28     84       267     0.1s

Solving report
  Status            Optimal
  Primal bound      0
  Dual bound        0
  Gap               0% (tolerance: 0.01%)
  P-D integral      0
  Solution status   feasible
                    0 (objective)
                    0 (bound viol.)
                    0 (int. viol.)
                    0 (row viol.)
  Timing            0.07 (total)
                    0.00 (presolve)
                    0.00 (solve)
                    0.00 (postsolve)
  Max sub-MIP depth 1
  Nodes             1
  Repair LPs        0 (0 feasible; 0 iterations)
  LP iterations     267 (total)
                    0 (strong br.)
                    184 (separation)
                    54 (heuristics)
Termination status: TerminationStatusCode.OPTIMAL

The solution can be obtained and visualized by the following code:

get_v = np.vectorize(model.get_value)
x_value = get_v(x)

print(x_value.astype(int))
[[0 0 1 0 0 0 0 0]
 [1 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 1 0]
 [0 0 0 0 1 0 0 0]
 [0 0 0 0 0 0 0 1]
 [0 1 0 0 0 0 0 0]
 [0 0 0 1 0 0 0 0]
 [0 0 0 0 0 1 0 0]]
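As a sanity check (a standalone snippet, with the board printed above copied in as a literal array rather than read from the model), we can verify that the placement satisfies all row, column, and diagonal constraints:

```python
import numpy as np

# The solved board from the output above, as a literal array.
board = np.array([
    [0, 0, 1, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 1],
    [0, 1, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 1, 0, 0],
])
N = board.shape[0]

assert (board.sum(axis=1) == 1).all()  # exactly one queen per row
assert (board.sum(axis=0) == 1).all()  # exactly one queen per column
for k in range(-N + 1, N):
    assert board.diagonal(k).sum() <= 1             # diagonals
    assert np.fliplr(board).diagonal(k).sum() <= 1  # anti-diagonals
print("valid N-queens placement")
```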

Built-in functions to add variables and constraints as Numpy ndarray

Although you can construct the ndarray of variables and constraints manually, PyOptInterface provides built-in functions that simplify the process. The following code snippet shows how to use them to add variables and constraints as Numpy ndarrays:

model = highs.Model()

x = model.add_m_variables(N)

A = np.eye(N)
b_ub = np.ones(N)
b_lb = np.ones(N)

model.add_m_linear_constraints(A, x, poi.Leq, b_ub)
model.add_m_linear_constraints(A, x, poi.Geq, b_lb)

model.set_objective(poi.quicksum(x))

model.optimize()
Running HiGHS 1.9.0 (git hash: 66f735e): Copyright (c) 2024 HiGHS under MIT licence terms
Hessian has dimension 8 but no nonzeros, so is ignored
Coefficient ranges:
  Matrix [1e+00, 1e+00]
  Cost   [1e+00, 1e+00]
  Bound  [0e+00, 0e+00]
  RHS    [1e+00, 1e+00]
Presolving model
0 rows, 0 cols, 0 nonzeros  0s
0 rows, 0 cols, 0 nonzeros  0s
Presolve : Reductions: rows 0(-16); columns 0(-8); elements 0(-16) - Reduced to empty
Solving the original LP from the solution after postsolve
Model status        : Optimal
Objective value     :  8.0000000000e+00
Relative P-D gap    :  0.0000000000e+00
HiGHS run time      :          0.00

Here we use the two built-in functions add_m_variables and add_m_linear_constraints to add variables and constraints as Numpy ndarrays.

The references for these functions are listed under model.add_m_variables and model.add_m_linear_constraints.

add_m_variables returns an ndarray of variables with the specified shape.

add_m_linear_constraints adds multiple linear constraints to the model at once, formulated as \(Ax \le b\), \(Ax = b\), or \(Ax \ge b\), where the matrix \(A\) can be a dense numpy.ndarray or a sparse matrix (scipy.sparse.sparray).
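For example, the identity matrix used above could equally be supplied in sparse form. The following standalone sketch shows only the construction of \(A\) (the add_m_linear_constraints call itself is as in the model code above):

```python
import numpy as np
import scipy.sparse as sp

N = 8
A_dense = np.eye(N)                 # dense numpy.ndarray
A_sparse = sp.csr_array(np.eye(N))  # sparse array with the same entries

# Both represent the same coefficient matrix; either could be passed as
# the A argument of add_m_linear_constraints. The sparse form stores
# only the nonzero entries, which matters for large, mostly-zero A.
assert (A_sparse.toarray() == A_dense).all()
print(A_sparse.nnz, "nonzeros out of", A_dense.size, "entries")
```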