6. High-level Interface

The FORCES PRO high-level interface gives optimization users a familiar easy-to-use way to define an optimization problem. The interface also gives the advanced user full flexibility when importing external C-coded functions to evaluate the quantities involved in the optimization problem.

Important

Starting with FORCES PRO 1.8.0, the solver generated from the high-level interface supports nonlinear and convex decision making problems with integer variables.

Note

The high-level Python interface expects 0-based indices in the model formulation, such as for the indices in lbidx, ubidx, hlidx, huidx, xinitidx and xfinalidx, as is usual in Python programs. Note that this is contrary to the low-level interface, which requires 1-based indices for these fields.

6.1. Supported problems

6.1.1. Canonical problem for discrete-time dynamics

The FORCES NLP solver solves (potentially) non-convex, finite-time nonlinear optimal control problems with horizon length \(N\) of the form:

\begin{align*} \text{minimize} \quad & \sum_{k=1}^{N-1} f_k(z_k,{\color{red} p_{\color{red} k}}) && \quad \text{(separable objective)} \\ \text{subject to} \quad & z_1(\mathcal{I}) = {\color{red} z_{\mathrm{init}}} && \quad \text{(initial equality)} \\ & E_k z_{k+1} = c_k(z_k,{\color{red} p_{\color{red} k}}) && \quad \text{(inter-stage equality)} \\ & z_N(\mathcal{N}) = {\color{red} z_{\mathrm{final}}} && \quad \text{(final equality)} \\ & \underline{z}_k \leq z_k \leq \bar{z}_k && \quad \text{(upper-lower bounds)} \\ & F_kz_k \in \left[\underline{z}_k, \bar{z}_k\right]\cap\mathbb{Z} && \quad \text{(integer variables)} \\ & \underline{h}_k \leq h_k(z_k,{\color{red} p_{\color{red} k}}) \leq \bar{h}_k && \quad \text{(nonlinear constraints)} \end{align*}

for \(k = 1,\dots,N\), where \(z_k \in \mathbb{R}^{n_k}\) are the optimization variables, for example a collection of inputs, states or outputs in an MPC problem; \({\color{red} p_{\color{red} k}} \in \mathbb{R}^{l_k}\) are real-time data, which are not necessarily present in all problems; the functions \(f_k : \mathbb{R}^{n_k} \times \mathbb{R}^{l_k} \rightarrow \mathbb{R}\) are stage cost functions; the functions \(c_k : \mathbb{R}^{n_k} \times \mathbb{R}^{l_k} \rightarrow \mathbb{R}^{w_k}\) represent (potentially nonlinear) equality constraints, such as a state transition function; the matrices \(E_k\) are used to couple variables from the \((k+1)\)-th stage to those of stage \(k\) through the function \(c_k\); and the functions \(h_k : \mathbb{R}^{n_k} \times \mathbb{R}^{l_k} \rightarrow \mathbb{R}^{m_k}\) are used to express potentially nonlinear, non-convex inequality constraints. The index sets \(\mathcal{I}\) and \(\mathcal{N}\) are used to determine which variables are fixed to initial and final values, respectively. The initial and final values \(\color{red} z_{\mathrm{init}}\) and \(\color{red} z_{\mathrm{final}}\) can also be changed in real-time. At every stage \(k\), the matrix \(F_k\) is a selection matrix that picks some coordinates in vector \(z_k\).

All real-time data is coloured in red. Additionally, when integer variables are modelled, the variable bounds need to be declared as run-time parameters. See Section Mixed-integer nonlinear solver.

To obtain a solver for this optimization problem using the FORCES PRO client, you need to define all functions involved \((f_k, c_k, h_k)\) along with the corresponding dimensions.

6.1.2. Continuous-time dynamics

As an alternative to the discrete-time dynamics of Section 6.1.1, we also support continuous-time dynamics of the form:

\[\dot{x} = f(x,u,{\color{red}p})\]

and then discretizing this equation by one of the standard integration methods. See Section 6.2.4.2 for more details.

6.1.3. Other variants

Not all elements of the problem in Section 6.1.1 need to be present. Possible variants include problems:

  • where all functions are fixed at code generation time and do not need extra real-time data \(\color{red} p\);

  • with no lower (upper) bounds for variable \(z_{k,i}\), in which case \(\underline{z}_{k,i} \equiv -\infty\) (\(\bar{z}_{k,i} \equiv +\infty\));

  • without nonlinear inequalities \(h\);

  • with \(N = 1\) (single stage problem), then the inter-stage equality can be omitted;

  • that optimize over the initial value \(\color{red} z_{\mathrm{init}}\) and do not include the initial equality;

  • that optimize over the final value \(\color{red} z_{\mathrm{final}}\) and do not include the final equality;

  • mixed-integer nonlinear programs, where some variables are declared as integers. See Section Mixed-integer nonlinear solver for more information about the MINLP solver.

6.1.4. Function evaluations

The FORCES NLP solver requires external functions to evaluate:

  • the cost function terms \(f_k(z_k)\) and their gradients \(\nabla f_k(z_k)\),

  • the dynamics \(c_k(z_k)\) and their Jacobians \(\nabla c_k (z_k)\), and

  • the inequality constraints \(h_k(z_k)\) and their Jacobians \(\nabla h_k(z_k)\).

The FORCES PRO code generator supports the following ways of supplying these functions:

1. Automatic C-code generation of these functions from MATLAB using the automatic differentiation (AD) tool CasADi. This happens automatically in the background, as long as CasADi is found on the system. This process is hidden from the user, only standard MATLAB commands are needed to define the necessary functions. This is the recommended way of getting started with FORCES NLP. See Section 6.2 to learn how to use this approach.

2. C-functions (source files). These can be hand-coded, or generated by any automatic differentiation tool. See Section 6.5 for details on how to provide your own function evaluations and derivatives to FORCES PRO.

6.2. Expressing the optimization problem in code

When solving nonlinear programs of the type in Section 6.1.1, FORCES requires the functions \(f,c,h\) and their derivatives (Jacobians) to be evaluated in each iteration. There are two ways of accomplishing this: either implement all function evaluations in C by some other method (by hand or by another automatic differentiation tool), or use our integration of FORCES with CasADi, an open-source package for generating derivatives. The latter is the easiest option to quickly get started with solving NLPs, and it generates efficient code. However, if you prefer other AD tools, see Section 6.5 to learn how to provide your own derivatives to FORCES NLP solvers. This section will describe the CasADi-based approach in detail, using either the MATLAB or the Python client of FORCES PRO. Please note that even though both the MATLAB and the Python client are intended to behave largely identically, there are some differences between the two clients. For details, refer to Differences between the MATLAB and the Python client.

6.2.1. Model Initialization

6.2.1.1. Model Initialization in Matlab

In the MATLAB high-level interface, the formulation of the optimization problem is given through a simple structure array. In the following, we will describe the problem in such an array named model. It is advisable to zero-initialize this variable at the beginning of your script such that no values set in previous iterations of your script interfere with this run:

model = struct();

6.2.1.2. Model Initialization in Python

In the high-level Python interface, optimization problems are described through objects of different types, depending on the problem. The following classes are available:

  • SymbolicModel - Allows you to describe your optimization problem using regular Python functions. These functions will be evaluated symbolically by CasADi to create optimized C code. Note that this model is meant to be used for nonlinear models. If you wish to express a convex model symbolically, consider using the ConvexSymbolicModel or forcing generation of a nonconvex solver by setting the option forcenonconvex to True.

  • ExternalFunctionModel - Enables more flexibility in describing nonlinear problems by allowing any external function to be used as objective function and constraints. This requires C code or already compiled code (object files or shared libraries) from any language. The approach using external function evaluations for your objective function and constraints is described in External function evaluations in C, including the required call signature of the callback function.

  • ConvexSymbolicModel - Allows convex problems to be described symbolically; FORCES PRO can generate different, dedicated solvers for convex problems.

Whichever model you choose, it can be initialized with no arguments, or with a single argument denoting the number of stages \(N\) in the problem:

import forcespro.nlp
model = forcespro.nlp.SymbolicModel(50)

Note that most symbolic problem descriptions will also require the Numpy and CasADi packages, so it is a good idea to import them at the beginning:

import numpy as np
import casadi

6.2.2. Dimensions

In order to define the dimensions of the stage variables \(z_i\), the number of equality and inequality constraints and the number of real-time parameters, use the following fields (properties) in the client:

model.N = 50;   % length of multistage problem
model.nvar = 6; % number of stage variables
model.neq = 4;  % number of equality constraints
model.nh = 2;   % number of nonlinear inequality constraints
model.npar = 0; % number of runtime parameters
model.N = 50    # not required if already specified in initializer
model.nvar = 6  # number of stage variables
model.neq = 4   # number of equality constraints
model.nh = 2    # number of nonlinear inequality constraints
model.npar = 0  # number of runtime parameters

If the dimensions vary for different stages, use arrays of length \(N\). See Section 6.2.7.1 for an example.

6.2.3. Objective

The high-level interface allows you to define the objective function using a handle to a MATLAB or Python function that evaluates the objective. This function is called with the variables of one stage as its first argument, i.e. a vector of model.nvar entries. FORCES PRO will process the given function symbolically and generate the necessary C code to be included in the solver.

model.objective = @eval_obj; % handle to objective function
model.objective = eval_obj  # eval_obj is a Python function

For instance, the function could be:

function f = eval_obj ( z )
    F = z(1);
    s = z(2);
    y = z(4);
    f = -100*y + 0.1*F^2 + 0.01*s^2;
end
def eval_obj(z):
    F = z[0]
    s = z[1]
    y = z[3]
    return -100*y + 0.1*F**2 + 0.01*s**2

If the cost function varies for different stages use a cell array of function handles of length \(N\) in MATLAB, or a list of function handles in Python. See Section 6.2.7.1 for an example.

Note

Python and MATLAB use different indexing bases. The first element of any variable has index 1 in MATLAB, whereas it is accessed at offset 0 in Python!

The objective evaluation function can optionally accept an additional argument p which serves as a run-time parameter. In order to be able to change the terms in the cost function during runtime, one can define the objective function as:

function f = eval_obj ( z, p )
    F = z(1);
    s = z(2);
    y = z(4);
    f = -100*y + p(1)*F^2 + p(2)*s^2;
end
def eval_obj(z, p):
    F = z[0]
    s = z[1]
    y = z[3]
    return -100*y + p[0]*F**2 + p[1]*s**2

The length of this additional parameter vector in each stage is given by model.npar.
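
For the parametric objective above, the corresponding dimension setting would be, as a minimal sketch in Python:

model.npar = 2              # two run-time parameters per stage, matching p[0] and p[1]
model.objective = eval_obj  # parametric objective defined above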

6.2.4. Equalities

6.2.4.1. Discrete-time

For discrete-time dynamics, one can define a handle to a function evaluating \(c\) as shown below. The selection matrix \(E\) that determines which variables are affected by the inter-stage equality must also be filled. For performance reasons, it is recommended to order variables such that the selection matrix has the following structure:

model.eq = @eval_dynamics;      % handle to inter-stage function
model.E = [zeros(4,2), eye(4)]; % selection matrix
model.eq = eval_dynamics  # handle to inter-stage function
model.E = np.concatenate([np.zeros((4, 2)), np.eye(4)], axis=1)  # selection matrix

If the equality constraint function varies for different stages use a cell array (or list in Python) of function handles of length \(N-1\), and similarly for \(E_k\). See Section 6.2.7.1 for an example.
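
As an illustration, here is a hypothetical eval_dynamics in Python, consistent with the selection matrix above (two inputs followed by four states per stage). It is only a sketch of what such a function could look like, not the dynamics of any particular example; the forward-Euler step and the sampling time dt are assumptions:

import casadi

def eval_dynamics(z):
    u = z[0:2]   # assumed: first two stage variables are inputs
    x = z[2:6]   # assumed: remaining four stage variables are states
    dt = 0.1     # assumed sampling time
    # simple forward-Euler update of a planar double integrator
    return casadi.vertcat(x[0] + dt*x[2],
                          x[1] + dt*x[3],
                          x[2] + dt*u[0],
                          x[3] + dt*u[1])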

6.2.4.2. Continuous-time

For continuous-time dynamics, FORCES PRO requires you to describe the dynamics of the system in the following form:

\[\dot{x} = f(x,u,{\color{red}p})\]

where \(x\) are the states of the system, \(u\) are the inputs and \(\color{red} p\) a vector of parameters, e.g. the mass or inertia. The selection matrix \(E\) determines which components of the stage variable \(z_i\) are to be considered state \(x\) or input \(u\) in this representation.

For example, let’s assume that the system to be controlled has the dynamics:

\[\dot{x} = {\color{red} p_{\color{red} 1}} x_1 x_2 + {\color{red} p_{\color{red} 2}} u\]

In order to discretize the system for use with FORCES PRO, we have to:

  1. Implement the continuous-time dynamics as a function:

function xdot = continuous_dynamics(x, u, p)
    xdot = p(1)*x(1)*x(2) + p(2)*u;
end
def continuous_dynamics(x, u, p):
    return p[0]*x[0]*x[1] + p[1]*u[0]

Note that in general the parameter vector p can be omitted if there are no parameters. You can also implement short functions as anonymous function handles:

continuous_dynamics_anonymous = @(x,u,p) p(1)*x(1)*x(2) + p(2)*u;
continuous_dynamics_anonymous = lambda x, u, p: p[0]*x[0]*x[1] + p[1]*u[0]

2. Tell FORCES PRO that you are using continuous-time dynamics by setting the continuous_dynamics field of the model to a function handle to one of the functions above:

model.continuous_dynamics = @continuous_dynamics;
model.continuous_dynamics = continuous_dynamics

or, if you are using anonymous functions:

model.continuous_dynamics = @continuous_dynamics_anonymous;
model.continuous_dynamics = continuous_dynamics_anonymous

3. Use the selection matrix \(E\) to link the stage variables \(z_i\) with the states \(x\) and inputs \(u\) of the continuous dynamics function:

model.E = [zeros(2, 1), eye(2)];
model.E = np.concatenate([np.zeros((2, 1)), np.eye(2)], axis=1)

Components of \(z_i\) are considered as state variables \(x\) according to the order prescribed by the selection matrix. If an entire \(k\)-th column of the selection matrix is zero, the \(k\)-th component of \(z_i\) is not governed by a dynamic equation and thus considered as input \(u\).

  4. Choose one of the integrator functions from the Integrators section (the default is ERK4):

codeoptions.nlp.integrator.type = 'ERK2';
codeoptions.nlp.integrator.Ts = 0.1;
codeoptions.nlp.integrator.nodes = 5;
codeoptions.nlp.integrator.type = 'ERK2'
codeoptions.nlp.integrator.Ts = 0.1
codeoptions.nlp.integrator.nodes = 5

where the integrator type is set using the type field of the options struct codeoptions.nlp.integrator. The field Ts determines the absolute time between two integration intervals, while nodes defines the number of intermediate integration nodes within that integration interval. In the example above, we use 5 steps to integrate for 0.1 seconds, i.e. each integration step covers an interval of 0.02 seconds.

6.2.5. Initial and final conditions

The indices affected by the initial and final conditions can be set as follows:

model.xinitidx = 3:6;   % indices affected by initial condition
model.xfinalidx = 5:6;  % indices affected by final condition
model.xinitidx = range(2, 6)   # indices affected by the initial condition
model.xfinalidx = range(4, 6)  # indices affected by the final condition

Note

Python and MATLAB use different indexing bases. The first variable in a stage has index 1 in MATLAB, whereas it is accessed at offset 0 in Python! Further note that Python’s range does not include the upper limit, thus:

list(range(2, 6)) == [2, 3, 4, 5]  # does not include upper limit

6.2.6. Inequalities

A function evaluating nonlinear inequalities can be provided in a similar way, for example:

function h = eval_const(z)
  x = z(3);
  y = z(4);
  h = [x^2 + y^2;
       (x+2)^2 + (y-2.5)^2 ];
end
def eval_const(z):
    x = z[2]
    y = z[3]
    return np.array([x**2 + y**2,
                     (x+2)**2 + (y-2.5)**2])

The simple bounds and the nonlinear inequality bounds can have +inf and -inf elements, but must have the same length as given by the fields nvar and nh, respectively:

model.ineq = @eval_const;             % handle to nonlinear constraints
model.hu = [9, +inf];                 % upper bound for nonlinear constraints
model.hl = [1, 0.95^2];               % lower bound for nonlinear constraints
model.ub = [+5, +1, 0, 3, 2, +pi];    % simple upper bounds
model.lb = [-5, -1, -3, -inf,  0, 0]; % simple lower bounds
model.ineq = eval_const                       # handle to nonlinear constraints
model.hu = [9, +float('inf')]                 # upper bound for nonlinear constraints
model.hl = [1, 0.95**2]                       # lower bound for nonlinear constraints
model.ub = [+5, +1, 0, 3, 2, +np.pi]          # simple upper bounds
model.lb = [-5, -1, -3, -float('inf'), 0, 0]  # simple lower bounds

Note

While the FORCES PRO Python client does not require you to use numpy arrays, we encourage their use for vector- and matrix-valued properties of the model, as it simplifies calculations for the user. Therefore, any of the above properties can also be set to Numpy arrays instead of lists. If lists are given, these are converted to Numpy arrays internally.

If the constraints vary for different stages, use cell arrays of length \(N\) for any of the quantities defined above. See Varying dimensions, parameters, constraints, or functions section for an example.

Bounds model.lb and model.ub can be made parametric by leaving said fields empty and using the model.lbidx and model.ubidx fields to indicate on which variables lower and upper bounds are present. The numerical values will then be expected at runtime. For example, to set parametric lower bounds on states 1 and 2, and parametric upper bounds on states 2 and 3, use:

% Lower bounds are parametric (indices not mentioned here are -inf)
model.lbidx = [1 2]';

% Upper bounds are parametric (indices not mentioned here are +inf)
model.ubidx = [2 3]';

% lb and ub have to be empty when using parametric bounds
model.lb = [];
model.ub = [];
# Lower bounds are parametric (indices not mentioned here are -inf)
model.lbidx = [0, 1]

# Upper bounds are parametric (indices not mentioned here are +inf)
model.ubidx = [1, 2]

# There is no need to set model.lb or model.ub when using parametric bounds:
# if model.lbidx or model.ubidx are set, any non-empty value for lb or ub is disallowed.

and then specify the exact values at runtime through the fields lb01, …, lbN and ub01, …, ubN:

% Specify lower bounds
problem.lb01 = [0 0]';
problem.lb02 = [0 0]';
% ...

% Specify upper bounds
problem.ub01 = [3 2]';
problem.ub02 = [3 2]';
% ...
# Specify lower bounds
problem["lb01"] = [0, 0]
problem["lb02"] = [0, 0]

# Specify upper bounds
problem["ub01"] = [3, 2]
problem["ub02"] = [3, 2]

Tip

One could use problem.(sprintf('lb%02u',i)) in an i-indexed loop to set the parametric bounds more easily in the MATLAB client. Similarly, the parametric bounds can be set using problem["lb{:02d}".format(i+1)] and problem["ub{:02d}".format(i+1)] in the Python client, as shown in the sketch below. Alternatively, consider using the option stack_parambounds, described below.
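
For example, a minimal Python sketch of such a loop (assuming two parametric lower and upper bounds per stage, as in the example above):

problem = {}
for i in range(model.N):
    problem["lb{:02d}".format(i + 1)] = [0, 0]
    problem["ub{:02d}".format(i + 1)] = [3, 2]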

If the model.lbidx and model.ubidx fields vary for different stages, use cell arrays of length \(N\). From Release 3.0.1 onwards, the parametric bounds can be stacked into a single array covering all stages. To enable this behaviour, users need to set the following code-generation option:

codeoptions.nlp.stack_parambounds = 1;
codeoptions.nlp.stack_parambounds = True

This option is effective for both the PDIP_NLP and SQP_NLP solve methods and works with bounds on variables and inequalities. At run-time, users can then write

% Lower and upper bounds stacked over all stages
problem.lb = [0 0 0 0 ...];
problem.ub = [3 2 3 2 ...];
# Lower and upper bounds stacked over all stages
problem["lb"] = [0, 0, 0, 0, ...]
problem["ub"] = [3, 2, 3, 2, ...]

Alternatively, if you want to use the same bounds across all stages:

problem.lb = repmat([0, 0], 1, model.N);
problem.ub = repmat([3, 2], 1, model.N);
problem["lb"] = np.tile([0, 0], (model.N,))
problem["ub"] = np.tile([3, 2], (model.N,))

6.2.7. Variations

6.2.7.1. Varying dimensions, parameters, constraints, or functions

The example described above has the same dimensions, bounds and functions for the whole horizon. One can define varying dimensions using arrays and varying bounds and functions using MATLAB cell arrays or Python lists. For instance, to remove the first and second variables from the last stage one could write the following:

for i = 1:model.N-1
  model.nvar(i) = 6;
  model.objective{i} = @(z) -100*z(4) + 0.1*z(1)^2 + 0.01*z(2)^2;
  model.lb{i} = [-5, -1, -3, 0, 0, 0];
  model.ub{i} = [+5, +1, 0, 3, 2, +pi];
  if i < model.N-1
    model.E{i} = [zeros(4, 2), eye(4)];
  else
    model.E{i} = eye(4);
  end
end

model.nvar(model.N) = 4;
model.objective{model.N} = @(z) -100*z(2);
model.lb{model.N} = [-3, 0, 0, 0];
model.ub{model.N} = [ 0, 3, 2, +pi];
model = forcespro.nlp.SymbolicModel(50)  # to set values stage-wise, the model must be initialized this way

for i in range(0, model.N - 1):
  model.nvar[i] = 6
  model.objective[i] = lambda z: -100*z[3] + 0.1*z[0]**2 + 0.01*z[1]**2
  model.lb[i] = [-5, -1, -3, 0, 0, 0]
  model.ub[i] = [+5, +1, 0, 3, 2, +np.pi]
  if i < model.N-2:
    model.E[i] = np.concatenate([np.zeros((4, 2)), np.eye(4)], axis=1)
  else:
    model.E[i] = np.eye(4)

model.nvar[-1] = 4
model.objective[-1] = lambda z: -100*z[1]
model.lb[-1] = [-3, 0, 0, 0]
model.ub[-1] = [ 0, 3, 2, +np.pi]

It is also typical for model predictive control problems (MPC) that only the last stage differs from the others (excluding the initial condition, which is handled separately). Instead of defining cell arrays as above for all stages, FORCES PRO offers the following shorthand notations that alter the last stage:

  • nvarN: number of variables in last stage

  • nparN: number of parameters in last stage

  • objectiveN: objective function for last stage

  • EN: selection matrix \(E\) for last stage update

  • nhN: number of inequalities in last stage

  • ineqN: inequalities for last stage

Add any of these fields to the model struct/object to override the default behaviour, which is to use the same values along the whole horizon. For example, to add a terminal cost that is a factor 10 higher than the stage cost:

model.objectiveN = @(z) 10*model.objective(z);
model.objectiveN = lambda z: 10*model.objective(z)
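
For instance, a minimal Python sketch that shrinks the last stage to four variables using these shorthand fields; the concrete values are hypothetical and must be consistent with the rest of your model:

model.nvarN = 4       # fewer variables in the last stage
model.EN = np.eye(4)  # selection matrix used for the last inter-stage equality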

6.2.7.2. Providing analytic derivatives

The algorithms inside FORCES PRO need the derivatives of the functions describing the objective, equality and inequality constraints. By default, the code generation engine uses algorithmic differentiation (AD) to compute these quantities. Alternatively, when analytic derivatives are available, the user can provide them using the fields model.dobjective, model.deq, and model.dineq.

Note that the user must be particularly careful to make sure that the provided functions and derivatives are consistent, for example:

model.objective = @(z) z(3)^2;
model.dobjective = @(z) 2*z(3);
model.objective = lambda z: z[2]**2
model.dobjective = lambda z: 2*z[2]

The code generation system will not check the correctness of the provided derivatives.

6.3. Generating a solver

In addition to the definition of the NLP, solver generation accepts an optional set of options for customization (see the Solver Options section for more information). Using the default solver options, we generate a solver as follows:

% Get the default solver options
codeoptions = getOptions('FORCESNLPsolver');

% Generate solver
FORCES_NLP(model, codeoptions);
# Get the default solver options
options = forcespro.CodeOptions('FORCESNLPsolver')

# Generate solver for previously initialized model
solver = model.generate_solver(options)

As the solver is generated, several files are downloaded into the current working directory of the calling script, including the compiled solver itself and MATLAB/Python interfaces for calling it.

Note

In the Python client, generate_solver() returns a solver object. This object can be used to call the solver. To get a solver object for a previously generated solver in some directory /path/to/solver, use:

import forcespro.nlp
solver = forcespro.nlp.Solver.from_directory('/path/to/solver')

6.3.1. Declaring outputs

By default, the solver returns the solution vector for all stages as multiple outputs. Alternatively, the user can pass a third argument to the function FORCES_NLP with an array that specifies what the solver should output. For instance, to define an output, named u0, to be the first two elements of the solution vector at stage 1, use the following commands:

output1 = newOutput('u0', 1, 1:2);
FORCES_NLP(model, codeoptions, output1);
output_1 = ("u0", 0, [0, 1], "")
model.generate_solver(options, [output_1])

Important

When using the MINLP solver and defining outputs, all integer variables need to be specified as custom outputs.

6.4. Calling the solver

After code generation has been successful, one can obtain information about the real-time data needed to call the generated solver by typing:

help FORCESNLPsolver
# Assuming `solver` is the return value of a `model.generate_solver()` call
solver.help()

In Python, a previously generated solver can be loaded as follows:

import forcespro.nlp
solver = forcespro.nlp.Solver.from_directory("/path/to/generated/solver/")
solver.help()

6.4.1. Initial guess

The FORCES NLP solver solves NLPs to local optimality, hence the resulting optimal solution depends on the initialization of the solver. One can also choose another initialization point when a better guess is available. The following code sets the initial point to be in the middle of all bounds:

x0i = model.lb + (model.ub - model.lb)/2;
x0 = repmat(x0i', model.N, 1);
problem.x0 = x0;
xi = (model.lb + model.ub) / 2  # assuming lb and ub are numpy arrays
x0 = np.tile(xi, (model.N,))
problem = {"x0": x0}

6.4.2. Initial and final conditions

If there are initial and/or final conditions on the optimization variables, the solver will expect the corresponding runtime fields:

problem.xinit = model.xinit;
problem.xfinal = model.xfinal;
problem = {"xinit": np.array([1, 2, 3]),
           "xfinal": np.array([4, 5, 6])}

Note that the Python client does not allow setting model.xinit or model.xfinal properties, as those are run-time parameters not needed at solver generation time.

6.4.3. Real-time parameters

Whenever there are any runtime parameters defined in the problem, i.e. the field npar is not zero, the solver will expect the following field containing the parameters for all the \(N\) stages stacked in a single vector:

problem.all_parameters = repmat(1.0, model.N, 1);
problem["all_parameters"] = np.tile(1.0, (model.N,))

6.4.4. Tolerances as real-time parameters

From FORCES 2.0 onwards, the NLP solver tolerances can be made real-time parameters, meaning that they do not need to be set when generating the solver but can be changed at run-time when calling the generated solver. The code-snippet below shows how to make the tolerances on the gradient of the Lagrangian, the equalities, the inequalities and the complementarity condition parametric. Essentially, when a tolerance is declared nonpositive at code-generation, the corresponding run-time parameter is created in the solver.

codeoptions.nlp.TolStat = -1; % Tolerance on gradient of Lagrangian
codeoptions.nlp.TolEq = -1;   % Tolerance on equality constraints
codeoptions.nlp.TolIneq = -1; % Tolerance on inequality constraints
codeoptions.nlp.TolComp = -1; % Tolerance on complementarity
codeoptions.nlp.TolStat = -1  # Tolerance on gradient of Lagrangian
codeoptions.nlp.TolEq = -1    # Tolerance on equality constraints
codeoptions.nlp.TolIneq = -1  # Tolerance on inequality constraints
codeoptions.nlp.TolComp = -1  # Tolerance on complementarity

Once the tolerance has been declared nonpositive and the solver has been generated, the corresponding parameter can be set at run-time as follows:

problem.ToleranceStationarity = 1e-1;
problem.ToleranceEqualities = 1e-1;
problem.ToleranceInequalities = 1e-1;
problem.ToleranceComplementarity = 1e-1;
problem["ToleranceStationarity"] = 1e-1
problem["ToleranceEqualities"] = 1e-1
problem["ToleranceInequalities"] = 1e-1
problem["ToleranceComplementarity"] = 1e-1

Tip

We do not recommend changing the tolerance on the complementarity condition since it is used internally to update the barrier parameter. Hence loosening it may hamper the solver's convergence.

6.4.5. Exitflags and quality of the result

Once all parameters have been populated, the MEX interface of the solver can be used to invoke it:

[output, exitflag, info] = FORCESNLPsolver(problem);
output, exitflag, info = solver.solve(problem)

The possible exitflags are documented in Table 6.1. The exitflag should always be checked before continuing with program execution, to avoid using spurious solutions later in the code: make sure the solver has exited without an error before using the solution. For example, in MATLAB, we suggest using an assert statement:

assert(exitflag == 1, 'Some issue with FORCES solver');
assert exitflag == 1, "Some issue with FORCES solver"

Table 6.1 Exitflag values

Exitflag  Description
1         Local optimal solution found (i.e. the point satisfies the KKT optimality conditions to the requested accuracy).
0         Maximum number of iterations reached. You can examine the value of the optimality conditions returned by FORCES to decide whether the point returned is acceptable.
-4        Wrong number of inequalities input to solver.
-5        Error occurred during matrix factorization.
-6        NaN or INF occurred during function evaluations.
-7        The solver could not proceed. The most likely cause is that the problem is infeasible. Try formulating a problem with slack variables (soft constraints) to avoid this error.
-8        The internal QP solver could not proceed. This exitflag can only occur when using the Sequential quadratic programming algorithm. The most likely cause is that an infeasible or numerically unstable QP was encountered. Try increasing the hessian regularization parameter reg_hessian if this exitflag is encountered (see SQP specific codeoptions).
-10       NaN or INF occurred during evaluation of functions and derivatives. If this occurs at iteration zero, try changing the initial point. For example, for a cost function \(1/\sqrt{x}\) with an initialization \(x_0=0\), this error would occur.
-11       Invalid values in problem parameters.
-100      License error. This typically happens if you are trying to execute code that has been generated with a Simulation license of FORCES PRO on another machine. Regenerate the solver using your machine.
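
In practice, rather than only asserting on exitflag == 1, one may want to branch on the most common values from Table 6.1; a minimal Python sketch:

if exitflag == 1:
    pass  # local optimum found to the requested accuracy; safe to use the output
elif exitflag == 0:
    print("Maximum number of iterations reached; inspect the optimality measures in info")
else:
    raise RuntimeError("FORCES solver returned exitflag {}".format(exitflag))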

6.5. External function evaluations in C

This approach allows the user to integrate existing efficient C implementations to evaluate the required functions and their derivatives with respect to the stage variable. This gives the user full flexibility in defining the optimization problem. In this case, the functions do not necessarily have to be differentiable, although the convergence of the algorithm is not guaranteed if they are not. When following this route the user does not have to provide MATLAB code to evaluate the objective or constraint functions. However, the user is responsible for making sure that the provided derivatives and function evaluations are coherent. The FORCES NLP code generator will not check this.

6.5.1. Interface

6.5.1.1. Expected function signature

To obtain the necessary information, the FORCES NLP solver will automatically call a function supplied by the user. This function must have the following signature:

void myfunctions(
    double *x,        /* primal vars */
    double *y,        /* eq. constraint multipliers */
    double *l,        /* ineq. constraint multipliers */
    double *p,        /* runtime parameters */
    double *f,        /* objective function (incremented in this function) */
    double *nabla_f,  /* gradient of objective function */
    double *c,        /* dynamics */
    double *nabla_c,  /* Jacobian of the dynamics (column major) */
    double *h,        /* inequality constraints */
    double *nabla_h,  /* Jacobian of inequality constraints (column major) */
    double *H,        /* Hessian (column major) */
    int stage,        /* stage number (0-indexed) */
    int iteration     /* solver iteration count */
)

In the MATLAB client, note that this function must have the same name as the file it is contained in, minus the file extension. In the above example, the function must therefore be stored in a file named myfunctions.c. Using the Python client, an arbitrary function name unrelated to the file name can be used.

6.5.1.2. Custom data structures as parameters

If you have an advanced data structure that holds the user-defined run-time parameters, and you do not want to serialize it into an array of doubles to use the interface above, you can invoke the option:

codeoptions.customParams = 1;

This will change the interface of the expected external function to:

void myfunctions(
    double *x,        /* primal vars */
    double *y,        /* eq. constraint multipliers */
    double *l,        /* ineq. constraint multipliers */
    void *p,          /* runtime parameters */
    double *f,        /* objective function (incremented in this function) */
    double *nabla_f,  /* gradient of objective function */
    double *c,        /* dynamics */
    double *nabla_c,  /* Jacobian of the dynamics (column major) */
    double *h,        /* inequality constraints */
    double *nabla_h,  /* Jacobian of inequality constraints (column major) */
    double *H,        /* Hessian (column major) */
    int stage,        /* stage number (0-indexed) */
    int iteration     /* solver iteration count */
)

i.e. you can pass arbitrary data structures to your own function by setting the pointer in the params struct:

myData p;   /* define your own parameter structure */
/* ... */   /* fill it with data */

/* Set parameter pointer to your data structure */
mySolver_params params; /* Define solver parameters */
params.customParams = &p;

/* Call solver (assuming everything else is defined) */
mySolver_solve(&params, &output, &info, stdout, &external_func);

Note

Setting customParams to 1 will disable building high-level interfaces. Only C header- and source files will be generated.

Note

Using a custom parameters struct is currently not supported when using the Python client.

6.5.2. Supplying function evaluation information

In MATLAB, to let the code generator know about the path to the C files implementing the necessary function evaluations use:

model.extfuncs = 'C/myfunctions.c';

As noted above, the MATLAB client derives the function name used for the callback from the file name; the function must therefore have the same name as the file in which it is contained.

In Python, use an ExternalFunctionModel as follows:

model = forcespro.nlp.ExternalFunctionModel(50)
model.add_auxiliary(["helper_functions.c", "compiled_helper_functions.obj"])
model.set_main_callback("myfunctions.c", function="myfunctions")

Herein, the add_auxiliary() method is used to add any helper C source files or object files that should be compiled and linked against, and the set_main_callback() function is used to define the path to a C source file or compiled object file, as well as the name of an exported function that conforms to the call signature given above. This function will be used to evaluate any nonlinear constraints and the objective function.

6.5.3. Rules for function evaluation code

The contents of the function have to follow certain rules. We will use the following example to illustrate them:

/* cost */
if (f)
{   /* notice the increment of f */
    (*f) += -100*x[3] + 0.1*x[0]*x[0] + 0.01*x[1]*x[1];
}
/* gradient - only nonzero elements have to be filled in */
if (nabla_f)
{
    nabla_f[0] = 0.2*x[0];
    nabla_f[1] = 0.02*x[1];
    nabla_f[3] = -100;
}

/* eq constr */
if (c)
{
    vehicle_dynamics(x, c);
}
/* jacobian equalities (column major) */
if (nabla_c)
{
    vehicle_dynamics_jacobian(x, nabla_c);
}

/* ineq constr */
if (h)
{
    h[0] = x[2]*x[2] + x[3]*x[3];
    h[1] = (x[2]+2)*(x[2]+2) + (x[3]-2.5)*(x[3]-2.5);
}
/* jacobian inequalities (column major)
   - only nonzero elements have to be filled in */
if (nabla_h)
{
    /* column 3 */
    nabla_h[4] = 2*x[2];
    nabla_h[5] = 2*x[2] + 4;
    /* column 4 */
    nabla_h[6] = 2*x[3];
    nabla_h[7] = 2*x[3] - 5;
}

Notice that every function evaluation is only carried out if the corresponding pointer is not null. This is used by the FORCES NLP solver to call the same interface with different pointers depending on the functions that it requires.

6.5.4. Matrix format

Matrices are assumed to be stored in dense column major format. However, only the non-zero components need to be populated, as FORCES NLP makes sure that the arrays are initialized to zero before calling this interface.
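
As a quick illustration of the column-major convention (numpy is used here only for the demonstration): the flat index of entry (i, j) of an m-by-n column-major matrix is j*m + i, which is why column 3 of the 2-by-6 inequality Jacobian above occupies flat indices 4 and 5.

import numpy as np

m, n = 2, 6
J = np.zeros((m, n))
J[0, 2] = 1.0                # entry in row 0, column 2 (0-based)
flat = J.flatten(order="F")  # column-major ("Fortran") ordering
assert flat[2*m + 0] == 1.0  # flat index j*m + i = 4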

6.5.5. Multiple source files

The use of multiple C files is also supported. In the example above, the functions vehicle_dynamics and vehicle_dynamics_jacobian are defined in another file and included as external functions using:

extern void vehicle_dynamics(double *x, double *c);
extern void vehicle_dynamics_jacobian(double *x, double *J);

In MATLAB, to let the code generator know about the location of these other files, use a string with spaces separating the different files. In Python, use the add_auxiliary() method:

codeoptions.nlp.other_srcs = 'C/dynamics.c';
model.add_auxiliary('C/dynamics.c')

6.5.6. Stage-dependent functions

Whenever the cost function in one of the stages is different from the standard cost function \(f\), one can make use of the argument stage to evaluate different functions depending on the stage number. The same applies to all other quantities.

6.6. Mixed-integer nonlinear solver

From FORCES PRO 1.8.0, mixed-integer nonlinear programs (MINLPs) are supported. This broad class of problems encompasses all nonlinear programs with some integer decision variables.

6.6.1. Writing a mixed-integer model

In order to use this feature, the user has to declare lower and upper bounds on all variables as parametric, as shown in the code below.

model.lb = [];
model.ub = [];
model.lbidx = range(0, model.nvar)
model.ubidx = range(0, model.nvar)

The user is then expected to provide lower and upper bounds as run-time parameters. FORCES PRO switches to the MINLP solver as soon as some variables are declared as integers in any stage. This information can be provided to FORCES PRO via the intidx array at every stage. An example is shown below.

%% Add integer variables to existing nonlinear model
for s = 1:5
    model.intidx{s} = [4, 5, 6];
end
# Add integer variables to existing nonlinear model
for s in range(0, 5):
    model.intidx[s] = [3, 4, 5]

In the above code snippet, the user declares variables 4, 5 and 6 (3, 4 and 5 in Python’s zero-based indexing) as integers from stage 1 to 5 (stages 0 to 4 in Python’s zero-based indexing). The values that can be taken by an integer variable are derived from its lower and upper bounds. For instance, if the variable lies between -1 and 1, then it can take integer values -1, 0 or 1. If a variable has been declared as integer and does not have lower or upper bounds, FORCES PRO raises an exception during code generation. Stating that a variable has lower and upper bounds should be done via the arrays lbidx and ubidx. For instance, in the code below, variables 1 to 6 (0 to 5 in Python) in stage 1 (0) have lower and upper bounds, which are expected to be provided at run-time.

model.lbidx{1} = 1 : 6;
model.ubidx{1} = 1 : 6;
model.lbidx[0] = range(0, 6)
model.ubidx[0] = range(0, 6)

The FORCES PRO MINLP algorithm is based on the well-known branch-and-bound algorithm but comes with several customization features which can help improve performance on some models by enabling the user to provide application-specific knowledge to the search process. At every node of the search tree, the FORCES PRO nonlinear solver is called in order to compute a solution of a relaxed problem. The generated MINLP solver code can be customized via the options described in Table 6.2, which can be changed before running the code generation.

One of the salient features of the MINLP solver is that the branch-and-bound search can be run in parallel on several threads. To this end, the search is split into two phases. It starts with a sequential branch-and-bound and switches to a parallelizable process when the number of nodes in the queue is sufficiently high. The node selection strategy can be customized in both phases, as described in Table 6.2.

Table 6.2 FORCES PRO MINLP solver options

Code generation setting   Values                                             Default
minlp.int_gap_tol         Any value \(\geq 0\)                               0.001
minlp.max_num_nodes       Any value \(\geq 0\)                               10000
minlp.seq_search_strat    'BEST_FIRST', 'BREADTH_FIRST', 'DEPTH_FIRST'       'BEST_FIRST'
minlp.par_search_strat    'BEST_FIRST', 'BREADTH_FIRST', 'DEPTH_FIRST'       'BEST_FIRST'
minlp.max_num_threads     Any nonnegative value, preferably smaller than 8   4

  • The minlp.int_gap_tol setting corresponds to the final optimality tolerance below which the solver is claimed to have converged. It is the difference between the objective of the incumbent, which is the best integer feasible solution found so far, and the lowest lower bound. As the node problems are generally not convex, it can be expected to become negative. FORCES PRO claims convergence to a local minimum only when the integrality gap is nonnegative and below the tolerance minlp.int_gap_tol.

  • The minlp.max_num_nodes setting is the maximum number of nodes which can be explored during the search.

  • The minlp.seq_search_strat setting is the search strategy which is used to select candidate nodes during the sequential search phase.

  • The minlp.par_search_strat setting is the search strategy which is used to select candidate nodes during the parallelizable search phase.

  • The minlp.max_num_threads setting is the maximum number of threads allowed for a parallel search. The actual number of threads on which the branch-and-bound algorithm can be run can be set as a run-time parameter, as described below.
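
As an illustration, a minimal sketch of how these code-generation settings might be set from the Python client, assuming the names listed in Table 6.2 are fields of codeoptions.minlp (the values shown are the defaults, except the parallel search strategy):

import forcespro

codeoptions = forcespro.CodeOptions("FORCESNLPsolver")
codeoptions.minlp.int_gap_tol = 0.001
codeoptions.minlp.max_num_nodes = 10000
codeoptions.minlp.seq_search_strat = 'BEST_FIRST'
codeoptions.minlp.par_search_strat = 'DEPTH_FIRST'
codeoptions.minlp.max_num_threads = 4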

Note

The MINLP solver is currently constrained to run on one thread on MacOS, meaning that minlp.max_num_threads is automatically set to 1 on MacOS.

Important

When generating a MINLP solver for MacOS, the thread local feature (codeoptions.threadSafeStorage) is automatically set to 0. Hence, if a dynamic library is used for a MINLP solver in a MacOS environment, one should not run more than one solver linked to that library at the same time. A workaround is to use the static library, which is not bound by this restriction.

The FORCES PRO MINLP solver also features settings which can be set at run-time. These are the following:

  • minlp.numThreadsBnB, the number of threads used to parallelize the search. Its default value is 1, if not provided by the user.

  • minlp.timeOutBnB, the maximum amount of time allowed for completing the search. Its default value is 1.0 seconds, if not set by the user.

  • minlp.parallelStrategy, the method used for parallelizing the mixed-integer search (from FORCES-PRO 1.9.0). Value 0 (default) corresponds to a single priority queue shared between threads. Value 1 corresponds to having each thread managing its own priority queue.

6.6.2. Mixed-integer solver customization via user callbacks

For advanced users, the mixed-integer branch-and-bound search can be customized after the rounding and the branching phases. In the rounding phase, an integer feasible solution is computed after each relaxed problem solve. The user is allowed to modify the rounded solution according to some modelling requirements and constraints. This can be accomplished via the postRoundCallback_template.c file provided in the FORCES PRO client. This callback is applied at every stage in a loop and updates the relaxed solution stage-wise. It needs to be provided before code generation, as shown in the following code snippet.

%% Add post-rounding callback to existing model
postRndCall = fileread('postRoundCallback_template.c'); % The file name can be changed by the user
model.minlpPostRounding = postRndCall;
with open('postRoundCallback_template.c') as f:
  model.minlpPostRounding = f.read()

The branching process can be customized in order to discard some nodes during the search. To do so, the user is expected to overwrite the file postBranchCallback_template.c and pass it to FORCES PRO before generating the MINLP solver code.

%% Add as many post-branching callbacks as you want
postBranchCall_1 = fileread('postBranchCallback_template_1.c');
postBranchCall_2 = fileread('postBranchCallback_template_2.c');
postBranchCall_3 = fileread('postBranchCallback_template_3.c');
model.minlpPostBranching{1} = postBranchCall_1;
model.minlpPostBranching{2} = postBranchCall_2;
model.minlpPostBranching{3} = postBranchCall_3;
# Add as many post-branching callbacks as you want
with open('postBranchCallback_template_1.c') as f:
    model.minlpPostBranching[0] = f.read()
with open('postBranchCallback_template_2.c') as f:
    model.minlpPostBranching[1] = f.read()
with open('postBranchCallback_template_3.c') as f:
    model.minlpPostBranching[2] = f.read()

In each of those callbacks, the user is expected to update the lower and upper bounds of the child nodes computed during branching, given the index of the stage in which the branched variable lies, the index of this variable inside the stage and the relaxed solution at the parent node.

6.6.3. Providing a guess for the incumbent

Internally, the mixed-integer branch-and-bound computes an integer feasible solution by rounding. Moreover, since version 1.9.0, users are allowed to provide an initial guess for the incumbent. At code-generation, the following options need to be set:

  • minlp.int_guess, which tells whether an integer feasible guess is provided by the user (value 1). Its default value is 0.

  • minlp.int_guess_stage_vars, which specifies the indices of the integer variables that are user-initialized within one stage (MATLAB based indexing). If minlp.int_guess = 1, a parameter int_guess needs to be set at every stage. An example can be found in Mixed-integer nonlinear solver: F8 Crusader aircraft.

Another important related option is minlp.round_root. If set to 1, the solution of the root relaxation is rounded and set as incumbent if feasible. Its default value is 1. The mixed-integer solver behaviour differs depending on the combinations of options. The different behaviours are listed below.

  • If minlp.int_guess = 0 and minlp.round_root = 1, then the solution of the root relaxation is taken as incumbent (if feasible). This is the default behaviour.

  • If minlp.int_guess = 1 and minlp.round_root = 0, then the incumbent guess provided by the user is tested after the root solve. If feasible, it is taken as incumbent. Note that the user is allowed to provide guesses for a few integers per stage only. In this case, the other integer variables are rounded to the closest integer.

  • If minlp.int_guess = 1 and minlp.round_root = 1, then the rounded solution of the root relaxation and the user guess are compared. The best integer feasible solution in terms of primal objective is then taken as incumbent.
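
As a minimal sketch, and assuming these options are set as fields of codeoptions.minlp, providing a user guess for the incumbent could look as follows in Python (the selected stage variables are hypothetical and use 1-based indexing, as noted above):

codeoptions.minlp.int_guess = 1
codeoptions.minlp.int_guess_stage_vars = [4, 5, 6]  # integer variables initialized by the user
codeoptions.minlp.round_root = 0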

This feature is illustrated in Example Mixed-integer nonlinear solver: F8 Crusader aircraft. The ability of providing an integer guess for the incumbent is a key feature to run the mixed-integer solver in a receding horizon setting.

6.7. Sequential quadratic programming algorithm

The FORCES PRO real-time sequential quadratic programming (SQP) algorithm allows one to solve problems of the type specified in the section High-level Interface. The algorithm iteratively solves convex quadratic approximations of the (generally non-convex) problem. Moreover, the solution is stored internally in the solver and used as an initial guess for the next time the solver is called. This and other features enable the solver to achieve fast solve times (compared to the interior point method), making it particularly suitable for MPC applications where the sampling time or the computational power of the hardware is small.

Important

The SQP algorithm currently only supports affine inequalities. This means that all the inequality functions \(h_k, k=1,...,N\) from (6.1) must be affine functions of the variable \(z_k\) (not necessarily of \(p_k\)).

6.7.1. How to generate a SQP solver

To generate a FORCES PRO sequential quadratic programming real-time iteration solver one sets

codeoptions.solvemethod = 'SQP_NLP';
codeoptions.solvemethod = "SQP_NLP"

(see Generating a solver). In addition to the general code options specified in the previous section, here are some of the important code options one can use to customize the generated SQP solver.

By default the FORCES PRO SQP solver solves a single convex quadratic approximation. This achieves a fast solve time compared to a “full” sequential quadratic programming solver (which solves quadratic approximations to the nonlinear program until a KKT point is reached). The user might prefer to manually allow the SQP solver to solve multiple quadratic approximations: by setting

codeoptions.sqp_nlp.maxqps = k;
codeoptions.sqp_nlp.maxqps = k

for a positive integer k, one allows the solver to solve k quadratic approximations at every call to the solver. In general, the more quadratic approximations are solved, the better the control performance. The tradeoff is that the solve time also increases.

6.7.2. The hessian approximation and line search settings

The SQP code generation currently supports two different types of hessian approximations. A good choice of hessian approximation can often reduce the number of iterations required by the solver and thereby its solve time. The default option for a SQP solver is the BFGS hessian approximation. When the objective function of the optimization problem is a least squares cost it is often beneficial to use the Gauss-Newton hessian approximation instead. To enable this option one proceeds as specified in the sections Hessian approximation and Gauss-Newton options. When the Gauss-Newton hessian approximation is chosen one can also disable the internal line search by setting

codeoptions.sqp_nlp.use_line_search = 0;
options.sqp_nlp.use_line_search = False

A line search is required to ensure global convergence of an SQP method, but is not needed in a real-time context when a Gauss-Newton hessian approximation is used.

Note

One cannot disable the line search when using the BFGS hessian approximation.

6.7.3. Controlling the initial guess at run-time

Upon the first call to the generated FORCES PRO SQP solver one needs to specify a primal initial guess (problem.x0, see also Initial guess). The default behaviour of the FORCES PRO SQP solver is to use the solution from the previous call as initial guess in every subsequent call to it. However, one can also manually set an initial guess in subsequent calls to the solver. Whether a manual initial guess (provided through problem.x0) or the internally stored solution from the previous call is used can be controlled by the field problem.reinitialize of the problem struct which is passed as an argument to the solver when it is called.

The reinitialize field can take two values: 0 or 1. For the default usage of the solver

problem.reinitialize = 0;
problem["reinitialize"] = False

should be used. This choice results in the solver using the solution from the previous call as initial guess. This feature is useful when running the real-time iteration scheme because it ensures that the initial guess is close to the optimal solution. If you want to specify an initial guess at run-time, you will need to set

problem.reinitialize = 1;
problem["reinitialize"] = True

So in summary: The first time the solver is called the initial guess the solver will use has to be provided by problem.x0. In all subsequent calls the solver will only make use of problem.x0 as its initial guess if problem.reinitialize = 1.

6.7.4. Additional code options specific to the SQP-RTI solver

In addition to the above codeoptions, the following options are specific to the SQP algorithm. Each of these options can be supplied when generating a solver as a field of codeoptions.sqp_nlp (e.g. codeoptions.sqp_nlp.TolStat).

Table 6.3 SQP specific codeoptions

option       Possible values  Default value        Description
TolStat      positive         \(10^{-6}\)          Sets the stationarity tolerance required for terminating the algorithm (the tolerance required to claim convergence to a KKT point).
TolEq        positive         \(10^{-6}\)          Sets the feasibility tolerance required for terminating the algorithm (the tolerance required to claim convergence to a feasible point).
reg_hessian  positive         \(5\cdot 10^{-9}\)   Sets the level of regularization of the hessian approximation (often increasing this parameter can help if the SQP solver returns exitflag -8 for your problem).
qpinit       \(0\) or \(1\)   \(0\)                Sets the initialization strategy for the internal QP solver: \(0\) = cold start and \(1\) = centered start. See also Solver Initialization (note however, that for the SQP solver qpinit=2 is not possible).

In addition to these options, one can also specify the maximum number of iterations the internal QP solver is allowed to run in order to solve the quadratic approximation. If one wishes the QP solver to use no more than k iterations to solve a problem, one sets

codeoptions.maxit = k;
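
Putting the options from this section together, a minimal Python sketch could look as follows (the tolerance, regularization and qpinit values shown are the defaults from Table 6.3; maxqps and maxit are hypothetical choices):

import forcespro

codeoptions = forcespro.CodeOptions("FORCESNLPsolver")
codeoptions.solvemethod = "SQP_NLP"
codeoptions.sqp_nlp.maxqps = 1               # number of QPs solved per solver call
codeoptions.sqp_nlp.use_line_search = False  # only allowed with the Gauss-Newton hessian approximation
codeoptions.sqp_nlp.TolStat = 1e-6
codeoptions.sqp_nlp.TolEq = 1e-6
codeoptions.sqp_nlp.reg_hessian = 5e-9
codeoptions.sqp_nlp.qpinit = 0
codeoptions.maxit = 50                       # iteration limit of the internal QP solver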

6.8. Differences between the MATLAB and the Python client

The Python NLP interface is largely similar to the MATLAB interface, but does come with some language- and implementation-specific differences.

  • All indices in the problem formulation are expected to be 0-based in Python, as is usual in this language. This does not include the indices of the generated solver, however, where outputs are named x01, x02, … as in MATLAB. Thus, the problem formulation before generation requires 0-based indices, whereas the returned solver from the server uses 1-based indices. This also does not apply to the low-level Python interface, where indices are 1-based even in the model formulation.

  • In the Python client, different model objects must be used when using external functions or symbolic expressions, namely nlp.ExternalFunctionModel() and nlp.SymbolicModel(). Furthermore, if the high-level interface is to be used for convex problems, this is only possible using the nlp.ConvexSymbolicModel(). This is different from the MATLAB client, where the FORCES_NLP function accepts problems of any kind and switches to the appropriate solver automatically.

  • When using the Python client with a nlp.SymbolicModel(), the C code generated for symbolic expressions is currently not entirely identical to the code generated by MATLAB. While the actual expression evaluation code generated by CasADi is the same, the structure of the files varies. Specifically, the MATLAB client creates individual C files for each problem stage with distinct symbolic expressions (leading to varying file names when changing the problem horizon) whereas all functions are gathered in one file in the Python client. Yet, the Python client does add one additional file for the FORCES-PRO-CasADi glue code, which is not present when using the MATLAB client. Lastly, function names of the evaluation functions differ.

    If you want to get the same code for MATLAB and Python, you must generate the CasADi C code from one of both clients and then supply this code as an external function in the other client.

6.9. Examples