Ten years ago, we started to develop Optimization.Framework, which we still use in our software projects – nowadays as OPTANO Modeling. Here, we would like to describe the motivation behind the development of Optimization.Framework, i.e. OPTANO Modeling [1], and provide some background information on how it all came about.

The starting point was the topic of so-called "sparse matrices". Imagine a starlit sky: a few points in a great deal of empty space. The edges of a network flow model would be the constellations in this image.

The model formulation with sparsely populated matrices is explained below with the aid of the languages MPL (from Maximal Software [2]), Mosel (by FICO [3], formerly Fair Isaac) and OPTANO Modeling [1].

Modeling language and sparse matrices

Let's take a look at the problem first: Sometimes we are called in to consult when models work well with small test instances but do not scale well to large instances (even without errors, a polynomial runtime isn't necessarily considered good "scaling"…). The reason for poor scaling is often that the sparsity of the matrix has not been properly exploited.

Here is one example:

A variable corresponds to the amount to be transported from s to z, but transport is not allowed on this connection. In this case, it is better not to generate the variable at all rather than fixing it to 0.
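To make the effect concrete, here is a small, language-agnostic sketch in Python (all names and data are illustrative, not taken from any of the modeling languages discussed here): a dense formulation creates one variable per (s, z) pair and fixes most of them to 0, while a sparse formulation creates variables only for the tuples on which transport is actually allowed.

```python
# Illustrative sketch: dense vs. sparse variable generation.
# All names and data are hypothetical.

sources = ["s1", "s2", "s3"]
sinks = ["z1", "z2", "z3", "z4"]

# In a real model this set would come from a database query.
valid_tuples = {("s1", "z1"), ("s1", "z2"), ("s3", "z4")}

# Dense approach: one variable per (s, z) pair, most later fixed to 0.
dense_vars = {(s, z): 0.0 for s in sources for z in sinks}

# Sparse approach: variables only where transport is allowed.
sparse_vars = {(s, z): 0.0 for (s, z) in valid_tuples}

print(len(dense_vars))   # 12
print(len(sparse_vars))  # 3
```

On real network models the gap is far larger than 12 versus 3: the dense count grows with the product of the index sets, while the sparse count grows only with the number of actual connections.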

Although such fixed variables are immediately removed by all modern solvers, generating them still costs a lot of working memory and runtime during matrix generation. The following examples show how to avoid generating variables that are not required in different modeling languages.

All examples are based on the assumption that there is a set of valid index tuples (S_Z in the code below).

MPL-Code (excerpt):

In MPL, a set of the tuples that actually occur is first created and loaded from the database, and the variables are then defined over this set. The dependencies always constrain the indices relative to those already bound: "Select z so that the tuple (s,z) appears in the sparse index – whereby s has already been selected". An index tree that is built up incrementally is therefore very useful here.

```
S_Z[S,Z] := DATABASE("Sparseindex_S_Z")

PARAMETER
  myParameter[S, Z IN S_Z] := DATABASE("Data_V_SZ")

VARIABLE
  myVariable[S, Z IN S_Z]
```

MOSEL-Code (excerpt):

In Mosel, all the indices are enumerated in a forall loop and then checked by means of exists (or another logical expression) to see whether the variable should actually be created. Despite the valuable help provided by Mosel support on the project at the time (for which we would like to express our gratitude), we could not determine to what extent every index tuple was really checked for existence.

```
declarations
  mySparseIndex: dynamic array(s,z) of mpvar
end-declarations

SQLexecute("SELECT * from Data_V_SZ", mySparseIndex)

forall (s_ in S, z_ in Z | exists(mySparseIndex(s_, z_)))
  create(myVariable(s_, z_))
```

OPTANO Modeling-Code (excerpt):

Normally, a system of constraints is used to create the variables implicitly. This example, however, makes clear how the object links can be exploited:

`Model.AddVariables(AllS.SelectMany(s => s.TransportsToZ.Select(z => myVariables[s, z])));`
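The same idea can be sketched in Python (illustrative names, not OPTANO code): each source object knows which sinks it can reach, so enumerating the object links yields exactly the valid variable index pairs – no existence check over the full Cartesian product is needed.

```python
# Sketch of "exploiting the object links". All names are hypothetical.

class Source:
    """A source node that knows its reachable sinks."""
    def __init__(self, name, transports_to_z):
        self.name = name
        self.transports_to_z = transports_to_z

all_s = [
    Source("s1", ["z1", "z2"]),
    Source("s2", []),           # no outgoing transports
    Source("s3", ["z4"]),
]

# Mirrors AllS.SelectMany(s => s.TransportsToZ.Select(z => ...)):
# only the pairs that actually exist in the object model are enumerated.
variables = [(s.name, z) for s in all_s for z in s.transports_to_z]
print(variables)  # [('s1', 'z1'), ('s1', 'z2'), ('s3', 'z4')]
```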

## History

### Version one

The first idea for Optimization.Framework (back then it was the Model.cs class) was conceived at the DS&OR Lab at Paderborn University [6] in 2006. There, we conducted research into the mathematical optimization of material flows. The implementation went back to Multi-Commodity Network Flows with Resource Constraints, which have the unpleasant property of generating very, very sparsely populated matrices. We tested different modeling languages and discovered that MPL had by far the fastest matrix generator. I remember having a long discussion about it with Bjarne Kristjansson (was it at EURO 2007 in Prague or IFORS 2008 in Sandton?), and he told me how much speed tuning had been put into the MPL code. We were able to support him and Sandip Pindoria by conducting tests on MPL versions 4.2a to 4.2k. During our tests there were repeated errors (e.g. that OPTIMAX wasn't thread-safe), which both of them fixed. Those were the wild days back then…
In spite of all the tuning, one concern remained: even with MPL, the matrices took hours, sometimes days, to set up and consumed gigabytes of memory – and in the end the solver solved the model in a few minutes (a good thing the solving didn't take days as well). This ratio worried us. We could only pass our knowledge of the relationships within the models (which supplier delivers a product, where the product should go, which nodes can reach one another, etc.) into MPL via database queries. Yet the server already held an object model of all this information, including all the object links, in memory.

Later, Dr Tim Schöneberg (today at Volkswagen) and Dr Kostja Siefen (today at Gurobi) joined the team. We began to write our own matrix generator in C# and .NET: constraints and variables could be added to the model, and these were finally transmitted to the solver as a matrix. This reduced the generation time to just minutes and the memory requirement to a couple of hundred megabytes. The most essential difference was that the code setting up the model in the new class could use the knowledge of the content-related links of the network. By taking a procedural approach, it was possible to touch every object just once and gather all the relevant information. This saved us a lot of search time and memory for index dictionaries (which MPL probably stores internally).

`Model.AddConstraint(new Constraint(0, binaryVariable1 + binaryVariable2 + binaryVariable3, 1));`

This is an expression for a sum of binary variables whose lower bound is 0 and whose upper bound is 1. The object structures were relatively simple; the code style was modeled on ADO.net [4]. In addition, there were overloaded operators such as >=, ==, <=, etc., with which constraints could be formulated.
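The mechanism behind such expressions can be sketched with operator overloading. Here is a minimal, hypothetical Python analogue (the original implementation was C#; the class and member names below are purely illustrative): adding two variables builds an expression object rather than a number, and the constraint stores the expression together with its bounds.

```python
# Minimal sketch of overloaded constraint expressions.
# Hypothetical names, not the original C# implementation.

class Expression:
    """A linear expression, here simplified to a list of variable names."""
    def __init__(self, terms):
        self.terms = terms

    def __add__(self, other):
        # var1 + var2 does not compute a value; it builds a larger expression.
        return Expression(self.terms + other.terms)

class Variable(Expression):
    def __init__(self, name):
        super().__init__([name])

class Constraint:
    """lower <= expression <= upper"""
    def __init__(self, lower, expression, upper):
        self.lower, self.expression, self.upper = lower, expression, upper

b1, b2, b3 = Variable("b1"), Variable("b2"), Variable("b3")

# Mirrors: new Constraint(0, binaryVariable1 + binaryVariable2 + binaryVariable3, 1)
con = Constraint(0, b1 + b2 + b3, 1)
print(con.expression.terms)  # ['b1', 'b2', 'b3']
```

A full implementation would also overload comparison operators (`>=`, `==`, `<=`) to produce constraints directly, which is what the text above refers to.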

### Version Two

A short time later, Linq [5] arrived with Visual Studio 2008 and the .NET Framework 3.5, opening up new possibilities for expressing constraints in code. During this time, Lars Beckmann (formerly at Microsoft) came to the DS&OR Lab. He rewrote the code of that time and put it into a single library, which opened up many more possibilities for its use. Linq enabled us to express constraints of this type in code:
`mathModel.AddConstraint(Expression.Sum(multiModel.Origins.Select(orig => Transport[orig, dest, prod])) == dest.Demand[prod]);`

Because of this, the model code became readable. Optimization.Framework could more than offset its readability disadvantage compared with conventional modeling languages: there were good code editors, readable code and even type safety.
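A rough Python analogue of this Linq pattern (with purely illustrative names and data) shows the shape of such a constraint: the transport amounts from all origins to one destination are summed and compared with the demand.

```python
# Hypothetical analogue of the Linq-style demand constraint.
# Names and numbers are illustrative, not OPTANO API.

origins = ["o1", "o2"]
destinations = ["d1"]
products = ["p"]

demand = {("d1", "p"): 7.0}
transport = {("o1", "d1", "p"): 3.0, ("o2", "d1", "p"): 4.0}

# Mirrors: Expression.Sum(Origins.Select(orig => Transport[orig, dest, prod]))
#          == dest.Demand[prod]
for dest in destinations:
    for prod in products:
        total = sum(transport[(orig, dest, prod)] for orig in origins)
        satisfied = (total == demand[(dest, prod)])
        print(dest, prod, satisfied)  # d1 p True
```

In the real framework, of course, the `==` produces a constraint object handed to the solver rather than a boolean evaluated immediately.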

• Further solvers were connected: alongside CPLEX [7], XPRESS [3] and MOPS [8] in the early years, Gurobi [9], GLPK [10] and several more could later be addressed (the list can easily be extended…)
• Semantic information such as AND and OR for constraints can be entered and is translated by Optimization.Framework into the best possible formulation for the respective solver:
`AddOrConstraints(var1 == 1 | var2 == 2, "Either var1 equals 1 or var2 equals 2");`
• Many different approaches to memory management were evaluated.
• Features such as reading and writing LP and MPS files were added.