Following are more transcript highlights from our January 29/30 webinar with Ugo Feunekes of **Remsoft** and **Gurobi**, which can be watched on this **Gurobi webpage**.

**Ugo:** Some of the suggestions were things we could do to change our modeling system itself, such as precision, accuracy, and rounding. Other things are tough for us to implement because many of the problems we're seeing arise from the way users are modeling and formulating these constraints. One suggestion was that, if we're going to have soft constraints and measure the total constraint violation, it may make sense to normalize those constraints to make them easier for the solver to handle. The idea is, "Let's normalize all the output calculations that go into the constraints so they are of similar magnitude; then, when we apply the penalty function, everything is normalized and works equally well." This isn't so much something we have to change ourselves, but it has become one of the best practices we suggest to our customers when we see or review their models.
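Ugo's normalization suggestion can be sketched in plain Python. This is a minimal illustration, not Remsoft's actual implementation: the scaling rule (divide each soft-constraint row by its largest absolute entry) and the example numbers are assumptions.

```python
def normalize_row(coeffs, rhs):
    # Scale a soft constraint a.x <= b by its largest absolute entry so
    # that a violation of 1.0 means roughly the same thing in every row.
    scale = max(max(abs(c) for c in coeffs), abs(rhs)) or 1.0
    return [c / scale for c in coeffs], rhs / scale

# Two soft constraints on very different scales: a revenue target in
# dollars and an area limit in hectares (numbers are made up).
rev_coeffs, rev_rhs = normalize_row([120000.0, 95000.0], 1500000.0)
area_coeffs, area_rhs = normalize_row([1.2, 0.8], 14.0)
# After normalization both right-hand sides are 1.0, so one unit of
# violation carries a comparable penalty weight in either constraint.
```

With both rows normalized, a single penalty coefficient treats a violation of the revenue target and a violation of the area limit even-handedly, which is the point of the suggestion.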

**Irv:** There was one particular MIP model that had some variables that were implicitly binary due to a combination of constraints. Gurobi's presolve was not figuring out that those variables were binary with the model formulated that way. We modified the model to say, "Hey, these are binary variables." Specifying that they were binary helps the solver. We also added constraints that linked those binaries to other variables. With those additional linking constraints, Gurobi could create stronger cuts, could branch on those variables, and actually got a better solution faster. The net result was that, because the variables were now binary, Gurobi's presolve created a smaller problem, even though the modified problem had additional constraints. The solution times for this particular model decreased significantly, from hours to minutes. Again, it came down to two things: declaring variables that were implicitly binary as binary, and adding constraints linking them to other variables, which gave Gurobi more power to solve the problem faster. By understanding the problem and the model, we can modify the model itself to help the solver solve it faster.
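A linking constraint of the kind Irv describes can be sketched as a capacity (big-M style) bound: flow on a machine can be positive only when the binary that selects the machine is on. The variable names and the capacity value below are illustrative assumptions, not taken from the actual model.

```python
def linking_ok(x, y, capacity=40.0):
    # x is the binary "machine selected" variable; y is the flow it
    # carries.  The linking constraint y <= capacity * x forces y to 0
    # whenever x is 0, and ties the binary to the continuous decision,
    # which gives the solver something concrete to branch and cut on.
    return x in (0, 1) and 0.0 <= y <= capacity * x

assert linking_ok(1, 25.0)        # a selected machine may carry flow
assert not linking_ok(0, 5.0)     # flow without the machine is cut off
```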

**Ugo:** When Irv came to visit and work with us, we gave him the nasty problems first, the ones that were causing us the most trouble. The first one, shown on this slide, came from a large, integrated forestry company that has mills. The model had 2,700 different harvest units to harvest over 13 periods, which represented a year. Each could be harvested by one of eight different systems, which produced a mix of 30 different products. Then, for each of the products produced in each of those blocks, we had to deal with the transportation system to get them to one of 70 different mills. In addition, we had harvesting machines with the requirement that, if you're going to use them, you've got to use them at some level. This turned out to be a big part of the problem, in terms of the binary decision variables associated with these machines. The idea was, if any machine is going to work, then you've got to have at least n working, where n might be seven, depending on the number of machines the company had in the area. These variables appear all over the place. The model took hours, if not overnight, to get any reasonable solutions, when we got solutions at all. This was one of the first models. In the statistics I showed you earlier, those two red lines were from this model. I think it had about 4.5 million columns. It was quite large.
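The "use no machines, or at least n of them" requirement Ugo describes is typically written with an extra binary z ("any machine is used"): v_i <= z for every machine i, and sum(v) >= n * z. A brute-force check on a toy instance (5 machines, n = 3; sizes are illustrative, not from the actual model) confirms that the only feasible usage counts are zero or at least n:

```python
from itertools import product

def threshold_ok(v, z, n):
    # v: tuple of 0/1 decisions "machine i is used";
    # z: 0/1 indicator "any machine is used".
    # Constraints: v_i <= z for all i, and sum(v) >= n * z.
    return all(vi <= z for vi in v) and sum(v) >= n * z

n, machines = 3, 5
feasible_counts = set()
for z in (0, 1):
    for v in product((0, 1), repeat=machines):
        if threshold_ok(v, z, n):
            feasible_counts.add(sum(v))

# Every feasible pattern uses either zero machines or at least n of them:
# feasible_counts == {0, 3, 4, 5}
```

Enumerating all 2^5 patterns for each value of z is only viable on a toy instance, of course; in the real model these conditions appear as linear constraints the solver handles directly.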

**Irv:** I took a look at this model. Ugo and I spent some time trying to understand the business problem as Ugo stated it and what it was really trying to model. You see at the top of the slide, the nature of the constraints was basically that, if a machine was being used, you had to use at least n machines in total. The variables that were representing whether a machine was being used in a period were appearing elsewhere in the model. As I learned to understand the model better, I did an experiment that said, “Hey. What if I make these V variables no longer integer?”

Then Gurobi was able to solve that very large problem, which had over a million variables and hundreds of thousands of constraints. This gave us a hint that these were the variables making the problem difficult to solve. Once we understood that, we were able to come up with an alternate formulation that introduced some new variables and constraints interacting with those variables, which would generate stronger cuts for Gurobi. We ended up reducing the solution times by introducing these extra variables and constraints, which represented the underlying problem better. As a result, Gurobi could generate stronger cuts and find solutions faster. I think when we started with that model, we would run it overnight. I would come in the next morning, and Ugo would show me that we had barely gotten a reasonable solution in about 12 hours of run time. I think with this alternate formulation, we got the whole solution within a few hours. Is that correct, Ugo?

**Ugo:** I think so. This was also the model that forced us to do the hierarchical approach, which is a goal formulation. When the fellow would run the model at night, if the trial came back infeasible in the middle of the night, he'd lose the next morning's work. With the hierarchical goal structure, at least he'd get an answer by the morning.
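One way to read the goal formulation Ugo mentions: hard constraints are elasticized, so a run that would otherwise come back infeasible instead returns an answer carrying a measured, penalized violation. A minimal sketch of the idea, with the function name and senses chosen for illustration:

```python
def elastic_violation(lhs_value, rhs, sense="<="):
    # Goal-programming style: rather than letting a violated constraint
    # make the whole overnight run infeasible, measure how far the
    # left-hand side is from the target and penalize that amount in
    # the objective.
    if sense == "<=":
        return max(0.0, lhs_value - rhs)
    return max(0.0, rhs - lhs_value)   # sense ">="

# A target of 10 units overshot by 2: the run still returns an answer,
# carrying a violation of 2.0 instead of failing outright.
overshoot = elastic_violation(12.0, 10.0)
```

In a hierarchical variant, violations of higher-priority goals are minimized first, then lower-priority goals are optimized subject to holding those violations fixed.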

**Irv:** This is a good recommendation. If you have a problem that's hard to solve, while there are a lot of great tools available in Gurobi to help you tune performance, there are other things you can do by really studying the model and understanding which parts are making it harder to solve, by simply asking, "Hey, what if this variable is no longer integer? Does the problem become more tractable?" It relates to Ugo's comment earlier that sometimes removing a constraint can actually make a problem harder. In this case, removing the constraint that a variable had to be integer made the problem easier. It's a useful technique for understanding where the performance problems are.
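Irv's diagnostic, relaxing one group of integer variables at a time to see which group drives the run time, can be sketched solver-independently. Here `solve_time` stands in for a hypothetical callback that rebuilds the model with the named group made continuous and reports how long it took to solve; the group names and timings are made up for illustration.

```python
def rank_hard_groups(groups, solve_time):
    # solve_time(g) -> seconds to solve with group g relaxed to
    # continuous; solve_time(None) is the original all-integer model.
    # The groups whose relaxation speeds things up the most are the
    # likely source of difficulty.
    baseline = solve_time(None)
    speedup = {g: baseline / solve_time(g) for g in groups}
    return sorted(groups, key=lambda g: speedup[g], reverse=True)

# Hypothetical timings: relaxing the machine binaries "V" gives a 20x
# win, while relaxing the harvest variables barely helps.
times = {None: 600.0, "V": 30.0, "harvest": 550.0}
ranking = rank_hard_groups(["V", "harvest"], times.get)
# ranking[0] == "V": the machine variables are what make the model hard.
```

Once the hard group is identified, the effort of reformulating (stronger linking constraints, tighter bounds) can be focused where it pays off.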

**Ugo:** From our perspective, we made a number of programming changes, certainly to address some of the rounding issues we talked about earlier. We now allow people to very easily round off numbers where it makes sense; rounding makes sense in some places and not in others. Another feature we've introduced is the notion of units within our modeling framework. One of the recommendations from Princeton Consultants was to use units. If you put units on your output calculations, it's easier to scale things: you can take dollar values and divide them by a thousand throughout the model in one go.
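The unit feature Ugo describes can be pictured as a single conversion pass over a model's coefficients. The unit names and divisors below are illustrative assumptions, not Remsoft's actual interface:

```python
# Divisors for converting a dollar-denominated coefficient into the
# chosen working unit (values are illustrative).
UNIT_DIVISOR = {"dollars": 1.0, "thousands": 1_000.0, "millions": 1_000_000.0}

def rescale(coeffs, unit):
    # Convert every coefficient in one pass, instead of editing each
    # output calculation in the model by hand.
    d = UNIT_DIVISOR[unit]
    return [c / d for c in coeffs]

# Revenue coefficients in raw dollars, rescaled to thousands:
scaled = rescale([120000.0, 95000.0, 4250.0], "thousands")
# -> [120.0, 95.0, 4.25]
```

Keeping coefficients in comparable units shrinks the spread of magnitudes in the matrix, which is exactly what helps the solver numerically.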

We've increased the number of our debugging tools to help manipulate and analyze matrices. We're often told we have bad numbers in our matrix, and it's nice to be able to find those easily, so that we can address the problem if it's ours or, if it's a client's problem, tell them how to fix it. We apply best practices both within our company and when we work with clients' models. We have audit programs: people can submit their models to us, and we'll make recommendations on how to improve them, both for speed and efficiency and to help with debugging. That is our modeling audit system, to which we've added many of the recommendations from Princeton Consultants.
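A simple version of the "find the bad numbers" check is to scan the nonzero coefficients and report their magnitude range: a very large ratio between the biggest and smallest entries is a common sign of scaling trouble for the solver. The threshold mentioned in the comment and the example matrix are illustrative, not Remsoft's actual tooling.

```python
def coefficient_range(matrix):
    # Return (smallest, largest, ratio) over the absolute values of the
    # nonzero coefficients.  A ratio much above ~1e9 is a frequent
    # warning sign that the matrix is badly scaled.
    vals = [abs(c) for row in matrix for c in row if c != 0.0]
    lo, hi = min(vals), max(vals)
    return lo, hi, hi / lo

# A tiny matrix with one suspiciously small coefficient:
matrix = [[1e-7, 2.0], [0.0, 3.5e6]]
lo, hi, ratio = coefficient_range(matrix)
# ratio is about 3.5e13 -- far beyond a healthy range, so the 1e-7
# entry is worth investigating (is it numerical noise that should be 0?).
```

Reporting which rows and columns contain the extreme entries, rather than just the range, is the natural next step for pointing a client at the exact calculation to fix.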