I was recently interviewed on “Tech Leaders Unplugged,” a podcast for businesspeople that covers “the latest technology trends and innovations from AI to the blockchain, fintech to health tech.” Watch or listen to the 30-minute interview here. Here are some lightly edited excerpts.
Wade Erickson: A lot of our listeners come from the tech space. We’ve seen quite a frenzy around AI. Can you talk a little bit about the process that’s involved when you’re presented with a project—how optimization goes into the design thinking and the evaluation of the current workflows in the application? Where do you insert your background and intellect into that process? How does that show up in the software development teams that insert the new algorithms and the new models and methods to optimize?
Irv: What we first look for are business processes where people are making decisions in spreadsheets or on whiteboards about allocating resources. Often, in the scheduling area, they’re figuring out a schedule in a spreadsheet and trying to understand how they can jigger things around, and it’s all done manually. We look at how they are making decisions today and what data they are using to inform those decisions. We ask, “Is the data available?” Sometimes the data is in people’s heads: “Oh, we’ve always done it this way—I kind of know that I need to have this customer serviced by this engineer.”
After we look at the decision problem, we define and find the data that will drive the decisions. We create documentation that says, essentially, “Here’s all the data: it’s these tables and how they are interrelated.”
Then we write a mathematical model—the secret sauce of optimization—which requires talent and skill. It has traditionally been taught in Operations Research programs, though now you’ll find optimization courses in many Business Analytics and Data Science programs. Some formal education is needed—you can read books, but for the really hard stuff there is an art and a science to writing a good math model. Basically, you are saying, “I have all this data and I’m going to mathematically define my decisions as variables.” Then you write out constraints. How the decisions in question are constrained is all part of understanding the business process. Let’s say you are allocating resources. If there are only 10 people to schedule, that’s a limit, but what other types of limits might exist?
You look at what is typically called the objective function—the key part of optimizing is to define it appropriately. You may be minimizing costs or minimizing the amount of time it takes to get a job done or you might be maximizing profits. Quantify KPIs so that if decisions change, the KPIs change accordingly. In optimization, we define these KPIs mathematically.
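The three modeling pieces described above, decision variables, constraints, and a quantified objective, can be sketched in plain Python. This is a hypothetical toy (the names, costs, and capacity are invented, and real engagements use algebraic modeling tools rather than hand-written functions):

```python
# Hypothetical data: cost of sending each engineer to each customer.
cost = {
    ("Ann", "Acme"): 4, ("Ann", "Bray"): 2,
    ("Bob", "Acme"): 3, ("Bob", "Bray"): 5,
}
engineers = ["Ann", "Bob"]
customers = ["Acme", "Bray"]
MAX_JOBS_PER_ENGINEER = 1  # a capacity limit, like "only 10 people to schedule"

def is_feasible(assign):
    """Constraints: every customer is served, and no engineer is over capacity."""
    served_all = set(assign) == set(customers)
    loads = {e: sum(1 for c in assign if assign[c] == e) for e in engineers}
    return served_all and all(v <= MAX_JOBS_PER_ENGINEER for v in loads.values())

def total_cost(assign):
    """Objective function: a KPI that moves whenever the decisions change."""
    return sum(cost[(e, c)] for c, e in assign.items())

plan = {"Acme": "Bob", "Bray": "Ann"}  # one candidate decision
print(is_feasible(plan), total_cost(plan))  # True 5
```

Changing the decision (the `plan` dict) changes the KPI, which is exactly the property the quantified objective is meant to have.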
Once that documentation is done, we get to the coding. There are a good number of tools—typically commercial rather than open source—that allow you to represent a mathematical model, marry it with data, and then, as I like to describe it, press the big square-root button on a calculator. You’ve thrown a bunch of numbers in, and you’ve gotten some numbers out that represent the optimal decision—meaning it is mathematically provable that, given the way you’ve set up your problem, there is no better solution or way of making those decisions as measured by the objective function and KPIs.
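On a toy instance, "provably optimal" can be illustrated by brute force: enumerate every possible decision and keep the best feasible one. This is only a sketch with invented data; commercial solvers prove optimality through mathematics rather than enumeration, and they scale far beyond what exhaustive search can handle:

```python
from itertools import product

# Hypothetical data: cost of sending each engineer to each customer.
cost = {("Ann", "Acme"): 4, ("Ann", "Bray"): 2,
        ("Bob", "Acme"): 3, ("Bob", "Bray"): 5}
engineers, customers = ["Ann", "Bob"], ["Acme", "Bray"]

def feasible(assign):
    """Capacity constraint: each engineer handles at most one customer."""
    loads = {}
    for e in assign.values():
        loads[e] = loads.get(e, 0) + 1
    return all(v <= 1 for v in loads.values())

# "Press the big square-root button": check every assignment, keep the best.
best = None
for choice in product(engineers, repeat=len(customers)):
    assign = dict(zip(customers, choice))
    if not feasible(assign):
        continue
    obj = sum(cost[(e, c)] for c, e in assign.items())
    if best is None or obj < best[0]:
        best = (obj, assign)

print(best)  # (5, {'Acme': 'Bob', 'Bray': 'Ann'})
```

Because every alternative was examined, no better solution exists for this instance as measured by the objective, which is the toy-scale analogue of a solver's optimality proof.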
You take those numbers out and use them in some kind of application. Start by building an application with a user interface that lets the users understand the answer and see why it is making those decisions. Sometimes the users want to change the data. For example, at our firm we have a project that entails optimally scheduling a number of different tasks, and there is a constraint on how many tasks can be done by different people over time. Imagine it as building a big Gantt chart where you’re not exceeding the number of resources. The value of how many simultaneous tasks can be done is a user input. We have a screen that lets users edit that number. They have given us some default numbers, but they can override them—they can change those numbers and see the result: the new schedule that arises.
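A minimal sketch of that user-editable input, with a hypothetical packing routine standing in for the real optimization model: editing the max-parallel number changes the resulting Gantt-style schedule.

```python
def build_schedule(tasks, max_parallel):
    """Pack unit-length tasks into hourly slots, never exceeding the
    user-editable limit on simultaneous tasks (max_parallel)."""
    return [tasks[i:i + max_parallel] for i in range(0, len(tasks), max_parallel)]

tasks = ["T1", "T2", "T3", "T4", "T5"]
print(build_schedule(tasks, 2))  # 3 slots: [['T1', 'T2'], ['T3', 'T4'], ['T5']]
print(build_schedule(tasks, 3))  # user raises the limit -> 2 slots
```

In the real application the re-solve runs the full optimization model, but the user-facing loop is the same: override the default, re-run, and compare the new schedule against the old one.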
The path starts with understanding the business problem, seeing if it is amenable to optimization, and estimating what will improve if we optimize. Sometimes the improvement is lower costs, as in the case of the US Census Bureau; higher profits; or greater efficiency in terms of time. Sometimes the improvement is that you’ve automated a process that was taking people a week and can now be done in a couple of minutes. They are going to start working and thinking differently, and looking at the world differently, because they can make decisions faster.
To elucidate those benefits upfront, we go through the steps of defining the data, defining a model, implementation, and then building it into a software application that typically has a database to store the data needed for optimization, user interface, etc. Once people are comfortable with the user-interface type of application, we may subsequently put it in a black box where it is an automated decision process.
Wade: A lot of times AI is coming up with solutions that are not necessarily linear. There are so many different inputs that the mathematical model has to crunch. How do you validate and test for that?
Irv: It is a real challenge. One thing we do is have a team member—who knows nothing about optimization—write what we call a “solution validator,” which takes the same set of data inputs plus the solution and evaluates whether it satisfies all the constraints, and whether the objective value of the KPIs is exactly what the modeler was getting from the optimization. In this way we check for errors or misunderstandings about the requirements.
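A solution validator of this kind might look like the following sketch. The data layout and names are hypothetical; the point is that feasibility and the objective are re-derived straight from the data, independently of the model code:

```python
def validate(data, solution, reported_objective):
    """Independent check written without looking at the optimization model:
    re-derive feasibility and the objective directly from the data."""
    errors = []
    # Constraint: every customer in the data got an engineer.
    for customer in data["customers"]:
        if customer not in solution:
            errors.append(f"{customer} is unserved")
    # Constraint: no engineer exceeds the capacity in the data.
    for engineer in data["engineers"]:
        load = sum(1 for e in solution.values() if e == engineer)
        if load > data["capacity"]:
            errors.append(f"{engineer} over capacity ({load})")
    # Objective: recompute it and compare with what the optimizer reported.
    recomputed = sum(data["cost"][(e, c)] for c, e in solution.items())
    if recomputed != reported_objective:
        errors.append(f"objective mismatch: {recomputed} != {reported_objective}")
    return errors  # an empty list means the solution checks out

data = {
    "customers": ["Acme", "Bray"], "engineers": ["Ann", "Bob"], "capacity": 1,
    "cost": {("Ann", "Acme"): 4, ("Ann", "Bray"): 2,
             ("Bob", "Acme"): 3, ("Bob", "Bray"): 5},
}
print(validate(data, {"Acme": "Bob", "Bray": "Ann"}, 5))  # []
```

If the modeler and the validator disagree on feasibility or on the objective value, one of them has misunderstood the requirements, which is exactly the error this check is designed to surface.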
A second activity, which is more difficult, is making sure we are solving the right business problem and giving an answer that makes sense in the context of the business. This is typically done through multiple cycles with the experts, the people who were making the decisions before, and having them evaluate the quality of the solution. It is very difficult to conduct strictly automated testing as in traditional software development, because in custom optimization the values and the numbers can change. We build a sizable test harness of many different data sets, run them all, and make sure failures are not occurring, the answers make sense, and the answer coming out of the model equals what comes out of the solution validator. We create guardrails to prevent failures. In optimization, we often work with applications where the users are involved and their ability to visualize parts of the solution is essential.
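Such a harness can be sketched as a loop over stored data sets that cross-checks the optimizer against the independently written validator. Everything here is hypothetical stand-in code:

```python
def run_harness(datasets, solve, validate):
    """Run every stored data set through the optimizer, then confirm the
    independently written solution validator agrees; collect discrepancies."""
    failures = {}
    for name, data in datasets.items():
        solution, objective = solve(data)
        errors = validate(data, solution, objective)
        if errors:
            failures[name] = errors
    return failures

# Hypothetical stand-ins so the sketch runs end to end:
def fake_solve(data):
    return {"job": "Ann"}, data["cost"]          # (solution, objective)

def fake_validate(data, solution, objective):
    return [] if objective == data["cost"] else ["objective mismatch"]

datasets = {"small": {"cost": 7}, "large": {"cost": 30}}
print(run_harness(datasets, fake_solve, fake_validate))  # {} means every data set passed
```

A nonempty result pinpoints which data set produced a disagreement between the model and the validator, which is where the manual review with the business experts then focuses.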
If your tech team wants to explore how and where optimization should be part of your AI toolkit or if, more broadly, your organization seeks to identify and evaluate optimization opportunities, contact us to set up a call.