It’s Usually Not the Math: Risks in Implementing Optimization and AI Into Production

Wednesday, May 26, 2021

Following is a lightly edited excerpt of Steve’s presentation at the Joint Mathematics Meeting in January 2021, part of a special session, “Transformation Through Advanced Analytics.”

I am pleased to be speaking on the topic of The Princeton 20: the most common risk factors in implementing optimization and AI into production. As we discuss project risk, the spoiler alert is: it’s usually not the math.

At our firm, we view AI as a subset of the broader field of advanced analytics. AI is an effort to automate intellectual tasks normally performed by people. Optimization is a further subset of AI; it is essentially enhanced decision‑making driven by AI.

AI is less a magic pill and more a journey, which we can see in two parts. The first is discovery, where we get to an insight. The second is deployment, where we take the insight and get it out in the field. In our experience, when projects fail, it is almost always during deployment.

We systematized the risks to deployment to make the journey a little easier. There are 20 risk factors I would like to share: we call them The Princeton 20. The first 10 are environmental; the second 10 are technical.

Our default is to be called in when executives are looking to use AI or optimization to improve an underperforming business. That is very common for us. Of course, it is terrific when the client wants to bring an already thriving organization up to the next level to maintain or improve its competitive advantage. In a situation where the optimization is going to power a brand‑new business or brand‑new business model, there is risk because there is so much uncertainty around the business itself. Staff, customers, suppliers, and policies are in flux. Into that mix, our project is being hurled. There is excitement to be sure, and there is attendant risk. That is an example of an environmental risk factor.

We tend to see patterns when The Princeton 20 is applied. Let me give an example: we are working with a large healthcare company with many facilities. Scheduling is currently done manually, so there is very high potential value for optimization. A successful optimization solution will improve asset utilization, patient happiness, and staff happiness—a big win all around. The business executives really want it, and there are a lot of resources being applied. They have a very good IT organization. The business is focused on the project. Where is the risk? Because there is no optimization model in place, the idea of an objective function is vague. There is not a good numeric way of scoring a schedule. The business rules and decision data are not necessarily captured correctly, and the user experience is Excel. On the environmental side, the business has decided that employees out in the field perform scheduling whenever and however they prefer. How to evaluate a schedule falls on the business side, and there are controversies. For example, some employees might pack a schedule tightly to make it efficient, while others might preserve space so the schedule is more robust.
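To make "a numeric way of scoring a schedule" concrete, here is a minimal sketch of what such an objective function might look like. This is a toy illustration under assumptions of my own: the function name, weights, and an eight‑hour day are hypothetical, not taken from the client's system. It simply shows how the "pack it tight" versus "preserve space" debate can become an explicit, tunable trade‑off.

```python
# Toy illustration only: names, weights, and structure are hypothetical,
# not drawn from any client system. It shows how an objective function can
# turn the "tight vs. robust" scheduling debate into a tunable trade-off.

def score_schedule(visit_durations, day_length=8.0,
                   w_utilization=1.0, w_slack=0.5):
    """Score one staff member's daily schedule; higher is better.

    w_utilization rewards packing visits tightly (efficiency);
    w_slack rewards leaving buffer time (robustness to delays).
    """
    busy = sum(visit_durations)
    utilization = min(busy / day_length, 1.0)         # fraction of the day booked
    slack = max(day_length - busy, 0.0) / day_length  # fraction left as buffer
    return w_utilization * utilization + w_slack * slack

# Two schedulers, two philosophies:
tightly_packed = [2.0, 2.0, 2.0, 2.0]   # 8 busy hours, no buffer
with_buffer = [2.0, 2.0, 2.0]           # 6 busy hours, 2 hours of slack

print(score_schedule(tightly_packed))   # 1.0
print(score_schedule(with_buffer))      # 0.875; wins only if w_slack exceeds w_utilization
```

The point is not this particular formula; it is that until the business agrees on weights like these, "which schedule is better" remains a controversy rather than a computation.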

Question: Let's say you predicted a very complicated project that turned out to be easy, or the other way around. How has that worked out in validating the scoring?

Steve: We are not claiming that The Princeton 20 is pure truth, but it works, so we opted to “open‑source” it. We thought it would be helpful for practitioners to have a vocabulary and a way of sharing what goes wrong with projects. We encourage you to modify it as appropriate.

If your organization does not have a consistent way of describing and measuring project risk, so that experienced employees can help the less experienced see and overcome problems, then you are at a real deficit. A lot of our client organizations have invested on the technical side, making sure their staffs have technical literacy, the latest libraries, latest techniques, and latest languages—but they have not invested comparably on the “soft side” of things, as it were.

The Princeton 20 is an attempt to add rigor to project risk management. There are a lot of cases that are part of our company’s lore, and we are always telling the stories of what happened. The narrative of a project, and what we learned from it, is often more valuable than the score.

Read more about managing project risk through The Princeton 20 in this previous post.