Simulation Is Risky Business, but We Can Quantify the Error


Computer simulation is the most general-purpose tool an industrial engineer (IE) has for quantifying risk due to uncertainty.
In a simulation, the sources of uncertainty are represented by probability distributions, and these “input distributions” are often tuned by fitting them to historical data. Although any input distribution that is fit to a sample of data is an incomplete representation of reality, statistical theory is adept at measuring this error.
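As a small illustration of that last point, the sketch below fits an exponential input distribution to a hypothetical sample of service times and uses the classical standard error of the mean to quantify the fitting error. The data, sample size, and true mean are all invented for the example.

```python
import random
import statistics

random.seed(42)

# Hypothetical historical data: 50 observed service times (minutes),
# drawn here from an exponential with true mean 3.0 for illustration.
data = [random.expovariate(1 / 3.0) for _ in range(50)]

# Fit an exponential input distribution by maximum likelihood:
# for the exponential, the MLE of the mean is just the sample mean.
fitted_mean = statistics.fmean(data)

# Classical statistics measures the fitting error directly:
# the standard error of the sample mean is s / sqrt(n).
std_err = statistics.stdev(data) / len(data) ** 0.5

print(f"fitted mean service time: {fitted_mean:.2f} +/- {std_err:.2f}")
```

With more historical data, `std_err` shrinks like 1/sqrt(n), which is exactly the sense in which the fitting error is well understood.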

Unfortunately, what matters is not the error in the input distributions themselves, but the error those distributions induce in the simulation-based performance estimates, and ultimately in the decisions that the simulation supports.

In “Quickly Assessing Contributions to Input Uncertainty,” doctoral student Eunhye Song and professor Barry L. Nelson from Northwestern University use bootstrapping to provide easy-to-automate methods that quantify the error due to using estimated input distributions, and they display this error via adjustments to the standard plus-or-minus confidence intervals that simulation users expect.
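The basic bootstrapping idea can be sketched in a few lines. This is a toy illustration of the general approach, not Song and Nelson's exact procedure: resample the historical input data, refit the input distribution, rerun the simulation, and compare the spread of the resulting estimates with the simulation-only confidence interval. The data, the five-job "simulation," and all constants are assumptions made for the example.

```python
import random
import statistics

random.seed(7)

# Hypothetical historical data used to fit the input model
# (e.g., 30 observed service times with true mean 2.0).
real_data = [random.expovariate(1 / 2.0) for _ in range(30)]

def simulate(mean_service, reps=200):
    """Toy simulation: total processing time of 5 jobs in series."""
    return [sum(random.expovariate(1 / mean_service) for _ in range(5))
            for _ in range(reps)]

# Nominal analysis: fit once, run the simulation, form the usual
# plus-or-minus interval, which reflects only simulation noise.
fitted = statistics.fmean(real_data)
nominal = simulate(fitted)
nominal_se = statistics.stdev(nominal) / len(nominal) ** 0.5

# Bootstrap the *input data*, refit, and rerun the simulation:
# the spread of these estimates also captures input uncertainty.
boot_estimates = []
for _ in range(100):
    resample = random.choices(real_data, k=len(real_data))
    boot_estimates.append(statistics.fmean(simulate(statistics.fmean(resample))))
total_se = statistics.stdev(boot_estimates)

print(f"simulation-only half-width:  {1.96 * nominal_se:.2f}")
print(f"input + simulation half-width: {1.96 * total_se:.2f}")
```

The second half-width is noticeably larger than the first, which is the adjustment the article describes: the familiar confidence interval is widened to acknowledge that the input distributions were estimated from data.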

Song and Nelson also derive the relative contribution of each input model to the overall error, such as the contribution of each time-to-failure distribution to overall flow-time uncertainty. This contribution information indicates where additional data collection would be most useful for reducing uncertainty. Simulation software company Simio LLC collaborated with Song and Nelson through the National Science Foundation’s Grant Opportunities for Academic Liaison with Industry program, and the methodology is now a standard tool in Simio, allowing users to quantify input risk as a routine step in their output analysis.
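One simple way to get at per-input contributions, again as a toy sketch rather than the paper's actual decomposition, is to bootstrap one input model's data at a time while holding the other at its fitted value, and compare the resulting output variances. All names, sample sizes, and the two-machine "simulation" below are invented for illustration; note that input B, with fewer observations, tends to contribute more uncertainty.

```python
import random
import statistics

random.seed(11)

# Hypothetical time-to-failure data (hours) for two input models.
data_a = [random.expovariate(1 / 100.0) for _ in range(40)]
data_b = [random.expovariate(1 / 60.0) for _ in range(15)]  # fewer observations

def simulate(mean_a, mean_b, reps=200):
    """Toy output: average time until the first of two machines fails."""
    return statistics.fmean(
        min(random.expovariate(1 / mean_a), random.expovariate(1 / mean_b))
        for _ in range(reps)
    )

def input_variance(data, other_fixed, which):
    """Bootstrap one input's data while holding the other at its fit."""
    estimates = []
    for _ in range(100):
        m = statistics.fmean(random.choices(data, k=len(data)))
        if which == "a":
            estimates.append(simulate(m, other_fixed))
        else:
            estimates.append(simulate(other_fixed, m))
    return statistics.variance(estimates)

fit_a = statistics.fmean(data_a)
fit_b = statistics.fmean(data_b)
var_a = input_variance(data_a, fit_b, "a")
var_b = input_variance(data_b, fit_a, "b")
share_a = var_a / (var_a + var_b)
print(f"input A's share of input-induced variance: {share_a:.0%}")
```

A report like this points data collection where it pays off most: the input model with the larger variance share is the one worth observing longer.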