Simulation and Optimization for Rapid Response After a Disaster
ISE Magazine, May 2021, Volume 53, Number 5
By Ghaith Rabadi

According to the Our World in Data website, natural disasters accounted for an average of 60,000 deaths per year over the past decade, but given the high year-to-year variability in the data, some devastating events pushed the number above 200,000 (“Natural Disasters,” Hannah Ritchie, Our World in Data). The numbers are staggeringly high when human-caused disasters, wars and pandemics are included. With the rate at which the climate is changing, no one expects the frequency and intensity of natural disasters to decline any time soon.

When disasters strike, it is common for the situation to become chaotic, partially due to the misuse of resources, inefficient use of volunteers and a lack of coordination among governments, emergency response units and humanitarian organizations. On more than one occasion, social media has become the new 911 (see Figure 1 from media outlets when Hurricane Harvey hit in 2017). The solution we present addresses this problem of coordinating and optimizing the assignment of resources and personnel to multiple modes of transportation across multiple organizations, with the objective of deploying relief and aid in the shortest possible time using simulation and optimization techniques.

Situational awareness, from weather forecasting models to social media posts and images, is infused into the framework to produce robust plans. Artificial intelligence (AI) deep learning, machine learning and natural language processing (NLP) techniques are implemented to contextualize and extract critical but trustworthy information from social media outlets during a crisis.

The story started in 2017 with a research project to solve a rapid military deployment problem sponsored by NATO Allied Command Transformation (ACT). It involved professors of engineering management and systems engineering at Old Dominion University (ODU), including me and Mamadou Seck. In this project, NATO presented a hypothetical but plausible scenario: transport and deploy large numbers of personnel, vehicles and containers rapidly and efficiently from multiple NATO countries to a region with unrest or hostility. Along with our students, we developed simulation and optimization algorithms that did just that by modeling and solving the problem as a multicommodity, multimode transportation problem.

In the same year, the ODU team responded to the first NATO Innovation Challenge, which posed a hypothetical scenario of a hurricane hitting the U.S. East Coast (not a purely hypothetical situation, given that three of the five costliest hurricanes in U.S. history hit that year). Our proposed solution capitalized on the rapid deployment research project conducted earlier that year to optimize the operations of humanitarian logistics. Operations research methods were used to improve the loading and routing of relief resources from multiple international and local organizations to the disaster region. Our ODU engineering management team won first place among more than 50 competing teams of universities and companies from all over the world (see related story).

For NATO, the proposal of a cloud-based platform to support disaster response across multiple organizations in different countries was a novel idea worth pursuing, especially since crisis management is one of NATO’s three strategic core tasks as outlined in its Strategic Concept of 2010. After several demonstrations of the research prototype, NATO ACT, through its Innovation Hub, decided in 2019 to support our idea of translating the research into a real solution that can support disaster response for NATO nations and their partners. NATO’s Crisis Management and Disaster Response Center of Excellence (CMDR CoE) was our collaborator throughout the project as an end user.

This event was a turning point in getting the research out of the lab and into the real world through an entrepreneurial startup that developed minimum viable product software for this problem in less than a year. I founded POLARes (Planning and Optimization Labs for Analytics, Research and Simulations) to develop iHELP (Intelligent Holistic Emergency Logistics Platform).

Cloud-based platform aids allocation of key resources

The core idea of the project was to develop a cloud-based engine that simulates and optimizes the allocation of resources and transporters to deliver relief and personnel to the disaster location in the shortest possible time. The next goal is to define the services and activities needed at the disaster site using local and delivered resources. Multiple organizations can choose to share resources and be part of the same disaster scenario. The platform produces effective solutions that maximize the use of resources by matching the needs at the affected locations with available resources and humanitarian relief.

The architecture of the developed system is shown in Figure 2 and includes the following main components:

Supply and demand module. The planner or analyst can input supply information in the form of available consumable and reusable resources, as well as demand-needs information, at the disaster zone from various organizations and sources of information.

Transportation planner. The combinations of commodities, people and other resources are prioritized by algorithms that consider item priorities, transportability and compatibility, as well as the availability and capacity of various types of transporters. A matching of supply and demand is executed to define a list of items that can be transported across a complex, multimode transportation network after taking into account infrastructure availability (airports, seaports, rail ports and roads).
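To make the priority-driven matching concrete, here is a minimal sketch in Python. The data model (item names, priorities, weights) and the greedy fill strategy are illustrative assumptions on my part; the planner described above considers many more factors, such as transportability, compatibility and multimode networks.

```python
# Toy sketch of priority-based loading: fill a transporter's weight capacity
# in item-priority order. Hypothetical data model, not the iHELP algorithm.
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    priority: int       # lower value = more urgent
    weight_kg: int      # weight per unit
    quantity: int       # units requested at the disaster site

def match_to_capacity(demands, capacity_kg):
    """Greedily load items in priority order until capacity runs out."""
    plan, remaining = [], capacity_kg
    for item in sorted(demands, key=lambda i: i.priority):
        loadable = min(item.quantity, remaining // item.weight_kg)
        if loadable > 0:
            plan.append((item.name, loadable))
            remaining -= loadable * item.weight_kg
    return plan

demands = [
    Item("water", 1, 500, 40),
    Item("medical kits", 1, 100, 100),
    Item("tents", 2, 300, 50),
]
print(match_to_capacity(demands, capacity_kg=25_000))
# [('water', 40), ('medical kits', 50)] -- tents (priority 2) are left behind
```

A real planner would replace the greedy rule with an optimization model, but the priority ordering and capacity bookkeeping shown here are the essence of the matching step.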

Service planner. The analyst defines service needs such as search-and-rescue, evacuation, food distribution and medical attention, among other services, required at the disaster location, with preferences for the method of executing these tasks. An algorithm optimizes the allocation of resources to services according to a priority system that fulfills the needs in the shortest possible time. Key performance indicators (KPIs) such as inventory levels, task start and end times, service fulfillment rates and delayed services are tracked in the system.
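The fulfillment-rate KPI can be illustrated with a few lines of Python. The service names, priorities and single-resource pool here are simplifying assumptions; the actual planner optimizes over many resource types and timing constraints.

```python
# Toy illustration: allocate a scarce resource pool to services in priority
# order and report per-service fulfillment rates (one of the KPIs mentioned).
def allocate(services, available):
    """services: list of (name, priority, units_needed); lower priority = more urgent.
    Returns {service name: fraction of its need that was met}."""
    rates = {}
    for name, priority, needed in sorted(services, key=lambda s: s[1]):
        given = min(needed, available)
        available -= given
        rates[name] = given / needed
    return rates

services = [
    ("food distribution", 2, 30),
    ("search-and-rescue", 1, 20),
    ("medical attention", 1, 15),
]
print(allocate(services, available=40))
# search-and-rescue and medical attention are fully served; food distribution
# receives only the 5 remaining units (rate ~0.17)
```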

Disruption module. Given the stochastic nature of disasters, disruptions due to transportation infrastructure failures or other reasons are handled through rerouting algorithms that can suggest alternative routes to reach the disaster area. This is similar to how smart GPS apps reroute drivers around traffic jams.
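The rerouting idea can be sketched with a standard shortest-path computation that skips blocked links. The toy road network and travel times below are invented for illustration; a production system would draw them from real infrastructure data.

```python
# Sketch of rerouting around a disrupted link: Dijkstra's algorithm over a
# small hypothetical network, recomputed with the failed edge excluded.
import heapq

def shortest_path(graph, src, dst, blocked=frozenset()):
    """graph: {node: {neighbor: travel_hours}}; blocked: set of (u, v) edges."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if (u, v) in blocked:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]

roads = {
    "depot": {"bridge": 2, "coast": 5},
    "bridge": {"disaster_zone": 1},
    "coast": {"disaster_zone": 3},
    "disaster_zone": {},
}
print(shortest_path(roads, "depot", "disaster_zone"))
# (['depot', 'bridge', 'disaster_zone'], 3) -- the fast route via the bridge
print(shortest_path(roads, "depot", "disaster_zone",
                    blocked={("bridge", "disaster_zone")}))
# (['depot', 'coast', 'disaster_zone'], 8) -- rerouted after the bridge fails
```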

Weather module. The developed system communicates through application programming interfaces (APIs) with weather forecasting platforms to identify extreme weather conditions that may cause disruptions to the response plan, and feeds that information back into the disruption module. This information provides the analyst with weather awareness when assessing logistical plans in both the disaster zone and the disaster response zones.
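A hedged sketch of the feedback step: the payload shape and safety thresholds below are hypothetical (real weather APIs vary widely), but they show how forecast data could be turned into a list of flagged route segments for the disruption module.

```python
# Hypothetical forecast payload and thresholds, for illustration only:
# flag route segments whose forecast exceeds assumed safety limits.
WIND_LIMIT_KMH = 90    # assumed limit above which convoys/flights are held
RAIN_LIMIT_MM = 120    # assumed 24-hour rainfall limit for road transport

def flag_disruptions(forecast):
    """Return the routes whose forecast violates either threshold."""
    return [seg["route"] for seg in forecast
            if seg["wind_kmh"] > WIND_LIMIT_KMH or seg["rain_mm"] > RAIN_LIMIT_MM]

forecast = [
    {"route": "coastal highway", "wind_kmh": 110, "rain_mm": 60},
    {"route": "inland rail", "wind_kmh": 40, "rain_mm": 20},
]
print(flag_disruptions(forecast))  # ['coastal highway']
```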

Social media module. Input from social media platforms such as Twitter and Facebook can be processed to extract trustworthy information by using some of the latest AI models, including NLP, deep learning, machine learning and image recognition to identify posts and images relevant to the disaster and classify them into different categories (such as fire, flood, collapse, injury, death, etc.). Furthermore, the extracted tweets and posts are contextualized and an aggregate trust score is computed and passed to the analyst to consider when allocating resources. In crisis situations, and when 911 systems become very busy or overwhelmed, social media platforms become a critical source of information, especially now that intelligent algorithms can decipher the information and extract the trustworthy elements of it.
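The classification and trust-scoring steps can be illustrated with a deliberately simple stand-in: keyword matching plays the role of the NLP and deep learning models, and a verified-account heuristic plays the role of the aggregate trust score. All names and weights below are illustrative assumptions.

```python
# Toy stand-in for the social media pipeline: classify posts into disaster
# categories and compute a crude aggregate trust score. Keyword matching
# substitutes here for the NLP/deep-learning models described in the text.
CATEGORIES = {
    "flood": {"flood", "flooded", "water rising"},
    "fire": {"fire", "burning"},
    "injury": {"injured", "hurt", "bleeding"},
}

def classify(text):
    """Return the sorted list of categories whose keywords appear in the post."""
    text = text.lower()
    return sorted(cat for cat, keywords in CATEGORIES.items()
                  if any(kw in text for kw in keywords))

def trust_score(posts):
    """Crude aggregate trust: verified accounts weigh 0.9, others 0.4 (assumed)."""
    scores = [0.9 if p["verified"] else 0.4 for p in posts]
    return sum(scores) / len(scores)

posts = [
    {"text": "Street is flooded, people injured near 5th Ave", "verified": True},
    {"text": "Huge fire downtown", "verified": False},
]
print([classify(p["text"]) for p in posts])  # [['flood', 'injury'], ['fire']]
print(trust_score(posts))                    # 0.65
```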

Because iHELP is deployed on the cloud, there is no need to install and maintain software on the client side. Furthermore, the system’s back end uses cloud-based databases to reliably store information, and the output is displayed on a web front end built with technologies such as React (for visualization on Google Maps) and JavaScript, which present statistical output on a web dashboard (as shown in Figure 3).

Finally, all components communicate via APIs and web services, which will make communicating with other systems in the future seamless and scalable. This is especially important for decentralized data that may come from various public and private sources.

Validation of the system

Although iHELP has not yet been used in a real disaster, as it is less than a year old, NATO’s Crisis Management and Disaster Response Center of Excellence in Sofia, Bulgaria, was involved throughout the project as an end user. It provided realistic disaster scenarios to test the system and validate the results based on its staff’s experience. Its involvement has contributed greatly to the development of methods and models that can be used in real-life scenarios.

However, we believe it is necessary to work with other disaster response organizations such as the U.N. Office for the Coordination of Humanitarian Affairs, Red Cross and Federal Emergency Management Agency, among others, to enhance the system’s ability to adapt to various types of scenarios.

It is very common for faculty members at universities to pursue sponsored funding for their research ideas, especially in the fields of science and engineering. It is less frequent, however, for university research projects to evolve from basic and theoretical research to applied research and turn their work into a solution applicable to real-world problems.

This project is an example of an innovative research program that applies principles of industrial engineering and operations research to disaster response and crisis management through open innovation platforms and entrepreneurial activity. We hope that iHELP will not only be sustainable but will also save lives.