AI in Healthcare: Improving Human Interface for Patient Safety
ISE Magazine, January 2020, Vol. 52, No. 3
By Avishek Choudhury
Working in healthcare can cause major mental, physical and organizational stress, which in turn can lead to clinical hazards. Artificial intelligence may help, though recent research on such advanced technology, while largely devoted to patient safety, remains narrow; regulatory bodies conducting such studies, such as the U.S. Food and Drug Administration (FDA), focus on improving an AI system's performance purely from a statistical viewpoint. What is missing is the integration of AI and cognitive ergonomics to make these systems more user-friendly for medical workers.
Cognitive ergonomics – the gathering of knowledge about human perception, memory and mental processes – has been neglected in the healthcare domain, and the evaluation of AI from a cognitive ergonomics perspective is not well-established. Though the effect of clinicians' heavy mental load on patient safety is known, there is no framework to guide the design of the graphical user interface (GUI) of complex AI systems or to assess their influence on clinicians' and patients' thoughts and feelings. Here we adopt a systems approach to develop and propose a conceptual framework spanning AI, cognitive ergonomics and patient safety.
The gap between cognitive ergonomics and AI
Cognitive ergonomics is a component of human factors and ergonomics aimed at ensuring effective interaction between technology and humans. In this article, we discuss human-system interaction from a cognitive ergonomics perspective in healthcare. In this interaction, cognitive ergonomics concentrates on mental processes such as thinking, reasoning and problem-solving, as well as psychological or behavioral interactions. In cognitive ergonomics, these aspects are studied in the context of work and other systems.
Due to the increasing complexity of healthcare, researchers have focused primarily on applying AI to improve diagnostic accuracy. AI systems have long held the promise of refining the prediction of diseases, such as guiding imaging for pulmonary embolisms.
A healthcare AI system has two dimensions that impact cognitive workload: the complexity of its algorithm and of its user interface. Many researchers have tried to simplify the underlying algorithm, yet no significant steps have been taken to improve the AI system's interface. Thus, its impact on human cognition remains uncertain.
A well-designed graphical interface can help an AI user locate relevant information, then interpret and prioritize it. The significance of cognitive ergonomics and human factors at the intersection of healthcare and AI has not yet been studied, and cognitive load has been assumed rather than measured. There is therefore a need to apply the methods of cognitive ergonomics to healthcare artificial intelligence systems. We propose a framework to simplify the GUI of healthcare AI systems and understand its impact on clinicians' cognitive load.
Meaningful outcome and cognitive load
Those who use advanced AI, such as clinicians, might face difficulties in understanding and interpreting the outcomes of the technology. This may be due to the users' inability to form a conceptual model of the information presented on a computer screen or a device, and their lack of understanding of the system's working principle. Addressing confusion with AI technology and its interface will not only aid in safer application but can also foster greater satisfaction.
Nevertheless, a benchmark for examining the cognitive load generated by a healthcare AI system is not defined in the FDA's premarket clearance program. In an early-phase study of 326 hospitalized patients, the FDA approved a predictive algorithm, WAVE, which flags abnormalities in vital signs and has led to a reduction in the average duration of patient instability. Though this was judicious under current regulatory standards, the approved system was not tested for the usability and complexity of its GUI. Is the system's interface simple or intuitive enough for an inexperienced clinician to implement and understand in a chaotic environment? It is uncertain whether WAVE has reduced clinicians' cognitive load.
The FDA should rigorously confirm and test surrogate endpoints of new technologies to prevent the introduction of AI systems with questionable GUIs into a chaotic healthcare environment where human life is at risk. The agency must ensure that implementing an AI system not only improves diagnostic capabilities but also minimizes a clinician's time spent analyzing and interpreting complex outcomes; such measures could be valuable for premarketing authorization. Unfortunately, no AI algorithms or systems that received regulatory clearance have been tested for their impact on cognitive load.
With the increase of technology in healthcare, clinicians and patients are encountering AI devices that involve complex and unfamiliar GUIs. The industry needs to conduct comprehensive and informative research to develop user-friendly AI devices and systems by considering such design needs as human memory limitation, perception and attention.
GUI design considerations
Here are aspects of GUI design that should be considered when approving new AI technology for healthcare use:
Human memory limitation. This partly involves retention theory and cognitive load theory. Human memory allows an average person to retain seven (plus or minus two) sets of information at a time. Thus, to ensure better retention, health information should be divided into smaller units that do not exceed 7 ± 2 items per AI system display. Displaying fewer sets of information per screen can reduce clinicians' need to memorize data, which in turn helps minimize the cognitive load on them and their patients. In addition, using suitable colors in GUI design has been shown to enhance information retention by 50%.
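As a rough illustration of the 7 ± 2 guideline, the sketch below (a hypothetical helper, not from the article) splits a list of display items into screens of at most seven entries:

```python
def paginate(items, max_per_screen=7):
    """Split display items into screens of at most max_per_screen
    entries, following the 7 +/- 2 retention guideline."""
    return [items[i:i + max_per_screen]
            for i in range(0, len(items), max_per_screen)]

# Hypothetical vital-sign labels, for illustration only.
vitals = ["heart rate", "respiration", "oxygen saturation",
          "temperature", "blood pressure", "pain score",
          "glucose", "weight", "blood gases"]
screens = paginate(vitals)  # two screens: 7 items, then 2
```

A real clinical display would also weigh clinical priority, not just count, but the chunking principle is the same.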
Perception. This aspect employs schema theory and the Gestalt laws. A schema signifies the conceptual representation of how an individual decodes information and extracts contextual knowledge. For instance, when we see an image showing the sun, a blue sky and birds flying, we interpret it as daytime and associate a feeling of happiness. In addition, users' prior experience and knowledge play a crucial role in enhancing their perceiving ability (transference and mental imagery). Transference denotes the users' anticipation of the behavior of an AI system based on past experience with other computer interfaces, such as text placement and the location, appearance and functionality of buttons and icons. Mental imagery refers to the conceptual representation of how things look.
Attention. The left-to-right theory can be applied to the placement of text and images in an effective AI interface. It suggests placing critical information in the top left corner of the screen, because people tend to read data from upper left to lower right. Additionally, the placement of text and images should account for the human visual field, which is divided into two different segments, as shown in Figure 2. Placing text or images according to the human visual field minimizes the users' cognitive load.
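One way to operationalize this reading-order rule, sketched below as a hypothetical helper (the grid size and numeric priority scheme are assumptions, not from the article), is to sort interface elements by clinical priority and fill grid cells from the upper left:

```python
def layout_order(widgets, columns=3):
    """Assign (row, col) grid positions so the most critical widget
    (lowest priority number) lands in the upper-left cell, matching
    the upper-left-to-lower-right scanning pattern."""
    ordered = sorted(widgets, key=lambda w: w[1])
    return {name: (i // columns, i % columns)
            for i, (name, _) in enumerate(ordered)}

# Hypothetical widgets as (name, priority); lower = more critical.
positions = layout_order([("alarm", 0), ("trend chart", 2),
                          ("vitals", 1), ("notes", 3)])
# "alarm" is assigned cell (0, 0), the top-left position.
```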
Clinicians may experience difficulties whenever multiple parallel tasks overwhelm their senses. For example, an alarm sounding on an emergency floor or a patient talking in an outpatient setting can be distracting if auditory information is also running on the AI system. Such multitasking should be avoided to reduce cognitive load. Their sense of sight might be impaired when background images or texts are embedded under pieces of informative text, so AI-enabled applications should avoid such embedded content. And the need to process information simultaneously through two channels increases cognitive load, such as listening to a patient's distress while using a complex AI system.
Trust and meaningful use
As governing bodies decide which downstream features matter for AI systems, they should also keep in mind that, without acceptance and trust from clinicians, these systems will fail as a technology despite good analytical performance. Studies have shown that trust is built continuously, demanding two-way interaction between the user and the technology.
Initial trust is essential for ensuring the adoption of new technology. Trust is influenced by a user's first impression and is built based on that person's personality and institutional cues. Once trust is developed, it must be nourished to be sustained. In our context, continuous trust depends on the functioning of the AI system in reducing users' cognitive load and yielding clinically meaningful outcomes.
Trust in technology has long been believed to be determined by human characteristics (personality and ability), environmental characteristics (culture, task and institutional factors) and technology characteristics (performance, process and purpose). However, the impact of meaningful use on trust has been neglected. Meaningful use, just as it is imposed on the application of electronic health record (EHR) technology, should be imposed on AI systems to improve the quality of care.
The meaningful use of AI systems means that a specific AI should be implemented and interpreted in a specific manner. Depending on the functioning of the AI algorithm, not all systems can be generalized across health systems.
For instance, the WAVE platform algorithm is based on five vital signs: heart rate, respiration, oxygen saturation, temperature and blood pressure. Since such measures are common across health systems, the system could be employed by multiple diverse health systems. However, other AI-based platforms, especially those based on institution-specific EHR or image datasets, may not translate across other EHRs. Moreover, AI trained on specific datasets, such as patients from a specific institution, may not generalize to broader populations.
Increasing AI interoperability may require developers to deliver more specified data to confirm that predictive algorithms will achieve reliable, replicable and valid results. Indeed, regulators should focus on balancing the clarity of predictive models against the proprietary interests and intellectual property of algorithm developers.
Healthcare AI systems are only now being evaluated and made available for clinical use, so the influence of the existing regulatory framework on patient outcomes is yet to be determined. It is also uncertain what impact the 21st Century Cures Act, passed to relax regulatory standards for low-risk health technology, will have on the value and quality of predictive algorithms. The FDA's Digital Health Innovation Action Plan, issued in 2017, launched a precertification program to analyze clinical outcomes of AI-based algorithms. Such efforts should be applauded but improved based on our recommended norms.
Some developers may disparage the overregulation and standardization of a vaguely understood field. Indeed, commitments to regulate healthcare AI systems will emerge over time and impose financial costs on stakeholders. Policymakers should also be sensitive to the balance between regulation and innovation in this evolving field.
Source: IISE Magazine (https://www.iise.org/iemagazine/2020-02/html/choudhury/choudhury.html)