Rethinking the foundations of ethical AI

ISE Magazine September 2020 Volume: 52 Number: 09

By Joseph Byrum

https://www.iise.org/iemagazine/2020-08/html/byrum/byrum.html

Engineers are sometimes their own worst enemy. They design a system with the sole intention of creating the best possible results; the last thing they want is a product skewed by bias. So how could a system designed by engineers and programmers opposed to bias in all its forms ever produce the opposite of the intended outcome?

The introduction of unintentional bias in complex systems is nothing new. A decade ago, Nikon's digital cameras used a primitive form of face recognition that would identify subjects in a photograph and automatically focus on the human face. Some commentators branded the feature racist because the camera failed to recognize Asian faces as human. It is not likely that the Tokyo-based camera giant secretly harbored such a grudge, but it was nonetheless caught up in a minor yet embarrassing controversy.

Fast forward to today, when far more advanced facial recognition algorithms, like Amazon's Rekognition, have generated the same racial bias charges – this time from weightier sources such as the MIT Media Lab and the National Institute of Standards and Technology (NIST).

Fortunately, the developers of these algorithms responded swiftly once the problems were brought to their attention, and the revised systems produce better results. But the damage was already done. In the age of Twitter-fueled, short-form media consumption, many will remember the initial incendiary "racist AI" headlines. Few will ever see the more modest "the AI has been marginally improved" follow-up stories, if they are ever written.

Unexpected results like these and the resultant media tempest represent the greatest long-term threat to the success of advanced artificial intelligence systems. Combine this with science fiction's near-universal depiction of AI (in this case, artificial general intelligence) as an evil force bent on destroying the world, and the public will demand that politicians take action to protect them from AI in all its forms.

We shouldn’t let the problem get that far. If society ever expects to progress beyond the consumer-grade “AI” found in Alexa, Siri and Google Assistant, we need to make a commitment to shore up the ethical foundations of AI development so that we can end unforced errors and unintentional bias. Doing so will produce better results and open the door to revolutionary advances in business efficiency.

Validation is the key to ethical AI

Last year, the Institute of Electrical and Electronics Engineers (IEEE) released its Ethically Aligned Design guidelines, meant to encourage developers of automated systems to reflect on principles that, if followed, will help them avoid such problems. The all-encompassing principles address many different potential problems, but those following them will have to think about, and then implement, effective validation routines. Validation is more than half the battle in preventing unintended consequences in AI.

One issue with the deep learning and machine learning variants of artificial intelligence is that end users and developers generally have no idea how an algorithm arrives at a solution to the problem it was created to solve. The whole point of this subset of AI is that these systems create their own algorithm for evaluating data based upon an analysis of a training dataset.

Let’s say engineers wish to create a machine learning program that diagnoses a certain type of disease from an X-ray. Their first step would be to approach medical experts to collect images from a set of individuals known to have the disease and a second set from individuals without it. The developers would pay special attention to finding the most difficult edge cases that might trip up a simple program. The trickiest images might, in fact, fool even a physician.

A machine learning algorithm would process the training dataset by making a catalog of all the distinctive visual features of the disease. For instance, it might note black spots of a certain shape and size in the lung as a sure sign of disease. It would then compare these images to what the lungs of a healthy subject look like. The algorithm would assign a weight to each feature.
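To make that weighting step concrete, here is a minimal sketch in Python. The feature names, counts and the log-odds formula are illustrative assumptions, not any particular vendor's method; the idea is simply that a feature earns a larger weight the more often it appears in diseased training images relative to healthy ones.

    import math

    # Hypothetical counts of how often each visual feature appears in the two
    # halves of the training set (diseased vs. healthy X-rays).
    counts_diseased = {"dark_spot": 90, "irregular_border": 60, "clear_field": 10}
    counts_healthy  = {"dark_spot": 15, "irregular_border": 20, "clear_field": 85}
    n_diseased, n_healthy = 100, 100

    def log_odds_weight(feature):
        # Smoothed frequency in each class, turned into a log-odds weight:
        # positive points toward disease, negative toward health.
        p_d = (counts_diseased[feature] + 1) / (n_diseased + 2)
        p_h = (counts_healthy[feature] + 1) / (n_healthy + 2)
        return math.log(p_d / p_h)

    feature_weights = {f: log_odds_weight(f) for f in counts_diseased}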

Thus armed with training data, the program would examine a new X-ray image and evaluate how many healthy features and how many diseased features are present. It would then make a statistical evaluation based on what it has "learned" from the training dataset to decide whether the picture likely came from someone with the disease.
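Continuing the sketch with made-up weights, classifying a new image then amounts to summing the evidence for the features detected in it and comparing the total to a decision threshold:

    # Illustrative weights only; in practice they would come from the training step.
    feature_weights = {"dark_spot": 1.7, "irregular_border": 1.0, "clear_field": -2.0}

    def score(features_present, weights):
        # Add up the evidence contributed by each feature detected in the image.
        return sum(weights.get(f, 0.0) for f in features_present)

    THRESHOLD = 0.0  # totals above this are read as "likely diseased"

    new_image_features = {"dark_spot", "irregular_border"}  # hypothetical detector output
    label = ("likely diseased"
             if score(new_image_features, feature_weights) > THRESHOLD
             else "likely healthy")
    print(label)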

Such algorithms aren’t static; they are designed to make adjustments according to their ongoing successes and failures. For instance, if a black spot 2 pixels wide triggers too many false positives, the AI might reduce its sensitivity so that only spots 3 pixels wide trigger a positive indication.
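A toy version of that self-adjustment, with parameter names and an update rule assumed purely for illustration, might widen the minimum spot size whenever labeled feedback shows too many false alarms:

    min_spot_width = 2  # pixels; spots at least this wide trigger a positive call

    def adjust_sensitivity(recent_results, max_false_positive_rate=0.10):
        # recent_results is a list of (predicted_positive, actually_diseased) pairs.
        # If too many positives turn out to be false alarms, require wider spots.
        global min_spot_width
        positives = [r for r in recent_results if r[0]]
        if not positives:
            return min_spot_width
        fp_rate = sum(1 for pred, truth in positives if not truth) / len(positives)
        if fp_rate > max_false_positive_rate:
            min_spot_width += 1  # e.g., from 2-pixel to 3-pixel spots
        return min_spot_width

    feedback = [(True, True), (True, False), (True, False), (False, False)]
    print(adjust_sensitivity(feedback))  # too many false alarms, so width becomes 3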

Human intervention isn’t required for the algorithm adjustment, nor is human understanding or approval. The whole point is to automate this process so that instead of requiring manual intervention each time an equation goes wrong, self-learning code can make the changes needed for optimum results. As much as that contributes to overall efficiency, it’s also a potential liability and source of error.

Approaching a solution through principles

Machine learning will always have these limitations because a training dataset of necessity presents only a slice of reality. The real world consists of many layers of complexity, with countless interacting factors that conspire to confound anyone or anything attempting to come up with a simple algorithm that can produce perfect results.

As with dodgy digital camera image recognition, the ethical problems aren’t caused by unethical engineers. They’re caused by engineers taking shortcuts in development as a way to manage the overwhelming complexity of the task at hand.

IEEE’s eight ethical design principles are designed to get AI creators thinking broadly about human rights, well-being, data agency (including privacy), effectiveness, transparency, accountability, awareness of misuse and competence. These principles direct development teams to think in terms of making AI that advances the interests of humanity, ensuring the system works in a way that is well documented and under human control.

A developer adhering to these standards will create an automated system with an audit trail that allows inspection of why the AI made any given choice. In this way, transparency allows the system’s human creator and owner to be accountable for its decisions. It also makes developers more vigilant about the possibility that someone could manipulate or otherwise exploit the AI.
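One way to picture such an audit trail, purely as a sketch with assumed record fields, is a log entry written for every automated decision so that a reviewer can later reconstruct what the system saw and why it acted:

    import json, time

    def log_decision(audit_log, image_id, features, score, threshold, verdict):
        # Append a human-readable record of one automated decision.
        audit_log.append({
            "timestamp": time.time(),
            "image_id": image_id,
            "features_detected": sorted(features),
            "score": score,
            "threshold": threshold,
            "verdict": verdict,
            "model_version": "demo-0.1",  # which model produced the decision
        })

    audit_log = []
    log_decision(audit_log, "xray-0042", {"dark_spot"}, 1.7, 0.0, "likely diseased")
    print(json.dumps(audit_log[-1], indent=2))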

Perhaps more importantly, the principles urge the creation of standards to validate that the AI and its human operators alike are effective in their respective tasks, getting the job done properly. In all cases, a human must always be overseeing what’s happening with the power to step in and correct potential mistakes.

Augmented intelligence bypasses common ethical dilemmas

Deep learning and machine learning get all the attention these days, but implementation of the IEEE principles is more difficult with these forms of AI because they lack transparency. An alternative would be to concentrate on augmented intelligence systems that don’t need to be squeezed and molded into something that might fit under ethical guidelines. Augmented intelligence remains human-centered by design.

With augmented intelligence, the AI system is, by design, always under a human operator’s full control. The job of augmented intelligence algorithms is to process data and provide refined intelligence to the user. The system can also offer suggested courses of action; the operator reviews those and makes the final decision about what to do based upon the information presented.
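A bare-bones sketch of that division of labor (the function names and options are hypothetical) makes the point: the algorithm's job ends at ranking suggestions, and nothing happens until the operator explicitly accepts one.

    def suggest_actions(options):
        # The algorithm's whole job ends here: rank the options it has derived
        # from the data, most confident first.
        return sorted(options, key=lambda o: o["confidence"], reverse=True)

    def operator_decides(suggestions, chosen_index):
        # The human operator reviews the ranked suggestions and makes the final
        # call; the system never acts on its own. chosen_index=None rejects all.
        return None if chosen_index is None else suggestions[chosen_index]

    options = [
        {"action": "Schedule a routine re-check in six months", "confidence": 0.55},
        {"action": "Order a follow-up scan", "confidence": 0.82},
    ]
    ranked = suggest_actions(options)
    decision = operator_decides(ranked, chosen_index=0)  # operator accepts the top suggestion
    print(decision["action"])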

Such systems are also transparent by design; because the information is provided to the human operator, there’s a clear chain of data to follow to determine how each decision was formed. There’s no mystery involved.

It’s also better in terms of protecting human well-being. Augmented intelligence is similar to driving a car with a GPS device taking care of navigation, or with a lane assist feature that nudges drivers to keep them from veering off-course. It’s entirely possible for an evil driver to use a car so equipped to hit another vehicle or pedestrian, but that would only happen as the result of the operator’s deliberate choice. The system itself remains within ethical boundaries.

With a fully autonomous car, the systems are far more vulnerable to error, and software glitches or hacking could cause unwanted results, to the detriment of human well-being. Human control acts as an insurance policy for human interests – it’s not a guarantee of good results, but it certainly helps.

Why ethics matters: Building the intelligent enterprise

Advancing ethical AI is a critical step toward having AI systems capable of assisting all the functions of a business. I refer to a company designed from the ground up to use augmented intelligence to boost the decision-making abilities of its employees as the intelligent enterprise. Judging from the benefits of one-off AI-optimized systems, unlocking the productivity of every employee in a business would unleash a step change in efficiency, to the benefit of consumers and business owners alike.

But the intelligent enterprise will never become reality if AI development efforts are directed toward dead ends or systems the public will never fully trust. Ensuring more robust and ethical development of AI systems in these early days is the most critical step in achieving the long-term potential of automated systems.
