Explainable AI: The Panacea for all AI woes?

“AI doesn’t have to be evil to destroy humanity — if AI has a goal and humanity just happens to come in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings.” — Elon Musk.


That’s Musk’s brand of theatrics for you, served with a generous helping of ominous hyperbole and vagueness, albeit intelligent-sounding.

For starters, AI isn’t a fire-spitting dragon that will destroy everything that comes in its way. It is technology — a set of self-learning, pattern-finding algorithms — that tries to make sense of data that is fed into it.

Think of AI as the earth-moving equipment used by the construction industry; just replace the mud and debris with tons of data. Like a bulldozer, AI can kill if it is manufactured for a murderous mission (purpose) or if the operator so chooses (application). Therefore, like all technology, the outcome of AI depends on how and where it is applied. Having said that, AI has shortcomings of its own.

One of the widely discussed shortcomings of AI is its inability to explain its output or the process that produced it. In certain areas of application, this lack of “explainability” could have serious social ramifications, such as screening applications for jobs and loans, or predicting criminal behaviour.

An immediate and implementable solution lies in Explainable AI, or XAI. XAI addresses the interpretability problem associated with the outcomes of complex blackbox algorithms like convolutional neural networks (CNNs). Simply put, it evaluates the inputs and outputs of a blackbox algorithm, without peering into its intermediate constructs or layers, to establish causality. It simplifies and, to a certain extent, approximates the reason for the output.
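The input-and-output approach described above can be sketched with a simple perturbation test, the core idea behind model-agnostic explainers such as LIME and SHAP. Everything below (the toy `blackbox` scoring function, its weights, and the feature names) is hypothetical and for illustration only:

```python
import math

# A hypothetical opaque scoring model standing in for any blackbox;
# the explainer below never looks at its internals.
def blackbox(features):
    income, debt, age = features
    z = 0.04 * income - 0.6 * debt + 0.01 * age
    return 1 / (1 + math.exp(-z))  # score between 0 and 1

def explain(model, x, eps=1.0):
    """Nudge each input feature by eps and record how the output moves:
    a local, model-agnostic estimate of that feature's influence."""
    base = model(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] += eps
        scores.append((model(perturbed) - base) / eps)
    return scores

attributions = explain(blackbox, [50.0, 10.0, 35.0])
# Positive scores push the prediction up, negative scores pull it down,
# giving a human-readable "why" without opening the blackbox.
```

Real XAI tools fit a full surrogate model over many such perturbed samples, but the principle of probing inputs and observing outputs is the same.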

It is the intrinsic nature/structure of AI that makes explainability both a necessity and a challenge. Let’s dig in a little deeper to understand this better.

Traditional AI/ML versus XAI
Source: https://www.darpa.mil/program/explainable-artificial-intelligence

How Explainable AI can help

When an organization chooses to implement AI in any of its decision-making processes, it needs to understand that AI isn’t plug-and-play software. The implementation process includes gathering, labelling, and processing data, and integrating the model into existing processes.

During the course of implementation, discrepancies can set in at various points, so one needs to watch out for biases at every stage. Not doing so might expose the company to financial penalties or loss of reputation.

There are umpteen examples of unintentional racial and gender biases creeping into crime- and job-related predictive analysis using AI. Most of these are attributed to largely homogeneous datasets with limited variability. Explainable AI helps spot these biases and potential pitfalls in time.

Four common types of errors that Explainable AI can help avoid are:

1. Classification errors — In unsupervised learning, there are no pre-defined boxes into which the data is sorted; data gets grouped based on matching parameters. It is misleading to rely on just the total percentage of observations correctly categorized: when one category is rare, even a high overall accuracy can hide massive errors on that category. Such incorrect categorization often comes with a large number of false positives.
Wrong classification leads to chaos
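The accuracy trap above can be shown with a toy example (the numbers are hypothetical): a model that never flags the rare category still scores 99% accuracy while missing every rare case.

```python
# 10 rare positives among 1,000 cases, and a "model" that always
# predicts the majority class.
labels = [1] * 10 + [0] * 990
predictions = [0] * 1000

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
true_positives = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
recall = true_positives / sum(labels)

print(accuracy)   # 0.99: looks excellent on paper
print(recall)     # 0.0: yet every rare case was missed
```

This is why per-category metrics (precision, recall, confusion matrices) matter more than a single accuracy figure when classes are imbalanced.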

2. Spurious bias — These are biases with purely statistical origins, such as small sample sizes. Smaller samples often have wide confidence intervals, and risk-averse AI systems may discard results that fall outside a narrow confidence interval. Because of how confidence intervals behave, such systems can end up statistically discriminating against data subjects from a minority group even when their characteristics are similar to the majority’s.
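The interval-width effect can be sketched with a normal-approximation confidence interval; the groups, rates, and approval threshold below are hypothetical:

```python
import math

def ci(successes, n, z=1.96):
    """95% confidence interval for a rate (normal approximation)."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

majority_low, _ = ci(700, 1000)   # 70% observed rate, large sample
minority_low, _ = ci(14, 20)      # same 70% rate, small sample

# A risk-averse rule like "approve only if the lower bound exceeds 0.6"
# passes the majority group but rejects the minority group, despite
# identical observed rates: the smaller sample alone widens the interval.
```
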

3. Anchoring and stability biases — Once a model is developed, newer variables have little influence on it, and even when they do, the effect takes a long time to show.

For example, algorithms predicting crime rates may assign greater weightage to a specific zip code (or a particular ethnic group) if the early offenders happened to belong to it. Even as new data points to significant exceptions, the model continues with its initial weights, ignoring other factors such as education, employment status or past criminal record.
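The sluggishness described above can be sketched as an estimate updated with a small learning rate (all numbers hypothetical): the model stays near its initial anchor long after the incoming data has changed.

```python
def update(estimate, observation, learning_rate=0.01):
    # Move the estimate a small step towards each new observation.
    return estimate + learning_rate * (observation - estimate)

risk = 0.9                   # early offenders anchored the model high
for _ in range(50):          # 50 new observations all indicate low risk
    risk = update(risk, 0.1)

# After 50 contrary observations, the estimate is still above 0.5,
# far from the 0.1 that the recent data supports.
```
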

4. Adversarial machine learning — Even the most robust and technically sound AI algorithms are fragile and vulnerable to attack. While the initial learning happens on a training data set, these systems keep learning from the new data that flows in, so anyone who can manipulate that stream can skew the output. Since the processing and output depend on the data seen so far, it translates into “garbage in, garbage out”.
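A minimal sketch of the poisoning risk (hypothetical numbers): a model that keeps learning a running estimate from incoming data can be steered by a handful of injected outliers.

```python
legitimate = [5.0] * 100      # genuine observations hover around 5
poison = [100.0] * 10         # attacker injects a few extreme points

clean_estimate = sum(legitimate) / len(legitimate)
poisoned = legitimate + poison
poisoned_estimate = sum(poisoned) / len(poisoned)

print(clean_estimate)      # 5.0
print(poisoned_estimate)   # ~13.6: roughly 10% bad data, output nearly tripled
```

Robust statistics (e.g. medians, outlier filtering) and monitoring of incoming data are common mitigations for this failure mode.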

Explainable AI can help identify and address the above challenges of accidental misclassification, bias, and adversarial machine learning. Some of the corrective measures could include:

a) Using more diverse training data to include rare or minority cases

b) Checking algorithms in new situations as they emerge

c) Proactively and regularly auditing algorithms for bias, to avoid being caught off-guard

d) Monitoring algorithms learning-on-the-fly to make appropriate interventions when required

The Future of XAI

Explainable AI is a necessity in socio-technical applications that concern governance or otherwise affect people’s lives and livelihoods.

A case in point is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), used by U.S. courts to predict the probability of a defendant becoming a repeat offender. A 2016 study pointed out that:

· People of colour were wrongly labelled as future criminals at almost twice the rate of white defendants

· Only 20 per cent of those predicted to commit violent crimes actually did so

Remarkably, COMPAS has been used to inform sentencing and to decide the degree of supervision required in prisons. It is a perfect example of a blackbox that got it all wrong. With several such instances affecting human life and liberty, “just trust the algorithms” doesn’t cut it anymore. Moreover, in high-stakes use cases like self-driving cars or medical diagnosis, where trust is integral to the solution, the opacity of blackbox algorithms has no place.

With well-known organizations like NVIDIA and Equifax gravitating towards XAI, the future course of AI is clearly laid out. Explainable AI is the only way AI can find greater adoption.

Co-Founder at Railofy (an AI-powered TravelTech startup)