In this post we focus on explainability, a principle included in most corporate and institutional AI ethics codes.
Generally, when talking about AI and the need for it to be transparent, the discussion focuses on the power of data to influence our decision-making, especially (but not only) in areas that may affect our fundamental rights, such as finance, health, law or education.
Explainable AI, also known as XAI, is a set of processes and methods that aims to make clear how decisions are made in AI systems, what their potential impact is and what biases they may carry, allowing users not only to understand but also to trust the results produced by machine learning algorithms.
Explainable AI helps make concrete the accuracy, fairness and transparency of a model's decisions, so that its results are presented as comprehensibly as possible for the user. It can also help an organization adopt a responsible approach to AI development, one that includes the ethical component each organization must bring to the decisions it makes.
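One common family of XAI methods is model-agnostic feature attribution, which measures how much a model relies on each input. Below is a minimal, illustrative sketch of permutation importance: shuffle one feature at a time and observe how much the model's output changes. The model, weights and feature names are hypothetical, not taken from any real system.

```python
import random

# A toy "black-box" risk model over three normalized inputs.
# The weights are illustrative only -- not a real trained model.
def model(temp, vibration, hours):
    return 0.7 * vibration + 0.2 * temp + 0.1 * hours

# Small synthetic dataset: rows of (temp, vibration, hours).
data = [
    (0.2, 0.9, 0.4),
    (0.8, 0.1, 0.6),
    (0.5, 0.5, 0.5),
    (0.3, 0.7, 0.2),
]

def permutation_importance(data, feature_idx, trials=100, seed=0):
    """Mean absolute change in the model's output when one feature
    column is shuffled -- a simple, model-agnostic measure of how
    much the model depends on that feature."""
    rng = random.Random(seed)
    baseline = [model(*row) for row in data]
    total = 0.0
    for _ in range(trials):
        column = [row[feature_idx] for row in data]
        rng.shuffle(column)
        for i, row in enumerate(data):
            perturbed = list(row)
            perturbed[feature_idx] = column[i]
            total += abs(model(*perturbed) - baseline[i])
    return total / (trials * len(data))

names = ["temperature", "vibration", "runtime_hours"]
scores = {n: permutation_importance(data, i) for i, n in enumerate(names)}
# The feature the model weights most heavily (vibration) should
# receive the largest importance score.
```

In practice, libraries such as SHAP or LIME implement more sophisticated attribution methods, but the underlying idea is the same: explain a model's output in terms of its inputs.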
The more complicated an AI system becomes, the more connections it makes between different pieces of data, and those connections need to be made understandable to the user through explainability. For example, when a system performs facial recognition, it matches an image to a person. But the system cannot explain how the pieces of the image are mapped to that person, because the set of connections is too complex. Why does this happen?
It happens because these systems rely on deep learning, built on artificial neural networks, to solve problems in a way loosely analogous to how the human brain works. However, the connections a deep network learns are far more abstract and opaque, which is precisely what makes them hard to explain.
In what follows, we analyze this ethical value with examples from different sectors of the AI industry, in particular some of the ones we focus on at AI Shepherds.
AI explainability in the manufacturing industry
Manufacturing companies have been pursuing automation for years as that extra set of eyes needed to increase operational efficiency.
This industry uses AI in many of its applications, such as predictive maintenance, anomaly detection and supply chain optimisation, but adoption can be difficult because of a lack of accountability in decision-making, a problem that is solvable if we let explainability take centre stage.
Explainable AI in manufacturing improves efficiency, workplace safety and customer satisfaction by automating routine tasks. It also reduces downtime and helps deliver the high-quality products consumers demand: a system can indicate, for example, whether a tool error or failure is about to occur and whether it will require maintenance or equipment replacement.
Overall, it helps to identify bottlenecks, address problems and deliver flawless end products.
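A predictive-maintenance alert is far more actionable when it comes with a local explanation of which sensor reading triggered it. The sketch below assumes a simple linear risk model with hypothetical weights and threshold; real systems would use trained models and richer attribution methods, but the principle is the same.

```python
# Toy linear maintenance-risk model over normalized sensor readings.
# Weights and threshold are illustrative assumptions, not from a
# real deployment.
WEIGHTS = {"vibration": 0.6, "temperature": 0.3, "runtime_hours": 0.1}
THRESHOLD = 0.5  # a risk score above this triggers a maintenance alert

def risk(reading):
    """Overall risk score: weighted sum of sensor readings."""
    return sum(WEIGHTS[k] * reading[k] for k in WEIGHTS)

def explain_alert(reading):
    """Per-feature contributions to the score, largest first, so an
    operator can see *why* the alert fired (a local explanation)."""
    contribs = {k: WEIGHTS[k] * reading[k] for k in WEIGHTS}
    return sorted(contribs.items(), key=lambda kv: kv[1], reverse=True)

reading = {"vibration": 0.9, "temperature": 0.4, "runtime_hours": 0.7}
score = risk(reading)
if score > THRESHOLD:
    explanation = explain_alert(reading)
    # The top contributor here is vibration (0.6 * 0.9 = 0.54), so the
    # operator knows to inspect the vibrating component first.
```

The point is not the arithmetic but the contract: every alert ships with a human-readable reason, which is what accountability in manufacturing decisions requires.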
AI explainability in automotive
For certain use cases, describing the decision-making process the AI system has gone through is both important and responsible. Let's take an example: in autonomous vehicles, decisions can save or endanger a person's life. If there is an accident or something goes wrong, the humans involved must be able to understand why and how it happened.
So the steps must be the right ones: track, find and mitigate bias, with special attention to transparency in the system; in other words, explain everything in detail.
AI explainability in energy
AI is used every day in the energy sector, and therefore it must be explainable and transparent, because the decisions made affect individuals and, with that, their lives: the price of electricity, the avoidance of blackouts, or energy surpluses that lead to unwanted CO2 emissions.
The energy sector is highly regulated and some actors have reporting obligations and must be able to account for the outcome of the predictive models they integrate into their operations and/or decision-making processes.
For example, to understand why energy will become cleaner, more affordable and much more reliable, we will need to understand the process, rationale and basis for every decision that AI can help bring about. Only then will everyone be able to trust each decision and understand the outcome.
AI models can occupy positions of great business responsibility, so it is natural for customers to demand clear explanations and information about how these models make their decisions. It is ethically right to ask for such explanations and it is ethically right to give them.
Companies must provide customers with the clarity and transparency of information they need. The market must be able to see that the company is acting transparently in the development of its AI projects; companies that provide transparency and certainty will be rewarded with greater customer confidence.
In conclusion, we believe that the debate on Artificial Intelligence needs to be deepened considering that it will change every aspect of our lives. Explainability is a fundamental ethical value for AI to be properly implemented across all industries.
What development can succeed if we are not able to explain every detail of the process with full transparency and responsible ethics?