Date Published: 22 April 2022

Why Do We Need Explainable AI?

Artificial Intelligence (AI) has the potential to make our lives more comfortable through products such as smartwatches, Alexa, Google Search and smart home devices. AI touches nearly every part of our day, yet few understand what AI actually is. Artificial intelligence is a computer program with the ability to perceive, learn, reason, and act like humans. AI aims to make computers cognitive by building intelligent machines capable of performing tasks that typically require human intelligence, such as learning, planning, knowledge representation, perception and problem-solving.

IBM defines AI as anything that makes machines act more intelligently. Similarly, the Economist Intelligence Unit defines it as a set of computer science techniques that enable systems to perform tasks that usually require human intelligence. Dr Ashok Goel of the Georgia Institute of Technology describes AI as the science of building artificial minds: understanding how human minds work, then building one. Across these definitions, some common traits emerge: AI acts on information, mimics human thinking, and is outcome-focused and adaptive.

One of the difficulties in defining AI is its use across heterogeneous applications and industries, which produces a wide range of definitions depending on the background and objectives of the individuals involved. O'Reilly argues that a precise definition is impossible because we do not yet understand human intelligence itself. Paradoxically, advances in AI may do more to define what human intelligence isn't than what artificial intelligence is. For our purposes, AI can be summarised as computer technology that exhibits intelligent behaviour.

AI helps people work more creatively and effectively. Some of the benefits of AI tools include:

Manufacturing and production: AI helps manufacturing companies maintain quality control and reduce repetitive tasks through continuous data analytics and monitoring. AI also supports predictive analysis, which, combined with human intelligence, helps forecast product demand and pricing.

Health sector: Some prominent areas where AI could play an essential role in the medical industry are predicting diseases, automating day-to-day operations, preventive interventions, and precision surgery. It helps reduce human error by supporting medical staff in reviewing patient records, providing virtual assistance and making accurate diagnoses.

Food and agriculture: With an increasing global population, the UN Food and Agriculture Organisation (FAO) estimates that farmers will have to produce 70% more food by 2050 to meet the world population's needs. AI has therefore been used to predict crop production, automate farm machinery, and improve the monitoring of crop growth.

Health and safety: Another advantage AI could bring is replacing humans in hazardous environments such as defusing bombs, coal mining, space travel and exploration, or search and rescue in disaster areas.

On the other side, AI has some disadvantages, such as:

Black box problem: A black box is a device, system, or program that maps an input to an output. An external observer cannot view or monitor what is going on inside the black box, or the rationale for individual mappings. This is especially true when the black box uses a particular kind of AI, namely deep learning, which consists of many hidden layers and learns on its own by recognising patterns in data. Because people do not get the complete picture of what is happening, AI faces trust and acceptance issues, which makes it challenging to trust the decisions and suggestions made by an AI application.
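The contrast above can be sketched in a few lines of code. The weights and inputs below are purely illustrative assumptions: a tiny fixed "network" maps inputs to a score, but inspecting its weights reveals no human-readable rationale for any single decision, whereas a hand-written rule carries its own explanation.

```python
import math

# A tiny "black box": a fixed two-layer network with arbitrary,
# illustrative weights. It maps an input to a score, but the weights
# carry no human-readable rationale for any individual decision.
W1 = [[0.7, -1.2], [0.4, 0.9]]   # hidden-layer weights (made up)
W2 = [1.1, -0.8]                 # output-layer weights (made up)

def black_box(x):
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

# A transparent rule producing a decision whose reasoning can be stated
# directly: "flag any transaction above 1000".
def transparent_rule(amount):
    return amount > 1000

print(black_box([0.5, 2.0]))   # a score, but *why* is not visible in W1/W2
print(transparent_rule(1500))  # True -- and the reason is the rule itself
```

The point is not that rules beat networks, but that the network's internal numbers, unlike the rule, do not translate into an explanation a stakeholder can follow.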

Unemployment: With the growth of automation, AI will increase productivity, performing specialised job roles with fewer errors and higher output. Machines are often better than humans at physical tasks: they can move faster, work more precisely, and lift greater loads. As machines become more capable than humans at some tasks, this can lead to the replacement of humans by AI.

Cost: AI depends on hardware and software that need updating and regular maintenance. Keeping these systems functioning therefore requires a considerable sum of money and a highly skilled technical workforce.

Lack of cognitive power: Most current AI lacks human-level cognitive ability and cannot grow or improve from experience on its own. It performs well on repetitive tasks where the input does not change, but adapting to new conditions requires continual reassessment, retraining and rebuilding, so it cannot function well in changing environments.

Overall, and despite the downsides, AI has more advantages than disadvantages. One approach used in recent years to minimise the limitations is Explainable AI. Explainable AI refers to AI systems that explain the reasoning behind their predictions: it helps us understand and visualise what the model is learning and why it predicts what it does. Explainable AI is a crucial aspect of ethical AI and rests on four core values: transparency, trustworthiness, fairness, and accountability. There are a few limitations to this approach, too. If not designed carefully, an explainable model can have lower accuracy than a black-box model. It is also challenging to generate accurate explanations that the general public can understand. Nonetheless, AI needs to build trust by explaining its process and making intelligent decisions understandable to all AI stakeholders.
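One simple and widely used form of explanation is to report each feature's signed contribution to a linear model's score. The sketch below is a minimal illustration of that idea; the feature names, weights and input values are hypothetical assumptions, not taken from any real system.

```python
# Minimal sketch: per-feature contributions of a linear model as an
# explanation. All names and numbers below are illustrative assumptions.
FEATURES = ["income", "debt_ratio", "missed_payments"]
WEIGHTS  = [0.8, -1.5, -2.0]   # hypothetical learned coefficients
BIAS     = 0.5

def predict_with_explanation(x):
    """Return the model's score plus each feature's signed contribution."""
    contributions = {name: w * xi
                     for name, w, xi in zip(FEATURES, WEIGHTS, x)}
    score = BIAS + sum(contributions.values())
    return score, contributions

score, why = predict_with_explanation([1.2, 0.4, 1.0])
# Present the largest drivers of the decision first.
for name, c in sorted(why.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{name}: {c:+.2f}")
print(f"score: {score:+.2f}")
```

For a linear model this decomposition is exact; for black-box models, techniques such as SHAP or LIME approximate the same kind of per-feature attribution.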

Here are some examples where explaining AI behaviour and decisions can be beneficial to businesses:

Fraud detection: For financial services companies, explaining why a transaction was flagged is crucial for their fraud team and internal stakeholders. The use of advanced algorithms in fraud detection makes Explainable AI essential to ensuring that all AI stakeholders understand and trust AI's decisions.

Healthcare: Explainable artificial intelligence is instrumental in turning black boxes into glass boxes, making the decision-making processes of machine learning models in healthcare visible. For example, doctors can explain a diagnosis to the patient, prepare a care plan and explain how the treatment will help. This can help build trust between patients and doctors.

Recruitment: Nowadays, companies use explainable AI to screen resumes and explain why a particular CV was selected. This can reduce bias and unfairness and build trust.

Loan approvals: Banks may be able to use explainable AI to explain why a loan was approved or denied. Sharing the reasons behind each decision helps applicants understand the outcome and builds trust in AI systems.

To conclude, we need AI to be trusted, transparent, unbiased, and justified to achieve all the benefits of AI while mitigating the risks. At the same time, the world needs AI applications that benefit every aspect of our lives and everyone in society. Explainable AI can build a strong relationship between humans and machines by explaining the decisions AI makes. This increases understanding between humans and machines by making decisions fair and transparent, making AI a truly responsible technology. The future of AI is therefore much more promising with explainable AI.

What next?

Explainable AI helps organisations understand and manage their AI solutions better. It also de-risks AI adoption in heavily regulated industries, and explainability is a core requirement of many AI regulations and frameworks. So, to ensure your AI project provides the desired benefits to your organisation, make sure your AI is explainable.

Seclea provides tools for your AI stakeholders (Data Scientists, Ethics Leads, Risk & Compliance Managers) to develop and deploy explainable AI solutions that meet relevant AI regulations. We are here to help; email us at [email protected] or fill out the short form.