What is AI bias, and how does it impact all of us?
Artificial intelligence (AI) combines computers and robust datasets to enable decision-making and problem-solving. Its transformative nature promises immense benefits: facial recognition, fraud detection, natural language processing (NLP) and smart household appliances are just some of the applications that can make life frictionless and comfortable. AI is also becoming a critical foundation of the digital economy, underpinning the data-driven innovation that is transforming our lives. However, there has been an alarming rise in reports of gender, race and other biases in such systems.
In layman’s terms, bias is an error in estimation, or the over- or under-representation of populations in sampling, that leads to harmful discrimination against certain groups of people based on gender, ethnicity, colour or region. An article published by Motherboard reported a classic example of AI bias: an image-upscaling algorithm, widely shared on Twitter, reconstructed a pixelated photo of Barack Obama (a black man) as a white man. Similarly, a popular algorithm used by many US-based healthcare systems was found to be biased against black patients.
How is AI bias a significant risk for businesses?
AI bias in business can arise for several reasons. One is incomplete data: AI trained on incomplete or unbalanced data does not represent the whole population. Existing data may also have historical prejudices encoded in it that are hard to recognise. Furthermore, during the development of an AI application, a lack of awareness of AI bias and fairness can lead to designs that discriminate against protected characteristics.
AI bias can cause reputational damage and a loss of opportunity and consumer trust. With 70% to 80% of market value coming from brand value, equity, intellectual capital and goodwill, organisations are especially vulnerable to reputational harm from AI bias. In late 2020, Google fired Dr Timnit Gebru, a scientist who had pointed out bias in language technology against disadvantaged and marginalised groups. The fallout damaged Google's reputation, prompting staff resignations and researchers turning down Google funding, and undermined trust in its AI work and its ability to support minority voices. Similarly, Microsoft's facial-recognition software was found to have a 21% error rate for darker-skinned women, and its Tay chatbot began to tweet racist and sexist comments.
According to Tech News World, a 2020 survey found that one in every three firms has been affected by AI bias, with participants reporting discrimination against people by gender (34%), age (32%), race (29%), sexual orientation (19%) and religion (18%). Furthermore, in 2021, Italy's privacy regulator fined two well-known firms, Deliveroo and Foodinho, after finding their algorithms biased against gig-economy workers.
It is essential to address the risks that biases pose to a business in the long run. Here are a few ways companies can take responsibility for mitigating AI bias:
Diverse teams and representative data:
Teams need to be more diverse in both demographics and skillset. Because many AI models perform poorly for women and minority groups due to a lack of available data, training data should be augmented to improve accuracy in decision-making and reduce unfair results.
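The point above can be made concrete with two quick checks data scientists often run. The following is a minimal sketch using entirely hypothetical records: it measures how well each group is represented in the training data, and computes a disparate impact ratio on model outcomes (the common "four-fifths rule" of thumb flags ratios below 0.8).

```python
# Minimal fairness checks on hypothetical data:
# 1) group representation in the training set,
# 2) disparate impact ratio of model outcomes.
from collections import Counter

# Hypothetical records: (group, model_approved)
records = [
    ("female", True), ("female", False), ("female", False),
    ("male", True), ("male", True), ("male", False),
    ("male", True), ("male", True), ("male", False),
]

# 1) Representation: each group's share of the training data.
counts = Counter(group for group, _ in records)
total = sum(counts.values())
representation = {g: n / total for g, n in counts.items()}

# 2) Disparate impact: lowest group approval rate divided by the
#    highest. A ratio below 0.8 is a common red flag.
def approval_rate(group):
    outcomes = [ok for g, ok in records if g == group]
    return sum(outcomes) / len(outcomes)

rates = {g: approval_rate(g) for g in counts}
disparate_impact = min(rates.values()) / max(rates.values())

print(representation)
print(round(disparate_impact, 2))
```

On this toy data the female group is both under-represented (a third of the records) and approved half as often, so the disparate impact ratio falls well below the 0.8 threshold, which is exactly the signal that more representative training data is needed.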
Transparent decision-making:
A company must make automated decision-making transparent, so the user knows how and why a particular decision was made. This helps build good relationships between consumers and the business.
Feedback loop for customers:
High-risk AI applications need a way to collect consumer feedback on automated decisions. An email address, phone number or mailing address could help businesses stay accountable to their consumers.
Recognising and providing incentives:
One approach to reducing bias is for governments or policymakers to recognise and incentivise companies that foster a fairer environment. This creates a sense of appreciation for the business and can provide public-facing acknowledgement of best practice.
Awareness of technical limitations:
Since bias can never be fully eradicated from datasets or models, it is essential to understand and recognise these limitations. Awareness, and keeping humans in the loop, can help mitigate bias.
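Keeping humans in the loop can be as simple as a routing rule. The sketch below is a hypothetical illustration (the threshold and function names are assumptions, not a prescribed design): automated decisions with low model confidence, or involving a flagged sensitive case, are escalated to a human reviewer instead of being actioned automatically.

```python
# A minimal human-in-the-loop gate (hypothetical threshold):
# low-confidence or sensitive decisions go to a human reviewer.

REVIEW_THRESHOLD = 0.85  # assumed confidence cut-off for auto-action

def route_decision(prediction: str, confidence: float, sensitive: bool) -> str:
    """Return 'auto' to act on the model output, 'human' to escalate."""
    if sensitive or confidence < REVIEW_THRESHOLD:
        return "human"
    return "auto"

print(route_decision("approve", 0.95, sensitive=False))  # auto
print(route_decision("reject", 0.70, sensitive=False))   # human
print(route_decision("approve", 0.99, sensitive=True))   # human
```

The design choice here is deliberate asymmetry: the system never needs permission to escalate, only to act automatically, which keeps residual model limitations under human oversight.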
How to calculate the cost of AI Bias?
Bias in AI not only damages reputation; it has more significant consequences, such as losing customers, revenue and employees, paying legal fees, or even being fined for the damage caused by biased technology. A report published in 2022 by DataRobot reveals that one in three (36%) organisations surveyed have experienced challenges or direct business impact due to AI bias in their algorithms, including lost revenue (62%), lost customers (61%), lost employees (43%), legal fees incurred from lawsuits or legal action (35%) and damaged brand reputation or media backlash (6%). Organisations should therefore proactively identify, track and manage AI bias to mitigate AI risk.
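One practical way to put a number on this exposure is the standard probability-times-impact formula used in risk registers. The sketch below uses entirely hypothetical figures (the 36% incident likelihood echoes the survey above; the cost estimates are invented for illustration) to show the arithmetic.

```python
# Illustrative expected-cost calculation for AI bias risk.
# All figures are hypothetical; replace with your own estimates.

incident_probability = 0.36   # assumed chance of a bias incident this year
impacts = {                   # assumed cost if an incident occurs (GBP)
    "lost_revenue": 500_000,
    "legal_fees": 150_000,
    "remediation": 80_000,
}

impact_if_incident = sum(impacts.values())
expected_annual_cost = incident_probability * impact_if_incident

print(f"Impact if an incident occurs: £{impact_if_incident:,}")
print(f"Expected annual cost: £{expected_annual_cost:,.0f}")
```

Even rough estimates like these make the trade-off visible: the expected annual cost of doing nothing can be compared directly against the cost of bias auditing and mitigation.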
Can AI Bias be Accepted as Residual Risk?
No. Residual risk related to AI bias is not acceptable, given the mandatory requirements of the EU AI Act (and other AI regulations currently under development or consideration across the globe), which make companies liable for managing the fairness of their AI applications.
In a nutshell, AI offers many benefits for businesses. Companies should align with existing guidance such as ISO/IEC DIS 23894 (Information technology — Artificial intelligence — Guidance on risk management), a standard that helps organisations manage risk by taking control of residual risk. Ignoring residual risk, by contrast, could perpetuate the risk cycle and damage the company's reputation. When a company commits to responsibly designed, fair AI technology, it can avoid the unfortunate consequences of discrimination and unethical applications and build trust between consumers and the business.
How to effectively manage the Risk of AI Bias?
Bias in AI solutions is a significant issue and risk for your organisation. All AI regulations and frameworks require fairness and non-discrimination. So, to ensure your AI project delivers the desired benefits to your organisation, make sure your AI is fair.
Seclea provides tools for your AI stakeholders (Data Scientists, Ethics Leads, Risk & Compliance Managers) to ensure fair and transparent AI solutions that meet relevant AI regulations. We are here to help; email us at firstname.lastname@example.org or fill out the short form.