What is AI bias, and how does it impact all of us?
Artificial intelligence (AI) combines computers and robust datasets to enable decision-making and problem-solving. The transformative nature of AI promises immense benefits: facial recognition, fraud detection, natural language processing (NLP) and smart household appliances are just some of the applications that can make life frictionless and comfortable. AI is also becoming a critical foundation of the digital economy, underpinning data-driven innovation that is transforming our lives. However, there has been an alarming rise in reports of gender, race and other biases in such systems.
In layman’s terms, bias is error in estimation, or the over- or under-representation of populations in sampling, leading to harmful discrimination against certain groups of people based on gender, ethnicity, colour or region. An article published by Motherboard in 2019 reported that Twitter’s algorithm converted a picture of Barack Obama, a Black man, into a white man – a classic example of AI bias. Similarly, a popular algorithm used by many US-based healthcare systems was found to be biased against Black patients.
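To make the sampling notion of bias concrete, here is a minimal sketch of the "disparate impact" ratio, a common first check for group-level discrimination in automated decisions. The approval numbers below are entirely made up for illustration:

```python
def selection_rate(outcomes):
    """Fraction of favourable outcomes (1 = approved, 0 = denied) in a group."""
    return sum(outcomes) / len(outcomes)

# Hypothetical loan-approval outcomes for two demographic groups
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 6 of 8 approved (75%)
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # 3 of 8 approved (37.5%)

# Disparate impact: selection rate of one group relative to the other's
ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"Disparate impact ratio: {ratio:.2f}")

# A common rule of thumb (the US EEOC "four-fifths rule") flags
# ratios below 0.8 as potentially discriminatory.
if ratio < 0.8:
    print("Potential bias: group B is approved far less often than group A.")
```

Real audits use many more metrics (equalised odds, calibration, and so on), but even a check this simple would surface cases like the healthcare example above.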
How is AI bias a significant risk for businesses?
AI bias in business can occur for multiple reasons. One is incomplete data: AI is trained on incomplete or unbalanced data that does not represent the whole population. Existing data might also have historic prejudices encoded in it that are hard to recognise. Furthermore, during the development of an AI application, a lack of awareness about AI bias and fairness can lead to designs that discriminate against protected characteristics.
AI bias can cause reputational damage and the loss of opportunity and consumer trust. As 70% to 80% of a company’s market value comes from brand value, equity, intellectual capital and goodwill, organisations are especially vulnerable to reputational losses from AI bias. In December 2020, Google fired Dr Timnit Gebru, a prominent AI ethics researcher, after she pointed out bias in language technology against disadvantaged and marginalised groups. The fallout damaged Google’s reputation, with staff resigning in protest and researchers turning down Google funding, and Google has since struggled to build trust in its AI work and its ability to support minority voices. Microsoft faced similar criticism when its facial-recognition system showed a 21% error rate for darker-skinned women and its Tay chatbot began tweeting racist and sexist comments.
According to Tech News World, a 2020 survey found that one in every three firms was affected by AI bias, with participants reporting discrimination against people by gender (34%), age (32%), race (29%), sexual orientation (19%) and religion (18%). Furthermore, in 2021, Italy’s privacy regulator fined two well-known firms, Deliveroo and Foodinho, after finding their algorithms biased against gig-economy workers.
It is essential to address the risks posed by biases that might impact the business in the long run. Here are a few ways companies can take responsibility for mitigating AI bias:
Building diverse teams:
Teams need to be diverse in both demographics and skillset. Many AI models perform poorly because less data is available for women and minority groups; this needs to be addressed by adding training data to improve the accuracy of decision-making and reduce unfair results.
Transparent automated decision-making:
A company must make its automated decision-making transparent, so the user knows how and why a particular decision was made. This helps build good consumer and business relationships.
Feedback loop for customers:
High-risk AI applications need a way to collect feedback from consumers about automated decisions. An email address, phone number or mailing address can help businesses stay accountable to their consumers.
Recognising and providing incentives:
One approach to reducing bias is for governments or policymakers to identify companies encouraging a fairer environment and to recognise them or provide incentives. This can create a sense of appreciation for the business and give public-facing acknowledgement of best practice.
Awareness of technical limitations:
Since bias can never be fully eradicated from datasets or models, it is essential to understand and recognise these limitations. Awareness, and keeping humans in the loop, can help mitigate bias.
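The first mitigation above – compensating for under-represented groups in training data – is often implemented as sample reweighting when more data simply cannot be collected. A minimal sketch, using hypothetical group labels:

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Weight each sample inversely to the size of its demographic group,
    so under-represented groups are not drowned out during training.
    Weights are scaled so they average to 1 across the dataset."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

# Hypothetical training set: 6 samples from a majority group, 2 from a minority
groups = ["majority"] * 6 + ["minority"] * 2
weights = inverse_frequency_weights(groups)
print(weights)  # majority samples: 8/(2*6) ≈ 0.67 each; minority: 8/(2*2) = 2.0
```

Most training libraries accept such per-sample weights (for example via a `sample_weight` argument), which makes this one of the cheapest interventions to try.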
How do you calculate the cost of AI bias?
Bias in AI not only damages a company’s reputation but has more significant consequences, such as losing customers, revenue and employees, and paying legal fees or even fines for the damage caused by biased technology. A report published in 2022 by DataRobot reveals that one in three (36%) organisations surveyed have experienced challenges or direct business impact due to AI bias in their algorithms. These impacts included lost revenue (62%), lost customers (61%), lost employees (43%), legal fees incurred due to a lawsuit or legal action (35%) and damaged brand reputation or media backlash (6%). Therefore, organisations should proactively identify, track and manage AI bias to effectively mitigate AI risk.
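One way to turn those survey percentages into a rough planning number is a simple expected-cost calculation. The probabilities below come from the figures quoted above; the pound values are entirely hypothetical placeholders an organisation would replace with its own estimates:

```python
# Illustrative expected-cost estimate for AI bias.
# Each impact: (probability it occurs given a bias incident, assumed cost in GBP)
impacts = {
    "lost revenue":   (0.62, 500_000),
    "lost customers": (0.61, 300_000),
    "lost employees": (0.43, 150_000),
    "legal fees":     (0.35, 200_000),
    "brand damage":   (0.06, 400_000),
}
p_incident = 0.36  # share of organisations reporting a bias incident

# Expected cost = P(incident) * sum of (P(impact) * cost of impact)
expected_cost = p_incident * sum(p * cost for p, cost in impacts.values())
print(f"Expected cost of AI bias: £{expected_cost:,.0f}")
```

Even with crude inputs like these, putting a number on the risk makes it far easier to justify budget for bias audits and mitigation.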
Can AI Bias be Accepted as Residual Risk?
No: residual risk related to AI bias is not acceptable under ISO 31000:2018, which requires companies to manage the security of their assets. Residual risk is the threat or vulnerability that remains after remediation efforts have been implemented.
The residual risk could be lessened using the following steps:
- Doing nothing: An organisation can accept the residual risk by taking no action. However, this could leave the organisation exposed to significant threats. ISO 31000:2018 helps organisations assess the safety of assets before and after sharing them with vendors.
- Identify regulations: Identify the governance, risk and compliance (GRC) regulations in your country and align the business with them, such as the Sarbanes-Oxley Act, HIPAA, ISO 27001, the European Union GDPR, the European Union Artificial Intelligence Act (AIA) and ISO 31000:2018.
- Taking action: First, categorise the risk as acceptable or unacceptable. If the risk is above the acceptable level, it must be reduced to an adequate level. However, if the cost of reducing the residual risk is too high, a company may be tempted to skip mitigation, disregarding its existing policies.
- Transfer the risk: Companies can buy insurance plans and transfer risk to a third party. Even after purchasing insurance, some risk remains within the organisation; for example, the organisation must pay if the insurance company denies the claim or goes bankrupt.
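The choice among those options can be sketched as a toy decision rule, with hypothetical thresholds standing in for an organisation's real risk appetite and mitigation budget:

```python
def residual_risk_action(residual_risk, tolerance, mitigation_cost, budget):
    """Toy decision rule for handling residual risk (illustrative only).

    residual_risk and tolerance are on the same arbitrary risk scale;
    mitigation_cost and budget are in the same currency."""
    if residual_risk <= tolerance:
        return "accept"    # within risk appetite: doing nothing is defensible
    if mitigation_cost <= budget:
        return "reduce"    # take action to bring the risk to an adequate level
    return "transfer"      # too costly to fix in-house, e.g. buy insurance

print(residual_risk_action(2, 5, 10_000, 50_000))   # accept
print(residual_risk_action(8, 5, 10_000, 50_000))   # reduce
print(residual_risk_action(8, 5, 90_000, 50_000))   # transfer
```

In practice these thresholds come from the organisation's risk register and GRC process rather than hard-coded numbers, but the ordering of the checks mirrors the steps above.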
In a nutshell, AI has many benefits for businesses. Companies should follow existing policies to align with ISO 31000:2018 and especially with ISO/IEC DIS 23894 (Information technology — Artificial intelligence — Guidance on risk management), a standard that helps organisations manage risk by taking control of residual risk. Skipping over residual risk could keep the risk cycle turning inside the company and damage its reputation. When companies start to design responsible, fair AI technology, they will avoid the unfortunate consequences of discrimination and unethical applications and build trust between consumers and businesses.
How do you effectively manage the risk of AI bias?
Bias in AI solutions is a significant issue and a risk for your organisation. All AI regulations and frameworks require fairness and non-discrimination. So, to ensure your AI project provides the desired benefits to your organisation, make sure your AI is fair.
Seclea provides tools for your AI stakeholders (Data Scientists, Ethics Leads, Risk & Compliance Managers) to ensure fair and transparent AI solutions that meet relevant AI regulations. We are here to help; email us at [email protected] or fill out the short form.