
Addressing AI Bias Head-On: It’s a Human Job

August 21, 2020

Researchers working directly with machine learning models face the challenge of minimizing unjust bias.

Artificial intelligence systems derive their power from learning to perform their tasks directly from data. As a result, AI systems are at the mercy of their training data and, in most cases, cannot learn anything beyond what that data contains.

Image: momius – stock.adobe.com

Data by itself has some fundamental problems: it is noisy, almost never complete, and dynamic, continually changing over time. Noise can manifest in the data in many ways: incorrect labels, incomplete labels, or misleading correlations. Because of these problems, most AI systems must be taught very carefully how to make decisions, act, or respond in the real world. This "careful teaching" involves three stages.

Stage 1: In the first stage, the available data must be carefully modeled to understand its underlying distribution despite its incompleteness. That incompleteness can make the modeling task nearly impossible, and the scientist's ingenuity comes into play in making sense of the incomplete data and modeling the distribution beneath it. This data modeling step can include data pre-processing, augmentation, labeling, and partitioning, among other steps. In this first stage of "care," the AI scientist also partitions the data into special subsets with the express intent of minimizing bias in the training step. Because this stage requires solving an ill-defined problem, it tends to evade rigorous solutions; the sketch below shows one common partitioning safeguard.
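
To make the partitioning idea concrete, here is a minimal sketch in Python, assuming a scikit-learn workflow and a hypothetical dataset with a sensitive "group" column (both are illustrative, not a description of any particular production pipeline). Stratifying on the label and the sensitive attribute together keeps every (label, group) combination proportionally represented in each partition.

    import pandas as pd
    from sklearn.model_selection import train_test_split

    # Hypothetical dataset: one feature, a sensitive attribute ("group"),
    # and a binary label. Group "b" is under-represented.
    df = pd.DataFrame({
        "feature": range(1000),
        "group": ["a"] * 800 + ["b"] * 200,
        "label": [0, 1] * 500,
    })

    # Stratify on (label, group) jointly so no partition over- or
    # under-represents any combination.
    strata = df["label"].astype(str) + "_" + df["group"]
    train, test = train_test_split(
        df, test_size=0.2, stratify=strata, random_state=42
    )

    # Sanity check: group proportions should match across partitions.
    print(train["group"].value_counts(normalize=True))
    print(test["group"].value_counts(normalize=True))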

Stage 2: The second stage of "care" involves carefully training the AI system to minimize bias, with detailed training strategies that keep the training unbiased from the very beginning. In many cases, this step is left to standard mathematical libraries such as TensorFlow or PyTorch, which approach training from a purely mathematical standpoint, without any understanding of the human problem being addressed. As a result, many applications built on these industry-standard libraries miss the opportunity to use training strategies tailored to controlling bias. Attempts are being made to build bias-mitigation steps and bias-discovery tests into these libraries, but they fall short because they are not customized to a particular application. Standard training processes are therefore likely to exacerbate the problems that the incompleteness and dynamic nature of data already create. With enough ingenuity, however, scientists can devise careful training strategies that minimize bias in this step; one such strategy is sketched below.
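
As one example of such a strategy (an illustration of a common technique, not the only option), the sketch below reweights a PyTorch loss by inverse group frequency, so that an under-represented group cannot simply be ignored by the optimizer. The "group" tensor stands in for a hypothetical sensitive attribute.

    import torch
    import torch.nn as nn

    # Toy batch: 2-feature inputs, binary labels, and a hypothetical
    # sensitive-group id per example (group 1 is under-represented).
    x = torch.randn(8, 2)
    y = torch.tensor([0., 1., 0., 1., 0., 1., 1., 0.])
    group = torch.tensor([0, 0, 0, 0, 0, 0, 1, 1])

    model = nn.Linear(2, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    # Inverse-frequency weights: rarer groups count more in the loss,
    # so the optimizer cannot "win" by fitting only the majority group.
    counts = torch.bincount(group).float()   # tensor([6., 2.])
    weights = (1.0 / counts)[group]
    weights = weights / weights.mean()

    loss_fn = nn.BCEWithLogitsLoss(reduction="none")
    for step in range(100):
        opt.zero_grad()
        logits = model(x).squeeze(1)
        loss = (loss_fn(logits, y) * weights).mean()
        loss.backward()
        opt.step()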

Stage 3: Finally, in the third stage of care, data drifts continuously in a live production system, so AI systems must be carefully monitored by other systems or by humans to capture performance drift and to trigger the correction mechanisms that nullify it. Researchers must develop the right metrics, mathematical tools, and monitoring systems to address this performance drift even when the initial AI system is minimally biased; a minimal monitoring sketch follows.
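
In its simplest form, that monitoring can be a distribution test. The sketch below, an assumed setup using SciPy's two-sample Kolmogorov-Smirnov test, compares live model scores against a reference distribution captured at training time and flags drift when the two diverge.

    import numpy as np
    from scipy.stats import ks_2samp

    # Reference scores captured at validation time vs. scores collected
    # from the live production system (simulated here with a mean shift).
    rng = np.random.default_rng(0)
    reference_scores = rng.normal(loc=0.0, scale=1.0, size=5000)
    live_scores = rng.normal(loc=0.3, scale=1.0, size=5000)

    # A small p-value signals that the live score distribution no longer
    # matches the reference, i.e. the system has drifted.
    stat, p_value = ks_2samp(reference_scores, live_scores)
    if p_value < 0.01:
        print(f"Drift detected (KS statistic = {stat:.3f}); trigger review.")
    else:
        print("No significant drift detected.")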

Two other challenges

In addition to the biases within an AI system that can arise at each of the three stages outlined above, there are two other challenges with AI systems that can cause unknown biases in the real world.

The first is a major limitation of current-day AI systems: they are almost universally incapable of higher-level reasoning. The exceptional successes exist only in controlled environments with well-defined rules, such as AlphaGo. This lack of higher-level reasoning greatly limits an AI system's ability to self-correct in a natural or interpretive manner. One may argue that AI systems could develop their own methods of learning and understanding that need not mirror the human approach, but this raises concerns about obtaining performance guarantees for such systems.

The second challenge is their inability to generalize to new circumstances. In the real world, circumstances constantly evolve, yet current-day AI systems continue to make decisions and act from their previous, incomplete understanding. They are incapable of applying concepts from one domain to a neighboring domain, and this lack of generalizability can create unknown biases in their responses. Here again the ingenuity of scientists is required to protect against surprises in the systems' responses. One protection mechanism is to build confidence models around such AI systems. The role of these confidence models is to solve the "know when you don't know" problem: an AI system can be limited in its abilities yet still be deployed in the real world, as long as it recognizes when it is unsure and asks for help from human agents or other systems. Designed and deployed as part of the AI system, these confidence models can keep unknown biases from wreaking uncontrolled havoc in the real world. A minimal sketch of this idea follows.
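
A production confidence model can be a separately trained estimator, but the core idea fits in a few lines (the threshold and inputs here are hypothetical): predict only when confidence clears a threshold, otherwise abstain and route the case to a human.

    import numpy as np

    def predict_with_abstention(probs: np.ndarray, threshold: float = 0.9):
        """Selective prediction: return a class only when the model is
        confident; otherwise abstain and defer to a human or fallback.

        probs: (n_samples, n_classes) softmax outputs from any classifier.
        """
        confidence = probs.max(axis=1)
        labels = probs.argmax(axis=1)
        # -1 marks "abstain: route this case to a human reviewer"
        return np.where(confidence >= threshold, labels, -1)

    # Hypothetical softmax outputs for three inputs.
    probs = np.array([
        [0.97, 0.03],   # confident -> predict class 0
        [0.55, 0.45],   # uncertain -> abstain
        [0.10, 0.90],   # confident -> predict class 1
    ])
    print(predict_with_abstention(probs))  # [ 0 -1  1]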

Finally, it is important to recognize that biases come in two flavors: known and unknown. Thus far we have explored known biases, but AI systems can also suffer from unknown ones. These are much harder to protect against, but AI systems designed to detect hidden correlations can discover them: when supplementary AI systems are used to evaluate the responses of the primary AI system, they can surface biases no one thought to look for. This approach is not yet widely researched, but it may pave the way for self-correcting systems; a toy version of such an audit is sketched below.
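
As a toy illustration of such a supplementary audit (the data and the flagging threshold are invented purely for exposition), the sketch below scans a primary model's scores for unexpected correlation with attributes the model should not depend on.

    import numpy as np

    # Simulated audit: scan model outputs for correlation with
    # attributes the model is supposed to ignore.
    rng = np.random.default_rng(1)
    n = 2000
    age = rng.integers(18, 80, size=n)
    postcode_region = rng.integers(0, 10, size=n)
    # Scores with a hidden dependence on age, plus noise.
    scores = 0.5 + 0.004 * (age - 50) + rng.normal(0, 0.05, size=n)

    for name, attr in {"age": age, "postcode_region": postcode_region}.items():
        r = np.corrcoef(scores, attr)[0, 1]
        flag = "FLAG" if abs(r) > 0.1 else "ok"
        print(f"{name}: correlation with score = {r:+.3f} [{flag}]")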

In conclusion, while the current generation of AI systems has proven extremely capable, these systems are far from perfect, especially when it comes to minimizing bias in their decisions, actions, and responses. We can, however, still take the right steps to protect against known biases.

Mohan Mahadevan is VP of Research at Onfido. He was previously Head of Computer Vision and Machine Learning for Robotics at Amazon and earlier led research efforts at KLA-Tencor. He is an expert in computer vision, machine learning, AI, and data and model interpretability, and he holds over 15 patents in areas spanning optical architectures, algorithms, system design, automation, robotics, and packaging technologies. At Onfido, he leads a team of specialist machine learning scientists and engineers based in London.
