
Building responsible and trustworthy AI

June 17, 2020

As the use of AI becomes ever more pervasive, data scientists and organisations merely ‘doing their best’ will no longer be sufficient. Scott Zoldi, AI expert at FICO, explains that with the rise of AI advocates, responsible AI will become the expectation and the standard.

In recent years, data and AI have become widely used across a multitude of industries to inform and shape strategies and services, from healthcare and retail to banking and insurance. Most recently, AI has come to the fore in contact tracing in the battle against coronavirus.

However, increasing volumes of digitally generated data, coupled with the need for automated decisioning enabled by AI, are posing new challenges for businesses and governments, with growing scrutiny of the reasoning behind AI decision-making algorithms.

As AI takes decision-making further away from the individuals a decision affects, decisions can appear to become more callous, perhaps even careless. It is not uncommon for organisations to cite data and algorithms as the justification for unpopular decisions, and this is a cause for concern given that even respected leaders make mistakes.

Some examples include Microsoft’s racist and offensive online chatbot in 2016, Amazon’s AI recruitment system that discriminated against female applicants in 2018, and the Tesla that crashed in Autopilot mode in 2019 after mistaking a truck for an overhead road sign.

In addition to the potential for incorrect decision-making, there is also the issue of AI bias. As a result, new regulations have been introduced to protect consumer rights and keep a close watch on AI developments.

The pillars of responsible AI

Organisations need to enforce responsible AI now. To do this, they must set and strengthen their standards around three pillars of responsible AI: explainability, accountability and ethics. With these in place, organisations of all types can be confident they are making sound digital decisions.

Explainability: A business relying on an AI decision system should ensure it has in place an algorithmic construct that captures the relationships between the decision variables used to arrive at a business decision. With access to this information, a business can explain why the model made the decision it did – for example, why it flagged a transaction as a high risk of fraud. This explanation can then be used by human analysts to further investigate the implications and accuracy of the decision.
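As a minimal sketch of what such an explanation might look like, consider a simple linear fraud model whose per-variable contributions serve as reason codes for each decision. The feature names, data and threshold below are illustrative assumptions, not any particular production system:

```python
# A minimal sketch of decision explainability via per-feature reason codes.
# Feature names and data are illustrative, not a real fraud system.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["amount_zscore", "merchant_risk", "velocity_1h", "geo_mismatch"]

# Toy training data: rows are transactions, label 1 = confirmed fraud.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X @ np.array([1.5, 1.0, 0.8, 0.5]) + rng.normal(size=500) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(transaction):
    """Return the fraud score plus the top per-feature contributions (reason codes)."""
    contributions = model.coef_[0] * transaction                     # signed contribution of each variable
    score = model.predict_proba(transaction.reshape(1, -1))[0, 1]    # probability of fraud
    top = sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1]))
    return score, top[:2]                                            # top two drivers of the decision

score, reasons = explain(np.array([2.5, 1.2, 0.3, -0.1]))
print(f"fraud score {score:.2f}, driven mainly by {reasons}")
```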

Accountability: Machine learning models must be built properly, with a focus on machine learning’s limitations and careful thought given to the algorithms used. The technology must be transparent and compliant. Thoughtfulness in the development of models ensures the decisions make sense – for example, that scores adapt appropriately with increasing risk, as the sketch below illustrates.
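One concrete way to guarantee that scores adapt appropriately with risk is to constrain the model to be monotonic in its risk drivers. This sketch uses gradient-boosted trees with monotone constraints; the feature names and data are illustrative assumptions:

```python
# A sketch of enforcing monotonicity so scores rise with risk.
# Columns (assumed): debt_ratio, missed_payments, account_age.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(1)
X = rng.uniform(size=(1000, 3))
y = (X[:, 0] + X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.2, size=1000) > 0.8).astype(int)

# Constrain the model: risk must not decrease as debt_ratio or missed_payments
# increase (+1), and must not increase with account_age (-1).
model = XGBClassifier(monotone_constraints="(1,1,-1)", n_estimators=50)
model.fit(X, y)

# Sanity check: sweeping debt_ratio upward should never lower the score.
probe = np.tile([[0.5, 0.5, 0.5]], (11, 1))
probe[:, 0] = np.linspace(0, 1, 11)
scores = model.predict_proba(probe)[:, 1]
assert np.all(np.diff(scores) >= -1e-9), "score should be monotone in debt_ratio"
```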

Beyond explainable AI, there is the concept of humble AI – ensuring that the model is used only on data examples similar to the data on which it was trained. Where that is not the case, the model may not be trustworthy, and one should downgrade to an alternate algorithm.
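A minimal sketch of humble AI might look like the following, assuming a Mahalanobis-distance test for similarity to the training data and a conservative fallback score; the threshold and fallback rule are illustrative assumptions:

```python
# A sketch of "humble AI": score with the primary model only when the input
# resembles the training data; otherwise downgrade to a simpler fallback.
import numpy as np

class HumbleScorer:
    def __init__(self, model, X_train, threshold=3.0):
        self.model = model
        self.mean = X_train.mean(axis=0)
        self.cov_inv = np.linalg.inv(np.cov(X_train, rowvar=False))
        self.threshold = threshold  # illustrative cut-off, tuned in practice

    def in_distribution(self, x):
        # Mahalanobis distance of x from the training distribution.
        d = x - self.mean
        return float(np.sqrt(d @ self.cov_inv @ d)) < self.threshold

    def score(self, x):
        if self.in_distribution(x):
            return self.model.predict_proba(x.reshape(1, -1))[0, 1]
        # Downgrade: a conservative rule-based fallback for unfamiliar inputs.
        return 0.5

# Hypothetical usage: scorer = HumbleScorer(fraud_model, X_train); scorer.score(x_new)
```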

Ethics: Building on explainability and accountability, ethical models must be tested and any discrimination removed. Explainable machine learning architectures allow extraction of the non-linear relationships that typically hide the inner workings of most machine learning models. These non-linear relationships must be tested, because they are learned from the data on which the model was trained – and that data is all too often implicitly full of societal biases. Ethical models ensure that bias and discrimination are explicitly tested for and removed.
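One simple form of explicit bias testing is to compare favourable-outcome rates across groups. The sketch below applies the ‘four-fifths’ disparate-impact rule of thumb; the group labels, data and 0.8 threshold are illustrative assumptions, not a legal standard:

```python
# A sketch of an explicit bias test: compare approval rates across groups.
import numpy as np

def disparate_impact(decisions, groups, protected, reference):
    """Ratio of favourable-outcome rates: protected group vs reference group."""
    rate_p = decisions[groups == protected].mean()
    rate_r = decisions[groups == reference].mean()
    return rate_p / rate_r

decisions = np.array([1, 1, 1, 0, 1, 1, 0, 0, 0, 0])   # 1 = approved (toy data)
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

ratio = disparate_impact(decisions, groups, protected="b", reference="a")
if ratio < 0.8:  # four-fifths rule of thumb
    print(f"potential disparate impact: ratio {ratio:.2f} is below 0.8")
```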

Forces that enforce responsible AI

Building responsible AI models takes time and painstaking work, and meticulous ongoing scrutiny is crucial to keep them responsible. This scrutiny must include regulation, audit and advocacy.

Regulations are important for setting the standard of conduct and the rule of law for the use of algorithms. In the end, however, regulations are either met or not, and demonstrating alignment with them requires audit.

Demonstrating compliance with regulation requires a framework for creating auditable models and modelling processes. The audit materials include the model development process, the algorithms used, bias detection tests, and demonstration that decisions and scores are reasonable. Today, audits of the model development process are carried out in haphazard ways.

New blockchain-based model development audit systems are being introduced to enforce and record immutable model development standards, testing methods and results. They are also being used to record in detail data scientists’ contributions and management’s approvals throughout the model development cycle.
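To illustrate the idea, a hash-chained audit log makes tampering with recorded development steps detectable, since altering any entry breaks every subsequent hash. This is a minimal single-process sketch of the concept, not a distributed ledger:

```python
# A minimal sketch of an immutable, hash-chained audit trail for model
# development events. Event fields are illustrative assumptions.
import hashlib, json, time

class AuditChain:
    def __init__(self):
        self.entries = []

    def record(self, event: dict):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"event": event, "time": time.time(), "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self) -> bool:
        # Recompute every hash; any tampering breaks the chain.
        for i, e in enumerate(self.entries):
            body = {k: e[k] for k in ("event", "time", "prev")}
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            if i > 0 and e["prev"] != self.entries[i - 1]["hash"]:
                return False
        return True

chain = AuditChain()
chain.record({"step": "bias test", "result": "passed", "approver": "lead_ds"})
chain.record({"step": "model sign-off", "approver": "management"})
assert chain.verify()
```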

Looking to the future, organisations ‘doing their best’ with data and AI will not be enough. With the rise of AI advocates, and the real suffering inflicted by the wrong outcomes of AI systems, responsible AI will soon be the expectation and the standard across the board and around the world.

Organisations must enforce responsible AI now, strengthening and setting their standards of AI explainability, accountability and ethics to ensure they behave responsibly when making digital decisions.

The author is Dr. Scott Zoldi, chief analytics officer at FICO.

About the author

Dr. Scott Zoldi is chief analytics officer at FICO. While at FICO, Scott has authored 110 patents, with 56 granted and 54 pending. He is actively involved in the development of new analytic products and Big Data analytics applications, many of which leverage new streaming analytic innovations such as adaptive analytics, collaborative profiling and self-calibrating analytics. Scott serves on two boards of directors: Software San Diego and the Cyber Centre of Excellence. He received his Ph.D. in theoretical and computational physics from Duke University.

