As the use of AI becomes even more pervasive, data scientists and organisations just ‘doing their best’ won’t be sufficient. Scott Zoldi, AI expert at FICO, explains that with the rise of AI advocates, responsible AI will become the expectation and the standard.
In recent years, data and AI have become widely used across a multitude of industries to inform and shape strategies and services, from healthcare and retail to banking and insurance. Most recently, AI has come to the fore in contact tracing in the battle against coronavirus.
However, increasing volumes of digitally generated data, coupled with the need for automated decisioning enabled by AI, are posing new challenges for businesses and governments, with a growing focus on the reasoning behind AI decision-making algorithms.
As AI takes decision-making further away from the individuals a decision affects, decisions can appear to become more callous, perhaps even careless. It is not uncommon for organisations to cite data and algorithms as the justification for unpopular decisions, and this is a cause for concern given that even respected industry leaders have made high-profile mistakes.
Some examples include Microsoft’s racist and offensive online chatbot in 2016, Amazon’s AI recruitment system that discriminated against female applicants in 2018, and the Tesla that crashed while on Autopilot in 2019 after mistaking a truck for an overhead street sign.
In addition to the potential for incorrect decision-making, there is also the issue of AI bias. As a result, new regulations have been introduced to protect consumer rights and keep a close watch on AI developments.
The pillars of responsible AI
Organisations need to enforce responsible AI now. To do this, they must set and strengthen their standards around three pillars of responsible AI: explainability, accountability and ethics. With these in place, organisations of all types can be confident they are making sound digital decisions.
Explainability: A business relying on an AI decision system should ensure it has in place an algorithmic construct that captures the relationships between the decision variables used to arrive at a business decision. With access to this information, a business can explain why the model made the decision it did, for example why it flagged a transaction as high risk of fraud. This explanation can then be used by human analysts to further investigate the implications and accuracy of the decision.
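As a rough illustration of that idea, the sketch below explains a single fraud score by reading off each feature’s contribution in a simple linear model. It is a minimal, hypothetical example, not FICO’s production approach; the feature names, data and model choice are invented for the illustration.

```python
# Illustrative sketch: explain one fraud score via per-feature contributions
# of a linear model. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["amount_zscore", "merchant_risk", "time_since_last_txn", "foreign_country"]

# Hypothetical training data: rows are transactions, y is the fraud label.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 4))
y_train = (X_train[:, 0] + 2 * X_train[:, 1] + rng.normal(size=1000) > 2).astype(int)

model = LogisticRegression().fit(X_train, y_train)

def explain(transaction):
    """Return each feature's contribution to the log-odds of fraud, largest first."""
    contributions = model.coef_[0] * transaction
    order = np.argsort(-np.abs(contributions))
    return [(feature_names[i], round(float(contributions[i]), 3)) for i in order]

# A transaction the model scores as high risk, and the reasons behind the score.
flagged = np.array([2.5, 1.8, -0.2, 1.0])
print("P(fraud) =", round(float(model.predict_proba([flagged])[0, 1]), 3))
print(explain(flagged))
```

The same pattern, listing the drivers behind an individual score, is what gives a human analyst something concrete to investigate.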
Accountability: Machine learning models must be built properly, with a clear understanding of machine learning’s limitations and careful thought given to the algorithms used. Technology must be transparent and compliant. Thoughtfulness in the development of models ensures the decisions make sense, for example that scores adapt appropriately as risk increases.
Beyond explainable AI, there is the concept of humble AI: ensuring that the model is used only on data examples similar to those on which it was trained. Where that is not the case, the model may not be trustworthy and should be downgraded to an alternative algorithm.
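One simple way to picture humble AI is a scorer that only trusts the main model when an input looks like the training data, and otherwise falls back to a simpler model. The sketch below uses a crude per-feature distance check; the threshold, models and data are illustrative assumptions, not a prescribed method.

```python
# Illustrative sketch of "humble AI": use the primary model only on inputs that
# resemble its training data, otherwise fall back to a simpler, more conservative model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_train = rng.normal(size=(1000, 4))
y_train = (X_train[:, 0] + X_train[:, 1] + rng.normal(size=1000) > 1).astype(int)

primary = GradientBoostingClassifier().fit(X_train, y_train)
fallback = LogisticRegression().fit(X_train, y_train)

mean, std = X_train.mean(axis=0), X_train.std(axis=0)

def humble_score(x, max_z=4.0):
    """Trust the primary model only if every feature is within max_z std devs of training data."""
    in_domain = bool(np.all(np.abs((x - mean) / std) < max_z))
    model = primary if in_domain else fallback
    return float(model.predict_proba([x])[0, 1]), "primary" if in_domain else "fallback"

print(humble_score(np.array([0.5, -0.2, 1.0, 0.1])))   # in-domain: primary model
print(humble_score(np.array([12.0, 0.0, 0.0, 0.0])))   # far outside training data: fallback
```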
Ethics: Building on explainability and accountability, ethical models must have been tested and any discrimination removed. Explainable machine learning architectures allow extraction of the non-linear relationships that typically hide the inner workings of most machine learning models. These non-linear relationships need to be tested because they are learned from the data on which the model was trained, and that data is all too often implicitly full of societal biases. Ethical models ensure that bias and discrimination are explicitly tested for and removed.
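One illustrative form such a test can take is comparing the model’s flag rate across a protected attribute and checking a disparate-impact ratio. The groups, threshold and the four-fifths rule of thumb in the sketch below are assumptions made for the example, not FICO’s methodology.

```python
# Illustrative sketch of a simple bias test: compare flag rates across groups
# and check the ratio of the lowest to the highest rate.
import numpy as np

def disparate_impact(scores, groups, threshold=0.5):
    """Return the flag rate per group and the ratio of the lowest to highest rate."""
    flagged = scores >= threshold
    rates = {g: float(flagged[groups == g].mean()) for g in np.unique(groups)}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Hypothetical held-out scores and group memberships.
scores = np.array([0.9, 0.2, 0.7, 0.4, 0.8, 0.3, 0.6, 0.1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

rates, ratio = disparate_impact(scores, groups)
print(rates, ratio)
if ratio < 0.8:  # the "four-fifths" rule of thumb, used here purely as an illustration
    print("Potential disparate impact: investigate the features driving the gap.")
```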
Forces that enforce responsible AI
Building responsible AI models takes time and painstaking work, and meticulous, ongoing scrutiny is crucial to keep AI responsible over time. This scrutiny must include regulation, audit and advocacy.
Regulations are important for setting the standard of conduct and rule of law for the use of algorithms. However, in the end regulations are either met or not, and demonstrating alignment with regulation requires audit.
Demonstrating compliance with regulation requires a framework for creating auditable models and modelling processes. These audit materials include the model development process, the algorithms used, bias detection tests and evidence that decisions and scores are reasonable. Today, model development process audits are done in haphazard ways.
New blockchain-based model development audit systems are being introduced to enforce and record immutable model development standards, testing methods and results. Further, they are being used to record the detailed contributions of data scientists and the approvals of management throughout the model development cycle.
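The sketch below shows, in a deliberately simplified single-process form, why a hash-chained audit trail makes tampering with recorded development steps detectable. Real systems are distributed ledgers with far richer records; the field names and entries here are hypothetical.

```python
# Simplified sketch of an immutable, hash-chained audit trail for model development.
import hashlib, json, time

class AuditTrail:
    def __init__(self):
        self.blocks = []

    def record(self, entry):
        """Append an entry (e.g. algorithm choice, bias test result, sign-off)."""
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {"timestamp": time.time(), "entry": entry, "prev_hash": prev_hash}
        block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        self.blocks.append(block)

    def verify(self):
        """Recompute every hash; any edited block breaks the chain."""
        prev_hash = "0" * 64
        for block in self.blocks:
            body = {k: v for k, v in block.items() if k != "hash"}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if block["prev_hash"] != prev_hash or block["hash"] != expected:
                return False
            prev_hash = block["hash"]
        return True

trail = AuditTrail()
trail.record({"step": "algorithm selected", "value": "interpretable neural net"})
trail.record({"step": "bias test", "disparate_impact_ratio": 0.93, "approved_by": "lead data scientist"})
print(trail.verify())  # True until any recorded block is altered
```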
Looking to the future, organisations ‘doing their best’ with data and AI will not be enough. With the rise of AI advocates and the real suffering inflicted by the wrong outcomes of AI systems, responsible AI will soon be the expectation and the standard across the board and around the world.
Organisations must enforce responsible AI now and strengthen and set their standards of AI explainability, accountability and ethics to ensure they are behaving responsibly when making digital decisions.
The author is Dr. Scott Zoldi, chief analytics officer at FICO.
About the author
Dr. Scott Zoldi is chief analytics officer at FICO. While at FICO, Scott has authored 110 patents, with 56 granted and 54 pending. Scott is actively involved in the development of new analytic products and Big Data analytics applications, many of which leverage new streaming analytic innovations such as adaptive analytics, collaborative profiling and self-calibrating analytics. Scott serves on two boards of directors, Software San Diego and Cyber Centre of Excellence. Scott received his Ph.D. in theoretical and computational physics from Duke University.