Paving the way to explainable and accountable AI
According to a new McKinsey study, “The Data-Driven Business of 2025”, most employees will increasingly use data to optimize nearly every aspect of their jobs. The year 2025 is fast approaching, and we are already seeing an increase in the availability of vast and varied new data sources. To extract value from this volume of data, especially unstructured or high-frequency temporal data, organizations must rely on modern computing technology and advanced ML/AI algorithms.
Unlocking “black box” algorithms
We can already see hundreds of millions, if not billions, of people using and benefiting from AI/ML-based technology and applications in their daily activities: internet searches, browsing, health technology, autonomous vehicles and virtual assistants like Siri and Alexa, to name a few.
The use of such “black box” algorithms also carries risks and raises ethical questions regarding transparency, “fairness” and whether these algorithms are used responsibly.
Technologies in themselves are neither “good” nor “bad”; their effect depends on how we use them. New ML algorithms and AI applications are no different in this sense from other past technological advances. The question we need to ask ourselves is: how do we strike the right balance between the great benefits and potential that these technologies and algorithms can bring and the risks that come with them?
Are “black box” algorithms a problem?
Google, Amazon, Netflix and many more are great examples of companies that are very innovative in their use of advanced ML/AI algorithms. They seem to base and run almost every aspect of their business on these algorithms. Netflix, for example, bases personalized movie recommendations, interface layout and even network routing decisions on data analyzed by advanced ML/AI algorithms.
Often, the internal logic of these advanced algorithms appears as a magical “black box”. The fact that we don’t understand the drivers behind a certain decision, or how the “magic” happens, isn’t necessarily a problem. Whether it is depends on the context, the type of application or decision, and how and for what purpose it will be used.
For example, in the case of Netflix, what are the consequences of a bad movie recommendation? Or what if there is “bias” in some of the decisions Netflix makes? This is of course an undesirable result that should be avoided to begin with, but we can probably afford to be more forgiving and simply improve the recommendation next time.
Explainable and responsible AI
Clearly, for many industries, and especially for regulated industries like insurance and banking, the need for governance, transparency and explainability is critical. We are seeing a surge in the use of AI/ML algorithms to automate manual processes and decisions of great importance to consumer livelihoods. Patient triage, disease diagnosis, identity authentication, credit decision-making, job candidate screening and claims settlement are just a few examples. These and many other examples represent decisions where the consequence of making the wrong recommendation, or introducing “biased” or “unfair” treatment into the decision-making process, can have a significant impact on consumers or businesses.
This has led to the emergence of new fields of research: Explainable and Responsible Artificial Intelligence. Experts are developing tools that allow us to peek inside the black box and unravel at least some of the magic. For businesses to trust and adopt the use of “black box” AI, there must be a mechanism that provides stakeholders and professionals with the ability to interpret the complex decision-making processes of AI and to ensure they meet regulatory requirements. Consumers can also benefit by being able to understand and possibly influence the key drivers behind crucial decisions.
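To make this concrete, one widely used technique from this field is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy degrades, revealing which inputs drive its decisions. The sketch below, using scikit-learn on a synthetic dataset (both the dataset and the random-forest “black box” are illustrative assumptions, not tools named in this article), shows the idea:

```python
# Minimal sketch of "peeking inside the black box" via permutation importance.
# Assumes scikit-learn is installed; the dataset and model are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Stand-in "black box": a random forest trained on synthetic data
# (imagine these 5 features are inputs to a credit or triage decision).
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Rank features by how much the model depends on them.
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: importance = {result.importances_mean[i]:.3f}")
```

Techniques like this do not open the model itself, but they give stakeholders a ranked, quantitative answer to “which inputs drove this decision?”, which is often enough to satisfy governance and audit requirements.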
Explainability is the bridge that makes complicated AI more understandable and transparent. We should not fear these new algorithms and technological advances or try to limit their power, but rather use them intelligently and responsibly, to generate more value for our society.
Written by Reuven Shnaps, PhD, Chief Analytics Officer at Earnix.