65% of leaders can’t explain how their AI models make decisions, survey finds
Despite the increasing demand for and use of AI tools, 65% of companies cannot explain how their AI models make decisions or predictions. That is according to the results of a new survey from global analytics firm FICO and Corinium, which polled 100 C-level analytics and data executives to understand how organizations are deploying AI and whether they are ensuring it is used ethically.
“Over the past 15 months, more and more companies have invested in AI tools but have not elevated the importance of AI governance and responsible AI to the board level,” said Scott Zoldi, chief analytics officer at FICO, in a press release. “Organizations are increasingly using AI to automate key processes that in some cases make life-changing decisions for their customers and stakeholders. Senior management and boards must understand and apply auditable, immutable AI model governance and production model oversight to ensure decisions are accountable, fair, and transparent.”
The study, commissioned by FICO and conducted by Corinium, found that 33% of leadership teams have an incomplete understanding of AI ethics. While IT, analytics, and compliance staff are the most aware, understanding across organizations remains uneven. As a result, there are significant barriers to building support: 73% of stakeholders say they have struggled to gain executive backing for responsible AI practices.
Responsible implementation of AI means different things for different companies. For some, “responsible” means adopting AI in an ethical, transparent and accountable manner. For others, it means ensuring that their use of AI remains consistent with laws, regulations, standards, customer expectations and organizational values. Either way, “responsible AI” promises to guard against the use of biased data or algorithms, providing assurance that automated decisions are justified and explainable – at least in theory.
According to Corinium and FICO, while nearly half (49%) of survey respondents report an increase in resources allocated to AI projects over the past year, only 39% and 28%, respectively, say they have prioritized AI governance and model oversight or maintenance. A lack of consensus among executives about a company's responsibilities for AI potentially contributes to the ethics gap. A majority of companies (55%) agree that systems used for data ingestion must meet basic ethical standards and that systems used for back-office operations must also be explainable. But 43% say they have no responsibilities beyond regulatory compliance to manage AI systems whose decisions could indirectly affect people's livelihoods.
What can businesses do to embrace responsible AI? Tackling bias is an important step, but only 38% of companies report having incorporated bias mitigation steps into their model development processes. In fact, only a fifth of respondents (20%) to the Corinium and FICO survey actively monitor their models in production for fairness and ethics, and only one in three (33%) have a model validation team to assess newly developed models.
The results are consistent with a recent Boston Consulting Group survey of more than 1,000 companies, which found that fewer than half of those deploying AI at scale had fully mature, “responsible” AI implementations. The slow adoption of responsible AI belies the value these practices can bring. A study by Capgemini found that customers and employees will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them, and will in turn punish those that don't.
However, companies seem to understand the value of evaluating the fairness of model outcomes, with 59% of survey respondents saying they do so to detect model bias. Additionally, 55% say they isolate and assess latent model features for bias, and half (50%) say they have a codified mathematical definition of data bias and actively check for bias in unstructured data sources.
Companies also recognize that things need to change, as the vast majority (90%) agree that inefficient model oversight processes are a barrier to AI adoption. Fortunately, nearly two-thirds (63%) of respondents to the Corinium and FICO survey believe that AI ethics and responsible AI will become a core part of their organization's strategy within two years.
“The business community is committed to driving transformation through AI-powered automation. However, senior leaders and boards of directors need to be aware of the risks associated with the technology and the best practices to proactively mitigate them,” Zoldi added. “AI has the power to transform the world, but as the popular saying goes, with great power comes great responsibility.”