Report reveals surprising lack of interest in ethical and responsible use of AI among business leaders
A new report from FICO and Corinium has found that many companies are deploying various forms of AI in their businesses without considering the ethical implications or potential problems.
There have been hundreds of examples over the past decade of the many disastrous ways AI has been used by businesses, from facial recognition systems unable to recognize darker-skinned faces, to healthcare applications that discriminate against African American patients, to recidivism calculators used by courts that are biased against certain races.
Despite these examples, FICO's State of Responsible AI report shows that business leaders put little effort into ensuring that the AI systems they use are both fair and safe for widespread use.
The survey, conducted in February and March, features insights from 100 leaders in the AI-driven financial services industry, with 20 leaders each from the United States, Latin America, Europe, the Middle East and Africa, and the Asia-Pacific region.
The executives, who fill positions ranging from chief data officer to chief executive officer, represent companies that generate more than $100 million in annual revenue, and were asked how their companies are ensuring that AI is used responsibly and ethically.
Almost 70% of respondents could not explain how specific decisions or predictions of an AI model are made, and only 35% said their organization made an effort to use AI transparently and responsibly.
Just 22% said in the survey that their organization has an AI ethics committee that can make decisions about the fairness of the technology they use, and the remaining 78% said they were “ill-equipped to ensure the ethical implications of using new AI systems.”
Almost 80% said they had significant difficulty getting other senior executives to consider or prioritize ethical AI practices. Few, if any, executives fully understood the business and reputational risks associated with unfair, unethical, or mismanaged use of AI.
More than 65% said their company had “inefficient” processes in place to ensure all AI projects complied with regulations, and nearly half rated these processes “very inefficient.”
Despite the lack of care in how their companies use AI, 77% agreed that AutoML technology could be misused, and 90% agreed that inefficient model monitoring processes are a barrier to AI adoption.
While some IT and compliance employees had some knowledge of AI ethics, the vast majority of stakeholders had a poor understanding of the concept, according to respondents.
A lack of understanding of the ramifications of poorly managed AI has had little effect on companies’ desire to incorporate AI, with 49% of respondents reporting an increase in resources spent on AI projects over the past year.
“At the moment, companies decide for themselves what they think is ethical and unethical, which is extremely dangerous. Self-regulation does not work,” said Ganna Pogrebna, Head of Behavioral Data Science at the Alan Turing Institute, in the report.
Respondents overwhelmingly said there was no consensus on corporate responsibility in deploying ethical AI, especially AI that “can impact people’s livelihoods or cause injury or death.”
A majority of respondents said they have absolutely no responsibility for ensuring the AI they use is ethical beyond simple regulatory compliance.
More than half of respondents said that AI used for data collection and back-end business operations must meet basic ethical standards. But those numbers dropped to less than half for AI systems that “indirectly affect people’s livelihoods.”
According to the survey, 80% of respondents struggle to create the kinds of processes needed to ensure AI is used appropriately.
Companies are increasingly pressuring employees to deploy AI systems quickly, regardless of the ethics surrounding the use of AI, with 78% of respondents saying they have trouble getting leadership support for prioritizing AI ethics and responsible AI practices.
When asked about the standards and processes in place to govern the use of AI, half of respondents said they “ensure overall explainability,” while 38% said they have data bias detection and mitigation steps in place.
Only 6% of respondents said they did so by making sure development teams were diverse.
Executives pursuing ethical AI faced a variety of hurdles, including organizational politics, poor data quality, and a lack of data standardization.
“Many don’t understand that your model is unethical unless it is shown to be ethical in production,” Scott Zoldi, chief analytics officer at FICO, said in the study.
“It is not enough to say that I built the model ethically and washed my hands of it. What we lack today is an honest and frank conversation about which algorithms are the most responsible and safe.”