Chuck Garric

Business ethics

When AI and ESG meet

By Paul Gonzalez
September 30, 2021


Like politics or religion, artificial intelligence is a subject that elicits strong opinions.

Many members of the environmental and sustainability communities are singing its praises as a climate change technology, citing its superhuman ability to optimize the integration of renewables into power grids, to detect deforestation and other threats to biodiversity, or to drive business resilience planning using extreme weather models. The list of potential applications is long.

I am definitely guilty of singing this song. The energy management system developed by cold storage company Lineage Logistics is one of my favorite examples to cite: when I wrote about it a few years ago, the company had managed to halve the energy consumption of the facilities where it was deployed, saving customers at least $4 million along the way. What’s not to like?

In fact, it’s unusual to find a large company that isn’t at least considering using AI to automate all kinds of tasks that would take Homo sapiens far more time to manage manually (if they could manage them at all). At least half of executives polled in late 2020 by McKinsey said their companies were already using AI for product development, service optimization, marketing and sales, and risk assessment.

Why is this important for ESG concerns?

The corporate adoption of AI will strain ESG strategies far more than most of us realize.

One place where AI will have a disproportionate influence almost immediately is in reporting. I’m guessing you’ve already read plenty of articles about how AI-powered software applications have become essential in detecting, and even deflecting, questionable claims. “We can decode what they’re saying and telling us,” Neil Sahota, an artificial intelligence expert who advises the United Nations on applications and ethics, told me when we discussed why these tools have attracted so much attention. “Are [companies] really doing what they say they are doing?”

Two resources adopted by ESG analysts and fund managers for this purpose are ClimateBert, a language model built to analyze disclosures aligned with the Task Force on Climate-related Financial Disclosures, and the Paris Agreement Capital Transition Assessment (PACTA), created by 2 Degrees Investing. Both use machine learning algorithms and neural networks to assess ESG claims much faster than any human analyst could.
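To make the idea concrete: tools like these typically classify individual disclosure sentences, for example separating quantified, checkable commitments from vague boilerplate. The real systems use fine-tuned neural language models; the toy scorer below only mimics the concept with keyword heuristics, and every word list, pattern, and threshold in it is invented for this sketch, not taken from ClimateBert or PACTA.

```python
import re

# Invented word lists and patterns, purely illustrative. Real tools such as
# ClimateBert learn these signals from labeled training data instead.
VAGUE_TERMS = {"committed", "sustainable", "green", "responsible", "aims"}
SPECIFIC_PATTERNS = [
    r"\b\d{4}\b",          # a target year, e.g. "2030"
    r"\b\d+(\.\d+)?\s*%",  # a quantified figure, e.g. "45%"
    r"scope\s*[123]",      # named emissions scopes
]

def specificity_score(sentence: str) -> float:
    """Return a score in [0, 1]: higher means a more concrete, checkable claim."""
    text = sentence.lower()
    specific_hits = sum(bool(re.search(p, text)) for p in SPECIFIC_PATTERNS)
    vague_hits = sum(term in text for term in VAGUE_TERMS)
    total = specific_hits + vague_hits
    return specific_hits / total if total else 0.0

def flag_boilerplate(sentences, threshold=0.5):
    """Flag sentences whose claims look vague rather than verifiable."""
    return [s for s in sentences if specificity_score(s) < threshold]

claims = [
    "We are committed to a sustainable future.",
    "We will cut Scope 1 emissions 45% by 2030.",
]
print(flag_boilerplate(claims))  # flags only the first, unverifiable claim
```

The point of even this crude version is the one the article makes: once claims are machine-readable, screening thousands of fund or company disclosures for unverifiable language becomes cheap, which is exactly why analysts have adopted the ML-based equivalents.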

PACTA, along with a sister resource in beta testing, FinanceMap, was responsible for a recent analysis published by think tank InfluenceMap that focused on claims of nearly 800 funds with ESG or climate messages. This analysis found that more than half of climate-themed funds included holdings that were not aligned with the goals of the Paris Agreement. Given the pervasive concern about greenwashing, you can bet that investors and other stakeholders won’t hesitate to use such tools to investigate ESG allegations.

Of course, these tools can work the other way around as well. Software from companies like Intelligent and Datamaran (and an ever-growing list of vendors) can help businesses better manage their material climate change risks and stress-test whether their public disclosures about them would hold up. You can think of the people who perform these tests as an ESG risk management equivalent of “white hat” hackers, the name used for those who test companies’ cybersecurity defenses by attempting to break them.

AI ethics vs. ESG claims

Reporting and disclosure aside, corporate adoption of AI, and the processes by which it is governed, will strain ESG strategies far more deeply than most of us currently recognize. Multiple factors are at play, including the enormous amount of energy required to power AI applications, concerns about algorithmic biases that discriminate against minorities and women, and questions about privacy and the amount of data collected to inform decisions.

“You could end up with a social equity issue,” said Rob Fisher, a partner who leads KPMG Impact, the firm’s ESG services division. “If you’re using AI to make decisions about people that could have a disparate impact, how do you deal with that? How much information about people is it appropriate to capture? What decisions are we going to let a machine make?”

Two of the biggest tech companies, Alphabet’s Google and Microsoft, have struggled very publicly with ethical issues related to how other companies wish to use AI. Google turned down a financial services company that proposed using AI to make decisions about creditworthiness, fearing the process would perpetuate discriminatory practices. The company also suffered reputational damage from its decision to part ways with its well-regarded AI ethics lead at the end of 2020. Microsoft’s dilemma is clearer: it is a major AI provider to the oil and gas industry, which uses those tools to inform fossil fuel extraction decisions. That has led some to question its broader climate strategy.

And then there’s Facebook, which recently found itself apologizing for an “unacceptable error” in which its AI-driven algorithms labeled a video featuring Black men as being about primates. The list of concerns about its algorithms and their potential harm to societal institutions and mental health, harm the company was reportedly well aware of, is being investigated by a Senate subcommittee.

As the use of AI by businesses becomes more common, it won’t just be the tech giants that have to justify the ethics behind how these algorithms make decisions. For now, however, this type of governance remains the exception rather than the rule.

“Despite the costs of getting it wrong, most companies grapple with data ethics and AI through ad hoc discussions on a per product basis,” ethical risk consultant Reid Blackman wrote in a recent Harvard Business Review article. “Businesses need a plan to mitigate risk – how to use data and develop AI products without falling into ethical traps along the way.”

Microsoft, Google, Twitter, and other tech companies heavily reliant on AI are assembling ethics teams to deal with collisions between AI and their ESG agendas. Can you say the same for your business?

