From awareness to action: why digital ethics is gaining importance
The concept of digital ethics is no longer new. Leading academic institutions and think tanks have been raising ethical concerns around technology for decades, and recent years have seen a plethora of guidelines for creating, testing, and managing technology so that it is responsible and worthy of trust.
But over the past year, digital ethics has gained ground as private and public sector organizations have focused their attention, and increasingly their resources, on these issues. Four forces are currently at work to turn awareness of digital ethics into action: accelerated adoption, growing awareness, declining trust, and the prospect of regulation.
1. Accelerated adoption
The seemingly overnight transformation of organizations during the pandemic is now well documented, with many rapidly enabling remote working and offering digital products and services in order to survive.
Beneath this transformation is an acceleration in the adoption of more advanced technologies such as automation and artificial intelligence (AI). While data has long been heralded as the 'new oil' in terms of its perceived value to businesses and the economy, organizations are now taking a more serious look at what it means for them and developing data strategies that could change their business models entirely.
Without integrating digital ethics into this acceleration, ethical risks proliferate. The more data we use and the more technologies we adopt without understanding the potential consequences, without sufficient testing, or without the right safeguards, the greater the potential for harm – to individuals, to society, and to the reputations of the organizations themselves.
2. Growing awareness
The benefits of using technology to improve our daily personal and professional lives are clear. Indeed, technology plays, and will continue to play, a leading role in solving some of the world's greatest challenges, from climate change to providing better and more equitable health care. However, the public is now more aware of the potential for unintended consequences, such as the amplification of disinformation, and of bias and discrimination in digital services built on poor data or faulty algorithms.
At the same time, more and more people understand how certain digital business models work, using personal data in ways that concern consumers. A 2020 Doteveryone report showed a five-percentage-point decrease (from 25% to 20%) in the number of Britons who felt they didn't need to read the terms and conditions of digital products. In addition, the proliferation of media reports on ethical breaches involving technology or data has raised public awareness of these issues, and the popularity of films such as The Social Dilemma and Coded Bias shows great interest in these topics.
3. Declining levels of trust
The Edelman Trust Barometer's annual report has highlighted a number of emerging concerns in recent years, including a growing trust gap between the informed public and the general population, and a decline in public trust in the technology sector.
The aforementioned Doteveryone report also showed that half of those polled believed that being scammed or harmed was simply part of being online, and that they did not believe technology providers had created their products with people's best interests in mind.
As trust levels decline, the understanding that trust has real value for businesses and public sector organizations – and is a critical success factor in achieving their ambitions – has grown. One example is a recent study published by the Open Data Institute and Frontier Economics, which shows a link between trust and people's willingness to share their data – something many organizations would like their customers or service users to do so that they can provide more or better services.
With public trust in a precarious position, organizations are beginning to recognize the need to address digital ethics issues.
4. Regulation on the horizon
While governments are still not keeping pace with the evolution of technology and the accelerating adoption of increasingly advanced technologies such as AI, there are now signs that these issues are on the political and legislative radar. For example, the EU has proposed regulations on AI, and the US is increasing oversight of tech giants, including Apple, Google, Facebook, and Amazon. It is now likely that many parts of the Western world will see regulations introduced within the next two years.
The fact that regulation is only on the horizon, and not yet imminent, does not excuse organizations from taking action. In light of all the factors outlined above – accelerated adoption resulting in increased ethical risk, growing public awareness, and fragile public trust – there are many reasons to embed digital ethics into organizational strategies and governance now.
Additionally, working now to understand data and technology risks will prepare organizations for the future by equipping them with greater knowledge. This can then be applied to measures we may well see in EU law (and any UK version of it), such as AI risk assessments and product labeling.
Signs of progress
These four forces are driving the signs of progress in digital ethics that we are seeing today. Organizations understand that they must build and maintain trust with employees, customers, and other stakeholders, and that trust cannot be achieved if something is wrong with the data or the technology they use. They are also starting to view ethical risks as business risks, which multiply as digital strategies advance.
While this is progress, most organizations struggle to find ways to take practical action, and many don’t know where to start.
Where to go from here
Now that it is clear that digital ethics is a strategic concern, and not just something abstract and theoretical, it is time to act. We recommend starting by identifying the risks and opportunities related to digital ethics within your current digital agenda, as well as in your future roadmap, by asking three key questions:
1. Have I assessed the extent to which my digital program supports or undermines my organization's other strategic objectives? For example, can my employees with disabilities use our organization's technology in a way that aligns with our diversity and inclusion policy? Do my customers understand how we use their data, and have we confirmed that understanding so that we can build trust and loyalty? Digital programs often reflect only part of an organization's strategy, and are sometimes at odds with strategic goals such as employee engagement and branding. Checking for misalignment will reveal digital ethics risks.
2. Do I understand how the technology we are using actually works? Most business leaders have gaps – sometimes significant ones – in their knowledge of the digital tools and products they use and create. And the more gaps there are, the greater the risk.
3. Am I making the most of data and technology to benefit society and improve digital ethics? Digital ethics is not just about mitigating risk; it is about understanding how an ethical approach to technology can open up opportunities to create positive impact. For example, while organizations should be careful to eliminate unnecessary data collection, the data they do collect responsibly can help them deliver better, fairer, and more accessible services by deepening their understanding of demographics and user needs.
By assessing your organization against these three questions, you can gain a high-level understanding of where the challenges – and opportunities – may lie. You can then prioritize action in the areas of greatest concern, and put in place the mechanisms needed to better align your organizational and digital strategies.
While the risks of inaction are high, the rewards for making progress are significant, from regulatory readiness to improved user engagement and stronger stakeholder confidence.
Jen Rodvold, Head of Digital Ethics and Technology for Good, Sopra Steria