Artificial intelligence ethics: designing fair systems for all
Artificial intelligence tools can read and interpret data in ways that influence human behaviour, for better or for worse. This makes ethics a central concern in the design of AI solutions. Autonomous vehicles, for example, could reduce accidents caused by human error; autonomous weapons, by contrast, could escalate war and killing far beyond what humans intend. The dilemmas include bias in decision-making, intentional or unintentional manipulation, misuse of derived information, invasion of privacy, surveillance practices, copyright infringement, security, and a lack of transparency in how AI models are built.
Does AI enrich people, businesses and countries while impoverishing those without access to it? Can humans control an AI system once it captures all the intelligence of humans? The ethical concern around AI is not just a moral dilemma; it centers on a company's social responsibility to its customers, employees and society.
An example of AI bringing a business into disrepute is Amazon's decision to take down its AI-based recruiting tool after it was found to be prejudiced against women. Cambridge Analytica had to shut down after the scandal over its use of personalized content to influence voters in US elections. Organizations are starting to recognize the loss of customer faith and trust that irresponsible AI solutions could cause. The influence of social media on elections has prompted several governments to act against these companies or to put strict controls in place. Such measures offer only partial protection, and fundamental questions about influence on seemingly insignificant matters remain unanswered. Unchecked, such influence could even radically change the value systems and cultures of communities and the world.
Therefore, the design of AI systems must treat ethics as a central element of the solution, and it is essential to develop a code of conduct for the design of AI systems. The key tenet of Asimov's laws of robotics is that automated systems must not harm humans, nor, through inaction, allow humans to come to harm. Complex algorithms and correlations over huge data sets make it difficult even to establish the origins and building blocks of AI models. It is therefore important to ensure that AI systems document the detailed steps of the development process and the types of data used to train the models.
Since most AI models are trained on publicly available datasets, they are likely to absorb hidden biases from society. Inclusive data sets should therefore be used to build models, and people from diverse backgrounds and cultures should be part of the teams that build AI solutions. Companies should also reconsider their reliance on AI tools for hiring and keep the process human-centric: AI tools cannot reliably account for gender sensitivity or diversity, even though these factors could be important, for instance in a hospital setting when treating patients.
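To make the idea of hidden bias concrete, here is a minimal sketch, with entirely invented data, of one simple audit that teams sometimes run on hiring models: comparing selection rates across candidate groups (the "demographic parity" gap). The group labels, decisions and helper functions below are illustrative assumptions, not a reference to any specific tool.

```python
def selection_rate(decisions):
    """Fraction of candidates the model recommends for hire."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rates between any two groups.
    A gap near 0 suggests the model selects from all groups at
    similar rates on this one metric."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Invented example: 1 = recommended for interview, 0 = rejected
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5 of 8 selected (0.625)
    "group_b": [0, 1, 0, 0, 1, 0, 0, 0],  # 2 of 8 selected (0.250)
}

gap = demographic_parity_gap(decisions)
print(f"Demographic parity gap: {gap:.3f}")  # 0.625 - 0.250 = 0.375
```

A large gap like this does not by itself prove discrimination, but it is the kind of signal that should trigger human review of the model and its training data.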
Another important dimension concerns the environment. AI tools require huge cloud infrastructure and considerable electricity, which increases environmental impact. At the same time, if AI systems are used intelligently and directed at real problems such as reducing carbon footprints and energy needs, the world could become a better place to live.
In conclusion, an increased awareness of social responsibility and long-term reputational risk must be factored into the design of AI systems, along with constant review of their pitfalls so that they can be corrected.
The author is Executive Chairman of Global Talent Track, an enterprise training solutions company