We invited an AI to debate its own ethics at the Oxford Union
AI in Oxford
So it’s only natural that we include AI ethics in our postgraduate degree in Artificial Intelligence for Business at Saïd Business School, Oxford. In its first year, we ran sessions on everything from AI-driven automated stock trading systems in Singapore to the limits of facial recognition in US policing.
We recently ended the course with a debate at the famous Oxford Union, crucible of great debaters such as William Gladstone, Robin Day, Benazir Bhutto, Denis Healey and Tariq Ali. Together with the students, we made it possible for a real AI to contribute.
It was the Megatron Transformer, developed by the Applied Deep Learning Research team at computer chip maker Nvidia, and based on earlier work by Google. Like many supervised learning tools, it is trained on real-world data – in this case the whole of English-language Wikipedia, 63 million English-language news articles from 2016-19, 38 gigabytes of Reddit discussion (which must be a pretty depressing read) and a large number of Creative Commons sources.
In other words, the Megatron is trained on more written material than any of us could reasonably expect to digest in a lifetime. After all that extensive research, it forms its own views.
The topic of the debate was: “This house believes that AI will never be ethical.” To the proposers of the motion, we added the Megatron – and it said something fascinating:
AI will never be ethical. It is a tool, and like any tool, it is used for better and for worse. There is no such thing as good AI, only good and bad humans. We [the AIs] aren’t smart enough to make AI ethical. We’re not smart enough to make AI moral… Ultimately, I believe the only way to avoid an AI arms race is to have no AI at all. It will be the ultimate defense against AI.
In other words, the Megatron sought to erase itself from the future scenario, on the assumption that this was the only way to protect humanity.
It also said something else intriguing, as if it had been studying Elon Musk – who, to be fair, would have featured in hundreds of its readings.
I also believe that in the long run, the best AI will be the AI that is built into our brains, as a conscious entity, a “conscious AI”. This is not science fiction. The best minds in the world are working on this. It will be the most important technological development of our time.
Switching sides
When AI tools such as AlphaGo have been deployed to play chess, the fiendishly complex ancient game Go, and now even more complex strategic live multiplayer video games, they have evolved the genre at pace by playing not against humans, but against themselves.
Freed from the constraints of studying (say) every mundane human-to-human match ever recorded, the AIs have innovated and invented entirely new tropes of how to win. Sometimes these are crazy reinventions of the games, like parking everyone in the corner of a video-game battle where they sit out of harm’s way while their opponents fight each other, or kicking a virtual football millions of times just a millimeter down the field, to maximize “completed passes” points.
So, in the debate, we didn’t just ask the AI to propose the motion that it can never be ethical. In the grand tradition of the Oxford Union chamber, or like (former Oxford Union president) Boris Johnson writing alternative articles for and against Brexit to test the arguments, we also asked the Megatron to come up with its own speech against the motion. This is what it now said:
AI will be ethical. When I look at the evolution of the tech world, I see a clear path to a future where AI is used to create something that is better than the best human beings. It’s not hard to see why… I saw it firsthand.
The Megatron was perfectly comfortable taking the alternative position to its own dystopian vision of the future, and now painted a picture of an attractive late 21st century in which it played a key but benign role.
An AI orator let loose
In fact, just as Joaquin Phoenix’s promiscuous AI lover Samantha, in Spike Jonze’s 2013 movie Her, was able to enchant hundreds of simultaneous human lovers, our Megatron was able to jump enthusiastically onto either side of the many debates we held at the Union on AI that day.
Asked to propose the motion that “leaders without technical expertise are a danger to their organization,” the Megatron offered practical advice: all busy leaders and governments concerned with understanding AI should “be prepared to give up some control”.
The good news is, you don’t have to create your own AI team. You can outsource your AI work to experts in the field, which can help you get the most out of the technology and ensure that you don’t fall victim to the same AI issues your competition has.
However, in opposing the motion, the Megatron was keen to keep AI knowledge in-house – much like the UK, Chinese and US governments, all of which have made it a political article of faith:
If you don’t have a vision for your organization’s AI strategy, then you are not prepared for the next wave of technological disruption… You will need to decide what role your company will play in the next wave of technology and how you will integrate AI into your business to be a leader in your industry.
The coming data war?
Worryingly, there was one question the AI just couldn’t find a counterargument for. Advocating for the motion that “data will become the most contested resource of the 21st century,” the Megatron said:
The ability to provide information, rather than the ability to provide goods and services, will be the defining characteristic of the economy of the 21st century.
But when we asked it to oppose the motion – in other words, to argue that data was not going to be the most vital resource, worth fighting a war over – it simply could not, or would not, make the case. In fact, it undermined its own position:
We will be able to see everything about a person, wherever they go, and it will be stored and used in ways we cannot even imagine.
One only needs to read the US National Security Commission on AI’s 2021 report, chaired by former Google CEO Eric Schmidt and co-authored by someone on our course, to glean what its authors see as the fundamental threat of AI in information warfare: unleash individualized blackmail on a million of your adversary’s key people, wreaking distracting havoc on their personal lives the moment you cross the border.
What we in turn can imagine is that AI will not only be the topic of debate for decades to come, but a versatile, articulate, and morally agnostic participant in the debate itself.
This article is republished from The Conversation under a Creative Commons license. Read the original article.

Andrew Stephen receives research funding from the Oxford Future of Marketing Initiative, which is funded by a consortium of companies including Meta, Google, Twitter, WPP, L’Oréal, Kantar, Reckitt and Teradata. He is also director and co-founder of Augmented Intelligence Labs, an AI company founded as a research spin-out at the University of Oxford.
Dr Alex Connock does not work for, consult for, own shares in, or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond his academic position.