Prime Minister Kyriakos Mitsotakis participated tonight in the conference “The Lyceum Project: AI Ethics with Aristotle”, held at the Athens Conservatoire. The Prime Minister’s speech follows:
“Well, thank you, Professor Nounesis, for inviting me to what I hope will be the first of many such events taking place here in this auditorium, right next to the Lyceum of Aristotle, at a time when I do think that these types of discussions are more relevant than ever.
Let me start by saying that I’m really privileged today to be attending what I hope will be a fascinating discussion between two exceptional scholars, John Tasioulas from Oxford University and Josiah Ober from Stanford University. I have known Josh for many years; for those of you who don’t know, he holds the Constantine Mitsotakis Chair at Stanford University. He is a scholar who has studied Athenian democracy in great detail and has written eloquently about its relevance today, in these times of democratic challenge. I find it absolutely fascinating that he is embracing with great enthusiasm the challenge of connecting the disruption of artificial intelligence to the classical wisdom that emanated from this great city 2,500 years ago.
And, of course, my congratulations to Democritus and the World Human Forum for organising what I think is going to be a very interesting discussion. I thought that, by way of introductory remarks, I would share with you some quick thoughts about the sort of answers that I would expect to get from this discussion as a policy maker.
Let me make three quick observations, not forgetting that I was also at Stanford 31 years ago, at a time when the World Wide Web was taking its first steps, we barely had mobile phones and, of course, artificial intelligence was still something that science fiction writers wrote about.
The first topic is a deep, profound, philosophical one: the relationship between humans and machines. I remember when I was studying at Harvard, doing my business degree right after leaving Stanford. Yes, I did make that mistake, I did not stay on the West Coast. But we’re all human, we all make mistakes. I remember that at the time we were reading a book called “In the Age of the Smart Machine”. These were the days when the Internet was taking its first steps.
Of course, this discussion about the relationship between humans and technology is an old one. But the question that we have to answer now is whether there is something profoundly different about the artificial intelligence revolution compared to previous technological revolutions. I happen to believe that there is, because for the first time we have tools that can essentially simulate more and more of the actual manifestations of human intelligence.
This is something that has never happened before. When we talk to the thinkers around artificial intelligence, we can envision a form of machine learning that spans the entire spectrum of human cognitive abilities. We’re not just talking about machines performing tasks better than humans do, helping us professionally, but about writing better music, painting, conversing, engaging in philosophical debates. This was the prerogative of the human species. This is what made us different from the world around us.
This is changing now. This is creating a completely new reality, which forces us to recalibrate our relationship with a technology that has acquired human characteristics when it comes to intelligence itself. Of course, this raises profound questions about the role that humans will have in work or in leisure. There are tremendous implications for the nature of the workforce and the jobs that will be replaced by artificial intelligence. These are all momentous challenges that we really need to think deeply about.
The second point which I wanted to raise has to do with the brains behind this development. Who develops this technology? And for what purpose? Is this just the prerogative of scientific research? Is this something that is developed in universities for the benefit of humankind and of research in general? Probably not.
This is now a market-driven process. These technologies are driven by a few very large companies with practically unlimited resources that can afford to hire any bright brain out of any university. They’re doing this because they are profit-oriented enterprises. So we need to think hard about the motive behind these developments, which affect the lives of all of us.
Of course, we also need to understand the technology itself. I’m not an engineer, but I have really tried to read a lot about artificial intelligence. We have the privilege in Greece of an exceptional high-level Committee advising me on artificial intelligence, so I had better educate myself in order to understand what these people are actually telling me. But it seems to me that there are very, very few people in the world who really understand how these large language models actually work. This, I would suggest, could be a cause for concern, especially at a time when developments are moving so quickly that I am not even sure the scientists who develop these models fully understand the implications of how their algorithms actually work.
So, without believing in a sort of Terminator future, where machines are going to take over the world and eliminate all of us, this should give us some reason to be concerned. If the rationale is profit, then speed is of the essence: “the faster we move, the better it will be”. If you look at the valuations that the markets assign to these companies, there is a reason to be quick.
But does this really give us time to pause and think about what it is we are doing? And this is probably the more benign version of the world, because there is another version with a more state-controlled, a more statist approach towards artificial intelligence, in which artificial intelligence is seen as a weapon against geopolitical opponents.
There, again, you have no reason to stop and pause because you want to gain some sort of competitive advantage in this new Cold War, which is defined by technology.
Where does this leave us, especially us in Europe? We are trying to take a more values-based approach towards smart regulation of artificial intelligence.
My third observation, which brings us to the debate that we will have today, relates to our philosophical heritage and how it can help us make sense of the complicated world, but also of the complicated choices that we have to make. I am not a deep scholar of Aristotle, but I have read many of his works, and I do think that, as a thinker who examined the relationship between human nature, ethics, and the political community, there is a lot he can perhaps teach us by providing a conceptual and philosophical framework for thinking about these deep problems.
What would Aristotle think of a life in which we no longer needed to work because our work was done by a machine? Would this be a good life? Would this be a flourishing life? We probably know the answer. As he was a thinker passionate about our ability, and our responsibility, to lead a fulfilling life, I think we will find a lot of stimulus in his thinking for the choices we make about how we use technology.
I personally agree with those who think of technology, including artificial intelligence, as an intelligent tool. A tool, not a colleague, not a friend, not a citizen. Because where does this lead us? At some point, do robots get the right to vote, if we recognise them as fully independent? It may seem absurd now, but if we don’t draw the proper boundaries today, these are questions that may actually come back to haunt us. The dystopian image of the future in the latest Ishiguro novel, “Klara and the Sun”, in which an AI robot serves as a friend and companion, is not that far from reality today.
Of course, my last point is a critical question which I’m sure you may want to comment on: the implications for human rights. Do we have a right as individuals to know whether a decision that affects our lives has been taken by a machine? For the young kids here in the audience: if your resumes are screened by an AI algorithm, do you have a right to know that? And if yes, what are the implications?
I’m not saying that it’s a good or a bad practice. I’m just raising an issue that is already with us today, and asking what the implications are as we open up this sphere of possibilities. What are the implications for relationships of trust, which are so important in human communities? We can trust individuals. We trust each other because we develop networks of relationships. Can we trust machines? What does trust mean when you interact with a machine?
But there is no flourishing community, from ancient Athens and the city-states to the modern success of liberal democracy, that does not rest on trust. And trust presupposes networks of connection, which I am not sure we can necessarily establish with machines.
My last issue of concern, which is related more to biology and medicine than to ethics and philosophy, has to do with cognitive development. When I was at Stanford, 31 years ago, a friend and I crossed the US by car. There was no Google Maps at the time; I used regular maps. But our brain has developed over millions of years, and spatial understanding and orientation are important components of cognitive development.
What if we replace all these processes with machines that make our life easier? What will it mean, at the end of the day, for how our brain has been shaped by evolution over millions of years? Are we overriding the way we have developed biologically by simply starving our brain of certain cognitive functions which are absolutely critical?
We know what it means to write a nice essay and how important it is to be able to compose text. But the truth is that large language models probably do this as well as we do, or even better. The temptation to stop writing and just use an algorithm is there. Maybe we shouldn’t kid ourselves: this is happening, and not just in the educational system. But what are the long-term implications?
I think I could probably stop here by concluding that Greece, and this place, has a role to play in these discussions, in terms of connecting what my sister, Alexandra, has called “ancestral intelligence” with artificial intelligence. Is there a natural connection? I think there is. That’s why I think these dialogues are of great value.
I would love to see this event become an annual global gathering of interdisciplinary thinkers who think hard about these questions. I hope we’re going to make a good beginning here today. Certainly, this will have my full support and that of the AI Committee we have put together. I am sure the discussion that follows is going to be fascinating. Thank you again very much for being here today.