Prime Minister Kyriakos Mitsotakis’ conversation with Yossi Matias, VP of Engineering and Research at Google, and Peggy Antonakou, Country Manager of Google in Greece, at an event celebrating 15 years of Google in Greece

Prime Minister Kyriakos Mitsotakis participated in a conversation with Yossi Matias, Vice President of Engineering and Research at Google, and Peggy Antonakou, Country Manager of Google in Greece, at an event celebrating 15 years of Google in Greece. The remarks by the Prime Minister follow:

Asked about his vision for the use of artificial intelligence and the initiatives the government has undertaken on this front, the Prime Minister stated:

Well, first of all, congratulations on your first 15 years, which certainly have been very successful and very impactful. Thank you for the opportunity to collaborate on various important areas, such as the digital skills initiatives. I’m sure there is much more to come. I’m fascinated by AI, as any policymaker should be, both because of the tremendous opportunities and because of the responsibility we have to make sure that we manage the risks associated with this transformational new technology.

But let me just quickly talk about the opportunities from the perspective of a politician. We have set up a high-level advisory group to the Prime Minister, which includes some of our best AI scientists. Greece is actually overrepresented in the AI scientific community, and we feel that this is knowledge we need to leverage. The purpose of this group is basically to do three things. The first is to make sure that it helps us be more effective across various areas of public policy.

We may talk a little bit later about AI and climate and how we can use AI technologies to better predict natural disasters. But this is not just about climate: this is about health, this is about using big data to combat tax evasion. This is even about mundane but extremely important topics, such as traffic management and how you regulate the stoplights in a big city.

I fundamentally believe that there is no area of policy that will not be impacted by AI. I want to make sure that all our public policies at least take account of the implications of AI and have an AI dimension to them.

The second aspect of our work is how do we help businesses with their AI transformation? I’m not just talking about the big businesses, I’m talking about strengthening the startup ecosystem in Greece, which is incredibly active, relatively successful for the size of our country, in order for us to be producers and not just consumers of technology.

The third role is the role of the responsible regulator. We need to think about the broader implications and how we can contribute to the European thought process around responsibly regulating AI without stifling its innovative drive. But I think that coming from Greece, we have an additional responsibility to think more systematically about the ethics of AI. If we don’t do it in Greece, with all the philosophical background of classical Greece, who will? My vision would be to ensure that Greece becomes a global thought centre when it comes to the ethics of AI.

There are many fascinating topics that should be discussed, and these are not just theoretical discussions. These are real-life discussions, real-life implications that even have to do with the issue of fundamental rights. For example, is it a fundamental human right to know if I’m interacting with a human or a machine? These are topics which are already being discussed, and I want to make sure that Greece is at the forefront of this discussion.

Asked about the regulatory framework that should govern artificial intelligence, without hampering innovation, Kyriakos Mitsotakis noted:

I wish I had a clear answer to this, but many people who are much more knowledgeable than I am have not yet reached a conclusion about how to strike this balance: protecting against some of the negative consequences of AI without stifling innovation. The European Union in general has been at the forefront, I think, of responsible regulation when it comes to the Internet. The GDPR initiatives, I think, certainly moved in the right direction in terms of data protection. And the discussion around the AI Act, I think, will help us maintain this leadership when it comes to attempting to strike this balance.

But let me just talk a little bit about some of the risks that we need to address. It is worth mentioning that when we talk about creating what is essentially a global system of regulation, it is clear to me that in the past, when we’ve created intergovernmental models of cooperation, these have usually involved state actors. We will go to COP. Who is represented at COP? Who takes the floor? It is the countries. Of course, the companies are going to be there, but at the end of the day, these are national decisions. We commit, for example, to reducing our emissions by X percent.

In the case of AI, this will not work. We need to bring the big technology companies to the table and come up with a new concept of a global regulatory regime that, A, does not leave the companies outside, and B, ensures that all the big players are there. Because the biggest danger, especially when it comes to regimes that don’t necessarily share our democratic values, is that we agree to rules that restrict our own development of AI, while other countries ignore them entirely and use AI as a geopolitical tool. This is a real risk, frankly. It is already happening, and we need to be honest about it.

For us democracies, there is an additional risk where we need the cooperation of the big technology companies. This is related to the integrity of our democratic process and how we can use technology to foster and strengthen democracy rather than weaken it. We’re talking about misinformation, we’re talking about disinformation, we’re talking about deepfakes, we’re talking about the ability of AI now to easily create text, images and voice that could be used in a context where disinformation spreads like wildfire.

It is, I think, an obligation of the big technology companies to recognise that this is real, it is happening. And it is not just about content moderation; that is one aspect of what needs to be done. But the challenges will be much more sophisticated, especially for those of us who have been… The AI needs data. So the more data it has of me speaking publicly, the more accurate a fake image of me will be. If we’re not there already, we will be there very soon. 2024 is an important year for elections. I’d be very interested to also hear your thoughts on how the big technology companies can help protect the democratic process against those who seek to undermine it.

Asked about the areas where Greece would like to move faster in the use of artificial intelligence, the Prime Minister said:

I would single out three sectors, which I’m sure are also sectors of priority for Yossi. The first we already touched upon, and there the solutions are already, to a certain extent, applicable. This has to do with climate change and especially with more accurate forecasting models. Google already has flood forecasting models that are far more accurate than anything we could have envisioned even a couple of years ago, and making sure that we have access to this technology can actually be life-saving. The more sophisticated our weather forecasting models become, the better we’ll be able to prepare for the weather events that will happen.

The same is true for wildfires. We’re already testing technologies for identifying wildfires faster than we could before, which, of course, is critical for early intervention. How a wildfire actually spreads is frighteningly complex, but no problem is too complex to be addressed by detailed AI analytics.

So all these are real issues where we could actually play a leadership role. On healthcare, I think it’s very clear that the benefits can be tremendous: from offering – you mentioned this already – tools to our researchers for breaking down the components of proteins, to early identification of skin cancer, to making sure that our doctors make more educated decisions. There are numerous applications which could be put to good use.

And of course, when it comes to education, things are slightly more challenging, because the large language models are transformative in the sense that they can do something we never thought could be done. The ability to generate articulate speech and text was a prerogative reserved for the human species. Suddenly there is somebody else who can do it, and maybe do it even better than us.

The implications of the large language models for education: how we can use them constructively, and how we can ensure, for example, that we have accurate tools to identify when they are used. Take the simplest case: how do we know that an exam paper or an essay was not written by an algorithm? These are very, very important challenges that need to be addressed.

On top of that, I would add another layer of concern that I have, which is probably more relevant to the neuroscientists who will be joining you on your panel. The human brain is the product of an evolutionary process that has taken millions of years; it responds to specific sensory inputs, and it is trained by repetition and by interaction with the real world. If we take away these interactions and make it very easy for the human brain to do all sorts of things, what is going to be the impact on our long-term cognitive abilities?

Google Maps is fascinating. It’s great. We all love it, and the applications are tremendous. But I remember the time when I crossed the US 30 years ago using only my maps. There are skills involved in that, skills that we’ve always had. We need to make sure that making our lives easier does not rob us of important aspects of our cognitive abilities and of what it means to be human at the end of the day.

These are the really difficult questions to ask, and to answer. And of course, at the end of the day, there is the real question of who makes the decision. Is a decision made by an algorithm? And if yes, up to what level? If I apply to a big firm and I know that the CVs are screened by machines, essentially, and not by humans, does the algorithm have an obligation to offer me an explanation of the decisions it has made? I’m just scratching the surface here, but I want to highlight the degree of complexity involved in some of these decisions we will have to take, and why this is probably one of the most complicated ethical problems we will face: properly using the technology while ensuring that a regulatory framework is in place to adequately address some of these concerns.

Asked about the possibility of cooperation between governments, organisations and companies in the field of artificial intelligence, Kyriakos Mitsotakis underlined:

Well, first of all, we’ve never shied away from working with the big technology companies to solve problems of common interest. I think there is a logic to why we would continue to do so, because at the end of the day, we need each other. This is not just a question of operating on two different playing fields, where we simply regulate and tax and you do your own thing.

We need to work together and we need to understand where we come from. There is going to be an issue here, and that has to do with the fact that these concepts are incredibly complex. It is difficult when the politicians or the regulators are playing catch-up in terms of even understanding the basics of how a large language model works.

We need to acquire that basic level of knowledge. Otherwise, there’s a risk that because we don’t understand anything, we will overregulate simply because we want to be defensive, and say: “This is strange, we don’t understand it. Let’s stop it because we don’t know what’s going to happen.” So we also have an obligation to educate ourselves to the point where we can have an educated discussion with the technology companies, because at the end of the day, we need to find some common ground.

Certainly, when it comes to Google and Greece, we’re very much looking forward to working, as we are doing, on numerous projects which will be useful and make a real impact on people’s lives. We should not forget that there are simple things we can do, as we have done: offering basic digital skills, or looking at how we use technology to promote our cultural heritage. We can actually make a big, big difference. These are simple problems; we don’t have to think about all the ethical complexities that arise when we tackle more complicated issues.