Science4Parliament Podcast

Science4Parliament - Special AI edition - Part 3

Season 1 Episode 16

Text the Science4Parliament podcast here.

Welcome to the final special AI edition of the Science4Parliament podcast.  These episodes are summaries of the workshops which took place online in early 2024 to inform the IPU’s Artificial Intelligence (AI) resolution.

The resolution aims to encourage parliamentarians to consider the social and ethical impacts of this new technology and the issues that they, as decision-makers, should be aware of when considering legislating for AI  to ensure that its development and use is fair and beneficial for all of humanity.

The workshops were designed as part of the journey to the resolution, as a learning tool, and to stimulate interest and debate. The process was steered by two rapporteurs, Michelle Rempel Garner, Member of the House of Commons of Canada, and Neema Lugangira, Member of Parliament of the United Republic of Tanzania, who moderated the first two sessions, and I moderated the third.  

Session one covers the basics of AI, its development and use, and how it is impacting the world. Session two takes a deeper dive into the emerging impacts of AI on society and how governments need to work to harness potential benefits and mitigate any harms. The third session is an assessment of legislation, what is currently in place and how to plan for the future.

Speakers:
Carol Roach, Chair of the Internet Governance Forum (IGF) Multistakeholder Advisory Group (MAG), set out the basics of what is needed to regulate AI and gave advice to parliamentarians when they are undertaking this challenge.
Martin Ulbrich, senior expert on AI Policy with DG CNECT in the European Commission, was involved in the drafting of the white paper and the whole regulatory area of AI  in the EU. He gave an overview of the development of the EU AI Act, the world's first comprehensive AI law, which aims to ensure the equitable and responsible development and use of this innovative technology.
Finally, Quintin Chou-Lambert, senior programme officer with the office of the UN Secretary-General's Envoy on Technology, gave an oversight on the roadmap for digital cooperation, which provides a vision and direction for an increasingly digital world.

More information
The draft resolution, ‘The impact of artificial intelligence on democracy, human rights and the rule of law’, was published in July 2024 and adopted at the 149th IPU General Assembly (Oct 2024).  Available on the IPU’s webpage.

Links to the other special AI edition episodes:
Part 1 - https://www.buzzsprout.com/2249694/episodes/15896531
Part 2 - https://www.buzzsprout.com/2249694/episodes/15902198
Complete seminars are on the IPU’s YouTube channel.

You can text the show via the link at the top of the page, contact me at dnaughten@gmail.com or via social media:
LinkedIn:  https://www.linkedin.com/in/denis-naughten
X:               https://x.com/DenisNaughten
Blog:          https://substack.com/@denisnaughten
Web:          https://denisnaughten.ie/

Science4Parliament - Special AI edition - Part 3

SPEAKERS

Denis Naughten (host), former chair of the Inter-Parliamentary Union working group on science and technology and member of Parliament in Ireland.
Carol Roach, chair of the Internet Governance Forum (IGF) Multistakeholder Advisory Group.
Martin Ulbrich, senior advisor on artificial intelligence policy with DG CNECT at the European Commission.
Quintin Chou-Lambert, a senior programme officer with the Office of the UN Secretary-General's Envoy on Technology.
Dr. Tulia Ackson, president of the Inter-Parliamentary Union.


Denis  00:00

Welcome to the third in a series of special artificial intelligence editions of the Science4Parliament podcast. My name is Denis Naughten. I'm a member of parliament in Ireland and chair of the Inter-Parliamentary Union Working Group on Science and Technology. This episode is one of three summaries of workshops that took place online at the beginning of the year to inform the IPU's artificial intelligence resolution entitled 'The impact of artificial intelligence on democracy, human rights and the rule of law'. The aim of this resolution is to encourage parliamentarians to consider the social and ethical impacts of this new technology and the issues that they, as decision-makers, should be aware of when considering legislation on artificial intelligence, to ensure that its development and use is fair and beneficial for all of humanity. The workshops were designed as part of the journey to the resolution, as a learning tool and also to stimulate interest and debate. The process was steered by two rapporteurs, Michelle Rempel Garner, member of the House of Commons of Canada, and Neema Lugangira, member of the Parliament of the United Republic of Tanzania, who moderated the first two sessions, and I moderated the third session. This episode is a summary of the third session, on the legislation needed to develop an international regulatory framework for artificial intelligence. The session was opened by Carol Roach, chair of the Internet Governance Forum Multistakeholder Advisory Group, who set out the basics of what is needed to regulate artificial intelligence and gave advice to parliamentarians on what they should consider when trying to address this challenge. 

Martin Ulbrich, senior advisor on artificial intelligence policy with DG CNECT at the European Commission, who has been involved with digital issues within the EU for more than 20 years and is involved in regulating artificial intelligence within the European Union, gave an overview of the development of the Artificial Intelligence Act at EU level, the world's first comprehensive AI law, which aims to ensure the equitable and responsible development and use of this innovative technology. Please note that this seminar was initially broadcast at the beginning of March 2024, before the final approval of the Artificial Intelligence Act at EU level in May 2024. 

The next speaker, Quintin Chou-Lambert, is a senior programme officer with the Office of the UN Secretary-General's Envoy on Technology. He gave an oversight of the work on the UN Secretary-General's roadmap for digital cooperation, which provides a vision and direction for our increasingly digital world, while laying out the actions needed for the global community to connect, respect and protect all peoples. 

First, Carol Roach, chair of the Internet Governance Forum Multistakeholder Advisory Group. 

 

Carol Roach  03:06

For those who are not familiar with the work of the UN IGF, we are a global multistakeholder platform that facilitates the discussion of public policy issues pertaining to the internet. As parliamentarians, you have been given the challenging task of drafting rules and regulations to bring discipline to the complex technologies and applications of the internet, which, of course, includes artificial intelligence technologies. Indeed, while the increasing deployment of AI in our societies can empower and connect people, it can also further discrimination and deepen digital divides. As citizens of the world, we can only wish for a social contract in which the rules of behavior allow us to feel safe and secure in this online environment. In order to achieve the objectives of the social contract, experts can help inform parliamentarians in drafting legislation to ensure that human dignity, trust, safety and security, as well as privacy and accountability, are desirable outcomes of legislation, whilst not hampering or hindering innovation. Technologists have an obligation to help parliamentarians understand in some fundamental way how these emerging technologies and applications work, and to spotlight the opportunities as well as the risks or threats they present. We should also focus on the outcomes of legislation, as the main concern is about how these systems are being used. What are the direct effects and side effects of the use of artificial intelligence? Are there harms we need to defend against, such as those that threaten national security, local industries and your constituents? Finally, another point I would like to make: it is crucial to involve communities and people with diverse backgrounds, and to be inclusive. We can only realize AI's potential to benefit everyone through collective global efforts that draw on the wide range of views of policymakers, technologists, investors, businesses, civil society and academia from all countries and regions.

 

Denis  05:30

Martin Ulbrich, senior expert on AI policy with DG CNECT in the European Commission.

 

Martin Ulbrich  05:37

I'm mostly going to present today the AI Act, the regulation for artificial intelligence which is just about to be adopted in the European Union. But before I do that, I would like to make two introductory remarks. First, there's something about AI which we tend to forget and which is important to keep in mind, especially when you start regulating it. AI is all about numbers. AI is basically advanced statistics, about correlations. It's about huge data sets and very large models, and by very large I mean really very large: 200 or 300 billion parameters, very, very large numbers. So it's all about numbers, how it works. But the way we think about AI is all about words, starting with the expression AI, artificial intelligence. It was decided at a conference, in I think 1955 or 1956, to call it artificial intelligence. There's another term for it, deep neural networks, which could have been chosen as the official title, and that would have made a big difference. I mean, can you think of any deep neural network movies, or deep neural network characters in movies, or, nowadays, a deep neural network strategy? Would we have a deep neural network act in Europe? Would there be deep neural network conferences all over the world? It's very much because we associate this technology with intelligence. And of course, intelligence is something we associate with humans. So artificial intelligence, the term, very much evokes in us the idea of artificial humans. That's why we pay so much attention to it. But it clouds our judgment when it actually comes to how to regulate this technology, I think. The term artificial intelligence was coined decades ago, but even today we still do this all the time. For example, we talk about hallucinations when general purpose AI systems give you a wrong answer, when they invent something. We talk about hallucinations. 
Why do we call this hallucinating? Because we kind of associate artificial intelligence with artificial humans. But it's a machine. If I put my foot on the accelerator in my car and, instead of accelerating, it brakes, I don't say my car is hallucinating; it's malfunctioning. Yet an AI which gives you a wrong answer, we call that hallucinating. General purpose AI is a slightly different case, but it's still this kind of attribution of agency to a technological system, which is something we do naturally. Even if I drop the word hallucination and just say the AI invented something, AI doesn't invent anything either; all of these are human activities. So we very much treat it as if it were human, and that really influences the way we think about it and the way we think about regulation. 

That's the one thing. The other thing: about eight or nine years ago, in 2014, 2015, 2016, when the current AI boom started, everybody was really concerned about the impact of AI on the labor market, because AI could be doing so many things. One particular focus was self-driving cars. Around 2015, people were thinking there would be self-driving cars in large numbers on the road by 2020, and that, of course, would have a large impact on the labor market. We told young people then: by 2020 there won't be any lorry drivers anymore, and very few taxi drivers. And now, in 2023 and 2024, there are a number of countries where we don't have enough lorry drivers and not enough bus drivers; taxi drivers are perhaps less of a problem because of Uber, but there are really not enough taxi and bus drivers either. That is partly because of what we told these people in 2015. There are other reasons as well, you know, the aging of society, etc., but one of the reasons is that we told them, based on our appreciation of what was going to happen in AI, our forecasting, that there would be no job for them. So we have to be careful with these forecasts, which we today think are realistic but may not be the case in a few years' time. The reverse side of that was that in 2015 one thing very few people thought about was general purpose AI, ChatGPT. Obviously some people thought about it, because that's when OpenAI was created. But if you go back to 2015 and look at the business conferences and speeches, including the politicians', everybody was talking about self-driving car applications; nobody was talking about general purpose AI. So we got that wrong. It's not so rare; very often people get forecasting wrong, but it's something we have to keep in mind. 
When we're talking about AI and regulating AI, we have a bias towards attributing too much agency to it because of words, and we have a bias towards overestimating our own knowledge of the future. We tend to think that we already know today what's going to happen in five or ten years' time in technology fields, and experience tells us that's not really the case. 

The European Union is indeed the first jurisdiction in the world to have a proposal for artificial intelligence regulation. The European Commission, which is essentially the executive branch in Europe, published the proposal in 2021, and at the time there was really nothing on which to base ourselves, which is very unusual. Very often, when you do regulation, what you actually do is modify an existing regulation: you have a regulation already in place and you change something, you change a threshold, you increase a subsidy, you introduce an exception. And if you don't have that, if you regulate a new topic, you mostly base yourself on examples. We in Europe tend to do that with examples from our member states. If there's nothing at European level, we look at what the French are doing, or the Italians, or we look at regional level, maybe somebody in Bavaria or Castilla-La Mancha, or we look outside Europe, at Japan, the US, Rwanda, any country; you just look around. But in the case of AI in 2021, there was nothing, really totally nothing. There was no example anywhere on earth of how to regulate AI horizontally, as a technology. So we basically had two possibilities for regulating AI. Either you don't regulate AI as such, but you regulate specific AI applications wherever it's necessary: AI in face recognition, AI in motor vehicles, AI in education, wherever there's a problem. But then you've got a lot of regulations to write, and they might not always be coherent with each other, which is a problem. 
Or alternatively, and that is what we did in Europe, you write one horizontal regulation for AI, in which case you don't have that problem of inconsistency or incoherence between regulations applying in different sectors. The problem with that approach is, of course, that AI is an extremely versatile technology. You can apply it pretty much anywhere: education, manufacturing, agriculture, space engineering, public services, you name it. AI can be applied everywhere, and therefore having identical rules for all of these applications doesn't make much sense. So instead, what we proposed at the time was a regulation based on different risk categories. Our proposal included four risk categories, ranging from forbidden AI, a very small part of the market, to totally free AI. Totally free AI, in our estimation at the time, was something between 85 and 95% of the market. So most AI is actually not regulated under the AI Act at all; these are applications which, if they go wrong, nobody cares about very much. It may be bad for the companies using them, they might lose some money, but it's not really important, and therefore there's no interest in regulating that. That's the green part, the large part of AI, 80 to 90% of the market. At the European level we will have a European AI Office, which is going to deal mostly with general purpose AI systems, but most of the implementation of the Act will stay at member state level, because it very much depends on local market circumstances, and therefore it's very difficult to try to supervise all of this from one central place. I think that's roughly how the AI Act works. 
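The risk-based structure Martin Ulbrich describes can be sketched in a few lines of code. This is an illustrative sketch only: the four tiers follow the Act's well-known structure (prohibited, high-risk, limited/transparency-risk, minimal-risk), but the example use cases, the obligation summaries and the simple lookup are assumptions for illustration, not a legal classification tool.

```python
# Sketch of the EU AI Act's four-tier, risk-based approach described above.
# Tier names follow the Act; example applications and obligation summaries
# are illustrative assumptions only.

from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations (conformity assessment, human oversight)"
    LIMITED = "transparency obligations (e.g. disclose AI interaction)"
    MINIMAL = "no obligations under the Act"


# Hypothetical mapping of use cases to tiers, loosely inspired by the Act.
EXAMPLE_USES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}


def obligations_for(use_case: str) -> str:
    """Summarize the (illustrative) obligations for a given use case."""
    # Unlisted uses default to minimal risk: the unregulated 80-90% majority.
    tier = EXAMPLE_USES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk -> {tier.value}"


if __name__ == "__main__":
    for use in EXAMPLE_USES:
        print(obligations_for(use))
```

The point of the design, as the speaker notes, is that one horizontal law can still differentiate by application: the rule attaches to the risk tier of the use case, not to the technology itself.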

 

Denis  13:20

Quintin Chou-Lambert, senior programme officer with the Office of the UN Secretary-General's Envoy on Technology. He gave an oversight of the roadmap for digital cooperation. 

 

Quintin Chou-Lambert  13:31

This initiative of the Secretary-General, the high-level advisory body on artificial intelligence, was actually conceived several years back as part of that roadmap for digital cooperation that Denis mentioned at the beginning. So I would say this idea was born back in 2019; in 2020 the roadmap was officially launched. And then, through the technical breakthroughs in AI, the topic started getting a lot more political attention, and last year the Secretary-General announced the formation of this advisory body. So this has had a slightly longer gestation period than some of the other initiatives that have been popping up recently, but the questions still arise: what's the reason for having such an advisory body at the global level, the UN, and what is it actually doing? So why is AI governance a global issue rather than just a regional or national issue? Well, there are several reasons. One is the overall moral imperative to include the whole world's population, not just the subset of countries that have access to this technology. But also, when we think about risks, they can pop up in any country. Models can get connected to the internet and scale up very rapidly from any jurisdiction. So if governments are interested in containing the risks of AI, they would need to look at the whole world. Then, of course, from the commercial side, companies are very interested in having frictionless, low-cost cross-border operations. So the interoperability of governance between jurisdictions is very important for those constituencies; they don't want to be running into different regulations in different countries. So there are actually quite a lot of reasons why a global approach could be value-adding in this sense. 

And so the UN Secretary-General has convened this high-level advisory body on AI to analyze the situation and also make recommendations for international governance. There were over 1,800 nominations, of which 39 were selected based on criteria of expertise in these different fields, as well as gender balance and geographic balance by region. So that's one differentiating aspect of this initiative: it is, in a sense, one of the more representative, global and inclusive groups looking at this issue. So far, it has met over 40 times. In the first couple of months there was a bit of a sprint to launch the interim report, which came out at the end of December and is now available online. These experts got together to look at five different areas of AI governance: the opportunities that everyone's been talking about, but also the enablers, which are often missed out in the conversation around opportunities. What other governance measures need to be in place to actually harvest the opportunities, rather than having them remain theoretical? And one of the key findings of the interim report is that governance, and international governance, can itself be an enabler for harnessing the opportunities. So rather than the false dichotomy of regulation versus innovation, structuring governance in the right way, whether through regulations or other policy measures like trade incentives or tax incentives, governance can actually play a good role in enabling the industry to bring the opportunities to fruition. They also looked at risks and challenges of AI, with a wide range of views in this body around the urgency and, let's say, proximity of certain risks, whether they are current harms or future risks, and how far away those future risks might be. 

And then there is the issue of international governance, in terms of interoperability between jurisdictions, alignment with international norms and values, such as the Universal Declaration of Human Rights, and what kind of institutions could help harness the benefits whilst containing risks. It's also feeding into an overarching political process happening at the UN. Some of you may have heard about the Summit of the Future, where member states will be adopting a Pact for the Future, within which we expect there to be a Global Digital Compact annexed to the Pact. So this will be an intergovernmentally agreed compact at the leaders' level, and it's a very rare kind of development. I don't think there's ever been a leaders'-level forum negotiating a text specifically focused on this kind of digital technology. So this is a policy window where the AI advisory body's work and recommendations will be landing. In terms of a summary of what was in the interim report: it looks at the issue of global governance and identifies a deficit of governance on AI internationally, discusses those risks and challenges, opportunities and enablers, and then looks at what kind of institutional principles, design principles and functions could help address these gaps, address the risks, harness the opportunities and promote the enablers. There are five guiding principles which cover various areas, including data governance being in lockstep with AI governance, and alignment with international norms and principles. And then there are some institutional functions to address various different aspects of the governance challenge. 
These range from more of a kind of scientific assessment function to identify risks, to a pooling function to pool resources, talent, compute and data, to allow countries who don't yet have access to those resources to benefit from this technology, to the more intrusive kinds of monitoring and compliance functions that one might associate with some of the arms control mechanisms in the UN machinery. And the idea here was to start the conversation, not to propose that any of these functions, or some subset of them, are more important than others, but rather to put them on the table and see what the response is. And that's where we're at now.

 

Denis  19:43

The series of workshops was rounded off with closing remarks from Tulia Ackson, the president of the Inter-Parliamentary Union.

 

Tulia Ackson  19:52

I think we can agree that the workshops have provided us with an excellent overview of the current status of artificial intelligence. The experts have also outlined the scale of the challenge that we as parliamentarians must try to address. It is a big challenge, and perhaps a big opportunity for regional and international cooperation. Parliaments are rarely called upon to take action in an area where there is very little existing legislation or precedent, yet that is where we are today. We need to create a regulatory framework for AI almost from scratch, even as the technology continues evolving very fast. Dear colleagues, I understand that it is a challenging task, and that is why we need to work closely with the executive branch, civil society, academia and the private sector to ensure that we harness the opportunities of AI while managing the risks.

 

Denis  21:03

Thank you for listening to this special artificial intelligence edition of the Science4Parliament podcast. I hope that this brief overview of the Inter-Parliamentary Union's artificial intelligence workshops for parliamentarians will give you food for thought and will help you in your work as you strive to contextualize and regulate these new technologies as they evolve. The draft artificial intelligence resolution entitled 'The impact of artificial intelligence on democracy, human rights and the rule of law' was published on the 25th of July 2024, following extensive collaboration with parliaments and experts, and was adopted at the 149th Inter-Parliamentary Union General Assembly held in Geneva in October 2024. It is available on the Inter-Parliamentary Union webpage, ipu.org. The links to the other two workshop summaries are in the episode notes. Please do share them with whoever you think might be interested. If you'd like to listen to the seminars in their entirety, you can do so on the IPU's YouTube channel, which is @IPUorg, under the artificial intelligence heading, and all episodes of the Science4Parliament podcast are on Spotify, Apple Podcasts, or wherever you get your podcasts. 
