Science4Parliament Podcast

Science4Parliament - Special AI edition - Part 1

Denis Naughten Season 1 Episode 14

Text the Science4Parliament podcast here.

Welcome to the first of the special AI editions of the Science4Parliament podcast.

These episodes summarize three workshops held online in early 2024 to inform the IPU’s Artificial Intelligence (AI) resolution, which was adopted at the IPU’s General Assembly in October 2024.

The aim of the resolution is to encourage parliamentarians to consider the social and ethical impacts of this new technology and, as decision-makers, the issues to be aware of when legislating for AI, so as to ensure that its development and use are fair and beneficial for all of humanity.

The workshops were designed as part of the journey to the resolution, as a learning tool and also to stimulate interest and debate. The process was steered by two rapporteurs: Michelle Rempel Garner, a Member of the House of Commons of Canada, and Neema Lugangira, a Member of Parliament of the United Republic of Tanzania, who moderated the first two sessions; I moderated the third session.

Session one covers the basics of AI technologies, how they are developed and used, and how they are impacting the world. Session two delves deeper into the emerging impacts of AI on society and how governments need to work to harness the potential benefits and mitigate any harms. The third session was an assessment of legislation in relation to AI: what is currently in place and how to plan for what may be needed in the future.

Speakers:
Tulia Ackson, the president of the Inter-Parliamentary Union (IPU), opened the workshops and spoke about the importance of the IPU’s artificial intelligence resolution to provide leadership and guidance to parliamentarians globally.
Yoshua Bengio, founder and scientific director of the Montreal Institute for Artificial Intelligence, gave an overview of the basics of AI.
Inma Martinez, chair of the multi-stakeholder expert group and co-chair of the steering committee at the Global Partnership on Artificial Intelligence,  spoke about the work of the partnership, the potential of artificial intelligence and the importance of who is actually doing the regulating.

More information
The draft AI resolution, ‘The impact of artificial intelligence on democracy, human rights and the rule of law’, was published on 25 July 2024, following extensive collaboration with parliaments and experts, and was adopted at the 149th IPU General Assembly in October 2024. It is available on the IPU’s website.

Links to the other two workshop summaries are below; please share them with anyone who might be interested:

Science4Parliament - Special AI edition -
Part 2 - https://www.buzzsprout.com/2249694/episodes/15902198
Part 3 - https://www.buzzsprout.com/2249694/episodes/15902412

The complete seminars are on the IPU’s YouTube channel @IpuOrg.

Any comments or questions?  Text the show at the link at the top or contact me:
Email: dnaughten@gmail.com
LinkedIn: https://www.linkedin.com/in/denis-naughten
X: https://x.com/DenisNaughten
Blog: https://substack.com/@denisnaughten
Web: https://denisnaughten.ie/

Science4Parliament - Special AI episodes - Session one

SPEAKERS

Denis Naughten (host), former chair of the Inter-Parliamentary Union working group on science and technology and member of Parliament in Ireland.
Dr. Tulia Ackson, President of the Inter-Parliamentary Union.
Dr. Yoshua Bengio, founder and scientific director of the Montreal Institute for Artificial Intelligence (MILA).
Ms. Inma Martinez, chair of the multi-stakeholder expert group and co-chair of the steering committee at the Global Partnership on Artificial Intelligence (GPAI).

Denis  00:00

Welcome to the first in a series of special AI editions of the Science4Parliament podcast. My name is Denis Naughten. I'm a member of Parliament in Ireland and chair of the Inter-Parliamentary Union working group on science and technology. This episode is one of three summaries of workshops that took place online at the beginning of the year to inform the IPU's artificial intelligence resolution entitled 'The impact of artificial intelligence on democracy, human rights and the rule of law'. The aim of this resolution is to encourage parliamentarians to consider the social and ethical impacts of this new technology and the issues that they, as decision-makers, should be aware of when considering legislation on artificial intelligence, to ensure that its development and use are fair and beneficial for all of humanity. The workshops were designed as part of a journey to the resolution, as a learning tool and also to stimulate interest and debate. The process was steered by two rapporteurs: Michelle Rempel Garner, member of the House of Commons of Canada, and Neema Lugangira, member of the Parliament of the United Republic of Tanzania, who moderated the first two sessions, and I moderated the third session.

 Denis  01:30

Session one covers the basics of artificial intelligence technologies, how they are developed and used, and how they are impacting the world. Session two sees a deeper delve into the emerging impacts of artificial intelligence on society and how governments need to work to harness these potential benefits and mitigate any potential harms. The third session took a look at artificial intelligence legislation, what is currently in place, and how to plan for what may be needed in the future. Tulia Ackson, the president of the IPU, opened the workshops and spoke about the importance of the IPU's artificial intelligence resolution to provide leadership and guidance to parliamentarians globally. Session one's experts were Yoshua Bengio, founder and scientific director of the Montreal Institute for Artificial Intelligence, who gave an overview of the basics of AI, and Inma Martinez, chair of the multi-stakeholder expert group and co-chair of the steering committee at the Global Partnership on Artificial Intelligence, who spoke about the work of the partnership, the potential of artificial intelligence and the importance of who is actually doing the regulating.

Tulia Ackson, the president of the IPU, opening the series of workshops.

 

Tulia Ackson  01:47

It's very important for parliamentarians of the world to understand how AI works in our own parliaments, but also in the work that we do as people who oversee what governments do, and also as people who advise the government. So it's very important for parliamentarians to understand this, because we are also, literally, custodians of human rights, and we are supposed to make sure societies are democratic. And when we discuss the impact of AI, it's not just the negative impacts. We have seen the positive impact of AI in our areas of development: social development, economic development, just human development in general. At the moment, we can't run away from it. So it's very important for the advisors of the government to be able to understand this subject very well, and that's why I'm here. But also, as one of the leaders in my own country, I'm the Speaker, so it's very important for us to understand and so participate in the shaping of policies in our own respective countries, but also at the IPU level, to be able to come up with the resolution.

 

Denis  04:04

Yoshua Bengio, founder and scientific director of the Montreal Institute for Artificial Intelligence.

 

Yoshua Bengio  04:11

I'll tell you about advances in AI, which I'm sure everybody's heard about. I'll talk about how this is on a course to give a lot of power to whoever controls those AIs, and that could be extremely useful if used in ways that are aligned with human needs, but it could also be very dangerous. And so there are a number of risks that I'll talk about. First, something about advances. The progress we've seen in recent years is pretty amazing. I've been in AI for over 30 years, since I was a grad student, and if you had asked me five years ago, I would never have guessed that we would be where we are today, even less 10 years ago. But you have to understand that one of the big factors making all this possible is that we're building these AI systems by training them on data, and we're training them with more and more data. Essentially, they're going through everything they can find on the internet: texts, images, videos, code, scientific papers, lots and lots of things. And they're trained in a way that they are extracting information from that data, but in a way that the information is not easily accessible to us, so we don't really know exactly what they've learned, and they're getting better and better. 2023 was a special year where these systems really reached a level of competence in manipulating language that has been very impressive to a lot of people and is a milestone in AI. So the question that is really important is: where is this going? And what scientists talk about is AGI, artificial general intelligence. When we have AI that is as competent as humans on cognitive tasks, on what requires the intellect, on pretty much every task, that's AGI, and this might be happening in just a few years or a couple of decades. There's a lot of uncertainty. The problem is, when we reach that point, or even before, what does it mean for society?

 Yoshua Bengio  06:17

So that's what I'll talk about. And if you look again at trends in the last few years, one of the things that scientists and technologists and companies have understood is that you can take almost the same recipe that we've known for several years and just scale the systems to more computational power and more data, make the algorithms more efficient, and you get pretty impressive progress. So you see charts showing how, with time, the state-of-the-art systems are becoming more and more compute hungry. This is also creating potential problems in the future in terms of their climate impact, but I'm going to focus today on what it could mean for society.

I need to introduce a number of concepts in AI. First, one of the things we are trying to do with these systems, and we're training them to do, is to reach goals, to tell us how to achieve some goal, for example. Or, if it is an AI that can do things like interact in a dialog with humans or act on the internet, we can set it a goal, and after having been trained on various goals, it can be applied to any new goal, not always with the same success. The idea that is important here is that intelligence is the ability to plan, to find ways to achieve those goals, and to have the knowledge that is required. But it is separate from which goal the AI should be applied to. You can see where I'm going with this. AI is dual use, meaning that you can set it to do things that are good or bad according to society's norms. And the smarter it gets, the more good things, but also the more bad things, it could be used for. Another important concept is self-preservation. Every living thing has an instinct to try to preserve itself, to survive. Right now, the kinds of systems we have don't have that sort of strong goal of preserving themselves, but they could. Somebody could give them that goal, or it could emerge as a side effect of the goals we give them. And if there is ever a superhuman AI that has that self-preservation goal, that means it will most likely resist being turned off. And you can see that could be a problem: its interests might not be aligned with human interests.

 Yoshua Bengio  08:57

So, the next question, I think, is: what can governments do? I testified before the US Senate recently about this, and I had three main recommendations. The first has to do, obviously, with guardrails and regulations, both at the national level and through international treaties. For example, we want to make sure that AI systems that are not safe are not built and not deployed. Of course, this raises all kinds of questions, like how do we determine that something is safe? In my opinion, this is something that companies should have the burden to demonstrate. The second main point is about the scientific challenge of understanding the risks, understanding how we could build AI that respects our norms and our laws, that doesn't harm humans, that can be controlled. This is something we need to invest in massively, in my opinion, or we stop developing AI. Unfortunately, even if a few countries, say even the United States, said, 'okay, we're going to stop developing AI', other countries would continue. So this is really not an option, and even just for military reasons, the US is not going to stop. So we need to find ways to make AI safe, and that's a scientific problem. We also, of course, need to make sure that as we're doing it, we're doing it right, in a way that these AI systems are not going to be abused to secure more power for individuals or companies or countries. And the last point is that even with the best intentions and the best laws and treaties, there will be people who will try to build dangerous AI systems, and we need to prepare for that.


Denis  10:47

Inma Martinez, chair of the multi-stakeholder expert group and co-chair of the steering committee at the Global Partnership on Artificial Intelligence.

 

Inma Martinez  10:58

Our mission is to ensure that the AI that is inherited by the world is fit for purpose, in the sense that it is fit for humans to use, that it contains the values that we want society to continue thriving on, and that it creates not just economic progress but also societal welfare benefits. AI was traditionally developed in academia, and in recent years it began to be developed across the board in many technology companies. So the corporate labs began to take precedence, now more than ever, and the objectives of corporate life are very different from the objectives of academia. In the private sector, you need to look at shareholders: how you create shareholder value, how you can increase the value of the shares in your company, how you make investors in the capital markets like your company and value it even higher. So the objectives are very different from the scientific endeavors that scientists in the lab have, and this is the separation. This is what Dr Bengio mentioned as well: the stakes and the objectives have put the development of AI into very different parts of the chessboard.

And should we at any moment press the stop button? This was proposed last year, and in some respects, of course, we have the right to do that. We have the right to preserve the type of world that we want to build. Of course we can; there is no law against stopping it. But I think more relevant than stopping is that we have the right to challenge it, and we must challenge that whatever AI comes into the world has a real, demonstrable usability and convenience and meaning. It really solves a problem, and it does so while preserving our values: with fairness, with equality, ensuring that it creates goodness in the world and not havoc. AI has to have, even in development, a real target: what is it that we're developing here, and for what purpose? It should not challenge human cognition in vain or for sport. Certain aspects of AI development are trying to aim for a machine that is more creative than a human, that can paint or compose music better than a human, and these are the areas where governments have to fight for humanity. Humanity continues to prove, at the physical level, every four years at the Olympics, that we beat records; at the physical level, humans continue to evolve. The same is true of creativity and the splendor of our minds, and we have a right to preserve this. We have the right to allow humans to continue to thrive in creative ways. And therefore, when a system challenges the creative processes of humans, we should really pay attention to what exactly this thing intends to do, because maybe we don't need it.

 Inma Martinez  14:13

And thirdly, and most importantly, AI is a tool, not an entity. Dr Bengio explained what happens when it is allowed to act irresponsibly and on its own, without any guardianship; to retain control over it, we need to treat it as a tool. It has to have these boundaries of service and usability: being able to be switched off, being able to be audited, etc. So, when it comes to democracy, as was previously explained, of course AI has allowed for the massive spread of misinformation and disinformation. But let's remember where it happens: it happens on social platforms. I wouldn't even call them media; they're not media companies. They don't have that status in my world. They're platforms. And of course, it continues to happen. It has happened before that they interfere with consumer and voter sentiment and feed people radicalized content. So again, these are its capabilities, but it acts upon a channel of communications. The two go together, and we must look at it in terms of the types of threats it generates. Of course, it threatens democratic representation: false bots and false accounts can put out content that confuses politicians as to consumer or citizen behavior. People appear to be complaining about X, and actually it's not people, it's a bunch of bots, which confuses how governments use these platforms to measure sentiment. It has to create accountability. We cannot have foreign actors meddling in national elections, creating false accounts, disseminating false information, and then, because they're foreign actors, nobody can hold them accountable. Probably the biggest threat is that it destroys trust. If people cannot trust the news, if people cannot trust the things that they read or the images that they see, then who is the bearer of trust for the world? And it makes it very difficult for governments to take that baton and lead if everything around us is a fake.
So the biggest threat is to accountability, and the fact that we need to start steering trust towards official government accounts and interventions, and really come down hard on the disseminators of this AI, which are the social media platforms.

Now, when it comes to human rights, it's really important to remember that data is what feeds artificial intelligence. So it all starts with data justice. If we cannot be assured that the data sets being used within an AI system are all the ones that should be there, then they're biased: they don't contain all segmentations, all ages, all representations; they don't provide equality across the data sets that have been fed into the system. It's all about the data. There is also the fact that when governments use artificial intelligence, especially in the justice system, it's been revealed that many times AI, rather than creating more equality or more equity, had been programmed to do the opposite and favor certain segments of the population. So we need to start looking at these issues very closely in relation to the data. And most importantly, the Global South has to be in the conversation in the development of all of these AI systems, because we need the different contexts, the different cultural heritages, the different languages, their input, because the world is our world. It belongs to everyone.

 

Denis  18:08

Thank you for listening to this special artificial intelligence edition of the Science4Parliament podcast. I hope that this brief overview of the Inter-Parliamentary Union's artificial intelligence workshops for parliamentarians will give you food for thought and will help you in your work as you strive to contextualize and regulate these new technologies as they evolve. The draft artificial intelligence resolution, entitled 'The impact of artificial intelligence on democracy, human rights and the rule of law', was published on the 25th of July 2024, following extensive collaboration with parliaments and experts, and was adopted at the 149th Inter-Parliamentary Union General Assembly in October 2024. It is available on the Inter-Parliamentary Union website, ipu.org. The links to the other two workshop summaries are in the episode notes. Please do share them with whoever you think might be interested. If you'd like to listen to the seminars in their entirety, you can do so on the IPU's YouTube channel, @IpuOrg, under the artificial intelligence heading. All episodes of the Science4Parliament podcast are on Spotify, Apple Podcasts or wherever you get your podcasts.
