
United Nations calls for developing ethical AI and avoiding its potential dangers

By R. Anil Kumar

  • The UN is involved in global efforts to make AI and other forms of online technology safer

  • The United Nations Educational, Scientific and Cultural Organization (UNESCO) calls for the implementation of its Recommendation on the Ethics of Artificial Intelligence to avoid the technology's misuse

UNITED NATIONS, February 23. After a year of hype surrounding the latest version of ChatGPT and other new AI tools, governments are starting to make concerted efforts to bring in effective regulations on the use of this powerful technology, with the support of the UN science agency, UNESCO.

UNESCO first developed its Recommendation on the Ethics of Artificial Intelligence back in 2021, when much of the world was preoccupied by another international threat, the COVID-19 pandemic. The Recommendation, which was adopted by the 194 UNESCO Member States, contains concrete guidance on how public and private money can be channelled to programmes that benefit society.

Since then, a great deal of work has been done to put this guidance into practice, with legislators, experts and civil society representatives meeting at UNESCO forums to share information and report on progress.

Shortly after the 2024 forum, which took place in Slovenia in early February, three participants spoke on the safer use of AI: Aisen Etcheverry, Minister of Science and Technology in the Chilean Government; Irakli Khodeli, Head of the AI Ethics Unit at UNESCO; and Mary Snapp, Vice-President of Strategic AI Initiatives at Microsoft.

Etcheverry said, “We were one of the first countries to not only adopt the Recommendations, but also to implement them, with a model that ensures AI is being used ethically and responsibly.”

So, when ChatGPT came on to the market, and we saw all the questions it raised, we already had expert research centres in place and capabilities within the government. Our companies were already working with AI, and we had basically all the pieces of the puzzle to tackle a discussion that is complicated on the regulation side.

Over the last year, things have evolved, and we’ve seen an increase in the use of AI by the government and its agencies, so we launched something similar to an executive order, basically instructions on how to use AI responsibly.

One great example is at the agency charged with providing social benefits. They generated a model that allows them to predict which people are least likely to ask for the benefits that they’re entitled to.

Then they send people to go and visit those who have been identified to inform them of their entitlements.

“I think it’s a beautiful example of how technology can enhance the public sector, without removing the human interaction that is so important, in the way governments and citizens interact,” Etcheverry said.

Asked what the government is doing to protect citizens from those who want to use AI in harmful ways, Etcheverry said, “The UNESCO Recommendations really helped us to develop critical thinking about AI and regulations. We have been holding public consultations with experts, and we hope that we can present a bill to Congress in March.”

We have also been thinking about how we can train people, not necessarily in programming, but to empower those who are using and designing AI so that they take more responsibility for its outcomes from a social perspective.

On a related subject, we need to remember that there is a digital divide; many people do not have access to digital tools. We need regional and international cooperation to ensure that they benefit from this technology, she added.

Irakli Khodeli, Head of the AI Ethics Unit at UNESCO, said that tackling the digital divide is a big part of the UNESCO Recommendation.

One of the fundamental ideas on which the agency is based is that science and the fruits of scientific progress should be equitably shared among all peoples.

That rings true for artificial intelligence because it holds so much promise for assisting humans in achieving our socioeconomic and developmental goals.

That’s why it’s important that when we talk about the ethical use and development of AI, we don’t just focus on the technologically advanced part of the world, where the companies are actually wielding these tools, but we also reach out to the Global South countries that are in different stages of development to involve them in this conversation about the global governance of AI, Khodeli said.

Mary Snapp, Vice-President of Strategic AI Initiatives at Microsoft, said technology is a tool that can enhance the human experience, or it can be used as a weapon. “That’s been true since the printing press, and it’s true now. So, it’s very important for us, as an industry, to ensure that there are safety brakes, that we know what computers and technology can do and what they should not do.”

Frankly, in the case of social media, perhaps we didn’t address the issues earlier on. This is an opportunity to really work together early on to attempt to mitigate what could be some more negative effects while still recognizing the tremendous promise of the technology.

At the UNESCO meeting in Slovenia, Microsoft signed up to an agreement to develop AI on ethical lines. Explaining what that means in practice, Snapp said, “In 2019, we created an office of responsible AI that sits within [Microsoft President] Brad Smith’s organization. This office has a team of experts, not only technology experts, but also humanities academics, sociologists and anthropologists.

We do things like “red teaming” [using ethical hackers to emulate real attacks on technology], encouraging the AI to do harmful things so that we can mitigate that.

We don’t necessarily share exactly how the technology will work, but we want to ensure that we are sharing the same principles with our competitors. Working side by side with UNESCO is absolutely critical to doing this work right for humanity, Snapp stated.
