UN Security Council Debates Artificial Intelligence (AI)
Earlier this year, in April, I addressed this important topic in an article I wrote for my VK page. It generated an appreciated volume of feedback challenging the article's definitions of AI, though no consensus emerged on AI's capacity for autonomy. Before getting to the news items reporting the UNSC's meeting on AI, here's a segment of what I wrote previously that has merit in any discussion of AI:
Let's return to the closing part of the Great Learning Blog's AI definition: “A layman with a fleeting understanding of technology would link it to robots. They’d say Artificial Intelligence is a terminator like-figure that can act and think on its own.” Now, would that be incorrect? I don't think so. In fact, the initial exploration of AI was done by science fiction writers during the late 1930s and early 1940s. Perhaps the best-known author of robot stories is Isaac Asimov, who began musing and writing on them in 1939. The philosophy he used in approaching the subject was very advanced for a young man of 19:
In The Rest of the Robots, published in 1964, Isaac Asimov noted that when he began writing in 1940 he felt that "one of the stock plots of science fiction was ... robots were created and destroyed their creator. Knowledge has its dangers, yes, but is the response to be a retreat from knowledge? Or is knowledge to be used as itself a barrier to the dangers it brings?" He decided that in his stories a robot would not "turn stupidly on his creator for no purpose but to demonstrate, for one more weary time, the crime and punishment of Faust."
It was that philosophical approach that led Asimov to compose what's known as the Three Laws of Robotics, introduced in the 1942 short story “Runaround”:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov later developed what he called the Zeroth Law, which takes precedence over the other three:
A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
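The laws form a strict precedence hierarchy: each law yields to the ones above it. As a toy illustration of that structure (my own sketch, with hypothetical names, not anything from Asimov or the article), the hierarchy can be modeled as choosing the action whose gravest violation sits lowest in the ordering:

```python
# Toy model of the strict precedence among Asimov's Four Laws:
# Zeroth > First > Second > Third. All names here are my own invention.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humanity: bool = False   # would violate the Zeroth Law
    harms_human: bool = False      # would violate the First Law
    disobeys_order: bool = False   # would violate the Second Law
    endangers_robot: bool = False  # would violate the Third Law

def severity(a: Action) -> int:
    """Return the index of the gravest law the action violates
    (0 = Zeroth Law ... 3 = Third Law), or 4 if it violates none."""
    violations = [a.harms_humanity, a.harms_human,
                  a.disobeys_order, a.endangers_robot]
    for law, violated in enumerate(violations):
        if violated:
            return law
    return 4

def choose(options: list[Action]) -> Action:
    """Pick the action whose worst violation is least severe."""
    return max(options, key=severity)

# A robot ordered to stand by while a human is in danger must disobey:
# the Second Law yields to the First.
obey = Action("obey and stand by", harms_human=True)
disobey = Action("disobey and rescue", disobeys_order=True)
assert choose([obey, disobey]).name == "disobey and rescue"
```

The point of the sketch is only that the laws are not independent rules but an ordered hierarchy, which is what makes the "robot turns on its creator" plot impossible within them.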
Asimov was a prolific and very popular author, writing or editing over 500 books, and was known to nearly everyone in popular culture. Clearly, his Four Laws of Robotics ought also to be called the Four Laws of Artificial Intelligence, since robots utilize AI. But as far as I know, there has never been any discussion at any governmental level about the need for such a series of laws to govern what the blogger noted in his definition: that AI would and will be used for evil, as is already happening. What's a smart bomb, after all? As the existence of smart bombs proves, the cat is already out of the bag; the horses left the corral before the gate could be closed, and that presents a huge problem for humanity. Consider the problems associated with arms control in general, and the fact that we have one very aggressive hegemonic bloc, NATO led by the USA, whose stated policy of attaining Full Spectrum Dominance over the planet and its people was announced in 1996 and reiterated in 1999. That no discussion of controlling AI was conducted, despite plenty of fictional examples showing that controlling AI would pose a huge challenge for humanity, is a travesty, while contemporary news globally never addresses this very clear danger.
Finally, a discussion is beginning to take place, even if the forum discussing it is currently imperfect for the job; but imperfect as it is, at least an international discussion has occurred and will have additional sessions. Global Times reports on the UNSC session in its article, “Chinese envoy opposes use of AI to seek military hegemony at UN's first-ever debate on AI,” a position that's in tune with China's Global Security Initiative, which seeks to eliminate hegemony. While the report doesn't include a transcript of the entire discussion, it does encapsulate China's position, which most of the world will agree is sound. A quick Yandex search revealed the meeting was widely reported globally. The meeting was convened by the UK, the current UNSC President. UN Secretary-General Guterres provided opening remarks, which included the following:
Let’s be clear:
The malicious use of AI systems for terrorist, criminal or state purposes could cause horrific levels of death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale.
AI-enabled cyberattacks are already targeting critical infrastructure and our own peacekeeping and humanitarian operations, causing great human suffering.
The technical and financial barriers to access are low – including for criminals and terrorists.
Both military and non-military applications of AI could have very serious consequences for global peace and security.
The advent of generative AI could be a defining moment for disinformation and hate speech – undermining truth, facts, and safety; adding a new dimension to the manipulation of human behaviour; and contributing to polarization and instability on a vast scale.
Deepfakes are just one new AI-enabled tool that, if unchecked, could have serious implications for peace and stability.
And the unforeseen consequences of some AI-enabled systems could create security risks by accident.
Look no further than social media. Tools and platforms that were designed to enhance human connection are now used to undermine elections, spread conspiracy theories, and incite hatred and violence.
Malfunctioning AI systems are another huge area of concern.
And the interaction between AI and nuclear weapons, biotechnology, neurotechnology, and robotics is deeply alarming.
Generative AI has enormous potential for good and evil at scale. Its creators themselves have warned that much bigger, potentially catastrophic and existential risks lie ahead.
Without action to address these risks, we are derelict in our responsibilities to present and future generations.
The international community has a long history of responding to new technologies with the potential to disrupt our societies and economies.
We have come together at the United Nations to set new international rules, sign new treaties, and establish new global agencies.
While many countries have called for different measures and initiatives around the governance of AI, this requires a universal approach. [My Emphasis]
Unfortunately, the degree of mistrust that exists globally, thanks to the NATO Bloc's hegemony, massive violations of the UN Charter, and great lack of credibility on anything, especially adherence to treaties, makes this undertaking much harder than it might seem. I now include the text of the relevant portions of the Global Times article:
On Tuesday, the UN Security Council met for the first time on artificial intelligence (AI) risks, with the UN secretary-general warning of "deeply alarming" risks concerning the interaction between AI and nuclear weapons, and the Chinese envoy clearly opposed to the use of AI as a means to seek military hegemony.
Addressing the meeting, Zhang Jun, China's permanent representative to the UN, urged all countries to uphold a responsible defense policy, oppose the use of AI to seek military hegemony or to undermine the sovereignty and territorial integrity of other countries, and avoid the abuse, unintentional misuse or intentional misuse of AI weapon systems.
He also emphasized the necessity for human beings to commit to peaceful utilization of AI.
The fundamental purpose of developing AI technology is to enhance the common well-being of humanity, thus human beings should focus on exploring the potential of AI in promoting sustainable development and cross-disciplinary integration and innovation, as well as better empowering the cause of global development, the Chinese envoy noted.
UN Secretary-General António Guterres called for a race to develop AI for good purpose.
Guterres especially mentioned the interaction between AI and nuclear weapons, biotechnology, neurotechnology and robotics, which he described as "deeply alarming."
After briefing the UN Security Council meeting on Tuesday, Zeng Yi, a professor and director of the International Research Center for AI Ethics and Governance, Institute of Automation, Chinese Academy of Sciences, told the Global Times that using AI to empower international peace and security was the consensus of experts and government representatives at the meeting, but that it is a long way off.
It's essential to ensure human control for all AI-enabled weapon systems, Zeng said, noting this control has to be sufficient, effective and responsible. He also emphasized the need to prevent the proliferation of AI-enabled weapon systems since related technology is very likely to be maliciously used or even abused.
Song Zhongping, a Chinese military expert and TV commentator, told the Global Times on Wednesday that, similar to space technology, the military application of AI is an inevitable trend, and a few military powers are already exploring the use of AI on the battlefield.
What can be foreseen is that, driven by AI technology, the military power gap among countries will only widen, which then is bound to form an AI arms race, Song noted.
The incorporation of AI into nuclear weapon systems could increase the risk of devastating atomic warfare as it comes with the risk of AI going out of control, Song warned.
Guterres proposed that a legally binding instrument be concluded by 2026 to prohibit lethal autonomous weapons systems that function without human control or oversight.
In his five-point remarks, Zhang also highlighted the role of the UN Security Council in AI's military application.
"The Security Council should study in-depth the application and impact of AI in conflict situations, and take actions to enrich the UN's toolkit for peace."
Zeng described the UN as the most appropriate platform to play a leading role in addressing emerging challenges and guiding the development of AI to flourish in a responsible way and with the most inclusiveness, not leaving any country behind in this regard. The five permanent members of the Security Council should especially lead and set an example for the world in this regard, Zeng noted.
"It is important to adopt an attitude of openness and inclusiveness rather than isolating ourselves due to vicious competition. The future development of AI, including its governance, must be globally managed," Wang Peng, a research fellow at the Beijing Academy of Social Sciences, told the Global Times on Wednesday.
During his speech, Zhang also mentioned the necessity of taking a people-centered and AI-for-good approach so as to ensure AI technology always benefits humanity. Based on this approach, efforts should be made to gradually establish and improve ethical norms, laws, regulations and policy systems for AI, he said.
Chinese experts said that China is at the forefront of governance experience in new technologies, especially in AI and big data, due to the country's comprehensive legal and regulatory systems and robust policy safeguards.
Developing AI has been part of China's top-level design for national development since 2017 - long before the frenzy of ChatGPT - Li Zonghui, the vice president of the Institute of Cyber and Artificial Intelligence Rule of Law affiliated with the Nanjing University of Aeronautics and Astronautics, told the Global Times.
The New Generation Artificial Intelligence Development Plan (in English) that was released by the State Council of China in 2017 highlights the need to establish an initial system of AI laws, regulations, ethics and policies to form the ability to assess and control AI security. The plan said that by 2025, AI should be a major force driving the country's industrial upgrading and economic transformation.
China's AI-related regulations underscore the country's dedication to nurturing innovation while safeguarding security. The approach is beneficial for continued innovation in AI research and development and at the same time avoids the potential stifling effect of overregulation, Li noted.
The construction of a technology-oriented ethics system should keep up with the times when establishing rules and regulations. Integrating ethical requirements into the entire process of scientific research, technological development, and other activities keeps the risks of scientific activity controllable and ensures that technological achievements benefit the people, experts said recently at a sub-forum of the 2023 China Internet Civilization Conference that opened in Xiamen, East China's Fujian Province, on Tuesday.
The discussion about regulating AI has already begun, as we see from China's actions. Russia's response seems puzzling at first: it doesn't question the need for such discussion; rather, it questions whether the UNSC is the proper venue:
“Russia questioned whether the council, which is tasked with maintaining international peace and security, should discuss AI.
“‘What is necessary is a professional, scientific, expertise-based discussion that can take several years and this discussion is already taking place on specialized platforms,’ said Russia’s Deputy UN Ambassador Dmitry Polyanskiy.”
I’d say there’s little awareness that such discussions are occurring, and that the discussion at the UNSC has generated a great deal of global media coverage that's completely lacking for those “specialized platforms.” RT reports “Russia’s Army 2023 forum to host major AI event,” which is curious given there’s been no reporting of the UNSC meeting by RT or Sputnik:
The scheduled events include a major conference on AI, the 3rd Congress ‘Strategic Leadership and Artificial Intelligence Technologies’. The plenary session of the Congress is expected to be chaired by Russian Deputy PM Dmitry Chernyshenko.
“The participants of the event will evaluate the experience of developing and implementing AI technologies in various industries and areas of application, including in certain regions of Russia,” the event’s organizers said in a press release.
During the event, experts are set to “discuss existing mechanisms and results” of implementing AI solutions in Russia, as well as to address “foreign experience” in the field.
There’s no comment at Russia’s Ministry of Foreign Affairs website about the meeting either. However, the full text of the Statement by Chargé d'Affaires of the Russian Federation Dmitry Polyanskiy at UNSC briefing on artificial intelligence is available in English. Here is the fuller explanation of Russia’s position:
The Russian Federation attaches great importance to the development of advanced technology that should serve mankind and contribute to socio-economic progress. Artificial Intelligence (AI) ranks among the latest cutting-edge technologies, attracting considerable scientific interest. However, its practical significance and potential application, not to mention its hypothetical projection onto political processes, have yet to be assessed.
Taking into account the expert assessments that were made today, it seems that AI is in an early stage of its development as a technology. Too much is unknown about it.
Therefore, the choice of the Security Council Chamber as a platform for this discussion was not obvious to us. Even the concept note for this meeting says that there are no concrete arguments to back the idea of a natural nexus linking AI to issues of international peace and security, at least not yet. The concept note only lists a set of unallied technological trends, which we can, if we want to, discover and apply to any area.
When preparing for this meeting, I decided to ask the first-hand source, artificial intelligence itself, whether AI creates risks for international peace and security. Here is what it answered.
“Artificial intelligence can pose a certain threat to international peace and security, but it can also have a positive impact. Here are a few aspects to consider:
1. Cybersecurity. The development of AI may cause new vulnerabilities and threats in cyberspace. AI can be used to create powerful cyberweapons, automate cyberattacks and bypass existing defenses.
2. Autonomous weapons systems. The development of AI-based autonomous weapons systems could pose security risks, as such systems could decide to use force without direct human involvement. This could lead to unpredictable consequences and exacerbate armed conflicts.
3. Spreading disinformation. AI can be used to create and spread disinformation and fake news, which can undermine credibility of information and contribute to social and political tensions.”
Therefore we consider attempts to establish a generic linkage between artificial intelligence and threats to peace and security to be somewhat far-fetched. As you can see, AI agrees with us in that.
To begin with, the international community has to determine the nature and character of possible risks and threats, assess their scale, tentative response measures, etc. This requires a professional discussion based on scientific expertise, which may take more than one year. It is already underway at specialized platforms.
Certain military aspects of AI that may have a projection on global and regional security are also discussed in specialized formats. In the case of lethal autonomous systems, this is the Group of Governmental Experts of States Parties to the Convention on Inhumane Weapons. On the other hand, security issues in the use of information and communications technologies (ICTs) are discussed in their entirety in the specialized open-ended United Nations Working Group (OEWG) under the auspices of the General Assembly. We believe that it is counterproductive to duplicate these efforts.
Some other important points were made that also need to be included here, since weapons aren't the only important aspect of AI.
As with any advanced technology, AI can be beneficial to humanity or it can be destructive, depending on who uses it and for what purposes. Today, unfortunately, we are witnessing how the West, led by the United States, is undermining trust in its own technological solutions and the IT companies that implement them. American intelligence services interfere in the activities of the industry's largest corporations, manipulate content moderation algorithms, and carry out user surveillance, including through manufacturers' backdoors in hardware and software. Such facts are uncovered on a regular basis.
At the same time, the West sees no ethical problem in allowing the AI to let through hate speech on social media platforms if it turns out politically convenient to them, as in the case of extremist META corporation and its "tolerance" to calls to annihilate Russians. At the same time, the algorithms are set to promulgate fakes and disinformation, block anything that appears "wrong" to the owners of social networks and their handlers in the intelligence services, i.e. the truth that hurts the eye. In the spirit of the notorious "cancel culture," AI is made to edit digital data bulks, thus fabricating a false history.
Summing up, the main source of threats and challenges in this area is not the AI itself, but the ethically compromised pioneers of this technology from among "advanced" democracies. This aspect is no less important than the problems raised by the British Presidency as the reason for this meeting….
Russia is already contributing to this process. In our country, major IT companies have developed a national Code of Ethics in the field of artificial intelligence, which sets guidelines for safe and ethical development and use of AI systems. It does not establish any legal obligations and is open for accession by foreign specialized organizations, private companies, academic and public institutions. The Code has been formalized as a national contribution to the implementation of the UNESCO Recommendation on the Ethics of AI.
In conclusion, I would like to emphasize that no AI system should compromise the moral and intellectual autonomy of a human. Developers should regularly assess the risks associated with the use of AI and take steps to minimize them.
Not included are the very unequal economic effects of AI due to the gross imbalance of basic internet connectivity globally. Also note that there's no discussion of ensuring AI doesn't cause harm to humans along the lines of Asimov's proposed Laws of Robotics. Hopefully, a much larger number of humans will get involved in this discussion before AI gets too advanced. We've seen what those afflicted with Pleonexia will do to satisfy what is an addictive disease more powerful than any drug. And as the Russian delegate noted, there are very untrustworthy powerful nations already using AI for deviant reasons. So, important discussions about AI will be one of the topics pursued here in the future.
Like what you’ve been reading at Karlof1’s Substack? Then please consider subscribing to a monthly/yearly pledge to enable my efforts in this challenging realm. Thank You!