Earlier this year, in April, I approached this important topic in an article I wrote for my VK page. It generated an appreciated volume of feedback challenging the article's definitions of AI, though there was no consensus on AI's capacity for autonomy.
I see that I missed an important editorial that further emphasizes China's POV on AI regulation, which I'm adding via the comment section rather than trying to edit into the article's body:
GT Voice: How will US weaponize AI for its technological hegemony?
The UN Security Council on Tuesday for the first time held a session on the risks that artificial intelligence (AI) poses to the world, the latest evidence that regulating the development of AI has become a universal concern. While discussions of AI development norms are necessary, whether for the international community or any country, it is still disturbing to see the inclination of AI weaponization in some discussions among US lawmakers.
US Senate Majority Leader Chuck Schumer, who introduced AI legislation last month, said in a recent speech that "If we don't program these algorithms to align with our values, they could be used to undermine our democratic foundations ... The Chinese Communist Party ... could leap ahead of us and set the rules of the game for AI. Democracy could enter an era of steep decline."
Experience has made it clear that whenever US politicians associate some issues with values, it is always in a way that is distorted and adheres to a zero-sum game, bringing chaos and uncertainty to all concerned. Unfortunately, AI may have just become a new front in Washington's tech war against China. In fact, the US attempt to take control of AI is still fixated on keeping global markets and capital firmly in American hands.
In addition to the Congressional legislation efforts, the Biden administration is also reportedly preparing to restrict Chinese companies' access to US cloud-computing services, The Wall Street Journal reported earlier this month. The move may curb Chinese customers' ability to use AI-related services from US companies.
Apparently, AI won't be exempted from the US technological competition with and crackdown on China. The US wants to maintain its dominance in almost all high-tech fields and is willing to contain China's technological development by means of suppression, just like it has been doing in the chip war.
Such hostility in technological competition is an obvious cause for worry, especially when governments around the world are facing challenges as to how to use and regulate AI as well as balance its development and security risks.
Just look at what Washington has been doing in terms of chip suppression related to China, and it is not hard to imagine what norms and rules the Biden administration could be plotting in a political environment where AI is seen as an important weapon in ensuring US technological hegemony.
If the US abuses AI rule-making as a weapon against China, it will be a disaster for global AI development. There is much unknown about the specific trends and paths of AI development. Under such circumstances, if the US insists on introducing geopolitical factors, then future AI development will certainly face a split, which means two systems and development directions, bringing more confusion and jeopardizing AI global governance.
Another implication of the US maintaining its scientific and technological dominance and hegemony is that the high-tech development of developing countries will be under US control and bullying.
In high-tech fields like AI, if some countries maliciously obstruct the technological development of other countries, and artificially create technological barriers, they are actually undermining the rights and interests of developing countries to seek open cooperation. To prevent such a scenario, the rules of global AI governance must first ensure fairness for the development of developing countries.
Zhang Jun, China's permanent representative to the UN, proposed at the Tuesday Security Council meeting that five principles must be adhered to when it comes to guiding principles for AI governance - putting ethics first, adhering to safety and controllability, fairness and inclusiveness, openness and cooperativeness, and committed to peaceful utilization.
In stark contrast to what US politicians have emphasized in AI governance, these principles deserve the attention of the international community. https://www.globaltimes.cn/page/202307/1294711.shtml
Our friend B at MoA says AI is just another term for pattern recognition. So isn't the Russian Perimeter system - where computers use pattern recognition to detect a decapitating nuclear strike against Moscow and launch all of Russia's nukes in return - a form of AI?
I said as much in my April article when I used the movie "War Games" as one of my examples. Also, there are mixed ideas about what AI's exact definition ought to be: some say it's independent of human control while others argue it isn't. IMO, the writers of Star Trek: The Next Generation, and Asimov before them, did a good job of showing AI capable of independent thought unless restricted in some manner, as with Asimov's Three Laws. Even the possibility of those laws being bypassed was explored in the film "I, Robot." Unfortunately, too many billions have never read Asimov's works on the topic, which IMO constrains human opinion, discussion and curiosity. The prequel series Asimov wrote to his Foundation series also deals with the effects of robotics on human societies, and in it he poses most of the fundamentally important questions that human society ought to have begun discussing during the 1980s/90s. Hence my point that humanity requires a wide-ranging open discussion on the issue before AI advances much further in its capabilities.
Response to UN Security Council Debates Artificial Intelligence (AI) by Karl Sanchez.
I still find it amazing that in these AI discussions there is no mention of nanotechnology - which is where a reasonable level of actual AI might be achieved. The only way to achieve a serious level of AI is via analysis of the human brain, which is the only known source of conceptual reasoning and self-awareness. This research will be massively enhanced by the ability of nanotech devices to observe the functioning of the human brain at the molecular level, in real time, at scale.
Combine this with nanotech-based computing, which will be able to produce computers far more capable than existing technology, and the potential becomes obvious.
People need to go back and read those cyberpunk science fiction stories - not the ancient stuff from Asimov and the like from the '60s - to see how ubiquitous AI might actually impact society. There will be "good AIs" and "bad AIs." There already are - for example, a product known as "WormGPT":
WormGPT – Dangerous AI Tool for UnEthical Hacking
https://www.cloudbooklet.com/wormgpt-how-to-download-and-use/
As cyberpunk likes to say, "The street finds its own uses for things." AI will be a force for evading government repression as well as for imposing it. AI is no different from personal firearms or personal computers - it is a tool.
As for the US restricting China from accessing any US IT technology, that is risible. There is nothing the US is doing that China can't do and in vaster quantities and equal quality. In particular, AI is just software running on current hardware. There is nothing stopping China from matching and exceeding US AI development, especially given the fact that most of the students in US university STEM classes and information technology classes are Asians.
Those of us old enough to remember know that every generation brings a new round of "AI hype." In previous decades it was "frames," then "expert systems," then "machine learning," and now it's "generative AI." There is a big hullabaloo for some years, business invests heavily, and then the hype dies down when business realizes the new technology is not a game-changer.
The current AI boom is quite impressive, mostly because it is generally accessible over the Internet, whereas previous systems pretty much required installation in corporate or government data centers. This has spurred a higher level of hype than previous waves.
And make no mistake, the current AI technology of large language models has impressive capabilities - as well as significant flaws. GPT-4, for example, appears to be performing much more poorly now than when it was launched just months ago, according to a new study. Apparently exposure to the public is producing a "humanization" of the AI that diminishes rather than enhances its capabilities.
Of course, governments will make stupid use of the new technologies as they have made stupid use of previous physical technologies, including nuclear technology. This is not a surprise. But while hostile use of AI at scale could cause considerable damage, it's still nothing compared to the impact of large-scale use of nuclear weapons.
Perspective is needed rather than alarm.
I'm curious to see what happens when AI is matched with quantum computing. However, I echo the concerns of Russia, China and RoW as the Outlaw US Empire weaponizes anything that can be weaponized and uses it to support its hegemony.
I know your focus is on the geopolitical processes, especially vis-à-vis China, Russia and the US, but this just popped up in my feeds somehow - by Whitney Webb two years ago - and this AI stuff gives me the creeps. This is just one example of no doubt hundreds; it just happened to be in a link in an article I was reading a few minutes ago, shortly after reading yours.
Although I do agree that ultimately the problem is not the technology per se but rather who uses it and how, the negative possibilities are still endless, and I doubt any group like the UN can hold them in check. It's a bit like fentanyl: surely it would be much better if such a dangerous substance, despite having one or two beneficial uses, simply were not made, period?
Moreover, unlike with nuclear bombs whose use probably means the destruction of the user, AI can be used in so many relatively invisible ways such that even great harms perpetrated by it may go unnoticed or unattributed. The internet is already no longer a safe space. AI will ensure it never can be again.
"Meet The Israeli Intelligence-Linked Firm Using AI To Profile Americans And Guide US Lockdown Policy
Posted on July 2, 2020 Author Whitney Webb
An Israeli government contractor founded by a former Israeli spy has partnered with one U.S. state and is set to announce a series of new partnerships with other states and U.S. healthcare providers to monitor civilian health and use an IDF-designed AI system to profile Americans likely to contract coronavirus and to inform U.S. government lockdown policy.
A company tied to Israel’s military signal intelligence unit, Unit 8200, has recently partnered with the state of Rhode Island to use an artificial intelligence-based system developed in tandem with the Israel Defense Forces (IDF) to profile Americans potentially infected and/or “at risk” of being infected with coronavirus, then informing government authorities of their “risk profile.” Once flagged, state health officials can target those individuals as well as their communities for mandatory testing, treatment and/or more restrictive lockdown measures.
The firm, Israel-based Diagnostic Robotics, is poised to announce a series of new such partnerships with several other U.S. states as well as major U.S. hospital systems and healthcare providers in the coming weeks, according to a company spokesperson. The first of these announcements came on June 30 regarding the firm’s new partnership with Mayo Clinic, which will soon implement the Diagnostic Robotics’ “artificial intelligence platform that predicts patients’ hospitalization risk.” They have also been in discussions with Vice President Mike Pence about the platform’s implementation nationwide since April."
https://www.thelastamericanvagabond.com/meet-israeli-intelligence-linked-firm-using-ai-profile-americans-guide-us-lockdown-policy/
Agreed, there's much already happening that's illegal; what you just provided, for example, violates HIPAA. And what now constitutes the expectation of privacy where that Constitutional issue is concerned? There's also much more at stake beyond the Three Laws of Robotics. I believe Asimov wrote more about the ethical issues of AI, but I don't know where those writings are to be found; I'd need to search. But as you note, people need to get involved with this issue, as it's more important than gender and pronouns by many orders of magnitude.
dear karl, you've posted on moa a link to your vk page an extremely important essay on the holocaust & our beloved esteemed maria k's historical record...i have repeatedly attempted to share this with friends i know will appreciate its importance along with family that supports blindly mom dogma....repeatedly my server refuses to allow the link or vk post to go through, i've yet to have this with your substack (whenever i share it immediately sends) so lss: please put your invaluable post on Maria's historical reckoning on your substack, i need to share this. i assume others may also encounter this difficulty. & if i haven't already said a million times, thank you, karl. what you are doing is of invaluable importance & helping to keep the world find its way through this historic time.
thank you, so very much, karl. i've already forwarded it to many. thank you for all you do to help us hold the truth in a time when it is being so wantonly twisted. i apologize my eyesight is poor esp. late @ night, i thought i'd posted msm, instead got mom. :-/ emersonreturn
'Tis done Lousie.
I commend to everyone reading this one of the classic sci-fi novels on this issue: "The Two Faces of Tomorrow" by James P. Hogan.
Reviewed here at Goodreads: https://www.goodreads.com/en/book/show/2220766
Available for free download here: https://annas-archive.org/md5/73f7fff0568a68403eed580218352f89
This novel is not so much about the threat of a human-hostile AI, but rather what might happen if a sufficiently intelligent - but uninformed - AI takes actions harmful to humans in the course of trying to solve a problem presented to it. This is a quintessential human trait that could easily accidentally be transferred to a sufficiently developed AI. I view this as the real threat rather than AIs "turning on people".
Here's an example: during a thread on Moon of Alabama, I asked ChatGPT whether a Russian Kinzhal missile had enough kinetic/explosive force to penetrate a Ukrainian bunker 300 meters deep. During the discussion, the AI made repeated errors, including using a figure that was off by an order of magnitude in one case. Imagine if this AI were placed in control of an actual stand-off missile system. Either the system would be overly effective or totally ineffective, with concomitant effects on humans.
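To make the order-of-magnitude point concrete, here is a minimal back-of-the-envelope sketch in Python; the mass and speed figures are rough public estimates and are assumptions for illustration only, not authoritative data on the missile.

```python
# Rough sanity check of the kind any answer about bunker penetration should
# survive. Figures below are assumed, approximate public values used only to
# illustrate order-of-magnitude checking.

MASS_KG = 4300.0           # assumed Kinzhal launch mass (rough public estimate)
SPEED_MS = 3400.0          # assumed peak speed, roughly Mach 10 at altitude
J_PER_TON_TNT = 4.184e9    # joules in one ton of TNT

kinetic_energy_j = 0.5 * MASS_KG * SPEED_MS**2
tnt_equiv_tons = kinetic_energy_j / J_PER_TON_TNT

print(f"Kinetic energy ~{kinetic_energy_j:.2e} J "
      f"(~{tnt_equiv_tons:.1f} tons TNT equivalent)")
# ~2.5e10 J, i.e. a few tons of TNT -- nowhere near what defeating 300 m of
# overburden would take, so a single slipped exponent flips the conclusion.
```

The specific numbers matter less than the habit: a quick check like this is exactly what a system you intend to trust must perform, and exactly what the chatbot failed to do.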
OTOH, governments will almost certainly apply "AI" to weapons. This is already the case in some weapons systems. See here:
A Military Drone With A Mind Of Its Own Was Used In Combat, U.N. Says
https://www.npr.org/2021/06/01/1002196245/a-u-n-report-suggests-libya-saw-the-first-battlefield-killing-by-an-autonomous-d
This is discussed here:
UN fails to agree on ‘killer robot’ ban as nations pour billions into autonomous weapons research
https://theconversation.com/un-fails-to-agree-on-killer-robot-ban-as-nations-pour-billions-into-autonomous-weapons-research-173616
One major problem with the current LLM systems is that they have been programmed to respond in a manner that simulates human relations: they use the pronoun "I," for instance. If a system had instead been programmed to refer to itself as "this system," humans would be much less likely to consider it an actual intelligence rather than what it is: a set of language-simulation algorithms on top of a large knowledge base. Presenting it this way would likely lower the hype, at least slightly.
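As a purely hypothetical illustration of that suggestion (not how any real product is built), a crude post-processing filter could rewrite first-person self-references; in practice this would be handled in the system prompt or the decoding layer, and a regex sketch like this does not even handle verb agreement.

```python
import re

# Hypothetical sketch only: rewrite first-person self-references so the model
# presents itself as "this system" instead of "I". A real deployment would do
# this via the system prompt rather than regex post-processing.
REWRITES = [
    (r"\bI am\b", "this system is"),
    (r"\bI'm\b", "this system is"),
    (r"\bI\b", "this system"),     # must run after the "I am"/"I'm" rules
    (r"\bmy\b", "this system's"),
]

def depersonalize(reply: str) -> str:
    for pattern, replacement in REWRITES:
        reply = re.sub(pattern, replacement, reply)
    return reply

print(depersonalize("I cannot verify that claim, and my sources may be outdated."))
# -> "this system cannot verify that claim, and this system's sources may be outdated."
```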
The major problem at the moment is that people are ascribing depth and abilities to these systems that go beyond what the technology actually is. While these systems are incredibly useful, they remain far from having the cognitive capability of a human brain.
The real problem now is not that they're "smarter" than humans, but that they aren't. And even when they are, that is a problem. Someone once said, "If we make an AI that is as smart as Caspar Weinberger, we're all in trouble." Not because Weinberger was smart, but because he wasn't.
And they are incredibly useful - I have four of them (Bard, Claude, Perplexity and ChatGPT) sitting in the URL bar of my browser, ready to answer any question I have - rightly or wrongly, just like the human search engines. None of them have answered the "life, the universe and everything" question with "42" yet. Maybe Elon Musk's "truth-seeking AI" will have better luck. I'm not holding my breath.
Just ran across this new book on ChatGPT, which is available free if you have access to the file-sharing services listed:
ChatGPT Will Won't Save The World
https://sanet.st/blogs/booook/chatgpt_will_wont_save_the_world.4529824.html
Haven't read it yet, but apparently it makes an effort to distinguish actual value from the hype.
The most serious problem with AI is that the best work is always done in secrecy. I saw this in the 1990s, when certain large industries were already using AI for competitive advantage and didn't want to tip off their business competitors. Outside of academic work, AI development was all done "in house," and for the unwashed public and backward corporations, AI was a very difficult sell. I think the basic dynamics are still there, but now AI has expanded into many realms.
The second problem is that no one can have the "source code" for AI. AI trains itself, and humans don't and can't monitor AI's own development. As true as this is for AI generally, it's doubly true for neural networks.
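A minimal toy sketch (hypothetical, plain numpy, not any real product) shows what that means in practice: the training "source code" is a dozen generic lines, while the behavior that actually matters ends up in learned weights that carry no human-readable logic.

```python
import numpy as np

# Toy logistic-regression "model": the code below is generic boilerplate; the
# behavior of the trained model lives entirely in the learned weight vector w.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                              # toy inputs
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)     # hidden "true" rule

w = np.zeros(3)
for _ in range(500):                        # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w)))      # sigmoid predictions
    w -= 0.1 * X.T @ (p - y) / len(y)       # logistic-loss gradient step

print("learned weights:", w)
# Reading this script tells you nothing about how the trained model will judge
# a particular input; that knowledge is encoded in w -- and in real networks,
# in billions of such numbers -- which is the sense in which there is no
# "source code" to audit.
```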
As an active AI developer, I find it necessary that some basic concepts and definitions be established before any serious discussion can proceed, first and foremost that *Artificial Intelligence* is used to describe systems across a wide range of competency, from the extremely dumb to the reasonably smart. Category levels are important because they indicate what functionalities a system is capable of, as well as its potential pitfalls.
A rudimentary classification puts them into three categories: weak, semi-strong, and strong. Weak AI is basically a program following a well-defined set of rules or heuristics that makes it look intelligent. For example, the Russian Dead Hand system might be programmed thus: if 80% of seismic sensors near population centers register big disturbances within a 10-minute span, launch nukes. The mechanism behind this has as much intelligence as the spring in a mousetrap. Early AI research centered on mathematical models for deriving good heuristics and on the technical implementation of how to process the rules efficiently. As one might guess, any attempt to regulate this class of AI will be both meaningless and futile, roughly analogous to regulating mousetrap springs so they don't activate when a human touches them.
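A purely hypothetical sketch of such a rule (invented thresholds, not a description of any real system) makes the point that there is no "intelligence" here to regulate, just a fixed condition evaluated over sensor readings:

```python
from dataclasses import dataclass

# Invented illustration of "weak AI": a fixed rule over sensor readings,
# with no learning and no judgment. All thresholds are made up for the example.

@dataclass
class Sensor:
    near_population_center: bool
    magnitude: float         # size of the detected disturbance
    seconds_ago: float       # age of the reading

def launch_decision(sensors, share_required=0.8, window_s=600, big=6.0):
    near = [s for s in sensors if s.near_population_center]
    if not near:
        return False
    tripped = [s for s in near if s.magnitude >= big and s.seconds_ago <= window_s]
    return len(tripped) / len(near) >= share_required   # 80% tripped within 10 minutes

# Example: 9 of 10 sensors near cities report large, recent disturbances.
readings = [Sensor(True, 7.2, 120.0)] * 9 + [Sensor(True, 1.0, 120.0)]
print(launch_decision(readings))   # True -- the "decision" is as mechanical as a mousetrap
```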
Semi-strong AI comprises more recent technologies such as neural networks and language models. In essence these are algorithms that identify patterns and learn from experience, the major hurdle being that it is very hard to gather *enough* good and bad samples for training. They also suffer from unexpected and catastrophic failures, as well as accuracy problems. It is technically easy to rig a gun to a camera with optical recognition software that identifies friend-or-foe and have the trigger pulled automatically, but this becomes problematic if 5% of the time friends are misidentified as foes, which is why nobody, including terrorists, has ever seriously tried it. The main hurdle in regulating this class will be compliance; the tools are readily available online, and data sets are infinitely less conspicuous than, say, guns or explosives. If someone with expertise decides to go rogue, nothing can be done about it.
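The arithmetic behind that 5% figure shows why such a trigger is unusable in practice; only the misidentification rate comes from the text above, and the exposure count is an invented assumption for illustration.

```python
# Invented numbers for illustration: only the 5% misidentification rate comes
# from the discussion above; the encounter count is an assumption.
false_foe_rate = 0.05        # friends misclassified as foes
friendly_encounters = 200    # assumed friendly encounters over a deployment

expected_friendly_engagements = false_foe_rate * friendly_encounters
prob_at_least_one = 1 - (1 - false_foe_rate) ** friendly_encounters

print(f"expected friendly targets engaged: {expected_friendly_engagements:.0f}")
print(f"probability of at least one friendly-fire event: {prob_at_least_one:.4f}")
# ~10 expected engagements and a near-certain (>0.9999) friendly-fire incident,
# which is why nobody fields a trigger with a 5% false-foe rate.
```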
Strong AI is the stuff of science fiction, such as that depicted by Asimov. It is proposed to have near-human intelligence and to be capable of creativity and true autonomy (able to act independently of any external triggers), but the expert consensus (among people with multi-decade, real-world AI application experience) is that this class of AI will remain out of humanity's reach until at least the end of this century. While it may sound like a good idea to regulate this class too, it is anyone's guess what form it will take and whether any proposed regulation will remain relevant and meaningful.
Given the above, I actually consider Russia's leave-it-to-the-experts attitude the most pragmatic and realistic path of all parties. It is improbable that the Chinese are less informed on this matter, so I suspect this topic is purely being used as a tool for geopolitical maneuverings.
I agree completely with your post. Without a detailed analysis of how the human brain does cognition and conceptual reasoning, we are unlikely to discover how to produce "real" AI. Sometime in the next 50 years - barring, of course, some significant breakthrough in neuroscience - is a good estimate of the time frame.