54 Comments

"Animal House?" Or, Animal Farm?

Damn!! I missed that on my proofread, but it's fixed now. Thanks very much!!

(I kinda' thought it was a Freudian Slip, which gave me a chuckle.)

Please, sir, can I have another?

Though it does rather feel like some swamp monsters might have been put on double secret probation at this point (i.e., given various faculty positions).

I think it is supposed to be Animal Farm...

On a different note, in the early 80s I attended a free lecture that Isaac Asimov gave at NYU, which I always remember...

Came here to ask the same thing, LOL.

There is a problem with Asimov's conception of robotic law. While well-intentioned and indeed pragmatic, his three statements lack an actual grounding in the way robots are (and will be) technically implemented. Their software, as indeed all software known to mathematicians, is fundamentally different from human perception and sense-making. The clouds of data a machine generates can't be parsed meaningfully by the machine itself in ways that would allow for essential features we're used to taking for granted in a human mind. They are basically sorting algorithms: more complex versions of an apple parser that puts each fruit in one of two categories, divided by weight, or size, or colour, or whatever combination of input numbers in a high-dimensional space you're using to control the robot's effectors.
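To make that analogy concrete, here is a minimal sketch (illustrative only; the feature weights and cutoff are invented for the example) of what such an "apple parser" reduces to: the program only compares numbers, and the bin labels mean nothing to it.

```python
def sort_apple(weight_g: float, diameter_mm: float) -> str:
    """Assign an apple to one of two bins using nothing but numeric comparisons."""
    # The "decision" is plain arithmetic on the input vector; no notion of
    # 'apple', 'fruit', or 'meaning' exists anywhere in the program.
    score = 0.6 * (weight_g / 200.0) + 0.4 * (diameter_mm / 80.0)
    return "bin_A" if score >= 1.0 else "bin_B"

if __name__ == "__main__":
    for apple in [(210.0, 82.0), (150.0, 70.0)]:
        print(apple, "->", sort_apple(*apple))
```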

The problem is that AI only resembles cognition, while at the heart of it are numbers, entirely devoid of meaning beyond their mathematical definition. Just as the big data hype ran out when it became clear to everyone that data analysis will not become fully automated anytime soon, the robot question will lead to the realization that algorithms can only recognize numbers, but not things, objects, new situations, aspects of empathy, or the meaning of speech, and with that, how a "human" can be identified in the first place. All work-arounds are circumstantial by their nature, and while Asimov's laws make pragmatic sense, they do not on the level of machine interaction. Therefore, I suggest dropping Asimov's wording for something more adequate to the problem.

Also, if you want robots to be peaceful, don't send them to fight. As machines they are, as all technology is, ethically neutral. It is their use which opens the angle for moral reflection.

Asimov was decades ahead of everyone, as he was writing specifically about robots with positronic brains. In my previous writings on AI, I've demonstrated that with current abilities, AI still isn't 100% human-free or completely autonomous, which IMO you note in your comment. Now, altering what Asimov proposed is fine, as that means we're conversing about the problem and trying to develop a solution.

The "intelligence" of millions, perhaps billions, of flesh and blood humans are ethically reversed or ethically void and will therefore break the Asimov laws of Robotics, for fun, for profit, convenience, and even for their religion, tribe or country.

The dangers inherent in Human Intelligence are a larger problem than Artificial Intelligence... and modern contemporary AI has nowhere near the reasoning ability of Dr. Asimov's Robots with Positronic brains.

Yes, fiction is confronting reality. Lavrov made that very clear in his speech at the Science Fiction conference he addressed a week or so ago, which I reported on. Yes, the current human danger to Humanity is well noted and is currently being deterred, and its existence is even more reason for holding a conference on regulating AI.

Effective regulation that protects all humanity is certainly something that is needed.

As a non-expert in the field, with a bit of reading & experience only as a sporadic consumer, I am pessimistic about any effective global regulation.

And if the regulation is not global, that itself is a problem: nations will likely only be willing to sign onto the regulations if all other nations sign... and if there is adequate surveillance and corrective measures are implemented.

How does that happen?

Seems unlikely.

From what little experience I have using AI, it is obvious that these systems are limited by their databases and by the programming that biases potential responses, including bans on outlawed output.

A far cry from a servant of the user or a sentient human manufactured to have some limits to what the brain can decide to do.

As a Psychologist who has been actively observing the world for a long time, I see AI as a tool that will lead to significant change (if it does not lead to an efficient and effective war that ends with the eradication of civilization). It will:

1. greatly reduce the number of humans who will be able to/need to live a life that will necessitate productive work.

2. add to the wealth of humans proportional to their existing relative wealth.

I am left to wonder what use the drones will be in an AI Robot Earth. While consumers will be needed to provide the profits for Oligarchs, without Asimov-like rules built into the Robots, at some point there will be no need for the Oligarchs or the pet drones.

Your comment clearly points out the need for the international discussion on AI that Putin and others are requesting. "Star Trek" placed the development of a positronic brain in the 24th Century, which IMO is far too late. Current robots already have a limited ability to learn. Do chips need to become as small as a cell to construct a positronic brain? Even the massive HAL 9000 creation in Clarke's "2001" was endowed with emotion and reasoning abilities.

My research areas included learning, neuropsych, and emotion... and my views on these guides my comments on your most recent comment:

1. LEARNING is relatively simple to program, because learning is defined as a change in behavior due to experience (for computers, this is a change in output according to (continuing) input). IMO this is not the dangerous part of AI, even though what goes into databases can be dangerous because of what is included and/or what is excluded.

2. EMOTION is considered a DISorganizing influence on behavior. Emotion in humans is very variable (in many ways), and it is difficult to predict if, how, and to what extent emotions will modify behavior. Emotion interferes with what would normally be successful behavior. Emotion in AI brains would be one of the most dangerous things to allow.

3. REASONING is an intervening variable between input and output. It would include thinking about the data and making decisions. In a computer system, this should be a relatively easy program to write, but if you write it to include everything in the database, you are departing from the human's relative handicap of not having all data available to use, at least not at all times.

What you have left out, or what occurs to me at this moment, is MOTIVATION.

Motivation affects reasoning. It affects what data is available or considered important (in people at least). It even affects aspects of DATA input and initial processing (sensation and perception, respectively, in humans).

I would expect that Motivation, and the elimination of Emotion, are the concepts that are most related to Asimov's Laws of Robotics and the most dangerous aspects of AI in Positronic Brains.

The societal reorganization and continued exploitation of society by Oligarchs seem to me to be second-order, potentially existential, dangers.

Thanks again for your reply. I must commend the writers of "Star Trek" for trying to anticipate issues with AI in their screenplays concerning the Data character. Data was motivated to become more human-like, which became the basis for several different episodes. There're very few TV series I miss, but "Star Trek: The Next Generation" is one of those few.

There's another technology that's also a geopolitical concern: autonomous weaponry of the sort that is launched and "forgotten"--it doesn't need to be minded/piloted--somewhat like the computer system portrayed in "War Games", although it's not a flying machine.

In Fleming's "Moonraker," there was the sort of oligarch needing to be closely watched, with the one portrayed in the movie even more so. And today we have some with Hugo Drax's POV.

I am a layman, so excuse me. What comes to mind: what does mass psychology through "fake news" mean for directing and influencing individual thinking? Is there anything left of the individual when this process is perfected by "the masters" with the help of AI?

IMO, the outcome is dependent upon the individual, which is an unsatisfactory answer and reliant on that person's holistic context. Recall the "Star Wars" clones were all weak-minded and easily influenced by those using The Force. I have no idea how current SciFi is depicting AI.

I used to teach a course in what I thought of as Applied Normal Psychology to Business Students and did a module on Opinion and Opinion Change. It is useful in advertising or in helping to convince the boss or subordinates....

There are lots of data, experimentally established, on these topics, and everyone who communicates uses the concepts in their everyday communication, whether they are talking to individuals, trying to sell something, or presenting news (or narratives). Not knowing the explicit concepts established by the rules just makes the average person less efficient in using them to be convincing. If you saw a list of "tips", you would probably think most of them are obvious.

Whether you are a German in 1940 or a viewer of TV commercials, or you believe that your country, car, or pet dog is the best, you were affected by information you somehow received, and that information affected you more or less depending on whether you already had an opinion, whether you had a psychological investment in the idea to begin with, who provided the information, and many more factors.

Those who know the techniques don't need AI to be successful, even at the national level, if they have enough money or power that allows them to control the information, how it is presented, how often it is presented, and whether the most effective type of communication is directed to the type of audience that is most susceptible to changing opinions because of the aspects of their current beliefs.

It is not AI that allows mass market brainwashing. After all, each different culture successfully has done that to those who adhere to their culture. It is the money and power to reach the audience over considerable time, with the right messages, that leads to success. AI could help it happen, though.

AI could help compose and send the right type of message, tailored for the individual or relevant subset of individuals, repeatedly, modifying the message according to the goals of those in charge of the program.

I presume there are people who understand all this. Some work in advertising, as campaign consultants, and for a large variety of official and surreptitious NGOs and Govt. agencies around the world whose goals are to give people ideas, or combat existing beliefs and change them to different ones.

Successful mass brainwashing certainly makes people less individualistic, but only on the topics promoted by the thought control. BTW, I used cultural differences as an illustration because everyone can see that there are extreme differences in behavior between different countries (due to what we usually call "culture"), for instance, in the acceptability of public violence, sexual behavior, dress codes, and discrimination (for or against) toward a variety of categories of individuals.

At best, I answered your concern, indirectly, but I will leave you with the "operational" definition used in the study of opinion and opinion change:

What is an opinion?

It is a judgment of fact.

I used to give a long lecture on the operational definition of "fact".

In studying opinion change, for practical purposes a "fact" is simply ASSUMED.

In rereading your post, I get the idea you are interested in the effect of social pressure in making people conform to the views of large groups, just to avoid being ostracized. Research showed that being convinced is positively correlated with the degree of unanimity:

the higher the % of the group that believes, the more likely the average non-believer will tend to change their opinion.

I echo TC's compliment! There's an old rock tune I always enjoyed as it reminded me of things I needed to remain aware of--"Madison Avenue Man." IMO, knowing that manipulation of information is rife gives us an advantage in the quest for objectivity.

Thank you for the comprehensive response!

I’ll go through it a few more times…

As an almost 70-year-old, I have the idea that the influence of opinion is gaining more and more grip on my environment, family, and friends. Independent thinking seems to be decreasing and the influence of media and politics seems to be increasing.

The first thing I was wondering, now that you're talking about group intelligence, is whether this isn't decreasing in general? Or whether the height of human civilization and intelligence is already behind us? The ancient Greeks, the Chinese.

I use Google Translate, so my thoughts can come across as a bit confusing :)

You ask good questions, whose answers are far more complicated to explain to the layman without dumbing them down so much that they are no longer scientifically valid. I can try...

1st I have to start off with the operational definition of Intelligence used by scientists: It is the score obtained on an Intelligence Test, not how smart someone is or how successful, or whether the person is a genius at something, like doing math in their head very quickly.

IQ in other words.

And IQ tests are biased to favor the dominant culture in a country.

In fact there are quite a few mental abilities, all of which can be measured and allow people or groups to be compared to each other. For example, those with high math abilities may have lower-than-average capabilities in other mental skills.

How can you validly even compare a math genius who is average (aka "normal") at everything else, to someone who has a photographic memory? How can you compare people who get no formal education in their society with people in another country where almost everyone has 10 or more years of schooling?

Another example is that left-handed people tend to be more verbal than righties and are more likely to be lawyers or actors... because the brains of right- versus left-handed people differ in their wiring.

You can also look up the Whorfian Hypothesis (nothing to do with Klingons), which states that one's language has a big effect on how you perceive the world. Inuit, for example, have different words for 7 types of snow and that helps them cope with their environment. I believe the native Hopi people have no tenses and this leads to communal races ending in everyone crossing the finish line together. For them, the past, present and future exist together.

How could they develop physics if they only understood a language without tense?

Again, I can't answer the question in the way you probably meant it, but there are group aspects of mental abilities.

When I was still in school, I attended a small seminar given by Julian Jaynes, from the Princeton Psych Dept. He was a very impressive fellow and spoke many languages, including ancient Greek, Latin, and others. Instead of doing scientific studies on living people, he studied ancient civilization, reading the surviving works from those times, trying to learn how they thought. His overall conclusion was that 1000s of years ago, the development of the brain got complicated enough that people started developing insight.

Insight into their own thought processes as being their own.

Before this, he believed, people saw the natural world as being controlled by outside forces, like Gods, and that their own thoughts were NOT their own.

Interesting idea, although I am skeptical it can be proven to be correct. I believe Amazon sells one or more of Dr. Jaynes's books if you are interested.

Thank you!

"Where did the word robot come from?

Who did invent the word "robot" and what does it mean ...

Czech

The word, derived from the Czech noun “robota”, meaning “forced labour”, is an accomplishment of Capek's older brother, the cubist painter and writer Josef Capek."

One could justifiably suggest that anyone who has not inherited sufficient means or resources to live an independent life constitutes a 'forced labourer'. Thus the three laws are easily adapted to a moral code for workers and peasants alike.

1 sounds very much like the second commandment, 2 and 3 could be similar to the first commandment - however you interpret that particular contractual agreement with the 'Universal or Cosmic Architect'.

As always, thanks for your comment. The Wiki article on the Three Laws is very extensive, and I probably should've linked to it in the body of my article, as it explains how he arrived at them: the result of collaborative thinking and reading of other authors' works in his formative period, the 1930s-40s. Much is apparently explained in his autobiography, "In Memory Yet Green: The Autobiography of Isaac Asimov, 1920-1954," which was published in 1980, twelve years before he passed. Few know that Asimov is one of the most prolific writers of all time, yet none of his works are incorporated into the core of English Literature. Why that is so is an excellent question.

Always enjoyed the stories I have read. I consider that science fiction was merely a vehicle for his storytelling, settings that allowed him to address far more subtle and complicated issues. It appears that the autobiography is only available to 'borrow' at Internet Archive. Otherwise it seems to have acquired the status of a semi-rare 'collector's item' on the second-hand market, which says something positive to me. He wrote during interesting times with some insight, which of course will delay any recognition by the Culture police.

I'll check out the Wiki article. Cheers.

He wrote three autobiographies; the first volume is the one I noted. I didn't know then that it was the first of two volumes and that he wrote a third that was published posthumously. Somehow I need to find time to learn more about him.

'I, Asimov: A Memoir' perhaps? Published in 1995 according to the advert. More pertinently 'finding the time is the issue' ;o)

I could either stop replying to comments, since they're becoming more numerous, or sleep less ;-)

Human greed ensures that this becomes reality. Why should it be any different with artificial intelligence and autonomous robotics? Unlimited human cruelty has manifested itself in British and German concentration camps, in Guantanamo, in the dropping of nuclear bombs, in every war, on the streets of various major cities, currently in Gaza, etc. All this has happened and is happening out of greed.

Self-learning AI? First of all, this AI will eliminate all the bugs of human programmers. After this step, will the AI also perceive the algorithms that control and limit it as bugs? What about human emotions? Will they be relevant for an AI? So many questions.

And then there are these alleged philanthropists. Usually either oligarchs driven by greed or cases for the closed psychiatric ward, such as people like Yuval Noah Harari. When Bill Gates and Larry Fink publicly refer to the urgent need for population reduction, and B. Gates laughingly answers a CNN question about his profit growth during the corona experiment with "2000%", then we should not assume that people like Elon Musk have noble intentions.

As I always put it: the only thing that is evolving is technology. But the human species is still at the same stage of development as in pre-antiquity.

But we now have a new messiah called Trump. Oh yes, and Elon Musk. They will save us. LOL AI will learn. Humans haven't for thousands of years.

Man is a misconstruction. Either by evolution or by God. Depending on which creation you believe in. Neither evolution nor God reckoned with greed.

"And then there are these alleged philanthropists. Usually either oligarchs driven by greed or cases for the closed psychiatric ward, such as people like Yuval Noah Harari."

Interesting. I'm no fan of Harari. Basically I place him in the same category as Jordan Peterson, Sam Harris and other so-called 'thought leaders.' People whose job it is to make their audience feel smarter than they actually are, and thereby lead them to conclusions that may not actually be valid.

I don't see how that qualifies any of them for the nuthouse though. These are very clever people who've mastered the art of persuasion. Frankly, the fact that someone would attend one of their talks or buy one of their books speaks to me of how desperate some people are to be told what to think.

Nature generally deals harshly with greed, as when a species that overshoots its resource base suffers a massive culling; thus the warnings by many about ecological overshoot. Most "bent brains" have arisen in the West, likely because there're no cultural brakes to rein in their antihumanness. IMO, if Humans fail and become extinct, Nature will replace us with another, likely similar, creature. There's a very interesting question brought forth by Lovelock and Margulis at the end of the 1900s asking if Nature itself is sentient, acted out in SciFi by George Lucas in "The Phantom Menace." Curiosity asks whether Gates was always evil or became that way once he got filthy rich.

Thanks Karl... the possible implications are potentially very scary... the fact is, collectively as a species we have not shown the ability to surmount obstacles with other human beings easily or with much grace.... it would be nice to believe that politicians would have a higher vision for the future of us and for the interface with AI... perhaps Putin and some have this vision, but great foresight seems like a rare trait and is generally missing in politicians, who are more concerned about the short term... here's to hope and to being optimistic about the future...

Asimov saw those contradictions 80+ years ago. They gave him an award he wasn't expecting--the Hugo--but that wasn't what he wanted.

a very bright and fascinating person he was!!

The most important question of scale, specifically nano scale, injected into bldg 7 and one and two, and of course through both the spray-painted skypainting and the needlerape viruganda depopulation agenda scamdemic Harmacide hacksxxxine

Nanothermite is said to have sliced through the steel like a hot knife through butter.

Thank you Karl,

Inspired by you, I started the Foundation series again, and received the books on an e-reader in a few seconds.

How would Asimov like that :)

Did you get the prequels too? I can't recall what those educational devices were called in his books, but yes, he would have liked that.

Asimov's laws were written for sentient robots: programs intended for machines that think, reason, and have emotion. Not for mere machines, which is what these current robots are.

Yes, and those sorts of machines are coming, thus the need to discuss their regulation and that of AI.

The "Three laws" are an idealistic construct that wouldn't survive for a minute in the real world given malice and software bugs.

But another concept from science fiction that could work very well instead can be found in the works of Frank Herbert: The "Butlerian Jihad" in Dune.

In Frank Herbert's Dune universe, the Butlerian Jihad was a crucial historical event that shaped human civilization. It was a crusade against thinking machines and artificial intelligence that took place approximately 10,000 years before the main events of Dune.

The jihad began when humans realized they had become overly dependent on thinking machines and artificial intelligence, to the point where they were at risk of being controlled or replaced by them. The movement was named after Jehanne Butler, who initiated the rebellion after discovering that thinking machines were making life-and-death decisions about human medical care; ironically (or presciently), she found the machines were running a mass abortion program, among other things.

The crusade resulted in the complete destruction of "thinking machines," including computers and robots, and led to a fundamental reorganization of human society in the Dune Universe.

While writing, I lamented not knowing the Dune story well, having only watched the first movie and never read the book. Given the current trend for using AI in obtaining medical diagnoses, there's yet another reason for a discussion about AI's regulation. Thanks for your excellent comment!

I think a major point of robotics is being missed in the debate, probably because it's still on the distant horizon, but the outline of that time has already been dealt with in science fiction. That is the notion of transplanting one's intelligence into a cloned or manufactured replica of oneself. Essentially the quest for immortality. I see characters like Bill Gates as the frustrated potential market for such technology. They have the money not only to pay for the process, but to push it along by funding research with that ultimate goal. The frustration comes as a result of knowing that they were born too soon to actually benefit from that work and that they have to resign themselves to growing old and dying along with the rest of us. It's actually a modern version of the vampire myth or Mary Shelley's Frankenstein, with the added enticement that it appears to actually be possible if you just throw enough money at it.

Though Asimov's Laws seem pragmatic, I wonder how practical they would be to implement. The First Law contains a contradiction in imperatives for the robot. It could be faced with a situation where it must act to save a human, but doing so at the same time puts another human (or multiples of them) at risk.

Robots are fundamentally different from all other tools because they have some degree of agency. At the same time, that agency is logically an extension of its owner's decisions. That says there's some degree of liability for the owner should the robot cause harm of any sort. Unless it's the government's robot and in that case you're collateral damage.

Using robots for terrorism is a theoretical possibility, but in that field what could a robot do that a human couldn't do better? Terrorism demands strong motivation on the part of its users, and machines (so far) don't get hyped up on the rush of fanaticism.

I worry that we're bringing a colonial mindset to the subject of robots/AI. Though I'm not suggesting he thought that way, Asimov's Laws could've been the same ones plantation owners imposed on their slaves in the early Americas and elsewhere. To their owners the slaves were property with a purpose. Robots will be the same, but who cares because they're just a machine, right? Nevertheless it seems at least theoretically possible that Artificial Intelligence will one day be effectively indistinguishable from the human kind. If and when that happens, shouldn't there be laws protecting AI from us?

Your excellent comment points to why Asimov wrote so many robot stories: so he could explore the various possibilities you mention. IMO, it's very clear he wanted Humanity to discuss what he saw as a very important problem it would eventually have to face. We are now there, which is why I've written what is now my third article on the topic. The Russians certainly understand where we humans are now and the imperative need to deal with this issue.

This one is fascinating. Thanks, Karl.

Israel is already using quadcopter drones (a form of robot?) to murder Gazans, while the US has produced robot dogs that shoot guns.

As I wrote, such weapons are already being employed, although they aren't 100% AI.

The Asimov robot rules will be honored like the creaky pledge re Genocide: "never again"! Yeah...right.

I rather doubt many people know of Asimov's Three Laws, which is one of the reasons for my article and those preceding it. Putin for the last several years has underscored the importance of AI and the need for a serious conference about regulating it. He even said the nation that first masters AI will control the planet. So, he well understands the strategic stakes at play, and his team is very well informed on the matter. What needs to occur is greater outcry for the global conference Putin and other leaders desire on the subject. I'm performing my part. A great many other substack writers like yourself read what I write--thank you--and can also lobby for that discussion. I understand your cynicism, but there's still an opportunity to make a difference on this issue.

Let's all keep in mind that after Asimov constructed his Three Laws, the rest of the science fiction world spent the next fifty years poking holes in those same Three Laws, demonstrating that no matter how you work it, situations will arise where humans will be harmed by robots. It's unavoidable precisely because humans design them and their applications.

It would be interesting to see what an actual human-equivalent AI would say about those Laws.

It's like dealing with animals. I don't trust animals - at all. Why? Because animals have personalities and "bad days" just like humans. They are also on a Bell Curve like humans - meaning some of them have a lot of any particular attribute and some have almost none of that attribute - just like humans.

One might say, "Well, these robots are all identical." Nothing is identical, especially not after they've been in use for a while. Each robot's experiences will not be identical, and if they "learn" from experience, they will begin to diverge very quickly in their responses. This is already obvious in the current crop of "AIs."
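That divergence is easy to see even in a toy model. A minimal sketch (illustrative only; the "learner" here is just a running average, not any real robot software): two units that are identical at the start, fed different experience streams, quickly stop giving the same answer.

```python
import random

class RunningEstimate:
    """Trivial 'learner': its response is just the mean of everything it has experienced."""
    def __init__(self) -> None:
        self.n = 0
        self.value = 0.0

    def learn(self, observation: float) -> None:
        self.n += 1
        self.value += (observation - self.value) / self.n  # incremental mean update

    def respond(self) -> float:
        return self.value

if __name__ == "__main__":
    a, b = RunningEstimate(), RunningEstimate()        # identical at "manufacture"
    rng_a, rng_b = random.Random(1), random.Random(2)  # different "lives"
    for _ in range(1000):
        a.learn(rng_a.gauss(0.0, 1.0))
        b.learn(rng_b.gauss(0.0, 1.0))
    print("unit A responds:", round(a.respond(), 3))
    print("unit B responds:", round(b.respond(), 3))
```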

Where the discussion needs to be right now is not the Three Laws - but the use of robotics in the military. The US is working on autonomous fighter planes - there was already a science fiction movie about that stupidity, Stealth (2005) - and I believe Russia has robotic armored vehicles, as does Israel (they converted old M113 personnel carriers to be bomb carriers, used to blow up buildings in Gaza).

A couple AI companies have already obtained contracts with the US Department of Defense. This is what happens when a former head of the NSA gets on the board of OpenAI.

And that is why open-source AI is so critical. Corporations and the state must not control all AI.

Fine article, but a little panicky? Rest assured we are ensouled conscious beings, trapped for a really small period of time in an earthen vessel, and this hype with robots will fade away. It is a law of nature that nothing is exempt from growing out of some little germ and disappearing in time. All and everything is in a constant flux of transformation. As a great wise man of ancient Greece (can't remember his name just now, I guess it was Solon) was told by Egyptian priests, there have been many, many high civilizations in the past and they all have disappeared, almost without a trace; then why would the present one be an exception to the rule? It will dwindle through great cataclysms or diseases, or whatever is beyond our power. Never forget, humanity is guided by divine beings, call them Buddhas if you like, who work according to the laws of karma. There is not a bird on a twig that is not cared for. If we were only to agree that life is a kind of purification process for the soul, it would answer many questions of the heart. If we could live without all the technologies and without so much greed, robots would not be needed. It is the greed and egotism of the unwise that feel a need for robots. When economies start to decline, the demand for robots will disappear, and small businesses do not need them, do they?

I don't see how the passive stance you advocate will help solve the situation since you're dismissing reality.

Thank you, thank you very much(!) for taking the trouble to reply. This will hopefully allow me to redress this embarrassing predicament of serving you some sort of 'word salad.'

I made a mistake by giving way to a very strong urge to react to your mesmerizing story while being in a hurry. After reading your work I had only three minutes left to react as I had to leave and I finished it in a hurry. Now I hope you will allow me to show you how a so-called ‘passive stance’ can certainly solve the situation, and it is embedded in Reality.

First we would have to define what is REAL. What is reality to ordinary man? If people take much trouble and pains to watch the glorious Trevi fountain in Rome at work, or the Fontana del Moro, whatever, they are awe-inspired by the fascinating splash of water. They identify that as a beautiful piece of art, when it is only the phenomenon they admire, seemingly oblivious of the fact that it is a silly water pump in modern fountains, or the built-up power of a mass of water, that produces that fountain. It is the great divide between phenomenon and noumenon, the result and the cause of something. If you cut off the power of so many fountains, the splash of water stops in an instant. Where is the fountain, where did it go? So what is the fountain really? Is it the water that is powerfully thrown out, or is it the power behind it? I would like to argue it is neither. The reality of the fountain can only be found in the ideas that make up the fountain. Here we see, as in the case of robotics, that there are only ideas at work. Earth is encompassed by a myriad of ideas; there is a real ocean of 'thought', about which Salman Rushdie wrote his wonderful children's book Haroun and the Sea of Stories.

With such a POV you are much closer to reality, yet it is still not the real Reality, that is to say, as mystics (with whom I like to identify) like to see it. The real Reality is that sphere that encompasses ALL and everything. Mystics like to call it the Absolute (which means free of everything that would diminish it). The Absolute is the Alpha and Omega, the 'father' and 'mother' of every phenomenon that is produced on this Earth and to which everything 'returns' after being formed. Actually, nothing can return to it, as nothing really left. It is like an infinitely large closed vessel. Nothing can escape the Absolute, otherwise it could not be called the Absolute. In this absoluteness, 'reality' is experienced by all its infinite number of entities, ensouled beings, in many different ways, as described in the story of the Cave by Plato. In this reality of Plato there are many layers of consciousness. There is an almost infinite layering of consciousnesses, stacked on each other, from the debasing, material, short-sighted consciousness of the evil ones, who are without the slightest spark of compassion, to the highest layers on which compassionate spiritual people live, whom some like to call gods.

The shaman on the plains of Mongolia or the witch doctor among the peoples of the Kalahari Desert is probably never troubled by thoughts about issues arising from the propagation of robots. Why not? Because it is not their world, it is not reality to them, and it very probably never will be, as they like to stay far away from so-called civilization. They know that a peaceful life outside the body is much more vivid and loving than inside this ugly prison of flesh, with all its painful diseases. Together with the real masters of the universe they do not give a damn about technology, because it only gets people enslaved to the imagined beauties of this world of tears. If you really, really desire and use your Willpower, then you can stay away from poisonous Western civilization and still help your fellow human beings. The Masters of life, full of compassion, who live in the shadows to inspire and help people, do not fear death; they know that death for the ordinary man is nothing more than a state of peaceful rest before he is reincarnated again, and again, until he has learned all there is to learn.

Each one of us will be hurled into the labyrinth of the Minotaur at birth, so to speak, and time and again we have, like Theseus, to subdue the ugly monster and find the exit. And in each new life we are given a 'thread of wool' by our own Ariadne, but in most cases we let it slip. A Master of life does not need to enter the labyrinth anymore, as he has dealt with the hellish creature in himself in one of his incarnations and is forever free. So when we remain passive to the enticements of our senses and imagined fears, robotics are only the predicament of the people who like to fly private jets and enjoy a martini at a luxurious resort somewhere; I don't care. The life of the elite seems wonderful, but rest assured, it is not and never will be. So, the worries about robotics are not for me, and hopefully not for you either. They are for the people who are the consumers of what robotics produce. It will pass as people grow in wisdom and compassion. Be wise as serpents and harmless as doves (Matthew 10:16), and anyone can save himself or herself a lot of trouble and live enjoying heavenly bliss. By the way, I love your writings!
