Discussion about this post

Harry Hobbes:

"Animal House"? Or, "Animal Farm"?

The Philosophers Corner:

There is a problem with Asimov's conception of robotic law. While well intentioned and indeed pragmatic, his three statements lack grounding in the way robots are (and will be) technically implemented. Their software, like all software, is fundamentally different from human perception and sense-making. A machine cannot parse the clouds of data it generates in ways that would yield the essential features we take for granted in a human mind. At bottom, these systems are sorting algorithms: more complex versions of an apple sorter that puts each fruit into one of two categories, divided by weight, or size, or colour, or whatever combination of input numbers in a high-dimensional space you use to control the robot's effectors.
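To make the "apple sorter" point concrete, here is a minimal sketch of the kind of classifier being described. The feature names, weights, and threshold are illustrative assumptions, not any real robot's code; the point is that the function only compares coordinates in a feature space and never "knows" what an apple is.

```python
# A toy "apple sorter": a threshold classifier over raw numbers.
# All constants here are made up for illustration.

def sort_apple(weight_g: float, diameter_mm: float) -> str:
    """Put a fruit into one of two bins based on numbers alone.

    The machine has no concept of 'apple'; it computes a weighted
    sum of input coordinates and compares it to a threshold.
    """
    score = 0.01 * weight_g + 0.02 * diameter_mm  # linear combination of inputs
    return "bin_A" if score > 3.0 else "bin_B"

print(sort_apple(150.0, 80.0))  # a large apple lands in bin_A
print(sort_apple(90.0, 55.0))   # a small apple lands in bin_B
```

Scaling this up to thousands of dimensions and nonlinear combinations changes the complexity, but not the nature of the operation: numbers in, category out.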

The problem is that AI only resembles cognition; at its heart are numbers, entirely devoid of meaning beyond their mathematical definition. Just as the big-data hype faded once it became clear that data analysis would not be fully automated anytime soon, the robot question will lead to the realization that algorithms recognize only numbers, not things, objects, novel situations, empathy, or the meaning of speech, and therefore cannot even identify what counts as a "human" in the first place. All work-arounds are circumstantial by nature, and while Asimov's laws make pragmatic sense, they do not at the level of machine interaction. I therefore suggest dropping Asimov's wording for something more adequate to the problem.

Also, if you want robots to be peaceful, don't send them to fight. As machines, they are, like all technology, ethically neutral; it is their use that opens the angle for moral reflection.

46 more comments...
