The Incipient Butlerian Jihad
"Four hundred years previously, the state of mechanical knowledge was far beyond our own, and was advancing with prodigious rapidity, until one of the most learned professors of hypothetics wrote an extraordinary book (from which I propose to give extracts later on), proving that the machines were ultimately destined to supplant the race of man, and to become instinct with a vitality as different from, and superior to, that of animals, as animal to vegetable life. So convincing was his reasoning, or unreasoning, to this effect, that he carried the country with him and they made a clean sweep of all machinery that had not been in use for more than two hundred and seventy-one years (which period was arrived at after a series of compromises), and strictly forbade all further improvements and inventions." -Samuel Butler, Erewhon
A quick glance at AI-generated art should disabuse us of the notion that our technology will physically enslave us; the impressive pattern-matching and syntactic skills demonstrated by our machines are inherently limited by their complete lack of semantic reasoning. Yes, the output can be guided (although I have not done so here, for demonstrative reasons), but that is the point: it requires a human master. It literally cannot master us; it needs us to master it, or it has no function or purpose.
At the same time, "Artificial Intelligence" is a tool, an extension of our natural abilities, and like all tools, in using it we remove ourselves, step by step, from those natural abilities. Imagine being stranded, alone, at night, lost in the forest, with no cell phone, pocket knife, flashlight, compass, map, shoes, or clothes. Our distant ancestors would think nothing of the situation; that was their daily life, and they were comfortable and confident in that environment.
Should we reject shoes because we might need to walk without them, and our feet will not be tough enough? Are we enslaved by our shoes? The ideas are absurd. The danger lies in believing that our technology is capable of more than it is, in trusting AI with tasks for which it is not suited, in depending on it in ways that will result in harm.
Isaac Asimov developed his Three Laws of Robotics in order to protect humans, but clearly, his robots could not be trusted to properly interpret those laws. "A robot may not injure a human being or, through inaction, allow a human being to come to harm," but what is the difference between an artificially intelligent robot and a human being? What constitutes inaction, or harm? Will robots go around taking cigarettes out of people's mouths, or forcing people to exercise?
The first human to lash a sharp rock to a stick almost certainly injured themselves with it. The leading cause of death for children in nineteenth-century America was falling off a wagon, and yet, until the widespread adoption of the automobile, the street was one of the safest places for children to play. The function and purpose of firearms are widely displayed in media, from movies to television to video games, but the lack of hands-on firearm education leads to many injuries and deaths from misuse.
If we will not show due respect and establish proper channels of education for such widespread and clearly dangerous objects, what are we going to do about the more nebulous threats of Artificial Intelligence? An examination of the proposals to deal with social media and the Internet should give us a clue: don't educate people, don't mitigate the harms, but swing the law like a hammer and treat every problem as a nail to be pounded down.
That never worked for guns; that never worked for speech; and that won't work for AI. We need a new social understanding of the problems, and a better system for providing the education necessary to use technology safely.