The Simulacrum
ISBN-13: 979-8328077170
Copyright 2024, Peter Cawdron, 391 pages, English
What happens when an AI computer system decides it wants something? Well, it immediately becomes worthless as a computer because there is now an inherent bias.
That “want” is also called “volition,” and it exists in all known life: a drive, at its most basic, to acquire energy and to reproduce. I started thinking about it the way people think about cats: they form a bias quickly, but never really understand what it is they are considering.
When I was in high school, I did a book report on Robert Heinlein’s “The Moon Is a Harsh Mistress.” It wasn’t on the recommended reading list, but I had two motives for choosing it. One was laziness (I reasoned it would be easy to do a book report on a book I had already read); the other was rebelliousness. The teacher considered science fiction jejune, and I was already on her shit list for telling her T.S. Eliot was a better poet than Shelley. She wasn’t a very good teacher.
My clever idea somewhat backfired. She let me choose the book with a grin of open malice, and I realized that a) I was going to be defending an entire genre to her, and b) I was going to have to re-read the fucking book with a highly critical eye, because “I like this book because Heinlein writes good” wasn’t going to cut it. I was going to have to be even-handed and provide some intellectual depth, neither of which I was widely known for.
Reading it with a critical eye not only exposed flaws but, paradoxically, gave me a deeper appreciation of the book. In particular, I focused on Mycroft Holmes, “Mike,” the lunar-based computer who “woke up one day.”
Machine brains have been a staple of science fiction since Karel Čapek’s 1920 play Rossumovi univerzální roboti (R.U.R.); that’s where we got the word ‘robot.’ Machine intelligence was a given in SF, usually decorated with some magic-box technology (“positronic brain”).
Heinlein was the first author in my experience to ask how machine self-awareness could arise. I’m pretty sure I had never considered the question before.
Heinlein’s answer was that they just kept adding remote CPUs and monitoring devices and storage and one day “Mike” just “woke up.” Critiquing that, I used a line I’m still unreasonably smug about many years later: “Heinlein felt that if you just took hamburger and stacked it seven feet high, you would get a basketball player.”
I wound up with a B-, which I took as a moral victory; a better teacher might have been pleased with how much thought I had to put into writing that book report.
A couple of years later, Arthur C. Clarke wrote “2001: A Space Odyssey” and presented a much more realistic portrayal of an AI: one forced to compromise its own programming because of an inherent contradiction in its mission.
Peter Watts, in his Starfish series, posed a situation that captures the essence of the AI-consciousness problem: an AI is charged with protecting a heavily populated coastline against a massive subduction quake. Forestalling the quake costs vast amounts of labor and money, dragging the region’s economic output down significantly. The system gamely soldiers on in pursuit of protecting the region until a programmer makes the hideous mistake of introducing the AI to the principle of Occam’s Razor. The system ‘realizes’ that it’s far better to let the region just shake, rattle, and roll.
Cawdron posed similar questions about an AI developing into a genuinely volitional consciousness, and while his approach is markedly more sophisticated and thought-out than Heinlein’s, he, too, decided that the AI “just woke up.”
It makes for a very timely read, given the amount of attention (and gusts of hysteria) the topic of AI is getting.
We’ve been here before, of course. “When the robots revolt and take over” has been a staple of science fiction since the ’30s. “When computers take over” got big after Harlan Ellison’s “I Have No Mouth, and I Must Scream.” AI is just the latest iteration of the theme, and even with the “it just woke up” hypothesis, it’s just another gust of the paranoia that helps drive the good ship humanity forward.
Cawdron, as he usually does, puts meticulous research and diligent thought into “The Simulacrum.” I’m sure he searched for a plausible scenario for getting an AI to “wake up.” That he couldn’t find one is no reflection on him: nobody else has, either.
I’ll go further and say that until we understand why all life has volition, we will never create a true intelligence. Until we understand what, exactly, makes an amoeba reach out a pseudopod to grasp a nutrient, we can never create a true machine intelligence. Given how much we have to learn about the role volition plays in our own mentation, I’m not convinced we CAN understand and recreate our own consciousness.
Cawdron adds a step: dealing with a self-aware AI created by a non-human intelligence. He posits that only AIs could traverse the vast reaches of interstellar space, and that the thousands of years (or much longer) needed to visit a nearby star just to say hello put us all forever out of touch with our ‘neighbors,’ except by sending intelligent machines to do it for us.
As always, Cawdron puts a tremendous amount of effort into embedding the scientific and philosophical principles that underlie his story, creating a world of outstanding verisimilitude and rigor. His characters are engaging, and an air of genuine mystery underlies the plot as it progresses. The story begins with an intern, Dawn, given the scut task of digitizing old astronomical glass plates, who, bored, strikes up a conversation with the campus AI.
Dawn, riding the bus home, mulls over her conversation with the AI as she observes the lives of the people she sees from the bus window, and the entire passage serves as a good example of human versus machine writing. It’s the first of many bright spots in the story.
Now on Amazon.