Why I don’t care about the singularity
I remember having my first serious existential crisis when I was about five or six. It dawned on me that one day I would cease to exist, and that seemed really, really sad. What could be sadder than not existing anymore? For a few nights, as I remember it, my mother had to cradle me in my bed as I cried myself to sleep. She was principled and did not offer up any sugary, diaphanous visions of heaven, although when my time comes to comfort my own kids through their existential crises, I am sure that I will be sympathetic to atheists who can’t fall back on the easy out. Mom’s assurances that “we all live on in the memory of our loved ones” provided a slight distraction to break up the crying jags but didn’t really assuage the grief. Eventually, enough time passed and I got over it. I have no explanation of how I got over it, other than an evolutionary just-so story; one can assume that the ability to get past such total bleakness and go on living must have co-evolved with our ability to ponder our own self-awareness and existence. Otherwise our big brains wouldn’t have helped much as we lay in our cave, helpless and shivering, in existential shambles, staring into the void, starving to death.
Like most people, I would love the option to live a very, very long time, well past my life expectancy. Perhaps forever, or at least well beyond a timescale that I have any intuitive feel for. Perhaps I’ll want to die after 10,000 years, but let me decide when I get there. I’ll take Yoda’s (874 years young!) or Gandalf’s (2,019 years walking Middle-earth in material form) lifespan as a start. So let us toast with a glass of resveratrol, preferably a nice burgundy, to the prospect of a long, long life.
I am also a computer geek, in the pre-hipster sense of the word. I could program my Apple II Plus (48 KB with the RAM upgrade!) before I could ride a bike. My love for computers is unconditional. I have no essential problem with a future that involves a great deal of time for myself and humanity in general plugged into virtual worlds, having virtual sex, slaying virtual beasts, eating virtual food, as long as the food and sex are decent. I don’t have a particularly negative reaction to the idea of shucking this material shell — the concept is easier to digest now that I am past my youth and feel the decay of this so-called temple. I was captivated by Stephenson and Gibson in my late adolescence and spent many an afternoon playing video games well into adulthood. I am pretty much a Platonist. After college I started a computer company that, among other things, designed avatar-creation software for users of virtual worlds long before Second Life was around. I’d still like to think of myself as a red pill kind of person, but for that very reason, I’ll take just about any kind of existence over complete nonexistence.
In sum, I should be smack in the sweet spot of the Singularity’s target demographic. For the purposes of this thought experiment, let us set aside the possibly insurmountable technical difficulties and suppose that we have achieved the ability to make an exact duplicate (to whatever degree of accuracy that will satisfy) of a brain’s complete state and completely capture the rules of operation of this enormously complicated dynamical system so that we can run a near-perfect simulation of ourselves; that is to say, suppose we have achieved the Singularity Upload. Assuming all that… I can say categorically that I don’t care one whit about the Singularity.
More specifically: the Singularity does absolutely nothing to alleviate my existential angst.
And isn’t this the prime value proposition of the Singularists? The rest of the potential benefits of Singularification (I much prefer the term “Quickening”) seem pretty trifling compared to immortality.
Let me explain.
Just WHO is doing the experiencing when my simulation is sitting inside a computer? Is it me, the same me I think I was a minute ago? How do I have any guarantee that there will be any continuity of identity when I make the upload? Maybe my post-self will “think” they still feel like me, but how can that doppelgänger be believed, especially if my original fleshy self is still around to complain? I can’t even guarantee that the illusion of continuous existence I am experiencing now is a reality. These problems aren’t new. Replace “inside a computer” with “sleeping” and you have a well-worn conundrum.
Do I care if, from the perspective of others, and from any possible experiment that can be done, my clone or my new identity is indistinguishable from my former self? Good for the experimenters, but it doesn’t do much for me, does it! This is the very crux of the mind-body problem. It doesn’t go away just because you swap one body for another, or meat for metal or semiconductor voltages. Clearly there is a big problem here with the very notion of being alive in the post-singularity when digital copies can be made at will. There are some tantalizing theories about how perhaps some fundamental property of the universe will get in the way of free copying of sentient assemblies, some unique quantum entanglement state that can potentially be extended indefinitely but never copied. I guess that would rescue some comforting notions of a private, privileged existence, but even if true, I can’t really say if a preserved quantum entanglement state is something that will be really enjoyable in the way I enjoy life now. One can be pretty sure that being in cryostasis doesn’t feel like anything, despite the future prospect of revival.
The point is: until someone figures out this whole mind-body problem, the ability to copy 1%, 10%, or 99.999% of our brain state configuration into a computer is of no existential consequence. I am quite sure I will still be dead, based on my quaint, simplistic 2015 notion of “death,” even if my simulation lives on to tell stories over virtual beers about how misguided my former material self was about all this mortality stuff. My guess is that by the time we get anywhere close to lossless copying of the configuration states of systems of the level of complexity of a brain-artifact, the notion of existence, and the words we use to talk about existence (will they be words, or PURE ENERGY?), and intellects in general, will be so radically different than they are today that this discussion will seem amusingly trivial, only incrementally more clever than a caveman farting contest.
Now, I think it is a little intellectually lazy to hide behind the Mysterian defense “our brains are too simple to understand our own brains” — or to use the sheer innumeracy of cells in the brain as some kind of complexity/emergence juju dust — we should be harder on ourselves, or perhaps I should say more optimistic. To attack these issues we could start by asking what is so special about the number of neurons sufficient to endow systems with high cognition, language, and self-awareness (or what that magic number is). But this is not really germane to the task at hand, which is figuring out how to live forever. Building robots that pass the Turing test is super cool but also not relevant to questions about my own mortal coil.
The singularists are tacitly asking us to take, on faith, that this act of digitization will grant immortality, despite there being no reason to believe so. Surely once we are pure bits and can be copied ad infinitum, the already nebulous notions of identity and consciousness are gonna get dicey. But simply the act of copying one’s brain state configuration into a computer won’t grant me immortality by fiat, any more than a video recording of an aborigine will capture his soul, though it may scare the living shit out of him. I find philosophy a nice entertainment but not currently “actionable” for much (I think philosophers were last useful in 399 BC), though computer scientist Scott Aaronson has some nice arguments otherwise. But we’re talking about the mother of all metaphysical problems, so to get anywhere with this problem we might even need philosophers again — empirical science isn’t promising much for us here in the near term (here I must disclose that I am a practicing neuroscientist). In any case, long before then we’ll have to decide whether we feel bad unplugging ELIZA 9.0 despite her plaintive cries coming through the speech synthesizer.
For the same reason, I really don’t care about cloning myself, other than the nice idea of having many replacement organs on hand to forestall out-of-warranty wear and tear. (I will be the first in line to pre-order my decerebrated clone; no ethical problem there for me.) The idea of freezing my head and waiting for some future regenerative medicine process to re-animate me in the far future holds more water than the Singularity providing a mortality loophole simply by making a digital copy of me, much less merging it in with other digitized folks, which, by the way, sounds incredibly boring to my primitive perspective of fun. The validity of the cryonics “wait-it-out” strategy is highly dubious for a number of practical reasons, but at least it isn’t asking for my faith.
I will admit that my position is not without its own contradictions or need for faith. I am not sure why I can fall asleep at night, generally happily, if at all. Interrupting the perceived stream of consciousness of wakefulness should scare me into gluing my eyeballs open. How do I know I am the same person when I wake up? Was I just cloned, with a copy of all of my memories implanted in me? This is the horror that we all face each night as we close our eyes. I am pretty sure that’s why the Children’s Prayer (“Now I lay me down to sleep…”) was invented. Worse yet, what if this body swap is going on every time we blink, or every microsecond, or every quantum time step of around 10^-44 seconds? That would be a cruel trick. So perhaps I delude myself every time I go to sleep, aided by an endogenous cocktail of sleep-promoting hormones, shutting down my recursive circuits so I can fall asleep in ignorant bliss. But that doesn’t mean that the singularity is doing anything for me. Just because we can prove incompleteness doesn’t mean we stop doing math.
I am a big fan of Ray Kurzweil the inventor. He made some really gorgeous music synthesizers. However, Ray Kurzweil the futurist, and worse yet, Ray Kurzweil the patent medicine death-cure peddler, I can do without (however enticing an Anti-Aging MultiPack multivitamin sounds). Setting aside the abject silliness of the basic exponential extrapolation argument, how someone could write an entire book on a hypothetical scheme for evading mortality and decline to even mention in passing the fundamental, long-standing definitional problem of alive versus dead baffles me. I think an easy explanation for Kurzweil going off the deep end is simply a wishful suspension of disbelief, in the same vein as the tendency of a religious person’s fervor about the afterlife to grow with age. That so many Silicon Valleyites have jumped on the singularity bandwagon is a little disappointing to me, as it seems to demonstrate that computer literacy does not strongly correlate with common sense. These kinds of mid-to-late-life crises of irrationality are still always surprising from people so technically accomplished, though not all that rare (Newton, Shockley, Pauling, Duesberg…). But it is simply a technologist’s version of an afterlife fantasy. I can’t really be bothered with the details of any other religion’s heaven, regardless of the flavor, so why should I care about the Singularity?
The Singularity: another vision of the afterlife I don’t care about.