Is Superintelligence a kinder, gentler Armageddon?

“First we build the tools then they build us.” -Marshall McLuhan

Following recent developments in the field of quantum mechanics, the thought struck me that we might need to amend the IQ scale to allow for scientific notation. That’s because computers are about to get exponentially smarter and faster. When that happens, we may be facing humanity’s latest end-of-the-world scenario. It’s a kinder, gentler Armageddon suitable for the ergonomic, smart-enabled, iPhone-wielding society we have come to enjoy being. No worries, Android users will receive the pastry-themed update, eventually. This Armageddon has a name, and that name is Superintelligence.

But let me start at the beginning.

The physical limitations of processing are a big deal in computer science. Any contemporary computer is ultimately bounded by the speed of light, less whatever energy is lost as heat while electrons run along their circuits. Innovations such as reversible logic gates hold a lot of promise for reducing that energy loss. But what if we could do an end run around the laws of thermodynamics? Heck, while we’re at it, why not skip relativity too?

About a month ago you might have seen some people in the media talking about how the transporter beam was now a possibility. I’ll leave that for others to speculate on, but what the media got excited about were new breakthroughs in the field of quantum teleportation. Efforts by both the US Army and the Delft University of Technology have demonstrated the ability to “teleport” photons from one place to another by means of quantum entanglement.

To outline this as simply as possible: quantum entanglement is a phenomenon whereby quanta (in this case photons) become entangled with each other. In this condition the quanta share their quantum states; in effect, what happens to one instantaneously happens to the other. If we could engineer this phenomenon for use in a computer, we could theoretically exchange bits of data faster than light. So hit the road, Newton and Einstein! We’ve got massive amounts of data to process and we can’t be bothered with thermodynamics or cosmological constants.
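The flavor of that correlation can be captured in a toy simulation. This is a made-up sketch, not real quantum code: it models a Bell pair, the simplest entangled state, where each measurement outcome is a coin flip but the two particles always agree. (Worth noting: physicists hold that these correlations alone can’t carry a usable message, which is the main objection to the FTL scheme speculated on here.)

```python
import random

def measure_bell_pair():
    """Sample one joint measurement of a toy entangled pair.

    Models the Bell state (|00> + |11>)/sqrt(2): each outcome is
    equally likely, but the two particles always match.
    """
    shared = random.choice([0, 1])  # 50/50, per the equal amplitudes
    return shared, shared           # both sides always agree

trials = [measure_bell_pair() for _ in range(10_000)]
agreement = sum(a == b for a, b in trials) / len(trials)
print(agreement)  # 1.0: perfect correlation, every single time
```

The spooky part is that this agreement holds no matter how far apart the two particles are when measured.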

But what would the result of an FTL processor be? And how would we apply such fast computing? Well, the first thing one needs to understand is just how much of a game changer this would be. It’s difficult to predict the outcome of any technology, but I can speculate on how FTL computing might be applied to a modern computer.

Let’s say you have an FTL computer running a contemporary operating system like Windows or macOS. Normally you need to install updates and patches to maintain such a system. But if your computer runs faster than light, maybe the computer could patch itself. Without the standard limitations of time, the FTL processor could run through all possible permutations of the software, choose the optimal configuration, write the code, and install it, all within a reasonable amount of time. Perhaps even instantaneously, if the FTL computer is connected to others like it across the internet, where other nodes are running infinite permutations of the same process. This network then becomes a collective of self-improving machines that grows exponentially, which may sound familiar to fans of George Takei.
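In miniature, that “try every configuration and keep the best” patch loop looks like an exhaustive search. Everything below is hypothetical: the flags, the scoring function, and their weights are invented for illustration. The point is that the search space explodes combinatorially with each new tunable, which is exactly why the scheme leans on FTL speed.

```python
import itertools

FLAGS = ["cache", "prefetch", "compress"]  # imaginary tunables

def score(config):
    # Stand-in benchmark: pretend "cache" and "prefetch" help
    # while "compress" hurts. A real system would measure this.
    weights = {"cache": 3, "prefetch": 2, "compress": -1}
    return sum(weights[flag] for flag in config)

def best_patch():
    # Enumerate every subset of flags (the "all permutations" step),
    # then keep the configuration with the highest score.
    candidates = []
    for r in range(len(FLAGS) + 1):
        candidates.extend(itertools.combinations(FLAGS, r))
    return max(candidates, key=score)

print(best_patch())  # -> ('cache', 'prefetch')
```

With three flags there are only 8 subsets; with a few hundred interacting options the count outruns any conventional machine, FTL or not.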

The key words here are exponential self-improvement. As this trend swings into full force, a point of critical mass will occur: a point of no return beyond which the application of data processing will far exceed the human capacity to manage it by conventional means.
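The difference between ordinary progress and self-improvement is compounding. A crude sketch, with a made-up improvement rate: if each optimization pass makes the next pass even 1% better, capability multiplies rather than adds.

```python
# Toy compounding model: each generation improves on the last by a
# fixed (invented) 1% rate, so growth is exponential, not linear.
capability = 1.0
for generation in range(100):
    capability *= 1.01

print(round(capability, 2))  # roughly 2.7x after 100 generations
```

A linear process would have gained 100 fixed steps; the compounding one keeps accelerating, which is where the “point of no return” intuition comes from.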

From here we enter the realm of speculation: just what does all this computing power mean? On the positive side, we may be able to apply this power to data-intensive subjects like medical research, sustainable energy, and the economy: anything where big numbers rule or where countless variations need to be considered. But on the other hand, self-improving intelligence is precisely what has kept humans the dominant species. Will a superintelligent entity act in our best interest, or will it reshape our environment to suit its program?

The programs we set for these machines are key. It’s not too difficult to imagine a scenario where we program humans out of existence, an idea Hollywood has capitalized on many times. But I suggest a more subtle outcome, with hints of the perfect utopia all mixed up with the eventual erosion of our greatest human traits. We could cure cancer, balance energy production, and restructure finances to guarantee everyone a high standard of living. At last all people would be free to pursue their dreams. Or would we?

With the need for constant computational balance in this false utopia, wouldn’t we process ourselves out of purpose and choice? If the program says you go, you go, and do whatever function you are designed for. Yes, designed for. Because while we’re eradicating disease, we’re also engineering perfect biological components for the larger structure of society.

Does human creativity figure into the superintelligent paradigm? If every problem has a solution, what about the questions that sustain our humanity? Would curiosity and creativity become obsolete? Or worse, rendered into mere distractions and indulgences for the perfect techno-agrarian society? If the absence of the human soul isn’t frightening enough, just imagine the boredom.

So it becomes a question of limitation and application. If we press the button on superintelligence, will we be ready to turn it towards the right problems? Will we be ready, willing, and able to shut it off when the time comes?