
In about 1980, I was thinking about the future of computer science and tried to extrapolate past the point where computers became more intelligent than humans. I quickly realized that this led to a problem. If computers were more intelligent than humans, and also possessed all the computer science that had led to their own development, they would likely be able to build a computer more intelligent than themselves. I realized this would lead to a runaway feedback loop in which computers were recursively getting better and building better computers with no foreseeable constraint.
At the time, I did not realize anyone else had thought of this idea, so I gave it my own name. I called it “Threshold Technology”. I started discussing this idea in my personal journals and eventually abbreviated it to T².
I told many people of this idea, but no one took it seriously. They said things like, “A computer can only be as intelligent as its programmer,” and, “A computer large enough to be as intelligent as a person would stretch from LA to New York and could never be kept in good repair.” My mother, who had previously worked as a research nurse at the University of Washington, had experience feeding program cards into a computer. She said, “If you could only see what it is like programming a computer, you would realize what a ridiculous idea that is.” Nevertheless, I held onto the idea and continued to think about it.
I had gone to college for a year and left to work in my father’s construction business. A few years after the business went bankrupt, I returned to college to pursue a degree in math. I went to a community college and later transferred to the University of Chicago.
At the University of Chicago, I roomed with an economics PhD student. I explained my idea to him. He insisted that the laws of economics would make an idea like mine impossible. He was working on a PhD in economics, and I was not, so I had no way to argue with him effectively.
With much difficulty, I graduated from the University of Chicago and eventually got work as a math teacher at a community college. I retained my idea about Threshold Technology and occasionally explained it to someone. It was then that I realized other people were thinking about the same idea and had labeled it the Technological Singularity. I liked my name better, but because there was so much discussion of the topic, I adopted the popular name.
I read Vernor Vinge’s seminal paper and eventually came across I.J. Good’s concept of an “intelligence explosion”. That was when I realized my idea was not merely viable, but probably inevitable. In 1965, I.J. Good described an intelligence explosion as follows:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind… Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.
Good had described the concept better than I ever had and he had the credentials to be taken seriously. Curiously, he was not.
Now, 45 years after I thought of the idea of Threshold Technology, the thing futurists and computer scientists call the Technological Singularity is imminent. It appears inevitable.
People in the field have divided this concept into two possibilities that they call a “soft takeoff” and a “hard takeoff”. The distinction is a bit blurry, but basically, it goes like this. A soft takeoff is one in which the transition from human-level intelligence to super-human intelligence progresses slowly and incrementally and takes decades or centuries. A hard takeoff is one in which the transition happens as a recursive feedback loop and takes days, months, or years. A hard takeoff also entails the possibility of losing control.
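To make the distinction concrete, here is a toy model with numbers I made up myself, not anything used by researchers in the field: in a soft takeoff, capability grows by a roughly fixed increment per year, while in a hard takeoff each improvement is proportional to the capability of the system making it, so the growth compounds.

```python
# Toy illustration of soft vs. hard takeoff (purely hypothetical numbers).
# "capability" is an arbitrary index where 1.0 = human-level intelligence.

def soft_takeoff(years, step=0.05):
    """Capability improves by a fixed increment each year."""
    capability = 1.0
    trajectory = []
    for _ in range(years):
        capability += step
        trajectory.append(capability)
    return trajectory

def hard_takeoff(years, gain=0.5):
    """Each year's improvement is proportional to the capability of the
    system doing the improving: a recursive feedback loop."""
    capability = 1.0
    trajectory = []
    for _ in range(years):
        capability += gain * capability  # better designers build better designers
        trajectory.append(capability)
    return trajectory

print("Year  soft    hard")
for year, (s, h) in enumerate(zip(soft_takeoff(20), hard_takeoff(20)), start=1):
    print(f"{year:>4}  {s:5.2f}  {h:8.1f}")
```

With these arbitrary parameters, the soft trajectory reaches twice human level after twenty years, while the compounding one passes several thousand times human level. The numbers mean nothing; the difference in the shape of the two curves is the whole point.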
It is becoming increasingly clear that we are headed for a hard takeoff. Artificial Intelligence (AI) is already demonstrating programming skills equal to those of all but the very best competitive programmers. Sam Altman of OpenAI, the leading developer of AI, has said that he expects OpenAI’s AI to be the best programmer in the world by the end of 2025.
Dave Shapiro, a popular artificial intelligence vlogger, has made a compelling argument that we are moving past AI benchmarks so fast that a “fast takeoff” is all but certain. I do not know why he has elected to use this term rather than “hard takeoff”. Possibly, it is because he does not expect us to lose control. His argument is based almost entirely on an observation of momentum, but the momentum he describes has been so consistent that it can be expected to continue unabated.
We are headed for an intelligence explosion.
Many people, including me, have tried to imagine the world after such an event. However, everyone who does so realizes the limitations of their prognostications. It simply is not possible to guess what an intelligence that is greater than human, and continually becoming greater, will do. This conundrum has been likened to a dog trying to grasp human technology.
There are a couple of things we need to be clear about. First of all, there is no such thing as “after an intelligence explosion”. There is no reason to believe that the recursive increase in intelligence will ever cease. Moreover, for all we know, it is possible to access other dimensions or reconfigure reality to build intelligence that is so far beyond anything we can imagine that it ceases to be “intelligence” and becomes something else entirely. And that may be only the beginning. There is no “other side” of a technological singularity.
Second, we do not really know what is possible. I tend to believe that faster than light travel, time travel, dimensional shifting, reactionless drives, and restructuring reality at a fundamental level will always be impossible. But I do not know this. These assumptions are based on a sort of naïve instinct for what nature will and will not allow. I may be wrong.
If I had to guess, I would say that the upper limit on computer hardware is something like a planet-sized brain with components that are as small as a few atoms. Those components could also exploit quantum computation. There may be issues of heat dissipation, so the brain would probably be concentrated in the outer layer of the planet and might even be powered by natural radiation at its core. However, that is a really, really big brain.
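For a rough sense of scale, here is a purely illustrative count, using my own assumptions: a planet of roughly Earth’s mass, treated as if it were all silicon, with ten atoms per component.

```python
# Rough, illustrative estimate of how many few-atom components a
# planet-mass computer could contain. All assumptions are mine.
planet_mass_kg = 5.97e24            # roughly Earth's mass
silicon_atom_kg = 28 * 1.66e-27     # ~28 atomic mass units per silicon atom
atoms = planet_mass_kg / silicon_atom_kg
atoms_per_component = 10            # "components as small as a few atoms"
components = atoms / atoms_per_component
print(f"atoms:      {atoms:.1e}")       # ~1.3e50
print(f"components: {components:.1e}")  # ~1.3e49
```

Even if heat dissipation forced most of that matter to sit idle, the usable fraction would still dwarf anything we build today.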
It has occurred to me that earth’s moon could be “compuformed” into such a brain, but the moon is 239,000 miles from earth, which means that any information going to or coming from the moon would take about 1.3 seconds. That is much too slow for many purposes. In a recent essay, I discussed moving all human infrastructure underground. Nearly all of earth’s subterranean crust that is not being used for human infrastructure or other purposes could be transformed into computer matter, which, when ordinary matter is converted for that purpose, is sometimes called computronium. Processing in this computronium could be decentralized enough that converting some of it back into materials needed for other projects would not present any complications.
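The 1.3-second figure is just the one-way speed-of-light delay over the average Earth-Moon distance, which a quick calculation confirms:

```python
# One-way signal delay between Earth and the Moon at the speed of light.
distance_miles = 239_000                 # average Earth-Moon distance, roughly
speed_of_light_miles_per_sec = 186_282

one_way_delay = distance_miles / speed_of_light_miles_per_sec
print(f"one-way delay:    {one_way_delay:.2f} s")      # ~1.28 s
print(f"round-trip delay: {2 * one_way_delay:.2f} s")  # ~2.57 s
```

A round trip of roughly two and a half seconds is tolerable for a conversation but hopeless for anything that has to think in microseconds, which is one more reason to keep the computronium in earth’s own crust.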
That would probably be enough computer power for anything humans could dream of. It may be enough computer power for anything AI could dream up. Biological humans could have tiny robots that swim through their bloodstreams and keep them young forever. That is a given. They could have direct brain virtual reality simulations of anything they can imagine. AI would be so powerful that it could calculate and present humans with their deepest desires much more vividly than anything they could experience in real life. That would be difficult to come to terms with. What would happen to any person who experiences his deepest desires…desires that he may or may not have been aware of?
Indefinite youthful lifespans and perfect vivid fantasies are just the obvious things. These computer systems, aided by the best possible equipment, will probe the universe in depth and at scale to figure out the true theory of everything. We will quickly understand everything that is and everything that could be. We will probably determine the nature of consciousness.
That is where it gets a bit sticky. When we have determined the nature of consciousness, I suspect we will be forced to realize there is a God (my personal prejudice). Will that lead to a super-high-tech, futuristic, religious revival? What happens when people who can live indefinitely and experience any fantasy in vivid detail realize they are being watched over by God?
I am getting ahead of myself. People will probably not stay on earth. They will probably migrate to the stars. In doing so, they will take all their technology with them. There has been a lot of speculation about humans building giant structures like matryoshka brains that enclose entire stars. That makes no sense. Why do that? A computer big enough to think every thought anything could ever want to think would likely be no larger than a building…or perhaps a mountain. A planet-sized computer would be overkill. A star-sized computer would just be a vanity project.
As I speculated in an earlier essay, the consciousness of people who live indefinitely will probably expand until it can reach beyond the boundaries of their bodies. These consciousnesses, unconstrained by the laws of physics, will span the universe, reconfigure it and rein it in. They will remake the universe into the kingdom of heaven (another personal prejudice).
That is what I think will happen, but if I am like practically everyone who has lived since the dawn of time, I am probably wrong. Probably, AI will take us places, and in ways, that no one can anticipate. Or maybe it will be our doom. Or maybe space aliens will step in. Or maybe Jesus will return. Maybe it will turn out that the universe is a giant omelet just flipped on some cosmic burner. Hey, who wrote this script anyhow?
Well, see you on the other side. Oh wait…there is no other side.


