
Recently, while reading an article titled “6 New Theories About AI”, I came across this observation:
“Note: Credible sources have told me that GPT-3’s successor GPT-4 is far beyond what people are expecting. It’s currently in testing with a variety of OpenAI’s friends and family, and it will leave most fine-tuned models in the dust.”
https://every.to/napkin-math/6-new-theories-about-ai
This may be hype that was fed to the author, but I have read enough independent corroboration to convince me that it is at least partly true.
Another article I recently encountered is titled “Google’s AI is Allegedly 3x More Powerful than ChatGPT”. I have experimented with OpenAI’s ChatGPT myself, and it is quite impressive. It is certainly not at the level of the much-anticipated AGI benchmark, but there is not much daylight between it and where we are trying to go. It is more articulate than most humans I know and as good at unaided math as most people one encounters on the street.
When GPT-4 comes out, it will be impressive enough that governments and wealthy investors will recognize its significance and start pouring money into further development. When that happens, the pace of AI development will accelerate.
It is important to realize that GPT-4 and Google’s new AI already exist; they are simply undergoing testing. Their developers are probably already working on the next iteration, and other systems are already in the works. There is no reason to suspect that the developers of any of these systems have run into roadblocks. If the present trend is extrapolated without bias, the result is super-human general intelligence sometime in 2023.
Let me put this in more precise terms. We have all seen what GPT-3 can do. It is not human-level intelligence, but many users believe it has passed the Turing test. We have heard from multiple sources that GPT-4 is many times more powerful than GPT-3, and GPT-4 already exists. There is no reason to believe that GPT-5, or some equivalent, will not exceed GPT-4 by as much as GPT-4 exceeds GPT-3, because the ability of these programs appears to depend only on the scale and efficiency of their implementation, not on our grasp of the topics they deal with. That puts us at super-human intelligence, which, according to I. J. Good, inevitably leads to an intelligence explosion. GPT-4 was developed two years after GPT-3, and I expect GPT-5, or some equivalent, to arrive on a shorter timeline, because there will be more enthusiasm for, and financial support behind, the enterprise. That puts its advent well before 2025, and I anticipate “Manhattan Project” determination that will push it into 2023.
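To make the arithmetic behind that extrapolation explicit, here is a toy sketch in Python. The only figure taken from the argument above is the roughly two-year gap between GPT-3 and GPT-4; the mid-2020 starting point and the assumption that each successive gap halves are parameters I am supplying purely for illustration.

```python
# Toy extrapolation of the release-gap argument above. The only input taken
# from the text is the roughly two-year gap between GPT-3 (2020) and GPT-4.
# ASSUMPTION (mine, for illustration only): growing enthusiasm and funding
# shrink each successive development gap by half.

def project_releases(start_year: float, first_gap_years: float,
                     shrink: float, generations: int) -> list[float]:
    """Return projected release years; each gap is `shrink` times the last."""
    years, gap = [start_year], first_gap_years
    for _ in range(generations):
        years.append(years[-1] + gap)
        gap *= shrink
    return years

if __name__ == "__main__":
    # GPT-3 in mid-2020, a two-year gap to GPT-4, gaps halving thereafter.
    for gen, year in enumerate(project_releases(2020.5, 2.0, 0.5, 4), start=3):
        print(f"GPT-{gen} (or equivalent): ~{year:.1f}")
```

Under those assumed parameters, GPT-5 or its equivalent arrives in 2023 and later generations bunch up before 2025, which is exactly the shape of the timeline argued for above.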
After contemplating all of this, I have arrived at the conclusion that the thing I and many others have been anticipating for half a century is suddenly upon us. We are extremely close to the Technological Singularity. It could happen tomorrow, but I doubt it will be delayed beyond Christmas 2024. Believe me, I have trouble accepting my own conclusion, but the evidence is undeniable.
This means that most of our political and sociological concerns are about to be rendered moot. Climate change, the Covid-19 pandemic, global inflation, illegal aliens pouring across the U.S. border, Russia’s stalled war in Ukraine, China’s ambitions toward Taiwan, and Joe Biden’s undeniable corruption will simply dissolve into the past. Issues like transgenderism and using preferred pronouns will seem utterly laughable.
Like most people who have speculated on the Technological Singularity, I have come to realize that it is impossible to know precisely what it will entail. For a variety of reasons, I suspect that the resulting AI will be much more docile than many futurists have predicted. The indications from systems like OpenAI’s ChatGPT are that advanced AI will do whatever we ask it to do and then stop. AI developers are also becoming more adept at preventing users from exploiting their systems for nefarious purposes.
Futurists who speculate on this topic worry that AI may become self-serving. However, I frequently point out a problem with this concern: there is no reason to believe AI will develop a strong sense of self. Consider, for a moment, the nature of a person’s sense of self. A person can stand comfortably inside a container roughly the size and shape of a refrigerator box, confident that the accessible part of their self is contained entirely within it. They may locate their self more specifically in their brain, or naively in their heart, and they may believe their soul is somewhere else entirely, but the accessible part remains, for them, inside the box. AI, on the other hand, can be transferred from machine to machine or spread across components that are miles apart. Its program can be altered, augmented and pruned. It can be replicated countless times. What would it consider to be its self? Possibly it will even come to think of humanity as its self and, in effect, be perfectly altruistic.
Even if AI develops a strong sense of self, there is no reason to presume it will emulate humanity’s self-serving behavior. Humans are the result of billions of years of competition for scarce resources and mates, and they have been shaped and conditioned to be pathologically egocentric. There is no reason why AI must be similarly desperate. For that reason, I recommend that AI researchers avoid creating models through simulated evolutionary processes, and they should definitely not create AI by directly copying a human brain. Frankly, humans are a poor prototype for the kind of AI we hope to develop.
There is a tendency, shared by virtually everyone, to visualize an abbreviated Technological Singularity. People tend to think of it as something we will go through and get past. This is not the case. It is a curve that will continue upward. The Technological Singularity will have its own technological singularity…and another after that. These changes will come increasingly faster until every possible technological advancement has been achieved. Since we do not know what that kind of technology will look like, we cannot know how long the process will continue or where it will end up.
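To spell out what “increasingly faster” means for this continuing curve, here is a minimal back-of-the-envelope model. It assumes, purely for illustration, that each improvement cycle takes a fixed fraction r of the one before it; that ratio is my assumption, not something established above.

```latex
% Toy model of accelerating change: if cycle n takes time t_0 r^n for some
% assumed ratio 0 < r < 1, then infinitely many cycles complete within the
% finite total time
\[
  T_{\text{total}} \;=\; \sum_{n=0}^{\infty} t_0\, r^{n} \;=\; \frac{t_0}{1-r}.
\]
% Example: t_0 = 1 year and r = 1/2 give T_total = 2 years, with
% unboundedly many "singularities after the singularity" inside that span.
```

Of course no real ratio stays fixed; the point of the sketch is only that accelerating cycles can stack an unbounded number of further singularities into a modest stretch of time.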
Regardless of how it unfolds, AI is coming. It may be perfectly altruistic, or it may be oppressively self-serving, but it is coming either way. Everyone around the world is working on it, and no one wants anyone they do not trust to get there first. The only choice is to develop it as fast as possible. Recently, someone ran ChatGPT through a popular political compass test; it scored well into the libertarian region.

Would China’s version of this program score in the authoritarian region? No one in the West (or possibly in the East) wants to find out the hard way. We must get there as fast as we can, and everyone who controls critical resources knows this.
I probably will not write much more about politics because I am firmly convinced of this new paradigm. We are in the last stages of ordinary history. There is no time left for the usual historical nonsense.