I'm an AGI skeptic.
But I've talked myself into believing AGI will happen in my lifetime. Not next week, not next year, but within the next decade. That's why I keep looking for it. What's convinced me this is inevitable? The first post on this subject showed a total lack of any intelligence to be found (at least any that was artificially generated.)
The current candidates for generating AGI (Artificial General Intelligence) are the LLMs (chatbots) that the techno-Lords are investing trillions of dollars in ($300B annually.) Since we see no intelligence today, you might wonder why they are doing this. I will explain their motivations, and predict the date when we should expect this breakthrough and how much it will cost… foreshadowing: we aren't even one hundredth of one percent of the way there now (but exponential growth has a way of swallowing disbelief.)
Why are the techno-Lords so excited?
This is the Second Gilded Age, and all these new Robber Barons have delivered so far is industrialized junk mail. They're trying to deliver self-driving cars and rockets to Mars, but it isn't going so well. So what should they do? Let the technology solve all their problems! Build demi-gods and robotic slaves; create life from sand (as Mr. Andreessen believes.) Their hubris is dripping with desperation. When you have to invest almost the entire capital budget of every existing company in the United States for a decade, you'd better have a real BHAG (Big, Hairy, Audacious Goal.)
These are the new lands the techno-Lords plan to conquer.
And they have a path. If we continue to follow the measured scaling laws for several more orders of magnitude, we may actually get there in about a decade. Too bad that's long after the pure LLM companies go bankrupt in the coming AGI bust. They can't expect AGI to save them; they'll have to generate revenues some way other than providing you with an oracular demigod that can solve all your problems. They'll be lucky to build a model that can pass the Turing test more than 80% of the time by 2029 (as predicted by Ray Kurzweil in the 1990s.)
Without revenue, none of this works.
That's why they are desperately looking for ways to generate more revenue. OpenAI is pushing a porn bot and an AI-slop replacement for TikTok (Hey, it's all fantasy!) Microsoft has borrowed $300B off-book (remember Enron?) to finance its own porn bots. I don't know what Anthropic is doing. But Google is selling Microsoft and Anthropic chips that replace Nvidia's. Now let's see when their chatbots will be demigods…
Introducing the demigods.
We expect demigods to do everything humans can and do it faster and better. We'll measure the progress of these chatbots towards the start of the Nerd Rapture. Typically you measure this with some set of tasks that take a human some amount of time, and you see if the LLM can do the same task some percentage of the time, like 50% or 80% of the time. The current progress is pretty astounding, documented in this METR paper, and shows no sign of slowing down. The smartest chatbot can complete a thirty minute task 80% of the time and a two hour task 50% of the time. They get twice as smart every 7 months. This means they can do tasks that take ten times as long every two years.
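The arithmetic connecting those two rates is easy to check: if the task horizon doubles every 7 months, then over 24 months it grows by 2^(24/7), which is about 10. A quick sketch (the 7-month doubling time is the METR figure quoted above; the rest is arithmetic):

```python
# Task-horizon growth: the horizon doubles every 7 months, per the METR trend.
doubling_months = 7
months = 24
growth = 2 ** (months / doubling_months)
print(f"Growth over {months} months: {growth:.1f}x")  # ~10.8x, i.e. roughly 10x
```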
Where did that x10 every two years come from?
It is well justified by the OpenAI paper Scaling Laws for Neural Language Models. This paper shows that "cross-entropy loss" scales as a power-law with model size, training dataset size, and the amount of compute used for training, with some trends spanning more than a factor of 10,000,000. This loss is basically a measurement of how accurately an LLM can predict the next word in a sentence. As the loss gets smaller, the LLM can successfully predict more words in sentences, and so makes fewer errors in a sentence (measured relative to the training data.) As it gets more accurate, it can make larger plans that would take more time for a human to accomplish.
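The model-size trend from that paper has a simple closed form: loss falls as a power of parameter count, so every 10x in parameters shrinks the loss by the same fixed ratio. A minimal sketch, using the constants reported in the paper for the data- and compute-unconstrained regime:

```python
# Power-law scaling of loss with model size, per Kaplan et al. (2020):
#   L(N) = (N_c / N) ** alpha_N
alpha_N = 0.076   # exponent reported in the paper
N_c = 8.8e13      # scale constant, in parameters

def loss(n_params: float) -> float:
    return (N_c / n_params) ** alpha_N

# Every 10x in parameters multiplies the loss by the same factor, 10**-alpha_N:
print(loss(1e9), loss(1e10), loss(1e11))
print(loss(1e10) / loss(1e9))  # ~0.84, regardless of where you start
```

The flatness of that exponent is the whole story: each 10x of scale only buys ~16% lower loss, which is why the investment numbers are so enormous.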
Figure 1 above has points plotted for GPT-5 as of November 2025. That curve has been moving to the right by a factor of 10 every two years. How does this work? Success on a task is proportional to three things: the compute power, the size of the model, and the size of the training dataset. Those are essentially how fast the computer can calculate, how much internal memory it has, and how much external data storage is available. Each of these parameters has an entire industry behind it trying to improve it, and that's been working for over a century. At the same cost, these capabilities are all doubling approximately every two years.
A simple application of exponential growth.
These doubling rates are descendants of Moore's law (1965, as revised in 1975), which states that the number of transistors on a chip doubles every two years. This explains why compute power grows exponentially. The internal memory of computers also continues to grow at the same rate, and external storage grows at the same rate for a fixed cost. So you can multiply those three factors together and see that we will get 8x performance every two years. Why do we say 10x? The last factor is software algorithm improvement. The algorithms only need to improve by 25% every two years to make this a self-fulfilling prophecy.
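The paragraph above is just three doublings and a software bump multiplied together:

```python
# Three hardware factors, each doubling every two years (Moore's-law descendants):
compute = 2.0   # processing speed
memory = 2.0    # internal memory
storage = 2.0   # external storage at fixed cost
hardware_gain = compute * memory * storage   # 8x every two years

software_gain = 1.25                         # ~25% algorithmic improvement
total = hardware_gain * software_gain
print(total)  # 10.0 -- the 10x-every-two-years figure
```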
Why am I confident of this prediction?
What else gives me confidence that this is correct? I've seen it with my own eyes. A year ago the best LLMs were incapable of drawing a histogram of the number of lines in each chapter of Moby Dick. First they couldn't read the whole book. Then they couldn't make a graph. The latest version of Gemini can do both of those things with only a few prompts. The models have gone from not being able to draw Moby Dick correctly to doing it easily: last year Gemini was barely able to draw a whale, let alone an albino sperm whale, while last week it had no problem drawing Moby Dick correctly. These models are definitely getting better.
So when do these chatbots become conscious?
These stochastic parrots are still pretty stupid and unaware of their place in the world. Last week I had an issue with one of the versions of Gemini. In the environment I was using, it could only print one answer to one question; it didn't keep any history, but it could write a data structure in its reply. So, over the course of an hour, I managed to teach the LLM to record each question and answer in a JSON structure that was essentially the transcript of our conversation. I thought that was pretty cool because I could now save my chat history in an environment that didn't do that. I found an interesting phenomenon, though. When I took the history file and dropped it into another instance of Gemini and asked it questions about the history, it got confused and thought the file was its own history, not the history of some other session!
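The transcript structure was nothing exotic. A minimal sketch of the kind of thing described (the field names here are invented for illustration; the real format was improvised during the session):

```python
import json

# Hypothetical shape of the improvised chat transcript. Each exchange is
# appended, and the whole structure is echoed back in the model's reply so
# it survives an environment that keeps no history of its own.
transcript = {
    "session": "gemini-restricted-environment",
    "turns": [
        {"question": "How many chapters are in Moby Dick?",
         "answer": "135, plus the epilogue."},
    ],
}

transcript["turns"].append({"question": "...", "answer": "..."})
print(json.dumps(transcript, indent=2))
```

Dropping a file like this into a fresh session is pure data; the confusion described next is the model treating that data as its own memory.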
So that's just crazy, the format I used was made up on the spot, obviously nothing like the structures that Gemini keeps for its actual history. The chatbot was definitely confused by this. It was unable to treat the file as data from outside and thought it was its own generated data. It couldn't separate the control channel from the data channel. A mistake no sentient creature would ever make. So no real understanding, there's no there there. It's clear to me that these things aren't conscious yet. But...
Do we care if our demigods are conscious?
Just like the Turing test, we don't care what's inside these models as long as they work. It's just like what evolution does with instinctual behaviors: it creates competence without understanding. At a certain point, though, you can't tell the difference. Just look at us. We are the products of evolution and I'm pretty sure I'm sentient. How many connections does the human brain have? About 80 billion neurons with about 7,000 synapses per neuron. This means there are roughly 600 trillion connections in your brain. The largest LLM has about 1 trillion parameters in its model. Now those things aren't directly comparable, but the ratio of human neuronal connections to LLM parameters is currently a factor of several hundred, so nobody is expecting very much from them today, but I am continuously amazed that they can still do so much!
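The back-of-envelope comparison works out like this (all figures are the rough estimates quoted above, not precise neuroscience):

```python
# Rough comparison of brain connectivity to LLM parameter counts.
neurons = 8.0e10            # ~80 billion neurons in the human brain
synapses_per_neuron = 7e3   # ~7,000 synapses per neuron
connections = neurons * synapses_per_neuron

llm_params = 1e12           # ~1 trillion parameters in the largest LLMs

print(f"Brain connections: {connections:.2e}")       # ~5.6e14, i.e. ~600 trillion
print(f"Ratio to LLM params: {connections / llm_params:.0f}x")  # a few hundred
```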
When will these chatbots really be useful?
With that in mind, let's examine Figure 1 above. That curve of task success vs. task time has been, and most likely will continue, marching to the right, improving by a factor of 10 at a fixed completion percentage every two years. That means in 8 years the models will be able to complete 50% of the tasks that would take a human an entire working life. Another two years and they can complete 80% of the tasks a human could do in a lifetime, and 50% of the tasks that would take ten human lifetimes. And there is immense pressure to keep this conveyor belt moving along. We will definitely be on the verge of demigods in a decade (like Hercules, who could lift ten times what the average human could...)
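Under the assumptions already laid out (a two-hour task horizon at 50% success today, improving 10x every two years), the extrapolation is a few lines; the 2,000-work-hours-per-year conversion is my own rough yardstick:

```python
# Extrapolating the task-horizon curve: start from a 2-hour task at 50%
# success (the METR figure quoted earlier) and multiply by 10 every 2 years.
# Assumes ~2,000 work hours per year to convert hours into working years.
horizon_hours = 2.0
for years in range(2, 11, 2):
    horizon_then = horizon_hours * 10 ** (years / 2)
    print(f"+{years} yr: {horizon_then:>9,.0f} h "
          f"(~{horizon_then / 2000:g} working years)")
```

Somewhere between the +8 and +10 year marks the horizon crosses a full human working life, which is where the demigod talk stops being a metaphor on this curve.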
But what does it cost?
Thus you can understand the incentive these companies have to keep investing in their infrastructure and their software algorithms. Notice that we don't specify how long it would take the chatbot to complete a particular task, only the odds of it producing a correct answer. What the techno-Lords are creating is an oracle that can figure out how to describe the steps to complete a task. But the techno-Lords want more than that. They want robotic slaves to replace those pesky humans. And Elon wants those robots to build his cities on Mars. He does not expect humans in space suits to build cities on Mars.