2024 12 27 The AGI Sham
The apparent leap in the capabilities of large language models has been made possible because venture capitalists have been persuaded, by earnest but misguided entrepreneurs like Sam Altman, that given enough resources such models would eventually be developed into generalized artificial intelligence. In other words, with enough money, OpenAI will build you a machine capable of doing the same cognitive tasks as your human employees, but one that doesn’t need a 401k or health care and will work a 168-hour week without complaint or a paycheck.
These investors and researchers now find themselves in one of Zeno’s paradoxes. Each new infusion of cash has yielded an incremental improvement in the capabilities of the models, but in every case building the next bigger and better model requires a substantially larger investment than the one before it.
We have watched these various players, all of whom hope to be the first to cross the AGI finish line, go through endless iterations of this process at breakneck pace, and none of them is any clearer today on where that line lies than they have ever been. Admirably, I suppose, the researchers cling to the hope that with just a few more rounds of funding they might make the breakthrough needed to cross it, but eventually one of the funders is going to decide that being the first to abandon ship is preferable to sinking aboard it.
Soon there will be a mutiny, as the investors begin to demand returns. The cheap pricing designed to lure users into buying subscriptions to existing models will need to rise drastically. Costs will need to be cut, and demands will be made to reduce resource consumption rather than race ahead at full steam. Casual users who have become reliant on cheap access to current models will be forced to decide whether they can continue to afford them at rising prices. Companies that have hired developers too inexperienced to work without these models will have to decide whether to keep footing the bill for subscriptions, and developers who have paid for access themselves will face either learning to program on their own or spending more of their salaries paying an LLM to do it.
Once the signs of this looming collapse are recognized, other speculators are going to become alarmed. Companies like Nvidia, whose business exploded with the demand for hardware needed to train models, will be in peril when cloud providers signal that they aren’t going to be placing more massive GPU orders. An economy that has been propped up by this massive infusion of cash (most of which was converted to heat exhausted from datacenters) is going to have to face the reality that the finish line never existed and that there will be no AI revolution this year, or next.
It hasn’t all been for nothing; some of this new technology has great potential to reduce human toil, like sorting the stones out of conveyors of dried beans moving too fast for any eye to monitor, translating the text of ancient scrolls, or identifying photos that include your cat. These tools provide value, but not the kind of value that will support the subscription fees necessary to satisfy the venture capitalists who have invested billions of dollars believing they would ascend to baron status following a second industrial revolution.
Once the spell cast on investors by the AGI pied piper has broken, and they begin to understand that every additional billion spent will yield ever more marginal improvements to the LLMs of today, they will begin to look for other opportunities. Many companies whose business models hinge on the appearance of the AGI finish line will implode.
Since I’ve already speculated a great deal, I may as well offer my final prediction: people like Roger Penrose and Miguel Nicolelis, who have argued that intelligence is not computable, are correct, and the lure of the piles of money you would be rolling around in were you to find a way to compute it has simply been too strong for its pursuers to be troubled by those arguments. In a Penrose vs Altman showdown, I know who my money is on. I don’t believe that creating artificial, generalized intelligence is impossible, just that it cannot exist on the substrate of a Turing machine.
We have only the most nascent understanding of how intelligence arises from human consciousness. The idea that the same phenomenon could occur spontaneously by computing a bunch of tensor products on a bunch of graphics cards reveals just how wishful the thinking of millionaires dreaming of becoming billionaires can be when they fall under the sway of a con man who dazzles them with parlor tricks.