Author: Gary Marcus, Translation: MetaverseHub
Apart from LK-99, the supposed room-temperature superconductor that may prove to be a flash in the pan, I have never seen anything more hyped than generative artificial intelligence. Many companies have reached valuations in the tens of billions of dollars, news reports abound, and from Silicon Valley to Washington to Geneva, everyone is talking about it.
But actual revenue has not met expectations, and it may never do so.
Valuations anticipate trillions of dollars, yet the actual revenue of generative AI is rumored to be only in the hundreds of millions. That revenue could conceivably grow a thousandfold, but this is pure speculation; we should not simply assume it will materialize.
So far, most of the revenue seems to come from two sources: semi-automated code writing (programmers like to use generative tools as assistants) and text writing.
I believe programmers will remain happy to use generative AI as an assistant; its auto-completion is genuinely helpful, and they are trained to spot and fix the errors it introduces. Undergraduates will keep using it as well, but their pockets are not deep (they are likely to turn to open-source competitors).
Other potential paying customers may lose interest quickly. Just this morning, the influential venture capitalist Benedict Evans posted a thread on X (formerly Twitter) raising exactly this issue:
Friends of mine who tried using ChatGPT to answer search queries for academic research have met with similar disappointments. A lawyer who used ChatGPT for legal research was sanctioned by a judge and had to commit in writing not to use it unsupervised again. A few weeks ago, a news report suggested that ChatGPT usage may be declining.
If Evans’s experience is a warning sign, the entire field of generative AI, at least at its current valuations, may quickly come to an end. Programmers will keep using it, as will marketers who need to churn out large volumes of text to promote products and improve search engine rankings. But quickly producing code and mediocre-quality prose is nowhere near enough to sustain the current valuation dreams.
Even OpenAI may struggle to justify its $29 billion valuation if, year after year, it brings in only tens or hundreds of millions of dollars in revenue; competing startups valued in the low billions are likely to go bankrupt eventually. Microsoft’s stock has risen by nearly half this year, perhaps largely on AI expectations, but it may fall; NVIDIA’s stock has soared even more, and it too may fall back.
I recently discussed this with Roger McNamee, an early pioneer of software investing. Perhaps the only truly compelling economic use case is search: for example, using a ChatGPT-powered Bing instead of Google search. But the technical obstacles are enormous, and we have no reason to believe the hallucination problem will be solved quickly. If it is not, the bubble will easily burst.
What worries me now is not just the generative AI economy itself. It is the possibility of a large and painful future correction built on expectations rather than actual business use, and the fact that we are building global and national policy on the premise that generative AI will change the world, a premise that may prove unrealistic in hindsight.
At the national level, regulatory measures to protect consumers may be delayed, such as those addressing privacy, reducing bias, requiring data transparency, and combating misinformation, because opponents argue that the development of generative AI must be accelerated. We may fail to get necessary consumer protections because we are trying to nurture something that may never grow as expected.
In my view, almost everyone is making a fundamental mistake: equating generative AI with AGI (Artificial General Intelligence, AI with flexibility and intelligence comparable to, or greater than, that of humans).
People across the industry would probably like you to believe that AGI is imminent. It feeds their narrative of inevitability, and it drives up their stock prices and startup valuations. Dario Amodei, CEO of Anthropic, recently predicted that we will reach AGI in two to three years. Demis Hassabis, CEO of Google DeepMind, has also forecast AGI in the near term.
I seriously doubt it. At the core of generative AI we face not one but many serious, unsolved problems: its tendency to hallucinate and produce misinformation, its inability to reliably interface with external tools such as Wolfram Alpha, and its month-to-month instability (which makes it unsuitable as a component in large-scale engineered systems).
The reality is that, beyond pure technological optimism, we have no concrete reason to believe solutions to these problems are imminent. Scaling up systems helps in some respects but not in others; we still cannot guarantee that any given system will be honest, harmless, or helpful rather than sycophantic, dishonest, harmful, or biased.
AI researchers have been working on these problems for years. It is unrealistic to imagine that such challenging problems will all suddenly be solved. I have been complaining about hallucination errors for 22 years; people keep promising that a solution is around the corner, and it never arrives, because the technology we rely on today is built on autocompletion, not on factuality.
For a while, these concerns fell on deaf ears; recently, though, some technology leaders finally seem to have grasped this reality. Just a few days ago, Fortune magazine published a report:
If the hallucination problem cannot be solved, generative AI probably will not create trillions of dollars of value every year. If it does not create trillions of dollars of value, it may not have the impact people expect. And if it does not have that impact, perhaps we should not be building our world on the assumption that it will.