OpenAI CEO Sam Altman has sparked significant attention by stating, “We are now confident we know how to build AGI as we have traditionally understood it.” This raises important questions: what exactly does Altman mean by AGI (artificial general intelligence), and how does it differ from current generative AI applications like ChatGPT? Moreover, given the challenges of monetizing generative AI, what drives the pursuit of such an ambitious goal? Altman’s reference to AGI might imply a system of Agentic AI that goes beyond traditional robotic process automation (RPA) by incorporating decision-making capabilities through large action models (LAMs), akin to the large language models (LLMs) powering generative AI. However, AGI goes far beyond Agentic AI. Achieving it requires moving past existing technologies and delving into deeper layers of scientific knowledge. The transition from generative AI to AGI demands not just technical innovation but also a broader understanding of intelligence itself. Hence, it is worth examining the underlying basis of, and reasons for, Sam Altman’s AGI confidence.
Layered Structure of Scientific Knowledge and Its Role in Innovation
Ascending from Gen AI to AGI raises an issue of science. As Thomas Samuel Kuhn argued in his book, “The Structure of Scientific Revolutions”, our knowledge of the natural world is structured in layers. As we move to a deeper layer, a paradigm shift occurs in our understanding, inventions, and innovations, enabling us to get our jobs done better and unfolding industrial revolutions. For example, the transition from electrical science to quantum science provided far more effective means of manipulating materials, energy, and light, leading to inventions such as photocopying, electronic image sensors for capturing light, tiny solid-state switches in integrated circuits, lasers, and many more. In retrospect, the depth of our layer of scientific knowledge determines our ability to invent, innovate, and scale.
The evolution of every invention eventually faces saturation. To overcome it, we need a next technology core, based on deeper science, to create a new wave of reinvention. For example, Edison’s filament light bulb, based on electrical heating, reached saturation; deeper science produced gas-discharge bulbs and tubes, followed by quantum-science-based LED light bulbs. Artificial intelligence (AI) is no different. Like the light bulb and many other inventions, its evolution demands successively deeper layers of scientific knowledge to fuel successively bigger waves.
Path to AGI: Beyond Generative AI and Into Deeper Scientific Knowledge
Artificial general intelligence (AGI) is a form of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of tasks. AGI goes far beyond the memorization of codified knowledge through training data sets: it will not be restricted to the data it has been trained on or limited to imitating the range of human examples available for learning. AGI should possess human-like intuition to devise novel actions and handle situations never faced before. Hence, transitioning from generative AI (Gen AI) to AGI requires a paradigm shift in our knowledge of learning, thinking, and generating ideas to demonstrate human-like intelligence. This demands scientific knowledge far deeper than what powers Gen AI. Notably, although insufficient for AGI, the recent Nobel Prize recognizing foundational discoveries and inventions in machine learning with artificial neural networks marks a step toward this deeper layer of knowledge.
The Scientific Limitations of Generative AI and the Challenge of Progressing to AGI
As we know, the underlying science of generative AI (Gen AI) relies on learning stored in the weight matrices of a feed-forward neural network, adjusted via training data sets. Consequently, Gen AI is limited to the codified knowledge present in its training data. While progress has come from advances in computing power, particularly GPUs, and from the expansion of training data sets, Gen AI outputs still suffer from errors. Moreover, signs of saturation are evident: there is no large reservoir of additional data left to further improve accuracy, and increased computing power cannot overcome the inherent limitations of the GPT (Generative Pre-trained Transformer) model, constrained as it is by its underlying science. Furthermore, there has been no breakthrough in discovering a deeper layer of science. Therefore, from a scientific perspective, there is no reason to believe we are ready to transition from Gen AI to AGI.
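The learning mechanism described above can be sketched in a few lines. In this minimal illustration (not OpenAI’s actual architecture), all of the network’s “knowledge” lives in its weight matrices, which gradient descent adjusts to fit a fixed training set; the toy XOR task, layer sizes, and learning rate are arbitrary assumptions for demonstration only.

```python
import numpy as np

# Minimal feed-forward network: everything it "knows" is encoded in the
# weight matrices W1 and W2, fitted to a fixed training set (XOR here).
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # training inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # training targets

W1 = rng.normal(0.0, 1.0, (2, 16))  # layer-1 weight matrix
b1 = np.zeros((1, 16))
W2 = rng.normal(0.0, 1.0, (16, 1))  # layer-2 weight matrix
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20000):
    # Forward pass through the feed-forward network.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (cross-entropy gradient): nudge the weights so the
    # outputs move closer to the training targets.
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(preds.ravel())  # the network can only reproduce what its data encodes
```

The point of the sketch is the limitation the paragraph names: once training ends, the network’s behavior is fixed entirely by weights fitted to its training data, so it cannot generalize beyond what that data codifies.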
Sam Altman’s AGI Agenda: Unfavorable Economics of Technology and Innovation
The question is, why is Sam Altman pushing this AGI agenda so strongly? Is he foolish? Hardly. He is a businessman, and he needs money to run his business. He and his investors urgently need to inflate and sustain the valuation of OpenAI, which currently stands at $157 billion. According to CNBC, OpenAI lost $5 billion while generating $3.7 billion in revenue in 2024. Moreover, the hype about ChatGPT replacing knowledge workers has been fading as people gain deeper insight into the errors it produces and what can truly be gained from it. Meanwhile, humans have been sharpening their own abilities and prefer to use ChatGPT as an assistive productivity tool. More importantly, willingness to pay has not shown the expected signs.
Premature Saturation: The rise in training cost from $100 million to $1 billion to $10 billion for releasing successive versions with only minor improvements indicates that GPT technology has prematurely reached saturation, before satisfactorily automating the codified knowledge of humans. Consequently, OpenAI has backtracked on its attempt to release GPT-5, or Orion, which was claimed to be far superior to GPT-4. Such a reality raises the question of whether the Gen AI bubble will burst.
After spending over $200 billion on supercomputing facilities packed with millions of Nvidia GPUs, costing as much as $70,000 apiece, the industry has likely learned an important lesson: the theory that these models keep getting better the more training data and compute you throw at them is wrong. Consequently, with diminishing returns setting in early, the strategy of increasing willingness to pay and expanding the market through successive better versions, as Apple did with the iPhone, has hit an insurmountable barrier.
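The diminishing-returns pattern described above can be sketched numerically. Published neural scaling laws model loss as a power law in compute, roughly L(C) = a·C^(−b); the constants below are made-up illustrations, not OpenAI’s actual figures, but the shape of the curve is the point: each tenfold jump in compute (and cost) buys a smaller absolute improvement.

```python
# Illustrative power-law scaling curve: loss falls as compute grows, but
# every 10x increase in compute buys a smaller absolute gain.
# The constants a and b are arbitrary, chosen for illustration only.
a, b = 10.0, 0.05

def loss(compute):
    return a * compute ** (-b)

budgets = [1e2, 1e3, 1e4, 1e5]  # each step costs 10x the previous one
for small, big in zip(budgets, budgets[1:]):
    gain = loss(small) - loss(big)
    print(f"10x more compute ({small:.0e} -> {big:.0e}) cuts loss by {gain:.3f}")
```

Under these toy constants the successive gains shrink with every step, which is the economic bind the paragraph describes: each new version costs an order of magnitude more to train while delivering a smaller improvement.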
Ironically, this reality could have been recognized much earlier by paying closer attention to the science of feed-forward neural networks. Besides, Nvidia’s bold move to power humanoid robots like Tesla’s Optimus with its GPUs to sustain its revenue, since it does not foresee future demand for building data centers to power GPT models, underscores this reality.
The Profit Struggle and OpenAI’s Financial Dilemma
Hence, OpenAI faces an inevitable outcome: GPT applications like ChatGPT cannot reach profitability. If Sam Altman acknowledges this reality publicly, OpenAI will be left without the funds to sustain its loss-making operations; employees will be laid off, and the $157 billion valuation will collapse. Neither Sam Altman nor his investors are ready to face this. To buy more time, they aim to craft a new disruptive-innovation narrative, shifting towards AGI, hoping that this newly born AGI hype cycle will bring in additional funds, inflate the valuation further, and give existing investors time to offload their shares.
Evaluating Sam Altman’s AGI Confidence: Hype or Opportunity?
To determine whether Sam Altman’s confidence in knowing how to build AGI is merely an attempt to generate hype and sustain OpenAI amid an apparent loss trap, or a signal of a new investment opportunity, we must focus on the underlying science: the performance limits and profitable revenue trends that the science of neural networks has demonstrated in pursuing the mission of generative AI (Gen AI). We should also watch the moves of GPU suppliers like Nvidia and of data center builders such as Amazon and Microsoft, which cater to AI training demand. Given this unfolding reality, it may not be unfair to conclude that “Sam Altman’s AGI confidence suffers from a weak scientific foundation and a bleak economic reality due to the premature saturation of Gen AI, and is likely rooted in fueling hype.”