The announcement of the $500 billion Stargate project by Donald Trump has provided a significant boost to the artificial intelligence (AI) community. In addition, the Magnificent Seven companies have gained a combined $10 trillion in market capitalization, driven by the release of transformative AI applications like ChatGPT. However, the rise of AI has sparked widespread concern about job losses, with many predicting significant disruption to the workforce. Moreover, despite those enormous gains in market capitalization, flagship AI innovations remain far from perfection and are running deep losses. This raises critical questions about the future of artificial intelligence: Will it be a tool for empowerment and progress, generating profit and new wealth, or will it saturate prematurely despite the initial excitement, leaving the most promising AI innovations and investments stranded in the valley of death? Instead of getting carried away by that early excitement, we should focus on AI's limits when predicting its future.
Understanding Artificial Intelligence
According to the dictionary, intelligence is the human ability to acquire and apply knowledge and skills. Artificial intelligence (AI), then, is a machine's ability to perform tasks that normally require human intelligence. This raises intriguing questions: Is a calculator artificially intelligent because it computes with acquired knowledge and skills? Is building-design optimization software AI because it applies advanced computational skills? Contemporary AI extends beyond such tools: it refers to machines that learn, and can therefore acquire and apply human-like knowledge and skills. This ability to learn is what distinguishes AI from traditional software, making it a powerful tool for innovation and problem-solving. AI evolves, reflecting dynamic, human-like capabilities.
Neural Networks: Tools for Pattern Recognition
The predominant method of developing AI involves training a neural network with patterns, enabling it to correlate inputs to outputs by adjusting the weights of links between nodes. Thus, a neural network-based AI can be considered a weight matrix-based pattern recognition tool. However, the knowledge and skills acquired by such AI models are confined to the training datasets, representing humanity’s existing body of knowledge and skills.
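The weight-matrix view above can be illustrated with a minimal, hypothetical sketch: a tiny two-layer network (written in NumPy, with sizes and learning rate chosen arbitrarily) that learns the logical-AND pattern purely by adjusting the weights of the links between its nodes.

```python
import numpy as np

# Minimal sketch of a neural network as a "weight matrix-based pattern
# recognition tool": all acquired knowledge lives in W1 and W2, and
# learning is nothing more than adjusting those weights so that the
# training inputs map to the training outputs.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Training dataset: the logical-AND pattern. The network's "knowledge"
# is confined entirely to these four examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [0], [0], [1]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 4))   # links: input nodes -> hidden nodes
W2 = rng.normal(0.0, 1.0, (4, 1))   # links: hidden nodes -> output node

for _ in range(10_000):
    # Forward pass: correlate inputs to an output through the weights.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: nudge each weight to shrink the prediction error.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(np.round(out.ravel(), 2))
```

The point of the sketch is that nothing beyond the trained weight matrices exists for the network to draw on: shown inputs outside the pattern it was trained on, it can only interpolate statistically.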
Because it acts as a knowledge aggregator, such a tool cannot understand underlying meaning or assess the merit of information. Instead, it relies on statistical correlations, often favoring consensus over contradictory but superior insights; as a result, groundbreaking discoveries may be sidelined. Nevertheless, for tasks that compile mature knowledge, such as applying Newton's laws or solving differential equations, neural network-based AI performs remarkably well, demonstrating its value in refining established methodologies.
The Limitations of Training-Based AI Underpin the Future of Artificial Intelligence
Training-based AI derives its knowledge and skills solely from its training datasets, which reflect the existing body of human knowledge. Humans, however, do not rely entirely on past experience or manuals to perform tasks like walking to a shopping mall or preparing a salad. They generate new knowledge and skills on the job, forming experience that enables adaptation and improvement. Remarkably, humans keep generating knowledge and skills while working, even when repeating the same job; this is why we value experience. Human intelligence is not limited to acquiring and applying an existing body of knowledge and skills. Its vital component is the capacity to keep generating knowledge and ideas and to apply them in real time while performing tasks.
This capability is crucial for avoiding mistakes, such as preventing accidents while driving. In contrast, current AI systems lack this ability to learn autonomously during task execution. Although synthetic data can supplement training, it risks misrepresenting reality. Similarly, Reinforcement Learning (RL), which relies on correcting mistakes through feedback, isn’t viable for many critical applications, like driving.
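A tiny, hypothetical Q-learning sketch makes the feedback problem concrete: the only way this toy agent discovers that swerving is catastrophic is by crashing during training. The states, actions, and rewards below are invented for illustration, but the structure is the standard Q-learning update.

```python
import random

# Toy Q-learning sketch illustrating why trial-and-error feedback is
# unacceptable for safety-critical tasks: the agent can only learn that
# "swerve" is bad by actually crashing.

random.seed(0)

N_STATES, GOAL = 5, 4                        # positions 0..4 on a road
Q = [[0.0, 0.0] for _ in range(N_STATES)]    # Q[state][action]
alpha, gamma, eps = 0.5, 0.9, 0.2
crashes = 0

for _ in range(200):                         # 200 practice trips
    s = 0
    while True:
        # Epsilon-greedy: occasionally try a random action (exploration).
        if random.random() < eps:
            a = random.choice([0, 1])
        else:
            a = 0 if Q[s][0] >= Q[s][1] else 1
        if a == 1:                           # action 1 = swerve: a crash
            r, s_next, done = -10.0, s, True
            crashes += 1
        else:                                # action 0 = drive onward
            s_next = s + 1
            done = s_next == GOAL
            r = 1.0 if done else 0.0
        # Standard Q-learning update from the reward feedback.
        best_next = 0.0 if done else max(Q[s_next])
        Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
        if done:
            break
        s = s_next

print(f"crashes during training: {crashes}")
```

Every one of those counted crashes is the feedback signal the update rule needs; in a real vehicle, each would be an accident, which is precisely why this learning loop cannot run on public roads.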
Consequently, training-centric AI suffers from premature saturation, limiting its effectiveness in real-world tasks. This creates a gap between initial excitement and the actual threshold of reliability required for real-life applications.
Challenges and Limitations of ChatGPT
As a generative AI, ChatGPT has generated excitement for its human-like writing, produced through statistical prediction over language patterns. However, it makes mistakes and lacks clarity in complex scenarios. For instance, errors surfaced when it analyzed the executive orders (presumably drafted with AI tools) issued by Donald Trump on his first day in office. Moreover, ChatGPT cannot assess the merit of knowledge that deviates from commonly stated views, limiting its reliability in knowledge-intensive roles.
While some argue that early technologies often appear inferior, not all of them achieve scalability or cross the threshold of reliability. Thermoelectric semiconductors, for example, have yet to revolutionize waste-energy harvesting despite their potential. Similarly, the progress of ChatGPT is uncertain: the AI community faces a data-scarcity problem that challenges the release of more advanced versions. Amazon's AI adventure is another example. Despite the hype and staggering investment, latency and fabricated answers mean Amazon's Alexa is mainly used for a narrow set of simple tasks, such as playing music and setting alarms, resulting in low willingness to pay and staggering losses. Without overcoming these barriers, ChatGPT and similar AI models may fail to meet their promised potential.
The Future of Artificial Intelligence: Examples from Autonomous Driving and AI Hallucinations
Autonomous driving performs well in structured environments and favorable weather but struggles to detect subtle signals in rough conditions. A critical concern is hallucinations—fabricated responses or actions. In busy city streets, drivers make thousands of micro-decisions during a single trip, such as slowing down or adjusting lanes. If an autonomous vehicle experiences even occasional hallucinations, trust becomes a significant issue.
Despite over $80 billion invested in R&D, the promise of reliable autonomous vehicles remains largely unfulfilled. Similarly, chatbots, including generative AI systems, exhibit hallucinations, with an estimated 27% error rate reported by analysts. These challenges raise concerns about the safety and practicality of AI systems in high-stakes scenarios. Until these technologies achieve a high threshold of accuracy and reliability, their full potential in real-world applications will remain elusive. The journey toward robust AI solutions continues, but trust must be earned, not assumed.
The Limitations of Current AI and Business Challenges
The current generation of training-based AI machines lacks the capability to fully take over human roles in many jobs. As a result, the vision of artificial general intelligence (AGI) or superintelligence remains distant from today’s AI science. While tools like ChatGPT have sparked excitement about the business potential of AI, inflating the top seven technology companies’ valuation by $10 trillion, there is a strong possibility that many high-profile AI innovations may fail to achieve profitability.
For instance, Amazon’s AI device unit reported losses of $25 billion between 2017 and 2021. Moreover, its recent $8 billion investment over 18 months to integrate generative AI into Alexa has faced significant challenges (as reported by the Financial Times). Similarly, autonomous vehicles and other AI applications continue to struggle with reliability, raising doubts about their long-term viability. Until AI overcomes these technical and business hurdles, its transformative promise may remain unrealized.
The Risks of Overhyping AI Investments
Despite AI's shaky scientific foundation, fresh initiatives, such as Donald Trump's $500 billion Stargate program for mega data centers and AI infrastructure, aim to give AI a significant push. However, these efforts raise concerns about an agenda to inflate valuations and attract investors' money, often driven by cult-like figures hyping AI's potential.
With a weak scientific base, such endeavors risk wasting resources and swindling investors, as lofty promises remain undelivered. The reality is that no amount of funding can substitute robust science and thoughtful implementation required for achieving transformative AI innovations.
To prevent further resource misallocation, it is critical to scrutinize the science behind AI claims and avoid blindly investing in overhyped promises. Only a clear understanding of AI's limitations and potential can guide investments toward realistic, sustainable outcomes that benefit both investors and society. Hence, the future of artificial intelligence demands an approach grounded in the underlying science rather than the hype.