Low-cost sensors and broadband wireless connectivity have been giving us increasingly detailed, real-time data. Of course, processing these data yields intelligence. This abundance of data has created the impression that we can imitate human-like intelligence in both products and processes, subsequently making the human role in productive activities irrelevant. The potential of profiting from such an opportunity has triggered exponential growth in artificial intelligence (AI) startups. However, are such data misguiding AI startups?
Gartner’s prediction of 5.8 billion Internet of Things (IoT) endpoints in 2020, a 21% increase from 2019, underscores this data-centric reasoning. To take a share of this emerging opportunity, VC fund managers have been accelerating the supply of risk capital. According to 2019 data from the National Venture Capital Association, 1,356 AI-related companies in the U.S. raised $18.457 billion, setting new funding records. Moreover, without the word AI in the executive summary or pitch deck, it is increasingly difficult to draw VC fund managers’ attention. After silicon, AI has become the buzzword of this data-centric world. However, data alone do not produce economic outputs.
The challenge is to extract intelligence from these data that surpasses the human role. Of course, machines have been surpassing the codified capabilities of human beings. But will we be equally successful in overcoming human intelligence built on strong innate abilities? The initial performance data of AI learning algorithms is also misleading, because rapid early learning saturates before overtaking human intelligence. Hence, we run the risk of data misguiding startups on multiple fronts.
Sensors, connectivity, and IoT: the root cause of data misguiding AI startups
Eyes are our most powerful sensors. Our visual ability makes human beings indispensable for many jobs. Just 20 years ago, we were under the impression that no image sensor would ever have enough pixels to be comparable to the 120 million rod cells and 6 million cone cells of the human eye. Yet we are not far from having one at an affordable price: a high-end smartphone already offers a camera with a whopping 108 megapixels. Alongside these eye-comparable image sensors, we have compact, high-performance computing units with teraflops of capacity. These numbers create the impression that we can imitate human vision in machines.
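To see why the impression arises, here is a back-of-the-envelope comparison of the raw counts (a deliberate oversimplification: it treats each photoreceptor as one pixel and ignores the fovea’s concentration of cones, the eye’s dynamic range, and the retina’s own processing):

```python
# Rough comparison: human-eye photoreceptors vs. a high-end phone sensor.
# Simplification: each photoreceptor is counted as one "pixel".
rods, cones = 120e6, 6e6      # photoreceptor counts in one human eye
sensor_pixels = 108e6         # 108-megapixel smartphone camera

total = rods + cones
print(f"Eye photoreceptors: {total:,.0f}")            # 126,000,000
print(f"Sensor pixels:      {sensor_pixels:,.0f}")    # 108,000,000
print(f"Ratio (eye/sensor): {total / sensor_pixels:.2f}x")  # ~1.17x
```

By pixel count alone the gap has nearly closed, which is exactly the kind of headline number that feeds the impression.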
We are now entering the 5G era. Latency as low as 10 ms lets industrial IoT devices, such as automobiles, communicate with each other faster than human beings possibly can with their neighbors. Does this mean that coordination between IoT devices will no longer limit our ability to replace human roles?
Data, intelligence, and innate abilities
Of course, we have had tremendous success in producing data. We can extract information from these data, and even some forms of intelligence. However, our progress in extracting subtle intelligence remains primitive. For example, detecting the subtle eye contact between human drivers and pedestrians sharing a busy intersection appears to be well out of reach of modern machine vision algorithms. Furthermore, once objects such as human faces are partially covered, even the most advanced machine vision algorithms fail miserably.
Although we can put a billion transistors on a tiny chip, our capability to embed the required number of sensors at varying depths in flexible materials, as needed for human-like robot fingers, falls far short of the need. The failure to develop human-like robot fingers alone will prevent us from reaching many goals of building artificially intelligent machines.
Data analytics: old wine in a new bottle
Along with the explosion of data has come an explosion of high-sounding phrases for extracting information from it. It began with data mining, which has since graduated to data analytics. How far these differ from age-old statistical algorithms deserves clarification. Beyond statistical inference techniques such as Bayesian methods, clustering, or regression analysis, what is actually new? Have we simply given new names to century-old computational techniques, like old wine in a new bottle? And are these techniques strong enough to imitate human sensory abilities such as near and far vision or sound localization?
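To make the question concrete, here is a minimal sketch (the dataset and parameters are illustrative) of a typical “data analytics” workflow using scikit-learn; the workhorses turn out to be least-squares regression and k-means clustering, techniques that predate the analytics label by decades to centuries:

```python
import numpy as np
from sklearn.linear_model import LinearRegression  # least squares, early 1800s
from sklearn.cluster import KMeans                 # k-means, 1950s-60s

rng = np.random.default_rng(0)

# "Analytics" step 1: find a trend -- ordinary least-squares regression.
X = rng.uniform(0, 10, size=(200, 1))
y = 3.0 * X[:, 0] + rng.normal(0, 1, size=200)
trend = LinearRegression().fit(X, y)
print("estimated slope:", trend.coef_[0])          # ~3.0

# "Analytics" step 2: segment the data -- k-means clustering.
points = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print("segment centers:\n", segments.cluster_centers_)  # near (0,0) and (5,5)
```

Renaming these pipelines does not change their nature: they summarize and partition data, which is a long way from near and far vision or sound localization.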
Memorization-based learning techniques make data misguiding AI startups worse
At best, current learning algorithms can be compared to memorization-based techniques. Neural-network-based deep learning algorithms rely on memorizing training sets by adjusting the weights of connections between artificial neurons. Although these loosely mimic physical neurons, can they imitate human vision or other sensory capabilities? Do we have learning algorithms that can build a model of reality from a few samples and, by simulating that model, recognize all possible deformations of an object? Compared with the basic intelligence of a 3-year-old toddler, the best AI algorithm evaporates like a drop of water on a red-hot surface. Dealing with the subtle variations of real life is a far greater challenge than computing millions of moves to beat the world’s best chess player. It appears that such data are misguiding AI startups.
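A minimal sketch of this brittleness (the dataset, network size, and one-pixel shift are illustrative choices, not a rigorous benchmark): a small neural network that has effectively memorized its training images loses much of its accuracy when those same images are merely shifted by one pixel, a deformation a toddler would not even notice.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

# Train a small neural network on 8x8 handwritten-digit images.
digits = load_digits()
X, y = digits.data / 16.0, digits.target
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X, y)
print("accuracy on the memorized training images:", clf.score(X, y))  # ~1.00

# Shift every image one pixel to the right (columns wrap around).
shifted = np.roll(digits.images, shift=1, axis=2).reshape(len(X), -1) / 16.0
print("accuracy on the same images shifted 1px: ", clf.score(shifted, y))  # drops sharply
```

The weights encode where the training pixels were, not what a digit is; a model of reality built from a few samples would shrug off a one-pixel shift.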
Progression rate: the last 5 percent makes it fail, one of the sources of data misguiding AI startups
Data also misguide us in assessing the progress of AI learning algorithms. In the beginning, a learning machine improves very quickly on the given data set. However, upon reaching, say, 80 to 90 percent accuracy, such algorithms start to oscillate. Estimates of reaching the milestone, based on past progress data, keep failing repeatedly. Progress toward 95 percent accuracy becomes meaningless once we face the situation of having no means of achieving the next 5 percent. And unlike in many other markets, a half-baked AI solution is not better than no solution at all: until an AI solution matches or exceeds human performance, it has no market value.
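Here is a minimal sketch of why such milestone estimates keep failing (the saturating-exponential learning curve and all numbers are illustrative assumptions): extrapolating the fast early progress linearly promises 95 percent accuracy within a few epochs, while the underlying curve plateaus below 90 percent and never gets there.

```python
import numpy as np

# Hypothetical learning curve: fast early gains saturating near 90 percent.
def accuracy(epoch, ceiling=0.90, gap=0.60, rate=0.35):
    return ceiling - gap * np.exp(-rate * epoch)

epochs = np.arange(1, 11)
acc = accuracy(epochs)

# Naive milestone estimate: extrapolate the first five epochs linearly.
slope, intercept = np.polyfit(epochs[:5], acc[:5], deg=1)
eta_95 = (0.95 - intercept) / slope
print(f"linear extrapolation promises 95% around epoch {eta_95:.1f}")  # ~epoch 7

# Reality: the curve saturates at its ceiling and never reaches 95%.
print(f"accuracy at epoch 100:  {accuracy(100):.4f}")   # ~0.9000
print(f"accuracy at epoch 1000: {accuracy(1000):.4f}")  # still ~0.9000
```

The past progress data are real, yet every forecast built on them overshoots; the missing piece is the shape of the curve, not more data points.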
Lessons from ASIMO and autonomous vehicles
At the dawn of the 21st century, Honda impressed the world with ASIMO. After its spectacular demonstrations of walking, dancing, and playing, Honda was under the impression that its humanoid ASIMO would revolutionize elderly care service delivery. However, after an additional 18 years of R&D, Honda’s team failed to imitate many of the needed human-like innate abilities in ASIMO, and ASIMO never qualified for elderly care jobs. Consequently, after investing $500 million in R&D, Honda’s management decided to stop further work on ASIMO.
Another example is autonomous vehicles. Many experts observed that driving buses, trucks, and cars was a repetitive, routine job, and demonstrations of U.S. military-funded autonomous vehicles underscored that judgment. Hence, Silicon Valley icons like Google embarked on the challenge, and dozens of high-profile companies and startups soon joined the race to innovate autonomous vehicles. Iconic figures like Elon Musk and media outlets like The Economist spread the message that the autonomous vehicle was just around the corner. However, after countless demonstrations and roughly $80 billion in R&D, these efforts appear to be stuck in the valley of death for now.
It seems that data are misguiding us, and that we are underestimating human intelligence. Hence, it is fair to say that we should focus on deepening our understanding of both to avoid data misguiding AI startups. Otherwise, data run the risk of guiding AI startups into the valley of death.