The much-publicized threats posed by the AI singularity have raised fears so great that many experts liken the danger to nuclear war. Some are certain AI will reduce people to simpletons obedient to superintelligent machines. Many expect it to cause mass unemployment.

To examine this fear of an AI apocalypse, a TechCast study used collective intelligence to combine background data, the best judgment of 30 experts, and responses from ChatGPT. By including ChatGPT, the study provides a small breakthrough that serves as a model for merging AI and human intelligence. TechCast has used this method for 20 years to forecast emerging technologies with good accuracy. For instance, we forecast more than a decade ago that AI would take off in 2023. (Technological Forecasting & Social Change, 2012)

The data below suggest that the bold claims and profound fears over a purported AI singularity may be overblown. The study is limited, but it finds no evidence to support the prospects of an AGI superior to humans, mass unemployment, or existential threats, all commonly professed to be all but certain. The results suggest that AI will become far more powerful yet remain subject to superior human agency, with moderate to strong capabilities for controlling its dangers. We also forecast only a modest rise in unemployment and beneficial social and human impacts.

These findings could help dispel the exaggerated fears over AI … roughly the way the massive Y2K threat fizzled 23 years ago because corrective changes were made. Likewise, the legitimate fears over AI could be resolved by some form of “managed AI.” Sam Altman, CEO of OpenAI, said: “Society is capable of adapting as people are much smarter … than a lot of experts think.” (Time, July 3, 2023)

Good to Strong Prospects for Control

Our results show diverse estimates on controlling AI. A few experts seem convinced AI will produce a superintelligence that surpasses humans and therefore cannot be controlled, but most responses average 50% control (“occasional failures”). A small optimist group expects 90% control (“minimal failures”). ChatGPT splits the difference between the moderates and optimists at 75% control.

Considering the full range of these data, it seems most likely that AI systems will be subject to a moderate level of control (50%), and possibly the strong control (75%) predicted by ChatGPT. In short, AI may occasionally be dangerous, yet we are likely to learn how to maintain good enough control to gain the enormous benefits of greater productivity and more creative lives.

Unemployment To Rise About Three Percentage Points

The data reflect sharp differences on this issue. A small number of experts think AI will automate most jobs, causing mass unemployment of 70% or even 100%. The bulk of responses reflects moderate views and averages 11% unemployment. ChatGPT provided a boilerplate analysis but no estimate … a limitation of AI. Overall, we conclude unemployment is likely to rise modestly above the ordinary global level of 8% to about 11%, although it may reach crisis levels of 20% in some nations and industries.

Further, an earlier TechCast study found that routine jobs would be completely automated, while service and knowledge work would be upgraded. The big gains would come from a new class of creative and collaborative work. That previous study estimated unemployment would reach 11% globally, matching the current forecast precisely.

AGI Forecast About 2037, but Doubts About Super Intelligence

Most experts think this new wave of chatbots almost matches human intelligence even now, and that progress is so dramatic AGI will follow soon. Their estimates suggest that AI systems are likely to perform the full range of human intelligence around 2037, plus or minus five years. But an unusually large number of “Much Later/Never” responses raises serious doubts about the feasibility of AGI at all, much less of an artificial superintelligence. ChatGPT puts the probability of AGI at only 50%.

Because only humans experience true consciousness, we suspect AGI will always remain inferior because it cannot express emotions, beliefs, purpose, vision, and other subjective thought as humans do. Yes, these higher-order functions can be simulated, but only humans possess the agency to draw on heightened consciousness and choose to take deliberate action.

Stable and Prosperous Social Impact

Despite some variance, the data overall suggest positive social impacts. ChatGPT estimated +5 on a scale of -10 (disaster) to +10 (stable prosperity), compared to +3 by our experts. We feel confident, therefore, in expecting substantial gains in social stability and prosperity.

More Intelligent and Creative Human Impact

Results show close agreement between ChatGPT's estimate of +5 and this study’s average of +4, on a scale of -10 (obsolete, robotic) to +10 (intelligent, creative). The conclusion is clear: AI is likely to help humans become more intelligent and creative.

Conclusions: Will the AI Singularity Become Another Y2K?

It seems obvious that AI systems should improve dramatically. IBM recently announced plans to build a 100,000-qubit quantum computer, which could add the processing power needed.

However, these gains may amount to elaborations of the same basic AI model. There have been no major innovations in the prevailing AI architecture, which relies on deep neural networks crunching huge amounts of data to identify statistical patterns. Until some unknown breakthrough occurs, we think AI is likely to become an unusually powerful tool, yet one that can also be made safe and well managed.

Four classes of control seem needed to manage the likely errors and threats. First, AI systems should be designed using principles like those espoused by Isaac Asimov to prevent dangerous outcomes. Then they should be tested before being released to the public, just as new medicines, cars, and aircraft are routinely tested. When in use, AI must be monitored for dangers by qualified supervisors, and corrective changes made. Finally, AI systems must be equipped with override capabilities so they can be aborted if necessary. This is a huge challenge, especially as AI is widely considered a black box that cannot be well understood. Yet Sam Altman thinks, “We can manage this.”
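To make the four classes of control concrete, here is a purely illustrative sketch in Python. The class name `ManagedAI`, the toy “Asimov-style” rule, and all thresholds are hypothetical assumptions invented for this example; no real AI product exposes such an API.

```python
# Illustrative sketch of the four classes of control described above:
# (1) design-time rules, (2) pre-release testing, (3) runtime monitoring,
# (4) an override that aborts the system. All names and rules are
# hypothetical, chosen only to show how the layers fit together.

class OverrideTriggered(Exception):
    """Raised when a human supervisor has aborted the AI system."""

class ManagedAI:
    # (1) Design: hard rules every output must satisfy before release.
    # This single keyword check is a toy stand-in for Asimov-style principles.
    DESIGN_RULES = [lambda text: "harm" not in text.lower()]

    def __init__(self, model):
        self.model = model        # any callable: prompt -> text
        self.failure_log = []     # (3) monitoring record for supervisors
        self.aborted = False      # (4) override flag

    def passes_design_rules(self, text):
        return all(rule(text) for rule in self.DESIGN_RULES)

    def pretest(self, test_prompts):
        """(2) Testing: a release gate run before public deployment."""
        return all(self.passes_design_rules(self.model(p)) for p in test_prompts)

    def respond(self, prompt):
        if self.aborted:
            raise OverrideTriggered("system has been shut down")
        output = self.model(prompt)
        if not self.passes_design_rules(output):
            # (3) Monitoring: log the failure and withhold the output.
            self.failure_log.append(prompt)
            return "[output withheld pending review]"
        return output

    def override(self):
        """(4) Override: a supervisor aborts the system entirely."""
        self.aborted = True

# Usage with a toy model standing in for a real AI system.
toy_model = lambda p: "harm everyone" if "bad" in p else f"answer to {p}"
ai = ManagedAI(toy_model)
assert ai.pretest(["hello"])              # release gate passes on safe prompts
safe = ai.respond("hello")                # normal output is released
blocked = ai.respond("bad question")      # unsafe output withheld and logged
ai.override()                             # respond() now raises OverrideTriggered
```

The point of the sketch is structural: each layer catches failures the previous one misses, so even a “black box” model sits inside checks that humans define and can always abort.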

To put the dangers in historical perspective, the fears over AI were nicely anticipated by reactions to earlier inventions for managing knowledge. When Socrates was presented with the first books after a long tradition of oral communication, you might think he would have seen the vast possibilities of storing knowledge. But no, he feared writing would prevent people from thinking and would spread falsehoods. Likewise, when the first PCs connected to the Internet in the 1980s, they were met with a wave of "computerphobia": fear of breaking the computer, looking stupid, and losing control.

While the dangers of AI seem formidable, similar threats were rampant 23 years ago when the Y2K problem terrified the world. The onset of the year 2000 was expected to trigger massive system failures throughout society because computers were not equipped to handle dates in the 2000 range. When New Year’s Eve 2000 arrived, however, nothing happened. Just as Y2K proved harmless because the threats spurred corrective changes, today's AI threats could be resolved by managing the technology well.

AI is far more complex than the Y2K threat. No doubt about that. But it might be wise to question the dramatic fears of an AI apocalypse and focus on better managing this powerful technology.


William E. Halal, PhD, is Professor Emeritus at George Washington University and director of the TechCast Project. Halal was cited by the Encyclopedia of the Future as one of the top 100 futurists in the world. His latest book is Beyond Knowledge: How Technology Is Driving an Age of Consciousness.