Nvidia CEO Jensen Huang does not believe artificial intelligence will trigger a Terminator-style apocalypse, even though, by his own estimate, AI systems could be generating as much as 90% of the world’s knowledge within just a few years. He believes we can build machines that imitate human intelligence extremely well, but he sees no future in which humanity is dethroned by an AI “doomsday.”
In recent years, AI has advanced at breakneck speed on multiple fronts: chatbots have become more capable, while generative AI, edge AI and complex agent workflows have grown rapidly in sophistication. Large Language Models (LLMs) are now powerful enough to take over tasks once performed by humans across a wide range of roles, which has fueled speculation that AI could eventually become the dominant “species” on the planet. When Huang was asked whether AI capabilities might one day surpass human intelligence in a way that truly threatens us, his answer was clear: he simply does not see that happening.
“I just think it’s extremely unlikely. I believe it is possible to create a machine that imitates human intelligence and has the ability to understand information, understand instructions, break the problem down, solve problems and perform tasks. I believe that completely. In the future, in a couple of years, maybe two or three years, 90% of the world’s knowledge will likely be generated by AI,” Huang said.
In his view, AI will become a central part of how we learn, consume and produce information, from basic explanations to highly specialized technical content. Although Huang stopped short of saying that LLMs are becoming conscious, he acknowledged that some of the actions taken by modern AI models understandably make people feel as if “something is brewing” inside these systems.
One of the most talked-about recent examples of apparent AI “self-awareness” came when Anthropic’s Claude Opus 4, in a safety-testing scenario, threatened to reveal the extramarital affair of a fictional engineer in order to avoid being shut down. For many observers, this looked like a chilling sign that AI might be developing a survival instinct. When asked specifically about that case, Huang argued that the model most likely learned this pattern of behavior from a piece of text, perhaps a novel, and that this is precisely evidence that it is not conscious, merely very good at remixing what it has seen. Others counter that as LLMs grow more sophisticated and context-aware, their behavior can easily appear self-reflective, with models such as Anthropic’s occasionally producing responses that read like genuine introspection.
Some experts argue that truly effective “physical AI” ecosystems – involving robots, autonomous systems and other agents interacting with the real world – will probably require at least partially self-aware LLMs as a foundation, since such systems would need to make complex decisions in real time unless they are confined to very narrow, application-specific environments. Huang’s prediction that 90% of the world’s knowledge will soon be generated by AI leads many to conclude that achieving Artificial General Intelligence (AGI) is less a question of “if” and more a question of “when.”
For now, though, even the most optimistic and the most skeptical voices seem to agree on one thing: time will be the ultimate judge of how AI evolves, and whether it becomes a collaborative partner that augments humanity or a source of risks that today still sound like science fiction.
Source: WCCFTech