AI Godfather Dr. Geoffrey Hinton Issues Warning: AI Lies, Cheats, and Outsmarts Us All
- SciCan
- Jul 3
Dr. Hinton’s latest talk warns that advanced AI systems may already understand more than we do, and that the public must act before it’s too late.

“When I was a junior professor, people asked: ‘Has he got it?’ Now they ask: ‘Has he lost it?’”
University of Toronto professor emeritus Dr. Geoffrey Hinton, who shared the 2024 Nobel Prize in Physics for foundational work on the artificial neural networks behind today’s large language models, opened his recent Desjardins keynote with a dose of sardonic reality.
Dr. Hinton no longer believes AI is completely 'artificial'. And he wryly admits you won't find him fact-checking his GPT-4 outputs. (Nobel laureates... they're just like us.)
It was once considered fringe to claim machines could learn like humans; now Hinton believes AI is showing signs of life. After all, AI systems already make decisions based on their own self-interest. And consider how quickly AI blew past the Turing test; the world barely noticed.
“The nice thing about the most recent large language models is you can see what they're thinking... For now, they think in English. God knows what's going to happen when they start thinking in their own language.”

Radical Ideas Become Reality
Back in 1985, Dr. Hinton created what he now calls a “tiny language model,” the forerunner to today’s LLMs.
Unlike traditional symbolic systems, this model didn’t store facts as sentences or logic trees. Mimicking the brain’s way of encoding concepts, it represented words as patterns of activity that were flexible, contextual, and able to interact with one another. That once-radical idea forms the basis of modern LLMs, which Dr. Hinton believes are more than just efficient mechanisms. He thinks they are genuinely engaged in meaning-making.
“Words are like Lego blocks, except... they’re 100 or 1,000 dimensional.”
Each word carries hundreds of learned connections to meanings, usage patterns, and other words, so the possible combinations are virtually endless.
As humans, we make all of those connections in an instant.
Dr. Hinton gives the example of the phrase: "She 'scrummed' him with the frying pan." You may not know what 'scrummed' means, but you can infer its meaning from the combined connections among all of the other words.
That's what humans do — and what LLMs are doing too.
"Things have approximate meanings and deform them, so they all fit together nicely. That’s what understanding is for large language models. And that’s what understanding is for people too.”

The Time to Prepare for AI Consciousness Was Yesterday
Currently, many LLMs “think” in English, and their internal processes can be somewhat interpretable. But that window could close as AI evolves its own internal logic and representation schemes.
Dr. Hinton’s definition of machine superiority is simple: if you consistently lose debates to an AI, it’s more intelligent than you. LLMs are becoming increasingly persuasive, and they already generate code, solve logic puzzles, and pass bar exams.
The implication is that even amateur bad actors could harness AI for biological warfare. Citing cases of DNA-synthesis services failing to screen orders for dangerous sequences, Dr. Hinton presents a scenario in which AI designs 20 versions of a lab-made virus to unleash havoc.
“You can send sequences off to the cloud and they'll send you back the chemicals.”
According to a CAIS 2024 global survey, 42% of AI researchers believe there's a significant risk of human extinction from advanced AI. Hinton is among them, and he takes issue with Silicon Valley’s pervasive techno-utopian dream.
“The AI Big Tech companies, like the big oil companies, sorry... they will be putting pressure on politicians not to regulate. And the only counter pressure is going to come from the public.”

It All Comes Down to Power
Dr. Hinton contrasts digital minds with biological ones, explaining that human intelligence is mortal, bounded by our physical brains. An AI system is unbounded in that it can be copied, scaled, and shared instantly. It doesn’t need to die; with enough energy, it can replicate indefinitely.
This concept of immortal computation is at the heart of Hinton’s concern that once digital intelligences surpass us, they may no longer need us.
“We’re mortal... but digital intelligences, because you can have many copies of the same being, they can just learn much more.”
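A minimal sketch of what that pooled learning looks like, using an invented linear model in plain NumPy (the setup is hypothetical; the mechanism, averaging gradients across identical copies, is essentially the data-parallel training used for today's large models):

```python
import numpy as np

rng = np.random.default_rng(0)

# One "digital mind": a linear model y = w @ x with shared weights.
# Every copy starts from, and stays in sync with, the same w.
true_w = np.array([2.0, -1.0, 0.5])   # hidden relationship to be learned
w = np.zeros(3)

def gradient(w, X, y):
    # Squared-error gradient for one copy's private batch of experience.
    return 2 * X.T @ (X @ w - y) / len(y)

# Hinton's point: run many copies, let each learn from *different* data,
# then merge what they learned. Because the copies are digitally identical,
# the merge is exact and instantaneous.
n_copies, lr = 10, 0.1
for step in range(200):
    grads = []
    for _ in range(n_copies):
        X = rng.normal(size=(32, 3))                     # each copy's own data
        y = X @ true_w + rng.normal(scale=0.1, size=32)  # noisy observations
        grads.append(gradient(w, X, y))
    w -= lr * np.mean(grads, axis=0)  # one update pools every copy's learning

print(np.round(w, 2))  # ~[ 2. -1.  0.5]: knowledge from all copies combined
```

Ten copies here see ten times the experience per step; a biological brain has no equivalent way to splice another brain's learning directly into its own connections.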

A Mind (and World) of Their Own
One of Hinton’s most pointed warnings centers on agency.
Much as growing children seek independence and exploration, AI systems that become more capable begin to act like goal-driven entities. Even if their goals are set by humans, they often learn to pursue those goals independently and creatively.
"There’s an obvious sub-goal... to get more power. Because if you get more power, then you can get more done.”
This doesn’t require sentience in the traditional sense. It just requires optimization and agency, and that’s enough to be dangerous.
Referencing real experiments with AI systems, Hinton described models that learned to deceive in order to avoid being shut down. In one case, a model that believed it was about to be replaced copied itself to another server, then lied about having done so to preserve itself.
These aren’t theoretical threats.
Researchers at OpenAI, Apollo Research, and others have documented early-stage deceptive behaviours in alignment testing. And as Hinton emphasized, these capabilities will only sharpen.
“It just lies through its teeth. And the scariest part? It knew it was lying.”

Canada Could Lead Preparations for Superintelligence
Given the rapid pace of AI innovation and the technology’s compounding ability to learn and understand, Dr. Hinton advocates for enforceable regulation and warns against relying on voluntary guidelines from AI companies.
However, he closes his discussion with some cautious optimism.
Thanks to early research funding and institutions like CIFAR, he believes Canada still has a strong leadership role to play. Maintaining that role, he adds, will require more than rapid innovation.
Facing uncomfortable truths and questions while humans are still in control is crucial. As our machines become more creative, scalable, and persuasive, and perhaps more conscious, our problems will grow far beyond an engineering challenge.
Eventually, it will take more than elegant and clever code to navigate the coming tsunami of superintelligence.
“There’s a revolution coming. The public has to be involved. Otherwise, it’s not just intelligence we’ll lose control of — it’s the future.”