Uncovering GPU Security Flaws That Could Cripple AI Models
- News

- Sep 2
Hardware hacks hit artificial intelligence.

Computer scientists at the University of Toronto have shown that Rowhammer attacks — once thought to target only CPU memory — can also compromise GPUs, the hardware powering most AI systems.
Their proof-of-concept “GPUHammer” exploit caused model accuracy to collapse from 80% to just 0.1%, a failure researchers liken to “catastrophic brain damage.”
The study, accepted to the USENIX Security Symposium 2025, warns that GPU users are most at risk in shared cloud environments, where an attacker renting time on the same hardware could tamper with another user's AI workloads. Enabling error-correcting code (ECC) memory can mitigate the threat, but it slows machine learning performance by up to 10% and may not stop more advanced attacks.
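Rowhammer is a physical DRAM attack and cannot be reproduced in pure software, but a small sketch can illustrate why a single flipped bit is so damaging to a neural network. The `flip_bit` helper and the example weight below are illustrative assumptions, not code from the paper: flipping the top exponent bit of a float32 weight inflates it by dozens of orders of magnitude, the kind of corruption behind the reported accuracy collapse.

```python
# Illustrative sketch (not the researchers' exploit): show how one bit flip
# in a float32 model weight can blow its value up catastrophically.
import struct

def flip_bit(value: float, bit: int) -> float:
    """Return `value` with bit `bit` (0 = LSB) of its float32 encoding flipped."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

weight = 0.125                    # a typical small neural-network weight
corrupted = flip_bit(weight, 30)  # bit 30 = most-significant exponent bit

print(weight, "->", corrupted)    # 0.125 -> 4.2535295865117308e+37
```

One such flip in a critical weight is enough to swamp every downstream activation, which is why the researchers compare the effect to brain damage rather than gradual degradation.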
“Traditionally, security has been thought of at the software layer, but we’re increasingly seeing physical effects at the hardware layer that can be leveraged as vulnerabilities.”
— Dr. Gururaj Saileshwar, Assistant Professor