Wielding Math to Enforce AI Safety & Trust
Making safe AI spaces a reality.

STARTER STATS
- ~85% of Canadians want AI to be regulated for ethical and safe use, according to a Leger poll from August 2025
- Key areas of concern: privacy (83%), work disruption (78%), societal over-dependence (83%), and cognitive decline (46%)
We've entered an AI era that brings as many challenges as opportunities. That's where AI governance comes into play, and a core element of governance is trust. As AI systems are entrusted with critical tasks like managing power grids or steering autonomous vehicles, how do we know they can be trusted? Researchers at the University of Waterloo have combined applied mathematics, neural networks, and logic-based verification to build AI controllers that don't just work, but come with provable guarantees of stability and safety.
How it works: One logic-based AI system generates the controller or proof, and another verifies it, producing a layered safety framework. As more systems become autonomous and embedded in infrastructure (transportation, energy, robotics), the lack of formal safety guarantees poses a significant risk. By open-sourcing their toolbox and collaborating with industry, the team is helping shift AI from “black-box” automation toward verifiable, trustworthy automation.
Their approach could help regulators, developers, and users align around shared standards of safety and reliability.
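To make the generate-and-verify idea concrete, here is a minimal, hypothetical sketch, not the Waterloo team's actual toolbox: a "generator" proposes candidate feedback gains for a simple one-dimensional linear system, and an independent "verifier" accepts a candidate only if a Lyapunov-style stability condition holds. The system, the gain range, and the grid-based check are all illustrative assumptions; a real verifier would use formal methods (e.g., SMT solvers or interval arithmetic) rather than sampling.

```python
import numpy as np

# Illustrative sketch (not the published toolbox): generate-and-verify loop
# for the scalar system x_{t+1} = A*x + B*u with candidate controller u = -k*x.
A, B = 1.2, 0.5  # open-loop dynamics; unstable since |A| > 1

def generate_controller(seed):
    """Generator: propose a candidate feedback gain (stand-in for a learned policy)."""
    rng = np.random.default_rng(seed)
    return rng.uniform(0.0, 5.0)

def verify_controller(k, states=np.linspace(-10, 10, 2001)):
    """Verifier: check that V(x) = x^2 strictly decreases along the closed loop
    for every sampled nonzero state (a sampled stand-in for a formal proof)."""
    closed_loop = A - B * k            # x_{t+1} = (A - B*k) * x
    nonzero = states[states != 0.0]
    return np.all((closed_loop * nonzero) ** 2 < nonzero ** 2)

for seed in range(20):
    k = generate_controller(seed)
    if verify_controller(k):
        print(f"accepted gain k = {k:.3f}; closed-loop factor = {A - B*k:.3f}")
        break
else:
    print("no candidate passed verification")
```

The key design point is the separation of roles: the component that proposes a controller never certifies it, so a flawed proposal is caught by the independent check before it is deployed.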
"What these AI controllers and proof assistants are doing is taking over computation-intensive tasks, like deciding how to deploy power in a grid or constructing tedious mathematical proofs, that will be able to free up humans for higher-level decisions.”
— Dr. Jun Liu