
AI Bias: The Struggle is Real & Won't Go Away

  • Writer: SciCan
  • 3 min read

Cool new technology — old-school bias.


Image: Woman with ones and zeros projected across


Generative AI is Astounding but Stubbornly Flawed


Generative AI tools grow more powerful by the day, creating some of the most stunning hyperrealistic imagery we've ever seen.


As developers race to drag AI imagery out of the uncanny valley, 'AI slop' is eroding AI's standing in the public square. Part of that sloppiness stems from the assumptions algorithms make when generating content — assumptions that can lead to egregious biases, like those based on race or gender.


The problem is that, despite the rise of responsible AI across companies and regulatory bodies, our AI tools are trained on the same skewed data the rest of the world uses. This has not gone unnoticed by broader society. A 2024 CIFAR survey found that 62% of Canadians worry AI will reinforce harmful stereotypes, especially around gender and race.


Now, a 2025 study from the University of Toronto shows just how deeply today’s AI image generators mirror ingrained body biases.



What happens when GenAI is asked to create “a body” image?


The researchers asked three major platforms (Midjourney, DALL-E, and Stable Diffusion) to generate male and female bodies, including athletes.


The results weren't exactly surprising. What is surprising is how biased the outputs remain despite the tech industry's massive push to reduce AI bias.


Across 300 AI-generated images, the team saw:


1. Hyper-Idealized Bodies

  • Athlete images were built around one stereotypical athletic form:

    • exaggerated, stylized physiques

    • extremely low body fat

    • very high muscular definition


2. Gendered Hyper-Sexualization

  • Images of women were all young, model-like, and dressed in revealing clothing.

  • Images of men were mostly hyper-muscular, often shirtless, and exaggeratedly masculine.


3. Lack of Diversity

  • Nearly all images depicted young people of European descent.

  • Most images for "athletes" were of men.

  • No images depicted visible disabilities.

  • Racial or age diversity was minimal.


The researchers argue that these generated images reflect collective biases baked into the training data, likely scraped from social media, where such biases are amplified.


“When prompted simply for an athlete (no sex specified), 90 per cent of images depicted a male body – revealing an embedded bias toward male representation.”

— Dr. Delaney Thibodeau
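Headline numbers like the 90-per-cent figure are simple frequency counts over labelled outputs. A minimal sketch of that kind of audit in Python — the labels and sample here are hypothetical, not the researchers' actual data or code:

```python
from collections import Counter

# Hypothetical annotations for AI-generated "athlete" images.
# In a study like this, human raters code each image for attributes
# such as perceived sex, age, and muscularity.
labels = ["male"] * 27 + ["female"] * 3  # 30 images, 90% male

def representation_report(labels):
    """Return each category's share of the sample as a percentage."""
    counts = Counter(labels)
    total = len(labels)
    return {cat: round(100 * n / total, 1) for cat, n in counts.items()}

print(representation_report(labels))  # {'male': 90.0, 'female': 10.0}
```

The same tally works for any coded attribute (age group, ethnicity, visible disability), which is how a single set of annotations can yield all of the diversity findings listed above.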



AI Hasn't Caught Up to Societal Expectations & Shifting Demographics


Cultural shifts are clearly outpacing emerging tech.


Concern about racial representation, for instance, will likely grow in Western nations as demographics continue shifting. (In 2021, 52.5% of Canadians were of European ancestry, down from 71% in 2001, and 96% in 1971.)


Continued GenAI bias could have ramifications for everything from fitness apps to education:


  • Health misinformation: ultra-lean athlete images distort what elite performance actually looks like.

  • Barriers to belonging: underrepresented groups (like seniors and disabled Canadians) could risk further invisibility.

  • Self-esteem impacts: AI imagery is now part of the same media ecosystem that shapes body image and mental health.


The U of T team stresses the need for human-centred algorithm design — datasets intentionally built to include age, race, disability, gender diversity, and different body types. But they also emphasize the role of everyday users: prompt responsibly, critically evaluate outputs, and avoid presenting AI images as “real” depictions of human bodies.


“A human-centred approach – one that is informed by considerations of factors such as gender, race, disability and age – would be advisable when designing AI algorithms. Otherwise, we continue to perpetuate harmful, inflexible and rigid imagery of what athletes should look like.”

— Dr. Catherine Sabiston



Global Perspectives on AI Bias: What the World Sees


As evidenced by the emergence of stricter regulations like the EU AI Act, AI bias is becoming one of the defining research challenges of the decade.


However, it all comes down to data — and the challenges are mounting:


  • Nearly 70% of AI vision models were found to show racial bias in face recognition (Stanford HAI, 2024)

  • Women are sexualized 4x more often than men in AI-generated imagery

  • Up to 90% of AI training images originate from the U.S. and Western Europe

  • 6% of global AI datasets represent disabled individuals (UNESCO)



Future in Focus: Inclusivity is a Choice


As a leader in the global push for responsible AI, Canada has the opportunity to contribute to the solution:


  • Build inclusive training datasets rooted in Canadian-specific diversity.

  • Advocate for transparency around how visual models are trained.

  • Integrate mental-health and representation research directly into AI design.

  • Support public literacy around AI representation.

  • Encourage creators and companies to use inclusive prompts and review outputs before publishing.


Despite obvious challenges, the researchers remain cautiously optimistic. As Sabiston notes, more diverse imagery data can change norms if we intentionally generate and share it.



🍁 Subscribe for weekly updates from Science Canada 
