Muehe,

“Inclusive models” would need to be larger.

[citation needed]

To my understanding the problem is that models reproduce the biases in their training material, not model size. Alignment is currently a manual process applied after the initial unsupervised learning phase, often done by click-workers (Reinforcement Learning from Human Feedback, RLHF), and aimed at coaxing the model towards more "politically correct" outputs. But by that point the damage is already done: the bias is encoded in the model weights and will resurface in the outputs, either at random or if you "jailbreak" the model enough.

In the context of the OP, if your training material contains a high volume of sexualised depictions of Asian women, the model will reproduce that in its outputs, which is also the argument the article makes. So what you need for more inclusive models is essentially a de-biased training set designed with that specific purpose in mind.
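To make the point concrete, here's a minimal toy sketch (plain Python, not an actual language model, and the corpus is purely hypothetical): a frequency "model" trained on a skewed corpus reproduces that skew when sampled, and a post-hoc output penalty (a crude stand-in for RLHF-style alignment) only masks it at sampling time, since the learned frequencies themselves stay biased.

```python
import random
from collections import Counter

# Hypothetical skewed training corpus: association A appears far more often.
corpus = ["association_A"] * 90 + ["association_B"] * 10

counts = Counter(corpus)                                   # "training": frequency counts
total = sum(counts.values())
weights = {tok: c / total for tok, c in counts.items()}    # the learned "model weights"

def sample(weights, penalty=None, n=1000):
    """Sample tokens; 'penalty' down-weights a token only at output time (post hoc)."""
    adjusted = dict(weights)
    if penalty:
        tok, factor = penalty
        adjusted[tok] *= factor
    toks, probs = zip(*adjusted.items())
    return Counter(random.choices(toks, weights=probs, k=n))

print("raw model:        ", sample(weights))                                   # ~90/10 split
print("with 'alignment': ", sample(weights, penalty=("association_A", 0.1)))   # skew suppressed
print("penalty bypassed: ", sample(weights))                                   # skew is still there
```

Obviously real alignment is far more involved than a sampling penalty, but the structural point is the same: if the bias lives in the weights, filtering the outputs doesn't remove it.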

I'm glad to be corrected here, especially if you have any sources to look at.
