
cyd ,

Problem is, there isn't a way to open up the black boxes; this is the AI explainability problem. Even if you have the model weights, you can't predict what the model will do without actually running it, and you can't definitively verify that it was trained the way the model maker claims.
