
KairuByte (@KairuByte@lemmy.dbzer0.com)

That isn’t how any of this works…

You can’t just assume every AI works exactly the same, especially since “AI” has become such a vague, generalized term these days.

The hallucinations you’re talking about, for one, refer to LLMs losing track of the narrative when they’re required to hold too much “in memory.”

Poisoned data isn’t even something an AI of this sort would realistically encounter unless intentional sabotage took place. It’s a private program training on private data; where does the opportunity for intentionally bad data come from?

And errors don’t necessarily build on errors. These are models that predict 30 seconds into the future using known physics and estimated outcomes. They can literally check their predictions 30 seconds later if the need arises, but honestly, why would they? Just move on to the next calculation from fresh data and estimate the next outcome, and the next, and the next.
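The point about errors not compounding can be sketched in a toy example. This is a minimal, hypothetical illustration (the function names and the constant-velocity "physics" are made up, not from any real control system): because each forecast starts from a freshly measured state rather than from the previous forecast, the error of each 30-second prediction stays bounded instead of accumulating.

```python
def predict(position: float, velocity: float, horizon: float = 30.0) -> float:
    """Toy physics model: constant-velocity extrapolation over the horizon."""
    return position + velocity * horizon

def true_motion(t: float) -> float:
    """The 'real' trajectory, with a small acceleration the model doesn't know about."""
    return 2.0 * t + 0.01 * t * t

def rolling_forecast(steps: int = 5, horizon: float = 30.0) -> list[float]:
    """Make repeated 30-second forecasts, each anchored to fresh measurements.

    Every iteration re-measures the true state before predicting, so a bad
    prediction in one step never feeds into the next one.
    """
    errors = []
    for k in range(steps):
        t = k * horizon
        pos = true_motion(t)                          # fresh measurement
        vel = true_motion(t) - true_motion(t - 1.0)   # measured recent velocity
        predicted = predict(pos, vel, horizon)
        errors.append(abs(predicted - true_motion(t + horizon)))
    return errors

# Each step's error comes only from the model's missing acceleration term;
# it stays constant across steps instead of growing.
print(rolling_forecast())
```

Feeding each prediction back in as the next input (open-loop) is what makes errors snowball; re-anchoring to measured data each cycle is what prevents it.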

On top of all that… this isn’t even dangerous. It’s not like anyone is handing the detonator for a nuke to an AI and saying “push the button whenever you think best.” The worst outcome is “no more power,” which is scary if you run on electricity, but merely frustrating if you’re a human attempting to achieve fusion.
