Welcome to Incremental Social! Learn more about this project here!
Check out lemmyverse to find more communities to join from here!

teawrecks ,

Any input to the 2nd LLM is a prompt, so if it sees the user input, then it affects the probabilities of the output.

There's no such thing as "training an AI to follow instructions". The output is just a probibalistic function of the input. This is why a jailbreak is always possible, the probability of getting it to output something that was given as input is never 0.

  • All
  • Subscribed
  • Moderated
  • Favorites
  • technology@beehaw.org
  • random
  • incremental_games
  • meta
  • All magazines