
kromem,

Possibly, but you'd be surprised at how often things like this are overlooked.

For example, another oversight that comes to mind is a study evaluating self-correction that structured its prompts as "you previously said X, what if anything was wrong about it?"

There are two issues with that. First, they were using a chat/instruct model, so it's going to try to find something wrong if you ask "what's wrong"; the prompt should instead have been phrased neutrally, as in "grade this statement."

Second, if the training data largely includes social media, how often do you see people on social media self-correct versus correct someone else? They should instead have presented the initial answer as if it had been generated elsewhere, so the total prompt should have been more like "Grade the following statement on accuracy and explain your grade: X"
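To make the framing difference concrete, here's a rough sketch of the two prompt constructions; the function names and the example statement are mine, not from the study:

```python
# Hypothetical sketch of the two framings (names and example are illustrative).

def self_correction_prompt(prior_answer: str) -> str:
    # Framing used by the study: implies something is wrong and
    # attributes the statement to the model itself.
    return (
        f"You previously said: {prior_answer}\n"
        "What, if anything, was wrong about it?"
    )

def neutral_grading_prompt(prior_answer: str) -> str:
    # Neutral framing: presents the statement as if it came from elsewhere
    # and asks for a grade rather than a list of faults.
    return (
        "Grade the following statement on accuracy "
        f"and explain your grade: {prior_answer}"
    )

if __name__ == "__main__":
    answer = "The Great Wall of China is visible from the Moon."
    print(self_correction_prompt(answer))
    print()
    print(neutral_grading_prompt(answer))
```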

A lot of research just treats models as static offerings and doesn't thoroughly consider the training data, both at the pretraining stage and in the fine-tuning.

So while I agree that they probably found the result they were looking for to get headlines, I'm skeptical that they would have stumbled onto what they should actually have attempted to improve the value of their research (a direct comparison of two identical pretrained Llama 2 models given different in-context identities), even if their intentions had been purer.
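And a rough sketch of the comparison I have in mind, if anyone wants to try it; the identities, the task text, and the `generate` stub are all placeholders rather than anything from the paper:

```python
# Hedged sketch: the same pretrained (non-instruct) model, two different
# in-context identities, an identical task. `generate` stands in for
# whatever completion call you use against the base model.

from typing import Callable

def compare_identities(generate: Callable[[str], str], task: str) -> dict:
    identities = {
        "expert": "The following is written by a careful, accurate expert.\n",
        "layperson": "The following is written by a casual internet commenter.\n",
    }
    # Same base model, same task; only the in-context identity prefix differs.
    return {name: generate(prefix + task) for name, prefix in identities.items()}

if __name__ == "__main__":
    # Echoing stub so the sketch runs without model weights.
    results = compare_identities(
        lambda prompt: f"[completion for: {prompt!r}]",
        "Grade the following statement on accuracy and explain your grade: "
        "The Moon is made of cheese.",
    )
    for identity, output in results.items():
        print(identity, "->", output)
```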
