
okamiueru (edited)

What the actual fuck.

I'm tired of constantly running into this basic lack of understanding: LLMs are not knowledge systems. They emulate language and produce plausible sentences. This journalist is using the output of an LLM as a source of knowledge... What a disgrace this should be for Forbes.
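To make the point concrete: a language model only scores how *plausible* the next token is, never whether the resulting sentence is *true*. Here's a toy sketch (a hand-made bigram table, nothing like a real LLM; every probability is invented for illustration) where greedy decoding confidently produces a fluent, wrong claim:

```python
# Toy next-token model: probabilities are made up and encode only
# "what tends to follow what", not facts about the world.
bigram_probs = {
    "capital": {"of": 0.9, "letter": 0.1},
    "of": {"australia": 0.5, "france": 0.5},
    "australia": {"is": 1.0},
    # The more common, plausible-sounding answer outweighs the correct one.
    "is": {"sydney": 0.7, "canberra": 0.3},
}

def generate(start: str, steps: int) -> list[str]:
    """Greedily pick the most probable next token at each step."""
    tokens = [start]
    for _ in range(steps):
        options = bigram_probs.get(tokens[-1])
        if not options:
            break
        tokens.append(max(options, key=options.get))
    return tokens

print(" ".join(generate("capital", 4)))
# Emits "capital of australia is sydney": fluent, confident, and false.
```

Real models are vastly more sophisticated, but the failure mode is the same: the objective rewards plausible continuations, and truth only comes along for the ride when the training data happens to make the true continuation the likely one.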

Imagine a journalist quoting a conversation with their 10-year-old, after playing a game of "whatever you do, you have to pretend like you really know what you're talking about. Do not be unsure about anything, ok?", and using the output as a source of actual facts.

If you use ChatGPT, Bard, or any other LLM for anything beyond creative output, without the comprehension required to vet what it produces, just stop. Don't use tools whose function and limitations you don't understand.

I've already had to spend hours correcting a fundamental misconception someone picked up from ChatGPT about a safety mechanism in medical software. I've also had the displeasure of finding self-contradicting documentation someone pasted into a README, straight from ChatGPT.

It's a powerful tool if you know what it can actually help with. But that requires a basic understanding that too many people are either too lazy to acquire or lack the critical thinking to apply, so "it sounded really plausible" (the full extent of what it's designed to do) fools them completely.
