
FaceDeer (@FaceDeer@kbin.social)

I've been saying this all along. Language is how humans communicate thoughts to each other. If a machine is trained to "fake" communication via language, then at a certain point it may simply be easier for the machine to figure out how to actually think in order to produce convincing output.

We've seen similar signs of "understanding" in the image-generation AIs. There was a paper a few months back showing that when one of these models is asked to generate a picture, one of the first things it does is develop an internal "depth map" representing the three-dimensional form of the thing it's trying to depict. It turns out that it's easier to make pictures of physical objects when you have an understanding of their physical nature.
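For anyone curious how a finding like that gets established: the usual tool is a linear probe, a deliberately simple model trained to read a property (here, depth) out of the network's frozen intermediate activations. If even a linear map can recover depth, the information must already be encoded in the features, because the probe is too weak to have computed it itself. Here's a minimal sketch of that technique using synthetic stand-in data; the array shapes and the baked-in linear relationship are illustrative assumptions, not the actual paper's setup.

```python
# Minimal sketch of linear probing. Real probes are trained on activations
# captured from an intermediate layer of a diffusion model's denoiser,
# paired with depth maps from an off-the-shelf depth estimator. Here we
# fabricate both so the script runs on its own.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Stand-in activations: N spatial positions, each a D-dimensional feature
# vector (shapes chosen for illustration only).
N, D = 10_000, 64
activations = rng.normal(size=(N, D))

# Stand-in ground-truth depth. We bake in a linear relationship plus noise
# so the probe has something real to find.
true_direction = rng.normal(size=D)
depth = activations @ true_direction + 0.1 * rng.normal(size=N)

# The probe: fit a linear map on one split, score it on held-out data.
# A high held-out R^2 means depth is linearly decodable from the features.
probe = LinearRegression().fit(activations[:8000], depth[:8000])
r2 = probe.score(activations[8000:], depth[8000:])
print(f"held-out R^2 of linear depth probe: {r2:.3f}")
```

With real model activations instead of synthetic ones, a strong held-out score is the evidence behind claims like "the model builds an internal depth map."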

I think the reason this gets a lot of pushback is that people don't want to accept the notion that "thinking" may not actually be as hard or as special as we like to believe.
