
OpenStars:

One thing that trips me up is this: even if, at best, someone SUCCEEDS in developing such an AI, even one that can essentially replace humanity (in whatever roles), what would become of us afterwards? Wall-E paints a poignant and, to me at least, extremely realistic portrait of what we would do in that eventuality: sit down and never bother to stand up again. With all of our needs and every whim catered to by a slave force, what use would there even be in doing so?

Star Trek was only one possible future, but how many of us would have the force of will or mind, and then be backed up by enough others capable of enacting such a future, much less building it up from scratch? It is also best to keep in mind how that society was (1) brought back from an extinction-level event that well-nigh destroyed the Earth (i.e., had it been a tad more powerful it would have, so they escaped oblivion by an extremely narrow margin to begin with), followed by (2) meeting external beings who caused humanity to collect itself to face this new external pressure, i.e. they were "saved" by the aliens' presence. Even though they managed to collect themselves and become worthy of it in the end, at the time it was by no means assured that they would survive.

Star Wars, minus the Jedi, seems a much more likely future to my fatalism-tainted mind: one where people are literally slaves to the large, fat, greedy entities who hoard power just b/c they can. Fighting against that takes real effort, which we seem unwilling to expend. Case in point: the only other option to Trump is... Biden, really!? Who has actually managed to impress me, doing far more than I had expected - though only b/c my expectations were set so low to begin with :-).

Some short stories if you are interested:

One is that I was a Reddit mod for a small, niche gaming sub, and I stepped down. I guided the sub at a time when literally nobody else was willing to step up, and as soon as some people did, I stepped back, mostly just training them; then, when one more person agreed, I stepped out entirely. Perhaps it corrupted me, but apparently not too much - maybe b/c it was not "much" power?

Two, I cannot find the article right now b/c of the enshittification of Google, but there are some fascinating studies showing that AIs do all sorts of seemingly crazy things that turn out to be logical/rational behavior once you understand the incentives. One described a maze-running experiment where, once the "step cost" got high enough, the agent learned to take higher & higher risks just to exit the maze ASAP - even if that meant finding the "bad"/"hell" exit rather than the "good"/"heaven" one. Say good = +100 points, bad = -100 points, and the step cost is -10 points, with the goal being to maximize your score: then every 10 extra steps costs as much as the bad-exit penalty itself. So if you take 30 steps to find the good exit, that is -300 + 100 = -200 points, whereas if you take only 5 steps to find the bad exit, that is -50 - 100 = -150, which is a higher score overall. Suicide makes sense, when living is pain and your goal is to minimize that, for someone who has nothing else to live for. I.e., some things seem crazy only when we do not fully understand them.
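To make that arithmetic concrete, here is a tiny sketch using the hypothetical numbers above (these values are from my example, not from the actual study's code):

```python
# Hypothetical reward scheme from the example above: per-step cost plus exit reward.
STEP_COST = -10
GOOD_EXIT = +100
BAD_EXIT = -100

def total_reward(steps: int, exit_reward: int) -> int:
    """Total return for one trajectory through the maze."""
    return steps * STEP_COST + exit_reward

good_path = total_reward(steps=30, exit_reward=GOOD_EXIT)  # -300 + 100 = -200
bad_path = total_reward(steps=5, exit_reward=BAD_EXIT)     # -50 - 100  = -150

print(good_path, bad_path)  # -200 -150: the "bad" exit scores higher
```

With a step cost that harsh, a score-maximizing agent rationally prefers the quick "hell" exit over the distant "heaven" one.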

Three, this video messed me up, seriously. It is entirely SFW; I just mean that the thoughts it espoused blew me away, and I still have no idea how to integrate them into my own personal philosophy, or even whether I should... but the one thing I know for sure is that after watching it, I will never think the same way again. :-)
