
OpenStars (@OpenStars@startrek.website)

I used to think that. Now I think that even if robots (more precisely, a true artificial sentience) were ever to replace humanity, they too could just as easily fall prey to the same effects that plague us, simply because they run up against the same natural laws encoded into the physics of the universe.

One issue I take with what you are saying is that the value judgements depend on what you are measuring the ideal against. From a "survival of the fittest" (or even "survival of whatever happened to survive") standpoint, Genghis Khan is one of the most successful people who ever lived, alongside the "mitochondrial Eve" and the "Y-chromosomal Adam" (yes, those are real biological terms, though the two individuals are thought to be separated by a long stretch of time, and the estimates of when each lived are still debated).

Mathematical game theory shows us that cheaters do prosper, at least at first, before they bring down the entire system around them. Hence there is a "force" that pulls at all of us - even abstract theoretical agents with no instantiation in the real world - to "game the system," and it must be resisted for the good of society overall. But some people (e.g. Putin, Trump, Jeff Bezos) give in to those urges and, instead of lifting themselves up to live in society, drag all of society down to serve them. What Google did to the Android OS is a perfect example: people corrupting an open-source framework, twisting and perverting it into almost a mockery of its former self. For now it is still "free," especially compared to the walled garden of its chief competitor, but that freedom looks to me (from the outside) like a shadow of what was originally intended.
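To make that game-theory point concrete, here is a minimal toy sketch (my own illustration, not from the comment above): a standard Prisoner's Dilemma with the usual hypothetical payoff values, plus a replicator-style population update. Defection out-earns cooperation against any mix of players, so "cheaters" spread - and as they take over, the average payoff for everyone collapses toward the mutual-defection score.

```python
# Toy replicator-dynamics sketch of "cheaters prosper at first, then wreck the system."
# Payoff values are the standard textbook choices, used here purely as an assumption.

PAYOFF = {  # payoff to the row player
    ("C", "C"): 3,  # reward for mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # punishment for mutual defection
}

def expected(strategy: str, coop_share: float) -> float:
    """Expected payoff of a strategy against the current population mix."""
    return coop_share * PAYOFF[(strategy, "C")] + (1 - coop_share) * PAYOFF[(strategy, "D")]

coop_share = 0.99  # start with almost everyone cooperating
for gen in range(25):
    fc, fd = expected("C", coop_share), expected("D", coop_share)
    avg = coop_share * fc + (1 - coop_share) * fd
    print(f"gen {gen:2d}: cooperators {coop_share:6.1%}, average payoff {avg:4.2f}")
    # Replicator-style update: a strategy grows in proportion to its relative fitness
    coop_share = coop_share * fc / avg
```

Running it shows the arc in miniature: defectors steadily gain share while they can still exploit cooperators, and once they dominate, the average payoff sinks from near 3 toward 1.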

So I am giving up on "idealism" and instead trying to be more realistic. I don't know exactly what that means, unfortunately, so I cannot explain it better than this: knowing that people will corrupt things, what will my own personal response to that process be? For example, should I, as George Carlin suggested, just recuse myself from voting entirely? Or (living in the USA as I do) have things changed since then - whereas before the two sides were fairly similar, is it now important to vote not for the side of corruption, but against the side of significantly worse destruction, including destruction of the entire system? (A system which arguably even needs to be torn down, except that if it goes down in that manner, the result is likely to be something far, far worse.)

Anyway, yeah, it is far worse than that, and I find it the height of irony that people, who absolutely refuse to take care of ourselves, are now looking to build robots/AI, which we seem to be hoping will do a better job of that than we do (or won't do). It is the ultimate "daddy, please save me" cry for a superhero or savior - as always, abdicating responsibility to someone else to "like: just fix all the stuff, and junk, ya' know whaddi mean?" And therefore we fear robots (and AI) - as we should, because we already know what we (humans) are willing to do to one another, and thus we fear what they (being "other") might do to us as well. I am saying that it is our own corruption that we fear, mirror-reflected and projected onto them.
