
ClamDrinker

@ClamDrinker@lemmy.world


ClamDrinker ,

Kings never went away; they just changed form and name to remain accepted in society, as the ones with the crowns ended up on the gallows.

ClamDrinker , (edited )

People are kind of missing the point of the meme. The point is that nuclear is down there along with renewables in safety and efficiency. It lacks the egregious cover-ups alluded to in the original meme, even if legitimate concerns remain today. And given society's ever increasing demand for electricity, we will benefit heavily from a more scalable solution that doesn't require covering, and potentially disrupting, massive amounts of land before operations can be scaled up to meet extraordinary demand. Wind turbines and solar panels don't stop working when we can't use their electricity, so we can't just build as many of them as we like without risking complications outside peak hours; many electrical networks aren't built to handle those loads. A nuclear reactor, on the other hand, can be scaled down to use less fuel and put less strain on the electrical network when it's not needed.

It should also be said that money can't always be spent equally everywhere. And depending on the labor required, there is a limit to how manageable infrastructure remains as it scales. The people who maintain and build solar panels, hydro, wind turbines, and nuclear are not the same people. If we acknowledge that climate change is an existential crisis, we must put our eggs in every basket we can and diversify the energy transition. All four of the safest and most efficient solutions we have should be tapped into. But nuclear is often skipped because of outdated conceptions and fear. It does cost a lot and takes a while to build, but it fits certain shapes in the puzzle that none of the others fit as well.

ClamDrinker ,

Some personal thoughts:
My own country (the Netherlands), despite a very vocal anti-nuclear movement in the 20th century, has now completely flipped: the only parties not in favor of nuclear are the Greens, who at times cite that fear as a reason not to pursue it. To someone who treats climate change as truly existential for a country that lies below projected sea levels, that makes them look unreasonable, as if they aren't taking the issue seriously. We have limited land too, and a housing crisis on top of it. So land usage is a big pain point for renewables, and even where land is unused, it is often so close to civilization that installations affect how people feel about their surroundings, which may keep renewables from getting as far as they could unrestricted. A nuclear reactor takes up a fraction of the space and can be kept relatively out of sight.

All the other parties who lean heavily into combating climate change at least acknowledge nuclear as an option that should be (and is being) explored. Even the more climate-skeptical parties see nuclear as something they could stand behind. Having broad support for certain actions is also important to actually getting things done. Our two new nuclear power plants are expected to be running by 2035, only ten years from now, ahead of our climate goal of being net-zero by 2040.

ClamDrinker ,

You can certainly try to use the power as much as possible, or sell the excess to a country with a deficit. But the problem is that you would still need to invest a lot of money to make sure the grid can handle the excess if you build renewables to cover 100% of grid demand, now and in the future. Centralized fuel sources require far fewer grid changes because power flows from one place and spreads from there, so infrastructure only needs to be improved close to the source. Renewables, as decentralized power sources, require the grid to be strengthened everywhere they are placed, and often that is not practical, both in financial cost and in the engineers it takes to actually do it.

Would it be preferable? Yes. Would it happen before we already need to be fully carbon neutral? Often not.

I'd refer you to my other post about the situation in my country. We have a warehouse the size of a few football fields which stores our most radioactive unusable nuclear fuel, and it still has more than enough space for centuries. The rest of the fuel is simply re-used until it's effectively regular waste. Building two new nuclear reactors here also takes only about 10 years, not 20.

Rather continue with wind and solar and then batteries for the money.

All of these things should happen regardless of nuclear progress. And they do happen. But again, building renewables isn't just about the price.

ClamDrinker ,

Ideas are great, but execution is king, because execution is where most of your creativity actually makes a difference in how the idea is represented. If you have a good idea and a good execution, it's very hard for someone to take that away from you. If you have a good idea but execute it poorly, someone taking that idea and executing it better will leave you in the dust; without the better execution, taking the idea alone wouldn't get them anywhere.

Better execution isn't always fair though; we often start out in life unable to compete for lack of experience, financing, and publicity. But it's basically how the entire entertainment industry works. Everyone shuffles ideas around and tries to execute them better (or differently enough) than the previous time the idea made the rounds.

After finding good ideas, get people hooked on your execution, and they won't be able to get that anywhere else, unless someone comes along and does it even better. With practice, that someone can also be you.

ClamDrinker ,

If you're here because of the AI headline, this is important to read.

We’re looking at how we can use local, on-device AI models -- i.e., more private -- to enhance your browsing experience further. One feature we’re starting with next quarter is AI-generated alt-text for images inserted into PDFs, which makes it more accessible to visually impaired users and people with learning disabilities.

They are implementing AI the way it should be done. Don't let all the shitty companies blind you to the fact that what we call AI has positive sides.

ClamDrinker , (edited )

It will never be solved. Even the greatest hypothetical super intelligence is limited by what it can observe and process. Omniscience doesn't exist in the physical world. Humans hallucinate too, all the time; it's just that our approximations are usually correct, and then we don't call them hallucinations anymore. Realistically, the signals coming from our feet take longer to arrive and process than those from our eyes, so our brain has to predict information to stitch together a coherent experience. It's also why we don't notice our blinks, or the blind spot each of our eyes has.

AI representing a more primitive version of our brains will hallucinate far more, especially because it cannot verify anything in the real world and is limited by the data it has been given, which it has to treat as ultimate truth. The mistake was trying to turn AI into a source of truth.

Hallucinations shouldn't be treated like a bug. They are a feature - just not one the big tech companies wanted.

When humans hallucinate on purpose (and not due to illness), we get imagination and dreams; fuel for fiction, but not for reality.

ClamDrinker ,

I'm not an expert in AI, I will admit. But I'm not a layman either. We're all anonymous on here anyways. Why not leave a comment explaining what you disagree with?

ClamDrinker ,

Hallucinations in AI are fairly well understood as far as I'm aware; they're explained at a high level on the Wikipedia page for the phenomenon.
And I'm honestly not making any objective assessment of the technology itself. I'm making a deduction based on the laws of nature and biological facts about real-life neural networks. (I do say AI is driven by the data it's given, but that's something even a layman might know.)

How to mitigate hallucinations is definitely something the experts are actively discussing, with limited success so far (and I certainly don't have an answer either), but a true fix should be impossible.

I can't exactly say why I'm passionate about it. In part I want people to be informed about what AI is and is not, because knowledge about the technology allows us to make more informed decisions about the place AI takes in our society. But I'm also passionate about human psychology and creativity, and about what we can learn about ourselves from the quirks we see in these technologies.

ClamDrinker , (edited )

I'm not sure where you think I'm giving it too much credit, because as far as I read it we already totally agree lol.
You're right, methods exist to diminish the effect of hallucinations. That's what the scientific method is.
Current AI has no physical body and can't run experiments to verify objective reality. It can't fact-check itself other than by being told what is correct by the humans training it (and humans are fallible), and even then, if it has gaps in what it knows, it will fill them with something probable, which is likely to be bullshit.

My point was just that truly fixing it would mean creating an omniscient being, which cannot exist in our physical world. It will always have to make some assumptions, just like we do.

ClamDrinker , (edited )

Yes, a theoretical future AI able to self-correct would eventually become more powerful than humans, especially if you could give it ways to run magnitudes more self-correcting mechanisms at the same time. But it would still be making ever-so-small assumptions wherever there is a gap in the information it has.

It could be humble enough to admit it doesn't know, but it can still be mistaken and think it has the right answer when it doesn't. It would feel nigh omniscient, but it would never truly be.

A roundtrip around the globe on glass fibre takes hundreds of milliseconds, so even if it has the truth on some matter, there's no guarantee that truth didn't change in the milliseconds it took to become aware of it. True omniscience simply cannot exist, since information (and in turn the truth encoded by that information) propagates at most at the speed of light.
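
A quick back-of-envelope makes the scale concrete. This sketch assumes the usual ~40,000 km circumference and a ~200,000 km/s signal speed in glass (roughly two-thirds of light speed in vacuum); real routes are longer and add switching delays:

```python
# Ideal round trip to the far side of the globe over optical fiber.
EARTH_CIRCUMFERENCE_KM = 40_000
FIBER_SIGNAL_SPEED_KM_S = 200_000  # light slows to ~2/3 c inside glass

one_way_km = EARTH_CIRCUMFERENCE_KM / 2  # distance to the antipode
round_trip_ms = 2 * one_way_km / FIBER_SIGNAL_SPEED_KM_S * 1000
print(f"ideal antipodal round trip: {round_trip_ms:.0f} ms")  # -> 200 ms
```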

a big mistake you are making here is stating that it must be fed information that it knows to be true; this is not inherently true. You can train a model on all of the wrong things to do; as long as it has the capability to understand this, it shouldn't be a problem.

The dataset that encodes all wrong things would be infinite in size and constantly changing. It can theoretically exist, but realistically it never will. And if it's incomplete, the model has to make assumptions at some point based on the incomplete data it has, which opens it up to being wrong, which we would call a hallucination.

ClamDrinker ,

Yes, it would be much better at mitigating hallucinations and would beat all humans at truth accuracy in general. And truths which can easily be individually proven and/or remain unchanged forever could basically be right 100% of the time. But not all truths are that straightforward.

What I mentioned can't really be unlinked from the issue, if you want to solve it completely. Have you ever found out later on that something you told someone else as fact turned out not to be so? Essentially, you 'hallucinated' a truth that never existed, but you were just that confident it was correct to share and spread it. It's how we get myths, popular belief, and folklore.

For those other truths, we simply take as true whatever has reached a likelihood we consider certain. But the ideas and concepts in our minds constantly float around on that scale. And since we cannot really avoid talking to other people (or intelligent agents) to ascertain certain truths, misinterpretations and lies can sneak in and cause us to treat as truth that which is not. Avoiding that would mean having to be pretty much everywhere at once to interpret the information straight from the source, and then how fast you can process it all comes into play. Without making guesses about what's going to happen, you basically can't function in reality.

ClamDrinker ,

The thing is, games have a minimum difficulty to be somewhat generally enjoyable, and the game designers have often built their game around this. The fun is generally in the obstacles providing real resistance that can be overcome by optimizing your strategy. It means that these obstacles need to be mentally picked apart by the player to proceed. They are built like puzzles.

This design philosophy, as anyone who plays these games can tell you, is deeply rewarding if you see it through, because it requires genuine improvement that you can notice and be proud of. Hence there is often a limit to how much easier you can make games like these without losing that: make an obstacle too easy and you forget it before even realizing it was preventing you from doing something.

It's often not as easy as just tweaking numbers. And often these development teams don't have the time to rebalance a game for those lower difficulties, so they just don't.

Honestly, the first wojak could be quite mad too, because making an easy game harder often misses the point as well: the game is just more difficult, but doesn't actually provide that carefully crafted feeling of constant improvement. Instead, some easy games become downright frustrating because obstacles feel "cheap" or "lacking depth" now that you have to spend a lot more time on them.

But making an easy game harder by just tweaking the numbers is definitely easier on the development team, and gives existing players a chance to re-experience the game, which wouldn't happen the other way around. But it's almost certainly not a better option for new players wanting a harder difficulty.

At the end of the day though, often there are ways to get what you want. Either by cheating, modding, or otherwise using 'OP' usables in the game. Do whatever you want to make the game more enjoyable to yourself. But if you make it too easy on yourself you might come out on the other end wondering why other people enjoyed the game so much more than you did.

ClamDrinker ,

It's funny how something like this gets posted every few days and people keep falling for it like it's somehow going to end AI. The people who make these models are acutely aware of how to avoid model collapse.

It's totally fine for AI models to train on AI-generated content that is of high enough quality. Part of the research behind training models is building datasets with a text description matching the content, and filtering out content that is not organic enough (or even specifically including it as a 'bad' example for the AI to avoid). AI can produce material indistinguishable from human work, and it produces material that wasn't originally in the training data. There's no reason that can't be good training data itself.
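
As a toy sketch of that kind of quality gate (the scoring heuristic and threshold here are invented for illustration; real pipelines use trained classifiers, captioning models, and human review):

```python
# Hypothetical synthetic-data filter: keep convincing samples as training
# data, recycle the rest as explicit "bad" examples for the model to avoid.

def quality_score(sample: str) -> float:
    """Stand-in for a learned quality classifier returning a 0..1 score."""
    words = sample.split()
    if not words:
        return 0.0
    # Crude proxy: heavy word repetition is a typical failure of bad generations.
    return len(set(words)) / len(words)

def split_candidates(candidates: list[str], threshold: float = 0.7):
    keep, negatives = [], []
    for sample in candidates:
        (keep if quality_score(sample) >= threshold else negatives).append(sample)
    return keep, negatives

keep, negatives = split_candidates([
    "A quiet harbor town wakes slowly under a pale winter sun.",
    "the the the the the the the the",
])
print(len(keep), len(negatives))  # -> 1 1
```

The shape is what matters: score candidates, keep the convincing ones, and optionally use the rejects as counterexamples rather than throwing them away.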

George Carlin Estate Files Lawsuit Against Group Behind AI-Generated Stand-Up Special: ‘A Casual Theft of a Great American Artist’s Work’ (variety.com)

George Carlin's estate has filed a lawsuit against the creators behind an AI-generated comedy special featuring a recreation of the comedian's voice.

ClamDrinker ,

I agree, and I get that it's a funny way to put it, but in this case they started the video with a massive disclaimer that they were not Carlin and that it was AI. So it's hard to argue they were putting words in his mouth. If anything, it sets a praiseworthy standard for disclosing when AI was involved, considering the hate mob such disclosure attracts.

ClamDrinker , (edited )

While the estate might have a fair case on whether or not this is infringement (courts simply haven't ruled enough on AI to say), I think this is a silly way to characterize the people who made this. If you wanted to turn a profit from a dead person by using AI to copy their likeness, why Carlin? He's beloved for sure, but he's not very 'marketable'. To those who have never seen him before, he could come across, without context, as a grumpy old man making aggressive statements. There are far better dead people to pick if your goal was to make a profit.

Which leads me to believe that he was picked in part because the creators of the video were genuine fans of his work (the video even states so, as far as I remember) and felt they could provide enough originality and creativity. George Carlin is truly a one-of-a-kind comedian whose words and jokes still inspire people today. Due to this video (and, to an extent, the controversy), some people will be reminded of him. Some people will learn about him for the first time. His unique view on things can be extended to modern times, a view I feel we desperately need at times. None of that would be an issue as long as it was made excessively clear that this isn't actually George, that it's an homage. Which these people did. As far as I can see, they could be legally in the wrong but morally in the right. It's unfair to characterize them purely by their usage of AI.

ClamDrinker ,

I mean, fair enough. But what living person titles their show "I'm glad I'm dead"?
Especially since people who know George know he's dead. It's almost The Onion level of satire.
And once the video starts, it immediately starts with a disclaimer that it's not Carlin, but AI.
Nobody would sit through the entire show only to be dumbfounded later that it wasn't actually Carlin risen from the dead.

ClamDrinker ,

You're right, it could lead to a flood of new material that overshadows his old works.
But that would basically require it to be as good as, if not better than, his old works, which I just don't think will happen.
Had nobody batted an eye at this, it would have just sunk into obscurity, as is the fate of many creative works.
Should more shows be made, I think by the third people would just not care anymore. Most haven't even bothered to watch the first, after all.

ClamDrinker , (edited )

I agree that George is one of the best stand-up comedians, but that doesn't change that his material is very much counter-culture. It's made to rub people the wrong way, to get them to think differently about why things are the way they are. That makes it inherently a worse money maker than the material of someone who tries to please all sides with their jokes. I'd like to believe that if he were alive today, he would do a beautiful piece on AI.

On your second point, though, I have to wonder: who made it a headline? Who decided this was worth bringing attention to?
Clearly, the controversy did not come from them. There is nothing controversial about an homage. But it is AI, and that got people talking.
You can be of the opinion that they did it for that reason, but I would argue they simply expected the same lukewarm reception they had always gotten.
After all, people don't often volunteer to be at the center of hate. Even when the association pays off, experiencing that stuff has lasting mental effects on people.

And again, if they wanted to be controversial and stir up as much drama as possible, they could have done so much more.
Just don't disclose that it's AI even though it's obviously AI, or make George do things out of character, like a product endorsement, or a piece about how religion is actually super cool.
All of that would have gotten them 10x the hate and exposure they got now.

But instead, they made something that looks and plays like an homage, with obvious disclosure.
The only milder thing they could have done is find someone whose voice naturally sounds like George's and put him in a costume that looks like George, at which point nobody would have batted an eye, even though the intent would be the same and only the means differ.

ClamDrinker ,

For sure! Deceit should be punished. Ethical AI usage should not go without disclosure, so I think we must be understanding toward people who choose to be open about it, rather than forcing them to hide it to dodge hate.

I like Vernor Vinge’s take on it in one of his short stories where copyrights are lessened to 6 months and companies must quickly develop their new Worlds/Characters before they become public domain.

That's an interesting idea. Although 6 months does sound like an awfully short time to actually develop something more grand. But I do think with fairer copyright limits we could also afford to provide more protections in the early days after a work's release. It's definitely worth discussing such ideas to make copyright better for everyone.

ClamDrinker ,

We can argue their motives all we want (I’m pretty uninterested in it personally), but we aren’t them and we don’t even know what the process was to make it

Yes, that is sort of my point. I'm not sure either, but neither did the person I responded to (in my first comment before yours). And to make assumptions with such negative implications is very unhealthy in my opinion.

and I think that is because the whole thing sure would seem less impressive if they just admitted that they wrote it.

It's the first time I've heard someone suggest they passed off their own work as AI, but it could also be true. Although some consider AI-assisted material to be the same as fully AI-generated. But again, we don't know.

I laughed maybe once, because the whole thing was not very funny in addition to being a (reverse?) hack attempt by them to deliver bits of their own material as something Carlin would say.

I definitely don't think it meets George's level. But it was amusing to me, which is about what I'd expect of an homage.

ClamDrinker ,

Completely true. But we cannot reasonably push the responsibility of the entire internet onto someone when they did their due diligence.

For example, some people post CoD footage to YouTube because it looks cool, and someone else, either mistakenly or maliciously, takes that and recontextualizes it as combat footage from an active warzone to shock people. Then people start reposting that footage with a fake explanation text on top of it, furthering the misinformation cycle. Do we now blame the people sharing their CoD footage for what others did with it? Misinformation and propaganda are something society must work together to combat.

If it really matters, people will be out there warning others that the pictures being posted are fake. In fact, even before AI, that's what happened after a tragedy: people would post images claiming to show what happened, only for them to later be confirmed as being from some other tragedy. Or how some video games get fake leaks because someone rebranded fan-made content as a leak.

Eventually it becomes common knowledge or easy to prove as being fake. Take this picture for instance:
https://lemmy.world/pictrs/image/5c11ddc9-f234-4743-8881-60be66bdc196.jpeg

It's been well documented that the bottom image is fake, and as such anyone can now find out what was covered up. It's up to society to speak up when the damage is too great.

ClamDrinker , (edited )

Healthy or not, my lived experience is that assuming people are motivated by the things people are typically motivated by (e.g. greed, the desire for fame) is more often correct than assuming people have pure motives.

Everyone likes praise to a certain extent, and desiring recognition for what you've made is independent of your intentions otherwise. My personal experience working with talented creative people is that the two are often intertwined. If you can make something that's both fulfilling and economically sustainable, that's what you'll do. You can make something extremely fulfilling, but if it appeals to nobody but yourself, it doesn't pay the bills. I'm not saying it's impossible that they had those motivations, but in my opinion anyone alleged to be malicious must be proven to be so to some degree. I have seen no such proof.

I really understand your second point, but... as with many things, some things require consent and some don't. Making a parody or an homage doesn't (typically) require that consent. It would be nice to get it, but the man is dead, and even his children cannot speak for him other than as legal owners of his estate. I personally would like to believe he wouldn't care one bit, and I would have the same basis as anyone else to defend that, because nobody can ask a dead man for his opinions. It's clear his children do not like it, but unless they have a legal basis for that, their objection can be dismissed as not necessarily being something George would stand behind.

I've watched pretty much every one of his shows, but I haven't seen that documentary. I'll see if I can watch it. But knowing George, he would have many words to exchange on both sides of the debate. The man was very much an advocate for freedom of creativity, but also very much in favor of artist protection. Open source AI has leveled the playing field for people that aren't mega corporations to compete, but has also brought along insecurity and anxiety to creative fields. It's not black and white.

In fact, there is a quote attributed to him which speaks somewhat on this topic. (Although I must admit, the original source is a defunct newspaper and the Wayback Machine didn't crawl the article.)

[On his work appearing on the Internet] It's a conflicted feeling. I'm really a populist, down in the very center of me. I like the power people can accrue for themselves, and I like the idea of user-generated content and taking power from the corporations. The other half of the conflict, though, is that, traditionally speaking, artists are protected from copyright infringement. Fortunately, I don't have to worry about solving this issue. It's someone else's job.

From Las Vegas CityLife, August 9, 2007, so just a little less than a year before his death.

EDIT: Minor clarification

ClamDrinker , (edited )

A complete false equivalence. Just because improper disclaimers exist doesn't mean there aren't legitimate reasons to use them. Impersonation requires intent, and a disclaimer is an explicit way to make clear that no impersonation is being attempted, and to explicitly correct any viewers who might have misunderstood. It's why South Park runs such a text at the start of every episode. It's a rather foolproof way to delegitimize any accusation of impersonation.

ClamDrinker ,

You're right, South Park doesn't need it either. But a disclaimer removes all doubt. The video doesn't need a disclaimer either, but they made one anyway to remove all doubt. And no, they disclaimed any notion that they are George Carlin; a disclaimer admitting to a crime is not what it said, and that much should be obvious.

ClamDrinker , (edited )

There’s another thing here which is that you seem to believe this was actually made in large part by an AI while simultaneously stating the motivations of humans. So which is it?

AI-assisted works are, funnily enough, mostly a human production at this point. If you asked AI to make another George Carlin special for you, it would suck extremely hard. AI requires humans to succeed; it does not succeed at being human. As such, it's a human work at the end of the day. My opinion is that, if we're being truthful, this comedy special would likely be considered AI-assisted rather than fully AI-generated.

You seem really sure that I think this is fully (or largely) AI-generated, but that's never been something I claimed or alluded to believing. I don't believe that. I don't even believe fully AI-generated works are worthy of being called true art. AI-assisted works, on the other hand, I do believe to be art. AI is a tool, and for it to be used for art it requires humans to provide input and make decisions so the result is something people will actually enjoy. And that is clearly what was done here.

The primary beneficiary of all of the AI hype is Microsoft.
Secondary beneficiary is Nvidia. These aren’t tiny companies.

"The primary beneficiaries of art hype are pencil makers, brush makers, canvas makers, and of course, Adobe for making photoshop, Samsung and Wacom for making drawing tablets. Not to mention the art investors selling art from museums and art galleries all over the world for millions. These aren't tiny entities."

See how ridiculous that argument is? If something is popular, people and companies in a prime position to make money off it will try to do so; that is to be expected in our capitalist society. But small artists and small creators benefit the most from the advance of open source AI. Big companies can already pour enough money into any work they create to bring it to the highest standard. A small creator cannot, but they can get far more, and far better, results by using AI in their workflow. And because small creators often put far more heart and soul into their works, it allows them to compete with the giants more easily. A clear win for small creators and artists.

Just to be extra clear: I don't like OpenAI. I don't like Microsoft. I don't like Nvidia, to a certain degree. Open source AI is not their cup of tea. They like proprietary, closed source AI, the kind where only they and the people who pay them get to use the advancements AI has made. That disgusts me. Open source AI is the tool of choice for ethical AI.

ClamDrinker ,

The court might rule in favor of his estate for this reason. But honestly, I do think there are differences between a singer (whose voice becomes an instrument in their song) and a comedian (whose voice is used to communicate the ideas and jokes they want to tell). A different voice could tell the same jokes as Carlin and, delivered with the same care for his emotions and cadence, could effectively create the same feeling we know. A song could literally become a different song if you swap an instrument. But the courts will have to rule.

ClamDrinker ,

I don't disagree with that, but such differences can matter when it comes to ruling if imitation and parody are allowed, and to what extent.

ClamDrinker ,

Well then we agree. Let's leave ridiculous arguments out of it. There are far better arguments to make.

ClamDrinker , (edited )

I mean, you ignored the entire rest of my comment to respond only to a hyperbole meant to illustrate that something is a bad argument. I'm sure they are making money off it, but small creators and artists can make relatively more money off it. You claim that is not 'actually happening', but that is your opinion, how you view things. I talk with artists daily, and they use AI when it's convenient to them, when it saves them work or lets them focus on work they actually like, just like they use any other tool at their disposal.

I know there are some very big-name artists on social media making a fuss about this stuff, but with my point of view in mind, I highly question their motives. Of course it makes sense for someone with a big social media following to rally up their supporters for a payday. I regularly see them tell complete lies to their followers, and of course it works. When you actually talk to artists in real life, you get a far more nuanced response.

ClamDrinker ,

That's a pretty sloppy reason. A nuanced topic is not well suited to be explained in anything but descriptive language. Especially if you care about people's livelihoods and passion. I care about my artist friends, colleagues, and acquaintances. Hence I will support them in securing their endeavors in this changing landscape.

Artists are largely not computer experts and artists using AI are buying Microsoft or Adobe or using freebies and pondering paid upgrades. They are also renting rather than buying because everything’s a subscription service now.

I really don't like this characterization of artists. They are not dumb, nor incapable of learning; technical artists exist too. Installing open source AI is relatively easy, pretty much down to pressing a button, and because it's open source, it's free. The skill goes into using it to its fullest effect, and the artists I know are more than happy to develop their skills.

A far bigger market for AI is for non-artists and scammers to fill up Amazon’s bookstore and the broader Internet full of more trash than it already was.

The existence of bad usage of AI does not invalidate good usage of AI. The internet was already full of bad content before AI; the good stuff is what floats to the top. No sane person is going to pay to read no-name AI-generated trash. But people will read a highly regarded book that just happened to be AI-assisted.

But the whole premise is silly. Did we demonize cars because bank robbers started using them to escape the police? Did we demonize cameras because people could take exact photographic copies of someone else's work? No. We demonized those who misused the tool. AI is no different.

A scammer can generate thousands of garbage images and text without worth, before an artist being assisted by AI can make a single work. Just like a burglar can make more money easily by breaking into someone's house and stealing all their money compared to working a day job for a month. There's a reason these things are illegal and/or unethical. But those are reflections of the people doing this, not the things they use.

ClamDrinker ,

Perhaps. The world can use more kindness when, despite everything, loneliness is at an all-time high. It's not a fix, but maybe it can be a brake on someone's downward spiral.

I'd prefer and love to see someone new match George Carlin's level too, much more than someone trying to become him. I don't think we've quite had a chance to savor the good side of AI yet, but hey, you're entitled to your opinion.
