
@FaceDeer@kbin.social avatar

FaceDeer

@FaceDeer@kbin.social

Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.

Spent many years on Reddit and is now exploring new vistas in social media.

FaceDeer ,

Fair use is context based. There is no simple yes or no answer.

FaceDeer ,

And in many cases it's not. But not in all cases. For example, this sketch is a parody of this scene from The O.C. It uses copyrighted music as background. Parody is fair use.

FaceDeer ,

Cory's original usage of the word gave it a useful and specific meaning. But that has evolved extremely rapidly with popular usage into the word simply meaning "I don't like this thing." Which takes away the usefulness because now it's no longer describing a specific reason for not liking it.

It'd be like if every kind of ailment started being referred to as an "infection." Concussions, sprains, hypothermia, etc., all being passed off as "he got infected." We already have generic terms for that like "he got hurt," and now when someone does get literally infected we've lost the word that would be used to specify that.

Languages evolve, sure. But that doesn't mean it's always in a good direction. In this specific case evolution is enshittifying the language and that's worth a little (admittedly futile) push-back.

FaceDeer ,

And, just like enshittification, the term is being thrown about with such wild abandon that it barely means anything any more. Most of the time it seems to me that "Embrace, Extend, Extinguish!" translates to "thing I like got popular and now may be used by thing I don't like."

FaceDeer ,

Not necessarily, but in this particular case it seems bad to me. We're losing a specialized term for something that IMO warrants having one.

FaceDeer ,

That's what EEE used to mean, sure. Now it also means "a company I don't like is using a protocol that I do like." That dilution of the original meaning is unfortunate, IMO.

FaceDeer ,

Vote counting is an algorithm. I think a lot of people want a unicorn and are appalled when someone offers them a magical horse with a horn because it's not what they wanted.
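To make the point concrete: even the simplest plurality tally is already an algorithm. A minimal sketch in Python (the function name and ballot labels are made up for illustration, not any platform's actual code):

```python
from collections import Counter

def tally(votes):
    """Return the plurality winner of a list of ballots."""
    counts = Counter(votes)
    # most_common(1) gives [(candidate, count)] for the top entry
    return counts.most_common(1)[0][0]

print(tally(["up", "up", "down", "up"]))  # prints "up"
```

Any ranking or feed-sorting scheme is just a more elaborate version of the same thing: a deterministic rule applied to inputs.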

Google demos new Lumiere text to video engine. Results are a huge leap forward from previous engines. (youtu.be)

Google’s new video generation AI model Lumiere uses a new diffusion model called Space-Time-U-Net, or STUNet, that figures out where things are in a video (space) and how they simultaneously move and change (time). Ars Technica reports this method lets Lumiere create the video in one process instead of putting smaller still...
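The STUNet architecture itself isn't public, but the "one process over space and time" idea the article describes can be illustrated with a toy operation: treat a clip as a single (time, height, width) volume and downsample it jointly, instead of shrinking each frame independently. This numpy sketch is purely illustrative and is not Lumiere's actual code:

```python
import numpy as np

def spacetime_downsample(video, factor=2):
    """Average non-overlapping factor^3 blocks of a (T, H, W) clip.

    A per-frame model would only pool over H and W; pooling over T as
    well means every output value sees temporal context too, which is
    the gist of a 'space-time' U-Net stage."""
    t, h, w = (d // factor for d in video.shape)
    v = video[:t * factor, :h * factor, :w * factor]
    return v.reshape(t, factor, h, factor, w, factor).mean(axis=(1, 3, 5))

clip = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)
small = spacetime_downsample(clip)
print(small.shape)  # (2, 2, 2): halved in time as well as in space
```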

FaceDeer ,

It's still driving the state of the art forward, which will result in models that will be used by the public.

FaceDeer ,

Indeed. Often the hardest part of an invention is the discovery that a thing is actually possible. Even if nobody knows how it was done they can now justify throwing resources into figuring it out and know what results to keep an eye out for.

FaceDeer ,

It's so annoying how suddenly everyone's so convinced that "AI" is some highly specific thing that hasn't been accomplished yet. Artificial intelligence is an extremely broad subject of computer science and things that fit the description have been around for decades. The journal Artificial Intelligence was first published in 1970, 54 years ago.

We've got something that's passing the frickin' Turing test now, and all of a sudden the term "artificial intelligence" is too good for that? Bah.

FaceDeer ,

X, apparently.

FaceDeer ,

Are you going to somehow reach into my personal computer and remove the software and models from it?

FaceDeer ,

Ah, it was the third option, ignorance.

FaceDeer ,

If it's horrible and it's also "masquerading" as human art, what does that say about human art?

FaceDeer ,

No, I'm just pointing out the common contradiction I see in threads like this, where people argue that AI is both a big threat to "traditional" artists and also that AI is terrible compared to "traditional" artists. It can't really be both.

FaceDeer ,

Break it down into chunks and assemble it like Lego.

FaceDeer ,

He put quotes around the word "art", which gives me the opposite impression.

FaceDeer ,

And yet there's still plenty of traditional restaurants.

Fast food provides a new option. It hasn't destroyed the old. And "terrible" is, once again, in the eye of the beholder - some people like it just fine.

FaceDeer ,

Unhealthy things should be forbidden? Even if they were, this is drifting off of the subject of AI art.

FaceDeer ,

Things that are bad for society should be suppressed and things which are good for society should be promoted. That would seem to be the point of a society.

Great, now we just need to establish whether AI art is "bad for society", and if it is then whether the effects of attempting to ban it would be worse for society.

Further, I notice a pattern in your replies of bringing up a metaphor, then rejecting the very metaphor as off topic or irrelevant when it is engaged to its logical conclusion.

What metaphors did I bring up? You're the one who brought fast food into this. I don't see any other metaphors in play.

FaceDeer ,

That seems fairly evident

Hardly. There wouldn't be much debate about it if it was, would there?

You were fine engaging with fast food until I pointed out that it, like AI "art", was terrible. Only then did you deride the metaphor as off topic.

Alright, in future I will try to remember to immediately reject any metaphors you bring into play rather than attempt to engage with them.

FaceDeer ,

Ethereum switched to proof-of-stake a year and a half ago; it no longer has a significant environmental impact.

Oh wait, this is an analogy, isn't it?

FaceDeer ,

No, just pointing out who's in the "loud but wrong" camp on that one. If ecological concerns are why you think crypto is bad, well, that's not clear cut any more.

You want to keep going with this analogy you brought up, then?

FaceDeer ,

Is AI art literally violent, or is this another analogy?

FaceDeer ,

All analogies eventually fail when you dig into them far enough, by nature of what an analogy is. That is, an analogy is not exactly identical to the thing being analogized. If you want to be able to use analogies but refuse to acknowledge that they eventually lose relevance when you stretch them too far then you're simply not amenable to reason.

And then you go and explicitly beg the very question under debate with an "of course I'm right." No, AI art isn't a "scam," whatever you mean by that.

FaceDeer ,

You realize how a word like that can have ambiguous meanings, yes?

FaceDeer ,

If you moved to Mars and are upset because of the Martians there, then you're the problem.

FaceDeer ,

The damned cat keeps scratching at the seal, though.

FaceDeer ,

I've been saying this all along. Language is how humans communicate thoughts to each other. If a machine is trained to "fake" communication via language then at a certain point it may simply be easier for the machine to figure out how to actually think in order to produce convincing output.

We've seen similar signs of "understanding" in the image-generation AIs; there was a paper a few months back about how when one of these AIs is asked to generate a picture the first thing it does is develop an internal "depth map" showing the three-dimensional form of the thing it's trying to make a picture of. Because it turns out that it's easier to make pictures of physical objects when you have an understanding of their physical nature.

I think the reason this gets a lot of pushback is that people don't want to accept the notion that "thinking" may not actually be as hard or as special as we like to believe.

FaceDeer ,

I have a theory... so are you and I.

FaceDeer ,

No matter what you call it, an LLM will always produce the same output with the same input if it is in the same state.

How do you know a human wouldn't do the same? We lack the ability to perform the experiment.

An LLM will never say “I don’t know” unless it’s been trained to say “I don’t know”

Also a very human behaviour, in my experience.

FaceDeer ,

I did some playing around with ChatGPT's understanding of jokes a while back and I found that it actually did best on understanding puns, which IMO isn't surprising since it's a large language model and puns are deeply rooted in language and wordplay. It didn't do so well at jokes based on other things but it still sometimes managed to figure them out too.

I remember discussing the subject in a Reddit thread and there was a commenter who was super enthused by the notion of an AI that understood humour because he himself was autistic and never "got" any jokes. He wanted an AI companion that would let him at least know when a joke was being said, so he wouldn't get confused and flustered. I had to warn him that ChatGPT wasn't reliable for that yet, but still, it did better than he did and he was fully human.

FaceDeer ,

I knew you'd say that.

FaceDeer ,

I'd take a step farther back and say the argument hinges on whether "consciousness" is even really a thing, or if we're "faking" it to each other and to ourselves as well. We still don't have a particularly good way of measuring human consciousness, let alone determining whether AIs have it too.

FaceDeer ,

Or stop whinging about how the hardware isn't the perfect platonic ideal that you imagined and use it when it's good enough.

Seriously, what's the big deal about a battery pack?

FaceDeer ,

So complain about that, the thing that is actually a problem for you.

FaceDeer ,

It also turns on and off outside of any human control.

FaceDeer ,

There's also koboldcpp, which is fairly newbie friendly.

FaceDeer ,

This is the sort of thing that I like to send to people who assure me that "all AI generated art looks wrong" or whatever.

No, the AI generated art that looks wrong is the only AI generated art that you notice. The rest slips by.

FaceDeer ,

To be fair, NASA flubbed plenty of Moon landers too.

FaceDeer ,

It was a very Kerbal landing technique they were attempting; got to respect them for trying new things, even on their first attempt at a lander.

Last I heard, speculation was that the solar panels were pointing to the west and so it might "wake up" again later in the Lunar day when the Sun gets past zenith. They landed in Lunar morning to maximize the usable duration of sunlight, so right now the panels would be pointed directly away from it.

FaceDeer ,

In this case, the landing legs were on the "side" of the probe. It was supposed to come down to a halt hovering right above the surface and then flop over onto them.

FaceDeer ,

I feel like a pretty big winner too. Meta has been quite generous with releasing AI-related code and models under open licenses, I wouldn't be running LLMs locally on my computer without the stuff they've been putting out. And I didn't have to pay a penny to them for it.

FaceDeer ,

Meta is the source of most of the open source LLM AI scene. They're contributing tons to the field and I wish them well at it.

FaceDeer ,

Six months from now: "damn, we're way behind Meta on AI. We should have spent billions six months ago, it's going to cost way more to catch up."

FaceDeer ,

Which will be solved by them spending it.
