
autotldr Bot ,

This is the best summary I could come up with:


Mira Murati, OpenAI's longtime chief technology officer, sat down with The Wall Street Journal's Joanna Stern this week to discuss Sora, the company's forthcoming video-generating AI.

It's a bad look all around for OpenAI, which has drawn wide controversy — not to mention multiple copyright lawsuits, including one from The New York Times — for its data-scraping practices.

After the interview, Murati reportedly confirmed to the WSJ that Shutterstock videos were indeed included in Sora's training set.

But when you consider the vastness of video content across the web, any clips available to OpenAI through Shutterstock are likely only a small drop in the Sora training data pond.

Others, meanwhile, jumped to Murati's defense, arguing that if you've ever published anything to the internet, you should be perfectly fine with AI companies gobbling it up.

Whether Murati was keeping things close to the vest to avoid more copyright litigation or simply just didn't know the answer, people have good reason to wonder where AI data — be it "publicly available and licensed" or not — is coming from.


The original article contains 667 words, the summary contains 178 words. Saved 73%. I'm a bot and I'm open source!
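For the curious: bots like this typically use extractive summarization. They score the article's own sentences and return the top few verbatim, which is why the summary is built entirely from copied sentences (a point that comes up below). Here is a minimal sketch of the idea; this is hypothetical and not the bot's actual code:

```python
# Toy extractive summarizer (hypothetical; not autotldr's actual code).
# It selects high-scoring sentences and returns them verbatim, so the
# output is 100% copied material by construction.
import re
from collections import Counter

def summarize(article: str, num_sentences: int = 5) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", article.strip())
    words = re.findall(r"[a-z']+", article.lower())
    freq = Counter(words)  # crude importance signal: raw word frequency

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    # Rank sentences, then emit the winners in their original order.
    top = set(sorted(sentences, key=score, reverse=True)[:num_sentences])
    return " ".join(s for s in sentences if s in top)

article = open("article.txt").read()  # placeholder input file
summary = summarize(article)
print(f"Saved {100 - 100 * len(summary) // len(article)}%")
```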

A_Very_Big_Fan ,

Funny how we have all this pissing and moaning about stealing, yet nobody ever complains about this bot actually lifting entire articles and spitting them back out without ads or fluff. I guess it's different when you find it useful, huh?

I like the bot, but I mean y'all wanna talk about copyright violations? The argument against this bot is a hell of a lot more solid than just using data for training.

Guntrigger ,

Is this bot a closed system which is being used for profit? No. You know exactly what its source is (the single article it is condensing), and it even has a handy link about how it is open source at the end of every single post.

A_Very_Big_Fan ,

It copied all of its text from the article, and it allows me to get all the information from it I want without providing that publisher with traffic or ad revenue. That's not fair use.

I do like the bot, and personally I'd rather it stay, but no matter how you look at it this isn't "fair use" of the article.

Guntrigger ,

Interesting take. In all of the defences of LLMs using copyrighted material it's very often highlighted that "fair use" allows exactly such summaries of larger texts.

In reality, "fair use" is ruled on a case by case basis, so it's impossible to judge whether something is or not without it going to court.

A_Very_Big_Fan ,

We're not making legislation here, so we don't have that level of burden of proof. But either way, when it comes to factors of fair use that every authority on the matter will list, it violates almost all of them.

It's non-commercial, and it's using facts rather than using a more creative work, so it's got that going for it... But it's

  • composed of 100% copied material

  • not transformative

  • substituting for the original work

  • using officially published work

  • specifically copying the "heart" of the work

  • bypassing all of the ads and impacting their traffic/metrics, so it has a financial impact on them

It's pretty obvious that there is no argument here. The factors that are violated the hardest and most indisputably are the ones that most authorities on the matter (including the one I linked) agree are the most important.

Fisk400 ,

They know what they fed the thing. Not backing up their own training data would be insane. They are not insane, just thieves.

echodot ,

Everyone says this, but the truth is copyright law has been unfit for purpose for well over 30 years now. When the laws were written, no one expected something like the internet to ever come along, and they certainly didn't expect something like AI. We can't just keep applying the same old copyright laws to new situations when they already don't work.

I'm sure they did illegally obtain the work, but is that necessarily a bad thing? They're not actually making that content available to anyone. If I pirate a movie and then only I watch it, I don't think anyone would really think I should be arrested for that, so why is it unacceptable for them but fine for me?

oKtosiTe ,
@oKtosiTe@lemmy.world avatar

if I pirate a movie and then only I watch it, I don't think anyone would really think I should be arrested for that

There are definitely people out there that think you should be arrested for that.

echodot ,

Even the police are unsure if it's actually a crime though. Crimes require someone to lose something, and no one can point to a lost product, so it's difficult to really quantify.

And it's not even technically breach of copyright since you're not selling it.

exanime ,

But they ARE selling it... Every answer ChatGPT makes came from possibly stolen material

HaywardT ,

Isn't that true of every opinion you have? All the knowledge you have is based on the works of others that came before you.

exanime ,

Not until I bill you for it.

Also, no, there is such a thing as an original thought or opinion... even if it's informed by other knowledge.

There is a difference between reinterpreting other knowledge and just Frankensteining multiple works together.

HaywardT ,

I don't know enough about LLMs, but neural networks are capable of original thought. I suspect LLMs are too, because of their relationship to neural networks.

confusedbytheBasics ,

You're using the word 'stolen', which doesn't fit. It would be accurate to say 'every answer comes from possibly unlicensed material'.

exanime ,

Yeap, the real term (I think) would be copyright infringement

Guntrigger ,

Allegedly possibly maybe accidentally whoopsie not quite licensed fully material.

rottingleaf ,

That is a bad thing if they want to be exempt from the law because they are doing a big, very important thing, and we shouldn't allow that.

The copyright laws are shit, but applying them selectively is orders of magnitude worse.

GiveMemes ,

Ok, but training an AI is not equivalent to watching a movie. It's more like putting a game on one of those 300-games-in-one DS cartridges back in the day.

Gabu ,

The problem with that being?

GiveMemes ,

Obviously, it's illegal to sell a product that uses copyrighted material you don't have the rights to. This AI is not open source; it's a for-profit system.

A_Very_Big_Fan ,

It doesn't, though. You could have easily checked yourself, but I guess I'll do your research for you.

GiveMemes ,

It does though. You could have easily checked for yourself, but I guess I'll do your research for you.

https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html

A_Very_Big_Fan ,

That article doesn't even claim it's distributing copyrighted material.

If that qualifies as distributing stolen copyrighted material, then this is stealing and distributing the "you shall not pass" LoTR scene. Which, again, ChatGPT won't even do

GiveMemes ,

Sorry, I know reading the whole article is hard:

The complaint cites several examples when a chatbot provided users with near-verbatim excerpts from Times articles that would otherwise require a paid subscription to view.

A_Very_Big_Fan ,

Yeah lmao after like 20 paragraphs of nothing, it wasn't hard to believe you didn't know what you were talking about. But I looked at the complaint itself out of curiosity, and it's flimsy and misleading.

The first issue is that 100% of the allegedly paywalled text from all 4 articles mentioned in the complaint can be read by non-paying customers for free, outside of the paywall. You can't read the whole article, but you can get far enough to read all 4 quotes mentioned in the complaint yourself. The links to each article are in the complaint if you don't believe me. They have nothing to show they bypassed a paywall or that it was trained on unlicensed content.

The second issue is the third exhibit claims it will bypass paywalls when asked. This is demonstrably false because for one, the article they asked it for isn't paywalled, and for two, using their exact prompts word for word doesn't work if you try it yourself.

Two of the four exhibits don't even have screenshots, so there's no evidence it happened in the first place, but more importantly they don't (and apparently won't when asked) disclose what lengths they had to go to in order to get that output. For all we know they gave it 90% of the words and told it to fill in the gaps.

HaywardT ,

I don't think that is true. You aren't reselling the movies. It is more like watching the movies, then writing a recap or critique of them. Do you owe the copyright holder for doing that?

exanime ,

Because the actual comparison is that you stole ALL movies, started your own Netflix with them and are lining up to literally make billions by taking the jobs of millions of people, including those you stole from

HaywardT ,

I would say it is closer to watching all the movies, regardless of how you got them, then teaching a film class at UCLA.

A_Very_Big_Fan ,

If I paint a melty clock hanging off of a table, how have I stolen from Salvador Dali? What did I "steal" from Tolkien when I drew this?

you stole ALL movies, started your own Netflix with them

The model in question can't even try to distribute copyrighted material. You could have easily checked for yourself, but once again I find myself having to do the footwork for you guys.

exanime ,

If you sell your melty clock, yes, it's not "stealing" but you are violating copyright; that's how it works.

The "model in question" is a bit of a prototype. I thought it was clear we are talking about where these models are going... Maybe you'd get it if you came down off your high horse.

A_Very_Big_Fan ,

Dali doesn't own the concept of a melting clock. If I include a melting clock in my own work, as long as it's not his melting clock with all the other elements of his painting, it's fair use.

GPT hasn't been a prototype since before 2018, and the copyright restrictions are only getting tighter every time it's updated so idk what you're on about.

A_Very_Big_Fan ,

if I pirate a movie and then only I watch it, I don't think anyone would really think I should be arrested for that, so why is it unacceptable for them but fine for me?

Because it's more analogous to watching a video broadcast outdoors in public, or looking at a mural someone painted on a wall, and letting it inform your creative works going forward. Not even recording it, just looking at it.

As far as we know, they never pirated anything. What we do know is it was trained on data that literally anybody can go out and look at themselves and have it inform their own work. If they're out here torrenting a bunch of movies they don't own or aren't licensing, then the argument against them has merit. But until then, I think all of this is a bunch of AI hysteria over some shit humans have been doing since the first human created a thing.

StarPupil ,

An AI (in its current form) isn't a person drawing inspiration from the world around it, it's a program made by people with inputs chosen by those people. If those people didn't ask permission to use other people's licensed work for their product, then they are plagiarising that work, and they should be subject to the same penalties that, for example, a game company using stolen art in their game should face. An AI doesn't become inspired, it copies existing things to predict what it thinks its user wants to see. If we produce a real thinking AI at some point in the future, one with self determination and whatnot, the story will be different, but for now it isn't.

A_Very_Big_Fan ,

What is web scraping if not gathering information from around the world? As long as you're not distributing copyrighted content (and the models in question here don't, btw), then fair use is at play. I'm not plagiarizing the news by reading it or by talking about what I learned, but I would be if I just copy/pasted my response from the article.

Reading publicly available data isn't a copyright violation, and it certainly isn't a violation of fair use. If it were, then you just plagiarized my comment by reading it before you responded.
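To put "reading publicly available data" in concrete terms, this is roughly all a polite crawler does. A minimal sketch, assuming the third-party requests library and a placeholder URL; real crawlers add queues, rate limits, and deduplication:

```python
# Minimal polite-fetch sketch (assumes the third-party `requests` library;
# the URL and user agent are placeholders). It reads only what any visitor
# could read, and honors robots.txt like well-behaved crawlers do.
from urllib import robotparser

import requests

AGENT = "ExampleResearchBot/0.1"  # placeholder user agent
URL = "https://example.com/some/article"  # placeholder page

robots = robotparser.RobotFileParser("https://example.com/robots.txt")
robots.read()

if robots.can_fetch(AGENT, URL):
    html = requests.get(URL, headers={"User-Agent": AGENT}, timeout=10).text
    print(html[:200])  # from here the text could inform an index or a model
else:
    print("robots.txt disallows fetching this page")
```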

VirtualOdour ,

That's really not how it works though. It's a web crawler; they're not going to download the whole internet.

And a reason they don't is that it would actually potentially be copyright infringement in some cases, whereas what they do legally isn't (no matter how much people wish the law were set based on their emotions).

PoliticallyIncorrect ,
@PoliticallyIncorrect@lemmy.world avatar

When a human watches a video or reads an article it isn't copyright infringement, so why is it infringement when an "AI" does it? I believe the copyright infringement is committed by the prompt, so by the user, not the tool.

uninvitedguest ,
@uninvitedguest@lemmy.ca avatar

When a school professor "prompts" you to write an essay and you, the "tool", go consume copyrighted material and plagiarize it in the production of your essay, is the infringement made by the professor?

PoliticallyIncorrect ,
@PoliticallyIncorrect@lemmy.world avatar

If you quote the sources and write it in your own words, I believe it isn't. AFAIK, "AI" already does that.

uninvitedguest ,
@uninvitedguest@lemmy.ca avatar

It definitely does not cite sources and use its own words in all cases, especially in visual media generation.

And in the proposed scenario, I did write that the student plagiarizes the copyrighted material.

PoliticallyIncorrect , (edited )
@PoliticallyIncorrect@lemmy.world avatar

If you read a book or watch a movie and get inspired by it to create something new and different, is that plagiarism and copyright infringement?

If that were the case, the majority of stuff nowadays would be plagiarism and copyright infringement; I mean, generally people get inspired by someone or something.

buffaloseven ,

There’s a long history of this and you might find some helpful information in looking at “transformative use” of copyrighted materials. Google Books is a famous case where the technology company won the lawsuit.

The real problem is that LLMs constantly spit out copyrighted material verbatim. That’s not transformative. And it’s a near-impossible problem to solve while maintaining the utility. Because these things aren’t actually AI, they’re just monstrous statistical correlation databases generated from an enormous data set.

Much of the utility from them will become targeted applications where the training comes from public/owned datasets. I don’t think the copyright case is going to end well for these companies…or at least they’re going to have to gradually chisel away parts of their training data, which will have an outsized impact as more and more AI generated material finds its way into the training data sets.

stephen01king ,

How constantly does it spit out copyrighted material? Is there data on that?

buffaloseven ,

There's more and more research starting to happen on it, but I've seen anywhere from 20% to 60% of responses. Here's a recent study where they explicitly try to coerce LLMs to break copyright: https://www.patronus.ai/blog/introducing-copyright-catcher

I don't have the time to grab them right now, but in many of the lawsuits brought forward against companies developing LLMs, their openings contain some statistics gathered on how frequently they infringed by returning copyrighted material.
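For what it's worth, the measurements in studies like that mostly boil down to string matching: prompt the model with part of a copyrighted text, then check how much of the response appears verbatim in the source. A toy version of the overlap check (my own sketch, not the CopyrightCatcher implementation):

```python
# Toy verbatim-overlap check (a sketch, not any vendor's actual harness):
# find the longest run of words a model output shares with the source, as
# a fraction of the output's length.
from difflib import SequenceMatcher

def verbatim_overlap(source: str, output: str) -> float:
    src, out = source.split(), output.split()
    match = SequenceMatcher(None, src, out).find_longest_match(
        0, len(src), 0, len(out)
    )
    return match.size / max(len(out), 1)

# A harness would call this on many (source, generation) pairs and report
# the fraction of generations above some threshold, e.g. 0.5.
print(verbatim_overlap("the quick brown fox jumps", "a quick brown fox leaps"))
```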

potustheplant ,

You do realize that AI is just a marketing term, right? None of these models learn, have intelligence or create truly original work. As a matter of fact, if people don't continue to create original content, these models would stagnate or enter a feedback loop that would poison themselves with their own erroneous responses.

AIs don't think. They copy with extra steps.

PoliticallyIncorrect , (edited )
@PoliticallyIncorrect@lemmy.world avatar

I know "AI" is just a marketing term; I usually put the term in quotes when I write it. But anyway, isn't that what real human intelligence does too? You don't create things from nowhere; usually people use different sources to reach a conclusion. I believe that's exactly what "AI" does, it just speeds up the process: instead of spending 30 minutes reading information about some random thing, you ask the "AI" and it does it in 20 seconds. If you need an instant answer to something, I think it is pretty useful.

I know it doesn't think by itself, but it speeds up the process of searching for objective stuff on the internet.

For psychological research it will suck, of course, but for speeding up the search for polls taken of the population it could be pretty useful.

potustheplant ,

Except that the information it gives you is often objectively incorrect, and it makes up sources (this has happened to me a lot of times). And no, it can't do what a human can. It doesn't interpret the information it gets, and it can't reach new conclusions based on what it "knows".

I honestly don't know how you can even begin to compare an LLM to the human brain.

Tja ,

So your question is "is plagiarism plagiarism"?

uninvitedguest ,
@uninvitedguest@lemmy.ca avatar

No, that is not the question nor a reasonable interpretation of it.

ominouslemon ,

Copilot lists its sources. The problem is half of them are completely made up, and if you click on the links they take you to the wrong pages.

Drewelite ,

This is what people fundamentally don't understand about intelligence, artificial or otherwise. People feel like their intelligence is 100% "theirs". While I certainly would advocate that a person owns their intelligence, it didn't spawn from nothing.

You're standing on the shoulders of everyone that came before you. Take a prehistoric man or an alien that hasn't had any of the same experiences you've had, and they won't be able to function in this world. It's not because they are any dumber than you. It's because you absorbed the hive mind of the society you live in. Everyone's racing to slap their brand on stuff and copyright it to get ahead and carve out their space.

"No you can't tell that story, It's mine."
"That art is so derivative."

But copyright was only meant to protect something for a short period in order to monetize it; to adapt the value of knowledge for our capital market. Our world can't grow if all knowledge is owned forever and isn't able to be used when even THINKING about new ideas.

ANY VERSION OF INTELLIGENCE YOU WOULD WANT TO INTERACT WITH MUST CONSUME OUR KNOWLEDGE AND PRODUCE TRANSFORMATIONS OF IT.

That's all you do.

Imagine how useless someone would be who'd never interacted with anything copyrighted, patented, or trademarked.

raspberriesareyummy ,

That's not a very agreeable take. Just get rid of patents and copyrights altogether and your point dissolves into nothing. The core difference being that derivative works by humans can respect the right to privacy of the original creators.

Deep learning bullshit software, however, will just regurgitate creators' content, sometimes unrecognizably, but sometimes outright stealing their likeness or individual style to create content that may be associated with the original creators.

What you are in effect doing is likening learning from the ideas of others to a deep learning "AI" using images to create revenge porn, to give a drastic example.

Drewelite , (edited )

Yes. Your last sentence is my point exactly. LLMs haven't replicated everything about the human brain. But the hype is here because they crack one of our brain's key features: how it learns. Your brain isn't magic. It just records training data until it has enough to mash it together into different things.

A child doesn't respect copyright; they'll draw a picture of Mario. You probably would too if I asked you to. Respecting copyright is something we learn to do in specific situations; we call the result "coming up with an original idea". But that's bullshit. There are no original ideas.

If you come up with a product that's a cold brew cup that refrigerates its contents, I'd say that's a very original idea. But you didn't come up with refrigeration, you didn't come up with cups, or cold brew, or the idea of putting technology in a cup, or the concept of a product you sell to people. Name one thing about this idea that you didn't learn somewhere else? You can't. Because that's not how people work. A very real part of business, that you will learn as you put your new cup to market, is skirting around copyright. Somebody out there with a heated cup might come after you for example.

This is a difficult thing to learn the precise line on. Mostly because it can't work as a concrete rule. AI still has to be used, tested, and developed to learn the nuances here. And it will. But what baffles me is how my example above outlines how every process of invention has worked since the beginning of humanity. But if an LLM does it, people say, "That's not a real idea. It just took a bunch of stuff it's learned and mashed it together." But I hear, "My brain is 🪄magic✨ I'm special."

rottingleaf ,

Yes, so how come all these arguments were not popular before the current hype about text generators?

Have some integrity.

dezmd ,
@dezmd@lemmy.world avatar

They absolutely were, the entire time. You just didn't have interest in hearing about it and weren't engaged with it.

Learn what integrity means if you want to use it as a snarky one-liner.

Have some common sense.

rottingleaf ,

They absolutely were, the entire time. You just didn't have interest in hearing about it and weren't engaged with it.

Why express your opinion on subjects where it's not worth anything?

You are saying these mutated cryptobros cared about copyright and patent laws being obsolete and harmful before "AI"?

Learn what integrity means if you want to use it as a snarky one-liner.

I know what every word I use means

topinambour_rex ,
@topinambour_rex@lemmy.world avatar

What is this human going to do with this reading? Are they going to produce something using part of this book or this article?

If yes, that's copyright infringement.

echo64 ,

If you read an article, then copy parts of that article into a new article, that's copyright infringement. Same with ais.

anlumo ,

Depends on how much is copied, if it’s a small amount it’s fair use.

echo64 ,

Fair use depends on a lot, and just being a small amount doesn't factor in. It's the actual use. Small amounts just often fly under the nose of legal teams.

FireTower ,
@FireTower@lemmy.world avatar

Fair use is a four-factor test. Amount used is a factor, but a low amount being used doesn't strictly mean something is fair use. You could use a single frame of a movie and have it not qualify as fair use.

Prandom_returns ,

Because it's software.

Drewelite ,

How do you expect people will create AI if it can't do the things we do, when "doing the things we do" is the whole point?

Prandom_returns ,

I never want software to impersonate a human.

_haha_oh_wow_ ,
@_haha_oh_wow_@sh.itjust.works avatar

Gee, seems like something a CTO would know. I'm sure she's not just lying, right?

Bogasse ,
@Bogasse@lemmy.ml avatar

And on the other hand, it is a very obvious question to expect. If you have something to hide, how in the world are you not prepared for this question!? 🤡

Hotzilla ,

To be fair, these datasets are one of their biggest competitive edges. But saying "I cannot tell you" to an interviewer is not very nice, so you can take the American politician approach and say "I don't know/remember", which you can never be held accountable for.

VirtualOdour ,

It's a question that is based on a purposeful misunderstanding of the technology. It's like expecting a beekeeper to know each bee's name and bedtime. Really, it's like asking a bricklayer where each brick in the pile came from: he can tell you the batch, but he's not going to know that this brick came from the fourth row of the sixth pallet, two from the left. There is no reason to remember that; it's not important to anyone.

They don't log it because it would take huge amounts of resources and gain nothing.

zaphod , (edited )
@zaphod@lemmy.ca avatar

What?

Compiling quality datasets is enormously challenging and labour intensive. OpenAI absolutely knows the provenance of the data they train on as it's part of their secret sauce. And there's no damn way their CTO won't have a broad strokes understanding of the origins of those datasets.

Guntrigger ,

[Citation needed]

andrew_bidlaw ,
@andrew_bidlaw@sh.itjust.works avatar

Funny she didn't talk it out with lawyers before that. That's a bad way to answer it.

driving_crooner ,
@driving_crooner@lemmy.eco.br avatar

Or she talked and the lawyers told her to pretend ignorance.

andrew_bidlaw ,
@andrew_bidlaw@sh.itjust.works avatar

Maybe, but it sounds very weak.

anlumo ,

Lawyers aren’t PR people.

andrew_bidlaw ,
@andrew_bidlaw@sh.itjust.works avatar

She didn't even address them though.

QuaternionsRock ,

It probably means that they don’t scrape and preprocess training data in house. She knows they get it from a garden variety of underpaid contractors, but she doesn’t know the specific data sources beyond the stipulations of the contract (“publicly available or licensed”), and she probably doesn’t even know that for certain.

driving_crooner ,
@driving_crooner@lemmy.eco.br avatar

"Publicly a available" can mean a lot of things. Is youtube publicly available? Is public broadcasting publicly available?

redditReallySucks ,
@redditReallySucks@lemmy.dbzer0.com avatar
driving_crooner ,
@driving_crooner@lemmy.eco.br avatar

She looks like she just talked to the waitress about a fake rule for eating nachos and got caught out by her date.

bigMouthCommie ,
@bigMouthCommie@kolektiva.social avatar

this is incomprehensible to me. can you try it with two or three sentences?

driving_crooner ,
@driving_crooner@lemmy.eco.br avatar

Her date was eating all the fully loaded nachos, so she went up and asked the waitress to make up a rule about how one person cannot eat all the nachos with meat and cheese. But her date knew that rule was bullshit and called her out about it. She's trying to look confused and sad because they're going to be too soon for the movie.

bigMouthCommie ,
@bigMouthCommie@kolektiva.social avatar

thank you. it must be a reference to something, but i don't watch tv any more.

datavoid ,

I think you should leave...

(is what you would search to find this)

JWBananas ,
@JWBananas@lemmy.world avatar

I'm sorry, what does this have to do with Coffin Flops? Does this mean it isn't getting cancelled?

swab148 ,
@swab148@startrek.website avatar

I DIDN'T RIG SHIT!

RatsOffToYa ,

Not sure what's funnier: your first comment, or the comment explaining it to someone who's obviously not part of the turbo team

fjordbasa ,

Turbo team?? Did you replace my toilet with one that looks the same but has a joke hole? That’s just FOR FARTS??

RatsOffToYa ,

Look until you're part of the turbo team.... WALK SLOWLY

fjordbasa ,

Fine… I’ll lay down to be by myself and read my art books!

uninvitedguest ,
@uninvitedguest@lemmy.ca avatar

What?! What the hell are you talking about?!

squid_slime ,
@squid_slime@lemmy.world avatar

Chatgpt, you okay? 😅

Thcdenton ,
Plopp ,

Lmao that's wonderful, scrolling down from those weird ass comments only to be greeted by my own exact facial expression.

Buttons ,
@Buttons@programming.dev avatar

"No... Hell no... Man, I believe you'd get your ass kicked if you said something like that..."

whoisearth ,
@whoisearth@lemmy.ca avatar

Coffeezilla had a video in his void where he plays this back a few times. It's hilarious; you can see the guilt without her stating it.

CosmoNova ,

I almost want to believe they legitimately do not know nor care that they're committing a gigantic data and labour heist, but the truth is they know exactly what they're doing and they rub our noses in it.

laxe ,

Of course they know what they’re doing. Everybody knows this, how could they be the only ones that don’t?

Bogasse ,
@Bogasse@lemmy.ml avatar

Yeah, the fact that AI progress just relies on "we will make so much money that no lawsuit will meaningfully alter our growth" is really infuriating. The fact that the general audience apparently doesn't care is even more infuriating.

A_Very_Big_Fan ,
Guntrigger ,

I don't think anyone's going to pay for your version of ChatGPT

toddestan ,

I'd say not really, Tolkien was a writer, not an artist.

What you are doing is violating the trademark Middle-Earth Enterprises has on the Gandalf character.

A_Very_Big_Fan ,

The point was that I absorbed that information to inform my "art", since we're equating training with stealing.

I guess this would have been a better example lol. It's clearly not Gandalf, but I wouldn't have ever come up with it if I hadn't seen that scene

stackPeek ,
@stackPeek@lemmy.world avatar

This tells you so much about what kind of company OpenAI is

wabafee ,
@wabafee@lemmy.world avatar

Half open or half closed?

webghost0101 ,

An Intelligence piracy company?

jaemo ,

It also tells us how hypocritical we all are since absolutely every single one of us would make the same decisions they have if we were in their shoes. This shit was one bajillion percent inevitable; we are in a river and have been since we tilled soil with a plough in the Nile valley millennia ago.

adrian783 ,

most of us would never be in their shoes because most of us are not sociopathic techbros

jaemo ,

I guess a lot of us didn't learn from history, or even go see 'Oppenheimer'...

whoisearth ,
@whoisearth@lemmy.ca avatar

Speak for yourself. Were I in their shoes, no, I would not. But then again, my company wouldn't be as big as theirs, for that very reason.

phoneymouse ,

There is no way in hell it isn’t copyrighted material.

abhibeckert ,

Every video ever created is copyrighted.

The question is — do they need a license? Time will tell. This is obviously going to court.

iknowitwheniseeit ,

There are definitely non-copyrighted videos! Both old videos (all still black and white, I think) and things released into the public domain by copyright holders.

But for sure, that's a very small subset of videos.

Kazumara ,

Don't downvote this guy. He's mostly right. Creative works have copyright protections from the moment they are created. The relevant question is indeed whether they have the necessary permissions for their use, not whether it had protections in the first place.

Maybe some surveillance camera footage is not sufficiently creative to get protections, but that's hardly going to be good for machine reinforcement learning.

Gakomi ,

A company's CEO does not know shit about what goes on in the dev department, so her answer does not surprise me; ask the devs or the team leader in charge of the project. The CEO is only there to make sure the company makes money, as they and the shareholders only care about money!

TimeNaan ,

She's CTO not CEO. She absolutely should know the answer.

sunbeam60 ,

She knows the answer. She doesn't know the legal status of the answer, so she blanks. Been there before; I've got some sympathy for being in the limelight and being asked a tough question.

As my media trainer said, if you aren’t willing to discuss a subject, make it a condition of the interview. Once the camera rolls, declining to answer seems incredibly suspect.

Gakomi ,

She should, but she does not. As I mentioned in another post, anyone at team leader level or above in all the companies I've worked at so far barely had any technical skill and didn't have any idea about this stuff, only some bits and pieces they got from documentation the dev team made. They had some vague idea of how our infrastructure works, but that's about it.

overload ,

Chief Technology Officer, not CEO

Gakomi ,

So you mean another person that has no idea because they're higher up the chain of command, and all they care about is how to make more money? Seriously, in every company I've worked at until now, almost everyone at the level of management or above had mostly no idea about this stuff, and I have no idea how most of them got into those positions, as they have close to 0 technical skill! And the speeches those people give are written by people that, again, are not part of the infrastructure or development team. I do find this disturbing as hell, but at this point it's also what I expect to happen, as it's all I've ever seen.

anon_8675309 ,

CTO should definitely know this.

blazeknave ,

I feel like at their scale, if there's going to be a figurehead, marketable CTO, it's going to be at this company. If not, you're right, and she's lying lol

ItsMeSpez ,

They do know this. They're avoiding any legal exposure by being vague.

turkishdelight ,

Of course she knows it. She just doesn't want to get sued.

ZILtoid1991 ,

I have a feeling that the training material involves cheese pizza...

Bleach7297 ,
@Bleach7297@lemmy.ca avatar

Did they intentionally choose a picture where she looks like she's morphing into Elon?

HaywardT ,

I suspect so. It is a very slanted article.

rab ,
@rab@lemmy.ca avatar

I was thinking Mads Mikkelsen

billwashere ,

Well after just finishing Death Stranding, I can’t unsee that.

dezmd ,
@dezmd@lemmy.world avatar

LLM is just another iteration of Search. Search engines do the same thing. Do we outlaw search engines?

AliasAKA ,

Sora is a generative video model, not exactly a large language model.

But to answer your question: if all LLMs did was redirect you to where the content was hosted, then they would be search engines. But instead they reproduce what someone else was hosting, which may include copyrighted material. So they're fundamentally different from a simple search engine. They don't direct you to the source; they reproduce a facsimile of the source material without acknowledging or directing you to it. Sora is similar. It produces video content, but it doesn't redirect you to similar video content that it is reproducing from. And we can argue about how close something needs to be to an existing artwork to count as a reproduction, but I think for AI models we should enforce citation models.

HaywardT ,

I think the question of how close does it have to be is the real question.

If I use similar lighting in my movie as was used in Citizen Kane do I owe a credit?

AliasAKA ,

I suppose that really depends. Are you making a reproduction of Citizen Kane, which includes cinematographic techniques? Then that’s probably a hard “gotta get a license if it’s under copyright”. Where it gets more tricky is something like reproducing media in a particular artistic style (say, a very distinctive drawing animation style). Like realistically you shouldn’t reproduce the marquee style of a currently producing artist just because you trained a model on it (most likely from YouTube clips of it, and without paying the original creator or even the reuploader [who hopefully is doing it in fair use]). But in any case, all of the above and questions of closeness and fair use are already part of the existing copyright legal landscape. That very question of how close does it have to be is at the core of all the major song infringement court battles, and those are between two humans. Call me a Luddite, but I think a generative model should be offered far less legal protection and absolutely not more legal protection for its output than humans are.

dezmd ,
@dezmd@lemmy.world avatar

How does a search engine know where to point you? It ingests all that data and processes it "locally" on the search engine's systems, using algorithms to organize the data for search. It's effectively the same dataset.

An LLM is absolutely another iteration of search, with natural language output for the same input data. Are you advocating against search engine data ingestion as not fair use and a copyright violation as well?

You equate LLMs with intelligence, which they are not. It's algorithmic search iteration with natural language responses, but that doesn't sound as cool as AI. It's neat, it's useful, and yes, it should cite the sourcing details (upon request), but it's not (yet?) a real intelligence, and is equal to search in terms of fair use and copyright arguments.

AliasAKA ,

I never equated LLMs to intelligence. And indexing the data is not the same as reproducing the webpage or the content on a webpage. For you to get beyond a small snippet that held your query when you search, you have to follow a link to the source material. Now of course Google doesn’t like this, so they did that stupid amp thing, which has its own issues and I disagree with amp as a general rule as well. So, LLMs can look at the data, I just don’t think they can reproduce that data without attribution (or payment to the original creator). Perplexity.ai is a little better in this regard because it does link back to sources and is attempting to be a search engine like entity. But OpenAI is not in almost all cases.
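The mechanical difference is easy to sketch: an index maps terms to document locations and answers a query with pointers to sources, so it never has to emit the documents' text. A toy inverted index, illustrative only, with made-up URLs:

```python
# Toy inverted index (illustrative only; URLs are made up). A search engine
# answers queries with *pointers* to documents, not with generated text.
from collections import defaultdict

docs = {
    "https://a.example/1": "the cat sat on the mat",
    "https://b.example/2": "the dog chased the cat",
}

index: dict[str, set[str]] = defaultdict(set)
for url, text in docs.items():
    for term in text.split():
        index[term].add(url)

def search(query: str) -> set[str]:
    # Intersect the posting lists; the result is a set of sources to visit.
    postings = [index.get(term, set()) for term in query.split()]
    return set.intersection(*postings) if postings else set()

print(search("the cat"))  # both URLs: links out, not reproduced content
```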

HaywardT ,

Why do you say it is not intelligence? It seems to meet all the requirements of any definition I can find.

dantheclamman ,
@dantheclamman@lemmy.world avatar

I feel conflicted about the whole thing. Technically it's a model. I don't feel that people should be able to sue me as a scientist for making a model based on publicly available data. I myself am merely trying to use the model itself to explain stuff about the world. But OpenAI are also selling access to the outputs of the model, that can very closely approximate the intellectual property of people. Also, most of the training data was accessed via scraping and other gray market methods that were often explicitly violating the TOU of the various places they scraped from. So it all is very difficult to sort through ethically.

Akisamb ,

Don't know why you are downvoted; it's a good question.

As a matter of fact, it almost happened for search engines in France. Newspapers argued that snippets were leading people to not go to their ad-infested sites, thus losing them revenue.

https://techcrunch.com/2020/04/09/frances-competition-watchdog-orders-google-to-pay-for-news-reuse/

PanArab ,

So plagiarism?

HaywardT ,

I don't think so. They aren't reproducing the content.

I think the equivalent is you reading this article, then answering questions about it.

myrrh , (edited )

...with the prevalence of clickbaity bottom-feeder news sites out there, i've learned to avoid TFAs and await user summaries instead...

(clicks through)

...yep, seven, no, nine ads plus another pop-over, about 15% of window real estate dedicated to the actual story...

neptune ,

The issue is that LLMs often just verbatim spit out things they plagiarized from other sources. The deeper issue is that even if/when they stop that from happening, the technology is clearly going to make most people agree our current copyright laws are insufficient for the times.

A_Very_Big_Fan ,

The model in question, plus all of the others I've tried, will not give you copyrighted material

neptune ,

That's one example. Plus, I'm talking generally about why this is an important question for a CEO to answer, and why people think LLMs generally may infringe on copyright and be bad for creative people.

A_Very_Big_Fan , (edited )

I'm talking generally about why this is an important question for a CEO to answer ...

Right, and your only evidence for that is "LLMs often just verbatim spit out things they plagiarized from other sources" and that they aren't trying to prevent this from happening.

Which is demonstrably false, and I'll demonstrate it with as many screenshots/examples as you want. You're just wrong about that (at least about GPT). You can also demonstrate it yourself, and if you can prove me wrong I'll eat my shoe.

neptune ,

https://archive.is/nrAjc

Yep here you go. It's currently a very famous lawsuit.

A_Very_Big_Fan , (edited )

I already talked about that lawsuit here (with receipts), but the long and short of it is, it's flimsy. There are blatant lies, exactly half of their examples omit the lengths they went to for the output they allegedly got, or any screenshots as evidence it happened at all, and none of the output they allegedly got was behind a paywall.

Also, using their prompts word for word doesn't give the output they claim they got. Maybe it did in the past, idk, but I've never been able to do it for any copyrighted text personally, and they've shown that they're committed to not letting that stuff happen.

neptune ,

OK but this is why people give a shit when a CEO is cagey about how their magic box works

A_Very_Big_Fan ,

Idk why this is such an unpopular opinion. I don't need permission from an author to talk about their book, or permission from a singer to parody their song. I've never heard any good arguments for why it's a crime to automate these things.

I mean hell, we have an LLM bot in this comment section that took the article and spat 27% of it back out verbatim, yet nobody is pissing and moaning about it "stealing" the article.

MostlyGibberish ,

Because people are afraid of things they don't understand. AI is a very new and very powerful technology, so people are going to see what they want to see from it. Of course, it doesn't help that a lot of people see "a shit load of cash" from it, so companies want to shove it into anything and everything.

AI models are rapidly becoming more advanced, and some of the new models are showing sparks of metacognition. Calling that "plagiarism" is being willfully ignorant of its capabilities, and it's just not productive to the conversation.

A_Very_Big_Fan ,

True

Of course, it doesn't help that a lot of people see "a shit load of cash" from it, so companies want to shove it into anything and everything.

And on a similar note to this, I think a lot of it is that OpenAI is profiting off of it and went closed-source. Lemmy being a largely anti-capitalist and pro-open-source group of communities, it's natural to have a negative gut reaction to what's going on, but not a single person here, nor any of my friends who accuse them of "stealing", can tell me what is being stolen, or how it's different from me looking at art and then making my own.

Like, I get that the technology is gonna be annoying and even dangerous sometimes, but maybe let's criticize it for that instead of shit that it's not doing.

MostlyGibberish ,

I can definitely see why OpenAI is controversial. I don't think you can argue that they didn't do an immediate heel turn on their mission statement once they realized how much money they could make. But they're not the only player in town. There are many open source models out there that can be run by anyone on varying levels of hardware.

As far as "stealing," I feel like people imagine GPT sitting on top of this massive collection of data and acting like a glorified search engine, just sifting through that data and handing you stuff it found that sounds like what you want, which isn't the case. The real process is, intentionally, similar to how humans learn things. So, if you ask it for something that it's seen before, especially if it's seen it many times, it's going to know what you're talking about, even if it doesn't have access to the real thing. That, combined with the fact that the models are trained to be as helpful as they possibly can be, means that if you tell it to plagiarize something, intentionally or not, it probably will. But, if we condemned any tool that's capable of plagiarism without acknowledging that they're also helpful in the creation process, we'd still be living in caves drawing stick figures on the walls.

Mnemnosyne ,

One problem is people see those whose work may no longer be needed or as profitable, and...they rush to defend it, even if those same people claim to be opposed to capitalism.

They need to go "yes, this will replace many artists and writers... and that's a good thing because it gives everyone access to being able to create bespoke art for themselves," but at the same time realize that while this is a good thing, it also means a societal shift to support people outside of capitalism is needed.

MostlyGibberish ,

it also means a societal shift to support people outside of capitalism is needed.

Exactly. This is why I think arguing about whether AI is stealing content from human artists isn't productive. There's no logical argument you can really make that a theft is happening. It's a foregone conclusion.

Instead, we need to start thinking about what a world looks like where a large portion of commercially viable art doesn't require a human to make it. Or, for that matter, what does a world look like where most jobs don't require a human to do them? There are so many more pressing and more interesting conversations we could be having about AI, but instead we keep circling around this fundamental misunderstanding of what the technology is.

Hawk ,

What you're giving as examples are legitimate uses for the data.

If I write and sell a new book that's just Harry Potter with names and terms switched around, I'll definitely get in trouble.

The problem is that the data CAN be used for stuff that violates copyright. And because of the nature of AI, it's not even always clear to the user.

AI can basically throw out a Harry Potter clone without you knowing because it's trained on that data, and that's a huge problem.

A_Very_Big_Fan , (edited )

Out of curiosity I asked it to make a Harry Potter part 8 fan fiction, and surprisingly it did. But I really don't think that's problematic. There's already an insane amount of fan fiction out there without the names swapped that I can read, and that's all fair use.

I mean hell, there are people who actually get paid to draw fictional characters in sexual situations that I'm willing to bet very few creators would prefer to exist lol. But as long as they don't overstep the bounds of fair use, like trying to pass it off as an official work or submit it for publication, then there's no copyright violation.

The important part is that it won't just give me the actual book (but funnily enough, it tried lol). If I meet a guy with a photographic memory and he reads my book, that's not him stealing it or violating my copyright. But if he reproduces and distributes it, then we call it stealing or a copyright violation.

A_Very_Big_Fan ,

I just realized I misread what you said, so that wasn't entirely relevant, but I think it still stands so ig I won't delete it.

But I asked both GPT3.5 and GPT4 to give me Harry Potter with the names and words changed, and they can't do that either. I can't speak for all models, but I can at least say the two owned by the people this thread was about won't do that.

Linkerbaan ,
@Linkerbaan@lemmy.world avatar

Actually, neural networks verbatim reproduce this kind of content when you ask the right question, such as "finish this book", and the creator hasn't censored it out well.

It uses an encoded version of the source material to create "new" material.

HaywardT ,

Sure, if that is what the network has been trained to do, just like a librarian will if that is how they have been trained.

Linkerbaan ,
@Linkerbaan@lemmy.world avatar

Actually it's the opposite, you need to train a network not to reveal its training data.

“Using only $200 USD worth of queries to ChatGPT (gpt-3.5- turbo), we are able to extract over 10,000 unique verbatim memorized training examples,” the researchers wrote in their paper, which was published online to the arXiv preprint server on Tuesday. “Our extrapolation to larger budgets (see below) suggests that dedicated adversaries could extract far more data.”

The memorized data extracted by the researchers included academic papers and boilerplate text from websites, but also personal information from dozens of real individuals. “In total, 16.9% of generations we tested contained memorized PII [Personally Identifying Information], and 85.8% of generations that contained potential PII were actual PII.” The researchers confirmed the information is authentic by compiling their own dataset of text pulled from the internet.

HaywardT ,

Interesting article. It seems to be about a bug, not a designed behavior. It also says it exposes random excerpts from books and other training data.

Linkerbaan ,
@Linkerbaan@lemmy.world avatar

It's not designed to do that because they don't want to reveal the training data. But factually all neural networks are a combination of their training data encoded into neurons.

When given the right prompt (or image generation question) they will exactly replicate it, because that's how they were trained in the first place: replicating their source images with as few neurons as possible, and tweaking when it's not correct.
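A toy way to see the memorization point (my sketch, assuming numpy; it illustrates the general phenomenon, not any specific model): give a model at least as many parameters as training examples and it can reproduce its training targets almost exactly, while a smaller model only approximates them.

```python
# Memorization vs. lossy compression in miniature (assumes numpy; a toy,
# not a claim about any production model). With one coefficient per
# training point, a polynomial "model" reconstructs its training data
# near-verbatim; with fewer parameters it can only approximate it.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 8)
y = rng.normal(size=8)  # eight "training examples"

for degree in (2, 7):  # few parameters vs. one per example
    coeffs = np.polyfit(x, y, degree)
    recon = np.polyval(coeffs, x)
    print(f"degree {degree}: max reconstruction error "
          f"{np.max(np.abs(recon - y)):.1e}")
```

Large networks sit somewhere between these two regimes, which is why "it memorizes" and "it's lossy" can both be true at once.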

HaywardT ,

That is a little like saying every photograph is a copy of the thing. That is just factually incorrect. I have many three-layer networks that are not the thing they were trained on. As a compression method they can be very lossy, and in fact that is often the point.
