No, if you read the article, the point is in the training, not the reproduction.
That's what all these lawsuits are about - it's the training, not the reproduction. I already agreed in my first comment that the reproduction is off limits.
In this case, it appears that Meta torrented illegal copies of the work to do the training. Obviously that's bad. But conflating that with training itself doesn't follow.
The point of these lawsuits is the piracy. My parent comment was about the general situation, not this specific article.
Pirating content is illegal, regardless of whether it's to train an LLM.
Usage of LLMs trained on unlicensed content (basically all of them) might or might not be illegal.
Using any method to reproduce a copyrighted work by using that original as input in a way that supplants the market value of the original is probably illegal.
Well - maybe so. But the common belief is that training itself is a violation of copyright, no matter how it's done. That's the argument I'm countering here.
If copyright law doesn't extend to the works being used for training, why should it extend to the model that is produced as a result? AI model creators have set up an ethical scenario where the right thing to do is ignore copyright laws when it comes to AI, which includes model use. It might never be legal, but it has become ethical to pirate models, distill them against ToS, etc.
>The problem is producing the copyrighted work, not processing it beforehand.
the distinction isn't particularly clear cut with an open source model. If it is able to reproduce copyright-protected work with high fidelity, such that the works produced would be derivative, that's like trying to get around laws against distribution of protected works by handing them to you in a zip file.
It's a kind of copyright washing to hand you the data as a binary blob and an algorithm to extract the works out of it. That wouldn't really fly with any other technology.
I tend to agree - but you assume that it would not be possible to create a model that can train on copyrighted work and only output text which would be considered fair use.
That seems very possible to me, and undermines the "training is copyright violation" argument. It's not the training, it's the output.
The problem is people at large companies creating these AI models, wanting the freedom to copy artists’ works when using it, but these large companies also want to keep copyright protection intact, for their regular business activities. They want to eat the cake and have it too. And they are arguing for essentially eliminating copyright for their specific purpose and convenience, when copyright has virtually never been loosened for the public’s convenience, even when the exceptions the public asks for are often minor and laudable. If these companies were to argue that copyright should be eliminated because of this new technology, I might not object. But now that they come and ask… no, they pretend to already have, a copyright exception for their specific use, I will happily turn around and use their own copyright maximalist arguments against them.
I take issue with the tense used in this framing. It's not 'infringed', it's 'infringing': to say that it happened is wrong; it's happening, and happening continuously, in these models that are in use. To say a one-time payment settles it misses the whole scope of this theft.
Royalties are owed, and continuously owed, as these models are deployed and doing inference. How is it any different from paying a small pittance to someone every time a song is played?
Royalties for inference are unrealistic in a way that even royalties for training aren't.
The LLaMA models were released openly. Copies exist everywhere in the world. You aren't going to be able to charge someone for running `llama.cpp`; a court order ceases to have practical relevance at that point.
First, LLMs do not reliably cite works. They are not looking things up in a database and repeating them. I think this false idea occurs a lot in people who don't understand what LLMs are or how they work.
Second, royalties are not required to cite a source.
Can you imagine how disastrous it would be to everything from news reporting to scientific publishing if that was the case?
Yeah well then I want my robot running this crap locally in its brain so I can get it to farm my two acres and haul water for me and I'll unplug from the rest of this nonsense going forward lol.
... LLMs cannot reliably provide citations. If you ask for citations, and the model did not use a web search tool, then whatever "citations" you receive are unreliable. Please do not trust these models to be honest. Just because they can discuss a topic doesn't mean they "know" where the knowledge came from in the same way that you don't need to have studied physics to catch a ball.
Even better, what if you transform that stolen CD into an MP3, so the data isn't the same (a lossy process was used), then share the MP3 with the world as your own work?
I don't get why the training process doesn't count like any other form of transformation, but then I'm not a lawyer.
I don't have strong opinions on Zuck needing to be punished for this, because I have friends and family doing the same thing, although perhaps not at the same scale. I myself do not download copyrighted content. I think "rules for thee, not for me" goes both ways.
How are these fruits "stolen" if they still have what was allegedly stolen?
Dowling v. United States, 473 U.S. 207 (1985): The Supreme Court ruled that the unauthorized sale of phonorecords of copyrighted musical compositions does not constitute "stolen, converted or taken by fraud" goods under the National Stolen Property Act.
And even if, arguendo, sure its stolen. The purpose of copyright is to "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries"
And you would be hard pressed to prove that LLMs haven't advanced the arts and sciences, so it's at bare minimum transformative, i.e. fair use.
I think you are confusing the idiom "stolen fruits" with an actual accusation of criminal theft. Aside from its use in this phrasing, neither "theft" nor "steal" appears anywhere else in the article.
I had to block meta's ASN on my personal cgit server a few weeks ago because they were ignoring robots.txt and torching it. Like hundreds of megabytes of access logs just from them, spread around different network blocks to clearly try and defeat IP based limiting. I couldn't believe it.
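For anyone in the same boat: per-IP rate limiting fails once a crawler rotates through an entire ASN's address space, so the blocking has to happen at prefix granularity. A minimal sketch in Python using only the standard library (the prefixes below are RFC 5737 documentation ranges, stand-ins for whatever the ASN actually announces):

```python
import ipaddress

# Stand-in prefixes (RFC 5737 documentation ranges). In practice you'd
# populate this list from the ASN's announced routes via a route registry,
# not hardcode it.
BLOCKED_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_blocked(client_ip: str) -> bool:
    """True if client_ip falls inside any blocked prefix."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in BLOCKED_PREFIXES)

print(is_blocked("203.0.113.42"))  # True: inside a blocked /24
print(is_blocked("192.0.2.1"))     # False: outside every blocked prefix
```

Wire something like this into the reverse proxy in front of cgit and rotating through network blocks stops helping them.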
A lot of people would be very pleased if this leads to Zuckerberg getting even the statutory minimum damages ($750?) on each infringement.
The previous infringement case with Anthropic said that while training an AI was transformative and not itself an infringement, pirating works for that purpose still was definitely infringement all by itself. The settlement was $1.5bn, so close to $3k for each of the 500k they pirated, so if Zuckerberg pirated "millions" (plural) it is quite plausible his settlement could be $6bn.
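The back-of-the-envelope here is easy to verify; the 2 million figure below is just an assumption to make "millions" (plural) concrete:

```python
# Anthropic settlement figures from the comment above.
settlement = 1.5e9   # $1.5bn total
works = 500_000      # pirated works covered by the settlement

per_work = settlement / works
print(per_work)      # 3000.0 -> ~$3k per work

# Extrapolate to a hypothetical 2 million pirated works at the same rate.
print(per_work * 2_000_000)  # 6000000000.0 -> ~$6bn
```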
Nothing will happen to him/Meta while DJT is president.
He bought the best protection around for breaking the law.
When you're a big Trump donor they let you do it.
For context, his net worth is ~$220 billion.
And meta's worth is much more than that. He's not personally paying.
A company being "worth" some amount doesn't mean it has that much money and real property; it means there exist people willing to buy shares, on the margin, at a price that works out to that figure. One common (very rough) approximation is that a business is worth the profit it's expected to make over the next 20 years. But one of the many reasons this is only a rough guide is that selling too much of a big company in one go usually depresses the price a lot, while trying to buy a whole company tends to raise it a lot. Both effects occur because most people have different ideas about how much any given company is really worth, despite that rough guide, and trade their shares at different prices while you're doing it. You may note this is a circular argument; that is indeed part of the problem.
IIRC, Facebook's cash is more like $81-82 billion.
At the same time, isn't Zuck's worth based on his shares of evilCorp, while evilCorp's shares are what you just said? Ergo, the Zuck isn't worth all that either???
Shouldn't this stuff trigger RICO? Why do torrent site operators get led off in cuffs for running operations that usually lose money, but Zuck doesn't?
RICO specifically cites "criminal infringement of a copyright" as laid out in 18 U.S. Code § 2319. If the CEO tells his employees to download millions of works illegally in order to carry out his money-making scheme, how is that not organized crime, even if (dubiously) LLM training on the material is fair use?
-----
RICO: https://www.law.cornell.edu/uscode/text/18/part-I/chapter-96
Definitions: https://www.law.cornell.edu/uscode/text/18/1961
> As used in this chapter — (1) “racketeering activity” means (A)[...]; (B) any act which is indictable under any of the following provisions of title 18, United States Code: [...], section 2319 (relating to criminal infringement of a copyright),[...]
18 U.S. Code § 2319 - Criminal infringement of a copyright: https://www.law.cornell.edu/uscode/text/18/2319
So... "move fast and steal things"?
Always Has Been
Just gonna say... Aaron Swartz faced years of prison time and ultimately decided to take his own life... for downloading scientific journal articles... to share freely with the world (aka not even profiting from it).
But a multi-billion dollar corporation downloading millions of copyrighted creative works so that they can reshape the entire labor market by training a new type of artificial intelligence model on that data set? Meh, sounds like Silicon Valley disruption, give the man a medal!
Truly ahead of his time
Had Aaron copied Snapchat 5 times the DOJ would've been fine with it all. His fault for not having the foresight
(I'm being sarcastic. Zuck gets rewarded for continually copying Snapchat features into his products)
Rules for thee but not for me.
> a Meta spokesperson said, “AI is powering transformative innovations, productivity and creativity for individuals and companies, and courts have rightly found that training AI on copyrighted material can qualify as fair use. We will fight this lawsuit aggressively.”
> Authors have sued AI companies for copyright infringement before - and lost.
So, basically nothing will come out of this
they'll litigate how meta acquired those materials to train. you can do whatever you want with a book after it's in your house. but how did it get there?
The behavior will continue until a consequence is imposed.
I would rather Zuckerberg do 6 months in jail and probation than fine Meta.
You aren't going to be able to make me anti-piracy just because some corpo benefits from it too.
I think this is an easy distinction to make: copyright is bullshit and knowledge should be free. I have no problem with pirates sharing information freely. I do have a problem with a company taking someone else's work and profiting from it. The only thing worse than copyright as it exists is copyright that can be selectively ignored when the powerful will it. Attempt to use copyright to promote Free software with the GPL? Ha, nope, copyright for me and not for thee; I'll train on your code and sell it back to you. You want to preserve access to a game or film that's unavailable or unplayable? Time to send the C&D and destroy you. Only bad things are possible.
Until we progress as a society to the point that we can put this system behind us we should at least fight to make enforcement uniform. In fact, uniform enforcement is probably a good starting point for arguing for abolition, as the pain of that enforcement is felt by proles and elites alike.
I agree, it's time to start handing out real punishments. I think 6 months is way too small.
If this were you or me, we would be in prison for decades and have a fine in the millions. Time for these people to feel consequences.
As someone said, they will probably settle for around $6 billion; relative to Meta, that is about the same as a $100 fine for us.
This comment could get its own DSM classification for how insane it is.
I'm all for strong justice, but you want to imprison an executive for decades for copyright violations?
I'm gonna have to go dig up the link, but isn't there a guy that Nintendo basically has on indentured servitude for the rest of his life?
Ah, found it:
>In April 2023, a 54-year-old programmer named Gary Bowser was released from prison having served 14 months of a 40-month sentence. Good behaviour reduced time behind bars, but now his options are limited. For a while he was crashing on a friend’s couch in Toronto. The weekly physical therapy sessions, which he needs to ease chronic pain, were costing hundreds of dollars every week, and he didn’t have a job. And soon, he would need to start sending cheques to Nintendo. Bowser owes the makers of Super Mario $14.5m (£11.5m), and he’s probably going to spend the rest of his life paying it back.
I'm not even a tiny bit supportive, but there is precedent.
https://www.theguardian.com/games/2024/feb/01/the-man-who-ow...
American executives have been pushing to criminalise copyright infringement for decades, and America has worked hard to pressure countries all round the world to do this as part of trade deals. There is, for example, a Brit serving an eleven year sentence right now *.
Why should Zuckerberg be exempt?
* https://www.bbc.co.uk/news/uk-65697595
Facebook isn't one of the companies that's been pushing for that.
The non-strawman way to interpret the parent comment is that they want them to be treated the same as normal copyright violators. Jail is a common result of (criminal) copyright prosecution, with 44% of convicted offenders being imprisoned, averaging 25 months [0].
Now, I personally find the idea of imprisoning people for copyright offenses horrific, but I don't think it's remotely insane that someone else might come to that conclusion, given that we broadly accept it as a society.
[0] https://www.ussc.gov/sites/default/files/pdf/research-and-pu...
From [0]: "In fiscal year 2017, there were 80 copyright/trademark infringement offenders who accounted for 0.1% of all offenders sentenced under the guidelines." This is such a low number that I assume most prosecuted cases are settled without ever making it to sentencing, or alternatively copyright infringement is just hardly ever prosecuted criminally at all.
I would prefer a harsher punishment, but I would begrudgingly accept throwing him in jail for decades.
I always heard that criminals should be thrown in jail; it's time we started doing it to the real criminals.
> I'm all for strong justice, but you want to imprison an executive for decades for copyright violations?
They stole the life's work of millions of people.
In less civilized times, they likely would have been drawn and quartered by strong horses, and had their limbs dragged to the four corners of the continent as a warning to anyone else who would consider doing it again.
Is this controversial? Executives should be held liable, certainly moreso than just regular people sharing files.
For better or for worse, the idea behind incorporation is that you, as an owner of part or all of the company, are separated from it financially and legally in most circumstances.
Zuckerberg may be CEO, majority shareholder, and on the board of Meta, but he didn't break copyright law, Meta did. So if there were to be a consequence, Meta would pay out the fine. Not sure how you jail a company.
Now, in a company with a real corporate governance structure, the board would look at the loss incurred by said fine, look at Zuckerberg, and immediately fire him for causing it. However, like I said before, Zuck's in charge of Meta, so that's not going to happen, and the fine is unlikely to impact the company's profitability enough to sink his shares, which are the main repository of his wealth. So if he thinks he can make himself richer by violating copyright law in the future, he will likely direct Meta to do so.
TL;DR, in the famous words of Bender from Futurama, "Hooray, the system fails again!"
There aren't enough things an executive can go to jail for.
Fines don't do anything to deter bad behavior. Either:
* The company pays
* They pay and the company mysteriously increases next year's comp / grants a "loan" / etc
* D&O insurer pays
In all three cases the money comes out of the shareholders' hides. It provides zero personal deterrence. The payoff matrix, as seen by a sociopath, makes it rational to always defect against the common good.
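The incentive claim above can be made concrete with toy numbers (all values hypothetical):

```python
# Hypothetical payoff for the executive personally when the fine lands on
# the company (i.e. shareholders) rather than the decision-maker.
personal_upside = 10.0   # bonus / stock appreciation from the scheme
p_caught = 0.9           # even near-certain punishment doesn't matter...
fine_paid_by_exec = 0.0  # ...because the exec's share of the fine is zero

ev_violate = personal_upside - p_caught * fine_paid_by_exec
ev_comply = 0.0
print(ev_violate > ev_comply)  # True: defecting dominates at any p_caught
```

As long as `fine_paid_by_exec` is zero, no fine size or detection probability changes the executive's personal calculus; only personal liability does.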
The only punishment that can really focus attention is physical imprisonment in a facility they can't choose.
SOX did this for financial reporting and gee shucks it turned out executives can follow the law after all!
I know people really hate AI training on their work - but is it really any different than a human reading it?
I know there's a complaint that AI can verbatim repeat that work. But so can human savants. No one is suing human savants for reading their books.
Producing copyrighted material, of course. Training on copyrighted material... I just don't see it.
EDIT: Making a perfectly valid point, but it's unpopular, so down I go.
I had to buy the copyrighted material before reading it... Meta apparently operates in a different legal system than me. That's my issue with it.
Yes, I have no objection to that part. It's the arguments that training itself is the problem.
Sarah Silverman as the most prominent example.
There's a huge difference in scale. The human mind can only process a limited portion of all works available over a lifetime. Human learning is therefore naturally limited to small-scale reuse, which serves to keep it proportional.
A machine training on all copyrighted materials in the world for commercial purposes at an industrial scale makes it disproportionate.
I see that as a distinction - but does it make a difference?
If a company hired hundreds of savants, then it would be illegal for them to read books?
I don't follow.
It would hardly make a dent. And if you hired hundreds of savants, the knowledge would still be spread over hundreds of separate minds.
And even if we grant that those savants are also very skilled at creating "market substitutes" based on their training that are capable of competing with the original works, their maximum creative output would only be a relatively small number of new works, because they can only work at human speed.
Ok - but if a company were able to hire one million savants, you feel it should be illegal, because why?
Can you cite something in the copyright laws themselves that suggest this scale distinction?
This goes back to the original purpose of copyright, which is to serve as an economic incentive for individual creators and artists to make more art, by securing exclusive rights to use their own works commercially for a specified time. The goal is both the creation of more works, but also to protect the economic viability of artists.
This principle is quite universal and can be found in many places, including the US constitution and US (supreme) court decisions, many international jurisdictions, treaties and conventions.
The human savant will remember where they read it and give you credit. It might lead more people to read your work, and ultimately you make money.
The AI won't even know where the page of text it's seeing came from, and people will avoid your book as they can just ask the AI. So you make less money. (Talking about specialized technical books here.)
Not necessarily.
It’s different.
Hm. I'm not sure I follow your logic.
Why should an AI have the same rights as a human?
How about then granting AI all other rights as well, for example allowing it to vote? (sarcasm)
We're not talking about rights, we're talking about illegal acts. If it's illegal for a machine to do it, how can it be ok for a human?
Just from a rational argumentation point of view. Clearly, if a law is written saying as much, then sure. But there is no copyright law like that yet.
But machines don't do things. People do things, and they use tools/machines to do those things more easily or efficiently.
The issue is certainly not so simple. But it seems to me, purely theoretically, that the rules don't necessarily have to be the same for living people and non-living machines.
Well - actually - it is pretty simple. For something to be illegal, there must be a law saying it's illegal. There are no laws distinguishing humans from machines in copyright law.
reading it after stealing it: gray area. producing & monetizing competing works devaluing the original is a problem
So is it a problem when humans produce and monetize competing works? My understanding is that there is quite an industry in humans reading books and synthesizing their points. Cliff's Notes, for example.
I did some quick googling, and most Cliff's Notes guides are on public-domain works, so no problem there; they've also paid to license content, and have been protected by fair use as parody
To Kill a Mockingbird, The Catcher in the Rye, Beloved, The Kite Runner, The Handmaid's Tale are all copyrighted works with a Cliff's Notes guide.
No one is asking human savants about what they read 1 million times per day.
Suppose they did, and some guy was filling stadiums regularly to hear him recite an entire audio book. That would probably get the attention of someone's lawyers.
I don't see your point. The problem is producing the copyrighted work, not processing it beforehand.
If it's illegal for AIs it should be illegal for humans, too. Is that really what you're arguing? It should be illegal for savants to read books?
I don't think anyone is arguing that the consumption is illegal. It's the reproduction that is illegal.
Read a book, that's fine. Write a book, that's fine. Read a book and then write a book that is 99.9% the same as the book that you read and sell it for profit without a license from the original author, that's infringement.
No, if you read the article, the point is in the training, not the reproduction.
That's what all these lawsuits are about - it's the training not the reproduction. I already agreed in my first comment that the reproduction is off limits.
In this case, it appears that Meta torrented illegal copies of the work to do the training. Obviously that's bad. But conflating that with training itself doesn't follow.
The point of these lawsuits is the piracy. My parent comment was about the general situation, not this specific article.
Pirating content is illegal, regardless of whether it is to train an LLM.
Usage of LLMs trained on unlicensed content (basically all of them) might or might not be illegal.
Using any method to reproduce a copyrighted work by using that original as input in a way that supplants the market value of the original is probably illegal.
At least that is my rudimentary understanding.
Well - maybe so. But the common belief is that training itself is a violation of copyright, no matter how it's done. That's the argument I'm countering here.
Training requires making copies. Even if Meta had purchased each work they'd have had to make copies of it to distribute around the training cluster.
Does it though? If they bought a copy for each machine?
If copyright law doesn't extend to the works being used for training, why should it extend to the model that is produced as a result? AI model creators have set up an ethical scenario where the right thing to do is ignore copyright laws when it comes to AI, which includes model use. It might never be legal, but it has become ethical to pirate models, distill them against ToS, etc.
I'm not sure I follow. Can you say it a different way?
>The problem is producing the copyrighted work, not processing it beforehand.
the distinction isn't particularly clear cut with an open source model. If it is able to reproduce copyright protected work with high fidelity such that the works produced would be derivative, that's like trying to get around laws against distribution of protected works by handing them to you in a zip file.
It's a kind of copyright washing to hand you the data as a binary blob and an algorithm to extract them out of it. That wouldn't really fly with any other technology
I tend to agree - but you assume that it would not be possible to create a model that can train on copyrighted work and only output text which would be considered fair use.
That seems very possible to me, and undermines the "training is copyright violation" argument. It's not the training, it's the output.
HN really loves the copyright lobby when it's against someone they hate, huh
The problem is people at large companies creating these AI models, wanting the freedom to copy artists' works when using it, but these large companies also want to keep copyright protection intact for their regular business activities. They want to eat the cake and have it too.

And they are arguing for essentially eliminating copyright for their specific purpose and convenience, when copyright has virtually never been loosened for the public's convenience, even when the exceptions the public asks for are often minor and laudable.

If these companies were to argue that copyright should be eliminated because of this new technology, I might not object. But now that they come and ask… no, they pretend to already have, a copyright exception for their specific use, I will happily turn around and use their own copyright maximalist arguments against them.
(Copied from a comment of mine written more than three years ago: <https://news.ycombinator.com/item?id=33582047>)
I take issue with the tense used in this framing. It's not 'infringed', it's 'infringing'; to say that it happened is wrong. It's happening, and happening continuously, in these models that are in use. To say a one-time payment settles it misses the whole scope of this theft.
Royalties are owed, and continuously owed, as these models are deployed and doing inference. How is it any different from paying a small pittance to someone every time a song is played?
Royalties for inference are unrealistic in a way that even royalties for training aren't.
The LLaMA models were released openly. Copies exist everywhere in the world. You aren't going to be able to charge someone for running `llama.cpp`; a court order ceases to have practical relevance at that point.
Inference might be unreasonable for a royalty agreement, but, in assessing damages, it is certainly relevant.
"I made enough copies for everyone" isn't a valid defense for copyright infringement.
These models can provide citations, so I don't see why they can't track royalties owed. I'm sure many here could help build this pipeline.
First, LLMs do not reliably cite works. They are not looking things up in a database and repeating them. I think this false idea occurs a lot in people who don't understand what LLMs are or how they work.
Second, royalties are not required to cite a source.
Can you imagine how disastrous it would be to everything from news reporting to scientific publishing if that was the case?
Yeah well then I want my robot running this crap locally in its brain so I can get it to farm my two acres and haul water for me and I'll unplug from the rest of this nonsense going forward lol.
... LLMs cannot reliably provide citations. If you ask for citations, and the model did not use a web search tool, then whatever "citations" you receive are unreliable. Please do not trust these models to be honest. Just because they can discuss a topic doesn't mean they "know" where the knowledge came from in the same way that you don't need to have studied physics to catch a ball.
If you steal a book and read it, should you have to pay every time you use the knowledge gained or recall parts of it from memory?
No. People are not LLMs. And even if some argue that they are mechanically similar, they are legally distinct.
If I charged people for the privilege of listening to me recite relevant parts of the book to them for profit? Yes. Depending on the copyright.
If I perform a song in public then yes, I should pay the creator every time I play it. I fail to see the difference here.
What if you steal a CD and then play it on your radio station each morning?
Even better, what if you transform that stolen CD into an MP3, so the data isn’t the same as a lossy process was used, then share the MP3 with the world as your own work?
I don’t get why the training process doesn’t count as any other form of transformation but then I’m not a lawyer.
I don't have strong opinions on Zuck needing to be punished for this, because I have friends and family doing the same thing, although perhaps not at the same scale. I myself do not download copyrighted content. I think "rules for thee, not for me" goes both ways.
How much revenue have your friends and family made from "doing the same thing"?
Some. In some cases they've "stolen" tens of thousands in content. Like I said, not at the same scale, but the same "crime" nonetheless.
I'd much rather prosecution focus on Zuck's more serious crimes against privacy and civilization as a whole. But maybe this is a small start?
"They then copied those stolen fruits"
How are these fruits "stolen" if the owners still have what was allegedly stolen?
Dowling v. United States, 473 U.S. 207 (1985): The Supreme Court ruled that the unauthorized sale of phonorecords of copyrighted musical compositions does not constitute "stolen, converted or taken by fraud" goods under the National Stolen Property Act
And even if, arguendo, sure, it's stolen. The purpose of copyright is "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries"
And you would be hard-pressed to prove that LLMs haven't advanced the arts and sciences, so at a bare minimum it's transformative, i.e., fair use.
I think you are confusing the idiom "stolen fruits" with an actual accusation of criminal theft. Aside from its use in this phrasing, neither "theft" nor "steal" appears anywhere else in the article.
The article references the complaint.