Hacker News

> If I had a photographic memory and I used it to replicate parts of GPLed software verbatim while erasing the license, I could not excuse it in court that I simply "learned from" the examples.

Right, because you would have done more than learn: you would have gone past learning and used that learning to reproduce the work.

It works exactly the same for a LLM. Training the model on content you have legal access to is fine. Afterwards, someone using that model to produce a replica of that content is engaged in copyright infringement.

You seem set on conflating the act of learning with the act of reproduction. You are allowed to learn from copyrighted works you have legal access to, you just aren't allowed to duplicate those works.





The problem is that it's not the user of the LLM doing the reproduction, the LLM provider is. The tokens the LLM is spitting out are coming from the LLM provider. It is the provider that is reproducing the code.

If someone hires me to write some code, and I give them GPLed code (without telling them it is GPLed), I'm the one who broke the license, not them.


> The problem is that it's not the user of the LLM doing the reproduction, the LLM provider is.

I don't think this is legally true. The law isn't fully settled here, but things seem to be moving towards the LLM user being the holder of the copyright of any work produced by that user prompting the LLM. It seems like this would also place the infringement onus on the user, not the provider.

> If someone hires me to write some code, and I give them GPLed code (without telling them it is GPLed), I'm the one who broke the license, not them.

If you produce code using a LLM, you (probably) own the copyright. If that code is already GPL'd, you would be the one engaged in infringement.


You seem set on conflating "training" an LLM with "learning" by a human.

LLMs don't "learn", but they _do_, in some cases, faithfully regurgitate what they have been trained on.

Legally, we call that "making a copy."

But don't take my word for it. There are plenty of lawsuits for you to follow on this subject.


> You seem set on conflating "training" an LLM with "learning" by a human.

"Learning" is an established word for this, happy to stick with "training" if that helps your comprehension.

> LLMs don't "learn" but they _do_ in some cases, faithfully regurgitate what they have been trained on.

> Legally, we call that "making a copy."

Yes, when you use a LLM to make a copy... that is making a copy.

When you train a LLM... that isn't making a copy, that is training. No copy is created until output is generated that contains a copy.


Everything which is able to learn is also alive, and we don't want to start treating digital devices and software as living beings.

If we are saying that the LLM learns things and then made the copy, then the LLM committed the crime and should receive the legal punishment and be sent to jail, banned from society until it is deemed safe to return. It is not as if the installed copy is some child spawned from digital DNA, such that the parent continues to roam while the child gets sent to jail. If we are to treat it like a living being that learns things, then every copy and every version is part of the same individual, and thus the whole individual gets sent to jail. No copy is created when it is installed on a new device.


> we don't want to start to treat digital device and software as living beings.

Right, because then we have to decide at what point our use of AI becomes slavery.


[flagged]


[flagged]


> "Learning" was used by the person I responded too.

Not in the same sense.

> If you had read my comment with any care you would have realized I used the words "training" and "learning" specifically and carefully.

This is completely belied by "It works exactly the same for a LLM."

> That doesn't count as a "copy" since it isn't human-discernable.

That's not the reason it _might not_ count as a copy (the law is still not settled on this, and all the court cases have lots of caveats in the rulings), but thanks for playing.

> If you don't like being called out for lack of comprehension, then don't needlessly impose a semantic interjection

If you want to not appear mendacious, then don't claim equivalence between human learning and machine training.

> It is pretty clear this is a transformative use and so far the courts have agreed

In weak cases that didn't show exact outputs from the LLM, yes. In any case, "transformative" does not automagically transform into fair use, although it is one considered factor.

> Very mature.

Hilarious, coming from the one who wrote "if it helps your comprehension."

You must be one of those assholes who think it's OK to say mean things if you use the right words.

Bless your heart.


You both broke the site guidelines badly in this thread. Could you please review https://news.ycombinator.com/newsguidelines.html and stick to the rules? We ban accounts that won't, and I don't want to ban either of you.

> This is completely belied by "It works exactly the same for a LLM."

I specifically used the word "training" in the sentence afterwards. "It" clearly refers to the prior sentence, which explains that infringement happens when the copy is created, not when the original is memorized/learned/trained.

> If you want to not appear mendacious, then don't claim equivalence between human learning and machine training.

I never claimed that. I already clarified that with my previous comment. Instead of bothering to read and understand you have continued to call names.

> Hilarious, coming from the one who wrote "if it helps your comprehension."

You seemed confused, you still seem confused. If you think this genuine (and slightly snarky) offer to use terms that sidestep your pointless semantic nitpick is "being an asshole"... then you need to get some more real world experience.


You both broke the site guidelines badly in this thread. Could you please review https://news.ycombinator.com/newsguidelines.html and stick to the rules? We ban accounts that won't, and I don't want to ban either of you.

Sorry, and thanks.

I know moderation is a tough gig.


I'm polite in response to being repeatedly called names, and this is your response?

If you think my behavior here was truly ban-worthy then do it, because I don't see anything I would change except for engaging at all.


This is the sort of thing I was referring to:

> Instead of bothering to read and understand you have continued to call names.

> You seemed confused, you still seem confused

> your pointless semantic nitpick

> you need to get some more real world experience

I wouldn't personally call that being polite, but whatever we call it, it's certainly against HN's rules, and that's what matters.

Edit: This may or may not be helpful (probably not!) but I wonder if you might be experiencing the "objects in the mirror are closer than they appear" phenomenon that shows up pretty often on the internet - that is, we tend to underestimate the provocation in our own comments, and overestimate the provocation in others' comments, which in the end produces quite a skew (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...).



