Should there be a clause for AI?

Aaron Wolf wolftune at riseup.net
Sun Jul 13 15:25:55 UTC 2025


First, I want to clear up a mistake in this thread:

Vasileios wrote:

> Normally, when you use any software under a copyleft license, you
> must disclose any modifications, and release them under said license.

Disclosing modifications and releasing them under the same license
applies only when *conveying* the software to others, not merely when
*using* it. Perhaps this was understood, perhaps not, but it is a
widespread misunderstanding, so please be careful about it. Anyone can
use copyleft software and even modify it *without* any copyleft
requirements being triggered. Modifications can be made and kept
private. They do not need to be disclosed to anyone if the software is
used only privately, and they need to be disclosed only to those who
receive the software when it is shared.

Anyway, on the AI question: the dilemma is about maintaining practical
software freedom. There's no point in developing copyleft-next if it
does nothing to actually support software freedom in practice.

Let's imagine that we succeed at blocking legal AI training on
copyleft-next code. Maybe there's then some incentive for programmers
to use copyleft-next more widely. If a sufficient amount of code gets
a copyleft-next license, that could give the whole copyleft-next
ecosystem an advantage over (legal) AIs. But if AIs can be trained
adequately without the copyleft-next code, then people will eventually
be able to trivially reverse-engineer any copyleft-next program with
AI.

Imagine a future where, each time someone encounters a copyleft-next
program and doesn't want to accept the copyleft terms, they simply go
to an AI, describe the functionality of the copyleft-next program, and
get some *different* code that is effective enough to replace it. That
scenario would make copyleft-next pretty useless.

Is there any realistic possibility of keeping enough code out of AI
training that it couldn't do what I'm envisioning? I have a hard time
believing it. And in this scenario, what's the point of copyleft-next?
If anyone who wants to strip away software freedom can get AI
assistance in doing so, then we're left with software freedom only
from those who already care to maintain it… and then there's no need
for copyleft.

Are we banking on the idea that AI-generated code will remain more
buggy or otherwise unreliable than human-written copyleft code?

Or is the idea that we should indeed encourage AI training with
copyleft-next code, as a hack to push for more public freedom in the
AI itself? That is, we are concerned that a free society must not let
a few companies or governments have exclusive control of AI, and so we
see copyright licensing as a means to legally compel AI weights and so
on to be released to the public? This scenario is not about excluding
copyleft-next from training but about getting AIs to be more free. But
in practice, powerful companies that want exclusive control would
likely exclude copyleft-next code if they felt it would compel them to
be more free with their AIs than they otherwise want to be, right?

Note that without any AI clauses, I *think* copyleft would still apply
to the use of AI to make simple code modifications. So, imagine
someone uses an AI to add a minor feature to a copyleft-next program
and publishes the update. This should be no different than if a human
programmer had made the changes, right? No extra clause is needed for
that case.

What is the whole goal of copyleft-next within the context of this
brave-new-AI-world we're facing? Where does it fit in?

Aaron

On 7/12/25 11:56, Kuno Woudt wrote:
> On Sat, Jul 12, 2025, at 12:32 PM, Vasileios Valatsos wrote:
>> On 12/7/25 17:49, Richard Fontana wrote:
>>   > But that's not because of some special legal situation, and it's really
>>   > no different from other modes of copyright infringement. If I write a
>>   > novel, and it's used to train a model (let's assume I don't have a
>>   > copyright infringement claim based on the act of training, an issue
>>   > that has been raised in a number of current lawsuits in the US), and
>>   > the model can be shown to produce output that's substantially similar
>>   > to my novel, I might have a copyright infringement claim against
>>   > someone in connection with the use of that model.
>>
>> Yes, I fully agree. My point is that with the current state of things,
>> it is very problematic to figure out *who* that someone is.
>>
>> It obviously can't be the end user, because they have no control over
>> the stochastic output of the model, and they can't possibly reference
>> the output and compare it to figure out whether it may violate any
>> copyright/copyleft.
> Why would it not be the end user? They have control over whether they
> publish the output or not.  I don't think copyright law cares about the
> practicality of a user determining whether their tools generated copyrighted
> output.
>
> If I manage to accidentally or on purpose convince a chatbot to output
> substantial chunks of a literary work -- I'd expect that publishing that output
> would be copyright infringement regardless of whether I know that what
> I'm publishing is a pre-existing copyrighted work.