Should there be a clause for AI?
Vasileios Valatsos
me at aethrvmn.gr
Tue Jul 15 19:32:51 UTC 2025
There is a great deal that I want to expand on; first and foremost,
someone possibly (almost certainly) smarter than me has already put my
argument much more clearly than I could hope to achieve in an email or
two:
https://www.youtube.com/watch?v=CdKxgT1o864
The essential point of this discussion is that I, and others, feel that
the current definitions of FOSS are not enough to encapsulate what AI
does as software. At the same time, I understand any hesitation about
the extremely overbearing and uninvited work that the task would
entail, so in all earnestness I think this discussion should be
abandoned; not because a solution is unattainable, but because it is
beyond the scope of what copyleft-next should cover, especially at such
a volatile time, when there are much more urgent, obvious, and
tractable problems (like the RHEL source code abuse, and the "sending
source on floppy disks twenty years after the fact" situation, to name
a few).
Now that that's out of the way, I really *really* want to respond to
some things said in this thread.
On Sun, Jul 13, 2025 at 5:30 PM Aaron Wolf <wolftune at riseup.net> wrote:
> I agree with the bulk of these points. https://ai-2027.com/ is a good
> reference for the premise that AI could reach levels categorically
> different from today within what is a short-term in human time scale.
Please don't fall for marketing hype. OpenAI has been saying "AGI next
week" since 2022. Scaling is *not* the solution. I understand that they
are *experts* and that "their opinion has weight", but (a) appeal to
authority is a logical fallacy, and (b) I am also an AI researcher
(apart from my research work, I maintain GPLv3-or-later AI libraries),
so I appeal to myself to tell you not to fall for the hype. It's all
marketing, designed to get people talking about the field so that VC
funding keeps flowing without question. Everybody in the field is aware
of the insurmountable limitations of scaling.
On 14/7/25 01:21, Richard Fontana wrote:
> I've seen that before. That seems to give us very little time to have
> a copyleft-next that has any chance of accomplishing anything before
> the robots take over. :-)
NO. The USA is not the only legal jurisdiction in the world, and
copyleft has won cases outside of it. Even if citizens of the USA don't
benefit (which they will, because this license closes loopholes),
citizens of countries whose software and copyright laws are as yet
undefined, much of the global South for example, will benefit
*immensely*. Just consider the fact that the notion of fair use exists
mostly within the USA.
In any case, I think copyleft-next is such a badly needed update to
existing copyleft implementations that I don't care about an AI
license, especially since others are already attempting to fix that
exact problem, which makes a dual-licensing solution feasible.
Apart from those points:
On Sun, Jul 13, 2025 at 3:25 PM Aaron Wolf <wolftune at riseup.net> wrote:
> Disclosing modifications and releasing them under the same license
> applies only when *conveying* the software to others, not just when
> *using*. Perhaps this was understood, perhaps not, but this is a
> widespread misunderstanding, so please be careful about that.
You are correct, I should've been more careful. I used the phrase I did
because AI/ML settings are more akin to the AGPL, where use over a
network counts as distribution for the purposes of the copyleft
clauses.
> That scenario would make copyleft-next pretty useless.
That scenario makes any copyleft license useless, because I can
generate the same software via LLMs and keep it closed source. Worse,
current copyleft licenses appear to be useless because they make no
consideration for LLM outputs. I can, right now, feed an LLM any source
code line by line, with the instruction to repeat the code. The output
is (arguably) non-copyrightable, so does copyleft still hold? I don't
know.
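To make the scenario concrete, here is a minimal sketch of what I mean,
assuming the `openai` Python client against an OpenAI-compatible
endpoint; the model name, prompts, and the `launder` helper are all
illustrative, not a recipe:

    # Hypothetical sketch: "laundering" copylefted source through an LLM.
    # Assumes the `openai` Python client and an OpenAI-compatible endpoint;
    # model name and prompts are illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def launder(source_path: str) -> str:
        """Feed a copylefted file line by line, asking the model to echo it."""
        laundered_lines = []
        with open(source_path) as f:
            for line in f:
                response = client.chat.completions.create(
                    model="gpt-4o",  # illustrative model name
                    messages=[
                        {"role": "system",
                         "content": "Repeat the following line of code verbatim."},
                        {"role": "user", "content": line.rstrip("\n")},
                    ],
                )
                laundered_lines.append(response.choices[0].message.content)
        return "\n".join(laundered_lines)

    # The result is byte-for-byte the original program, yet it nominally
    # came out of a model, whose output may not attract copyright at all.

The point is that nothing in that loop depends on the license of the
input; the model neither knows nor cares that the file was copylefted.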
> If anyone who wants to remove software-freedom can get AI assistance
> in doing so, then we're left with software-freedom only from those who
> already care to maintain it… and then there's no need for copyleft.
Exactly my point.
On 7/12/25 11:56, Kuno Woudt wrote:
> If I manage to accidentally or on purpose convince a chatbot to output
> substantial chunks of a literary work -- I'd expect that publishing
> that output would be copyright infringement regardless of whether I
> know that what I'm publishing is a pre-existing copyrighted work.
My hesitation there has more to do with the fact that most people don't
know about, or check, software licenses, especially vibe coders. I
think of them as akin to people who bought a counterfeit "brand"
product, like a fake iPhone, or a CD containing a pirated movie file.
On 14/7/25 01:21, Richard Fontana wrote:
> It sounds like you're envisioning that a "copyleft-next for AI" could
> devise some sort of "clever hack" or jujitsu move or whatever to make
> models themselves to be free, or more free. This is probably worth
> discussing, but it's probably not going to take the form of a license
> that says "if you want to train your model with this copyleft-next
> code, you have to do these things to make your resulting model
> copyleft".
Honestly, the more I think about it, the more I understand both the
attainability and the folly of such an endeavour. However, I think a
kind of strong ShareAlike clause would be more reasonable. Maybe if
copyleft-next code is inside the training data, the training data
themselves get infected? At the same time there is no distribution of
the data; but then again, ShareAlike doesn't require distribution of
the data. And while that isn't strictly about software, it is about
copyleft, so maybe it could prove a better fit?
In any case, as I originally stated, I consider this a matter for a
later time. I would much rather have a copyleft-next license without
any such clause, one which can be amended later, than no copyleft-next
license and endless "fruitful" discussions about the approach.
- Vasileios Valatsos