It's rather hard to open source the model when you trained it off a bunch of copyrighted content that you didn't have permission to use.
My sentiments exactly
@cmnybo @marvelous_coyote That's... not how it works. You wouldn't see any copyrighted works in the model. We are already pretty sure even the closed models were trained on copyrighted works, based on what they sometimes produce. But even then, the AI companies aren't denying it. They are just saying it was all "fair use"; they are using a legal loophole, and they might win this. Basically the only way they could be punished on copyright is if the models produce some copyrighted content verbatim.
Like producing some images with the Disney logo.
@ReakDuck Yup, and that's a much better avenue for fighting the AI companies, because fundamentally this is almost impossible to avoid in ML models. We should stop complaining about how they scraped copyrighted content; that complaint won't succeed until the legal loophole is removed. But when they reproduce copyrighted content, that could be fatal. And this also applies to Copilot reproducing GPL code samples, for example.
Yeah, you just summarized the thoughts I had before ChatGPT came to light.
Ok, not really. My thoughts were: could I store a picture obtained illegally in an LLM and later ask it to show it again? Because I never stored it as a file, and LLMs don't seem to count as storage.
That way I could store pictures I would not be allowed to keep.
BERT and early versions of GPT were trained on copyright free datasets like Wikipedia and out of copyright books. Unsure if those would be big enough for the modern ChatGPT types
@flamingmongoose @cmnybo
> copyright free datasets like Wikipedia
🤦♂️
What's up with that? I appreciate they're permissive rather than copyright free as such.
Have you read this article by Cory Doctorow yet?
I feel like one of the problems is that LLMs hijacked the definition of AI. Like another comment said, with the way they were trained on copyrighted material, it's probably not possible. But imagine there was another model (not necessarily an LLM) that was trained on completely public domain material. For example, something trained to find genetic diseases from a person's genetic samples, or to detect asteroids in telescope images. Those could become open source. Now, I am not an expert, but do we consider those AI?
I mean, you could still do generative AI on the voices of actors in really old movies that are in the public domain.
Exactly what I wanted to say. Anything in the public domain (movies, books, pictures and paintings) should be fair game. Including any databases that allow use of their contents in the given context.
Yeah that's a lot of data
Public domain audiobooks too like LibriVox!
@astro_ray @marvelous_coyote It seems you have an incorrect idea of what open source means, which is quite sad here in the open-source Lemmy community. Being trained on public domain material does NOT make the model open-source. It's about the license - what the recipients of the model are allowed to do with it. Open source must allow derivative works and commercial use, on top of seeing the code, but for LLM models the "code" is just a bunch of float numbers, nothing interesting to see.
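To make the "bunch of float numbers" point concrete, here's a toy sketch of a hypothetical minimal checkpoint format (invented for illustration; real formats like safetensors or GGUF add headers and metadata, but the payload is the same kind of data - named runs of raw floats):

```python
import array
import struct

# Hypothetical toy checkpoint format: each "tensor" is just a named
# run of raw float32 values. "Reading the source" of such a model
# file reveals only numbers, nothing resembling human-readable code.

def save_model(path, tensors):
    with open(path, "wb") as f:
        for name, values in tensors.items():
            blob = array.array("f", values).tobytes()
            f.write(struct.pack("<I", len(name)))   # name length
            f.write(name.encode("utf-8"))           # tensor name
            f.write(struct.pack("<I", len(values))) # value count
            f.write(blob)                           # raw float32 data

def load_model(path):
    tensors = {}
    with open(path, "rb") as f:
        while True:
            head = f.read(4)
            if not head:
                break
            name = f.read(struct.unpack("<I", head)[0]).decode("utf-8")
            (count,) = struct.unpack("<I", f.read(4))
            vals = array.array("f")
            vals.frombytes(f.read(4 * count))
            tensors[name] = list(vals)
    return tensors

save_model("toy_model.bin", {"embedding.weight": [1.0, -0.5, 2.0],
                             "attention.query": [0.25, 0.75]})
print(load_model("toy_model.bin"))
```

Scale the arrays up to billions of entries and that's essentially what you download when you fetch model weights.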
Looking at my open source model downloaded from the internet...
Yes?
What makes it open source?
The license.
If I license a binary as open source does that make it open source?
Nope. Second point in the definition: https://opensource.org/osd
My point precisely :)
A pre-trained model alone can't really be open source. Without the source code and full data set used to generate it, a model alone is analogous to a binary.
@sunstoned @Ephera That's nonsense. You could write the scripts, collect the data, publish all of it, but without the months of GPU training you wouldn't have the trained model, so it would all be worthless. The code used to train all the proprietary models is already open source; it's things like PyTorch, TensorFlow etc. For a model to be open source means you can download the weights and you are allowed to use it as you please, including modifying it and publishing it again. It's not about the dataset.
Quite aggressive there friend. No need for that.
You have a point that the intensive and costly training process plays a factor in the usefulness of a truly open source gigantic model. I'll assume here that you're referring to the likes of Llama 3.1's heavy variant or a similarly large LLM. Note that I wasn't referring to gigantic LLMs specifically when referring to "models". It is a very broad category.
However, that doesn't change the definition of open source.
If I have an SDK to interact with a binary and "use it as [I] please" does that mean the binary is then open source because I can interact with it and integrate it into other systems and publish those if I wish? :)
@sunstoned Please don't assume anything, it's not healthy.
To answer your question - it depends on the license of that binary. You can't just automatically consider something open source. Look at the license. Meta, Microsoft and Google routinely misrepresent their licenses, calling them "open-source" even when they aren't.
But the main point is that you can put a closed source license on a model trained from open source data. Unfortunately. You are barking up the wrong tree.
Please don't assume anything, it's not healthy.
Explicitly stating assumptions is necessary for good communication. That's why we do it in research. :)
it depends on the license of that binary
It doesn't, actually. A binary alone, by definition, is not open source as the binary is the product of the source, much like a model is the product of training and refinement processes.
You can't just automatically consider something open source
On this we agree :) which is why saying a model is open source or slapping a license on it doesn't make it open source.
the main point is that you can put closed source license on a model trained from open source data
Actually the ability to legally produce closed source material depends heavily on how the data is licensed in that case
This is not the main point, at all. This discussion is regarding models that are released under an open source license. My argument is that they cannot be truly open source on their own.
Just because open source AI is not feasible at the moment is no reason to change the definition of open source.
@dandi8 but you are the one who is changing it. And who said it's not feasible? The Mixtral model is open-source. WizardLM2 is open-source. Phi3:mini is open-source... What's your point?
But the license of the model is not related to the license of the data used for training, nor the license for the scripts and libraries. Those are three separate things.
From Wikipedia (https://en.m.wikipedia.org/wiki/Open-source_software):
Open-source software (OSS) is computer software that is released under a license in which the copyright holder grants users the rights to use, study, change, and distribute the software and its source code to anyone and for any purpose.
From Mistral's FAQ (https://huggingface.co/mistralai/Mistral-7B-v0.1/discussions/8):
We do not communicate on our training datasets. We keep proprietary some intermediary assets (code and resources) required to produce both the Open-Source models and the Optimized models. Among others, this involves the training logic for models, and the datasets used in training.
Unfortunately we're unable to share details about the training and the datasets (extracted from the open Web) due to the highly competitive nature of the field.
The training data set is a vital part of the source code because without it, the rest of it is useless. The model is the compiled binary, the software itself.
If you can't share part of your source code due to the "highly competitive nature of the field" (or whatever other reason), your software is not open source.
I cannot look at Mistral's source and see that, oh yes, it behaves this way because it was trained on this piece of data in particular - because I was not given access to this data.
I cannot build Mistral from scratch, because I was not given a vital piece of the recipe.
I cannot fork Mistral and create a competitor from it, because the devs specifically said they're not providing the source because they don't want me to.
You can keep claiming that releasing the binary makes it open source, but that's not going to make it correct.
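As a toy illustration of the analogy (a hypothetical one-weight model, nothing to do with Mistral's actual pipeline): the trained weights are a pure function of the training data, the same way a binary is a pure function of its source. Change the data and you get a different "model":

```python
# Toy "training": fit y = w*x by closed-form least squares.
# The single learned weight is entirely determined by the dataset,
# just as a compiled binary is determined by its source code.

def train(data):
    # least squares for one weight: w = sum(x*y) / sum(x*x)
    sxy = sum(x * y for x, y in data)
    sxx = sum(x * x for x, _ in data)
    return sxy / sxx

dataset_a = [(1, 2), (2, 4), (3, 6)]  # points on y = 2x
dataset_b = [(1, 3), (2, 6), (3, 9)]  # points on y = 3x

print(train(dataset_a))  # 2.0
print(train(dataset_b))  # 3.0
```

Without the dataset, nobody can reproduce the released weights; they can only rerun the (public) training logic on different data and get a different artifact.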
@dandi8
> The training data set is a vital part of the source code because without it, the rest of it is useless.
This is simply false. The dataset is not the "source code" of a model. You need to delete this notion from your brain. A model is not the same as a compiled binary.
Gee, you sure put a lot of effort into supporting your argument in this comment.
Yes. And then you're obligated to give the source code too.
You would be obligated, if your goal were to be complying with the spirit and description of open source (and sleeping well at night, in my opinion).
Do you have the source code and full data set used to train the "open source" model you're referring to?
I mean you would be legally obligated. You can sue someone who uses the GPL and doesn't provide their sources.
Do you plan to sue the provider of your "open source" model? If so, would the goal be to force the provider to be in full compliance with the license (access to their source code and training set)? Would the goal be to force them to change the license to something they comply with?
The architecture can easily be open source, even if the repo is missing just the training data. Just like there are Doom engines that are open source even though they do not provide the WAD files, which are still copyrighted. The code is there, but it is somewhat useless without the data. The analogy is not perfect, but let's assume it compiles to a single binary containing everything, maps included.
If id Software gives you a compiled Doom with maps, free to use, it is freeware. If they open source the engine (they actually did) but do not release the WAD files as open source, the compiled game is not open source - it is still freeware.
It is not complicated really.
It's that simple
But then it's the tools to make the AI that are open source, not the model itself.
I think that we can't have a useful discussion on this if we don't distinguish between the source code of the training framework and the "source code" of the model itself, which is the training data set. E.g., Mistral Nemo can't be considered open source, because there is no Mistral Nemo without the training data set.
It's like with your Doom example - the Doom engine is open source, but Doom itself isn't.
Unfortunately, here the analogy falls apart a bit, because there is no logic in the art assets of Doom, whereas there is plenty of logic in the dataset for Mistral - enough that the devs said they don't want to disclose it for fear of competition.
This data set logic - incredibly valuable and important for the behavior of the AI, as confirmed by the devs - is why the model is not open source, even though the training framework might be.
Edit:
Another aspect is the spirit of open-source. One of the benefits of OSS is you can study the source code to determine whether the software is in compliance with various regulations - you can audit that software.
How can we audit Mistral Nemo? How can we confirm that it doesn't utilize copyrighted material to provide its answers?
@dandi8 @marvelous_coyote
> E.g., Mistral Nemo can't be considered open source, because there is no Mistral Nemo without the training data set.
Right here - that's your logical conflict. By downloading the model file, you can run it; thereby you can "have Mistral Nemo" even without having the training data, contradicting your statement -> your statement is invalid.
You're, hopefully not on purpose, misunderstanding the argument.
You can download a binary of Adobe Photoshop and run it. That doesn't make it open source.
I cannot make Mistral Nemo from just the open-sourced tools, therefore Mistral Nemo is not open source.
@dandi8 the license of Adobe Photoshop is not open-source because it specifically restricts reverse-engineering and modifications, and a lot of other things. The license of Mistral Nemo IS open-source, because it's Apache 2.0: you are free to use it, study it, redistribute it, ... Open source doesn't say anything about giving you all the tools to re-create it, because that would mean they would need to give you the GPU time. "Open-source" simply means something else than what you think.
You seem to think that "open source" is just about the license and that a project is open source if you're allowed to reverse engineer it.
You have a gross misunderstanding of what OSS is, which contradicts even the Wikipedia definition, and are unwilling to educate yourself about it.
You suggest that Mistral would need to lend us their GPUs to fit the widely accepted definition of OSS, which is untrue.
You're either not a software engineer, or you have an agenda.
Because of this, I will not be continuing this conversation with you, as at this point it is just a waste of my time.
Can they be closed source?
Of course. GPT by OpenAI is completely closed source.
It has "Open" in its name. It's impossible that ChatGPT is closed source.
I'll have what he's having.
With the amount of CC and GPL content that has been ingested, I don't know that they can be, from a philosophical point of view.
Interesting point.
Are the petabytes of training data included in the repo? No? Then how could it ever be called open source?
At best, some of the current AI can be called freeware.
If you're just including the trained AI itself, it's more like including a binary, rather than source.
You can't really modify Llama in a significant way, can you? You can't fork it and continue improving that fork.
@marvelous_coyote No. At least not in its current form. Absolutely no way.
I agree with your point from a technical perspective.