Visual artists fight back against AI companies for repurposing their work

L4sBot@lemmy.world [mod] to Technology@lemmy.world – 191 points
apnews.com

Three visual artists are suing artificial intelligence image-generators to protect their copyrights and careers.


The brute forcing doesn't happen when you generate the art. It happens when you train the model.

You fiddle with the numbers until it only produces results that "look right". That doesn't make it not brute forcing.

Human inspiration and creativity meanwhile is an intuitive process. And we understand why 2+2 is four.

Writing a piece of code that takes two values and sums them, does not mean the code comprehends math.
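The summing example can be made concrete; a minimal sketch (the function name is just for illustration):

```python
def add(a, b):
    # Produces correct sums for any two numbers, yet encodes no concept
    # of quantity or arithmetic: it simply delegates to the + operator.
    return a + b

print(add(2, 2))  # prints 4
```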

In the same way, training a model to generate sound or visuals, does not mean it understands the human experience.

As for current models generating different results for the same prompt... no. They don't. They generate variations, but the same prompt won't get you Dalí in one iteration, then Monet in the next.

The brute forcing doesn't happen when you generate the art. It happens when you train the model.

So it's the same as a human - they also generate art until they get something that "looks right" during training. How is it different when an AI does it?

But you'll have to explain where this brute forcing happens. What are the inputs and outputs of the process? Because the NN doesn't generate all possible outputs until the correct one is found, which is what brute forcing is. Maybe you could argue that GANs are kinda doing this, but it's still a very much directed process, which is entirely different from real brute forcing.
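The distinction can be sketched with a toy optimisation problem (purely illustrative, not how real models train): brute force enumerates every candidate, while gradient descent follows the loss downhill in a directed way.

```python
def loss(x):
    # Toy objective with its minimum at x = 7.
    return (x - 7.0) ** 2

def brute_force(candidates):
    # Brute forcing: evaluate every candidate, keep the best.
    return min(candidates, key=loss)

def gradient_descent(x, lr=0.1, steps=100):
    # Directed search: step against the gradient of the loss.
    for _ in range(steps):
        grad = 2 * (x - 7.0)  # d/dx (x - 7)^2
        x -= lr * grad
    return x

print(brute_force(range(-1000, 1000)))   # 7
print(round(gradient_descent(0.0), 3))   # 7.0
```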

Human inspiration and creativity meanwhile is an intuitive process. And we understand why 2+2 is four.

You're using more words without defining them.

Writing a piece of code that takes two values and sums them, does not mean the code comprehends math.

But we're not writing code to generate art. We're writing code to train a model to generate art. As I've already mentioned, NNs provably can build an accurate model of whatever you're training them on - how is this not a form of comprehension?

In the same way, training a model to generate sound or visuals, does not mean it understands the human experience.

Please prove you need to understand the human experience to be able to generate meaningful art.

As for current models generating different results for the same prompt... no. They don't. They generate variations, but the same prompt won't get you Dalí in one iteration, then Monet in the next.

Of course they can, depending on your prompt and temperature.
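For reference, "temperature" rescales the model's next-token scores before sampling; a minimal sketch with hypothetical logits (real vocabularies are vastly larger):

```python
import math
import random

def sample(logits, temperature):
    # Divide logits by the temperature, apply softmax, draw one index.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    r = random.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    return len(weights) - 1

logits = [2.0, 1.0, 0.1]
# Near-zero temperature almost always picks the top-scoring token, so
# outputs barely vary; a high temperature flattens the distribution, so
# the same prompt yields noticeably different results across runs.
print(sample(logits, 0.01))
print(sample(logits, 10.0))
```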

You are drawing parallels where I don't think there are any, and are asking me to prove things I consider self-evident.

I'm no longer interested in elaborating, and I don't think you'd understand me if I did.

This is what it always comes down to - you have this fuzzy feeling that AI art is not real art, but the deeper you dig, the harder it gets to draw a real distinction. This is because your arguments aren't rooted in actual definitions, so instead of clearly explaining the difference between A and B, you handwave it away due to C, which you also don't explain.

I once held positions similar to yours, but after analysing the topic much, much more deeply I arrived at my current positions. I can clearly answer all the questions I posed to you. You should consider whether your not being able to means anything about your own position.

I am able to answer your questions for myself. I have lost interest in doing so for you.

But can you do so from the ground up, without handwaving towards the next unexplained reason? That's what you've done here so far.

Yes.

I once held a view similar to the one you present now. I would consider my current opinion further advanced, like you do yours.

You ask for elaboration and verbal definitions, I've been concise because I do not wish to spend time on this.

It is clear we cannot proceed further without me doing so. I have decided I won't.

Bummer. You could have been the first to bring an actual argument for your position :)

Not today. I have too much else to do.

And it's not like my being concise makes my argument absent.

The issue isn't you being concise, it's throwing around words that don't have a clear definition, and expecting your definition to be broadly shared. You keep referring to understanding, and yet objective evidence towards understanding is only met with "but it's not creative".

Are you suggesting there is valid evidence modern ML models are capable of understanding?

I don't see how that could be true for any definition of the word.

As I've shared 3 times already: Yes, there is valid evidence that modern ML models are capable of understanding. Why do I have to repeat it a fourth time?

I don’t see how that could be true for any definition of the word.

Then explain to me how it isn't true given the evidence:

Language models show a surprising range of capabilities, but the source of their apparent competence is unclear. Do these networks just memorize a collection of surface statistics, or do they rely on internal representations of the process that generates the sequences they see? We investigate this question by applying a variant of the GPT model to the task of predicting legal moves in a simple board game, Othello. Although the network has no a priori knowledge of the game or its rules, we uncover evidence of an emergent nonlinear internal representation of the board state. Interventional experiments indicate this representation can be used to control the output of the network and create "latent saliency maps" that can help explain predictions in human terms.

https://arxiv.org/abs/2210.13382

I don't see how an emergent nonlinear internal representation of the board state is anything besides "understanding" it.
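The paper's probing method can be illustrated with a toy stand-in (synthetic "activations" and a simple perceptron probe; the actual work probes a GPT's hidden states with nonlinear probes): if a classifier trained on activations can read off a property, that property is represented internally.

```python
# Synthetic data, not the paper's setup: each "activation" is 3-d;
# coordinate 0 weakly encodes a board property (label 1/0),
# coordinates 1-2 are distractors.
data = [
    ([0.9, 0.3, -0.5], 1), ([1.1, -0.2, 0.4], 1), ([0.8, 0.5, 0.1], 1),
    ([0.1, 0.4, -0.3], 0), ([-0.1, -0.5, 0.2], 0), ([0.2, 0.1, 0.6], 0),
]

# Train a perceptron probe on the activations.
w, b = [0.0, 0.0, 0.0], 0.0
for _ in range(100):
    for x, y in data:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        if pred != y:
            w = [wi + 0.5 * (y - pred) * xi for wi, xi in zip(w, x)]
            b += 0.5 * (y - pred)

acc = sum(
    (1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) == y
    for x, y in data
) / len(data)
print(acc)  # 1.0: the property is linearly decodable from the activations
```

High probe accuracy is the evidence the paper leans on: the information is present in the network's internal state, not merely in its outputs.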

Cool. But this is still stuff that has a "right" answer. Math. Math in the form of game rules, but still math.

I have seen no evidence that ML models can comprehend the abstract. To know, or more accurately, model, the human experience. It's not even clear that, given a conscious entity, it is possible to communicate about being human to something non-human.

I am amazed, but not surprised, that you can explain a "system" to an LLM. However, doing the same for a concept, or human emotion, is not something I think is possible.
