Not sure why he lost. He could have claimed to have run it through an AI that was only trained on the one picture.
“I will give you a picture; you will create an exact copy, accurate pixel for pixel, of the one I give you.”
He got disqualified on purpose in order to make a point.
You don't even need to train anything; just run it through img2img at a super low denoising strength and almost nothing will change.
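Something like this with the Hugging Face diffusers img2img pipeline, for example; purely a sketch, and the model ID, file names, and strength value are illustrative:

```python
# Pass the picture through img2img with a tiny denoising strength so the
# output is nearly identical to the input. Sketch only: model ID, file
# names, and the strength value are assumptions, not recommendations.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init = Image.open("entry.png").convert("RGB")

# `strength` controls how much noise is added before denoising; near zero
# means the model barely touches the original pixels.
result = pipe(
    prompt="a photograph",  # nearly irrelevant at this strength
    image=init,
    strength=0.05,
).images[0]
result.save("entry_through_ai.png")
```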
Training a diffusion model that only outputs one image with no difference is, I think, not possible. You could do image-to-image with a fixed seed, so you'd get a consistent result, and then pick the nearest result that's a nearly identical copy.
It very much is possible. It'll only ever output that one image and it's a huge waste of resources but it's very much possible.
It's very much possible, and indeed such a problem that it can happen by mistake if a large enough data set isn't used (see overfitting). A model trained to output just this one image will learn to do so, and over time it should learn to do it with near-perfect accuracy. The model would simply learn to ignore whatever arbitrary inputs you've given it.
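As a toy illustration of that overfitting point, here's a minimal PyTorch sketch (sizes, step count, and learning rate are made up): a tiny net trained against a single target image converges to emitting that image no matter what input it gets.

```python
# Overfit a tiny network on one target image: since the loss only rewards
# reproducing the target, the net learns to ignore its input entirely.
import torch
import torch.nn as nn

target = torch.rand(3 * 64 * 64)  # stand-in for the one training image
net = nn.Sequential(nn.Linear(16, 256), nn.ReLU(), nn.Linear(256, 3 * 64 * 64))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    z = torch.randn(16)  # arbitrary input: noise, "prompt", whatever
    loss = nn.functional.mse_loss(net(z), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Any input now maps to (approximately) the same image:
print(nn.functional.mse_loss(net(torch.randn(16)), target).item())  # ~0
```

The optimal solution under that loss is a constant output equal to the target, which is exactly the "ignore whatever arbitrary inputs" behaviour.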
That probably doesn't count as "AI" though... It's more a very bad form of compression (one where the model file may very well end up larger than the image).
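For a rough sense of how bad that compression is, count the weights of the toy net sketched above (numbers are illustrative, float32 assumed):

```python
# Storage for the memorizing net vs. the raw image it memorized.
n_params = 16 * 256 + 256 + 256 * (3 * 64 * 64) + (3 * 64 * 64)
print(n_params * 4 / 1024**2)  # weights: ~12 MB of float32
print(3 * 64 * 64 / 1024)      # raw 64x64 RGB image: 12 KB
```

Roughly a thousandfold expansion to "compress" one image.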