Training a diffusion model that only ever outputs one image (with slight differences) is, I think, not possible. What you could do instead is image-to-image with a fixed seed, so you get a consistent result every run, and then pick the output that is the nearest to an identical copy.
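For the image-to-image route, here is a minimal sketch using the Hugging Face diffusers img2img pipeline. The model id, file path, prompt, strength, and seed are all placeholder assumptions, not a recommendation of specific values:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load an img2img pipeline; the model id is just a common example.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The image you want near-identical copies of (placeholder path).
init = Image.open("reference.png").convert("RGB").resize((512, 512))

# Fixed seed -> same starting latents every run, so the output is consistent.
gen = torch.Generator("cuda").manual_seed(1234)

# Low strength keeps the result close to the input image.
out = pipe(
    prompt="a photo",  # placeholder prompt
    image=init,
    strength=0.2,
    generator=gen,
).images[0]
out.save("copy.png")
```

With the seed and all parameters fixed, the pipeline is deterministic, so you can rerun it and get the same near-copy each time.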
It very much is possible. It'll only ever output that one image, and it's a huge waste of resources, but it's very much possible.
It's very much possible, and in fact it's such a common problem that it can happen by mistake if a large enough dataset isn't used (see overfitting). A model trained to output just this one image will learn to do so, and over time it should learn to do it with essentially 100% accuracy. The model would simply learn to ignore whatever arbitrary inputs you've given it.
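To make the overfitting point concrete, here is a toy PyTorch sketch of a DDPM-style noise-prediction loop trained on a "dataset" of exactly one image. The tiny network, schedule, and placeholder tensor are illustrative assumptions, not a real architecture:

```python
import torch
import torch.nn as nn

# Toy denoiser: predicts the noise added to a 3x64x64 image.
# A real diffusion model would use a UNet with timestep embeddings;
# this small conv net is only a stand-in for illustration.
class TinyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x_t, t):
        # t is ignored by this toy net; a real model conditions on it.
        return self.net(x_t)

# "Dataset" of exactly one image (a random tensor as a placeholder).
x0 = torch.rand(1, 3, 64, 64)

# Linear beta schedule and cumulative alphas, as in DDPM.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(10_000):
    t = torch.randint(0, T, (1,))
    eps = torch.randn_like(x0)
    a_bar = alpha_bars[t].view(-1, 1, 1, 1)
    # Forward process: noise the single training image.
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps
    # Noise-prediction loss; with only one target, the model memorizes it.
    loss = ((model(x_t, t) - eps) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because every training step points the denoiser at the same single image, sampling will converge to that image no matter what noise or conditioning you feed in, which is exactly the overfitting described above.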