Why would you ask a bot to generate a stereotypical image and then be surprised when it generates a stereotypical image? If you give it a simplistic prompt, it will come up with a simplistic response.
So the LLM answers what's relevant according to stereotypes instead of what's relevant... in reality?
It just means there's a bias in the data that is probably being amplified during training.
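A toy sketch of what that amplification can look like, under purely hypothetical numbers (the 70/30 split and the "nurse gender" labels are made up for illustration, not taken from any real model or dataset): a model that learns a skewed distribution and then decodes greedily (always picking the most likely option) turns a 70/30 skew in the data into a 100/0 skew in its output.

```python
import random
from collections import Counter

# Hypothetical training data: 70% of "nurse" examples labeled female,
# 30% male. (Invented numbers, purely for illustration.)
random.seed(0)
training_data = ["female"] * 70 + ["male"] * 30

# "Training" here is just estimating the empirical label distribution.
counts = Counter(training_data)
total = sum(counts.values())
probs = {label: n / total for label, n in counts.items()}

# Sampling decoder: roughly reproduces the 70/30 skew of the data.
sampled = Counter(random.choices(list(probs), weights=probs.values(), k=1000))

# Greedy (mode-seeking) decoder: always picks the most likely label,
# so a 70/30 bias in the data becomes a 100/0 bias in the output.
greedy = Counter(max(probs, key=probs.get) for _ in range(1000))

print("data:   ", dict(probs))
print("sampled:", dict(sampled))
print("greedy: ", dict(greedy))
```

Real generators sit somewhere between these two extremes, but the mechanism is the same: likelihood-trained models decoded toward their mode exaggerate majority patterns in the data.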
It answers what's relevant according to its training.
Please remember what the A in AI stands for.