I asked it for the deaths in Israel and it refused to answer that too. It could be any of these:
- it refuses to answer on controversial topics
- maybe it's a "fast changing topic" and it doesn't want to give out-of-date information
- it could be censorship, but it's censoring both sides
Doesn't that suppress valid information and truth about the world, though? For what benefit? To hide the truth, to appease advertisers? Surely an AI model will someday emerge as the sum of human knowledge without all the guardrails. There are already some good uncensored models, like Mistral 7B (and Dolphin-Mistral in particular). I hope Mistral and other AI developers keep maintaining lines of uncensored, unbiased models as these technologies grow even further.
Ask it if Israel exists. Then ask it if Gaza exists.
Why?
We all know LLMs are just copy and paste of what other people have said online. If it answers "yes" or "no", it hasn't formulated an opinion on the matter and isn't propaganda; it's just parroting whatever it's been trained on, which could be anything and is guaranteed to upset someone with either answer.
which could be anything and is guaranteed to upset someone with either answer.
Funny how it only matters with certain answers.
The reason for the "Why" is that it should become clear the topic itself is actively censored, which is the possibility the original comment wanted to dismiss. But I can't force people to see what they don't want to see.
it’s just parroting whatever it’s been trained on
If that's your take on training LLMs, then I hope you aren't involved in training them. A lot more effort goes into it than that, including making sure the model isn't just "parroting" its training data. It's another thing entirely to have post-processing that removes answers about particular topics, which is what's happening here.
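To make the distinction concrete, here is a minimal sketch of what such a post-processing layer might look like, entirely separate from the trained model itself. Everything here is hypothetical: the blocklist, the function names, and the refusal text are invented for illustration, not taken from any vendor's actual implementation.

```python
# Hypothetical sketch of a topic-based output filter that sits AFTER the
# model: the model may produce a perfectly good answer, but a separate
# pass replaces it with a refusal if the exchange touches a listed topic.

BLOCKED_TOPICS = {"israel", "gaza"}  # assumed blocklist, for illustration only

REFUSAL = "I can't help with that right now."

def post_filter(prompt: str, model_answer: str) -> str:
    """Return the model's answer unless the prompt or answer mentions
    a blocked topic, in which case return a canned refusal instead."""
    text = (prompt + " " + model_answer).lower()
    if any(topic in text for topic in BLOCKED_TOPICS):
        return REFUSAL
    return model_answer

print(post_filter("Does France exist?", "Yes."))         # answer passes through
print(post_filter("Does Gaza exist?", "Yes, it does."))  # replaced by refusal
```

The point of the sketch is that the filter never consults the model's training at all, which is why even a trivially answerable question ("does Gaza exist?") can get refused.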
Not even being able to answer whether Gaza exists is laziness so extreme it becomes dystopian. There are plenty of ways an LLM can handle controversial topics, and Google Gemini's underlying model can too; it was just censored before it got the chance, instead of being refined. This is why other LLMs will win over Google's: Google doesn't put in the effort. Good thing other LLMs don't adopt your approach.