The conversations that were exposed contained sensitive user information.
Irresponsible user error; everyone and their mom should know better by now.
Yeah, you gotta treat ChatGPT like it's a public GitHub repository.
Imma fork dat
Why is it that whenever a corporation loses or otherwise leaks sensitive user data that was their responsibility to keep private, all of Lemmy comes out to comment about how it's the users who are idiots?
Except it's never just about that. Every comment has to make it known that they would never allow that to happen to them because they're super smart. It's honestly one of the most self-righteous, tone deaf takes I see on here.
I don't support calling people idiots, but here's the thing: we can't control whether corporations leak our data, but we can control whether we share our password with ChatGPT.
Because that's what the last several reported "breaches" have been. A lot of accounts were compromised by an unrelated breach because the users re-used the same passwords across multiple accounts.
In this case, ChatGPT clearly tells you not to give it any sensitive information, so giving it sensitive information is on the user.
Data loss or leaks may not be the end user's fault, but it is their responsibility. Yes, OpenAI should have had shit in place for this to never have happened. Unfortunately, you, I, and the users whose passwords were leaked have no way of knowing what kinds of safeguards they actually have in place for our data.
The only point of access to my information that I can control completely is what I do with it. If someone says "hey, don't do that with your password" they're saying it's a potential safety issue. You're putting control of your account in the hands of some entity you don't know. If it's revealed, well, it's THEIR fault, but you also goofed and should take responsibility for it.
Because people who come to Lemmy tend to be more technical and better on questions of security than the average population. For most people around here, much of this is obvious and we're all tired of hearing this story over and over while the public learns nothing.
Your frustration is valid.
Also, calling people stupid is an easy mistake that a lot of people make.
Well I'd never use the term to describe a person--it's unnecessarily loaded. Ignorant, naive, etc might be better.
Good to hear. I don't know what I meant to say, but it looks like I accidentally (and reductively) summarized your point while being argumentative. 🫤 Oops.
To be fair, I think many AI users, including myself, have at times overshared beyond what is advised. I never claimed to be flawless, but that doesn't absolve responsibility.
I do the same oversharing here on Lemmy. But what I don't do is share real login information, my real name, SSN, or address.
OpenAI is absolutely still to blame for leaking users' conversations. But even if it hadn't been leaked, that data will be used for training and should never have been put in a prompt.
Maybe it has something to do with being retrained/fine-tuned on the conversations it's having.
So what actually happened seems to be this.
That's a big oof and really shouldn't happen.