23andMe tells victims it's their fault that their data was breached | TechCrunch

Eezyville@sh.itjust.works to Technology@lemmy.world – 801 points –
23andMe tells victims it's their fault that their data was breached | TechCrunch
techcrunch.com

Hope this isn't a repeated submission. Funny how they're trying to deflect blame after they tried to change the EULA post breach.


I'm seeing so much FUD and misinformation being spread about this that I wonder what's the motivation behind the stories reporting this. These are as close to the facts as I can state from what I've read about the situation:

  1. 23andMe was not hacked or breached.
  2. Another site (as of yet undisclosed) was breached and a database of usernames, passwords/hashes, last known login location, personal info, and recent IP addresses was accessed and downloaded by an attacker.
  3. The attacker took the database dump to the dark web and attempted to sell the leaked info.
  4. Another attacker purchased the data and began testing the logins on 23andMe using a botnet that used the username/passwords retrieved and used the last known location to use nodes that were close to those locations.
  5. None of the compromised accounts had MFA enabled.
  6. Any data visible to a compromised account, such as data shared through opt-in features, was also visible to the attackers.
  7. No data was exposed that users hadn't opted into sharing.
  8. 23andMe now requires MFA on all accounts (started once they were notified of a potential issue).
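Point 4 describes textbook credential stuffing. As a toy sketch (the names and data are invented for illustration, not anyone's actual code), the whole attack is just replaying leaked username/password pairs against another site's login; only accounts with a unique password or a second factor survive:

```python
# Toy model of credential stuffing: replay a leaked credential dump
# against another service and see which accounts fall. Illustrative only.

def stuff_credentials(leaked_pairs, accounts):
    """Return usernames compromised by replaying leaked credentials.

    accounts maps username -> {"password": str, "mfa": bool}.
    An account falls only if the leaked password matches AND MFA is off.
    """
    compromised = []
    for username, password in leaked_pairs:
        acct = accounts.get(username)
        if acct and acct["password"] == password and not acct["mfa"]:
            compromised.append(username)
    return compromised

# Example: one user reused their password with no MFA, one reused it but
# enabled MFA, and one used a unique password.
leak = [("alice", "hunter2"), ("bob", "hunter2"), ("carol", "hunter2")]
accounts = {
    "alice": {"password": "hunter2", "mfa": False},    # falls
    "bob":   {"password": "hunter2", "mfa": True},     # MFA saves bob
    "carol": {"password": "xk9!unique", "mfa": False}, # unique pw saves carol
}
print(stuff_credentials(leak, accounts))  # ['alice']
```

This is also why point 5 above matters: the only two things that break the loop are a unique password and a second factor.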

I agree with 23andMe. I don't see how it's their fault that users reused their passwords from other sites and didn't turn on Multi-Factor Authentication. In my opinion, they should have forced MFA for people but not doing so doesn't suddenly make them culpable for users' poor security practices.

I think most internet users are straight up smooth brained. I have to pull my wife's hair to get her not to use my first name twice plus the year we were married as a password, and even then I only succeed 30% of the time. She had the nerve to bitch and moan when her Walmart account got hacked; she's just lucky she didn't have the CC attached to it.

And she makes 3 times as much as I do, there is no helping people.

These people remind me of my old roommate who "just wanted to live in a neighborhood where you don't have to lock your doors."

We lived kind of in the fucking woods outside of town, and some of our nearest neighbors had a fucking meth lab on their property.

I literally told him you can't fucking will that want into reality, man.

You can't just choose to leave your doors unlocked hoping that this will turn out to be that neighborhood.

I eventually moved the fuck out because I can't deal with that kind of hippie dippie bullshit. Life isn't fucking The Secret.

I have friends that occasionally bitch about the way things are but refuse to engage with whatever systems are set up to help solve whatever given problem they have. "it shouldn't be like that! It should work like X"

Well, it doesn't. We can try to change things for the better but refusal to engage with the current system isn't an excuse for why your life is shit.

The bootlickers really come out of the woodwork here to suck on corporate boot.

Edit: wrong thread.

Lately I try to get people to use Chrome's built-in password manager. It's simple and it works across platforms.

I get that people aren’t a fan of Google, and I’m not either, but this is a reasonable option that would be better than what the vast majority of people are doing now…

That's what I'm getting at. It's an upgrade for most users and certainly for novices. I thought I was being clever with a password manager and they got hacked twice (you know who).

Bitwarden is simple, works across platforms, is open source, and isn't trusting your data to a company whose *checks notes* entire business model is based on sucking up as much data as possible to use for ad targeting.

I'll trust the company whose business model isn't built on data-harvesting, thanks.

Also, Firefox is better for the health of the web, Google is using Chrome as a backdoor to dictate web standards, yadda yadda.

You and I can choose our tools as the best for our use case and for the good of the internet in general, but our non-tech friends can't.

I convinced a friend to use KeePass, but he wouldn't spend the time to learn it. I now tell him and others like him to just use Chrome's suggested password.

I agree that, by all accounts, 23andMe didn't do anything wrong. However, could they have done more?

For example the 14,000 compromised accounts.

  • Did they all login from the same location?
  • Did they all login around the same time?
  • Did they exhibit strange login behavior like always logged in from California, suddenly logged in from Europe?
  • Did these accounts, after logging in, perform actions that seemed automated?
  • Did these accounts access more data than the average user?

In hindsight, some of these questions might be easier to answer. It's possible a company with even better security could have detected and shut down these compromised accounts before they collected the data of millions of accounts. It's also possible they did everything right.

A full investigation makes sense.

I already said they could have done more. They could have forced MFA.

All the other bullet points were already addressed: they used a botnet that, combined with the "last login location" allowed them to use endpoints from the same country (and possibly even city) that matched that location over the course of several months. So, to put it simply - no, no, no, maybe but no way to tell, maybe but no way to tell.

A full investigation makes sense but the OP is about 23andMe's statement that the crux is users reusing passwords and not enabling MFA and they're right about that. They could have done more but, even then, there's no guarantee that someone with the right username/password combo could be detected.

I'm not sure how much MFA would have mattered in this case.

23andMe's login is an email address, and most MFA implementations seem to offer email as an option these days. If people are already reusing passwords, the bad actor likely has a working password for the email accounts of many of the affected users. Would MFA have brought the numbers down? Sure, but it doesn't seem like it would've been the silver bullet that everyone thinks it is.

It's a big enough deterrent to make it cumbersome. It's not that easy to automate pulling an MFA code from an email when there are different providers involved and all that. The people that pulled this off did it via a botnet, and I would be very surprised if that botnet was able to recognize an MFA prompt, log into the email account, get the code, enter it, and then proceed. It seems like more effort than it's worth at that point.

Those are my questions, too. It boggles my mind that so many accounts didn’t seem to raise a red flag. Did 23&Me have any sort of suspicious behavior detection?

And how did those breached accounts access that much data without it being observed as an obvious pattern?

If the accounts were logged into from geographically similar locations at normal volumes then it wouldn't look too out of the ordinary.

The part that would probably look suspicious would be the increase in traffic from data exfiltration. However, that would probably be a low priority alert for most engineering orgs.

Even less likely when you have a botnet performing normal logins with limited data exfiltration over the course of multiple months, normalizing any monitoring and analytics and rendering such alerting inert, since the traffic would appear normal.

Setting up monitoring and analysis of user accounts, where they're logging in from, and suspicious activity isn't exactly easy. It's difficult enough that most companies tend to defer to large players like Google and Microsoft to do it for them. And even if they had this set up, which I imagine they did, it was defeated.

If the accounts were logged into from geographically similar locations at normal volumes then it wouldn’t look too out of the ordinary.

I mean, device fingerprinting is used for this purpose. Then there is the geographic pattern, the IP reputation etc. Any difference -> ask MFA.

It’s so difficult that most companies tend to just defer to large players like Google and Microsoft to do this for them.

Cloudflare, Imperva, Akamai I believe all offer these services. These are some of the players who can help against this type of attack, plus of course in-house tools. If you decide to collect sensitive data, you should also provide appropriate security. If you don't want to pay for services, force MFA at every login.
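The "any difference -> ask MFA" idea is usually called risk-based or step-up authentication. A minimal sketch of the decision (field names here are invented for illustration; real systems weigh many more signals, like IP reputation and login velocity): compare the incoming login's signals against the account's history and demand a second factor on any mismatch.

```python
# Sketch of risk-based ("step-up") authentication: a login from a known
# (device fingerprint, country) pair proceeds; anything new triggers MFA.
# Signal names are invented for illustration.

def login_risk(history, attempt):
    """Return 'allow' if the attempt matches known signals, else 'require_mfa'.

    history: set of (device_fingerprint, country) pairs seen before.
    attempt: dict with 'fingerprint' and 'country'.
    """
    if (attempt["fingerprint"], attempt["country"]) in history:
        return "allow"
    return "require_mfa"

seen = {("fp-1a2b", "US")}
print(login_risk(seen, {"fingerprint": "fp-1a2b", "country": "US"}))  # allow
print(login_risk(seen, {"fingerprint": "fp-9z9z", "country": "US"}))  # require_mfa
```

Note that this is exactly the control the geolocated botnet was designed to defeat: by logging in from the victim's last known region, the attacker makes the location signal match, which is why fingerprinting and MFA matter more than geography alone.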


I actually saw someone on FB complaining that they were being forced to enable 2FA on FB.

Common thing; a lot of people despise MFA. I recently talked with one person who works in IT (a programmer) who has not set up MFA for their personal email account.

Credential stuffing is a well-known attack that organizations like 23andMe should definitely have in their threat model. There are mitigations, such as preventing compromised credentials from being used at registration, protecting against bots (as imperfect as that is), enforcing MFA, etc.

This is their breach indeed.

They did. They had MFA available and these users chose not to enable it. Every 23andMe account is prompted to set up MFA when they start. If people chose not to enable it and then someone gets access to their username and password, that is not 23andMe's fault.

Also, how do you go about "preventing compromised credentials" if you don't know that the credentials are compromised ahead of time? The dataset in question was never publicly shared. It was being sold privately.

The fact that they did not enforce 2fa on everyone (mandatory, not just having the feature enabled) is their responsibility. You are handling super sensitive data, credential stuffing is an attack with a super low level of complexity and high likelihood.

Similarly, they probably did not enforce complexity requirements on passwords (making an educated guess here), or at least not sufficiently, which is also their fault.

Regarding the last bit, it might not have helped against this specific breach, but we don't know that. There are companies who offer threat-intelligence services and buy breached data specifically to offer this service.

Anyway, in general the point I want to make is simple: if the only defense you have against a known attack like this is a user who chooses a strong and unique password, you don't have sufficient controls.

I guess we just have different ideas of responsibility. It was 23andMe’s responsibility to offer MFA, and they did. It was the user’s responsibility to choose secure passwords and enable MFA and they didn’t. I would even play devil’s advocate and say that sharing your info with strangers was also the user’s responsibility but that 23andMe could have forced MFA on accounts who shared data with other accounts.

Many people hate MFA systems. It’s up to each user to determine how securely they want to protect their data. The users in question clearly didn’t if they reused passwords and didn’t enable MFA when prompted.

My idea is definitely biased by the fact that I am a security engineer by trade. I believe a company is ultimately responsible for the security of their users, even if the threat is the users' own behavior. The company is the one able to afford a security department who is competent about the attacks their users are exposed to and able to mitigate them (to a certain extent), and that's why you enforce things.

Very often companies use "ease" or "users don't like it" to justify the absence of security measures such as enforced 2FA. However, this is their choice: they prioritize not pissing off (potentially) a small percentage of users over more security for all users (especially the less proficient ones). It is a business choice that they need to be accountable for.

I also want to stress that, despite being mostly useless, various compliance standards require measures that protect users who use simple or repeated passwords. That's why complexity requirements are sometimes demanded, along with trivial brute-force protection with a lockout period (for example, most gambling licenses require both, and companies that don't enforce them cannot operate in certain markets).

Preventing credential stuffing is no different, and if we look at OWASP's recommendation, it's clear that enforcing MFA is the way to go, even if perhaps in a way that doesn't trigger on every login, which would still have worked in this case.

It’s up to each user to determine how securely they want to protect their data.

Hard disagree. The company, i.e. the data processor, is the only one who has the full understanding of the data (sensitivity, amount, etc.) and a security department. That's the entity who needs to understand what threat actors exist for the users and implement controls appropriately. Would you trust a bank that allowed you to login and make bank transfers using just a login/password with no requirements whatsoever on the password and no brute force prevention?

This wasn’t a brute-force attack, though. Even if they had brute-force detection, which I'm not sure whether they did or not, it would have done nothing to help this situation, as nothing was brute-forced in a way that would have been detected. The attempts were spread out over months using bots that were local to the last good login location. That's the primary issue here: the logins looked legitimate. It wasn't until after the exposure that they knew otherwise, and that was because of other signals that 23andMe evidently had in place (I'm guessing usage patterns or automation detection).

Of course this is not a brute-force attack; credential stuffing is different from brute-forcing and I am well aware of it. What I am saying is that a lockout period and rate limiting on logins (useful against brute-force attacks) are both security measures that are sometimes demanded of companies. Even in the case of brute-forcing, it's the user who picks a brute-forceable password: a 100-character password with numbers, letters, symbols, and capital letters is essentially impossible to brute-force. The industry recognized, however, that it's the responsibility of organizations to implement protections against brute-forcing, even though users can already "protect themselves".

So why would it be different in the case of credential stuffing? Of course, users can "protect themselves" by using unique passwords, but I still think it's the responsibility of the company to implement appropriate controls against this attack, in the same way that it's their responsibility to implement rate limiting on logins or a lockout after N failed attempts. In the case of stuffing attacks, MFA is the main control; it should simply be enforced, or at the very least required (e.g., via email, which is weak but better than nothing) when any new pattern in a login emerges (a new device, for example). 23andMe failed to implement this, and blaming users is the same as blaming users for having their passwords brute-forced when no rate limiting, lockout period, or complexity requirements are implemented.
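The lockout-after-N-failures control mentioned above is simple to sketch. The thresholds below are arbitrary examples, not a recommendation:

```python
import time

# Sketch of a basic brute-force mitigation: lock an account out after
# N consecutive failed logins within a window. Thresholds are examples.

class LoginGuard:
    def __init__(self, max_failures=5, lockout_seconds=900):
        self.max_failures = max_failures
        self.lockout_seconds = lockout_seconds
        self.failures = {}  # username -> (failure_count, first_failure_time)

    def check(self, username, now=None):
        """Return True if a login attempt may proceed for this user."""
        now = time.time() if now is None else now
        count, since = self.failures.get(username, (0, now))
        if count >= self.max_failures and now - since < self.lockout_seconds:
            return False  # locked out
        return True

    def record_failure(self, username, now=None):
        now = time.time() if now is None else now
        count, since = self.failures.get(username, (0, now))
        self.failures[username] = (count + 1, since)

    def record_success(self, username):
        self.failures.pop(username, None)  # reset on successful login

guard = LoginGuard()
for _ in range(5):
    guard.record_failure("alice", now=1000.0)
print(guard.check("alice", now=1001.0))  # False: locked out
print(guard.check("alice", now=1901.0))  # True: lockout window expired
```

Note the limitation the thread keeps circling back to: per-account lockouts stop repeated guesses against one account, but a stuffing attack makes roughly one attempt per account with the correct password, so this control never even triggers.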

So forced MFA is the only way to prevent what happened? That’s basically what you’re saying, right?

Their other mechanisms would prevent credential stuffing (e.g., rate limits, comparing login locations) so how was this still successful?

Yes, forced MFA (where "forced" means every user is required to configure it) is the most effective way. Other countermeasures can be effective, depending on how they are implemented and how the attackers carry out the attack. Rate limiting, for example, depends on arbitrary thresholds that attackers can bypass by slowing down and spreading the logins over multiple IPs. Another thing you can do is prevent bots from accessing the system (captchas and similar; this is usually a service from CDNs), which can also be bypassed by click farms and, in some cases, clever scripting. Login-location detection is only useful if you can ask for MFA afterwards and if it is combined with solid device fingerprinting.

My guess at what went wrong in this case is that the attackers spread the attack very nicely (rate limiting ineffective) and the mechanism to detect suspicious logins (country etc.) was too basic, taking into account too little and too generic data. Again, all these measures are only effective against dumb attackers. MFA (at most paired with strong device fingerprinting) is the only effective way there is; that's why it's on them to enforce, not offer, 2FA. They need to prevent the attack, not just leave the decision to users.

There are services that check provided credentials against a dictionary of compromised ones and reject them. Off the top of my head Microsoft Azure does this and so does Nextcloud.
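For reference, the scheme popularized by Have I Been Pwned's Pwned Passwords API uses k-anonymity: the client sends only the first five hex characters of the password's SHA-1 hash and compares the returned suffixes locally, so the service never sees the password or even its full hash. A runnable sketch with the remote service faked by a small local set (the real corpus is obviously far larger):

```python
import hashlib

# Sketch of a k-anonymity compromised-password check (the scheme used by
# Have I Been Pwned's range API). The "service" here is faked with a
# local set so the example runs offline.

def sha1_hex(password):
    return hashlib.sha1(password.encode()).hexdigest().upper()

# Stand-in for the breached-password corpus held by the service.
BREACHED = {sha1_hex("hunter2"), sha1_hex("password123")}

def suffixes_for_prefix(prefix):
    """Fake range endpoint: suffixes of breached hashes sharing the prefix."""
    return {h[5:] for h in BREACHED if h.startswith(prefix)}

def is_compromised(password):
    digest = sha1_hex(password)
    prefix, suffix = digest[:5], digest[5:]   # only the prefix would be sent
    return suffix in suffixes_for_prefix(prefix)

print(is_compromised("hunter2"))                       # True
print(is_compromised("correct horse battery staple"))  # False
```

A registration or password-change form can run this check and reject known-breached passwords, which is one of the mitigations mentioned earlier in the thread.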

This assumes that the compromised credentials were made public prior to the exfiltration. In this case, it wasn’t as the data was being sold privately on the dark web. HIBP, Azure, and Nextcloud would have done nothing to prevent this.

Is there a standards body web developers should rely on which suggests requiring MFA for every account? OWASP, for example, recommends requiring it only for administrative users, while giving regular users the option without requiring it.

There’s some positives to requiring MFA for all users, but like any decision there’s trade offs. How can we throw 23andme under the bus when they were compliant with industry best practices?

I don't think it's possible to make a blanket statement in this sense. For example, Lemmy doesn't handle data as sensitive as 23andMe's. In that case, it might be totally acceptable to offer the feature but not require it. Banks (at least in Europe) never let you log in with just a username and password. They definitely comply with different standards, and in general it is well understood that the sensitivity of the data (and actions) needs to be reflected in stricter controls against the attacks that are relevant.

For a company with data as sensitive as 23andMe's, their threat model should definitely have included credential stuffing attacks, and therefore they should have implemented the measures that are recommended against this attack. Quoting from OWASP:

Multi-factor authentication (MFA) is by far the best defense against the majority of password-related attacks, including credential stuffing and password spraying, with analysis by Microsoft suggesting that it would have stopped 99.9% of account compromises. As such, it should be implemented wherever possible; however, depending on the audience of the application, it may not be practical or feasible to enforce the use of MFA.

In other words, unless 23andMe had specific reasons not to implement such a control, they should have. If they simply chose not to (because security is an afterthought, because it would have meant losing a few customers, etc.), it's their fault for not building a security posture appropriate to the risk they are subject to, and therefore they are responsible for it.

Obviously not every service needs to worry about credential stuffing, so OWASP can't say "every account needs to have MFA". It is the responsibility of each organization (and their security department) to identify the threats they are exposed to.

I agree. The people blaming the website are ridiculous here.

It’s just odd that people get such big hate boners from ignorance. Everything I’m reading about this is telling me that 23andMe should have enabled forced MFA before this happened rather than after, which I agree with, but that doesn’t mean this result is entirely their fault either. People need to take some personal responsibility sometimes with their own personal info.

Would bet that you’re a crypto fan.

Would bet your password includes "password" or something anyone could guess in 10 minutes after viewing your Facebook profile.

Edit: Your l33t hacker name is your mother's maiden name and the last four of your social, bro. Mine's hunter1337, what's yours?


The data breach started with hackers accessing only around 14,000 user accounts. The hackers broke into this first set of victims by brute-forcing accounts with passwords that were known to be associated with the targeted customers

Turns out, it is.

What should a website do when you present it with correct credentials?

  1. IP based rate limiting
  2. IP locked login tokens
  3. Email 2FA on login with new IP
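Item 1 is commonly implemented as a per-IP token bucket (the capacity and refill rate below are arbitrary examples): each IP gets a small burst of attempts, then is throttled to a steady refill rate. As the reply below points out, a botnet that rotates IPs per request slips under any per-IP threshold, which is the whole weakness of this control.

```python
import time

# Sketch of per-IP login rate limiting as a token bucket. Each attempt
# costs one token; tokens refill at a fixed rate up to a capacity.
# Thresholds are arbitrary examples.

class TokenBucket:
    def __init__(self, capacity=10, refill_per_sec=0.5):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.state = {}  # ip -> (tokens_remaining, last_timestamp)

    def allow(self, ip, now=None):
        now = time.time() if now is None else now
        tokens, last = self.state.get(ip, (self.capacity, now))
        # Refill based on elapsed time, capped at capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.refill_per_sec)
        if tokens >= 1:
            self.state[ip] = (tokens - 1, now)
            return True
        self.state[ip] = (tokens, now)
        return False

bucket = TokenBucket(capacity=3, refill_per_sec=1.0)
results = [bucket.allow("198.51.100.7", now=0.0) for _ in range(4)]
print(results)  # [True, True, True, False]: burst exhausted on the 4th try
```

An attacker making one attempt per IP never exhausts any single bucket, which is why the thread's consensus lands on MFA rather than IP controls.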

IP-based mitigation strategies are pretty useless for account-takeover (ATO) and credential-stuffing attacks.

These days, botnets for hire are easy to come by, and attackers can rotate their IP on every request, limiting your controls to simply blocking known-bad IPs and data-center IPs.

  1. The attackers used IPs situated in their victims' regions to log in, across months, bypassing rate limiting and region locks/warnings

  2. I don't know if they did, but it would seem trivial to just use the tokens in situ once they managed to log in instead of saving and reusing them. Also, those tokens are end-user client tokens; IP-locking them would make people with dynamic IPs or logins over 5G throw a fuss after the 5th login in half an hour on the subway

  3. Yeah, 2FA should be a default everywhere, but people throw a fuss at the slightest inconvenience. We very much need 2FA to become the norm so it's not seen as such

I'm cool with 2FA; I'm not cool with a company demanding my cellphone number to send me SMS for 2FA, or being forced to get a 2FA code via email... like my bank. I can ONLY link 2FA to my phone, so when my phone goes missing or gets stolen, I can't access my bank. The only time I have resisted 2FA is when this poorly implemented bullshit happens.

Pro tip, when making a new Google account and putting your phone number in be sure to look into more options. There is a choice to only use it for 2fa and not for data linking.

Two-factor beats the hell outta that "match the horse with the direction of the arrow 10x" bs

What should a website do when you present it with correct credentials?

Not then give you access to half their customers' personal info?

Credential-stuffing one grandpa who doesn't understand data security shouldn't give me access to the names and genetics of 500 other people.

That's a shocking lack of security for some of the most sensitive personal data that exists.

You either didn’t read or just really need this to be the company’s fault.

Those initial breaches led to more info being leaked because users chose to share data with those breached users before their accounts were compromised.

When you change a setting on a website do you want to have to keep setting it back to what you want or do you want it to stay the first time you set it?

Not then give you access to half their customers’ personal info?

That's a feature of the service that you opt into when you're setting up your account. You're not required to share anything with anyone, but a lot of people choose to. I actually was able to connect with a half-sibling that I knew I had, but didn't know how to contact, via that system.

Hi! If you've used it, there's something I was curious about - how many people's names did it show you?

If 50%+ of the 14000 had the feature enabled, it was showing an average of 500-1000 "relatives". Was that what you saw? What degree of relatedness did they have?

I don't think that opting in changes a company's responsibility to not launch a massive, inevitable data security risk, but tbh I'm less interested in discussing who's to blame than I am in hearing more about your experience using the feature. Thanks in advance!

The list shows 1,500 people for me. I assume that's just some arbitrary limit on the number of results. There's significant overlap between people's relationship lists, so the total number of people with data available is less than the (14,000 × 0.5 × 1,500) the math might indicate.

My list of possible relations goes from 25% to 0.28% shared DNA. That's half-sibling down to 4th cousin (shared 3rd-great-grandparents).

The only thing I can see for people who I haven't "connected" with is our shared ancestry and general location (city or state) if they share it. I can see "health reports" if the person has specifically opted to share it with me after "connecting".


What should it do? It should ask you to confirm the login with a configured 2FA

Yeah they offered that. I don’t think anyone with it turned on was compromised.

This shouldn't be "offered", IMHO; it should be mandatory. Yes, people are very ignorant about cybersecurity (I've studied this field, trust me, I know). But the answer isn't to put the responsibility on the user! It is to design products and services which are secure by design.

If someone is actually able to crack accounts via brute-forcing common passwords, you did not design a secure service/product.

[Edit: spelling]

I've noticed that many users in this thread are just angry that the average person doesn't take cybersecurity seriously, blaming the user for using a weak password. I really don't understand how out of touch these Lemmy users are. The average person is not thinking about cybersecurity. They just want to be able to log into their account with a password they can remember. Most people out there are not techies, don't really use a computer outside of office work, and plenty only use a smartphone. It's on the company to protect user data, because the company knows its value and will suffer from a breach.

You're right, most people either don't care, or don't even know enough to care in the first place.

And that's a huge problem. Yes, companies have some responsibility here, but ultimately it's the user who decides to use the service, and how to use it.

don’t even know enough to care in the first place.

but ultimately it’s the user who decides to use the service, and how to use it.

So you admit they don't have access to the knowledge needed to make better choices for their digital security, then immediately blame them. I think you're biased by the point of view of someone who is already informed about this sort of thing. If they don't know that they need to know more, how can they be expected to do any research? There's only so much time in a day; you can't expect people to learn "enough" about literally everything.

I don't intend to blame them, I'm just making an observation.

The fact that they don't know is a problem in itself too, and spreading awareness about cybersecurity and teaching general tech literacy and common sense is not done as much as it should be.

It's exactly like you say. They don't know, and how would they? No one is ever giving them the information they need.

That's exactly right. I was about to say how people usually don't even "not take it seriously" but rather don't even think or know about it. But you already said that yourself haha :D

Or, worse, they don’t even understand it. I definitely have people in my life who know about the idea of cybersecurity and are terrified of getting hacked, but constantly do things the wrong way or worry about the wrong things. Because it’s just too confusing for them, and it’s always changing.


Fuck mandatory 2FA. Most sites just throw SMS on there and leave it at that. I’m so tired of putting yet more of my information into services that don’t require it to utilize the service.

If TOTP was more prevalent (getting there) I might agree but then we’d be talking about how the typical user doesn’t know how to set that up.

Companies pay for SMS; TOTP is free for them (it's just a computation). It is utterly dumb to implement the same logic with a paid service rather than TOTP (or security keys, at this point). So yeah, I agree with the idea, but I think nowadays most 2FA is TOTP (sadly, some require their own shitty apps to do just that; Blizzard once was one of them, maybe still is).
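The "just a computation" is RFC 6238 TOTP: the six-digit code is an HMAC-SHA1 of the current 30-second interval, keyed with the shared secret and dynamically truncated. A minimal stdlib-only sketch, checked against the standard RFC 4226/6238 test-vector secret:

```python
import base64
import hashlib
import hmac
import struct
import time

# Minimal RFC 6238 TOTP (SHA-1 variant), using only the stdlib.

def totp(secret_b32, for_time=None, step=30, digits=6):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The RFC test-vector secret is the ASCII string "12345678901234567890".
SECRET = base64.b32encode(b"12345678901234567890").decode()
print(totp(SECRET, for_time=59))  # 287082 (first 30-second interval)
```

No SMS gateway, no per-message cost: the server and the authenticator app just run this same computation on a shared secret, which is the point the comment above is making.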

It’s a thinly veiled method to gather more info from users when SMS is the only option.


2FA should be forced, it's not a hard thing to do.


So… we are ignoring the 6+ million users who had nothing to do with the 14 thousand users, because convenience?

Not to mention, the use of “brute force” there insinuates that the site should have had password requirements in place.

Please excuse the rehash from another of my comments:

How do you people want options on websites to work?

These people opted into information sharing.

When I set a setting on a website, device, or service I damn sure want the setting to stick. What else would you want? Force users to set the setting every time they log in? Every day?

I admit, I’ve not used the site so I don’t know the answers to the questions I would need, in order to properly respond:

  • Were these opt-in or opt-out?
  • Were the risks made clear?
  • Were the options fine tuned enough that you could share some info, but not all?

From the sounds of it, I doubt enough was done by the company to ensure people were aware of the risks. Because so many people were shocked by what was able to be skimmed.

I’m convinced that everyone pissed at the company for users reusing passwords has a reading comprehension problem because I definitely already answered your first question in the comment you responded to.

I haven’t used the service either - I don’t want more of my data out there. So I can’t answer the other questions.

Users were probably not thinking about the implications of a breach after sharing but it stands to reason that if you share data with an account, and that account gets compromised, your data is compromised.

We’ve all been through several of those from actual hacks at other companies (looking at you, T-Mobile). I refuse to believe people aren’t aware of this general issue by now.

It was credential stuffing. Basically, these people were hacked on other services. Those services probably told them "Hey, you need to change your password because our database was hacked" and they went "meh", kept using the password, and never updated the other accounts protected by it, including one holding personally identifiable information about themselves and their relatives.

Both are at fault, but the users reusing passwords with no MFA are dumb as fuck.

by brute-forcing accounts with passwords that were known

That's not what "brute force" means.


Blaming your customers is definitely a strategy. It's not a good one, but it is a strategy.

BRB deleting my 23AndMe account

As if deleting your account deletes your data.

Surely they have a GDPR-compliant way to have your info removed. Right?

They're an American company, and I'm not yet aware of any lawsuits setting the precedent of the GDPR applying to server infrastructure in the USA, which is outside the jurisdiction of the GDPR.

So if they've copied your data to their American servers already (you can bet they have), it's there for good.

UPDATE user_data SET deleted = 1 WHERE ID = you.

Done. Data deleted. All gone forever. Definitely doesn't just hide it from the user.

OP spreading disinformation.

Users used bad passwords. Their accounts were accessed using their legitimate, bad passwords.

Users cry about the consequences of their bad passwords.

Yeah, 23andMe has some culpability here, but the lion's share is still on the users themselves

From these 14,000 initial victims, however, the hackers were able to then access the personal data of the other 6.9 million victims because they had opted in to 23andMe's DNA Relatives feature.

How exactly are these 6.9M users at fault? They opted in to a feature of the platform that had nothing to do with their passwords.

On top of that, the company should have enforced strong passwords and forced 2FA for all accounts. What they're doing is victim blaming.

users knowingly opted into a feature that had a clear privacy risk.

Strong passwords often aren't at issue, password re-use is. If un-{salted, hashed} passwords were compromised in a previous breach, then it doesn’t matter how strong those passwords are.
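For what "salted and hashed" means in practice: the service stores only a one-way digest computed from the password plus a per-user random salt, never the password itself. A minimal sketch in Python (the work factor is illustrative, not a production recommendation):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor

def hash_password(password, salt=None):
    # A unique random salt per user means two users with the same
    # password still get different digests, defeating rainbow tables.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    # Recompute with the stored salt and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

If a dump leaks only (salt, digest) pairs, an attacker still has to grind through the work factor per guess, per user; which is exactly why a dump of plaintext or unsalted passwords from some other site is so much more dangerous.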

Every user who was compromised:

  1. Put their DNA profile online
  2. Opted to share their information in some way

A further subset of users failed to use a unique and strong password.

A 2FA token (think Matrix) might have helped here, other than that, individuals need to take a greater responsibility for personal privacy. This isn’t an essential service like water, banking, electricity etc. This is a place to upload your DNA profile…

As I said elsewhere, the company implemented this feature and apparently did absolutely nothing about the increased risk of account compromise deriving from it. If I had been in a meeting discussing this feature, I would have immediately said that accounts which share data with others are way too sensitive, and at least those should have 2FA enforced. If you don't want it, you don't share data. Probably the company does not have a good security culture and this was never done.

users knowingly opted into a feature that had a clear privacy risk.

Your aunt who still insists she's part Cherokee is not as capable of understanding data security risks as the IT department of the multi-million dollar company that offered the ludicrously stupid feature in the first place.

People use these sites once right? Who's changing their password on a site they don't log into anymore? Given that credential stuffing was inevitable and foreseeable, the feature is obviously a massive risk that shouldn't have been launched.

Are you telling me a password of 23AndMe! is bad? It meets all the requirements.

How am I spreading disinformation? I just contributed an article I found interesting for discussion.

It’s worth noting that OP simply used the article title.

The article title is a little biased, individuals must take greater personal responsibility.

I don't know title etiquette in this forum. I used the author's title because it is their article, not mine, and thus their opinion/research/AI output.

Oh no, I was just pointing it out for others. I think using the title post is perfectly reasonable.

Thank you for posting, I found it interesting.

Users used bad passwords. Their accounts were accessed using their legitimate, bad, passwords.

Just as an anecdotal counterpoint, I am a 23andMe customer who did receive notification that my account was accessed and personal information obtained.

This was my password at the time: 7Kk5bXjIdfB25

That password was auto-generated for me by the BitWarden app.

So for what it's worth I don't think my password was a 'bad' password.

Was your direct account accessed, or was some of your information accessed through a compromised account? Those are big differences, and from what I've read only the latter should have been possible. And in my opinion, not such a big deal.

The lion's share, IMHO, lies with 23&me. Offering such a poorly secured service is negligence, given the highly sensitive nature of the data.

Yeah, 23AndMe has some culpability here, but the lion's share is still on the users themselves

Tell me you didn't read the article without telling me.

If 14,000 users who didn't change a password on a single-use website they probably only ever logged into twice gives you 6.9 million users' personal info, that's the company's fault.

You didn't read it either. They gained access to shared information between the accounts because both accounts had enabled "share my info with my relatives" option.

Logging into someone's Facebook and seeing their friends, all the stuff they posted as "friends only", and their private DM discussions isn't a hack or a vulnerability; it's how the website works.

It doesn't matter. It is a known attack and the company should have implemented measures against it.

At the very least, they should have made a threat modeling exercise and concluded that with this sharing feature, the compromise of a single account can lead to compromise of data for other users. One possible conclusion is that users who shared data should be forced to have 2fa.
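That conclusion is cheap to enforce in code. A hypothetical policy check (names invented for illustration; this is a sketch of the suggested rule, not anything 23andMe shipped):

```python
from dataclasses import dataclass

@dataclass
class Account:
    mfa_enabled: bool
    shares_with_relatives: bool

def policy_violations(account):
    """Hypothetical rule: opting in to relative-sharing requires a second factor."""
    violations = []
    if account.shares_with_relatives and not account.mfa_enabled:
        violations.append("sharing enabled without MFA: block sharing or require enrollment")
    return violations
```

Run it at login or when the sharing toggle is flipped; the point is the rule is one conditional, not a heavy lift.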

Launching a feature that lets an inevitable attack access 500 other people's info for every compromised account is a glaring security failure.

Accounting for foreseeable risks to users' data is the company's responsibility and they launched a feature that made a massive breach inevitable. It's not the users' fault for opting in to a feature that obviously should never have been launched.

23andMe admitted that hackers had stolen the genetic and ancestry data of 6.9 million users

I'm honestly asking what the impact to the users is from this breach. Wasn't 23andMe already free to selling or distribute this data to anybody they wanted to, without notifying the users?

That's not how this works. They are running internationally, and GDPR would hit them like a brick if they did that.

I would assume they had some deals with law enforcement to transmit data under narrow circumstances.

I'm honestly asking what the impact to the users is from this breach.

Well if you signed up there and did an ancestry inquiry, those hackers can now without a doubt link you to your ancestry. They might be able to doxx famous people and in the wrong hands this could lead to stalking, and even more dangerous situations. Basically everyone who is signed up there has lost their privacy and has their sensitive data at the mercy of a criminal.

This is different. This is a breach, and if you are a company taking care of such sensitive data, it's your job to do the best you can to protect it. If they really do blame this on the users, they are in for a class action and a hefty fine from the EU, especially now that it has established even more guidelines for companies regarding the maintenance of sensitive data. This will hurt in some regard.

If they really do blame this on the users

It's not that they said:

It's your fault your data leaked

What they said was (paraphrasing):

A list of compromised emails/passwords from another site leaked, and people found some of those worked on 23andme. If a DNA relative that you volunteered to share information with was one of those people, then the info you volunteered to share was compromised to a 3rd party.

Which, honestly?

Completely valid. The only way to stop this would be for 23andme to monitor these "hack lists" and notify any email that also has an account on their website.

Side note:

Any tech company can provide info if asked by the police. The good ones require a warrant first, but as data owners they can provide it without a warrant.

That's not 23andMe's fault at all then. Basically boils down to password reuse. All I would say is they should have provided 2FA if they didn't.

All i would say is they should have provided 2fa if they didn’t.

At this point, every company not using 2FA is at fault for data hacks. Most people using the internet have logins to hundreds of sites. Knowing where to go to change all your passwords is nearly impossible even for a seasoned internet user.

The sad thing is you have to balance the cost of requiring your customers to use 2FA against the risk of losing business because of it, and the risk of losing reputation because your customers got hacked and suffered losses.

The sad thing is some (actually most) people are brain dead; you will lose business if you make them use a complicated password or MFA, and it puts you in the position of making a hard call.

They took the easy route and gave the customer the option to use MfA if they wished and unfortunately a lot of people declined. Those people should not have the ability to claim damages (or vote, for that matter)

I feel like that argument could be made for some things, but inherently cannot apply to companies involved in personal, genetic, or financial information.

The only way to stop this would be for 23andme to monitor these "hack lists"

Unfortunately, from the information that I've seen, the hack lists didn't have these credentials. HIBP is the most popular one and it's claimed that the database used for these wasn't posted publicly but was instead sold on the dark web. I'm sure there's some overlap with previous lists if people used the same passwords but the specific dataset in this case wasn't made public like others.

I would guess (hope?) that the data sets they sell are somewhat anonymized, like listing people by an i.d. number instead of the person's name, and not including contact information like home address and telephone number. If so then the datasets sold to companies don't contain the personal information that hackers got in this security breach.

I’m honestly asking what the impact to the users is from this breach.

The stolen info was used to build databases of people with Jewish ancestry that were sold on the dark web. I think there was a similar DB of people with Chinese ancestry. 23andMe's poor security practices have directly helped violent white supremacists find targets.

If you're so incompetent that you can't stop white supremacists from getting identifiable information about people from minorities, there is a compelling public interest for your company to be shut down.

That is a whoooolllee lot of assumptions

Why do you think someone would buy illegally obtained lists of people with Jewish or Chinese ancestry? And who do you think would be buying it?

Scammers, that opens up a lot of scam potential.

Hi, I’m your new cousin.

Scammers would buy all info, not specifically targeted to people of Jewish or Chinese descent. That’s not what’s being sold.

Who do you think would want only information about people with Jewish or Chinese ancestry, and why?

OK you’re gonna have to give me a link to what you’re talking about. It feels like you are being specific, and I am being generic.

It’s the same incident, the OP article just didn’t mention it.

In this case, I think it is more likely to be some major Arab nation for the Jewish one, and I don't know about the Chinese.

What I do know is that pretty much every white supremacist I have known has fit one of the white supremacist stereotypes to a T.

Anything higher level than that is just conspiracy-theory territory on my part, at least with that one data point.

Reusing credentials is their fault. Sure, 23&me should've done better, but someone was likely to get fucked, and if you're using the same password everywhere it is objectively your fault. Get a password manager, don't make the key the same compromised password, and stop being stupid.

It's at least 99.8% the company's fault.

Even if we blame those 14k password reusers, we're blaming 1 in every 500 victims. Being able to access the genetic information and names of 6.9 million people - half of the company's entire customer base! - by hacking 0.2% of them is the fault of the company. They structured that access and failed to act on the obvious threat it represents.

But why blame password reusers? Not every grandparent interested in their family tree is capable of even understanding data security, let alone juggling multiple passwords or a PW manager.

Credential stuffing is an inevitable part of security landscape - especially for one time use accounts like genetics sites. A multimillion dollar IT department is just clearly responsible for preventing egregious data security failures.

They didn't get genetic raw data of anyone beyond the 14K, they got family relationship information. Which is an option you can turn on or off, if you want. It's very clear that you're exposing yourself to other people if you choose to see who you're related to. It doesn't expose raw data and it doesn't instantly expose names, just how they're related to you. (And most of the "relations" are 3rd to 5th cousins, aka strangers.)

Hackers used the genetic ancestry data of the 14K hacked users and their "relatives" connections to deduce large families of Ashkenazi Jews.

Given the sensitivity of the data in both cases they should have had mandatory 2fa set up. However, the other person is right, there's probably a ton of tech illiterate people using this and they likely saw better security as barriers to entry and making less money.

Some people just aren't that worried about sharing their DNA info. Hell, I'd venture I'd give an actual sample to a good % of the population if they asked me in a sexy way.

I would say it's partially their fault. IMHO 23&me is mainly to blame. They should've enforced (proper) 2FA. Sure, people should've known better, but they didn't; they often don't. But 23&me did know better.

Edit: spelling

Bro just don't have DNA.

If you were really on your sigma grindset, your DNA would have never existed.

“users negligently recycled and failed to update their passwords following these past security incidents, which are unrelated to 23andMe...Therefore, the incident was not a result of 23andMe’s alleged failure to maintain reasonable security measures,”

This is a failure to design securely. Breaking into one account via cred stuffing should give you access to one account's data, but because of their poor design hackers were able to leverage 14,000 compromised accounts into 500x that much data. What that tells me is that, by design, every account on 23andMe has access to the confidential data of many, many other accounts.

It's terrible design. If they know their users are going to do this, they're supposed to work around that. Not leave it as a vulnerability.

I don't think so. Those users had opted in to share information within a certain group. They've already accepted the risk of sharing info with someone who might be untrustworthy.

Plenty of other systems do the same thing. I can share the list of games on my Steam account with my friends - the fact that a hacker might break into one of their accounts and access my data doesn't mean that this sharing of information is broken by design.

If you choose to share your secrets with someone, you accept the risk that they may not protect them as well as you do.

There may be other reasons to criticise 23andMe's security, but this isn't a broken design.

And it's your fault you have access to them. Stop doing bad things and keep your information secure.

You clearly have no familiarity with the principles of information security. 23andMe failed to follow a basic one: defense in depth. The system should be designed such that compromises are limited in scope and cannot be leveraged into a greater scope. Password breaches are going to happen. They happen every day, on every system on the internet. They happen to weak passwords, reused passwords, and strong passwords. They're so common that if you don't design your system assuming the occasional user account will be compromised, then you're completely ignoring a threat vector, which is on you as a designer.

23andMe didn't force two-factor auth (https://techcrunch.com/2023/11/07/23andme-ancestry-myheritage-two-factor-by-default/) and they made it so every account had access to information beyond what that account could control. These are two design decisions that enabled this attack to succeed, and then escalate.
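Designing for the assumption that accounts will be attacked can start with something as small as throttling login failures. A hedged sketch of a sliding-window failure counter (the window and threshold are made-up numbers):

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 3600   # assumed: how long to remember failures
MAX_FAILURES = 5        # assumed: failures tolerated per window

_failures = defaultdict(deque)  # key (e.g. source IP) -> failure timestamps

def record_failure(key, now):
    """Record one failed login at time `now`; return True if `key` should be blocked."""
    q = _failures[key]
    q.append(now)
    # Drop failures that have aged out of the window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_FAILURES
```

A stuffing campaign that replays thousands of leaked credential pairs generates far more failures than successes, and that failure spike is the signal something this simple can catch; real defenses also key on per-account anomalies and impossible-travel heuristics, since attackers rotate IPs.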

Fiivemacs was joking, speaking in 23&me's voice. They don't actually believe it's the user's fault.

They're right. It's the customer's fault for giving them the data in the first place.

But hear me out, I have no control over my cousin or aunt or some random relative getting one of these tests and now this shitty company has a pretty good idea what a large chunk of my DNA looks like. If people from both sides of my family do it they have an even better idea what my genetic profile looks like. That's not my fault, I never consented to it, and it doesn't seem ok.

I also know about 99.9% of your DNA.

Sorry, I thought it was obvious that we're talking about the part that varies by individual humans...

This is such a fucking braindead, victim blaming take.

They became a victim the moment they gave their data to that company. Why is anyone that works at 23andMe more trustworthy than rando hackers? They aren't bound by any HIPAA laws.

I SHOULD NOT BE GETTING GASLIT FOR WHAT SEEMED LIKE A NEAT IDEA AT THE TIME

Absolutely; and this is another example in a long list which should serve as a lesson for people to not share their personal data with any company if possible. Yet, I feel that lesson will never be learned.

https://haveibeenpwned.com/

Gentle reminder to plop your email address in here and see if you, much like 14,000 23andMe users, have had an account compromised somewhere. Enable two-factor where you can and don't reuse passwords.
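For the password side, Have I Been Pwned's companion Pwned Passwords API lets you check a password without ever transmitting it, using k-anonymity: hash locally with SHA-1, send only the first five hex characters, and compare the returned suffixes yourself. A sketch (error handling omitted):

```python
import hashlib
import urllib.request

def hash_parts(password):
    # SHA-1 the password locally; only the 5-char prefix leaves your machine.
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

def parse_range_response(body, suffix):
    # The API returns lines of "SUFFIX:COUNT"; 0 means not found in any breach.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

def pwned_count(password):
    prefix, suffix = hash_parts(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        return parse_range_response(resp.read().decode(), suffix)
```

A nonzero count means that exact password appears in known breach corpora and should be retired everywhere it's used.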

Welp, my two Gmail addresses have been pwned. Good thing I don't use them and I have limited use of Google services.

Just to clarify; It doesn't necessarily mean that your Google account password is compromised. It lists data breaches of services where you used the provided email to register. The password you chose for that service at the time of the breach has been compromised. If you don't use the same password everywhere, or changed your password after the breach, your other accounts are not compromised.

Also, as OP said, use two-factor authentication. And please also use a password manager.

I understand that. I use KeePassXC and love it. I just notice that those gmail accounts get all the spam so I abandoned them.

Giving your genetic info to them is the first mistake

I see this trend of websites requesting your identification, and all I think is: I don't even trust my own government with a copy, why the hell should I trust a business?

Instant skip.

And I agree with them, I mean 23andMe should have a brute-force resistant login implementation and 2FA, but you know that when you create an account.

If you are reusing creds you should expect to be compromised pretty easily.

A successful breach of a family member's account due to their bad security shouldn't result in the breach of my account. That's the problem.

Edit: so people stop asking, here's their docs on DNA relatives: https://customercare.23andme.com/hc/en-us/articles/212170838

Showing your genetic ancestry results makes select information available to your matches in DNA Relatives

It clearly says select information, which one could reasonably assume is protecting of your privacy. All the reports seem to imply the hackers got access to much more than just the couple fun numbers the UI shows you.

At minimum I hold them responsible for not thinking this feature through enough to see that it could be used for racial profiling. That's the equivalent of being searchable on Facebook, except they didn't think to keep your email, location, and phone number from everyone who searches for you. I want to be discoverable by my friends and family, but I'm not intending to make more than my name and picture available.

A successful breach of a family member’s account due to their bad security shouldn’t result in the breach of my account. That’s the problem

I mean...

You volunteered to share your info with that person.

And that person reused an email/password that was compromised.

How can 23andme prevent that?

It sucks, but it's the fault of your relative that you entrusted with access to your information.

No different than if you handed them a hard copy and they left it on a table at McDonald's.

Quick edit:

It sounds like you think your account would be compromised, that's not what happened. Only info you shared with the compromised relative becomes compromised. They don't magically get your password.

But you still chose to make it accessible to that relative's account by accepting their request to share.

Could I please have your personal information?

No.

See... it's that easy.

Ok, who else would be able to give me your personal information? I'll go get it from them instead.

Your mom has my contact information. You can ask her.

/pwn3d.

Oh, so you're actually not consenting to have some personal information you've given to family given to me as well? Odd, you sure seemed ok when it was people having their information snagged from 23andMe.

No, but I didn't consent to give that info to family either. If I was worried about my data getting in the hands of strangers, I wouldn't have shared it with strangers which is what happened here. Unless you count a 4th cousin that you've never met "family", why would you give them access to your data?

And that’s exactly how the attackers got in in the first place lol.

The ding dongs used the same creds elsewhere which were leaked.

Thank you for explaining the point I was making to me.

It doesn't. Sharing that info was opt-in only. In this scenario, no 23andMe accounts were breached. The users reused their credentials from other sites. It would be like you sharing your bank account access with a family member's account and their account getting accessed because their banking password was "Password1" or their PIN was "1234".

Yep it was 14,000 that were hacked, the other 6.9 million were from that DNA relative functionality they have. Unfortunately 23andMe's response is what to expect since companies will never put their customers safety ahead of their profits.

So if you enabled a setting that is opt-in only that allows sharing data between accounts and you are surprised that data was shared between accounts how is that not your fault?

afaik there was no breach of private data, only the kind of data shared to find relatives, which is opt-in and obviously not private to anyone who has seen how this service works. In other words, the only data "leaked" was the kind of data that was already shared with other 23andMe users.

Name, sex and ancestry were sold on the dark web, that's a breach of private data.

The feature that lets a hacker see 500 other people's personal information when they hack an account is obviously a massive security risk. Especially if you run a single use service - no one updates their password on a site they don't use anymore.

Launching the feature in the first place made this inevitable.

Name, sex and ancestry were sold on the dark web, that’s a breach of private data.

It would be a breach if the data were private, but the feature itself exposes this data. That would be like performing a concert in front of hundreds of people and then complaining your facial features were leaked on social media.

You shouldn’t have shared your information with someone who is untrustworthy then. Data sharing is opt-in.

Credential stuffing attacks will always yield results on a single use website because no one changes passwords on a site they don't use anymore.

Launching a feature that enables an inevitable attack to access 500 other people's info is very clearly the fault of the company who launched the feature.

How do you and the surprising number of people who upvoted you want options on websites to work?

These people opted into information sharing.

When I set a setting on a website, device, or service I damn sure want the setting to stick. What else would you want? Force users to set the setting every time they log in? Every day?

Wtf?

Even if you didn’t reuse a compromised password yourself, the fact that your relatives did indicates that you’re genetically predisposed to bad security practices. /s

A successful breach of a family member’s account due to their bad security shouldn’t result in the breach of my account. That’s the problem.

How the hell would they prevent that if you voluntarily shared a bunch of information with the breached account? This is like being mad that your buddy's Facebook account got breached and someone downloaded shared posts from your profile, too. It's how the fucking service works.

Is it also the users' fault for the 6,898,600 people that didn't reuse a password and were still breached?

Yes, because you have to choose to share that data with other people. 23andMe isn't responsible if grandma uses the same password for every site.

23andMe is responsible for sandboxing that data, however. Which they obviously didn't do.

Users opted in to share that data

You opt in to share your data with Facebook. Would you still consider it an issue if your data was breached because someone else's account was hacked?

I would consider it normal that photos I only share with some people were leaked if one of those people's accounts got hacked.

Sure, it's a breach, but I would blame my idiot friend for re-using passwords. I wouldn't blame the service for doing exactly what I expected the service to do, and is the reason I chose to use the service in the first place.

It's also the reason I'm very selective about what I share with anyone online, friend or otherwise.

If you share your nudes with the "friends only" privacy settings on facebook, and someone else accesses one of your friends accounts because they reused their password and proceeds to leak those photos, is it the fault of Facebook, your friend, the person leaking them, or you?

Because that is exactly what happened here: credential stuffing of reused passwords, and scraping of opt-in "friends only" data shared between accounts.

Private health data was compromised as well, on a smaller scale. It doesn't make sense to blame users for a security breach of a corporation, literally ever. That's my point. The friend was dumb, and you shared something maybe you shouldn't have. But that doesn't also absolve the company of poor security practices. I very strongly doubt that 14,000 people knew or consciously chose to directly share with a collective 7 million people.

But they did. All 7 million of them - that's why their data was visible for those 14000.

As it says in the article:

From these 14,000 initial victims, however, the hackers were able to then access the personal data of the other 6.9 million victims because they had opted-in to 23andMe’s DNA Relatives feature. This optional feature allows customers to automatically share some of their data with people who are considered their relatives on the platform.

Here's what each and every one of those 7 million people opted in and agreed to:

https://customercare.23andme.com/hc/en-us/articles/115004659068-DNA-Relatives-The-Genetic-Relative-Basics

Did you not read my comment? Users opt in to sharing data with other accounts, which means if one account is compromised, then every account that allowed it access would have their data compromised too. That's not on the company, because the feature can't work without allowing access.

They weren't breached. The data they willingly shared with the compromised accounts was available to the people that compromised them.

Pretty sure nobody clicked a button that said "share my data with compromised accounts."

There was a button that said "share my data with this account". If that person went and shared that info publicly, how is that any different? The accounts were accessed with valid credentials through the normal login process. They weren't "breached" or "hacked".

I mean, if you use the same password on all websites, even a strong one, it is your fault in a legitimate way. Not your fault that it was leaked or found out, or that the company had shit security practices, but your fault for not exercising due diligence given the current state of online security best practices.

Not your fault if you did have a strong password but your data was leaked through the sharing anyways…

I wonder if they can identify a genetic predisposition that made these users more prone to compromising their passwords? And then if so, was it REALLY their fault?

Should probably ask OP!

They seem to be in the same boat based on this submission...

It's proper etiquette to use the wording from the title when posting an article. OP did everything right.

Well, it's also their fault for falling for 23andMe, because it's basically a scam. The data is self-selected to begin with, and correlating a few markers, tested once, to match you to their arbitrary groups isn't exactly how genetics work is done.

It's actually about as cheap, maybe cheaper, to get 50x full-genome sequencing from a company that actually doesn't sell your data; whereas 23andMe's business model was running a few marker tests to appease an audience they kept in the dark about how modern genetics works, then keeping the sample for full-genome sequencing later, because that shit only gets more valuable over time.

It's what makes genetics weird. A sample taken 10 years ago will reveal so much more about you 5 years from now, like massively more.

I mean, it is kinda their fault in the first place for using an optional corporate service that stores very private data of yours which could be used in malicious ways.

Maybe there should be some type of regulation that prevents that from happening, considering the average person doesn't think of shit like that because they don't expect to be fucked over in every conceivable way.

If only Congress was literate on the issue.

If only companies could be executed.

Did you know they used to not be immortal by default? Like, old companies had to define a shutdown date in their articles of incorporation.

Now they have human rights, are immortal, and use the planet like it's a computer and they are a poorly written piece of malware.

Hint: It's gonna keep looping till it overheats and crashes. Might need to unplug it and plug it back in again.

No, we know where we are getting fucked from: behind usually, sometimes ontop so they can choke us, and the rest is always on our knees.

This is the best summary I could come up with:


“Rather than acknowledge its role in this data security disaster, 23andMe has apparently decided to leave its customers out to dry while downplaying the seriousness of these events,” Hassan Zavareei, one of the lawyers representing the victims who received the letter from 23andMe, told TechCrunch in an email.

In December, 23andMe admitted that hackers had stolen the genetic and ancestry data of 6.9 million users, nearly half of all its customers.

The hackers broke into this first set of victims by brute-forcing accounts with passwords that were known to be associated with the targeted customers, a technique known as credential stuffing.

“The breach impacted millions of consumers whose data was exposed through the DNA Relatives feature on 23andMe’s platform, not because they used recycled passwords.

23andMe’s attempt to shirk responsibility by blaming its customers does nothing for these millions of consumers whose data was compromised through no fault of their own whatsoever,” said Zavareei.

Lawyers with experience representing data breach victims told TechCrunch that the changes were “cynical,” “self-serving,” and “a desperate attempt” to protect itself and deter customers from going after the company.


The original article contains 721 words, the summary contains 184 words. Saved 74%. I'm a bot and I'm open source!

From the article:

The data breach started with hackers accessing only around 14,000 user accounts. The hackers broke into this first set of victims by brute-forcing accounts with passwords that were known to be associated with the targeted customers, a technique known as credential stuffing.

From these 14,000 initial victims, however, the hackers were able to then access the personal data of the other 6.9 million victims because they had opted-in to 23andMe’s DNA Relatives feature. This optional feature allows customers to automatically share some of their data with people who are considered their relatives on the platform.

I knew better than to give these companies my DNA, but of course I've had family give it to them. I suppose if I were wanted for an unsolved murder I'd be a bit concerned, but I'm still not happy that the DNA of anyone I'm associated with is compromised.

The question to me is what's the play with that data. I'd assume they would have a use for it if they went to the trouble of stealing it. I suspect in the future this will be lucrative data, but what's the play right now??

For the vast majority of folks: name, relationship label, self-reported location (city or ZIP), and birth year.

The ones with DNA compromises would be the ones whose accounts were directly accessed.

Company involved in a data breach try not to blame customers challenge (impossible)

That headline sounds to me like them claiming "Y'all're a bunch of eejits for usin' our service!"

To which I'd say "Yeah sure, I'm certain that would hold up in court" with the biggest eye roll you could imagine

23andMe

I never met a Geneticist who couldn't immediately recognize this company as a scam. The product wasn't the papers they send you after doing random marker tests once (so, false positives exist, and they never cared). The product is the DNA they collected by convincing people that their test was even remotely useful or insightful.

It's entirely based on correlation; and correlation to what? Geographic area? That makes no sense if you know any one of a number of fields, many of which don't even have to be scientific in nature, let alone genetics.

I have always hated them, always told people never to use them and to get themselves a proper 50x full-genome sequencing instead, since it cost the same and actually provides real, resolute, and reliable data, not borderline pseudoscience. Might as well send in the shape of your skull.

In a way, it kind of is their fault for trusting companies like this in the first place. I'd never consider using companies like this and both think and hope none of my family members would either.

Obviously, the breach is the company being incompetent like many companies are when it comes to security.

Unfortunately like you said, family members can do so of their own accord which is exactly what one of mine did, despite my warnings of such.

It's completely impossible for me to "un-ring" that bell now, so to speak.

Why anyone would ever trust somebody else with their DNA data is beyond me.

Why anyone would care is beyond me. Explain what someone's realistically going to do with your DNA data.

This is always the most short-sighted kind of comment on the internet. I don't assume you're ignorant; I assume you're selfish. Do you not see a responsibility to future generations in any of your actions, or are you just here to "get yours" and check out?

While there are real and immediate dangers today, our responsibility in this moment is to be a firm NO so that these things don't find their extremes in our lifetime or beyond. You're the frog in the pot of cold water, but the burner is turned on beneath you.

"What the fuck are you guys talking about man? being all hysterical and shit? The water is comfortable right now, even a bit cold"

Anyone can obtain your DNA by picking up a single hair of yours or a dirty napkin. Your DNA is an open secret.

And there would likely be legal ramifications if they actually used that information in a way that harmed me. That's not so clear when given up willingly.

And anyone can hate a group of people; the difference between that hate staying small, isolated, and relatively contained, and it scaling into something worse, is organization and systematization. For example, IBM catalogued and analyzed data for the Nazis, and you then got an amplification of the strength of that hatred that effectively resulted in the Holocaust (instead of something that would maybe have been more like Putin's limp, flailing invasion of Ukraine).

Yes I can pull a single hair from your head, but if I create a machine where you and 50 million of your friends send me that hair, pay me for the privilege and I then sell the data or it gets breached, that's where we start to get into the danger zone.

Those of you here being contrarians for the sake of it are on the wrong side of history. Learn a book, shitheads.

Do you not see a responsibility to future generations in any of your actions or are you just here to “get yours” and check out?

Not on this matter. Simply asserting that danger exists is not the same as demonstrating it, and you're doing a lot of asserting and zero demonstrating.

While there are real and immediate dangers today

Such as? You're pretty light on details in a situation where it would really help your argument to provide examples. It makes me assume that you don't actually know.

our responsibility in this moment is to be a firm NO so that these things don’t find their extremes in our lifetime or beyond

Why does that require a "firm NO"? Plenty of actually dangerous things have been handled via regulation rather than a "firm NO".

You’re the frog in the pot of cold water, but the burner is turned on beneath you.

Bad news for your point: the frogs actually jump out in real life. You've also completely failed to demonstrate that we are frogs and there is a pot of water in this situation.

You're very confidently ignorant. I'm glad this is only an internet conversation and it can just full stop here - I do feel bad for the people that have to suffer you daily in real life though.

Funny you calling me ignorant in response to a post where I asked you twice to explain more. That you resorted to insults instead of explaining your thinking says a lot more about you than it does me.


The biggest worry is that the data might be right and might be used by an insurance provider to deny a person's coverage.

Though that's not a realistic problem. The various DNA ancestry companies' privacy policies prevent them sharing with insurance companies.

The biggest worry is that the data might be right and might be used by an insurance provider to deny a person’s coverage

Ok, but if that's something insurance companies want to do, they're not going to be stopped because you didn't send a DNA sample to 23andMe, nor are they going to have to go scrape up questionable data off the black market. They'll simply offer people some discount for sending in a DNA sample or even make it a requirement for coverage.

Explain what someone’s realistically going to do with your DNA data.

You are obviously oblivious to how mass surveillance works, and how much it can destroy our freedoms. Services like 23andMe keep a database of all the DNA they have received. This database is often shared with governments and can be used to create relationship maps: who is what to whom. This information can be and is being weaponized against us on a daily basis.

In what ways is it actively being weaponized? Examples, sources?

You are obviously oblivious to how mass-surveillance works, and how much it can destroy our freedoms.

I'm pretty sure they're currently doing the mass surveillance thing just fine without DNA data. I'm not sure how DNA would even factor into mass surveillance. I'm open to considering realistic scenarios.

Services like 23AndMe keep a database over all the DNA they have received.

Yes, it's how they provide the service.

This database is often shared with governments, and can be used to create relationship maps - who is what to whom.

What's your evidence for this claim?

This information can be and is being weaponized against us on a daily basis.

How? By who? What's your evidence?

I'm betting you have no evidence and will simply appeal to some instance where some company sold some data to the government in a situation that isn't at all analogous.

The evidence is literally publicly available. It takes mere seconds to find court records and articles online. But it is just easier for you to sit there and scream "what is your evidence?" like a headless chicken, right?

I'm not going to try to guess what you think the evidence is. If it's as readily available as you claim, it should be trivial for you to go find it and show me. The fact that you haven't yet says a lot about how honest you're actually being.

Sell to insurance companies. Genetic predisposition towards certain illnesses? That's a premium.

And the insidious thing is, it's not even just you. Any relative that does a test, boom, they know.

Sell to insurance companies. Genetic predisposition towards certain illnesses? That’s a premium.

If that's something that those companies were interested in doing, why wouldn't they just require people applying for coverage to submit a DNA sample? That would be way easier, more reliable, and less shady compared to trying to piece together profiles based on data being sold on the black market.

If I am an insurance company, and I have data that says you are carrying a gene that is correlated with colon cancer, I can either raise the fuck out of your rates because you're a risky client who might cost me a lot of money in colon cancer treatments, or when you do get colon cancer I could refuse to cover it because I have a contract clause you didn't read that says if you're genetically correlated, that's functionally a pre-existing condition and thus isn't a part of your coverage.

If I am a med company, and I know which of your genes correlate with known treatable genetic diseases that become fatal or more serious to people like you with those genes, I can raise the price of your medication. You have to pay, because you will die if you don't, so I can ask for any price.

If I am a Texas politician, who is already threatening hospitals across the nation illegally for your private medical data, I am salivating trying to get your DNA. Correlate any gene, or suite of genes, with a population of people you do not like, and you can target them through this. "Prove" a genetic superiority to defend and promote eugenic ideals, while targeting your racial scapegoat at a genetic level. Look like one race? Well, your blood says you're not pure, so you're next too.

These are only the obvious problems.

If I am an insurance company, and I have data that says you are carrying a gene that is correlated with colon cancer...

You think an insurance company would leave money on the table if they thought your DNA could save them a few bucks? They'd either offer discounts to people for submitting DNA samples or require DNA samples as a condition of coverage.

If I am a med company, and I know what your genes correlate with known treatable genetic diseases ...

Med companies don't need your DNA to know that they can charge more for life-saving medication. They just need you to know that you have a particular condition, and then make sure you know about their medication. If the disease in question is fatal, like your example, it actually seems like a win for the person in question that there's a cure for their condition.

If I am a texas politician, who is already threatening hospitals across the nation illegally for your private medical data, I am salivating trying to get your dna...

Ah yes, the Texas politician who is going to let the lack of DNA data stand in the way of his eugenic designs. Okay. Totally realistic.

The insurance company doesn't want or need to give you discounts. They are buying this data from companies like 23andMe, after the professionals have indexed and prorated it. Telling the customer risks scandal, and buying from you means they need to process it in-house. This back-door, pre-analyzed data sharing keeps you in the dark, and your money in their pocket.

Med companies do not use this to develop the medication; they use it to change the price of existing meds based on your need. Diseases and disorders are not equally lethal. They are buying this data to get the information on how badly you need the drug, and alter the price accordingly.

They aren't going to let anything stand in the way of their plans; they are already illegally collecting this information. More data makes this easier for them.


I'm just of the general opinion that any personal data you entrust to any corporation is going to be at risk, regardless of its assurances. There's also a risk of that corporation being legitimately acquired by another, thus nullifying previous TOS, etc. Or worst case, they sell all your info anyway. Connected technology is moving quickly. What might seem safe to share today could become the basis of an insurance claim denial when they discover a genetic predisposition they believe you were obligated to disclose.

This is at least partly true. If you reuse the same password across sites, you should expect to get pwned.

It is; it's their fault for sending their data to some company that wants their DNA. I'm curious too, but I'm not that dumb.

Victim blaming is so cool!

y'all are projecting a whole lot onto what I said here... go right ahead, I know that you will never see things any way but your own. Have a nice day.

You're literally blaming the victims and calling them dumb, how am I projecting?

If you are dumb enough to send your DNA to a company that keeps it in a database forever, and often shares it with governments for relationship mapping and population control, you deserve everything you get.

Victim blaming is so fun, isn't it? Do you feel big and strong?

Well, when somebody drives drunk and kills themselves, I will also say that they brought it on themselves. Play stupid games, win stupid prizes.

You're a fucking buffoon. Driving drunk is absolutely different. Grow up.

Wow, am I glad nobody in my family has used this whack service!

Who's up for an old-skool LOIC session against this bunch of clowns lol 🤣 🤣 🤣

Congratulations this is the cringiest thing I've read today.