
Hope this isn't a repeated submission. Funny how they're trying to deflect blame after they tried to change the EULA post-breach.

[-] dpkonofa@lemmy.world 214 points 10 months ago

I'm seeing so much FUD and misinformation being spread about this that I wonder what's the motivation behind the stories reporting this. These are as close to the facts as I can state from what I've read about the situation:

  1. 23andMe was not hacked or breached.
  2. Another site (as of yet undisclosed) was breached and a database of usernames, passwords/hashes, last known login location, personal info, and recent IP addresses was accessed and downloaded by an attacker.
  3. The attacker took the database dump to the dark web and attempted to sell the leaked info.
  4. Another attacker purchased the data and began testing the logins on 23andMe with a botnet that tried the retrieved username/password pairs, using the last known login locations to pick nodes close to those locations.
  5. None of the compromised accounts had MFA enabled.
  6. Data visible to a compromised account, including anything shared with it through opt-in data sharing, was also visible to the attacker who compromised it.
  7. No data that wasn't opted into was shared.
  8. 23andMe now requires MFA on all accounts (started once they were notified of a potential issue).

I agree with 23andMe. I don't see how it's their fault that users reused their passwords from other sites and didn't turn on Multi-Factor Authentication. In my opinion, they should have forced MFA for people but not doing so doesn't suddenly make them culpable for users' poor security practices.

[-] Kittenstix@lemmy.world 70 points 10 months ago

I think most internet users are straight up smooth-brained. I have to pull my wife's hair to get her not to use my first name twice and the year we were married as a password, and even then I only succeed 30% of the time. She had the nerve to bitch and moan when her Walmart account got hacked; she's just lucky she didn't have the CC attached to it.

And she makes 3 times as much as I do, there is no helping people.

[-] SnotFlickerman@lemmy.blahaj.zone 38 points 10 months ago* (last edited 10 months ago)

These people remind me of my old roommate who "just wanted to live in a neighborhood where you don't have to lock your doors."

We lived kind of in the fucking woods outside of town, and some of our nearest neighbors had a fucking meth lab on their property.

I literally told him you can't fucking will that want into reality, man.

You can't just choose to leave your doors unlocked hoping that this will turn out to be that neighborhood.

I eventually moved the fuck out because I can't deal with that kind of hippie dippie bullshit. Life isn't fucking The Secret.

[-] c0mbatbag3l@lemmy.world 24 points 10 months ago

I have friends that occasionally bitch about the way things are but refuse to engage with whatever systems are set up to help solve whatever given problem they have. "It shouldn't be like that! It should work like X."

Well, it doesn't. We can try to change things for the better but refusal to engage with the current system isn't an excuse for why your life is shit.

[-] SnotFlickerman@lemmy.blahaj.zone -4 points 10 months ago* (last edited 10 months ago)

~~The bootlickers really come out of the woodwork here to suck on corporate boot.~~

Edit: wrong thread.

[-] NoIWontPickaName@kbin.social 3 points 10 months ago

What in the fuck are you talking about? You’re the one standing up for the corporation

[-] SnotFlickerman@lemmy.blahaj.zone 0 points 10 months ago* (last edited 10 months ago)

Yeah that is my bad, responded to the wrong thread.

In this case, the corporation isn't wrong that users aren't doing due diligence.

[-] NoIWontPickaName@kbin.social 4 points 10 months ago

Happens to the best of us

[-] aksdb@feddit.de 1 points 10 months ago

I would definitely want my door locked for that.

[-] Ibex0@lemmy.world 7 points 10 months ago

Lately I try to get people to use Chrome's built-in password manager. It's simple and it works across platforms.

[-] Chobbes@lemmy.world 20 points 10 months ago

I get that people aren’t a fan of Google, and I’m not either, but this is a reasonable option that would be better than what the vast majority of people are doing now…

[-] Ibex0@lemmy.world 1 points 10 months ago

That's what I'm getting at. It's an upgrade for most users and certainly novices. I thought I was being clever with a password manager and they got hacked twice (you know who).

[-] SnotFlickerman@lemmy.blahaj.zone 14 points 10 months ago* (last edited 10 months ago)

Bitwarden is simple, works across platforms, is open source, and isn't trusting your data to a company whose *checks notes* entire business model is based on sucking up as much data as possible to use for ad-targeting.

I'll trust the company whose business model isn't built on data-harvesting, thanks.

Also, Firefox is better for the health of the web, Google is using Chrome as a backdoor to dictate web standards, yadda yadda.

[-] psud@lemmy.world 1 points 10 months ago

You and I can choose our tools as the best for our use case and for the good of the internet in general, but our non-tech friends can't.

I convinced a friend to use KeePass, but he wouldn't spend the time to learn it. I now tell him and others like him to just use Chrome's suggested password.

[-] kautau@lemmy.world -1 points 10 months ago

~~internet users~~

people

[-] MimicJar@lemmy.world 14 points 10 months ago

I agree that, by all accounts, 23andMe didn't do anything wrong. However, could they have done more?

For example the 14,000 compromised accounts.

  • Did they all login from the same location?
  • Did they all login around the same time?
  • Did they exhibit strange login behavior like always logged in from California, suddenly logged in from Europe?
  • Did these accounts, after logging in, perform actions that seemed automated?
  • Did these accounts access more data than the average user?

In hindsight, some of these questions might be easier to answer. It's possible a company with even better security could have detected and shut down these compromised accounts before they collected the data of millions of accounts. It's also possible they did everything right.

A full investigation makes sense.

[-] dpkonofa@lemmy.world 27 points 10 months ago

I already said they could have done more. They could have forced MFA.

All the other bullet points were already addressed: they used a botnet that, combined with the "last login location" allowed them to use endpoints from the same country (and possibly even city) that matched that location over the course of several months. So, to put it simply - no, no, no, maybe but no way to tell, maybe but no way to tell.

A full investigation makes sense but the OP is about 23andMe's statement that the crux is users reusing passwords and not enabling MFA and they're right about that. They could have done more but, even then, there's no guarantee that someone with the right username/password combo could be detected.

[-] EssentialCoffee@midwest.social -1 points 10 months ago

I'm not sure how much MFA would have mattered in this case.

The 23andMe login is an email address, and most MFA implementations seem to offer email as an option these days. If people are already reusing passwords, the bad actor already has a password that's likely going to work for the email accounts of the affected users. Would MFA have brought the number down? Sure, but it doesn't seem like it would've been the silver bullet everyone thinks it is.

[-] dpkonofa@lemmy.world 2 points 10 months ago

It's a big enough deterrent to make it cumbersome. It's not that easy to automate pulling an MFA code from an email when there are different providers involved and all that. The people that pulled this off did it via a botnet, and I would be very surprised if that botnet was able to recognize an MFA login, log into the email, get the code, enter it, and then proceed. It seems like more effort than it's worth at that point.

[-] Monument@lemmy.sdf.org 6 points 10 months ago

Those are my questions, too. It boggles my mind that so many accounts didn’t seem to raise a red flag. Did 23&Me have any sort of suspicious behavior detection?

And how did those breached accounts access that much data without it being observed as an obvious pattern?

[-] douglasg14b@lemmy.world 13 points 10 months ago* (last edited 10 months ago)

If the accounts were logged into from geographically similar locations at normal volumes then it wouldn't look too out of the ordinary.

The part that would probably look suspicious would be the increase in traffic from data exfiltration. However, that would probably be a low priority alert for most engineering orgs.

Even less likely when you have a bot network that is performing normal logins with limited data exfiltration over the course of multiple months to normalize any sort of monitoring and analytics, rendering such alerting inert, since the traffic would appear normal.

Setting up monitoring and analysis of user accounts, where they're logging in from, and suspicious activity isn't exactly easy. It's so difficult that most companies tend to just defer to large players like Google and Microsoft to do it for them. And even if they had this set up, which I imagine they already did, it was defeated.

[-] sudneo@lemmy.world 3 points 10 months ago

If the accounts were logged into from geographically similar locations at normal volumes then it wouldn’t look too out of the ordinary.

I mean, device fingerprinting is used for this purpose. Then there is the geographic pattern, the IP reputation etc. Any difference -> ask MFA.

It’s so difficult that most companies tend to just defer to large players like Google and Microsoft to do this for them.

Cloudflare, Imperva, Akamai I believe all offer these services. These are some of the players who can help against this type of attack, plus of course in-house tools. If you decide to collect sensitive data, you should also provide appropriate security. If you don't want to pay for services, force MFA at every login.
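A toy sketch of that "any difference -> ask MFA" check. All names here (`KNOWN`, the fields on `LoginAttempt`) are hypothetical; a real system scores many more signals (IP reputation, velocity, time of day) rather than a binary match:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LoginAttempt:
    user_id: str
    country: str
    device_fingerprint: str  # e.g. a hash of UA string, fonts, screen size

# Hypothetical per-user history: countries and devices seen on past logins.
KNOWN = {
    "alice": {"countries": {"US"}, "devices": {"fp-1"}},
}

def requires_mfa(attempt: LoginAttempt) -> bool:
    """Step up to MFA on any deviation from the user's known login pattern."""
    history = KNOWN.get(attempt.user_id)
    if history is None:
        return True  # no history at all: always challenge
    new_country = attempt.country not in history["countries"]
    new_device = attempt.device_fingerprint not in history["devices"]
    return new_country or new_device
```

Note this is exactly what the attack in question was built to defeat: the botnet's geo-matched exit nodes make `new_country` come back False, which is why device fingerprinting has to carry the weight.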

[-] sudneo@lemmy.world 8 points 10 months ago

Credential stuffing is a well-known attack that organizations like 23andMe definitely should have in their threat model. There are mitigations, such as preventing compromised credentials from being used at registration, protecting from bots (as imperfect as that is), enforcing MFA, etc.

This is their breach indeed.

[-] dpkonofa@lemmy.world 18 points 10 months ago* (last edited 10 months ago)

They did. They had MFA available and these users chose not to enable it. Every 23andMe account is prompted to set up MFA when they start. If people chose not to enable it and then someone gets access to their username and password, that is not 23andMe's fault.

Also, how do you go about "preventing compromised credentials" if you don't know that the credentials are compromised ahead of time? The dataset in question was never publicly shared. It was being sold privately.

[-] sudneo@lemmy.world 7 points 10 months ago

The fact that they did not enforce 2FA for everyone (mandatory, not just available as a feature) is their responsibility. You are handling super sensitive data; credential stuffing is an attack with a super low level of complexity and high likelihood.

Similarly, they probably did not enforce complexity requirements on passwords (making an educated guess here), or at least not sufficiently, which is also their fault.

Regarding the last bit, it might not have helped against this specific breach, but we don't know that. There are companies who offer threat intelligence services and buy breached data specifically to offer this service.

Anyway, in general the point I want to make is simple: if the only defense you have against a known attack like this is a user who chooses a strong and unique password, you don't have sufficient controls.

[-] dpkonofa@lemmy.world 10 points 10 months ago* (last edited 10 months ago)

I guess we just have different ideas of responsibility. It was 23andMe’s responsibility to offer MFA, and they did. It was the user’s responsibility to choose secure passwords and enable MFA and they didn’t. I would even play devil’s advocate and say that sharing your info with strangers was also the user’s responsibility but that 23andMe could have forced MFA on accounts who shared data with other accounts.

Many people hate MFA systems. It’s up to each user to determine how securely they want to protect their data. The users in question clearly didn’t if they reused passwords and didn’t enable MFA when prompted.

[-] sudneo@lemmy.world 12 points 10 months ago

My idea is definitely biased by the fact that I am a security engineer by trade. I believe a company is ultimately responsible for the security of their users, even if the threat is the users' own behavior. The company is the one able to afford a security department who is competent about the attacks their users are exposed to and able to mitigate them (to a certain extent), and that's why you enforce things.

Very often companies use "ease" or "users don't like it" to justify the absence of security measures such as enforced 2FA. But this is their choice: they prioritize not pissing off (potentially) a small % of users at the price of less security for all users (especially the less proficient ones). It is a business choice that they need to be accountable for. I also want to stress that, despite being mostly useless, different compliance standards also require measures that protect users who use simple or repeated passwords. That's why complexity requirements are sometimes demanded, as is the trivial brute-force protection with a lockout period (for example, most gambling licenses require both, and companies who don't enforce them cannot operate in a certain market). Preventing credential stuffing is no different, and if we look at OWASP's recommendation, it's clear that enforcing MFA is the way to go, even if maybe in a way that does not trigger all the time, which would have worked in this case.

It’s up to each user to determine how securely they want to protect their data.

Hard disagree. The company, i.e. the data processor, is the only one who has the full understanding of the data (sensitivity, amount, etc.) and a security department. That's the entity who needs to understand what threat actors exist for the users and implement controls appropriately. Would you trust a bank that allowed you to login and make bank transfers using just a login/password with no requirements whatsoever on the password and no brute force prevention?

[-] dpkonofa@lemmy.world 1 points 10 months ago

This wasn’t a brute force attack, though. Even if they had brute force detection, and I don’t know whether they do, it would have done nothing to help this situation, as nothing was brute forced in a way that would have been detected. The attempts were spread out over months using bots that were local to the last good login location. That’s the primary issue here. The logins looked legitimate. It wasn’t until after the exposure that they knew they weren’t, and that was because of other signals that 23andMe obviously had in place (I’m guessing usage patterns or automation detection).

[-] sudneo@lemmy.world 3 points 10 months ago

Of course this is not a brute force attack; credential stuffing is different from brute forcing, and I am well aware of it. What I am saying is that the "lockout period" and rate limiting (useful against brute force attacks) for logins are both security measures that are sometimes demanded of companies. Even in the case of brute forcing, it's the user who picks a "brute-forceable" password. A 100-character password with numbers, letters, symbols and capital letters is essentially impossible to brute force. The industry recognized, however, that it's the responsibility of organizations to implement protections from brute forcing, even though users can already "protect themselves".

So why would it be different in the case of credential stuffing? Of course, users can "protect themselves" by using unique passwords, but I still think it's the responsibility of the company to implement appropriate controls against this attack, in the same exact way that it's their responsibility to implement rate limiting on logins or a lockout after N failed attempts. In the case of stuffing attacks, MFA is the main control that should simply be enforced, or at the very least required (e.g., via email, which is weak but better than nothing) when any new pattern in a login emerges (a new device, for example). 23andMe failed to implement this, and blaming users is the same as blaming users for having their passwords brute forced when no rate limiting, lockout period, complexity requirements, etc. are implemented.

[-] dpkonofa@lemmy.world 1 points 10 months ago* (last edited 10 months ago)

So forced MFA is the only way to prevent what happened? That’s basically what you’re saying, right?

Their other mechanisms would prevent credential stuffing (e.g., rate limits, comparing login locations) so how was this still successful?

[-] sudneo@lemmy.world 4 points 10 months ago

Yes, forced MFA (where forced means every user is required to configure it) is the most effective way. Other countermeasures can be effective, depending on how they are implemented and how the attackers carry out the attack. Rate limiting, for example, depends on arbitrary thresholds that attackers can bypass by slowing down and spreading the logins over multiple IPs. Another thing you can do is prevent bots from accessing the system (captchas and similar; this is usually a service from CDNs), which can also be bypassed by farms and, in some cases, clever scripting. Login location detection is only useful if you can ask for MFA afterwards and if it is combined with solid device fingerprinting.

My guess at what went wrong in this case is that the attackers spread the attack very nicely (making rate limiting ineffective) and the mechanism to detect suspicious logins (country etc.) was too basic, taking into account too little and too generic data. Again, all these measures are only effective against dumb attackers. MFA (at most paired with strong device fingerprinting) is the only effective way there is; that's why it's on them to enforce, not offer, 2FA. They need to prevent the attack, not just let users take this decision.

[-] lightnsfw@reddthat.com 1 points 10 months ago

There are services that check provided credentials against a dictionary of compromised ones and reject them. Off the top of my head Microsoft Azure does this and so does Nextcloud.
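Have I Been Pwned's Pwned Passwords range API does the same with k-anonymity: you send only the first five hex characters of the password's SHA-1 and match the returned suffixes locally, so the password never leaves your server. A sketch (the endpoint is real; error handling and the recommended `Add-Padding` header omitted):

```python
import hashlib
import urllib.request

def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into a 5-char prefix and suffix."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password: str) -> int:
    """Return how many times the password appears in known breach corpora."""
    prefix, suffix = sha1_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    req = urllib.request.Request(url, headers={"User-Agent": "password-check-sketch"})
    with urllib.request.urlopen(req) as resp:
        # Response is lines of "SUFFIX:COUNT"; match our suffix locally.
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0
```

Reject the password at registration or password change if the count is nonzero.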

[-] dpkonofa@lemmy.world 1 points 10 months ago

This assumes that the compromised credentials were made public prior to the exfiltration. In this case, it wasn’t as the data was being sold privately on the dark web. HIBP, Azure, and Nextcloud would have done nothing to prevent this.

[-] lightnsfw@reddthat.com 1 points 10 months ago

Yea, you're right. Good point.

[-] serial_crusher@lemmy.basedcount.com 1 points 10 months ago

Is there a standards body web developers should rely on which suggests requiring MFA for every account? OWASP, for example, only recommends requiring it for administrative users, and recommends giving regular users the option without requiring it.

There are some positives to requiring MFA for all users, but like any decision there are trade-offs. How can we throw 23andMe under the bus when they were compliant with industry best practices?

[-] sudneo@lemmy.world 1 points 10 months ago

I don't think it's possible to make a blanket statement in this sense. For example, Lemmy doesn't handle data as sensitive as 23andMe's. In that case, it might be totally acceptable to have the feature but not require it. Banks (at least in Europe) never let you log in with just a username and password. They definitely comply with different standards, and in general it is well understood that the sensitivity of the data (and actions) needs to be reflected in more severe controls against the relevant attacks.

For a company with such sensitive data, their threat model should definitely have included credential stuffing attacks, and therefore they should have implemented the measures that are recommended against this attack. Quoting from OWASP:

Multi-factor authentication (MFA) is by far the best defense against the majority of password-related attacks, including credential stuffing and password spraying, with analysis by Microsoft suggesting that it would have stopped 99.9% of account compromises. As such, it should be implemented wherever possible; however, depending on the audience of the application, it may not be practical or feasible to enforce the use of MFA.

In other words, unless 23andMe had specific reasons not to implement such a control, they should have. If they simply chose not to (because security was an afterthought, because it would have meant losing a few customers, etc.), it's their fault for not building a security posture appropriate to the risk they are subject to, and therefore they are responsible for it.

Obviously not every service should be worried about credential stuffing, therefore OWASP can't say "every account needs to have MFA". It is the responsibility of each organization (and their security department) to do the job of identifying the threats they are exposed to.

[-] Xer0@lemmy.ml 3 points 10 months ago

I agree. The people blaming the website are ridiculous here.

[-] dpkonofa@lemmy.world 4 points 10 months ago

It’s just odd that people get such big hate boners from ignorance. Everything I’m reading about this is telling me that 23andMe should have enabled forced MFA before this happened rather than after, which I agree with, but that doesn’t mean this result is entirely their fault either. People need to take some personal responsibility sometimes with their own personal info.

this post was submitted on 03 Jan 2024
815 points (94.1% liked)

Technology
