2) BAD website says “We’ve sent you an email, please enter the 6-digit code! The email will come from GOOD, as they are our sign-in partner.”
3) BAD’s bots start a “Sign in with email one-time code” flow on the GOOD website using the user’s email.
4) GOOD sends a one-time login code email to the user’s email address.
5) The user is very likely to trust this email, because it’s from GOOD, and why would GOOD send it if it’s not a proper login?
6) User enters code into BAD’s website.
7) BAD uses the code to log in to GOOD’s website as the user. BAD now has full access to the user’s GOOD account.
This is why “email me a one-time code” is one of the worst authentication flows for phishing. It’s just so hard to stop users from making this mistake.
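The relay works because a server-side one-time-code check has no notion of where the user typed the code. A minimal sketch of such a check (hypothetical names, Python standard library only; not any particular site's implementation):

```python
import secrets
import time

class OtpStore:
    """Toy emailed-OTP verifier: the server only checks that the right
    code came back for the right email within the time window. Nothing
    binds the code to the site the user actually typed it into, so a
    phishing site can forward the victim's input verbatim."""

    def __init__(self, ttl: int = 300):
        self.codes = {}  # email -> (code, expires_at)
        self.ttl = ttl

    def issue(self, email: str) -> str:
        code = f"{secrets.randbelow(10**6):06d}"
        self.codes[email] = (code, time.time() + self.ttl)
        return code  # in reality, sent to the user by email

    def verify(self, email: str, code: str) -> bool:
        stored = self.codes.get(email)
        if not stored:
            return False
        expected, expires = stored
        return time.time() < expires and secrets.compare_digest(expected, code)

store = OtpStore()
emailed = store.issue("victim@example.com")

# BAD's site relays whatever the victim types; the server cannot tell
# this apart from the victim entering it on GOOD directly.
assert store.verify("victim@example.com", emailed)
```

Note that nothing in `verify` could even in principle distinguish the relayed case; that is the structural weakness, not an implementation bug.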
“Click a link in the email” is a tiny bit better because it takes the user straight to the GOOD website, and passing that link to BAD is more tedious and therefore more suspicious. However, if some popular email service suddenly decides your login emails or the login link within should be blocked, then suddenly many of your users cannot login.
Passkeys are the way to go. Password manager support for passkeys is getting really good. And I assure you, all passkeys being lost when a user loses their phone is far, far better than what’s been happening with passwords. I’d rather granny need to visit the bank to regain access to her account than have someone phish her and steal all her money.
The problems of Passkeys are more nuanced than just losing access when a device is lost (which actually doesn't need to happen, depending on your setup). The biggest problem is attestations, which let services block users who use tools that give them more freedom. Passkeys, or more generally challenge-response protocols, could easily have been an amazing replacement for passwords and a win-win for everyone. Unfortunately, the reality of how they've been designed is that they will mainly serve to further cement the primacy of BigTech and take away user freedom.
I want to like passkeys but I haven't had any success getting them to work. Every time I click on "sign in using passkey" both my browser (Firefox or Chrome, on Android/Win/Mac) and Bitwarden are like "no passkeys found" and I'm never given an option to create one.
I feel like I'm doing something stupidly wrong or missing a prompt somewhere, or maybe UX is just shitty everywhere, but if I, a millennial who grew up programming and building computers, struggle with this, then I don't expect my mom, who resets her password pretty much every time she needs to sign into her bank, to get it to work.
I'm in the same boat. I just cannot get them to work; they work sometimes on some browsers, but a solid majority of the time I click on "use passkey" I get a generic error message and end up going back and using the password flow.
I haven't invested more time in this because if it's so unusable for me as an engineer, it's a non-starter for the general public.
I've found passkey support on Windows to be a bit janky right now. You can get into weird scenarios where the passkeys don't work, but there's also no UI to remove or reset them. This is especially annoying when you change out some component in your PC: that seems to invalidate however they are encrypted, and Windows can't figure out why it can't access them.
I use Bitwarden in Firefox and passkeys "just work" on Linux, Android, Mac, and Windows. Previously, I used the extension in Chrome (Linux, Android, Windows).
The only relevant Bitwarden setting appears to be "Ask to save and use passkeys" under Notifications. I do turn off the browser's built-in password manager, though I believe anything else relevant I have at default. If you have those and they still aren't getting saved, then I'm at a loss, but wish I knew why they don't work for you. In Bitwarden, you can see if there's a passkey saved in an entry, as the creation timestamp is shown right under the password field and editing an entry also allows deleting a passkey.
I've seen that kind of comment multiple times, and I don't get it.
I use Yubikeys, and passkeys just work. On Chrome, Firefox and Safari, both on macOS and Linux (specifically Alpine). I also tried with iPhones (for my family), and it also just works.
I haven't tried using an Android device; is that what you are trying?
All non-enterprise big tech uses of passkeys (Google, Apple & Microsoft Accounts) do not require an attestation statement (or in spec parlance, use the `None` or `Self` Attestation Types).
The presence of other attestation types in the spec allows passkeys to replace the use of other classes of authentication that already exist (e.g. smartcard). For example, it's very reasonable for a company to want to ensure that all their employees are using hardware Yubikeys for authentication. Furthermore, sharing the bulk of implementation with the basic case is a huge win. Codepaths are better tested, the UIs are better supported on client computers, etc.
The presence of attestations in the spec does not impinge on user freedom in any meaningful way.
Do you have some examples where people actually require attestation in 3rd-party-facing systems? Or is this purely "But in theory..." and you've dismissed all the very real problems with the alternatives because you're scared of a theoretical problem?
I always reject attestation requests and I don't recall ever having been refused, so if this was a real problem it seems like I ought to have noticed by now.
Microsoft Entra ID goes out of its way to enforce attestation for FIDO 2 keys.
The protocol normally allows you to omit the attestation, but they worked in an extra call after a successful registration flow that sends you to an error page if your FIDO2 passkey isn't from one of these large approved vendors: https://learn.microsoft.com/en-us/entra/identity/authenticat...
I found out by trying to prototype my own FIDO2 passkey, and losing my mind trying to understand why a successful flow that worked fine on other websites failed with Microsoft. It turns out you are not allowed to do that.
To defend Redmond here, Entra is an enterprise system. If the company you work for or are interfacing with wants to enforce attestation, that's their business.
B2C I would expect more latitude on requiring attestation.
A problem is that once a thing like that exists, it ends up on security audit checklists and then people do it without knowing whether they have any reason to.
I would counter-argue, being the person pushing passkeys in an enterprise: no one in the business knows what attestation is, but we're going to do it because the interface recommends it.
I'm not sure it's the standards committee's fault that your employer hires people that don't know how to do their job.
I think it's reasonable to have attestation for the corporate use case. If they're buying security devices from a certain vendor, it's reasonable for their server to check that the person pretending to be you at the other end is using one of those devices. It's an extra bit of confidence that you're actually you.
Exactly. For personal authentication, you are at least personally incentivized to do the right things. For corporate auth, people will do whatever it takes to skip any kind of login.
I once knew a guy who refused to let his office computer go to sleep just to avoid having to enter his password to unlock it. He was a really senior guy too, so IT bent to allow him to do this. What finally made him lock his computer was a colleague sending an email to all staff from his open Outlook saying “Hi everyone, it’s my birthday today and I’m disappointed because hardly anyone has come by to wish me happy birthday”. The sheer mortification made him change his ways.
Yeah Microsoft is so annoying. It's also kicking me out every day now (with this passive aggressive "hang on while we're signing you out" message). On M365 business with Firefox on Linux with adblocker. I hate using their stuff so much.
Yes me too since a couple months :( So annoying. It doesn't of course happen on Windows.
It started with OneNote web a couple years ago. Every day it gave a popup ("Your session needs to be refreshed") and it would reload all over again. Microsoft doesn't bother to make a OneNote desktop app for my platform, and the web version is really terrible anyway (you can only search in one tab, not a whole notebook). So I moved to self-hosted Obsidian, which I'm really happy with. Now I can basically see myself typing in a note from another client.
But replacing Microsoft for email is another topic.
I don't work in August, so I can't (well, won't) check, but my boss had the infrastructure team turn on FIDO2 for the mandatory 2FA on our administrative accounts and I do not remember having any problems with this.
I do remember explicitly telling them (because of course having agreed to do this they have no idea how and need our instructions) not to enable attestation because it's a bad idea, but you seem to be saying that it'll somehow be demanded (and then ignored) anyway and that was not my experience.
So, I guess what I'm saying here is: Are you really sure it's demanded and then ignored if you turn it off from the administrative controls? Because that was not my impression.
It's been a little while, but I believe at the time you'd get a CTAP/CBOR MakeCredentialRequest, the browser would ask you to confirm that you allow MS to see the make and model of your security key, and it would send the response to a Microsoft VerifySecurityInfo API.
If you refused to provide make and model, IIRC you would fail the check whether enforcement was enabled or not. Then if enforcement was enabled and your AAGUID didn't match the list, you would see a different error code.
Either way, you're sending over an attestation. They understandably forbid attestation format "none" or self-signed attestations. It's possible that this has changed, but the doc page still seems to say they won't accept a device without a packed attestation, it's only that the AAGUID check can currently be skipped.
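For illustration, the registration-side policy being described might look roughly like the following (a hedged sketch: plain dicts stand in for real CBOR attestation objects, the allowlist value is a placeholder, and all function names are made up):

```python
# Hypothetical sketch of an attestation policy check like the one
# described above. Real WebAuthn servers parse a CBOR attestation
# object and verify its certificate chain; plain dicts stand in here.

# Placeholder allowlist; real deployments list approved vendor AAGUIDs.
APPROVED_AAGUIDS = {"00000000-0000-0000-0000-000000000001"}

def check_attestation(att: dict, enforce_aaguid: bool) -> tuple[bool, str]:
    # Reject attestation format "none" and self attestation outright.
    if att.get("fmt") in (None, "none"):
        return False, "attestation format 'none' rejected"
    if att.get("self_attested"):
        return False, "self attestation rejected"
    # Optionally also require a known authenticator model.
    if enforce_aaguid and att.get("aaguid") not in APPROVED_AAGUIDS:
        return False, "authenticator model not on allowlist"
    return True, "ok"

# A homemade passkey with a packed attestation still fails once the
# AAGUID allowlist is enforced:
ok, reason = check_attestation({"fmt": "packed", "aaguid": "my-diy-key"}, True)
assert not ok and reason == "authenticator model not on allowlist"
```

This matches the two-tier behavior described: the format check always runs, while the AAGUID check is the part that can be toggled.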
Passkeys are in their infancy. You don't go about rolling out such patterns when most users haven't even switched yet and big players like Apple are still resisting attestations (last time I checked). The problem is that the feature is there and can be (ab)-used in this way, so it should be rejected on principle, irrespective of whether it's a problem right now.
I understand the value of attestations in a corporate environment when you want to lock down your employees' devices. But that could simply have been handled through a separate standard for that use case.
At the very least, the spec should be painstakingly insistent that implementors not require attestation unless they have really thought through and understood why they need the security properties attestation provides in their particular use case. And that reason has to be something more meaningful than “be more secure this way”: security is not a rating (even though security ratings exist) but a set of properties, not every possible security guarantee is universally desirable (please correct me if I’m wrong here, of course), and at least some come with downsides. Maybe even strongly recommend that library authors pass the message on.
I agree, but unfortunately the spec authors are already going out and dangling possible bans in front of projects who implement Passkeys in more user-friendly ways:
> To be very honest here, you risk having KeePassXC blocked by relying parties
But having a choice about how you store your credentials shouldn't depend on the good faith of service providers or the spec authors who are doing their bidding anyway. It's a bit similar to sideloading apps, and it should probably be treated similarly (ie, make it a right for users).
There's a tension here between "user freedom" and a service wanting to make sure that credentials that it trusts to grant access to stuff aren't just being yolo'd around into textfiles on people's dropboxes.
People forget that one of the purposes of authentication is to protect both the end user and the service operator.
Sure, but as long as the fallback for account recovery is sending a reset email or sms (both of which are similar or worse than yoloing textfiles on dropboxes), that's a very tough argument to make in good faith.
This attitude has got to stop. Is it not enough that there's no customer service and it's almost impossible to sue these companies thanks to arbitration clauses? Now they need to have control over our computing to keep themselves safe? And how many recorded incidents of losing an account because someone had their "password in a text file" are even out there? The most common scenarios one hears about are either phishing or social engineering.
Do you think someone running a service that's under constant denial-of-service attacks would be sympathetic to the argument that "what people do on their own computer is none of the service's business"?
Pretty much every service out there has "don't share credentials" in their ToU. You don't have to like it, but you also don't have to accept the ToU.
Note the scare quotes around user freedom. Perhaps user freedom is a notorious fake issue, a bizarre misconception, or an exotic concept that nobody understands.
Ensuring it's not possible for remote attackers to easily steal users passkeys is not "removing all rights" for someone. It is setting a security bar you have to pass. One user's poor security can have negative effects on not just them but the platform itself.
You’re falling for the exact “better security” fallacy I was trying to warn about. Security is not a rating, “better security/guarantee” is not a really meaningful phrase on its own, even though it’s very tempting to take mental shortcuts and think in such terms.
Attestation provides a guarantee that the credential is stored in a system controlled by a specific vendor. It’s not “more” or “less” secure, it’s just what it literally says. It provides guarantees of uniformity, not safe storage of credentials. An implementation from a different vendor is not necessarily flawed! And properties/guarantees don’t live on some universal (or majority-applicable) “good-to-bad” scale, no such thing exists.
This could make sense in a corporate setting, where corporate may have a meaningful reason to want predictability and uniformity. It doesn’t make sense in a free-for-all open world scenario where visitors are diverse.
I guess it’s the same nearsighted attitude that makes companies think they want to stifle competition, even though history has plenty of examples of how it leads to net negative effects in the long run despite all the short-term benefits. It’s as if the ‘00s browser wars haven’t taught people anything (IE won back then, and where is it now?)
Yes, the rate of account compromises is a metric we can define. But attestation doesn't directly or invariably improve this metric. It may do so in some specific scenarios, but it's not universally true (unless proven otherwise, which I highly doubt). In other words, it's not an immediate consequence.
It could help to try to imagine a scenario where limited choice can actually degrade this metric. For example, bugs happen - remember that Infineon vulnerability affecting Yubikeys, or Debian predictable RNG issue, or many more implementation flaws, or various master key leaks. The less diverse the landscape is, the worse the ripples are. And that's just what I can think of right away. (Once again, attestation does not guarantee that implementation is secure, only that it was signed by keys that are supposed to be only in possession of a specific vendor.)
Also, this is not the only metric that may possibly matter. If we think of it, we probably don't want to tunnel vision ourselves into oversimplifying the system, heading into the infamous "lies, damned lies, and statistics" territory. It is dangerous to do so when the true scope is huge - and we're talking about Internet-wide standard so it's mindbogglingly so. All the side effects cannot be neglected, not even in a name of some arbitrarily-selected "greater good".
All this said, please be aware that I'm not saying a lack of attestation has no possible negative effects. Not at all; I can imagine things working either way in different scenarios. All I'm saying is that it's not simple or straightforward, and that careful consideration must be taken. As with everything in our lives, I guess.
Because users want the services they use to be good. They don't want to be sent phishing links from their friend's account that was hijacked by attackers.
I had a meeting with a public servant this morning. He is part of an organization that promotes multi-factor authentication and publicly endorses the view that users are stupid.
The meeting was about him unable to test the APK of the new version of their mobile app. He felt embarrassed, his mobile phone is enrolled in the MDM scheme that disallows side-loading of apps.
What I am trying to say is that assuming users are stupid carries a non-negligible risk that you will be that stupid user one day.
The solution here sounds like having a separate development device that is used to sideload test versions of the app. The idea is that devices may require different levels of security depending on how much can be accessed from that device.
Reducing passkeys to the security level of passwords is not just "making something user friendly". It's undoing all of the hard work everyone else in the ecosystem is putting into making a more secure way for authentication to be done.
Passkeys have several advantages over passwords but not all of them rely on UX controls. They are, after all, public-private keypairs and the private part is never shared during authentication. The wider web never adopted PAKEs so passwords are still sent verbatim over the (TLS-protected) wire.
Not reusing passwords across sites greatly limits the blast radius but verbatim password exchange still carries its own risks. The widespread adoption of TLS addressed most of the issues, as I alluded to already, but there are still insider threats, MITM phishers, and infrastructure compromises from time to time.
How exactly is this "reducing the security level to those of passwords"? For example: you can't use a passkey on attacker's web site even if you have a plaintext copy of the private key.
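Right: the signed response covers the origin the browser is actually talking to, so a response minted on BAD's site never verifies at GOOD. A toy sketch of that binding (all names made up; HMAC stands in for the real asymmetric signature, since WebAuthn actually uses public-key crypto and the private key never leaves the authenticator):

```python
import hashlib
import hmac
import json
import secrets

# Toy model of WebAuthn's origin binding: the client signs BOTH the
# server's random challenge AND the origin it is currently on, so a
# response produced for bad.example never verifies for good.example.

def client_sign(key: bytes, origin: str, challenge: bytes) -> bytes:
    payload = json.dumps({"origin": origin, "challenge": challenge.hex()}).encode()
    return hmac.new(key, payload, hashlib.sha256).digest()

def server_verify(key: bytes, expected_origin: str, challenge: bytes, sig: bytes) -> bool:
    payload = json.dumps({"origin": expected_origin, "challenge": challenge.hex()}).encode()
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

key = secrets.token_bytes(32)
challenge = secrets.token_bytes(16)

# Legitimate flow: the browser is on good.example, so it signs that origin.
sig = client_sign(key, "https://good.example", challenge)
assert server_verify(key, "https://good.example", challenge, sig)

# Phishing flow: the browser is on bad.example; the relayed response fails,
# with no user judgment involved.
phished = client_sign(key, "https://bad.example", challenge)
assert not server_verify(key, "https://good.example", challenge, phished)
```

The origin is supplied by the browser, not the user, which is why this holds even when the user is fully fooled.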
I don't know why people don't see this coming: very obviously, once Passkeys are everywhere, it'll become "we're requiring attestation from approved device bootloaders/enclaves", and that'll be your vendor lock-in, where it'll be just difficult enough that unless you stick with the same provider's phone, you might lose all your passkeys.
> very obviously once Passkeys are everywhere it'll become "we're requiring attestation from approved device bootloaders/enclaves"
This is far from very obvious, especially given that Apple have gone out of their way to not provide attestation data for keychain passkeys. Any service requiring attestation for passkeys will effectively lock out every iPhone user - not going to happen.
If there's no intention of doing this, it should be removed from the protocol. "I promise we'll never use this feature, so long as you implement it" isn't very convincing.
Great, they can use standards that aren't targeted at running services for the general public. It seems like the requirements already diverged.
Drop attestation from passkeys, and I become a promoter. Keep it, and I suggest people stay away.
If it's not something anyone intends to use on public services, this should be uncontroversial. Dropping attestation simplifies implementation, and makes adoption easier as a result.
The fact that sites targeted at the general public are prompting me to use them. Should websites avoid using passkeys and webauthn? Would you like to tell them that they're doing it wrong?
Yeah, so if you want me to trust them, the harmful parts need to get removed from specs used in public contexts.
I would love to use public key cryptography to authenticate with websites, but enabling remote attestation is unacceptable. And pinky swears that attestation won't be used aren't good enough. I've seen enough promises broken. It needs to be systematic, by spec.
Passwords suck. It's depressing that otherwise good alternatives carry poisonous baggage.
Because passkeys are designed to replace passwords across multiple different service contexts, that have different requirements. Just because there's no reason to use it for one use case doesn't mean it's not actually useful in a different one. See things like FIPS140 (which everyone ignores unless they're legally required not to).
Can you sketch out for me the benefit of a public-facing service deciding to require passkey attestation? What's the thought process? Why would they decide to wake up and say "I know, I'm going to require that all of my users authenticate with Yubikeys and nothing else"?
Is there a difference? It's a field in the response payload that nobody is filling out except the corps that need it. Would it make you feel better if they moved it to an appendix and called it an optional extension?
> Do you have some examples where people actually require attestation in 3rd party facing systems?
That's not the right question. The right question is "what companies would be using passkeys if there were no attestation securing them?". To answer that, you might look at the answer to a similar question about X509: "would we be doing banking over http if X509 didn't have attestation?".
All because passkeys backup is deemed "too unsafe and users should never be allowed that feature, so if you implement it we'll kick you out of the treehouse."
The authoritarian nature of passkeys is already on full display. I hope they never get adopted and die.
They paraphrased what you said in the thread, but I don't think it's much of a misrepresentation.
You may have "been one of the most vocal proponents of synced passkeys never being attested to ensure users can use the credential manager of their choice", but as soon as one such credential manager allows export that becomes "something that I have previously rallied against but rethinking as of late because of these situations".
There may not currently be attestation in the consumer synced passkey ecosystem, but
in the issue thread you say "you risk having KeePassXC blocked by relying parties".
The fact that that possibility exists, and that the feature of allowing passkeys to be exported is enough to bring it up, is a huge problem. Especially if it's coming from "one of the most vocal proponents of synced passkeys never being attested", because that says a lot about whoever else is involved in protocol development.
A year and a half ago doesn't really matter; that this was ever even a concern from the industry, something that the industry could make happen at all, or even just was thinking about doing at some point in the past, poisons the entire effort. In a world where password+totp already exists and requires almost no hoops, no dependencies and is incredibly secure vs basic password flows, it's no wonder that folks remember discussions around curtailing user freedom around a new authentication pattern which already was less convenient, offers less user control, and further centralizes infrastructure in the hands of a few major brokers of technological power.
Until we have full E2E passkey implementations that are completely untethered from the major players, where you can do passkey auth with 3 raspberry pi's networked together and no broader internet connection, the security minded folks who have to adopt this stuff are going to remember when someone in the industry publicly said "if you don't use a YubiKey/iPhone/Android and connect to the internet, ~someone~ might ban you from using your authenticator of choice."
> Until we have full E2E passkey implementations that are completely untethered from the major players, where you can do passkey auth with 3 raspberry pi's networked together and no broader internet connection
This is already possible today. And since it's a completely open ecosystem, you can even build your own credential manager if you choose!
I don't believe it is a misrepresentation; you are bullying a project for letting users back up their own passkeys.
>which would allow RPs to block you, and something that I have previously rallied against but rethinking as of late because of these situations).
This is exactly why we need truly open standards, so people who believe they are acting for the greater good can't close their grubby hands over the ecosystem.
We've had massive problems with moving to passkeys (browser based) at our company and moved back to an app-based authenticator. Everyone is accepting of the authenticator app or uses a Yubikey.
Re-imaged, lost, or bad updates on PCs wiping out all the saved passkeys and being locked out of all accounts during off-campus sales or design meetings.
Making staff look like idiots in front of clients is a resume-generating-event.
I'm not the OP, but I expect it's the same issues that have stopped me from using passkeys for now.
His reply does give one aspect of it: passkeys are fragile. To be secure, they can't be copied around or written down on a piece of paper in case you forget, so when the hardware they are stored on dies, or you lose your Yubikey, or, as he described, the PC is re-imaged, all your logins die. That will never fly, and it's why passkeys are having a hard time being adopted despite being better in every other way.
Passkeys' solution to that is to make them copyable, but not let the user copy them. Instead someone else owns them, someone like Google or Apple, and they will do the copy to devices they approve of. That will only be to devices they trust to keep them secure, I guess. But surprise, surprise, the only devices Apple will trust are ones sold to you by Apple. The situation is the same for everyone else, so as far as I know Bitwarden will not let you copy a Bitwarden key to anyone else. Bitwarden loudly proclaims they let you export all your data, including TOTP, but that doesn't apply to passkeys.
So, right now, having a passkey means locking yourself into a proprietary company's ecosystem. If the company goes belly up, or Google decides you've transgressed one of its many pages of terms, or you decide to move to the Apple ecosystem, you again lose all your logins. And again, that won't fly.
The problem is not technological, it's mostly social. It's not difficult to imagine an ecosystem that does allow limited, secure transfer and/or copying of passkeys. DNS has such a system, for example: anyone can go buy a DNS name, then securely move it between registrars. There could be a similar system for passkeys.
Passkeys have most of the bits in place. You need attestation, so whoever is relying on the key knows it's secure. The browsers could police attestation as they do now for CAs. We have secure devices that can be trusted not to leak passkeys, in the form of phones, smartwatches, and hardware tokens. But we don't have a certification system for such devices. And what we don't have is a commercial ecosystem of companies willing to sell you safe passkey storage that allows copying to other such companies. On the technological front, we need standards for such storage, standards that ensure the companies holding the passkeys for you couldn't leak the secrets in the passkeys even if they were malicious.
We are at a frustrating point of being 80% of the way there, but the remaining 20% looks to be harder than the first 80%.
Why would BigTech care about the dozens of users using an open source password manager? What’s their gain from preventing these people from logging in? They love money and don’t care about user freedom, sure. But they’ve shown no evidence of hating user freedom on principle.
Every time I’ve seen them actually attack user freedom, there was an embarrassingly obvious business angle. Like Chrome’s browser attestation that was definitely not to prevent Adblock, no sir.
Because they'd actively have to make their proprietary passkey systems interoperable with password managers. This is fail-closed, not fail-open: if they truly didn't care, there'd also be no incentive for them to implement support.
But I fear it's worse. Based on how past open standards played out, I find it believable they do care - that there won't be an open ecosystem of password managers.
> But they’ve shown no evidence of hating user freedom on principle.
Yes, they did, just see Microsoft's crusade against Linux and the origin of the "embrace-extend-extinguish" term.
They already failed then. All sides (browser->website and browser->passkey holder) of passkeys are open standards. They already don’t restrict passkeys from e.g. open source apps they have no control over, for both Google accounts and any site on Chrome. Webauthn “fails open” by default in the sense you’re indicating; if you don’t check the attestation, any app or device made by anyone can hold a passkey. I haven’t encountered or heard of anyone restricting passkey apps/hardware outside of business-managed employee accounts.
I recommend reading the MDN docs on Webauthn, they’re surprisingly accessible.
> Yes, they did, just see Microsoft's crusade against Linux and the origin of the "embrace-extend-extinguish" term.
The whole point of the trial that term came from was that Microsoft explicitly saw Linux as a material threat to their business. What threat are Google quashing by preventing you from using passkeys they don’t control?
>Why would BigTech care about the dozens of users using an open source password manager?
Because big tech loves control. Just because you can't see the angle yet, it doesn't mean there isn't one now, or won't be one later. It has been shown time and time again that they will take all the freedom away from you that they can.
There is already an example of Microsoft selling passkeys with their own "secure (tm)" stamp on them, and not accepting anything else just a few comments down.
Even if there wasn't already an example, it's easy to turn control into a revenue stream at a later time.
That is for their enterprise SaaS, and has an obvious profit motive (I.e. bundling). Do you think Chrome is going to start charging for using their passkey storage and then kick all the other apps off Chrome?
> Even if there wasn't already an example, it's easy to turn control into a revenue stream at a later time.
I think you’ll have to justify or qualify this a bit. If Google forces every website on Chrome to have a red background, how do they turn that control into a revenue stream later on?
No it’s not. My goalpost from the beginning was “show me an example where there wasn’t a clear monetary incentive for restricting user freedom”. That one has a monetary incentive (make our paying customer for product X also buy product Y).
As for blocking things that block ads; if you can’t see the monetary incentive for Google there then I don’t know what to tell you.
I didn’t ask you to divine the future. I said “I’ve not seen them do X without trying to get Y” (a statement about the past), and you still haven’t given me a remotely credible example.
>Why would BigTech care about the dozens of users using an open source password manager?
I agree, why would BigTech care about those dozens of users. Screw those guys, they can use our password manager or they can get lost, we don't need them!
If all you want is to make a bot that can use passkeys automatically, add a transistor between your Yubikey's touch button and GND. When you turn the transistor on, the capacitive sensor is activated.
Now the Yubikey is just an API you can call, websites cannot tell the difference. You can't export keys, but a bot can add new keys after using existing keys to log in.
yeah, IMHO the design was messed up by a few very influential companies "overfitting" it to their company-specific needs
but I don't think attestation per se is bad. If you are an employee of a company and they provide you the hardware and have special certification requirements for that hardware, then attestation is totally fine
at the same time, normal "private" users should never be exposed to it. And in most situations where companies do expose users to it (directly or indirectly), it's often not much better than snake oil if you apply a proper threat analysis. For example, banking apps are allowed to claim that a single app can provide both the login and the second factor required by law for financial transactions; but if you do a threat analysis, you notice the main threat to the app is malicious privilege escalation, which tends to bypass the integrity checks anyway.
But a lot of the design around attestation looks to me like someone nudged it in a direction where "a nice enterprise feature" turns into "a system to suppress and hinder new competition". It also IMHO should never have been in the category of "supported by passkeys" but rather "supported by enterprise passkeys only".
Though let's also be realistic: the degree to which you can use IT standards to push consumer protection is limited, especially given that standards are made by companies which foremost act in their financial interest. Hence why working consumer protection legislation and enforcement is so important.
But anyway, it's not just the specific way attestation is done. The general design has dynamics that push toward consolidation on a few providers, and it also has elements which strongly push for "social login"/SSO instead of a login per service/app/etc., i.e. it also pushes for consolidation on the login side.
And if you look at some of the largest contributors, you find:
- those which benefit a ton from a consolidation of login into a few SSO providers
- those which benefit from a different form of login consolidation (consolidation of password managers), and have made questionable blog posts to e.g. push people to store not just passwords but also 2FA in the same password manager, even though that removes one of the major benefits of 2FA (keeping the password manager from being a single point of failure)
- those which benefit a ton if it's harder for new hardware security key companies, especially ones with an alternative approach to doing HSKs
and somehow we ended up with a standard which "happened" to provide exactly that
eh, now I sound like a conspiracy theorist. I probably should clarify that I don't think there had to be some nefarious influence; these different companies each having their own use case and overfitting the design to it would happen to achieve the same result, and that could plausibly have happened by accident
>but I don't think attestation per se is bad. If you are an employee of a company and they provide you the hardware and have special certification requirements for that hardware, then attestation is totally fine
Perhaps I'm missing something, but I do think hardware attestation per se is bad. Just look at the debacle of SafetyNet/Play Integrity, which disadvantages non-Google/non-OEM devices. Hardware attestation is that on steroids.
As for corporate/MDM managed environments, what's wrong with client certificates[0] for "attestation"? They've been used securely and successfully for decades.
As for the rest of your comment, I think you're spot on. Thanks for sharing your thoughts!
Your style of thinking is exactly why Linux never became a leader in desktop OSes, and why we're still dealing with the most ridiculous tech debt and complexity in OSS tooling to date. You're obsessed with fake problems that have no bearing on real people. When grandma does indeed lose all her money because some prick phished her password away, I would love to watch you explain how that's actually better than BigTech taking away user freedoms.
This argument is ridiculous and purposefully inflammatory. The issue at hand is the requirement for client attestation while using passkeys. So in that light, can you describe for us the scenario in which grandma, who is undoubtedly using passkeys on an iPhone or an Android, loses all her money simply because someone, somewhere else is using a passkey without attestation? You can't, because the vendor lock-in created by attestation doesn't meaningfully increase grandma's security. Rather, it exists (outside the enterprise scenario) primarily as an anti-competitive tool to be wielded by the major players.
Passkeys could have been an overall boon to society. But with attestation restricted to a set of corporate-blessed providers it is a Faustian bargain at best.
That is not a problem that GP brought up. In fact GP claims it's not a big problem.
> The problems of Passkeys are more nuanced than just losing access when a device is lost (which actually doesn't need to happen depending on your setup).
Those are the solutions I'm familiar with; there may be others. If Android and Windows don't already solve this problem in similar ways--which they might!--it sounds like an open opportunity for them.
>"I’d rather granny needs to visit the bank to get access to her account again, than someone phishes her and steals all her money."
More like abuelita gets robbed at gunpoint and made to unlock and clear out her bank account, then has no recourse at home because her device was taken. I live in a third world country and even 2FA simply isn't viable for me due to how frequent phone robberies are. I've had to do the process once and it was a nightmare, whereas with passwords I can just log into Bitwarden wherever and I'm golden
A key part of the recent push for passkeys has been cross device syncing with your Google / Apple / whatever password manager account, so you end up in the same situation: if you can log in to Bitwarden to access your passwords, you can log in to your password manager to access your passkeys.
> A key part of the recent push for passkeys has been cross device syncing with your Google / Apple / whatever password manager account, so you end up in the same situation: if you can log in to Bitwarden to access your passwords, you can log in to your password manager to access your passkeys.
Relying on Google/Apple is no better, with the stories of people losing access to their (Google in particular) account, and not being able to recover or let alone even reach a human at Google to begin with.
Why not have a public service for this, instead of relying on big tech that can just revoke your account for any number of ToS "violations" without recourse? The solution for "normies" should not be rely on and trust Google with your entire digital identity.
Getting the State involved just swaps in a different, much worse threat actor than Google, though. From this discussion it should be evident how much more sovereignty passwords give you. If you want the State involved, it should regulate websites' password policies, such as: no service shall be hostile to password managers (special character bans, short length limits, no pasting); no service shall require regular password resets (proven to worsen security).
State involvement may be better used in policing, too. Public repositories of leaked passwords (without usernames, of course) would do wonders, for example
Google can ban me (really just one specific digital instance of me) from their services. The government can throw me in jail, take all my property, fine me whatever amount they want, etc.
Please stop right there. I want a password manager that I fully control, and lives on my own infrastructure (including sync between devices). Not reliance on someone else's cloud.
I do something similar with KeePass because a lot of my 2FA is stored on my YubiKey. When I register with the YubiKey I also register with a KeePass vault intended as a "break-in-case-of-emergency" backup, so it's rarely opened and has lots of security options set to max.
For a long time 2fa apps (other than Bitwarden and maybe some others) would lock you into the app and not let you export it. Websites don’t usually expose the text version of the code, just the QR.
I recently switched from Authy to 1Password for 2FA, requiring me to set up every single website's 2FA from scratch, and I found that every website I use provides the text version of the code. It's hidden behind a "having a problem scanning the code?" link. I didn't need to take a single screenshot of a QR code; I was able to save the text version for them all. Next time I switch, it'll be easy.
>> Did people not realize they can save their 2fa token and just use that with a new authenticator?
What's 2fa token? Is that an AI thing? AI uses tokens. Or a crypto thing? Do you need one of them "nonfungible" tokens? And what's an authenticator? I have MS authenticator for work, but it uses 2 digit numbers, are those tokens?
Not sure if I'm missing a joke, but the 2fa token is a secret that you stick in your password manager and sync (or otherwise send) to other devices so that your 2fa is not bound to a particular device. My password manager lets me view the 2fa secret as if it were just another password.
Yes, I was joking. I'm not up on all the options and I'm an engineer who read HN. What chance does Joe public have of making sense of all these things?
2fa is two factor authentication. User+password is the first factor, and is a "something you know" check. The second factor is a "something you have" check. Like sending you an SMS code.
They exist so if someone watches over your shoulder while typing your password, they don't gain access to anything.
I feel like this is a really strong justification for duress passwords. Register a duress password with your phone or bank account, and if you ever enter it, that system will take whatever actions you want - call the police with your location, display a fake balance of a few hundred dollars, switch to a fake email account, hide your crypto wallet app, whatever.
FYI, you can put a 2FA secret into Bitwarden and autofill the one-time passwords alongside the regular password. That would mitigate the impact of losing your phone.
I personally don't do this because I feel like it defeats the whole purpose of 2fa. If someone gets into your bitwarden account, now they have your passwords and can generate 2fa codes. Of course, if the alternative is just not doing 2fa then it's better than nothing but I'd still prefer an authenticator app or hardware key than putting them in bitwarden.
Getting into your bitwarden account should be at least as hard as getting into your authenticator app or stealing your hardware key, though, if you're using it as intended, so I think it's ok for 2FA
2FA keys are easily stolen from a desktop with a password manager running in the background when running a malicious executable, vs. 2FA keys on a 2FA app on a phone and running a malicious app.
Great, this is a universal solution. Let's all make it an integral part of our digital security, and in 5 years or so hope that bitwarden doesn't leverage it!
Grandma, and Uncle Rob, and your cousins, and anyone else you have a long standing relationship with, can use your VaultWarden instance if you let them.
But! You now get to maintain uptime (Rob travels and is frequently awake at 3am your time) and make sure that the backups are working... and remember that their access to their bank accounts is now in your hands, so be responsible. Have a second site and teach your niece how to sysadmin.
Exactly. The only financial stuff on my phone is Google Wallet and I don't even live in a high threat area. The devices that can accept payment from Google Wallet are always in observed locations, it would be very hard for a mugger to use it maliciously. All the easy money transfer options are an attack surface I see no need to expose.
> whereas with passwords I can just log into Bitwarden wherever and I'm golden
Good luck. For some arcane reason, Bitwarden turned on email-based 2FA for my account last night and all of a sudden I'm locked out of my account for half a day. …mostly because I have greylisting enabled on my mail server, so emails don't arrive right away, but as it so happens I also had all my hardware stolen from me last weekend. Bootstrap is a real bitch.
> More like abuelita gets robbed at gunpoint and made to unlock and clear out her bank account, then has no recourse at home because her device was taken.
You are describing the current status quo, without passkeys. This is already possible.
Well, except maybe for the "without recourse" part, because there are some legal and policy avenues available for dealing with this situation.
The without recourse is the part that matters... With passkeys or 2FA she's at risk of having to wait a day or more to go to the physical location (if there even is one, digital banks are huge in Latin America), with passwords she can just check her notebook the same night and start the recourse through official channels. I know she could just call the hotline, but if 24hr customer service guy can get you in your account same night then the bank is too insecure anyways
> The without recourse is the part that matters...
Yes, and I'm saying that part isn't accurate either for the story you're portraying with passkeys or for the status quo. That's not how account recovery flows work.
With passwords, no account was even lost in the scenario for a recovery flow to start. An account recovery flow is only necessary because of the superfluous extra security, which will almost inevitably introduce more attack vectors than before (such as a social engineering attack through customer service) if the banks want to service customers like grandmas.
> Passkeys is the way to go. Password manager support for passkeys is getting really good.
I set up a passkey for github at some point, and apparently saved it in Chrome. When I try to "use passkey for auth" with github, I get a popup from Chrome asking me to enter my google password manager's pin. I don't know what that pin is. I have no way of resetting that pin - there's nothing about the pin in my google profile, password manager page, security settings, etc.
Passkeys are the pinnacle of bad UX. It just works, until the user tries to switch devices, accounts or platforms. The slogan of passkeys should be something like "I don't have a password, it usually just works, but now I changed X and it doesn't work anymore". Even worse is hardware-based 2FA built into smartphones (also FIDO): lose your phone in a lake and now you can't access anything anymore.
The way to go is an encrypted password manager + strong unique random passwords + TOTP 2FA. It's human-readable. Yes, that makes it susceptible to phishing, but it also provides very critical UX that makes it universal and simple.
Apple’s works fine, including when I’m logging on to my windows machine. Opening the camera app is a little annoying, but I don’t have to do it frequently. 1Password works well too and it runs on everything. There’s open source options, but I can’t attest to their UX.
That's fine, but Chrome has 67% market share, and the majority of people will pick the default option for passkeys if prompted. For passkeys to replace passwords it's got to be seamless and easily recoverable without compromising security.
> the majority of people will pick the default option for passkeys if prompted
Especially since Google doesn’t allow you to change your personal default which is what convinced me to go and switch all my accounts off of Google SSO
So we need to make a new open standard, and then somehow prevent Google from implementing it? Too badly they implemented TOTP too. I’m not sure what you’re proposing here.
Did you see a proposal? I'm merely pointing out that there's disastrously poor UX lurking in the #1 platform where users may encounter passkeys. It's not ready to send out to normies without more work on it.
I really dislike how passkeys have generally been used. Once KeePassXC got proper support for them in the browser plugin, it's been a bit more sensible. KeePassXC means I can transfer them between devices, and they're protected the same way my passwords are, so no additional PINs and logins I don't want. It solves a lot of the issues I have around them. Now it's just a long random password.
I wouldn't have minded if we moved to a scheme like SSH logins with public and private keys I own either, that I can store securely but load as I please and again would work well with a local password manager.
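That SSH-style flow is essentially the core of WebAuthn with the attestation layer removed: the server stores only a public key and verifies a signature over a fresh challenge. A sketch using the third-party `cryptography` package (the variable names and flow here are illustrative, not any real passkey API):

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the client generates a keypair; the server stores only the
# public key, so a server breach leaks nothing usable for login.
private_key = Ed25519PrivateKey.generate()  # stays on the user's device
public_key = private_key.public_key()       # sent to the server once

# Login: the server sends a random challenge, the client signs it,
# and the server verifies the signature against the stored public key.
challenge = os.urandom(32)                  # fresh per login attempt
signature = private_key.sign(challenge)
public_key.verify(signature, challenge)     # raises InvalidSignature on failure
print("login ok")
```

Nothing phishable crosses the wire: a fake site only ever obtains a signature over its own challenge, which is useless elsewhere (real WebAuthn additionally binds the site's origin into the signed data).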
Passkeys are a great example of how five kitchen chefs can't make scrambled eggs. Horrible user experience, terrible marketing, no mental model like "your phone is THE key," no tangible or even symbolic presentation of the key.
That’s a lot of anger without a substantial argument. For Apple users, for example, the user experience is very smooth and the mental model is “I use iCloud to store my passcode just like I use iCloud to store my passwords”. If you use 1Password, you’re changing iCloud for 1Password instead.
"You lost me the moment you mentioned iCloud". At least that's the way the majority of people I know react to this line of thinking. The "cloud" is still mysterious and complicated to a good number of people. Passwords are easy to understand.
Most Apple users are used to using their account for everything - that’s how they buy apps, use things like music or photos, etc. and, of course, passwords. Switching to passkeys doesn’t change that much other than being a bit faster.
Yeah that's right! If you simply say that this syncs to all my devices, it papers over, or abstracts if you will, the complexity of: secure enclaves/TPMs, symmetric sync keys wrapping asymmetrically encrypted passkeys, resident keys that support backup, keys that do NOT support backup, how biometrics are used, etc. etc.
With a password, I can write it down on a piece of paper and put it in my safe.
Most people use the cloud without even knowing it. If you instead say it’s seamlessly replicated among all your devices, that is a good enough explanation and conveys the benefits to customers.
the "the app tries to trick me into using the service of the company behind it so that they can consolidate the market" problem
it's not quite new; as a dumb example, depending on where in Android contacts you click on an address, it might always force-open Google Maps (2/3 cases) or (1/3 cases) properly go through the intent system and give users a choice
stuff like that has been constantly getting worse with Google products, but it's not like Microsoft or Apple are foreign to it
> I'd rather granny needs to visit the bank to get access to her account again, than someone phishes her and steals all her money.
The problem is that I can physically show up at my local bank branch or at my job's IT helpdesk to get my account back, but I can't show up at the Googleplex or at Facebook's or Xitter's HQ and do the same. Device bound passkeys are very error prone for the latter scenario, since users will fail to account for that case.
To add, services account for that failure by introducing something worse: a customer service backdoor where you can get into an account with very weak or nonexistent authentication.
With Amazon's live chat, someone was able to get into my account by providing an address in the same city as the destination of my latest Amazon order.
You see this with 2FA since "sorry lol you've lost your account forever" isn't an option, and it's trivial for users to lose their 2FA key unlike, say, access to their email.
Services that use passwords for login need to do that too, because people lose passwords.
Even services that use login via emailed link need to do it because people do lose email access. Far too many people use the email provided by their ISP as their only email service, which can be very bad if they move to someplace that ISP does not serve or simply want to switch to another ISP in their current area.
Email account access is the closest thing we have to ubiquitous identity on the web. Users that truly lose access to their email account are in a catastrophic situation before they even think of whether they can access your service.
Heh, that is kinda interesting and I've never heard of it before. What are some services that have this set up?
So, I guess you set up some "emergency users". And maybe if you lose access to your account, you get customer support to mark your account as lost which sends an email to the address that you have on file (in case it's an attack started by someone other than the user).
And I suppose if N days pass without any login, one of your emergency users can generate a credential that they can pass to you to recover your account?
Apple accounts have had it for years. You can set up a legal successor if you die, and a couple of people who can vouch for you to regain access.
That and Apple will give you a very long one-time password meant to be printed that can restore access as well. This one is in a third undisclosed location for me.
No, at least not on its own. Let's not repeat the mistakes.
Password managers are the way to go, and ONLY FOR RARE EXCEPTIONS should we use dedicated MFA, such as for email accounts and financial stuff. And the MFA should ask you to set up at least 3 factors and require you to use 2 or more. And if it doesn't support more or less all factors, like printed codes, OS-independent authenticator apps and hardware keys like YubiKey, then it should not be used.
We need to go further. If a service doesn't include 197 factors including blood samples, showing up at a physical location 50 miles from your home, and sending a picture of yourself in a specific posture, and doesn't require you to use at least 53 of them (determined randomly) to login, then it's insecure and should not be used.
Passkeys are more like password managers, and less like MFA tokens - despite the fact that many passkey implementations can function as MFA tokens as well.
Bitwarden the password manager includes a full passkey implementation, which doesn't involve any MFA.
> Passkeys are more like password managers, and less like MFA tokens
No:
- I can always export and import all my passwords from/into my password manager
- My passwords always work independently of a password manager or any specific app/OS/hardware
That is not true for passkeys and makes them much more like tokens. Of course they don't have to be used in MFA, just like passwords.
I just exported my Bitwarden vault and the resulting .json file has my passkeys in it. I'm not going to try to test import, but if it doesn't work that would obviously be more "bug" than anything else. Clearly "export" is the high concern functionality and once exported, importing them is not a big deal.
This is only about your first paragraph, it doesn't affect your second.
Indeed. The Credential Exchange Protocol (CXP) is already being worked on, and all major vendors are planning to support it. There was also talk at Apple's WWDC 2025 about passkey-related APIs, including exporting them.
Are you saying that it's not always possible to import/export passkeys because you can manage them with some program that doesn't allow it, but the same is not true for passwords?
Counter-example: I can write a password manager that will not allow you to export/import passwords.
There are cases where bitwarden doesn't work but chrome for example does. Easy to Google up.
For passwords however, I never heard of a case where a website only accepts passwords from a specific password manager - and how could they even do that right?
I don't think your reasoning holds. You say "I know situations where one passkey client works with some websites and not others, but I don't know situations where a website works with some clients and not others".
If the website accepts a password, then it can't prevent you from using the password manager you want. But if the website accepts FIDO2 passkeys, it's the same thing, isn't it?
> If the website accepts a password, then it can't prevent you from using the password manager you want. But if the website accepts FIDO2 passkeys, it's the same thing, isn't it?
No need to write like that. I know, understand and use passkeys for quite a while now.
I don't love them. I don't love passwords either.
But while I don't fear passwords, I fear passkeys. The reason is that it makes the tech even more intransparent. My password manager stops working, completely dies or I can't use it anymore for other reason? No problem, I can fallback to a paper list of passwords if I really have to. This transparency and compatibility is more important than people think.
Passkeys lack that. They can be an interface like you described, but only if everyone plays along and they can be exported. But since there is no guarantee (and in practice, they often cannot be exported either) they are not a replacement for passwords. They are a good addition though.
Unfortunately, many people don't understand that and push for passwords to begone.
I have yet to see passkeys used as a sole method of logging in. There's always a traditional username and password setup first. There's always a recovery code set up for the passkey. I have yet to see passkeys offered as the only means of MFA. Which means that your backup methods still work. You can use them for recovering your access. I see passkeys as an optional convenience. It works well for me by that measure.
What about server-generated passwords, like API keys? That would solve the main problem with passwords, namely, that people reuse the same weak password everywhere. I doubt it would be as popular as user-selected passwords, but I still wonder why no website has tried it.
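Generating such a credential server-side is trivial, and the point is that every account gets independent entropy, so a breach of one site leaks nothing reusable. A sketch with Python's `secrets` module (function name and parameters are my choice):

```python
import secrets
import string

def generate_credential(length: int = 24) -> str:
    """Server-assigned password/API key: drawn from a CSPRNG, never chosen
    or reused by the user (62 symbols ^ 24 chars is ~143 bits of entropy)."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_credential())
```

The obvious catch, and probably why sites don't do this, is that such a credential is unmemorable by design, so it presumes everyone already uses a password manager.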
Why not keeping passwords AND passkeys? Most of the time I want to use passkeys for different reasons, but if I lose my passkeys I can go back to my printed list of passwords.
True. Still, the difference is that with passwords, no one can stop you from "exporting" it. With passkeys, it could be changed, and the power for that lies in the hands of only a few vendors. It's still a bit concerning if they replace passwords forcefully.
Password managers are those proprietary programs that you need to install, give full access to your computer, register an account and trust their word that your passwords are uploaded to the cloud securely? No thanks.
Also they are too complicated for an ordinary user. A physical key is much simpler and doesn't require any setup or thinking, and can be used on multiple devices without any configuration. And doesn't require a cloud account.
Password managers are both significantly simpler to use than just passwords and more secure.
Passwords have always been bad. The problem is that users can't remember them. So they rotate, like, 3 passwords.
Which means if fuckyou.com is breached then your bank account will be drained. Great.
On top of that, the three passwords they choose are usually super easy to guess or brute force.
With a password manager, users only need to remember one password, which means they can make said password not stupid. You can automatically log in too with your new super secure passwords you never need to see.
It's the perfect piece of software. Faster, easier, more secure, with less mental load.
uh no - a password manager is an open source application you can compile and install yourself if you want. It's nothing more than a small specialised database with an Excel-like interface. Personally I think that the argument that things are "too complicated for the average user" eventually gets you users that find breathing and sphincter function too complicated.
I’ve been observing this space for two decades and haven’t come across a single open-source password manager that actually works, is properly maintained, has an acceptable security track record, and comes with a similarly well-maintained browser extension that protects both my clipboard and myself from phishing.
I've been using Keepass for two decades and have never had a single issue. I would never recommend a browser plug in (too much attack surface area), and instead simply check the URL before having KeePass autotype. No clipboard.
I think you're rejecting good solutions out of hand.
Meanwhile...millions of users trusted LastPass. Twice.
> simply check the URL before having KeePass autotype.
I’m not going to rely on myself never making a mistake. I want a solution that protects me even during stressful moments where I have a lapse of judgement and forget to check.
I don't think fixing this at the browser-level is the right place. In general, I'm very vigilant, but I know I can be tricked. So I have a policy about not clicking links in emails from companies I already know the address for. I also aggressively right click / long tap links to examine the URL before opening.
In general, opening a malicious URL exposes the user to unnecessary risk, so the correct solution is not to assume the user has visited a malicious site (since that would already be high-risk), but rather to prevent opening of malicious URLs. The most obvious solution is to treat any untrusted content as questionable. So I very carefully examine every domain I visit - as I say to my kids: have a model about who owns the computer you're talking to. Domains matter.
Now, this works for me. I'm not cognitively impaired, I have high conscientiousness, probably from working in military and classified defense contexts way back when, but I'm not really sure to be honest, could just be my personality. But it works for me.
I get that you want that extra safeguard, but it's just not a dealbreaker for me, especially since I'm highly suspicious of browser add-ons and the security implications they bring in. I guess I'm just extremely selective about what add-ons I'll use.
If you're not using KeepassXC's browser plugin (or are using KeePassX, which -IIRC- never had a browser plugin), then its autotype feature will check the title of the window that has keyboard focus when deciding which entry to use. If one or more matches are found, it will [1] also ask you to confirm which entry you're about to have the software punch in. If no matches are found, it will alert you to that fact.
You might find the KeePassXC docs about the feature [0] to be informative.
If you're going to complain that all a phisher has to do to capture a password is create a website with the same title as the official one, then my reply would be something like "Duh. That's what the browser plugin is for.".
Not sure how you got the impression that I was unwilling to use a browser plugin.
I’m absolutely looking for a browser plugin. I would refuse to use an auto-type feature that only checks the window title instead of, as a browser plugin would do, the site’s domain.
I'm not sure how you got the impression that I had the impression that you're unwilling to use a browser plugin. I have absolutely no idea whether or not you're willing to use a browser plugin.
I was mentioning how auto-type worked because it's useful information for those who either are unwilling to use a browser plugin, or are like myself and simply have no need for one.
I don’t remember why KeePassXC didn’t make my list last time I checked.
That was years ago, so I’m going to check it out again. Thanks for the pointer.
Update: One thing that stands out immediately is a confusing mess of three different projects, two of them unmaintained, which all call themselves KeePassX or KeePassXC, sometimes linking to each other’s documentation. How do I even tell I’m facing the correct KeePass(X(C)?)? project?
Yes, I’ll figure it out eventually but until then, it’s confusing. Also, if a password manager project needs to be forked over and over and over again (how can a holder of the keys to the kingdom possibly go MIA on three different occasions in basically the same project?), then does that tell us something about how the project is governed?
> How do I even tell I’m facing the correct KeePass(X(C)?)? project?
Well, [0] lists a single project called KeePassXC, with [1] as its homepage. Search engines list [1] and [2] as the top results for the query KeePassXC, for whatever that's worth. [3]
> Also, if a password manager project needs to be forked over and over and over again ... then does that tell us something about how the project is governed?
No?
KeePass is Windows-only software. So, some folks decided to write KeePassX, which ran on Linux, OSX, and Windows. They got bored of that after a decade or so, called it quits, and one of the preexisting forks [4] became the widely-used one.
> how can a holder of the keys to the kingdom possibly go MIA on three different occasions in basically the same project?
In addition to the history I wrote above, you are aware that KeePass is still receiving stable releases? According to [5], it looks like 2.59 was released just last month.
EDIT: Actually, where are you getting this "confusing mess of three different projects" from? When I search for "keepass", I get the official home pages for KeePass and KeePassXC as the top two results, the Wikipedia page, and then the KeePass project's SourceForge downloads page. When I search for "keepassx", I get the official homepages for KeePassX and KeePassXC, the Wikipedia page, the KeePassXC GitHub repo, and an unofficial SourceForge project page for KeePassX.
When I searched for `keepassxc`, my search engine ranked eugenesan/keepassxc [0] higher than keepassxreboot/keepassxc [1], so the former was the first that I’d visit. GitHub says that eugenesan/keepassxc is 2693 commits ahead of keepassx/keepassx:master, so I assumed that eugenesan/keepassxc was a legitimate and meaningful fork of keepassx/keepassx. Maybe I’m entirely mistaken, and I was just tricked by a blunder of my search engine and eugenesan/keepassxc is just a random person’s fork? (But then again, if it’s just a random fork, then why does it show up at the top, and why so many commits ahead of keepassx?)
To add even more to the confusion, not only is eugenesan/keepassxc unmaintained, it also points to www.keepassx.org (why?), which in turn says it’s unmaintained, too.
If I was just mistaken and eugenesan/keepassxc is really just a random fork, then my earlier allegations are all moot. Thank you for clearing this up, and also for clarifying that the other (legitimate?) KeePassXC was a preexisting fork (so it would have been difficult for them and possibly even more confusing to users if they had taken over the abandoned KeePassX project).
I've tried DDG, Google, Bing, and Yandex. All of them rank official KeepassXC stuff in the top five results, and -with the exception of Bing- rank it above any other non-Wikipedia results. I didn't see this weird keepassx GitHub fork in the results from any of the search engines I tried.
> When I searched for `keepassxc`, my search engine ranked eugenesan/keepassxc [0] higher than keepassxreboot/keepassxc...
With the greatest of respect, I would expect someone who's sufficiently savvy to know what to do with a GitHub repo in their search result to also be sufficiently savvy to -at minimum- visit the homepage listed in the repo's About blurb and notice that [0] is the very first item in the list of "Latest News". I'd also expect that savvy someone to know to visit the repo's Releases page, notice that there are no published releases, and consider even more intensely that they might not be looking at the software they expected to see.
I can't explain why your search system is ranking this misleadingly-named GitHub repo so highly. AFAICT, no one with the repo owner's email address was ever involved in any public development on KeePassXC.
I’m using Kagi. They say they rely on several third-party search indexes. I can’t see which one they are using for which particular search request. What I do know is that the backends are of varying quality. However, after years and years of using Google (back when their search was still good), I got used to the fact that if they return a GitHub project as a top search result, then that project was usually meaningful.
> With the greatest of respect, I would expect someone who's sufficiently savvy to know what to do with a GitHub repo in their search result to also be sufficiently savvy to -at minimum- visit the homepage listed in the repo's About blurb and notice that [0] is the very first item in the list of "Latest News".
Forks sometimes don’t update the About blurb that they inherit, and I think that that’s exactly what happened in the bogus repo.
> I'd also expect that savvy someone to know to visit the repo's Releases page, notice that there are no published releases, and consider even more intensely that they might not be looking at the software they expected to see.
In this case, however, the Releases section said “13 tags.” Some projects don’t use GitHub’s Releases feature at all, and rely only on Git tags. It’s sometimes difficult to spot.
>Password managers are those proprietary programs that you need to install, give full access to your computer, register an account and trust their word that your passwords are uploaded to the cloud securely? No thanks.
[1] No, it's not hosted in the cloud (i.e., on someone else's servers) and that's a good thing. It's FOSS and can be compiled for Android/iOS (and has been, see [2][3][4], at least for Android). The DB (just a GPG store) can also be shared across multiple devices.
I wish there was a stronger differentiation between syncable and device-bound passkeys. It seems like we're now using the same word for two approaches which are very different when it comes to security and user-friendliness.
And yes, giving granny unsyncable passkeys is a really bad idea, for so many reasons.
> I wish there was a stronger differentiation between syncable and device-bound passkeys.
But there is no difference. I'd prefer if services just let me generate a passkey and leave it entirely up to me how I manage it. Whoever set up granny's device should have done so with a cloud-based manager.
I think Google tries to make some confused distinction, or maybe that has more to do with FIDO U2F vs FIDO2. There you can add either a "passkey" or a "security key", but iirc I added my passkey on my security key so... yeah
Passkeys are a usability nightmare. No two experiences are ever the same. I have passkeys saved in 1Password and in Apple Passwords. I have a YubiKey. I have Duo on my work computer.
A common experience is Chrome telling me to scan a QR code. But I know this is not a legitimate method to sign in on any service _I_ use. I also never know WHY I'm being told to "scan this QR code". I scan it, and my phone also has no idea what to do with it! The site has decided, by not finding a passkey where it expects it, that it MUST be on my phone.
That's but one example of the horrible implementation, horrible usability, and horrible guidance that various sites/applications/browsers/implementations provide.
I don’t like passkeys. Before my process to login was:
- open website
- if not already logged in, log in to 1Password
- autofill password
- autofill TOTP
Now:
- open website
- if logged in to 1Password the Use Passkey usually shows up
- if not:
- log in to 1Password
- choose use passkey
- this almost always does nothing
- choose “use other method”
- choose “password”
- autofill that
- now there is another dialog to choose the 2fa method, choose Authenticator
- autofill that
Passkeys would be great if they actually made anything simpler on a computer. They work fine on the phone but that’s not where I spend most of my time.
And if I'm not using passkey, but the web site detects I'm using a passkey-compatible browser or password manager, the site takes over and tries to "sell" me a passkey anyway. No, I don't want it!
It’s also confusing whether I’m being prompted to use an existing passkey or being prompted to create a passkey.
Now that I’m so paranoid about this, and not remembering which sites I have them for, I always dismiss the passkey prompt, then have to click several more times to get to the password login and fill it in with my password manager.
I forget which site it is but there is one site I try to use with passkeys that somehow bypasses my BitWarden and rigidly insists on a passkey tied to Google and/or my phone, which I do not want. (My BitWarden stack is fully owned by me, as I self-host a VaultWarden instance, with daily backups of it, and I don't want my passkeys anywhere else.) That's definitely annoying.
Passkeys work very smoothly with Safari and Apple Passwords.
Apple Passwords is now sufficiently good to replace 1Password for me, and I’m slowly transitioning.
I don’t mind subscription models per se but there was something about subscription for your own passwords that made me refuse to jump the fence when 1Password switched to that model.
It works fine until you dare to have TWO accounts for the same website. Safari will just randomly pick one of them and always try to log you in with that passkey every time you visit, and the interface for using a different one is really annoying.
Apple handles it cleanly in Safari (you get a list of the accounts you're registered with on macOS, and iOS gives you the two most-recently-used accounts for that website with a button to reveal more).
The implementation in Chromium browsers (I use Arc, so I can't speak to Chrome itself) is basically a chunkier-looking 1Password.
Well if that’s what’s meant to happen, it does not happen for me. All I get is the same account over and over again that isn’t the one I want to log in with. No matter how many times I tap the little x and then select the account I want, carefully avoiding the gaze of Face ID which will automatically use the selected passkey if it spots me.
I've never gotten passkeys to work on the Mac. Every time I try it, either Firefox or Safari says I need to log into iCloud, which I really don't want.
I stick with 1Password, because I don’t want my password manager to be part of the barrier to using other platforms.
I also have a bunch of stuff in 1Password that doesn’t have a home in Apple Passwords, which would be a problem.
And yes, Chrome with Apple Passwords is annoying. At work I’m forced to use Chrome for some things, and I’ve been dabbling with Apple Passwords. Every time I launch the browser I have to put in a code to link the extension with Passwords. It’s very annoying.
1Password used to be decent until they enshittified about five years ago, decided to rewrite their app from scratch in Electron, replaced their support staff with non-technical staff who are unable to write any meaningful response to critical bug reports, and hired developers who allowed the app to degrade beyond recognition.
1Password used to work decently well before 2020. Now I have ~ 2k items in 1Password, distributed among two accounts (work and personal). Additionally, my spouse and I have a shared 1Password vault via the Family plan.
There’s no way I’m going to migrate 2k items and two dozen devices to another vendor. If there were one that met my requirements to begin with.
1Password has tons of features. No two vendors have exactly the same data model. Any of them might break on migration or worse, doesn’t exist on the target system.
For example, are my 2FA seeds going to migrate properly? How about the tags, attachments, sections, subsections, security questions and answers, inline Markdown notes, the HIBP integration, built-in overrides to fix known broken websites, workarounds I’ve learned for unfixed websites, shared vaults, recovering lost access to shared vaults, syncing, templates, custom integrations that I maintain [0], personal scripts, etc. etc.
Will it still be able to auto-fill into a web page? Into shitty, broken web pages? On Linux? On my Linux phone?
At the scale and depth at which 1Password is currently integrated into my spouse’s and my life, it’s difficult to consider migration anything less than a full weekend project.
I regret letting my spouse and myself lock into 1Password before it enshittified.
> “Click a link in the email” is a tiny bit better because it takes the user straight to the GOOD website, and passing that link to BAD is more tedious and therefore more suspicious.
"Click a link in the email" is really bad because it's very difficult to know the mail and the link in it are legitimate. Trusting links in emails opens to door to phishing attacks.
The question was "How do you know the email comes from that website? "
And the answer was, I can find out if the email is from abc.com
by looking at the link, which should also be abc.com
I don't click on "track.monkey.exe". I don't click tracking links. I pay a lot of money for my newsletter provider because I can turn off (most) tracking links.
For the record, if you're in the EU you can make a GDPR request to their Data Protection Officer - since it's your data that's being kept away from you, you have the right to at least a backup.
It can take months and it only guarantees a backup, not full access, but it's better than nothing.
Even if they provide you with a dump of their database records on you, you will not be able to recover your password from the salted iterated hash, PBKDF2, bcrypt, Argon2, or whatever else irreversible function they used to store it.
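To make the irreversibility concrete, here is a minimal sketch using PBKDF2 from the standard library (parameters and the example password are illustrative): the server stores only the derived hash, and verification recomputes it; there is no inverse function, so a database dump only enables slow guessing.

```python
import hashlib
import os

def hash_password(password, salt, iterations=600_000):
    # PBKDF2-HMAC-SHA256: a deliberately slow, one-way key derivation.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)          # unique per user, stored alongside the hash
stored = hash_password("correct horse battery staple", salt)

# Verification recomputes the hash and compares; the password itself
# is never stored and cannot be recovered from `stored`.
assert hash_password("correct horse battery staple", salt) == stored
assert hash_password("wrong guess", salt) != stored
```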
In theory: they can ask for ID, sworn affidavit, or whatever other means their local laws determine to be valid. At the end of the day, proving that someone owns something is not a new problem. I've also seen "here's some evidence that I know what the contents of the account are, my legal name matches the account and my legal address matches some emails there".
In practice: in my case, anecdotally, they just did it. For some reason owning the backup email account was not enough for the automated workflow to unlock my account, but sending a letter threatening to sue under the GDPR somehow changed their minds.
Or I could keep using passwords in my password manager, where I DON’T lose all my passwords if I lose my phone? Passkeys just seem to solve no real problems and create a black box dependency. Everything I’ve seen about them just makes no damn sense.
The scary part is not about losing her phone. It's about having to keep the old, no-longer-secure Android phone alive just for passkeys after getting a shiny (and secure) new iPhone.
I have 531 logins for varied websites and services. Would you enjoy having to re-register 531 passkeys when you change devices? Me neither. But the default login flows on all these sites prompt you to use your current device as a passkey, so people who don't know better (i.e. a general "everybody") are being gently pushed to do so.
No, which is why there is the cross-platform standard CXF, which allows for cross-platform sharing of passkeys. Apple has announced that support for this is shipping later this year with iOS 26. Google hasn't announced when they are shipping it yet.
I keep all my Passkeys in Bitwarden, it works fine across different devices and I use all major platforms regularly (iOS, Android, Windows, MacOS, ChromeOS). As a backup I've also added some extra duplicate Passkeys in the Chrome and iCloud password manager for the most important accounts in case I lose access to Bitwarden somehow.
AFAIK, there is no requirement for websites to support multiple passkeys nor, if they do, to support them in a sensible way. Some sites do this well, most don't.
If granny forgets her password, she looks it up on the last page of her notebook where it is written down. Granny cannot write down her passkey.
To avoid getting locked out you could add 2-3 passkeys from different providers to each account. And/or use a passkey provider that allows backups, and back up your keys. But I doubt many people will have the discipline to do either of that.
Honest question: isn't that introducing some weaknesses, allowing the attacker to either reactivate password auth or add its own passkey, e.g. by tricking the user into accepting that change after receiving a mail with a link to accept it?
That would make the passkey unbreakable, but leave other easier to exploit weaknesses.
Good points, but don't underestimate "granny needs to visit the bank to get access to her account again" as a problem.
For a lot of people, dealing with (now mostly digital) bureaucracies is a major stress in life. The biggest one, for some.
It's not just about inconvenience. It's sometimes about losing access to something, and just not having it for a while.
In terms of practical effect, a performance metric for a login system could be "% of users that have access at a given point." There can be a real tradeoff, irl, between legitimate access and security.
On the vendor side.. the one-time password fallback has become a primary login method for some, especially government websites.
Customer support is costly and limited in capacity. We are just worse at this than we used to be.
Digital identity is turning out to be a generational problem.
How many HN denizens are the de facto tech support for family members when they can’t login, can’t update, can’t get rid of some unwanted behavior, or just can’t figure stuff out?
I don’t blame them one bit. The tech world has presented them with hundreds of different interfaces, recovery processes, and policies dreamed up by engineers and executives who assume most of their user base is just like them.
> Passkeys is the way to go. Password manager support for passkeys is getting really good. And I assure you, all passkeys being lost when a user loses their phone is far, far better than what’s been happening with passwords. I’d rather granny needs to visit the bank to get access to her account again, than someone phishes her and steals all her money.
I am waiting for the era when using passkeys does not depend on some big tech company.
The maker of the credential manager is still a "big tech company", and there is still lock-in. Before I ever use any passkey solution, I would need to be guaranteed the ability to export and backup my passkeys and migrate them wherever I want.
My expectations for how long I intend to be alive and using the internet is much longer than my expectations for the continued operation and service of any particular passkey management software.
I already had to jump ship from LastPass after they were hacked. Imagine if they hadn't allowed me to migrate my passwords.
Passkeys will be the way to go if we get them to remove the "attestation object" field from the protocol. Until then there's no way for Jimbob to tell the difference between:
> Website: is this Jimbob's phone?
> Hardware: yes
And
> Website: I'll give you a dollar if you tell me something juicy about this user
> Hardware: Give this token to Microsoft and ask them
> Microsoft: Jimbob is most likely to click ads involving fancy cheeses, is sympathetic to LGBTQ causes, and attended a protest last week
With passwords and TOTP codes, I am in control of what information is exchanged. Passkeys create a channel that I can't control and which will be used against me.
(I chose Microsoft here because in a few months they're using the windows 10->11 transition to force people into hardware that locks the user out of this conversation, though surely others will also be using passkeys for similarly shady things).
It seems pretty clear that "where possible" parties besides the user are provided with information about the user (ostensibly about their device, but who knows what implementers will use this channel for)... so they can make a trust decision.
It's going to end up being a root-of-trust play, and those create high-value targets which don't hold up against corruption, so you're going to end up with a cabal of auth providers who use their privileged position to mistreat users (which they already do, but what'll be different is that this time around nobody will trust that you're a real human unless you belong to at least one member of this cabal).
Just because an API or protocol has a certain capability, does not mean it is implemented for all use cases.
Folks seem to be hung up on the term "attestation" being in the response of a create call. If you look inside that object, there is another carve out for optional authenticator attestation, which is not used for consumer use cases.
I will keep repeating what I've said in the other comments. There is no credential manager attestation in the consumer synced passkey ecosystem. Period.
OK, so suppose you and I were bad guys. You work on the code that interfaces with the TPM on a windows device, and I work at an insurance provider and write code that authenticates users.
Suppose we hatch a conspiracy to take our users out of the "consumer synced passkey system". And into one where you can use the authentication ritual as a channel where you can pass me unique bits re: this user such that we can later compare notes about their behavior.
What about passkeys prevents us from doing this? How do we get caught, and by whom?
A while ago, I implemented a signin approach that looks similar to this "send a link/code" mode but (I believe) can't be exploited this way - https://sriku.org/blog/2017/04/29/forget-password/ - appreciate any thoughts on that.
Btw this predates passkeys which should perhaps be the way to go from now on.
One problem is you are requiring users to trust and click on a link in an email which is historically frowned upon. So you are undercutting phishing education.
If an attacker fooled me at one website, he would get that one account (possibly forever) and that's it. If it is a bank-connected account, I can intervene and change the email/account by writing a physical request to the bank, for example, or call the bank, do something. And likely it will be only a single bank account. But it may even be some unrelated account. Maybe it will be my Amazon account and all the attacker gets is some ebooks. Or a Steam account. Or some email without important links. Etc.
Point is, the damage will be likely local to a single or a handful of accounts.
If all the accounts are protected by two factor on my phone and I lose it or it bricks, then I'm done. It will be a total mess with no paths to recover, except restarting literally everything from scratch.
I have Google Auth app on my phone and every few months I consider using it, but then reconsider and stay with passwords.
I think the "click a link in the email" solution is more than a "tiny" bit better isn't it? It almost completely solves the attack pattern you laid out. Passing the whole link to BAD is not only more tedious but totally ridiculous. That is not the kind of thing that even totally naive users would do.
And there is a significant benefit of not needing to worry about weak or repeated passwords, password leaks etc.
Overall that pattern feels significantly better to me than a normal password system, and MUCH better than the "we'll send you six digits to copy and paste" solution.
- user has an account on GOOD.COM
- user has saved her password in her browser
- user navigates to BAD.COM
In this case autofilled passwords are safe and convenient, since the manager's refusal to fill alerts the user that she isn't at GOOD.COM.
A clickable link sent in email mostly works too, it ensures that the user arrives at GOOD.COM. (If BAD sends an email too, then there is a race condition, but it is very visible to the user.)
A PIN code sent in email is not very good when the user tries to log in at BAD.COM.
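The "clickable link" variant can be sketched concretely: GOOD builds an expiring, HMAC-signed link, so only GOOD can mint valid links and the browser lands directly on GOOD.COM. This is a toy sketch; the secret, URL, and field names are invented for illustration, and the anti-phishing benefit comes from the browser navigating straight to the real site, not from the token format.

```python
import hashlib
import hmac
import time

SERVER_SECRET = b"example-secret"  # hypothetical; a real server loads this from secure config

def make_login_link(email, now=None):
    # Sign "email|timestamp" so the link cannot be forged or altered.
    ts = str(now if now is not None else int(time.time()))
    payload = f"{email}|{ts}"
    sig = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"https://good.example/login?email={email}&ts={ts}&sig={sig}"

def verify(email, ts, sig, max_age=900, now=None):
    # Recompute the signature and enforce a short expiry window.
    payload = f"{email}|{ts}"
    expected = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    fresh = (now if now is not None else int(time.time())) - int(ts) <= max_age
    return hmac.compare_digest(expected, sig) and fresh
```

The race condition mentioned above still exists: if BAD also triggers an email, the user may receive two legitimate-looking links, but at least both lead to GOOD.COM.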
There is no password in these new flows. They just ask for email or phone and send you a code.
Bad website only needs to ask for an email. It logs into Good with a bot using that email. Good sends you the code. You put the code in bad. Bad finishes the login with that code.
At no point in time is a password involved in these new flows. It's all email/txt + code.
Many sites work like this now. Resy comes to mind.
Please help me understand the passkey flow that solves this problem.
1) BAD actor tries to create account at GOOD website posing as oblivious@example.com.
2) GOOD website requests public key from BAD.
3) BAD provides self-generated public key.
4) GOOD later asks BAD to prove that they control the private key.
5) BAD successfully proves they control the private key.
Unless you have step 3b where GOOD can independently confirm that the public key does indeed belong to oblivious. But even that is easily worked around.
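(For context on the flow above: registration is indeed open to anyone, but the anti-phishing property lives elsewhere. The browser and authenticator scope each credential to the domain it was created on and refuse to use it anywhere else, so an existing user on a phishing page has nothing to hand over. A toy sketch of that scoping rule, with HMAC standing in for the real asymmetric signature and all names invented:)

```python
import hashlib
import hmac
import os

class CredentialStore:
    """Toy model of a browser/authenticator: credentials are scoped to an origin."""

    def __init__(self):
        self._keys = {}

    def register(self, origin):
        self._keys[origin] = os.urandom(32)

    def sign_challenge(self, origin, challenge):
        # The authenticator looks up the key by the origin the *browser* reports,
        # not by anything the page claims. Real passkeys sign with a private key.
        if origin not in self._keys:
            raise KeyError(f"no credential for {origin}")
        return hmac.new(self._keys[origin], challenge, hashlib.sha256).digest()

store = CredentialStore()
store.register("good.example")

store.sign_challenge("good.example", b"nonce")  # works: credential exists here

try:
    # A phishing page on bad.example cannot reach the good.example credential,
    # so there is no code or signature for the user to relay.
    store.sign_challenge("bad.example", b"nonce")
except KeyError:
    pass
```

This is the property that emailed one-time codes lack: the code is just a string the user can type anywhere, while the passkey is unusable outside its own domain.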
No, please, not as long as attestation is in the spec. I firmly believe that passkeys are intended to facilitate vendor lock-in and reduce the autonomy of end users.
Frankly, I do not trust any passkey implementation as much as I trust a GPG-encrypted text file.
Re: granny - per Pew Research on online scams, granny is far less likely to lose her account (15%) than an 18-29 year old (26%). If she's upper income and white, even less likely. Similar trends with falling for large numbers of scams: among people who've had 3+, granny (19%) vs. 18-26 year olds (24%). [1] The survey notably shows the opposite perceptions. Society views the youth as mostly immune from scams (only 22% believe they fall for them), yet they fall victim often (26%), while worrying about old people (84% believe they fall for them), who actually don't fall victim that often (15%).
Most of the time, re: granny, women are targeted far more often because of supposed weakness and vulnerability (2/3 of reports, 2/3 of victims), yet males send much larger amounts of money ($112 vs $205). [2] To be fair though, old people do tend to lose more to scams. Granny would probably lose $300 on average vs $113 for an 18-24 year old. There are conflicting numbers on the dollar figures though, so some of that depends on which survey you ask.
Old people also tend to write each other a lot of cautionary warning stories such as the AARP article on Stan Lee's swindling in old age (security guard, "senior adviser", "protector", and daughter). [3]
Old people get a bunch of grief, yet old people are actually less likely to fall for the scams.
Also, if she's a retiree in Miami Beach, more likely to be targeted (Adak, AK; Deepwater, NJ; then Miami Beach, FL are the worst for scams.)
I hate passkeys because even as a savvy user, I don’t know what to do with them. Do they replace my password? Do I need to generate one passkey per account and per device? How do I login on a new device? Are password managers still relevant with passkeys?
They’re too opaque for my taste and I don’t like them.
I am afraid that this flaw is present, to a certain extent, in almost all phishable methods (SMS, TOTP, email OTP, app push) - except passkeys and mTLS.
"Click a link in the email" isn't much secure either for most part. You might end up following a link blindly which can lure you into revealing even more information
Passkeys aren't that great either, because almost everyone has to provide an account recovery flow, which uses these same phishable methods.
The language used in communication is probably the most important deterrent here, second to using signals in the flow to present more friction to the abuser. A simple check, like presenting a captcha-like challenge when the user is not authenticating from their usual machine, can go a long way toward preventing these kinds of attacks at scale.
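That friction check can be sketched in a few lines (all device fingerprints and account names below are made up): escalate to an extra challenge when the device requesting the code has never been seen for that account. In the relay attack described upthread, it is BAD's bot that requests the code, i.e. an unfamiliar device, so it would hit the extra challenge.

```python
# Toy risk check: known device fingerprints per account, seeded at enrollment.
known_devices = {
    "alice@example.com": {"fp-macbook-safari", "fp-iphone-safari"},
}

def risk_action(email, device_fingerprint):
    """Decide how much friction to add before emailing a one-time code."""
    seen = known_devices.get(email, set())
    if device_fingerprint in seen:
        return "send_code"             # familiar device: low friction
    return "send_code_with_challenge"  # unknown device: add captcha/warning

assert risk_action("alice@example.com", "fp-macbook-safari") == "send_code"
assert risk_action("alice@example.com", "fp-unknown-bot") == "send_code_with_challenge"
```

This only raises the cost of automation; a determined attacker can still solve captchas, so it complements rather than replaces clearer wording in the email itself.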
> “Click a link in the email” is a tiny bit better because it takes the user straight to the GOOD website, and passing that link to BAD is more tedious and therefore more suspicious.
Somehow this makes me think of Pascal's Wager...
You just got through describing an attack where the victim was not aware that a bad actor can trigger a bona fide password reset code at an arbitrary time. For your little table of threats, you posit that at least clicking the link goes to the bona fide web site.
But there's a separate little table of threats for the case where an attacker controls the timing of sending a fake email. I believe realtors have this problem-- an attacker hacks their email and hangs back until the closing date approaches, then sends the fake email when the realtor tells the client to expect one with the wire transfer number/etc.
There are lots of attack patterns. That is one. I am not certain I believe it is very likely, because (a) I think "sign-in partner" is obvious bullshit, and (b) I don't understand why I would ever enter a code into the wrong website. I believe it can be possible, but...
> Passkeys is the way to go. ... I’d rather granny needs to visit the bank to get access to her account again, than someone phishes her and steals all her money.
... I do not agree your story is justification for passkeys, or for letting banks trust passkeys for authentication purposes. I'd rather she not lose access to banking services in the first place: I don't think banks should be allowed to do that, and I do not think it should be possible for someone to "steal all her money" so quickly -- Right now you should have at least several days to fix such a thing with no serious inconvenience beyond a few hours on the phone. I think it is important to keep that, and for banking consumers to demand that from their bank.
A "granny" friend of mine got beekeeper'd last year[1] and her bank reversed/cancelled the transfers when she was able to call the next say and I (local techdude) helped backup/restore her laptop. I do not think passkeys would helped and perhaps made things much worse.
But I don't just disagree with the idea that passkeys are useful, or even the premise of a decision here between losing all their money and choosing passkeys, I also disagree with your priors: Having to visit a bank branch is a huge inconvenience for me because I have to fly to my nearest. I don't know how many people around here keep the kind of cash they would need on-hand if they suddenly lost access to banking services and needed to fly to recover them.
I think passkeys are largely security theatre and should not be adopted, if only so that it will be harder for banks to convince people that someone should be able to steal all their money/access with a passkey. This is just nonsense.
[1]: seriously: fake antivirus software invoice and everything, and her and her kid who is my age just saw the movie in theatres in like the previous week. bananas.
> I am not certain I believe it is very likely, because (a) I think "sign-in partner" is obvious bullshit, and (b) I don't understand why I would ever enter a code into the wrong website. I believe it can be possible, but...
You and I think they are bullshit, but ... the problem is that bullshit is sometimes genuine.
I have grown tired of how many times in recent years I have seen things that looked like phishing or had obvious UX security flaws, reported them, and only got a reply from customer service that the emails and sites were genuine and that they have no intention of improving.
If janky patterns are the norm, then regular users will not be able to tell the good-but-janky from the scams.
> I am not certain I believe it is very likely, because (a) I think "sign-in partner" is obvious bullshit, and (b) I don't understand why I would ever enter a code into the wrong website. I believe it can be possible, but...
Now replace the email with a text message sent from a short code.
It is all bananas. The old way, with a local key on the computer and some silly Java program, and a physical dongle to validate transactions with a number shown on its display, was way more foolproof.
Then the banks wanted you to use the dongle to verify yourself on phone and it all went downhill from there.
Mobile phone App/Passkey authentication is just a way to pass the responsibility down to users. Losing a phone today is not just losing the passkey, there are "login with QR-code" schemes too, which do not need a password at all. It is a bad trend to pass all security onto the physical phone.
But you could replace #2 with "Enter your password from GOOD, as they are our sign-in partner". I'm not in favor of emailing 6 digit codes either, but your scenario presupposes that users will be willing to trust that two services have intermingled their auth, and in that case their password can be wrangled from them too.
My password manager won’t allow autofilling in the latter case, because it remembers the domain I used at sign-up time.
On the rare occasion that my password manager refuses to autofill, I take a step back and painstakingly try to understand why. This happens about once a year or twice.
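What the password manager is doing here can be sketched roughly as follows. This is a simplification: real managers match on the registrable domain (eTLD+1) via the Public Suffix List rather than the raw hostname, and the example URLs are made up.

```python
from urllib.parse import urlparse

def should_autofill(saved_url: str, current_url: str) -> bool:
    # Only offer a credential when the current page's host matches the
    # host the credential was saved on. Real password managers compare
    # the registrable domain using the Public Suffix List; this
    # raw-hostname version is just to show the idea.
    return urlparse(saved_url).hostname == urlparse(current_url).hostname

print(should_autofill("https://good.example/login", "https://good.example/signin"))  # True
print(should_autofill("https://good.example/login", "https://bad.example/login"))    # False
```

This is exactly why the manager staying silent is a useful phishing signal: the lookup key is the domain, not anything the page claims about itself.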
Passkeys are still a shared secret, aren't they? Asymmetric cryptography would have been amazing. Barring that I would actually recommend Oauth or something like it, to limit the number of parties who manage shared secrets to a smaller set of actors who have more experience doing so.
But in practice they usually rely on attestation by an approved vendor, and the vendor won't let you control your private key, so they'll leverage it for lock-in.
> I’d rather granny needs to visit the bank to get access to her account again, than someone phishes her and steals all her money.
But granny can't go to a bank because they closed down most of their offices. Since 99% of what you need a bank for can be done using their app it no longer made financial sense to have a physical presence in most smaller towns and villages.
Lots of elderly were complaining about this when it happened because they were too lazy to learn how to use the bank apps. Hell, they already started complaining when you could no longer withdraw money at the desk even before they closed down the offices. Apparently even learning to use something as simple as an ATM was too much effort for them.
You do realise the average granny is in cognitive decline and dealing with a myriad of health issues? You can judge a society (or a company) by how it treats its elderly.
I suppose the GOOD site should say "do not enter this code on any other sites, we are NOT a login partner for any other sites" but a lot of people would probably not read that. Still, it would help. The very tricky thing about this scam is that it gets people to react to an email that they are expecting. Which means they will not be as guarded as if they got an email out of the blue.
They don't have to actually be a 3rd party identity provider, just the user has to find it plausible that they might offer 3rd party login. Which, to be honest, pretty much any big or even medium-sized tech company might be doing these days.
No, BAD just inserts your email address on GOOD’s login page which sends you the login code, and they lie to prime you into thinking it’s not suspicious that the email came from someone other than BAD.
When you insert the login code on BAD, BAD uses it to finish the login process on GOOD that they started “on your behalf”.
BAD is lying about GOOD and presenting GOOD's legitimate service as a mere IdP for BAD, such that the user provides their code for GOOD to BAD so that the latter can then automatically log into GOOD.
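The relay described above can be sketched as a toy simulation. No network is involved; `Good` and the email address are stand-ins invented for illustration.

```python
import secrets

class Good:
    """Stand-in for GOOD's 'email me a one-time code' login."""
    def __init__(self):
        self.pending = {}

    def start_login(self, email: str) -> str:
        code = f"{secrets.randbelow(10**6):06d}"
        self.pending[email] = code
        return code  # in reality GOOD emails this to the address

    def finish_login(self, email: str, code: str) -> bool:
        return self.pending.pop(email, None) == code

good = Good()

# 1) Victim types their email into BAD's site.
victim = "granny@example.com"
# 2) BAD starts a login on GOOD with that email; GOOD emails the code.
emailed_code = good.start_login(victim)
# 3) Victim, primed by the "sign-in partner" story, types the code into BAD.
# 4) BAD finishes the login on GOOD, as the victim.
print(good.finish_login(victim, emailed_code))  # True
```

Note that BAD never learns a password and never breaks any crypto; the code is delivered to the legitimate inbox and the victim hands it across the trust boundary themselves.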
There are sites that immediately send you a 6-digit code just because you entered your email on their sign-in page; they don't even request a password. That means you could be phished by a fake website: when you enter your email there, it enters it on the real site, you receive the real GOOD code, and you enter it on the fake site.
It is just the same old stuff with username & password combination. I used to duplicate websites, they looked exactly like the original, except I was storing the entered username and password combination. I did this when I was a kid. The process is the same (or very similar) with everything else that is not a password.
True, they do it to facilitate access to their site without a password, but personally I don’t like getting an email just because I entered my username to sign in (my password manager takes care of filling the form so that email with a code is unnecessary to me).
Good explanation!
GOOD's email should contain "Never give this code to others", and the user should understand this clearly. I like phone and email OTP; it's easy to log in.
My problem with passkeys is that there is no hardware attestation like there is with Yubikeys and similar.
This means for security conscious applications you have no way of knowing if the passkey you are dealing with is from an emulator or the real-deal.
Meanwhile with Yubikeys & Co you have that. And it means that, for example people like Microsoft can (and do) offer you the option to protect your cloudy stuff with AAGUID filtering.
And similar if you're doing PIV e.g. as a basis for SSH keys, you can attest the PIV key was generated on the Yubikey.
Isn’t clicking on a link in an email also problematic? It gets users in the habit of trusting links in emails. There is a history of those being used in bad ways as well.
I still don’t really understand what recovery looks like for a lost passkey… especially if I lose all of them. Not everything has a physical location where an identity can be validated, like a bank. Even my primary bank isn’t local. I’d have to drive about 6 hours to get to a branch office.
I work on a product that emails administrators with a list of actions they can take related to certain authorization requests. We got feedback from a customer that all requests were simultaneously approved then denied. It turns out their Microsoft-provided email server follows all links and runs all javascript before showing it to the user.
I like capability URLs. I know an URL isn't a secret, but it works in practice and it works well.
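For what it's worth, the core of a capability URL is just an unguessable token in the path. A minimal sketch, where the base URL is a placeholder:

```python
import secrets

def make_capability_url(base: str = "https://docs.example/share") -> str:
    # The random token *is* the credential: anyone holding the URL can
    # access the resource. 32 bytes of randomness yields a 43-character
    # URL-safe token, far beyond brute-force range.
    token = secrets.token_urlsafe(32)
    return f"{base}/{token}"

print(make_capability_url())
```

The usual caveats apply: such URLs leak through Referer headers, browser history, and link-scanning mailservers (as the comment above illustrates), so they work best for low-stakes sharing.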
A bad practice is to shorten the code validity to a few minutes. This cannot really be justified and puts users under stress, which lessens security.
The discussion around passkeys, who is and isn't allowed to store them, almost killed them for me personally. I use them for very, very few services and I don't want to extend it.
> If you put a code you get from GOOD.com into BAD.com, it's like you put a password from GOOD.com into BAD.com - don't do that.
A password manager will protect me from doing the latter. There’s no way it can protect me from doing the former.
Any human can be tricked, no matter how smart they are. A bad actor just has to wait for the right moment. No amount of “don’t do that” can change that fact.
If a website says "Do this" and you're the person who follows random websites against security practices, because you believe in authority, a password manager does not help. You will open the password manager, search for GOOD.com and put it into BAD.com and be angry that your password manager can't do that for you.
"Any human can be tricked, no matter how smart they are."
and
"A password manager will protect me from doing the latter."
Don't work together. Either everyone can be tricked or not.
It says "Everyone can be tricked" but I can't be tricked because I use a password manager.
> you're the person who follows random websites against security practices, because you believe in authority
There are many reasons why such lapses of judgements happen, even to people who don’t believe in authority. For example, the fact that any human can be tricked.
> Don't work together. Either everyone can be tricked or not.
The password manager protects me from filling my password into the wrong site.
The password manager will not protect me from BAD.com tricking me into handing them out a one-time code that GOOD.com sent me via email.
I'm not sure what kind of websites are vulnerable to these attacks, but websites that have double authentication seem pretty safe to me. And if you forgot your password, then you receive an e-mail to change it with a secure link.
This point means the user is not paying attention: 1) User goes to BAD website and signs up.
Steps 2-7 wouldn't be possible without 1.
"User not paying attention" is ultimately the reason for most phising attacks. It happens a lot, and we're trying to solve it as the known problem it is. Everybody, and I say everybody are human beings at the end of the day (so far...) and so by definition, can have a bad day and lower their defenses. It has ironically even happened to reputated security specialists.
Would it be a viable and simple solution to only enter 6-digit codes into the specific website that requested it?
Isn't this the same thing as BAD asking, let us know the code i.e. password that GOOD gave you? Why would one be inclined to give BAD (i.e. someone else) this info?
If you're phished, you're probably not checking the domain too carefully anyway.
You get an email providing you with a phishing link for miсrosoft.com (where the apparent "c" is actually the Cyrillic letter es, "с", so BAD). In the background, they initiate a login to microsoft.com (GOOD), which then sends you a 6-digit code from the actual microsoft.com. If you were fooled by the original phishing website, you have no reason to doubt the code or refuse to enter it.
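The homoglyph trick is easy to demonstrate: the two domains below differ in exactly one code point, and a simple check can surface it.

```python
import unicodedata

def non_ascii_chars(domain: str):
    # Flag any character outside plain ASCII; homoglyphs like the
    # Cyrillic es render identically to Latin "c" in most fonts.
    return [(ch, unicodedata.name(ch)) for ch in domain if ord(ch) > 127]

legit = "microsoft.com"
fake = "mi\u0441rosoft.com"  # third char is U+0441, CYRILLIC SMALL LETTER ES

print(legit == fake)          # False: different code points
print(non_ascii_chars(fake))  # [('с', 'CYRILLIC SMALL LETTER ES')]
```

Browsers typically defend against mixed-script domains by rendering them in punycode (`xn--...`), but email clients and in-app UIs often perform no such check.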
This came to my mind too. But a password manager will be able to differentiate between the GOOD and BAD sites. So I think the point is valid only if the user is not using a password manager.
If the target was not actively trying to log into GOOD at that exact moment, why would they treat this as anything other than a phishing attempt or spam?
Imagine a "free porn, login here" website, when you put in your gmail address it triggers the onetime code from gmail (assuming it did that type of login) - thousands would give it up for the free porn.
Links are worse than OTP, but both can easily be secure if users check the domain, which users never do, so links and OTP are both terrible. Long live passkeys.
To be fair, can we blame them? There are so many legitimate flows that redirect like it’s a sport. Especially in payments & authn, which is where it’s most important. Just random domains and ping pong between different partner systems.
If we are talking about real time phishing then sending a code to the email is as secure as a 2FA authentication with password and Google Authenticator code.
My password manager will protect me from entering my password into a website on the wrong domain. It won’t protect me in the passwordless case where the code is sent via email.
Can you explain this more? I don't completely understand Google Authenticator. Could a bad actor spoof 2FA as they can with an email, and capture your input?
> convince a user to enter their google 2fa code into a site that isn't obviously google?
if the BAD site itself looks legit, and has convinced a user to do the initial login in the first place, they won't hesitate to lie and say that this 2-factor code is part of their partnership with Google etc., and tell you to trust it.
A normal user doesn't understand what a 2-factor code is, how it works, and such. They will easily trust the phisher's site if the phisher first breaks the user down and sets them up to trust the site from the beginning.
What Google does is send a notification to the user's phone telling them someone tried to access their account when this happens (or on any new login from a device you previously haven't logged in on). It's a warning that requires some attention, and depending on your state of mind and alertness, you might not suspect that your account is stolen even with this warning. But it is better than nothing, as the location of the login is shown to you, which should be _your own location_ (and not some weird place like Cyprus!).
What I don't understand is how the site will send the 2FA code request to the bad actor's phone instead of the real user's phone? Is this not part of what makes it more secure than a text or email? Wouldn't the bad actor need to be logged into the authenticator as the user you're trying to hack?
> how the site will send the 2FA code request to the bad actors phone, instead of the real users phone?
the 2FA code in this case is in the email, not in an app. This email is triggered by BAD on their end, but it is sent by GOOD.
If the 2FA is _only_ via the authenticator app, then BAD will need to convince the user to type that 2FA code from the app into the BAD site (which is harder, as nobody else does this, so it should at least raise suspicions from the user).
Not much harder. The state of the art of phishing right now is proxy based setups like evilginx which pass along credentials in real time. Then you just save the session cookie or change/add the 2fa mechanisms so you can get in whenever you want with the stolen credentials.
I think this is what Raymond Chen calls the other side of the airtight hatch.
The game is already over. The user is already convinced the BAD website is the good website. The BAD website could just ask the user for the email and password already, and the user would directly provide it. The email authentication flow doesn't introduce any new vulnerability and, in fact, may reduce it if the user actually signs in via a link in the email.
I haven't been able to get into my Oracle (free) account for 2 years because I lost 2fa... Unless I start needing to pay them for something, they'll probably never answer my emails. There are consequences for losing your phone when using alternative authentication methods (be careful).
What about the 99 other places granny needs to regain access to after the much more common broken or lost phone, many of which don't have a meaningful amount of customer service?
I see no reason not to use password + one of multiple 2FA methods so the user can regain control.
I guess this flow is even worse for authenticators like Duo or even Apple's own iCloud logins with 2FA. You log on to a phishing site mimicking the real one, and your phone asks if it is you trying to log in. Yes, of course it's you logging in, but you don't realize you're logging in the bad guys by proxy.
The prompts that show where the login is coming from are useless, too, because mapping from IP addresses to geographical locations is far from perfect. For example, my legit login attempts showed me all over my country map. If I’m in a corporate VPN already, its exit nodes may also be all over the map, and your legitimate login from, say, Germany may present itself as coming from Cyprus, which is shady as fuck.
If I seek to implement 2fa for my own service and have it be not theater and resistant to such phishing attacks, it gets difficult real fast.
But the device is under the control of BAD. They fake a sign-in on their backend to GOOD. Your computer never touches GOOD at all, except for seeing the email from GOOD (which you're told about by BAD, and lied to about it being a partner sign-in thing).
The problem being exploited by BAD is that your login account identifier (email in this case) is used in both GOOD (and BAD - accidentally or deliberately orchestrated), and 2-factor does not prevent this type of phishing.
> I know from experience that well designed messages with secure code are very understandable
This premise seems flawed.
How can you possibly know from experience that something is “very understandable” if the only brain you have is your own?
How do you anticipate how other people with brains different from yours are going to behave in situations of cognitive impairment or extreme stress, things that happen in the real world?
I can assure you that by now, my brain is conditioned to lock into the four-digit code as soon as it can, entirely ignoring everything around it, including the words to the left.
I’m an avid reader. But there are limits to what I can process, and our world has become so full of noise that it has become a coping strategy for brains to selectively ignore stuff if they feel it’s not important at the moment. That effect becomes even more pronounced as the brain deteriorates with age.
I don't know if you're being sarcastic or just missing the problem, which is that people will be presented with something like a Facebook login page, on a site with a URL like `facebook.quick-login.com` or `facebock.com`, and they'll enter the passcode, since as far as they were concerned, they did everything correctly. The disclaimer does nothing to prevent that; they »obviously« didn't share the code with any other website, they entered it on Facebook as they were told!
I am sarcastic because this discussion is about a different attack. Not about phishing.
(The OP says one-time codes are worse than passwords. In the case of phishing, passwords fail the same way as one-time codes.)
I was also being sarcastic/provocative in the previous comment, saying the GOOD site always includes a warning with the code, making the attack impossible. A variation of the attack is very widely used by phone scammers: "Hello, we are updating the intercom in your apartment block. Please tell us your name and phone number. OK, you will receive a code now, tell it to us". Yet many online services and banks still send one-time codes without a warning to never share them!
The phishing point may also be used in defence of one-time codes: if the GOOD service were using passwords instead of one-time codes, BAD could have just initiated a phishing attack, redirecting the user to a fake login page. People today are used to the "Login with" flow.
Most password managers are fussy about which websites they fill the password in on. It’s partly a convenience feature to only show relevant accounts but it’s also a security feature to avoid phishing.
Passkeys are stronger here because you can’t copy and paste a passkey into a bad website.
Can happen, but on the BAD website the password manager will not offer the saved password, so the user has a higher chance of noticing something is wrong.
Of course, only if they use a password manager with complex passwords...
Four times a day, I get an email notification that someone requested a password reset for my Microsoft account, which gives me a six-digit number to recover my account. So every day, an attacker has four 1-in-1,000,000 shots at stealing my account by just guessing the number. They've been doing this for years.
If the attacker's doing this to thousands of accounts - which I'm sure they are - they're going to be stealing accounts for free just by guessing.
I wrote up a security report and submitted it and they said that I hadn't sufficiently mathematically demonstrated that this is a security vulnerability. So your only option is to get spammed and hope your account doesn't get stolen, I guess.
I have added what I think they call login alias to my account. This blocks logins using the normal account username (which is my public email address), and only allows them via the alias (which is not public and just a random string). Not a single foreign login attempt since I enabled the alias.
You can enable it on account.microsoft.com > Account Info > Sign-in preferences > Add email > Add Alias and make it primary. Then click Change Sign-in Preferences, and only enable the alias.
I had to make my Outlook email primary again on my Microsoft account, unfortunately, because of how I use OneDrive. I send people share invitations and there are scenarios (or at least there were the last time I checked) where sending invitations from the primary account email is the only way to deliver the invite. If your external email alias is primary, they'll attempt to send an email from Outlook's servers that spoofs the alias email :/
This sounds a lot like Steam, where the name on your profile page is a vanity string that you can change whenever you want, but the actual username in their system is an unrelated (and immutable) ID.
I get a similar message constantly for an old Instagram account - "sorry you're having trouble logging in, click here to log in and change your password!"
If they are doing this to 125,000 accounts, they should get an average of one account per day, right? So it would on average take them 342 years to get any specific account, but as long as they aren't trying for any particular account, they've got a pretty good ROI.
I guess the fix for this would be exponential backoff on failed attempts instead of a static quota of 4 a day?
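The quota-vs-backoff difference can be sketched like this; the base and cap are arbitrary illustrative numbers.

```python
def lockout_seconds(failed_attempts: int, base: int = 2, cap: int = 86_400) -> int:
    # Exponential backoff on failed code entries: 2s, 4s, 8s, ...,
    # capped at one day, instead of a flat 4-guesses-per-day quota.
    return min(base ** failed_attempts, cap)

for n in (1, 5, 10, 20):
    print(n, lockout_seconds(n))  # 2, 32, 1024, 86400
```

After about 17 failures the delay hits the cap, so a persistent guesser gets roughly one attempt per day instead of four, and legitimate users (who rarely fail more than a couple of times) barely notice.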
Why would doing this to 125K accounts give them access to one account per day? The chance of guessing the 6-digit PIN code for each account is the same (1 in 10^6) regardless of how many accounts you are attacking.
No, this means there is a 98% chance you get _at least_ 1 account.
`1 - 1/1,000,000` is the probability you fail 1 attempt. That probability to the 4-millionth power is the probability you fail 4 million times in a row. 1 minus _that_ is the probability that you _don't_ fail 4 million times in a row, i.e. that you succeed at least once.
The expected number of accounts is still number of attempts times the probability of success for 1 try, or: 4 accounts.
What are the chances of getting 500,000 guesses (4 each for 125,000 accounts) wrong ? My math says 60%, so probably not one account per day, but if they keep it up for a week and everything else holds, there's only a 3% chance they haven't gotten any codes right.
Imagine the extreme case, where they pinged one million accounts and then tried the same code (123456) for each one. Statistically, 1 of those 1,000,000 six-digit codes will probably be 123456.
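For concreteness, the arithmetic under the assumptions in this subthread (125,000 accounts, 4 guesses each per day, each guess a 1-in-1,000,000 shot) can be run directly:

```python
p = 1 / 1_000_000
guesses_per_day = 125_000 * 4  # 500,000 guesses per day

# Chance that every guess in a day misses, and the complement over a week.
p_no_hit_day = (1 - p) ** guesses_per_day
p_any_hit_week = 1 - (1 - p) ** (guesses_per_day * 7)

print(f"{p_no_hit_day:.3f}")    # ≈ 0.607
print(f"{p_any_hit_week:.3f}")  # ≈ 0.970
print(guesses_per_day * p)      # expected hits per day: 0.5
```

So at this scale the attacker expects about one stolen account every two days, and is very likely (about 97%) to have at least one within a week of sustained guessing.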
I had the same issue on a useless old account. Could see the IP addresses of the sign-in attempts, they came from all over the world, all different ISPs, mostly residential. Nearly every request was from a unique /16! If botnets are used for something this useless, I dread to think what challenges at-risk people face
Adding 2FA was the solution
I couldn't find the method they were using in the first place, because for me it always asks for the password and then just logs me in (where were they finding this 6-digit email login option?!), but this apparently blocked that mechanism completely because I haven't seen another sign-in attempt from that moment onwards. The 2FA code is simply stored in the password manager, same as my password. I just wanted them to stop guessing that stupid 6-DIGIT (not even letters!) "password" that Microsoft assigns to the account automatically...
Microsoft allows you to create a second "login only" account username to access your e-mail and other services. I was having the same problem as you, but much worse. Check into it; it only takes a few minutes to set up.
Does adding MFA not protect you against this? If you are secured by a TOTP on top of your password, it should not matter if they manage to reset your password.
Somewhat, but imho the Microsoft MFA is also full of similar flaws.
As an example: I've disabled the email and sms MFA methods because I have two hardware keys registered.
However, as soon as my account is added to an azure admin group (e.g. through PIM) an admin policy in azure forces those to 'enabled'.
It took me a long time debugging why the hell these methods got re-enabled every so often, it boils down to "because the azure admin controls for 'require MFA for admins' don't know about TOTP/U2F yet"
I was authenticating a set of scripts five times for each run with MFA. Once, it asked me for six MFA prompts with no disambiguating info.
Did I click “Yes” to the attack the fifth time, or was the sixth the attack? Or was it just a “hiccup” in the system?
Do I cancel the migration job and start from the beginning or roll the dice?
It’s beyond idiotic asking a Yes/No question with zero context, but that was the default MFA setup for a few hundred million Microsoft 365 and Azure users for years.
“Peck at this button like a trained parrot! Do it! Now you are ‘secure’ according to our third party audit and we are no longer responsible for your inevitable hack!”
All of the prompts users get these days in an effort to add "security" have trained users to mindlessly say "yes" to everything just so they can access the thing they're trying to do on their computer; we've never had less secure users. The cookie tracking prompts should probably take most of the blame.
I know with the last major macOS update, nearly every app is now repeatedly asking if it can connect to devices on my network. I don't know? I've been saying yes just so I don't have stuff mysteriously break, and I assume most people are too. They also make apps that take screenshots or screen record nag you with prompts to continue having access to that feature. But how many users are really gonna do a proper audit, as opposed to the amount that will just blindly click "sure, leave me alone"?
On my phone, it keeps asking if I want to let apps have access to my camera roll. Those stupid web notifications have every website asking if it can send notifications, so everyone's parents who use desktop Chrome or an Android have a bunch of scam lotto site ad notifications and don't know how to turn them off.
The worst part about this is it just further reinforces horrible habits and expectations.
Using a modern password manager, like 1Password, is _easier_, safer, and faster than the stupid email-token flow. It takes a little bit of work and attention at first to set it up across a couple devices and verify it works... but it's really about the same amount of effort as keeping track of a set of keys for your house, car, and maybe a workplace.
If you make a copy of a door key when you move into a new place, you test the key before assuming it works. Same thing with a password manager. Save a password on your phone, test it on a different device, and verify the magic sync works. Same as a key copier or some new locks a locksmith may install.
Humans can do this. You don't need to understand crypto or 2fa, but you can click 'create new password' and let the app save some insanely secure password for a new site. Same with a passkey, assuming you don't save to your builtin device storage that has some horrible, hidden user interface around backing that up for when your phone dies.
And the irony is the old flow just works better! You let the password manager do the autofill, and it takes a second or two, assuming there is an email _and_ a password input. Passkeys can be even faster.
that little bit of work and attention is too much for most people.
I'm as frustrated about this as you are, but there is a large class of people who will not or can not understand and implement the password-manager workflow.
Of the people I know who are not in a tech career, I'd say about 80% have nothing but contempt and ignorant fatalism toward security. The only success I've had is getting one older relative to start writing account credentials down in a little paper notebook and making sure there are numbers and letters in the passwords.
I get the point. However, from my own experience this type of one-time passcode is unfortunately the 2nd well-understood authentication method for non-tech people surrounding me. The 1st is the password, of course.
I don't know the general situation, but, at least in our small town, people would go to the phone service shop just for account setup and recovery, since it's just too complicated. Password managers and passkeys don't make things simpler for them either; I've never successfully conveyed the idea of a password manager to a non-tech person, and the passkey is somehow even harder to explain. From my perspective it's both the mental model and the extra, convoluted UX that's very hard for them to grasp.
Until one day we come up with something intuitive for general audience, passwords and the "worse" one-time code will likely continue to be prominent for their simplicity.
Good luck finding a suite of modern, convenient services that will allow you to do that nowadays. I wish we could opt-in with some sort of I-know-what-I'm-doing-with-passwords-and-take-full-responsibility option.
You vastly underestimate the number of people who should not pick this option but would (because doing otherwise would be admitting their incompetence / ignorance) -- thus handily continuing the problem.
> If you have password reset via email, as almost every service using passwords does, there’s no security gain over magic links/codes.
I disagree. The problem with the magic code is that you've trained the user to automatically enter the code without much scrutiny. If one day you're attempting to access malicious.com and you get a google.com code in your email, well you've been trained to take the code and plug it in and if you're not a smarty then you're likely to do so.
In contrast, email password recovery is an exception to the normal user flow.
Password reset also has phishing potential. I do see your point, but if a user doesn’t check domains, I think they can be easily phished through either route.
And even if proper passwords are used, many sites/apps use this pattern for account recovery if the password is forgotten so effectively this is the only security as an attacker has “forgotten” the password and just uses this flow to login.
I've got a little generic login tool that bits I write myself use for login, using this method, but it is not for anything sensitive or otherwise important (I just want to identify the user, myself or a friend, so correct preferences and other saved information can be applied to the right person, and the information is not easily scraped) - I call it ICGAFAS, the “I Couldn't Give A Factor” Auth System to make it obvious how properly secure it isn't trying to be!
Another issue with email-based "authentication" like this (though one for the site/app admins more than the end user) is the standard set of deliverability issues inherent in modern handling of SMTP mail. You end up having to use a 3rd-party relay service to reduce the amount of time you spend fighting blocklists as your source address gets incorrectly blocked as a potential spam source.
> And even if proper passwords are used, many sites/apps use this pattern for account recovery if the password is forgotten so effectively this is the only security as an attacker has “forgotten” the password and just uses this flow to login.
Was about to post just this. This is the flow they use for account recovery so it's the weakest link in the chain anyway.
What's quite annoying is how aggressively most products force this method over regular email+password / social logins. Let me use my 100-char password!
You are not the target audience, you are not even an outlier, it's probably time to accept this and look for long-term solutions that allow you to interface with the "mainstream".
Agreed. But since every character gives you around 6 bits (26*2 letters + 10 numbers + some special characters ≈ 64 = 2^6), you'd need 256/6 ≈ 43 characters to exhaust the checked entropy, so up to that level it makes sense.
If you use sentences instead of randomly generated characters, the entropy (in bits/character) is lower, so 100 characters might well make sense.
Passwords are (or, rather, SHOULD be) cryptographically hashed rather than encrypted. It's possible to compute a hash over data which is longer than the hash input block size by feeding previous hashes and the next input block back in to progressively build up a hash of the entire data.
bcrypt, one of the more popular password hashing algorithms out there, allows the password to be up to 72 bytes in length. Anything beyond that limit is ignored and the password is silently truncated (!!!). It's actually a good method of testing whether a site uses bcrypt or not: if you set a password longer than 72 characters but can sign in using just the first 72 characters of your password, they're in all likelihood using bcrypt.
It's not broken. It's just potentially less helpful when it comes to protecting poor guessable passwords. bcrypt isn't the problem, weak password policies/habits are. Like bcrypt, argon2 is just a bandaid, though a tiny bit thicker. It won't save you from absurdly short passwords or silly "correct horse battery staple" advice, and it's no better than bcrypt at protecting proper unguessable passwords.
Also, only developers who have no idea what they're doing will feed plain-text passwords to their hasher. You should be peppering and pre-digesting the passwords, and at that point bcrypt's 72-byte input limit doesn't matter.
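One common pre-digest construction looks roughly like this. It is a sketch: the pepper value is illustrative and would in practice be loaded from configuration held outside the database.

```python
import base64
import hashlib
import hmac

def predigest(password: str, pepper: bytes) -> bytes:
    # HMAC-SHA256 with a server-side pepper, then base64: the result is
    # always 44 printable bytes with no NUL bytes, safely under bcrypt's
    # 72-byte limit no matter how long the password is.
    mac = hmac.new(pepper, password.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(mac)

out = predigest("a" * 200, b"illustrative-pepper-from-config")
print(len(out))  # 44
```

The base64 output is what would then be fed to bcrypt (or Argon2) as the password, sidestepping the truncation issue entirely.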
Bcrypt alone is unfit for purpose. Argon2 does not need its input to be predigested.
It's easy for somebody who knows this to fix bcrypt, but silently truncating the input was an unforced error. The fact that it looks like and was often sold as the right tool for the job but isn't has led to real-world vulnerabilities.
It's a classic example of crypto people not anticipating how things actually get used.
Peppering is for protecting self-contained password hashes in case they leak. It's a secondary salt meant to be situated 1) external to the hash, and 2) external to the storage component the hashes reside in (i.e. not in the database you store accounts and hashes in). The method has nothing to do with trying to fix anything with bcrypt. You should be peppering your input even if you use Argon2.
Right, but peppering was not part of my comment. You can't always pepper, and there are different ways to do it. It's (mostly) orthogonal to the matter.
You do not have to do any transformations on the input when using Argon2, while you must transform the input before using bcrypt. This was, again, an unnecessary and dangerous (careless) design choice.
I don't understand your responses here. Clearly you are not familiar with what problem peppering solves, or why it's a recommended practice, no matter what self-contained password hashing you use. bcrypt, scrypt, Argon2; they are all subject to the same recommendation because they all store their salt together with the digest. You can always use a pepper, you should always use a pepper, and there's only one appropriate way to do it.
And no, you cannot always pepper. To use a pepper effectively, you have to have a secure out-of-band channel to share the pepper. For a lot of webapps, this is as simple as setting some configuration variable. However, for certain kinds of distributed systems, the pepper would have to be shared in either the same way as the salt, or completely publicly, defeating its purpose. Largely these are architectural/design issues too (and in many cases, bcrypt is also the wrong choice, because a KDF is the wrong choice). I already alluded to the Okta bcrypt exploit, though I admit I did not fully dig into the details.
The HMAC-SHA256 construction I showed above, and similar techniques, accomplish both transforming the input and peppering the hash. However, the others don't transform the input at all or, in one case, transform it in a way even worse for bcrypt's use.
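For illustration, a pre-digest construction along these lines could look as follows in Python. The pepper value and function name are made up for the sketch, and the final bcrypt call is only indicated in a comment:

```python
import base64
import hashlib
import hmac

# Hypothetical pepper: a server-side secret kept OUTSIDE the database,
# e.g. in an environment variable or a secrets manager.
PEPPER = b"example-pepper-not-for-production"

def predigest(password: str) -> bytes:
    """HMAC the password with the pepper, then base64-encode the result.

    The output is a fixed 44 bytes, safely under bcrypt's 72-byte input
    limit, and base64 avoids embedded NUL bytes, which some bcrypt
    implementations mishandle.
    """
    mac = hmac.new(PEPPER, password.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(mac)

# The pre-digested value would then be fed to bcrypt, e.g.:
#   bcrypt.hashpw(predigest(password), bcrypt.gensalt())
```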
Using memorable passphrases online is always a bad option because they're easily broken with a dictionary attack, unless you bump the number of words to the point where it becomes hard to remember the phrase. Use long strings of random characters instead, and contain the use of passphrases to unlocking your password manager.
To wit, each word drawn from a 10,000-word dictionary adds about 13 bits of entropy. At 4 words, you have (a little over) 52 bits of entropy, which is roughly equivalent to a 9-character alphanumeric (lower and upper) password. The going recommendation is 14 such characters, which would mean you'd need about 7 words.
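A quick check of those numbers (assuming the 10,000-word list and a 62-symbol alphanumeric alphabet):

```python
import math

bits_per_word = math.log2(10_000)          # ~13.29 bits per word
four_words = 4 * bits_per_word             # ~53.2 bits
nine_char_alnum = 9 * math.log2(62)        # ~53.6 bits -- roughly equal

# Words needed to match a 14-character alphanumeric password (~83 bits):
target = 14 * math.log2(62)
words_needed = math.ceil(target / bits_per_word)
print(words_needed)  # 7
```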
The average person will create a passphrase from their personal dictionary of most-used words, amounting to a fraction of that. An attacker will start in the same way. Another problem with passphrases is that you'll have a hard time remembering more than a couple of them, and which phrase goes to what website.
And there is _NOTHING_ worse than being locked out of an account because, without asking, they reset the password and second-factor authentication while you're traveling and don't have access to a phone, etc.
Never mind that pretty much all services treat the second factor as more secure than my 20-character random password saved in a local password safe. And those second factors are, let's see: plain text over SMS, plain text over the internet to an email address, etc., etc., etc.
I just deleted my GoFundMe account because they kicked me into this cycle today. Somehow I've managed to have an account there and make contributions over the years, but now they wanted my phone number and an MFA code to proceed, and there was no opt-out. I went through it but then deactivated my account. I need less of this in my life, and GoFundMe is not essential to it.
I'm in the rental market right now, and Zillow not only has a log-in for the app, but to read messages in your inbox, you have to MFA again each time, and the time-out period is about an hour.
Ticketmaster did the same. They don't accept Google Voice numbers, yet my only number is Google Voice. The number tied to my SIM is an implementation detail that changes depending on where I am, but it's the only way I can get into that account now. My choices are to not go to events that are ticketed by them, or accept that I'll probably be locked out whenever I change SIMs.
SMS is literally the least secure form of authentication: numbers expire after mere weeks and get reassigned within months, because of number shortages in many area codes.
Nothing like this could happen with any mainstream mail service like Gmail, where it's officially advertised that the accounts could never be reused.
The worst part about SMS is that not only is there the potential to be locked out permanently, but also you never know whether or not the service would allow login or password reset via SMS, thus, you never know if you're opening yourself to account takeover.
I read this sentence 4 times and I still can't parse it:
> An attacker can simply send your email address to a legitimate service, and prompt for a 6-digit code. You can't know for sure if the code is supposed to be entered in the right place.
The sentence makes no sense as written, but what the author wanted to say was:
- You are in front of the attacker site that looks like a legitimate site where you have an account (you arrived there in any way: Whatsapp link, SMS, email, whatever). Probably the address bar of your browser shows something like microsoft.minecraft-softwareupdate.com or something alike, but the random user can't tell it's fake. The page asks you to login (in order to steal your account).
- You enter the email address to login. They enter your email address in the legitimate site where you actually have an account.
- Legitimate site (for example Microsoft) sends you an email with a six digit code, you read the code, it looks legit (it is legit) and you enter it in the attacker site. They can now login with your account.
I read it as just some web page that was bad, but not necessarily imitating a good site. For example, some new gaming forum pops up, which is bad, and uses its own sign-in flow to get people to send it six-digit codes, which it then uses on whatever sites it sees fit. Then the people who run the gaming forum are stealing your Etsy account.
I think one can also understand it as the attacker being the one to enter the email first.
> An attacker can simply send your email address to a legitimate service, and prompt for a 6-digit code. You can't know for sure if the code is supposed to be entered in the right place.
Replace "can simply send your email address" with "can simply input your email address". An attacker inputs your email at login.example.com, which sends a code to your email. The attacker then prompts you for that code (e.g. via a phishing SMS), so you pass them the code that lets them into the account.
I believe (and the article should make it clear) that the article is criticizing specifically the use of the code that user must enter into a box, which invites man-in-the-middle attacks.
The article is not advocating against e-mail-driven URL-based password reset/login, whereby the user doesn't enter any code, but must follow a URL.
The six digit code can be typed into a phony box put up by a malicious web site or application, which has inserted itself between the user and the legitimate site.
The malicious site presents phony UI prompting the user to initiate a coded login. Behind the scenes, the malicious site does that by contacting the genuine site and provoking a coded login. The user goes to their inbox and copies the code into the malicious site's UI. The site then uses it to obtain a session with the genuine site, taking over the user's account.
An SSL-protected URL cannot be so easily intercepted. The user clicks on it and it goes to the domain of the genuine site.
So there are two complaints about this authn scheme that I'm seeing in this thread:
1. It's pretty phishable. I think this is mostly solved, or at least greatly mitigated, by using a Slack-style magic sign-in link instead of a code that you have the user manually enter into the trusted UI. A phisher would have to get the user to copy-paste the URL from the email into their UI, instead of clicking the link or copy-pasting it into the address bar. That's an unusual enough action that most users probably won't default to doing it (and you could improve this by not showing the URL in HTML email, instead having users click an image, but that might cause usability problems). It's not quite fully unphishable, but it seems about as close as you can get without completely hiding the authentication secret from the user, which is what passkeys, Yubikeys, etc., do. I'd love to see the future where passkeys are the only way to log into most websites, but I think websites are reluctant to go there as long as the ecosystem is relatively immature.
2. It's not true multi-factor authn because an attacker only needs to compromise one thing (your inbox) to hijack your account. I have two objections to this argument:
a. This is already the case as long as you have an email-based password reset flow, which most consumer-facing websites are unwilling to go without. (Password reset emails are a bit less vulnerable to phishing because a user who didn't request one is more likely to be suspicious when one shows up in their inbox, but see point 1.)
b. True multi-factor authn for ordinary consumer websites never really worked, and especially doesn't work in the age of password managers. As long as those exist, anyone who possesses and is logged into the user's phone or laptop (the usual prerequisites for a possession-based second factor) can also get their password. Most websites should not be in the business of trying to use knowledge-based authentication on their users, because they can't know whether the secret really came from the user's memory or was instead stored somewhere, the latter case is far more common in practice, and only in the former case is it truly knowledge-based. Websites should instead authenticate only the device, and delegate to the device's own authentication system (which includes physical possession and likely also a lock secret and/or biometric) the task of authenticating the user in a secure multi-factor way.
* Mobile email clients that open links in an embedded browser. This confuses some people. From their perspective they never stay logged in, because every time they open their regular browser they don’t have a session (because it was created in the embedded browser) and have to request a login link again.
* Some people don’t have their email on the device they want to log in on.
Sending codes solves both of these problems (but then has the issues described in the article, and both share all the problems with sending emails)
Magic links can be used to authorize the session rather than the device. That is, starting the sign in process on your laptop and clicking the link on your phone would authorize your laptop's sign in request rather than your phone's browser. It requires a bit more effort but it's not especially difficult to do.
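A rough sketch of that idea, with hypothetical names and an in-memory dict standing in for a real backend store:

```python
import secrets

# In-memory stand-in for a server-side store of pending login requests.
pending_logins: dict[str, dict] = {}

def start_login(email: str) -> str:
    """Called from the laptop: create a pending request, return its id.

    The originating browser keeps request_id (e.g. in a cookie) and polls
    for approval. The emailed magic link carries a *separate* token, so
    clicking the link approves the pending request instead of signing in
    whatever device opened the email.
    """
    request_id = secrets.token_urlsafe(32)
    link_token = secrets.token_urlsafe(32)
    pending_logins[request_id] = {
        "email": email, "link_token": link_token, "approved": False,
    }
    # send_email(email, f"https://example.com/approve?token={link_token}")
    return request_id

def approve(link_token: str) -> bool:
    """Called when the magic link is opened (possibly on another device)."""
    for req in pending_logins.values():
        if secrets.compare_digest(req["link_token"], link_token):
            req["approved"] = True
            return True
    return False

def poll(request_id: str) -> bool:
    """The originating browser polls this; True means session authorized."""
    req = pending_logins.get(request_id)
    return bool(req and req["approved"])
```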
Wouldn't that be incredibly insecure? Attacker would just need to initiate a login, and if the user happens to click the link they've just given the attacker access to their account..
The reason why magic links don't usually work across devices/browsers is to be sure that _whoever clicks the link_ is given access, and not necessarily whoever initiated the login process (who could be a bad actor)
> and if the user happens to click the link they've just given the attacker access to their account
Worse: if the user's UA “clicks the link” by making the GET request to generate a preview. The user might not even have opened the message for this to happen.
> Wouldn't that be incredibly insecure?
It can be mitigated somewhat by making the magic link go to a page that invites the user to click something that sends a post request. In theory the preview loophole might come into play here if the UA tries to be really clever, but I doubt this will happen.
Another option is to give the user the choice to transfer the session to the originating UA, or stay where they are, if you detect that a different UA is used to open the magic link, but you'd have to be careful wording this so as not to confuse many users.
That, or a background process that visits links to check for malware before the user even sees the message.
> Isn’t there a way to configure the `a` element so the UA knows that it shouldn’t do that?
If sending just HTML, you could include rel="nofollow" in the a tag to discourage such things, but there is no way of enforcing that, and no way of including it at all if you are sending plain-text messages. This has been a problem for single-use links of various types also. So yes, but not reliably, so effectively no.
And we get back to the original point of the article (sort of). Opening a magic link should authenticate the user who opened the magic link, not the attacker who made the application send the magic link.
This is what makes securing this stuff so hard when you don't have proper review. What seems like a good idea from one perspective opens up another gaping hole somewhere else.
Off the cuff suggestions for improving UX in secure flows just make things worse.
> think this is mostly solved, or at least greatly mitigated, by using a Slack-style magic sign-in link instead of a code that you have the user manually enter into the trusted UI.
Magic links are better than codes, but they don't work well for cross-device sign-in. What Nintendo does is pretty great: If I buy something on my switch, it shows me a QR code I take a picture of with my phone and complete the purchase there.
I agree it is "mostly solved" in that there are good examples out there, but this is a long way from the solution being "best practices" that users can expect the website/company to take security seriously.
> a. This is already the case as long as you have an email-based password reset flow
I hard-disagree:
If I get an email saying "Hi you are resetting your password, follow these directions to continue" and I didn't try to reset my password I will ignore that email.
If I have to type in random numbers from my email every few days, I'm probably going to do that on autopilot.
These things are not the same.
> anyone who possesses and is logged into the user's phone or laptop (the usual prerequisites for a possession-based second factor) can also get their password.
I do not know what kind of mickey-mouse devices you are using, but this is just not true on any device in my house.
Accessing the saved-password list on my computer or phone requires an authentication step, even if I am logged-in.
I also require second-authentication for mail and a most other things (like banking, facebook, chats, etc) since I do like to let my friends just "use my phone" to change something on spotify or look up an address in maps.
> Most websites should not be in the business of trying to use knowledge-based authentication on their users, because they can't know whether the secret really came from the user's memory or was instead stored somewhere
They can't know that anyway, and pretending they do puts people at risk of sophisticated attackers (who can recover the passkey) and unsophisticated incompetence on behalf of the website (who just send reset links without checking).
> Websites should instead authenticate only the device, and delegate to the device's own authentication system
I disagree: Websites have no hope of authenticating the device and are foolishly naive to try.
You want to be authenticated specifically on the device that you're using to access the website. Not some arbitrary other device.
If you enter your username, password, and totp, and the website tells you you've logged in from some device halfway across the planet you've never heard of, you probably have a problem.
I don't like any of the methods used today. Passwords are OK for me since I pick strong passphrases and use different emails per site, but the superior option for me is IP/CIDR restrictions. A small handful of sites support it, and some of those don't expose that they do, because some people think a long DHCP lease is a static IP, and that can cause a customer support ticket.

It was a battle, but I have managed to get some financial institutions to enable it for me. Every bank big and small can do this, but tellers and bankers have no idea; only their IT person does. When that fails, I just disable internet access to my account at the financial institution and go talk to a real person face to face. If that isn't an option, I just don't do business with them. Simple as.

I do 99.999999% of my internet access from home, but if I depended on mobile I would have a VPN back to my home to utilize my static IP from a Linux laptop. I do not browse the internet from a cell phone and never will. Not perfect, nothing is.
If what you mean is that you have no online accounts then you are a few steps ahead of me. I will get there eventually but have some things to take care of first. Congrats on disconnecting from the internet though. I assume this site is your last holdout? I am envious if so. This site will also be my last online presence.
Can OP tell us how they implement one-time code email? Ever heard of the PKCE flow applied to OTP auth, where there is a guarantee that the OTP flow can only be completed using the device/browser on which the user initiated the request?
Consider this scenario: the user initiates login on your site.

1. You generate a random code_verifier and a code_challenge = SHA256(code_verifier), and store the code_verifier in the browser session (e.g. local/session storage, a secure cookie, etc.).
2. You send the code_challenge to the server along with the email address.
3. The server sends an email with a login code to the user, recording the challenge (associated with the email).
4. The user receives the email and enters the code on the same device/session. The client sends the code + code_verifier to the server, and the server verifies that the code is correct and that SHA256(code_verifier) == the stored code_challenge.

The end result is that the code cannot be used from another device or browser unless that device/browser initiated the flow and has the code_verifier.
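A minimal sketch of this flow (function names, the in-memory store, and the 6-digit format are illustrative, not a reference implementation):

```python
import hashlib
import hmac
import secrets

# Server-side store of pending challenges, keyed by email (illustrative).
pending: dict[str, dict] = {}

def client_start() -> tuple[str, str]:
    """Browser side: make a verifier, derive its challenge.

    The verifier never leaves the initiating browser until final submission.
    """
    code_verifier = secrets.token_urlsafe(32)
    code_challenge = hashlib.sha256(code_verifier.encode()).hexdigest()
    return code_verifier, code_challenge

def server_begin(email: str, code_challenge: str) -> str:
    """Server side: record the challenge and email a 6-digit code."""
    otp = f"{secrets.randbelow(10**6):06d}"
    pending[email] = {"challenge": code_challenge, "otp": otp}
    # send_email(email, otp)  # elided
    return otp  # returned here only so the sketch is testable

def server_verify(email: str, otp: str, code_verifier: str) -> bool:
    """Both the code AND the verifier must match: a phisher relaying only
    the 6-digit code cannot produce the verifier held by the real browser.
    (A production version would also delete the entry after use.)"""
    entry = pending.get(email)
    if entry is None:
        return False
    derived = hashlib.sha256(code_verifier.encode()).hexdigest()
    return (hmac.compare_digest(entry["otp"], otp)
            and hmac.compare_digest(entry["challenge"], derived))
```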
A combination of the above and a login link might help.
But ultimately, the attacker will be relying on the gullibility of the user: the user will have to not check the URLs.
Assuming the attacker's bot knows to send its own code_challenge, and then to send the code_verifier together with the verification code.
But then again, GOOD can just also ensure that their otp can only be completed from the GOOD domain/origin. That would shore things up at least.
What about all those services that force you to enable 2FA with SMS? How could you possibly know whether or not that opens you up to these SIM swap attacks?
This is a fundamental flaw with any login flow that is not phishing resistant. There is nothing novel about this attack.
An attacker can register a domain like office375.com, clone Microsoft's login page, and relay user input to the real site. This works even with various forms of MFA, because the victim willingly enters both their credentials and second factor into a fake site. Push-based MFA is starting to show IP and location data, but a non-technical user likely won’t notice or understand the warning and a sophisticated attacker will just use a VPN matching the users' location anyways.
Passkeys solve this problem through origin enforcement. Your browser will not let you use a passkey for an origin that the passkey was not created for. If they did, you could relay those challenges as well (still better than user + pass as the challenges are useless after first use).
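As an illustration of origin enforcement on the relying-party side, here's a simplified sketch of the clientDataJSON checks. A real WebAuthn verification also validates the signature over the authenticator data; names and the expected origin here are hypothetical:

```python
import base64
import json

EXPECTED_ORIGIN = "https://login.example.com"  # hypothetical RP origin

def check_client_data(client_data_b64: str, expected_challenge: str) -> bool:
    """Minimal sketch of a relying party's clientDataJSON checks.

    The browser bakes the origin into clientDataJSON, so an assertion
    relayed through a phishing domain like office375.com carries the
    wrong origin and fails here.
    """
    padded = client_data_b64 + "=" * (-len(client_data_b64) % 4)
    data = json.loads(base64.urlsafe_b64decode(padded))
    return (data.get("type") == "webauthn.get"
            and data.get("origin") == EXPECTED_ORIGIN
            and data.get("challenge") == expected_challenge)
```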
It's about email as single factor auth, which has become very trendy of late. You just enter your email address, no password, and the email you a code. Access to your email is the only authentication.
The first bullet point is "Enter an email address or phone number".
That's not MFA. MFA stands for multi-factor authentication. If the authentication only requires a code sent to an email OR phone number, that's just a single factor.
But then, email always was the only authentication. On any site, click "forgot password" and promptly they send you a reset password link. Very few sites have a challenge question.
Anthropic is the main one. It's pushing a lot of others to do the same. I literally was arguing against that 2 weeks ago, and the person who was pushing it said "Claude does that. It's really slick, no password to remember".
Patreon can do that too, depending on how you sign up.
It’s not slick at all. Passwords and MFA autofill, their image codes don’t, so I have to close the browser, go to email, copy code, delete email, go to browser, paste code just to login.
The entire email login flow is completely broken. It's not even secure.
Only in an abstract threat-model sense. In real-world phishing it's pretty different.
It's super odd if you land on facebook.com-profilesadfg.info/login thinking it's just Facebook, try to log in, and get a "password reset" email. Most people would be confused, as they didn't want to reset their password.
Having a code for every login means that, apart from the wrong website URL, everything else looks 100% legit.
In India, almost all websites & apps send an OTP to either mobile or email and ask you to enter it to log in. Most of them have even disabled password-based login flows. Really grinds my gears.
The authentication factors of a multi-factor authentication scheme may include:
1. Something the user has: Any physical object in the possession of the user, such as a security token (USB stick), a bank card, a key, a phone that can be reached at a certain number, etc.
2. Something the user knows: Certain knowledge only known to the user, such as a password, PIN, PUK, etc.
3. Something the user is: Some physical characteristic of the user (biometrics), such as a fingerprint, eye iris, voice, typing speed, pattern in key press intervals, etc.
Email and phone are both in category one, comprising only one unique factor.
What is the minimum number of things you need access to in order to log in?
If you have access to the phone, you can log in. OR if you have access to the email account, you can log in.
You don't need to know the user's password, you only need access to one of these inboxes and nothing else. One-factor authentication, but worse, because there are multiple attack surfaces.
My main frustration with this sort of system, beyond the security risks is the terrible UX of a system like Spotify.
I appreciate that most people log in and stay logged in, but I frequently switch Spotify accounts, and I use passwords to log in. Instead of letting me choose between a password and a 6-digit code, every time I try to change accounts a needless 6-digit code is generated and sent to a shared inbox: a waste of resources and storage, in addition to being a security concern, as flagged throughout this thread.
It's "terrible" because the author can describe exactly one phishing vector?..
Have you ever tried resetting a password before? Passwords have a similar phishing vector, plus many other problems that magic links and one-time login codes don't have.
If six-digit login codes are less secure than passwords, the reasons why are certainly not found in this article.
There are some shortcomings to using email codes, but I fail to see how this is worse than passwords, when the exact same kind of attack would work for passwords. If anything it would be worse with passwords, which can be stored, reused later, or sometimes changed directly on the service.
My password manager will never fill my password into the wrong site. I would need to do so manually, which sets off so many alarm bells in my head.
With email pasting the number into a random website is the expected flow and there is basically no protection (some phones have basic protections for SMS auth but even this only works if you are signing in on the same device).
OP is just ragebait for nerds. How many articles have been published in the last 20 years about the issues with passwords? Now we're saying that the small chance some user ends up on bad-minecraft.com with a login form is actually worse than using "L3tmein!" as a password everywhere? Please find something more worthy to spend your time thinking about.
But if you could back up a passkey, wouldn't the key just be a password?
(I do agree with you about backups being essential, but my conclusion was "the idea is fundamentally flawed," rather than "it's one tweak away from greatness.")
No, because unlike a password you never provide the private key for a passkey to the site you’re logging into, which is how many password breaches occur.
This is the irreducible problem. It's the Emperor's New Clothes™. So either the secrets get generated and stored in tamper-protected hardware, or they are stored somewhere else that can be made portable. For the latter, then they ought to be serializable into some standard form.
Since you keep posting this link, I'll just keep saying it: there is no credential manager attestation in the consumer synced passkey ecosystem. Period. There is no way to build and allowlist, by design. The consumer synced passkey ecosystem is open.
Strawman? We are talking about this link, right, the one that says:
> I've already heard rumblings that KeepassXC is likely to be featured in a few industry presentations that highlight security challenges with passkey providers, the need for functional and security certification, and the lack of identifying passkey provider attestation (which would allow RPs to block you, and something that I have previously rallied against but rethinking as of late because of these situations).
> The reason we're having a conversation about providers being blocked is because the FIDO Alliance is considering extending attestation to cover roaming keys.
> From this conversation it sounds like the FIDO Alliance is leaning towards making it possible for services to block roaming keys from specific providers.
Yes, read the quotes you took again. Attestation is not a thing currently. There is legitimate discussion about how to handle shitty password managers. If LastPass shits the bed again, it would be great to have a mechanism for others to block it, or at least know that, due to a major incident, keys from that tool are weak. Debian OpenSSL keys were vulnerable for a long time, and being able to know and alert on, or block, private keys generated on a Debian machine is reasonable, if not desirable. If KeepassXC is insecure or promotes insecure practices, whose fault is that, and what do you suggest we do?
The entire issue is about doing the minimum possible of not exporting it in plaintext. Nothing is stopping you from decrypting it and posting it on your Twitter if you so wish. Just don't have the password manager encourage bad practices. How is that unreasonable?
> If LastPass shits the bed again, it would be great to have a mechanism for others to block it
And by the way, if and when something like that does happen, what's the user supposed to do if they suddenly find their passkey provider has been blocked?
Yes, we've seen you repeat that we have to read it again. I reread this morning before the post, but really just found more things supporting my position.
> To be very honest here, you risk having KeePassXC blocked by relying parties (similar to #10406).
From the linked https://github.com/keepassxreboot/keepassxc/issues/10406
> | no signed stamp of approval from on high
> see above. Once certification and attestation goes live, there will be a minimum functional and security bar for providers.
> | RP's blocking arbitrary AAGUIDs doesn't seem like a thing that's going to happen or that can even make a difference
> It does happen and will continue to happen because of non-spec compliant implementations and authenticators with poor security posture.
Is your argument that despite being doused with gasoline I can't complain because I'm not currently on fire?
So you're just not gonna respond to any of the points explaining your straw man. Yeah, you should read it again, and read my explanation again, and let me know if you have any questions or responses. Don't douse yourself in gasoline and you won't have to worry about being on fire.
(You have every right to douse yourself in gasoline. No one is taking that away from you. Just stay away from everyone else.)
Maybe you can let us know what definition of "strawman" you are using in this context?
KeePassXC is at risk of being blocked for making it easy to back up the passkeys. I don't see where that's been disproven or explained, other than saying "well attestation isn't enforced yet" -- that is, the metaphorical gasoline (provider AAGUIDs) hasn't yet been ignited (blocking of provider AAGUIDs)
> The entire issue is about doing the minimum possible of not exporting it in plaintext. Nothing is stopping you from decrypting it and posting it on your Twitter if you so wish. Just don't have the password manager encourage bad practices.
I don't disagree with this in principle, but it does warn you and realistically, what is the threat model here? It seems more like a defense-in-depth measure rather than a 5-alarm fire worthy of threatening to blacklist a provider. Maybe focus energy instead on this? (3+ year workstream now I guess?)
>> Sounds like the minimal export standard for portability needs to be defined as well.
> This is all part of the 2+ year workstream.
--
The more I get exposed to this topic, the less I'm convinced it was designed around people in the real world, e.g. https://news.ycombinator.com/item?id=44821601. Sure is convenient that it's so so easy to get locked into a particular provider, though!
People want the ability to back their passkeys up. The fact that this can be used for migrating is an unfortunate side-effect, as far as the provider is concerned.
Same. Sacrificing security by selling superficial convenience and limited security advantages for crucial inconveniences.
Perhaps there ought to be a well-known, "ACME" password-changing API such that a password manager could hypothetically change every password for every service contained in an automated fashion.
They are referring to the ability of a site you are logging into forcing you to use a client from a specific list or having a list of clients to deny.
It's copied over from FIDO hardware keys where each device type needed to be identifiable so higher tier ones could be required or unsecured development versions could be blocked.
This is what I was referring to, and we already have seen this happen in the wild with PayPal at one point (possibly still) blocking passkeys from e.g. Firefox. For now the argument against this seems to be that "Apple zeroes this out so service providers can't do it without risking issues for their many users who use Apple to store their keys", but clearly this is so precarious of a situation it may as well not be a thing. You can't depend on one trillion-dollar company not changing their minds on that tomorrow.
Even with the current flimsy "What about iPhones?" defense against attestation, is there anything stopping say Microsoft from just forcing you to install a different app to use Microsoft services?
What a crock, to not bother coming up with a way to make passkeys portable and then threaten to ban providers who actually thought about how humans might use them in the real world
Specifically they are referring to synced passkeys (passkeys generated by services like Google password manager/1Password/Apple and are linked to that account).
Because these passkeys are stored in the Cloud and synced to your providers account (i.e. Google/Apple/1Password etc), they can't support attestation. It leads to a scenario where Relying Parties (the apps consuming the passkey), cannot react to incidents in passkey providers.
For example: if tomorrow 1Password was breached and all their cloud-stored passkeys were leaked, RPs have no way to identify and revoke the passkeys associated with that leak. Additionally, if a passkey provider turns out to be malicious, there is no way to block them.
Did you try that? I can't find any confirmation of whether it actually works for classical keys, but it's definitely not supported for resident keys on Ledger.
Damn I misremembered, I've only tried it on Trezor, not Ledger. What I do know is I've successfully used my Trezor for signing in to Google and several other sites through one or both of Chrome or Firefox.
Erm... Passkeys _are_ backupable/syncable WebAuthn keys. You can get the clear-text Passkey private keys by just looking into your storage (Keychain on iOS).
What's missing is a standardized format for the export.
Wholeheartedly agree, however The Changelog Podcast helped shift my perspective on this. It's really about not having the responsibility of storing and maintaining passwords.
You should never store passwords anyways. You store hashes. I don’t see the issue. If you don’t trust yourself to keep a hash, maybe don’t store user information at all.
Most passwords leaked online initially come from leaked hashes, which bad actors crack using tools like hashcat.
If your user has a password like "password123" and the hash gets out, then the password is effectively out too, since people can easily look up the hashes of previously cracked passwords like "password123".
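A tiny illustration of why unsalted hashes fall to precomputed lookups: the same password always hashes to the same digest everywhere, so one cracked table maps leaked hashes straight back to plaintext.

```python
import hashlib

# The same password always produces the same unsalted digest, so a
# precomputed table of already-cracked passwords recovers the plaintext
# from a leaked hash instantly.
leaked_hash = hashlib.sha256(b"password123").hexdigest()
cracked = {hashlib.sha256(p).hexdigest(): p
           for p in [b"password123", b"hunter2", b"letmein"]}
print(cracked[leaked_hash])  # recovers b"password123" instantly
```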
This is how it should be done. But it still doesn't fully protect users, because an attacker can try to brute-force the passwords they're interested in. It requires much more effort, though.
Salting already fixed this decades ago, and most modern password libraries will automatically generate and verify against a hash like <method>$salt$saltedhash if you use them instead of rolling your own.
So if they don't want to store your passwords because they do not want the responsibility of keeping it safe, should you trust your credit card and other personal information with them?
You'll find that opinion is still divided among these three options. And bcrypt is harder to mess up: it has fewer parameters (so it doesn't fall apart as easily) and salting is built in, whereas it's not for scrypt and argon2. If, knowing nothing else about the competency of the programmer, I had to choose between an application using scrypt, argon2, or bcrypt, I'd pick bcrypt any day.
"But, seriously: you can throw a dart at a wall to pick one of these... In practice, it mostly matters that you use a real secure password hash, and not as much which one you use."
This also doesn't address my biggest concern: Google controls the Chrome password manager and probably controls your email address. At a bureaucratic sneeze you can be denied access to your entire life.
We need something equivalent to "Americans will use anything but the metric system" but "Sites will force users to use anything but a password manager."
Some sites turn this into a problem with accessing their site by having an unsubscribe flow that doesn't account for this login method. Unsubscribing from marketing means I can no longer log in.
Public shaming: Ally Bank made this mandatory. I'm leaving them as soon as I can find another bank with 3.x% on savings, bill pay that automatically retrieves bill amounts, and support for _at least_ TOTP.
I use Schwab (bank and brokerage). Their money market fund yields 4.x%, with just a few more clicks to move into and out of the MMF. The Bill Pay retrieves the amount on my BofA credit card just fine. And it supports TOTP via Symantec VIP Access (it doesn't seem like you can use a standard TOTP app).
This is why I think people ending up locked into vendor implementations of passkeys will be a thing. We had a totally open standard, TOTP, and there were still (somewhat successful) efforts to make it non-standard, like the Symantec VIP Access you mentioned. How many authenticator apps do I have to install? I was hoping for one!
FWIW when I was researching this for my own accounts I believe I saw in passing that someone had figured out a way to extricate the TOTP secret from VIP Access to use in a standard TOTP app. I didn't look into it much though since none of my current accounts require it and it just seemed something to avoid.
Services love it because it hands off the risk and responsibility to… Google/Gmail in most personal cases. This was why the pattern was adopted so quickly.
I went hunting in the NIST documentation to see if this is even an approved authentication method and, technically, I can't find anything wrong with it (if we consider it to be a "Look-up Secret Authenticator", see NIST 800-63b section 5.1.2.1). They're technically abusing what is supposed to be a collection of pre-distributed authenticators (think recovery codes), but there's nothing prohibiting these look-up codes from being sent on-demand and there only being a single selection.
As for the method itself.. IMO they're certainly phishable, but I don't think they're any more phishable than a typical username/password prompt.
> An attacker can simply send your email address to a legitimate service, and prompt for a 6-digit code. You can't know for sure if the code is supposed to be entered in the right place.
An attacker can also simply present a login prompt, say a Google-looking one, and a user will just enter their credentials.
This is why phishing-resistant authentication is the one true path forward.
Codes that are provided on demand by a service will always be far less secure than proper TOTP, because with proper TOTP no secret ever leaves the service after initial configuration, whereas with discount 2FA through email or especially SMS, a fresh secret has to be delivered to me each time, where it can easily be intercepted by all manner of attacks.
I recently set up passkey-only sign ins for a webapp I'm writing using Authentik [0](Python OIDC provider, with quite a nice docker-compose run-up, took only minutes to stand up.) It was surprisingly easy to configure everything so that passkeys are the only thing ever used.
If anyone would be interested I could write it up? I was surprised what a nice user flow it is and how easy it was to achieve.
So many of these authentication providers have a hockey-stick pricing scheme, where the first few users are near free, and when you grow you are going to get mugged and kicked in the groin.
Still seems far, far more likely that the average user will have their account stolen via password theft/reuse than the more complicated scheme the author is describing. Links instead of codes also fixes the issue.
As the author points out, email OTP can be phished if the user is tricked into sending their OTP to an attacker.
Email magic links are more phishing resistant - the email contains a link that authenticates the device where the link was clicked. To replicate the same attack, the user would have to send the entire link to the attacker, which is hopefully harder to socially engineer.
But magic links are annoying when I want to sign in from my desktop computer that doesn't have access to my email. In that case OTP is more convenient, since I can just read the code from my phone.
I think passkeys are a great option. I use a password manager for passkeys, but most people will use platform-provided keys that are stuck in one ecosystem (Google/Apple/MS). You probably need a way to register a new device, which brings you back again to email OTP or magic link (even if only as an account recovery option).
It's also a lot less convenient. Because I need to have access to my email, wait for the code, copy it etc. I hate companies that dump this extra work on me, like booking.com and all the AI companies.
Passkeys would be so much easier, convenient and so much more secure. I really don't understand why they go for this.
I haven't actually seen these being used as passwords like TFA states; they're usually a form of 2FA.
If they actually are passwords, yes, my password manager is a better UX than having to fetch my phone, open SMS, wait for the SMS, like good grief it's all so slow.
(In the 2FA form, I'd prefer TOTP over SMS-OTP, but the difference is less there.)
IF everyone switches to PASSKEYS, hackers are going to focus exclusively on them and they WILL find a way; then everyone will be FUBAR. Worse, BIG TECH is not going to take responsibility. THAT and AI? PROBLEMATIC. Nothing is foolproof. However... proper password protocol must be taught. Once upon a time, people had password hints. Why not teach a combination of that and proper passwords?
There is a way to fix this. Don't just require a 6 digit code. Require a 6 digit code and a long random string (an expiring token), which is only present on the page the user visited, or in the email they were sent.
The actual weak link here is not the procedure itself. It’s the fact that your email services will happily accept phishing mails into your inbox.
I'm pretty sure we could prevent this by issuing some kind of proof of agreement (with sender and recipient info) through email services. Joining a service becomes submitting a proof to the service, and any attempt to contact the user from the service side must be sealed with that proof. Mix in some signing and HMAC and this should be doable. I mean, IF we really want to extend the email standard.
I've conscientiously ignored every attempt by every service in the past decade to bully me into giving up a phone number for 2FA. Authenticator apps and passkeys, fine. But never over SMS.
Indeed, it's such a bad design: instead of a simple and quick one-shortcut login from a phishing-resistant password manager, users have to waste time switching back and forth between different apps/devices.
This article really opened my eyes to how phishing can exploit email verification codes. The shift towards using keys instead of passwords sounds promising for security. It's scary how easily users can be tricked, but your point about preferring security over convenience is spot on. Great read!
Some days, when I'm tired of receiving yet another authentication code, I half-jokingly think: we must surely be reaching a point where at least half of all SMS messages sent are authentication codes (with a small payload, for now).
This has been driving me nuts ever since it was implemented. This method has been the biggest disappointment in login procedures and quickness. I don't want to go through three to five steps just to log in, and in the meantime I forget what I came to the service for in the first place. There has got to be a better method for security and for streamlining sign-ins. I should not have to do the work of security for the service, and every other week I hear about the same service being hacked and millions of accounts being affected.
A passphrase is basically like a password in the sense that I can lose it, but it's not like a password in the sense that I can actually memorise it. (Or rather, all of them)
I prefer my passwordstore workflow.
I remember two passwords; the rest is kept safe for me and unlocked when I need it.
It's not perfect, but it's by far the least bad solution of them all.
Also super annoying if you haven’t set up email on a device (like my iPad), now I have back and forth with my phone instead of going through my password manager.
Also the 6-digit codes tend to appear on the lock screen of my phone, which means anybody can see them. I can turn that off, I know, but many people will not.
I can't be the only person here who is familiar with the word "attestation" in everyday life but had no idea what it means in the context of login security.
So I asked my friend Miss Chatty [1] about it. Hopefully this will help anyone who is as confused as I was.
I think the registration pattern should be: the user enters an email to register; an email is sent to that address with a verification link; the user clicks the link; the user gets an email with a username and password to log in to the profile created for them.
Same thing in blue, which additionally opens the door for someone else to change their password and lock them out, never mind the quality of the passwords users set initially, etc. Looking at you, mum, registering a new account every time you forget the last password.
Now that all the big companies have changed to SMS code authentication, we've realized this is a bad pattern. Please take all that out and get back to the "click a link in this email" pattern; it looks more secure.
From my experience, OTAC is typically associated with sites that want to prevent automation and scraping. By that I mean I have seen it used at EACH login to create extra friction. Interesting that this is being used as part of "regular" security.
From a design perspective, the reason this flaw exists is that the code can be typed on any machine and sent through any intermediary. More secure schemes are possible without much effort. Magic links have some pros and cons, but overall I think they are better.
Here is what I do when the user logs in and email verification is needed:
1. Generate a UUID on the server.
2. Save the UUID on the client using the Set-Cookie response header.
- The cookie value is symmetrically encrypted and authenticated via HMAC or AES-GCM by the server before it is set, such that it can only be decrypted by the server, and only if the cookie value has not been tampered with. This is very easy to do in hapi.js and any other framework that has encrypted cookies.
- Use all the tricks to safeguard the cookie from being intercepted and cloned. For example, use a name with the __Host- prefix and these attributes: Secure; HttpOnly; SameSite=Lax;
3. Email the user a magic link that contains the UUID.
4. The user clicks the link and has their email verified.
- When the link is clicked, the browser sends the Cookie header automatically, the server decrypts it and compares it to the UUID in the URL and if that succeeds, the email has been verified. Again, this is very easy in hapi.js, as it handles the decryption step.
- Including the UUID in the magic link signals that there is _supposed_ to be a cookie present, so if the cookie is missing or it doesn't match, we can alert the user. It also proves knowledge of the email, since only the email has access to the UUID in unencrypted form.
5. The server unsets the cookie, by responding with a Set-Cookie header that marks it as expired.
6. The server begins a session and logs the user in, either on the page that was opened via the link or the original page that triggered the verification flow (whichever you think is less likely to be an attacker, probably the former).
Note that there are some tradeoffs here. The upside is that the user doesn't need to remember or type anything, making it harder to make mistakes or be taken advantage of. The downside is that the friction of having to use the same device for email and login may be a problem in some situations. Also, some email software may open a different browser when the link is clicked, which will cause the cookie to be missing. I handle this by detecting the missing cookie and showing a message suggesting the user may need to copy-paste the link to their already open browser, which will work even if they open a new tab to do it (except for incognito mode, where some browsers use a per-tab cookie jar).
Lastly, no cookie is 100% safe from being stolen and cloned. For example, a social engineering attack could involve tricking the user into sharing their link and Set-Cookie header. But we've made it much more difficult. They need two pieces of information, each of which generally can't be intercepted, or used even if intercepted, by intermediary sites.
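As a rough illustration of the tamper-proof cookie value (an HMAC-only variant; the scheme above also mentions encrypting the value with AES-GCM, which this sketch omits, and the helper names here are made up rather than hapi.js APIs):

```python
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)  # server-side secret, never sent to clients

def make_cookie_value(verification_id: str) -> str:
    # Tag the UUID with an HMAC so the server can detect any tampering
    # with the cookie value it later gets back.
    tag = hmac.new(SERVER_KEY, verification_id.encode(), hashlib.sha256).hexdigest()
    return f"{verification_id}.{tag}"

def verify_link(cookie_value: str, uuid_from_link: str) -> bool:
    # Called when the magic link is clicked: the cookie must be present,
    # untampered, and must match the UUID carried in the link URL.
    try:
        cookie_uuid, tag = cookie_value.rsplit(".", 1)
    except ValueError:
        return False
    expected = hmac.new(SERVER_KEY, cookie_uuid.encode(), hashlib.sha256).hexdigest()
    return (hmac.compare_digest(tag, expected)
            and hmac.compare_digest(cookie_uuid, uuid_from_link))
```

Verification only succeeds when both halves meet: the link from the email and the cookie set in the browser that started the flow.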
> An attacker can simply send your email address to a legitimate service, and prompt for a 6-digit code. You can't know for sure if the code is supposed to be entered in the right place. Password managers (a usual defense against phishing) can't help you either.
Roughly the same security as password login with email recovery. The only difference is that this makes the attack surface larger, because the user is frequently using email.
The only secure login is through 1. a hardware device and 2. a solution where both the user/service are "married" and can challenge each other during the login process. This way, your certificate of authentication will also check that the site you are connecting to is who it says it is.
I would have agreed with this if it weren't for the fact that, for various reasons, you occasionally need to copy and paste passwords manually from password managers. This phishing scenario is no worse.
I've flipped my stance on this. I used to be pretty pro passkey, but after using them for a while what I've observed is:
1. There's very low consistency in implementation, so while I understand the problems passkeys solve, it seems like every vendor has chosen different subproblems of the problem space to actually implement. Does it replace your password? Does it replace MFA? Can I store multiple passkeys? Can I turn off other forms of MFA? Do I still need to provide my email address when I sign in (Github actually says No to this)?
2. The experience of passkeys was supposed to be easier and more secure than passwords for users who struggle to select good passwords, but all I've observed is: Laypeople whose passwords have never been compromised, in 20 years of computing, now deeply struggling to authenticate with services. Syncing passwords or passkeys between devices is something none of these people in my life have figured out. I still know two people in their late 20s who use a text file on their computer and Evernote to manage their passwords. What is their solution for passkeys? They don't know. They're definitely using them though. The average situation I've seen is: "What the heck is this how do I do this I guess I'll just click save passkey on this iOS prompt" and then they can never get back into that service. The QR code experience for authenticating on desktop using mobile barely works on every Windows machine I've seen.
3. There is still extremely low support among password managers for exporting passkeys. No password manager I've interacted with can do it. Instead it has, to my eyes, become another user-hostile business decision: why should we prioritize a feature that enables our users to leave our product? "Oh, FIDO has standardized the import/export system, it's coming." Yeah, we've also standardized IPv6. Standards aren't useful until they're used. "Just create new passkeys instead of exporting": as someone who has recently tried to migrate from 1Password to custom-hosted Bit/Vaultwarden, this is the reason I gave up. By the way, neither of these products supports exporting passkeys.
It might end up being like USB-C, where it's horrible for the first ten years, but slowly things start getting better and the vision becomes clear. But I think if that's the case, We The Industry can't be pulling a Jony Ive 2016 MacBook Pro and telling users "you have to use these things and you have no other option [1]". Apple learned that lesson. I'm also reasonably happy with how Apple has implemented passkeys (putting aside all the lock-in natural to using Apple products; at least it's expected with them). But no one else learned that lesson from them.
1. You go to evil.example.com, which uses this flow.
2. It prompts you to enter your email. You do so, and you receive a code.
3. You enter the code at evil.example.com.
4. But actually what the evil backend did was automated a login attempt to, like, Shopify or some other site that also uses this pattern. You entered their code on evil.example.com. Now the evil backend has authenticated to Shopify or whatever as you.
The site is comparing this method to plain username + password though. Doesn't that miss the obvious point that evil.example.com could do the exact same thing with the username + password method, except it's even easier to phish because they just get your username + password directly (when you type them in) and then an attacker can log in as you via a real browser?
evil.example.com can be a legitimate-looking website (e.g. a new tool a person might want to try). If it has a login with email code, it can try to get the code from a different website (e.g. aforementioned Shopify).
For the username + password hack to work, the evil.example.com would have to look like Shopify, which is definitely more suspicious than if it's just a random legitimate-looking website.
I assume it's a phishing scenario, given the note about password managers. Evil site spoofs the login page, and when you attempt to log in to the malicious site, it triggers an attempt from the real site, which will duly pass you a code, which you unwittingly put into the malicious site
TOTP is vulnerable to the same attack, though. If you are fooled into providing the code, it doesn't matter whether it's a fresh one to your email or a fresh one from your authenticator.
They are, which is one major issue with TOTP and most current MFA methods. There is an implicit assumption that you only get the full benefit if you're using a password manager.
1. A password manager shouldn't be vulnerable to putting your password in a phishing site.
2. If your password is leaked, an attacker can't use it without the TOTP.
Someone who doesn't use a password manager won't get the benefits of #1, so they can be phished even with a TOTP. But they will get the benefits of #2 (a leaked password isn't enough)
Passkeys assume/require the use of a password manager (called a "passkey provider")
It means that you go to foo.com and enter your email to sign up. But foo.com routes that request to bank.com, hoping you have an account there.
bank.com sends you a verification email, which you expect from foo.com as part of the sign-up verification process. For some batshit-crazy reason, you ignore that the email came from bank.com and not foo.com, and you type the secret code from the email into foo.com to complete the sign-up process.
And bam! foo.com got into your bank account.
Complete nonsense, but because it works 0.000000000000001% of the time for some crazy niche cases in the real world, let's talk about it.
The evil site usually says something like "enter the code from our identity partner x" or something, which is a lot more believable when it's a service like Microsoft that does provide services like that.
The author couldn't even be bothered to write about the supposed examples of these practices being wrong. The whole thing lacks detail and actual arguments, instead we get "please stop" like it's some sort of a reddit or twitter shitpost.
Look at this - https://news.ycombinator.com/item?id=44822267 - is this what this site is supposed to be now? Writing the article in the place of the author because the author couldn't be bothered to even form their own argument correctly? What the fuck?
The fact that this has been upvoted so high and allowed to stay on the front page is also a clear signal to others that this low-effort garbage is welcome here, which will only encourage others to post similarly worthless blogposts, lowering the overall quality of this site.
There are multiple comments in this very thread that are longer than this "article". My own comment is longer!
Is this what this site is supposed to be now? People ranting, complaining, and swearing about how a post submission is not what they think should be on the site?
The post spawned an interesting conversation, thats worth itself alone.
Go put replies like this on reddit where they belong.
Interesting conversations can also happen under articles that have actual substance, there's no need to tolerate such short blogposts just because these might spawn an interesting discussion.
Funny that you mention reddit because this is the exact same type of spam that pollutes /r/programming.
The idea of needing to provide extremely personal information that’s somehow tailored for me just to use a service is so incredibly dystopian to me. I’d much rather use a password.
Sure, its being a 6-digit code with potential for social engineering can be an issue. It's similar to getting a yes/no "was this your login?" prompt in an authenticator app, a bit less easy to social-engineer, but in turn also susceptible to brute-force attacks (similar to how TOTP is).
Though, on the other hand:
- Some things need so little security that it's fine (like a configuration site for email newsletters, or similar, where you have to have a mail-only-based unlock anyway).
- If someone has your email, they can do a password reset.
- If you replace the emailed code with a login link, you add some cross-device hurdles but fix some of the social-engineering vectors (i.e. it's like a password reset on every login).
- You can still combine it with 2FA, which, if combined with a link instead of a PIN, is basically the password-reset flow, so it should be reasonably secure.
Either way, that login flow was designed for very low-security use cases where you also wouldn't ever bother with 2FA because losing the account doesn't matter. IMHO, don't use it for anything else :smh:
The attack pattern is:
1) User goes to BAD website and signs up.
2) BAD website says “We’ve sent you an email, please enter the 6-digit code! The email will come from GOOD, as they are our sign-in partner.”
3) BAD’s bots start a “Sign in with email one-time code” flow on the GOOD website using the user’s email.
4) GOOD sends a one-time login code email to the user’s email address.
5) The user is very likely to trust this email, because it’s from GOOD, and why would GOOD send it if it’s not a proper login?
6) User enters code into BAD’s website.
7) BAD uses code to login to GOOD’s website as the user. BAD now has full access to the user’s GOOD account.
This is why “email me a one-time code” is one of the worst authentication flows for phishing. It’s just so hard to stop users from making this mistake.
“Click a link in the email” is a tiny bit better because it takes the user straight to the GOOD website, and passing that link to BAD is more tedious and therefore more suspicious. However, if some popular email service suddenly decides your login emails or the login link within should be blocked, then suddenly many of your users cannot login.
Passkeys are the way to go. Password manager support for passkeys is getting really good. And I assure you, all passkeys being lost when a user loses their phone is far, far better than what's been happening with passwords. I'd rather granny need to visit the bank to get access to her account again than have someone phish her and steal all her money.
The problems with passkeys are more nuanced than just losing access when a device is lost (which actually doesn't need to happen, depending on your setup). The biggest problem is attestation, which lets services block users who use tools that give them more freedom. Passkeys, or more generally challenge-response protocols, could easily have been an amazing replacement for passwords and a win-win for everyone. Unfortunately, the reality of how they've been designed is that they will mainly serve to further cement the primacy of BigTech and take away user freedom.
I want to like passkeys but I haven't had any success getting them to work. Every time I click on "sign in using passkey" both my browser (Firefox or Chrome, on Android/Win/Mac) and Bitwarden are like "no passkeys found" and I'm never given an option to create one.
I feel like I'm doing something stupidly wrong or missing a prompt somewhere, or maybe UX is just shitty everywhere, but if I, a millennial who grew up programming and building computers, struggle with this, then I don't expect my mom, who resets her password pretty much every time she needs to sign into her bank, to get it to work.
I'm in the same boat. I just cannot get them to work; they work sometimes on some browsers, but a solid majority of the time I click on "use passkey" I get a generic error message and end up going back and using the password flow.
I haven't invested more time in this because if it's so unusable for me as an engineer, it's a non-starter for the general public.
I've found passkey support on Windows to be a bit janky right now. You can get into weird scenarios where the passkeys don't work, but there's also no UI to remove or reset them. This is especially annoying when you change out some component in your PC, which seems to void however they are encrypted, and Windows can't figure out why it can't access them.
It's too early for grandma to use, IMO.
I use Bitwarden in Firefox and passkeys "just work" on Linux, Android, Mac, and Windows. Previously, I used the extension in Chrome (Linux, Android, Windows).
The only relevant Bitwarden setting appears to be "Ask to save and use passkeys" under Notifications. I do turn off the browser's built-in password manager, though I believe anything else relevant I have at default. If you have those and they still aren't getting saved, then I'm at a loss, but wish I knew why they don't work for you. In Bitwarden, you can see if there's a passkey saved in an entry, as the creation timestamp is shown right under the password field and editing an entry also allows deleting a passkey.
Not just you. Browser support is horrid for physical passkeys, very hit and miss.
Depends on the brand of the passkey i think.
I've seen that kind of comment multiple times, and I don't get it.
I use Yubikeys, and passkeys just work. On Chrome, Firefox and Safari, both on macOS and Linux (specifically Alpine). I also tried with iPhones (for my family), and it also just works.
I haven't tried using an Android device, is it what you are trying?
That might be the delta. I'm not using a hardware key (well, not a YubiKey). I'm using just my phone or browser.
All non-enterprise big tech uses of passkeys (Google, Apple & Microsoft Accounts), do not require an attestation statement (or in spec-parlance, use the `None` or `Self` Attestation Types).
The presence of other attestation types in the spec allows passkeys to replace the use of other classes of authentication that already exist (e.g. smartcard). For example, it's very reasonable for a company to want to ensure that all their employees are using hardware Yubikeys for authentication. Furthermore, sharing the bulk of implementation with the basic case is a huge win. Codepaths are better tested, the UIs are better supported on client computers, etc.
The presence of attestations in the spec does not impinge on user freedom in any meaningful way.
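Concretely, a consumer-facing RP just asks for no attestation in its WebAuthn creation options. A hedged Python sketch of the JSON-ready options a server might hand to the browser's credential-creation call (`registration_options` is a made-up helper; the rp/user values are illustrative placeholders):

```python
import base64
import secrets

def b64url(data: bytes) -> str:
    # WebAuthn transports binary fields as unpadded base64url strings.
    return base64.urlsafe_b64encode(data).decode().rstrip("=")

def registration_options(user_id: bytes, username: str) -> dict:
    # Sketch of WebAuthn PublicKeyCredentialCreationOptions as a dict.
    return {
        "rp": {"id": "example.com", "name": "Example"},
        "user": {"id": b64url(user_id), "name": username, "displayName": username},
        "challenge": b64url(secrets.token_bytes(32)),
        "pubKeyCredParams": [{"type": "public-key", "alg": -7}],  # ES256
        "attestation": "none",  # consumer flow: no attestation statement requested
    }
```

Flipping that last field to "direct" or "enterprise" is what the corporate smartcard-replacement use case relies on; consumer sites simply leave it at "none".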
Do you have some examples where people actually require attestation in third-party-facing systems? Or is this purely "But in theory...", and you've dismissed all the very real problems with the alternatives because you're scared of a theoretical problem?
I always reject attestation requests and I don't recall ever having been refused, so if this was a real problem it seems like I ought to have noticed by now.
Microsoft Entra ID goes out of its way to enforce attestation for FIDO 2 keys.
The protocol normally allows you to omit the attestation, but they worked around that with an extra call after a successful registration flow that sends you to an error page if your FIDO2 passkey isn't from one of these large approved vendors: https://learn.microsoft.com/en-us/entra/identity/authenticat...
I found out by trying to prototype my own FIDO2 passkey, and losing my mind trying to understand why a successful flow that worked fine on other websites failed with Microsoft. It turns out you are not allowed to do that.
To defend Redmond here, Entra is an enterprise system. If the company you work for or are interfacing with wants to enforce attestation, that's their business.
B2C I would expect more latitude on requiring attestation.
A problem is that once a thing like that exists, it ends up on security audit checklists and then people do it without knowing whether they have any reason to.
I would counter-argue, as the person pushing passkeys in an enterprise: no one in the business knows what attestation is, but we're going to do it because the interface recommends it.
I'm not sure it's the standards committee's fault that your employer hires people that don't know how to do their job.
I think it's reasonable to have attestation for the corporate use case. If they're buying security devices from a certain vendor, it's reasonable for their server to check that the person claiming to be you at the other end is using one of those devices. It's an extra bit of confidence that you're actually you.
It's the standards committees job to design standards that are difficult to misuse.
Exactly. For personal authentication, you are at least personally incentivized to do the right things. For corporate auth, people will do whatever it takes to skip any kind of login.
I once knew a guy who refused to let his office computer go to sleep just to avoid having to enter his password to unlock his computer. He was a really senior guy too, so IT bent to allow him do this. What finally made him lock his computer was a colleague sending an email to all staff from his open outlook saying “Hi everyone, it’s my birthday today and I’m disappointed because hardly anyone has come by to wish me happy birthday”. The sheer mortification made him change his ways.
A culture of harmlessly pranking computers left unlocked goes a long way. ThoughtWorks veterans know what I mean.
lol this is funny, why didn't he want to sign in more often tho???
He was completely non technical and I guess he figured that IT should be able to work the security system around him.
The most common human trait ever.... laziness
Don’t put in place systems which encourage lock-in, even at the B2B level.
Aren't those usually used inside an enterprise vs B2B between enterprises?
Ah, and even if you can turn it off as the administrator, you still need to include the attestation, it's just not checked. Gotta love Microsoft...
Yeah Microsoft is so annoying. It's also kicking me out every day now (with this passive aggressive "hang on while we're signing you out" message). On M365 business with Firefox on Linux with adblocker. I hate using their stuff so much.
Same has been happening for a few months. I get thrown out of all o365 services multiple times each day.
Yes, me too for a couple of months now :( So annoying. It doesn't of course happen on Windows.
It started with OneNote web a couple years ago. Every day that gave a popup "Your session needs to be refreshed" and it would reload all over again. Microsoft doesn't bother to make a OneNote desktop app for my platform and the web version is really terrible anyway (you can only search in one tab, not a whole notebook). So I moved to self-hosted Obsidian which I'm really happy with. Now I can basically see myself typing in a note from another client.
But replacing Microsoft for email is another topic.
I don't work in August, so I can't (well, won't) check, but my boss had the infrastructure team turn on FIDO2 for the mandatory 2FA on our administrative accounts and I do not remember having any problems with this.
I do remember explicitly telling them (because of course having agreed to do this they have no idea how and need our instructions) not to enable attestation because it's a bad idea, but you seem to be saying that it'll somehow be demanded (and then ignored) anyway and that was not my experience.
So, I guess what I'm saying here is: Are you really sure it's demanded and then ignored if you turn it off from the administrative controls? Because that was not my impression.
It's been a little while, but I believe at the time you'd get a CTAP/CBOR MakeCredentialRequest, the browser would ask you to confirm that you allow MS to see the make and model of your security key, and it would send the response to a Microsoft VerifySecurityInfo API.
If you refused to provide make and model, IIRC you would fail the check whether enforcement was enabled or not. Then if enforcement was enabled and your AAGUID didn't match the list, you would see a different error code.
Either way, you're sending over an attestation. They understandably forbid attestation format "none" or self-signed attestations. It's possible that this has changed, but the doc page still seems to say they won't accept a device without a packed attestation, it's only that the AAGUID check can currently be skipped.
Passkeys are in their infancy. You don't go about rolling out such patterns when most users haven't even switched yet and big players like Apple are still resisting attestations (last time I checked). The problem is that the feature is there and can be (ab)-used in this way, so it should be rejected on principle, irrespective of whether it's a problem right now.
I understand the value of attestations in a corporate environment when you want to lock down your employees' devices. But that could simply have been handled through a separate standard for that use case.
At the very least the spec should be painstakingly insistent on not requiring attestation unless implementors have really thought and understood the reasons why they need the security properties provided by attestation in their particular use case. And that it has to be something more meaningful than “be more secure this way” as security is not a rating (even though security ratings exist) but a set of properties, and not every possible security guarantee is universally desirable (please correct me if I’m wrong here, of course), and at least some are not without downsides. Maybe even strongly recommend library authors to pass the message on.
I agree, but unfortunately the spec authors are already going out and dangling possible bans in front of projects who implement Passkeys in more user-friendly ways:
https://github.com/keepassxreboot/keepassxc/issues/10407
> To be very honest here, you risk having KeePassXC blocked by relying parties
But having a choice about how you store your credentials shouldn't depend on the good faith of service providers or the spec authors who are doing their bidding anyway. It's a bit similar to sideloading apps, and it should probably be treated similarly (ie, make it a right for users).
There's a tension here between "user freedom" and a service wanting to make sure that credentials that it trusts to grant access to stuff aren't just being yolo'd around into textfiles on people's dropboxes.
People forget that one of the purposes of authentication is to protect both the end user and the service operator.
Sure, but as long as the fallback for account recovery is sending a reset email or sms (both of which are similar or worse than yoloing textfiles on dropboxes), that's a very tough argument to make in good faith.
I agree that account recovery isn't the best. But just because that sucks doesn't mean there's zero value in improving credentials.
What people do on their own computer is none of the service's business.
It is if it puts the service at risk.
This attitude has got to stop. Is it not enough that there's no customer service and it's almost impossible to sue these companies thanks to arbitration clauses? Now they need to have control over our computing to keep themselves safe? And how many recorded incidents of losing an account because someone had their "password in a text file" are even out there? The most common scenarios one hears about are either phishing or social engineering.
Do you think someone running a service that's under constant denial-of-service attacks would be sympathetic to the argument that "What people do on their own computer is none of the service's business"?
Pretty much every service out there has "don't share credentials" in their ToU. You don't have to like it, but you also don't have to accept the ToU.
Note the scare quotes around user freedom. Perhaps user freedom is a notorious fake issue, a bizarre misconception, or an exotic concept that nobody understands.
I don't know what "scare quotes" are. They're just regular quotation marks, because I'm quoting.
Sure, I stand corrected, you "don't know" what I'm talking about.
Literally no idea.
My point was that freedom is not an absolute, it's balanced against other freedoms. It's hard to tell whether you agree with that or not.
What does Microsoft stand to lose if someone steals my passkey for Outlook from a text file I yolo'd into a Dropbox?
The exact point of passkeys is to remove all rights from users.
Ensuring it's not possible for remote attackers to easily steal users passkeys is not "removing all rights" for someone. It is setting a security bar you have to pass. One user's poor security can have negative effects on not just them but the platform itself.
You don't need attestation to allow users to secure their passwords.
You don't, but with one services have a better guarantee that they are.
You’re falling for the exact “better security” fallacy I was trying to warn about. Security is not a rating, “better security/guarantee” is not a really meaningful phrase on its own, even though it’s very tempting to take mental shortcuts and think in such terms.
Attestation provides a guarantee that the credential is stored in a system controlled by a specific vendor. It’s not “more” or “less” secure, it’s just what it literally says. It provides guarantees of uniformity, not safe storage of credentials. An implementation from a different vendor is not necessarily flawed! And properties/guarantees don’t live on some universal (or majority-applicable) “good-to-bad” scale, no such thing exists.
This could make sense in a corporate setting, where corporate may have a meaningful reason to want predictability and uniformity. It doesn’t make sense in a free-for-all open world scenario where visitors are diverse.
I guess it’s the same nearsighted attitude that makes companies think they want to stifle competition, even though history has plenty of examples how it leads to net negative effects in the long run despite all the short term benefits. It’s as if ‘00s browser wars haven’t taught people anything (IE won back then - and where is it now?)
>You’re falling for the exact “better security” fallacy
How is it a fallacy? The rate of account compromises is a real metric that is affected by how good the security around accounts is.
I've tried to explain it in my comment above.
Yes, the rate of account compromises is a metric we can define. But attestation doesn't directly or invariably improve this metric. It may do so in some specific scenarios, but it's not universally true (unless proven otherwise, which I highly doubt). In other words, it's not an immediate consequence.
It could help to try to imagine a scenario where limited choice can actually degrade this metric. For example, bugs happen - remember that Infineon vulnerability affecting Yubikeys, or Debian predictable RNG issue, or many more implementation flaws, or various master key leaks. The less diverse the landscape is, the worse the ripples are. And that's just what I can think of right away. (Once again, attestation does not guarantee that implementation is secure, only that it was signed by keys that are supposed to be only in possession of a specific vendor.)
Also, this is not the only metric that may possibly matter. If we think of it, we probably don't want to tunnel vision ourselves into oversimplifying the system, heading into the infamous "lies, damned lies, and statistics" territory. It is dangerous to do so when the true scope is huge - and we're talking about Internet-wide standard so it's mindbogglingly so. All the side effects cannot be neglected, not even in a name of some arbitrarily-selected "greater good".
All this said, please be aware that I'm not saying that lack of attestation is not without possible negative effects. Not at all, I can imagine things working either way in different scenarios. All I'm saying that it's not simple or straightforward, and that careful consideration must be taken. As with everything in our lives, I guess.
Services, by definition, serve. Why should we, the users, care about their guarantees?
Because users want the services they use to be good. They don't want to be sent phishing links from their friend's account that was hijacked by attackers.
I had a meeting with a public servant this morning. He is part of an organization that promotes multi-factor authentication and publicly endorses the view that users are stupid.
The meeting was about him unable to test the APK of the new version of their mobile app. He felt embarrassed, his mobile phone is enrolled in the MDM scheme that disallows side-loading of apps.
What I am trying to say is that assuming users are stupid carries a non-negligible risk that you will be that stupid user one day.
The solution here sounds like having a separate development device that is used to sideload test versions of the app. The idea is that devices may require different levels of security depending on how much can be accessed from that device.
Reducing passkeys to the security level of passwords is not just "making something user friendly". It's undoing all of the hard work everyone else in the ecosystem is putting into making authentication more secure.
Passkeys have several advantages over passwords but not all of them rely on UX controls. They are, after all, public-private keypairs and the private part is never shared during authentication. The wider web never adopted PAKEs so passwords are still sent verbatim over the (TLS-protected) wire.
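The challenge-response shape described here can be sketched in a few lines. Caveat: real passkeys sign the challenge with an asymmetric key (e.g. ES256), so the server only ever holds a public key; HMAC over a shared secret stands in below purely to keep the sketch dependency-free. The protocol shape is the same: the server sends a fresh challenge, the client returns a proof bound to the site, and the secret itself never crosses the wire.

```python
import hashlib
import hmac
import secrets

class Authenticator:
    def __init__(self):
        self._secret = secrets.token_bytes(32)  # never leaves the device

    def register(self) -> bytes:
        # With a real passkey this would hand over a *public* key instead.
        return self._secret

    def respond(self, challenge: bytes, rp_id: str) -> bytes:
        # Binding the response to rp_id is what kills phishing: a proof
        # produced for "bad.example" verifies nowhere else.
        msg = rp_id.encode() + challenge
        return hmac.new(self._secret, msg, hashlib.sha256).digest()

class RelyingParty:
    def __init__(self, rp_id: str, registered_key: bytes):
        self.rp_id = rp_id
        self._key = registered_key

    def new_challenge(self) -> bytes:
        return secrets.token_bytes(32)  # fresh per attempt, so no replay

    def verify(self, challenge: bytes, response: bytes) -> bool:
        msg = self.rp_id.encode() + challenge
        expected = hmac.new(self._key, msg, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)
```

Unlike a password, a response captured in transit is useless for the next login (the challenge changes), and a response coaxed out for a phishing domain fails verification at the real site.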
With password managers passwords are not reused which avoids this problem already.
Not reusing passwords across sites greatly limits the blast radius but verbatim password exchange still carries its own risks. The widespread adoption of TLS addressed most of the issues, as I alluded to already, but there are still insider threats, MITM phishers, and infrastructure compromises from time to time.
How exactly is this "reducing the security level to those of passwords"? For example: you can't use a passkey on attacker's web site even if you have a plaintext copy of the private key.
I'm not following. The issue is about it being used for the site the private key is for. The attacker's site is irrelevant here.
Apple hasn't been particularly resistant to offering device attestation. The DeviceCheck / App Attest system has been offered since iOS 11 released in 2017. https://developer.apple.com/documentation/devicecheck
I assume they mean attestation in the webauthn/passkey specs specifically
Systems are usually more open while they are trying to onboard users than they will be once the moat has been established.
We have already been through this with many services suddenly demanding that you give them your phone number "for security".
They're not going to start requiring them until they've phased out non-passkey login. But at that point it will be too late.
I don't know why people don't see this coming: very obviously once Passkeys are everywhere, it'll become "we're requiring attestation from approved device bootloaders/enclaves" and that'll be your vendor lock-in, where it'll be just difficult enough that unless you stick with the same provider's phone, you might lose all your passkeys.
> very obviously once Passkeys are everywhere it'll become "we're requiring attestation from approved device bootloaders/enclaves"
This is far from very obvious, especially given that Apple have gone out of their way to not provide attestation data for keychain passkeys. Any service requiring attestation for passkeys will effectively lock out every iPhone user - not going to happen.
If there's no intention of doing this, it should be removed from the protocol. "I promise we'll never use this feature, so long as you implement it" isn't very convincing.
Not all people who want to replace passwords are running services available to the general public.
There are a bunch of service provider contexts where credential storage attestation is a really useful (and sometimes legally required!) feature.
Great, they can use standards that aren't targeted at running services for the general public. It seems like the requirements already diverged.
Drop attestation from passkeys, and I become a promoter. Keep it, and I suggest people stay away.
If it's not something anyone intends to use on public services, this should be uncontroversial. Dropping attestation simplifies implementation, and makes adoption easier as a result.
What makes you think that the Webauthn standards are "targeted at running services for the general public"?
> It seems like the requirements already diverged.
No, the requirements are _contextual_. This isn't a new idea.
The fact that sites targeted at the general public are prompting me to use them. Should websites avoid using passkeys and webauthn? Would you like to tell them that they're doing it wrong?
I missed a word out from my question. Let me try again.
What makes you think that the Webauthn standards are _only_ "targeted at running services for the general public"?
Yeah, so if you want me to trust them, the harmful parts need to get removed from specs used in public contexts.
I would love to use public key cryptography to authenticate with websites, but enabling remote attestation is unacceptable. And pinky swears that attestation won't be used aren't good enough. I've seen enough promises broken. It needs to be systematic, by spec.
Passwords suck. It's depressing that otherwise good alternatives carry poisonous baggage.
If you make something possible, it will be used.
> If you make something possible, it will be used.
Sure, but that's not without tradeoffs. I come back to:
> Any service requiring attestation for passkeys will effectively lock out every iPhone user - not going to happen.
And I come back to: if it would never work, why not drop support? "We pinky promise" is just not good enough.
> if it would never work, why not drop support?
Because passkeys are designed to replace passwords across multiple different service contexts, that have different requirements. Just because there's no reason to use it for one use case doesn't mean it's not actually useful in a different one. See things like FIPS140 (which everyone ignores unless they're legally required not to).
Can you sketch out for me the benefit of a public-facing service deciding to require passkey attestation? What's the thought process? Why would they decide to wake up and say "I know, I'm going to require that all of my users authenticate just with a Yubikeys and nothing else"?
> Can you sketch out for me the benefit of a public-facing service deciding to require passkey attestation? What's the thought process?
A misguided administrator is very likely to think "They can't use a malicious device to access our service".
What's the benefit for a private service?
Is there a difference? It's a field in the response payload that nobody is filling out except the corps that need it. Would it make you feel better if they moved it to an appendix and called it an optional extension?
As long as it required an extension and extra application.
I should need to install an enterprise authenticator app, which speaks webpki-enterprise, if you want to enable that shit.
> Would it make you feel better if they moved it to an appendix and called it an optional extension?
That kind of thing can make a huge difference once this standard starts becoming e.g. required for government procurement.
Great. At that point there will be a real market niche for people who (can, want, might) think a bit different.
> Do you have some examples where people actually require attestation in 3rd party facing systems?
Austria's governmental ID is linked to 5 approved tokens only.
> Do you have some examples where people actually require attestation in 3rd party facing systems?
That's not the right question. The right question is "what companies would be using passkeys if there was attestation to vouch for their security". To answer that question, you might look at the answer for a similar one about X509: "would we be doing banking over http if X509 didn't have attestation?".
The Fido2 folks really really want things to be so secure and centralized, with so little user freedom, and they want to use attestation to do it.
Here's a Fido2 member (Okta) employee saying "If keepass allows users to back up passkeys to paper, I think we'll have to allow providers to block keepass via attestation." https://github.com/keepassxreboot/keepassxc/issues/10407#iss...
All because passkeys backup is deemed "too unsafe and users should never be allowed that feature, so if you implement it we'll kick you out of the treehouse."
The authoritarian nature of passkeys is already on full display. I hope they never get adopted and die.
Those quotes aren't in your source.
Hi, since you mentioned me, that's not what was said and putting it in quotes as if I did is really inappropriate.
I'll post the same response I replied to other on a different thread:
Wild that you (and a few others) continue to make these accusations about me in these comments (and in other venues).
1) I've been one of the most vocal proponents of synced passkeys never being attested to ensure users can use the credential manager of their choice
2) What makes you think I have any say or control over the hundreds of millions of websites and services in the world?
3) There is no known synced passkey credential manager that attests passkeys.
tl;dr attestation does not exist in the consumer synced passkey ecosystem. Period.
They paraphrased what you said in the thread, but I don't think it's much of a misrepresentation.
You may have "been one of the most vocal proponents of synced passkeys never being attested to ensure users can use the credential manager of their choice", but as soon as one such credential manager allows export that becomes "something that I have previously rallied against but rethinking as of late because of these situations".
There may not currently be attestation in the consumer synced passkey ecosystem, but in the issue thread you say "you risk having KeePassXC blocked by relying parties".
The fact that that possibility exists, and that the feature of allowing passkeys to be exported is enough to bring it up, is a huge problem. Especially if it's coming from "one of the most vocal proponents of synced passkeys never being attested", because that says a lot about whoever else is involved in protocol development.
You should really re-read the entire discussion. It wasn't about passkeys being able to be exported. It was specifically about clear text export.
> The fact that that possibility exists,
The possibility does not exist in the consumer synced passkey ecosystem. The post is from a year and a half ago.
A year and a half ago doesn't really matter; that this was ever even a concern from the industry, something that the industry could make happen at all, or even just was thinking about doing at some point in the past, poisons the entire effort. In a world where password+TOTP already exists, requires almost no hoops, no dependencies, and is incredibly secure vs basic password flows, it's no wonder that folks remember discussions about curtailing user freedom around a new authentication pattern which already was less convenient, offers less user control, and further centralizes infrastructure in the hands of a few major brokers of technological power.
Until we have full E2E passkey implementations that are completely untethered from the major players, where you can do passkey auth with 3 raspberry pi's networked together and no broader internet connection, the security minded folks who have to adopt this stuff are going to remember when someone in the industry publicly said "if you don't use a YubiKey/iPhone/Android and connect to the internet, ~someone~ might ban you from using your authenticator of choice."
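For scale, the password+TOTP baseline mentioned above really is tiny: a complete RFC 6238 generator fits in about a dozen lines of stdlib Python, fully offline, with no vendor, no attestation, and nothing anyone can ban. (Real deployments add base32 secret handling and a small verification window, omitted here.)

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp=None, step=30, digits=6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically
    truncated (RFC 4226) to a short decimal code."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = struct.pack(">Q", timestamp // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Against the RFC test secret `b"12345678901234567890"` at timestamp 59 this yields `"287082"`, matching the published RFC 4226/6238 vectors.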
> Until we have full E2E passkey implementations that are completely untethered from the major players, where you can do passkey auth with 3 raspberry pi's networked together and no broader internet connection
This is already possible today. And since it's a completely open ecosystem, you can even build your own credential manager if you choose!
I don't believe it is a misrepresentation, you are bullying a project for letting users backup their own passkeys.
>which would allow RPs to block you, and something that I have previously rallied against but rethinking as of late because of these situations).
This is exactly why we need truly open standards, so people who believe they are acting for the greater good can't close their grubby hands over the ecosystem.
Seems very argumentative for somebody who's just saying there's no issue.
We've had massive problems with moving to passkeys (browser based) at our company and moved back to an app-based authenticator. Everyone is accepting of the authenticator app or uses a yubikey.
What were those "massive problems"?
Re-imaged, lost, or badly updated PCs wiping out all the saved passkeys, leaving people locked out of all accounts during off-campus sales or design meetings.
Making staff look like idiots in front of clients is a resume-generating-event.
Yeah, 'availability' is a huge pillar of computer security that many people forget exists.
I'm not the OP, but I expect it's the same issues that have stopped me from using passkeys so far.
His reply does give one aspect of it: passkeys are fragile. To be secure, they can't be copied around or written down on a piece of paper in case you forget, so when the hardware they are stored on dies, or you lose your Yubikey, or, as he described, the PC is re-imaged, all your logins die. That will never fly, and it's why passkeys are having a hard time being adopted despite being better in every other way.
Passkeys' solution to that is to make them copyable, but not let the user copy them. Instead someone else owns them, someone like Google or Apple, and they will do the copy to devices they approve of. That will only be to devices they trust to keep them secure, I guess. But surprise, surprise, the only devices Apple will trust are ones sold to you by Apple. The situation is the same for everyone else, so as far as I know Bitwarden will not let you copy a Bitwarden key to anyone else. Bitwarden loudly proclaims it lets you export all your data, including TOTP - but that doesn't apply to passkeys.
So, right now, having a passkey means locking yourself into a proprietary company's ecosystem. If the company goes belly up, or Google decides you've transgressed one of the many pages of terms, or you decide to move to the Apple ecosystem, you again lose all your logins. And again, that won't fly.
The problem is not technological, it's mostly social. It's not difficult to imagine an ecosystem that allows limited and secure transfer and/or copying of passkeys. DNS has such a system, for example. Anyone can go buy a DNS name, then securely move it between registrars. There could be a similar system for passkeys.
Passkeys have most of the bits in place. You need attestation, so whoever is relying on the key knows it's secure. The browsers could police attestation as they do now for CAs. We have secure devices that can be trusted not to leak passkeys, in the form of phones, smartwatches, and hardware tokens. But we don't have a certification system for such devices. And what we don't have is a commercial ecosystem of companies willing to sell you safe passkey storage that allows copying to other such companies. On the technological front, we need standards for such storage, standards that ensure the companies holding the passkeys for you couldn't leak the secrets in the passkeys even if they were malicious.
We are at a frustrating point of being 80% of the way there, but the remaining 20% looks to be harder than the first 80%.
i'm also curious to hear what issues you faced!
Why would BigTech care about the dozens of users using an open source password manager? What’s their gain from preventing these people from logging in? They love money and don’t care about user freedom, sure. But they’ve shown no evidence of hating user freedom on principle.
Every time I’ve seen them actually attack user freedom, there was an embarrassingly obvious business angle. Like Chrome’s browser attestation that was definitely not to prevent Adblock, no sir.
Because they'd actively have to make their proprietary passkey systems interoperable with password managers. This is fail-closed, not fail-open: if they truly didn't care, there'd also be no incentive for them to implement support.
But I fear it's worse. Based on how past open standards played out, I find it believable they do care - that there won't be an open ecosystem of password managers.
> But they’ve shown no evidence of hating user freedom on principle.
Yes, they did, just see Microsoft's crusade against Linux and the origin of the "embrace-extend-extinguish" term.
They already failed then. All sides (browser->website and browser->passkey holder) of passkeys are open standards. They already don’t restrict passkeys from e.g. open source apps they have no control over, for both Google accounts and any site on Chrome. Webauthn “fails open” by default in the sense you’re indicating; if you don’t check the attestation, any app or device made by anyone can hold a passkey. I haven’t encountered or heard of anyone restricting passkey apps/hardware outside of business-managed employee accounts.
I recommend reading the MDN docs on Webauthn, they’re surprisingly accessible.
> Yes, they did, just see Microsoft's crusade against Linux and the origin of the "embrace-extend-extinguish" term.
The whole point of the trial that term came from was that Microsoft explicitly saw Linux as a material threat to their business. What threat are Google quashing by preventing you from using passkeys they don’t control?
>Why would BigTech care about the dozens of users using an open source password manager?
Because big tech loves control. Just because you can't see the angle yet, it doesn't mean there isn't one now, or won't be one later. It has been shown time and time again that they will take all the freedom away from you that they can.
What instance have you seen where BigTech opted for control with no monetary incentive?
There is already an example of Microsoft selling passkeys with their own "secure (tm)" stamp on them, and not accepting anything else just a few comments down.
Even if there wasn't already an example, it's easy to turn control into a revenue stream at a later time.
That is for their enterprise SaaS, and has an obvious profit motive (I.e. bundling). Do you think Chrome is going to start charging for using their passkey storage and then kick all the other apps off Chrome?
> Even if there wasn't already an example, it's easy to turn control into a revenue stream at a later time.
I think you’ll have to justify or qualify this a bit. If Google forces every website on Chrome to have a red background, how do they turn that control into a revenue stream later on?
Saying "oh that's enterprise" is just moving the goal posts.
Chrome has already started kicking off extensions, see ublock.
I can't divine the future about how they will further their income streams.
No it’s not. My goalpost from the beginning was “show me an example where there wasn’t a clear monetary incentive for restricting user freedom”. That one has a monetary incentive (make our paying customer for product X also buy product Y).
As for blocking things that block ads; if you can’t see the monetary incentive for Google there then I don’t know what to tell you.
I didn’t ask you to divine the future. I said “I’ve not seen them do X without trying to get Y” (a statement about the past), and you still haven’t given me a remotely credible example.
>Why would BigTech care about the dozens of users using an open source password manager?
I agree, why would BigTech care about those dozens of users. Screw those guys, they can use our password manager or they can get lost, we don't need them!
They already let the open source password managers work just fine with every facet of passkeys. Why would they reverse this now, was my point.
> Why would BigTech care about the dozens of users using an open source password manager?
Bots using a custom password manager to share logins.
If all you want is to make a bot that can use passkeys automatically, add a transistor between your Yubikey's touch button and GND. When you turn the transistor on, the capacitive sensor is activated.
Now the Yubikey is just an API you can call, websites cannot tell the difference. You can't export keys, but a bot can add new keys after using existing keys to log in.
this doesn't work on stolen aws accounts though /s
You can proxy all the underlying USB communications to a physical device. Allowing attestation in the spec was not an anti-bot measure.
I'm not sure I understand all the opposition expressed in this thread about device attestation. Can someone explain it to me?
I thought Apple decided not to utilize the attestation field, is this not true?
yeah, IMHO the design was messed up by a few very influential companies "over-fitting" it for their company-specific needs
but I don't think attestation per se is bad; if you are an employee of a company and they provide you the hardware and have special certification requirements for it, then attestation is totally fine
at the same time normal "private" users should never be exposed to it, and in most situations where companies do expose users to it (directly or indirectly) it's often not much better than snake oil if you apply a proper threat analysis (like allowing banking apps to claim a single app can provide both the login and the second factor required by law for financial transactions; except if you do a threat analysis you notice the main threat to such an app is a malicious privilege escalation, which tends to bypass the integrity checks anyway)
But a lot of the design around attestation looks to me like someone nudged it in a direction where "a nice enterprise feature" turns into "a system to suppress and hinder new competition". It also IMHO should never have been in the category "supported by passkeys" but rather "supported by enterprise passkeys only".
Though let's also be realistic: the degree to which you can use IT standards to push consumer protection is limited, especially given that standards are made by companies which foremost act in their financial interest; hence why working consumer protection legislation and enforcement is so important.
But anyway, it's not just the specific way attestation is done; the general design also has dynamics that push toward consolidation on a few providers, and elements which strongly push for "social login"/SSO instead of a login per service/app/etc., i.e. it also pushes for consolidation on the login side.
And if you look at some of the largest contributors, you find:
- those which benefit a ton from a consolidation of login into a few SSO providers
- those which benefit from a different kind of consolidation (consolidation of password managers) and have made questionable blog posts pushing people to store not just passwords but also 2FA in the same password manager, even though that removes one of the major benefits of 2FA (keeping the password manager from being a single point of failure)
- those which benefit a ton if it's harder for new hardware security key companies, especially ones with an alternative approach to building HSKs
and somehow we ended up with a standard which "happened" to provide exactly that
eh, now I sound like a conspiracy theorist. I should probably clarify that I don't think there had to be some nefarious influence; different companies each over-fitting the design to their own use case would happen to achieve the same result, and that is plausible to have happened by accident.
>but I don't think attestation per se is bad; if you are an employee of a company and they provide you the hardware and have special certification requirements for it, then attestation is totally fine
Perhaps I'm missing something, but I do think hardware attestation per se is bad. Just look at the debacle of SafetyNet/Play Integrity, which disadvantages non-Google/non-OEM devices. Hardware attestation is that on steroids.
As for corporate/MDM managed environments, what's wrong with client certificates[0] for "attestation"? They've been used securely and successfully for decades.
As for the rest of your comment, I think you're spot on. Thanks for sharing your thoughts!
[0] https://en.wikipedia.org/wiki/Client_certificate
Your style of thinking is exactly why Linux never became a leader in desktop OSes, and why we're still dealing with the most ridiculous tech debt and complexity in OSS tooling to date. You're obsessed with fake problems that have no bearing on real people. When grandma does indeed lose all her money because some prick phished her password away, I would love to watch you explain how that's actually better than BigTech taking away user freedoms.
This argument is ridiculous and purposefully inflammatory. The issue at hand is the requirement for client attestation while using passkeys. So in that light, can you describe for us the scenario in which grandma, who is undoubtedly using passkeys on an iPhone or an Android, loses all her money simply because someone, somewhere else is using a passkey without attestation? You can't, because the vendor lock-in created by attestation doesn't meaningfully increase grandma's security. Rather, it exists (outside the enterprise scenario) primarily as an anti-competitive tool to be wielded by the major players.
Passkeys could have been an overall boon to society. But with attestation restricted to a set of corporate-blessed providers it is a Faustian bargain at best.
> The issue at hand is the requirement for client attestation while using passkeys.
There is no attestation in the consumer synced passkey ecosystem. Period.
Can you say "There will be no attestation in the consumer synced passkey ecosystem. Period."? That seems to be the concern, not what exists today.
Ecosystems are made up of hundreds of thousands of organizations, billions of devices, and billions of users.
How do you expect a single person to be able to make an authoritative statement like that?
You're the one dismissing real problems like "lose all passkeys when you lose your phone".
That is not a problem that GP brought up. In fact GP claims it's not a big problem.
> The problems of Passkeys are more nuanced than just losing access when a device is lost (which actually doesn't need to happen depending on your setup).
That doesn’t happen when you use Apple’s passwords ecosystem or 1Password. The backing databases are synchronized between devices.
And everyone knows that abuelitas in the global south, as a rule, own iPhone 16s and subscribe to 1Password.
There's no need to be snippy.
Those are the solutions I'm familiar with; there may be others. If Android and Windows don't already solve this problem in similar ways--which they might!--it sounds like an open opportunity for them.
Edit: sure enough, Android supports it: https://support.google.com/chrome/answer/13168025?hl=en&co=G...
As does Windows: https://blogs.windows.com/windowsdeveloper/2024/10/08/passke...
>"I’d rather granny needs to visit the bank to get access to her account again, than someone phishes her and steals all her money."
More like abuelita gets robbed at gunpoint and made to unlock and clear out her bank account, then has no recourse at home because her device was taken. I live in a third world country and even 2FA simply isn't viable for me due to how frequent phone robberies are. I've had to do the process once and it was a nightmare, whereas with passwords I can just log into Bitwarden wherever and I'm golden
A key part of the recent push for passkeys has been cross device syncing with your Google / Apple / whatever password manager account, so you end up in the same situation: if you can log in to Bitwarden to access your passwords, you can log in to your password manager to access your passkeys.
> A key part of the recent push for passkeys has been cross device syncing with your Google / Apple / whatever password manager account, so you end up in the same situation: if you can log in to Bitwarden to access your passwords, you can log in to your password manager to access your passkeys.
Relying on Google/Apple is no better, with the stories of people losing access to their (Google in particular) account, and not being able to recover or let alone even reach a human at Google to begin with.
Why not have a public service for this, instead of relying on big tech that can just revoke your account for any number of ToS "violations" without recourse? The solution for "normies" should not be rely on and trust Google with your entire digital identity.
Getting the State involved is just a different, much worse threat actor than Google, though. From this discussion it should be evident how much more sovereignty passwords give you. If you want the State involved, it should regulate websites' policies on passwords, such as: no service shall be hostile to password managers (special character bans, short limits on length, no pasting), no service shall require regular password resetting (proven to worsen security).
State involvement may be better used in policing, too. Public repositories of leaked passwords (without usernames, of course) would do wonders, for example
I use a layered approach for passwords. If I don't trust the site and they're not getting my financial information, I'm glad to use Password1234%
Google frequently warns me that one of my passwords has been compromised, but I don't really care for those sites.
So then the State can see what services I've signed up for, when and where?
The State is always more difficult and dangerous to deal with than a private company.
"The State is always more difficult and dangerous to deal with than a private company."
Ridiculous.
Of course it is.
Google can ban me (really just one specific digital instance of me) from their services. The government can throw me in jail, take all my property, fine me whatever amount they want, etc.
The State is significantly less interested in your activities than Google, regardless of whatever hypothetical you'd care to spin.
a state has a monopoly on force, you've obviously never lived under a regime which actively wants to harm you.
Odds are neither have you.
You can use a third-party password manager to handle passkeys. I recommend Bitwarden personally.
> if you can log in to...
Please stop right there. I want a password manager that I fully control, and lives on my own infrastructure (including sync between devices). Not reliance on someone else's cloud.
Did people not realize they can save their 2fa token and just use that with a new authenticator?
I haven't used a phone 2fa forever, but it was a much better system than this "email me a code" BS.
I do something similar with KeePass because a lot of my 2FA is stored on my YubiKey. When I register with YubiKey I also register with a KeePass vault intended as a "break-incase-of-emergency". So rarely opened and with lots of security options set to max.
For a long time 2fa apps (other than Bitwarden and maybe some others) would lock you into the app and not let you export it. Websites don’t usually expose the text version of the code, just the QR.
I recently switched from Authy to 1Password for 2FA, requiring me to set up every single website's 2FA from scratch, and I found that every website I use provides the text version of the code. It's hidden behind a "having a problem scanning the code?" link. I didn't need to take a single screenshot of a QR code; I was able to save the text version for them all. Next time I switch, it'll be easy.
I've never found a TOTP site that didn't also have a "click to show the code" option. It's usually in small print at the bottom, but it's there.
It's easy to screenshot or physically print a QR code during setup.
There are desktop totp apps that will decode the QR code from a screenshot in the clipboard.
QR has trivial format and code is easily extractable.
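Indeed: the QR just encodes an `otpauth://` URI, and the secret is a plain query parameter. A small Python sketch of pulling it out (the example URI follows the common key-URI format; the label and issuer are illustrative):

```python
# Extracting the shared secret from a TOTP provisioning URI.
# TOTP QR codes encode an otpauth:// URI; once decoded with any QR
# reader, the base32 secret is just a query parameter.
from urllib.parse import urlsplit, parse_qs, unquote

def parse_otpauth(uri):
    """Return (label, secret, issuer) from an otpauth://totp/... URI."""
    parts = urlsplit(uri)
    assert parts.scheme == "otpauth" and parts.netloc == "totp"
    params = parse_qs(parts.query)
    label = unquote(parts.path.lstrip("/"))
    return label, params["secret"][0], params.get("issuer", [None])[0]

uri = "otpauth://totp/Example:alice@example.com?secret=JBSWY3DPEHPK3PXP&issuer=Example"
print(parse_otpauth(uri))
# ('Example:alice@example.com', 'JBSWY3DPEHPK3PXP', 'Example')
```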
Spot the developer who never had to set up anything for his mom.
Almost all (not you, steam) allow saying "I cannot take a picture" or "Enter manually"
But you're right, it's not perfect but has gotten better. Just in time to be of no use thanks to email BS.
>> Did people not realize they can save their 2fa token and just use that with a new authenticator?
What's 2fa token? Is that an AI thing? AI uses tokens. Or a crypto thing? Do you need one of them "nonfungible" tokens? And what's an authenticator? I have MS authenticator for work, but it uses 2 digit numbers, are those tokens?
Not sure if I'm missing a joke, but the 2fa token is a secret that you stick in your password manager and sync (or otherwise send) to other devices so that your 2fa is not bound to a particular device. My password manager lets me view the 2fa secret as if it were just another password.
Yes, I was joking. I'm not up on all the options and I'm an engineer who read HN. What chance does Joe public have of making sense of all these things?
2fa is two factor authentication. User+password is the first factor, and is a "something you know" check. The second factor is a "something you have" check. Like sending you an SMS code.
They exist so if someone watches over your shoulder while typing your password, they don't gain access to anything.
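For the curious, the whole "something you have" check fits in a few lines. A minimal RFC 6238 TOTP sketch using only Python's standard library (SHA-1 and 6 digits are the common defaults):

```python
# Minimal RFC 6238 TOTP: the "something you have" is a shared secret
# that both sides feed into HMAC along with the current 30-second window.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, at=None, step=30, digits=6):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC test secret (ASCII "12345678901234567890", base32 `GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ`), `totp(..., at=59)` yields `287082`, matching the published test vectors.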
I feel like this is a really strong justification for duress passwords. Register a duress password with your phone or bank account, and if you ever enter it, that system will take whatever actions you want - call the police with your location, display a fake balance of a few hundred dollars, switch to a fake email account, hide your crypto wallet app, whatever.
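A duress check is cheap to sketch: store two credential hashes and dispatch on which one matched. A toy Python illustration (the action names and passwords are made up for the example):

```python
# Toy duress-password check: two salted hashes are stored; entering the
# duress password "succeeds" but triggers an alarm action instead of a
# normal login. Action names here are purely illustrative.
import hashlib, hmac, os

def make_record(password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check(password, real, duress):
    for (salt, digest), outcome in ((real, "login"), (duress, "silent_alarm")):
        attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        if hmac.compare_digest(attempt, digest):
            return outcome
    return "reject"

real = make_record("correct horse")
duress = make_record("battery staple")
print(check("correct horse", real, duress))   # login
print(check("battery staple", real, duress))  # silent_alarm
print(check("wrong", real, duress))           # reject
```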
FYI, you can put a 2FA secret into Bitwarden and autofill the one-time passwords alongside the regular password. That would mitigate the impact of losing your phone.
I personally don't do this because I feel like it defeats the whole purpose of 2fa. If someone gets into your bitwarden account, now they have your passwords and can generate 2fa codes. Of course, if the alternative is just not doing 2fa then it's better than nothing but I'd still prefer an authenticator app or hardware key than putting them in bitwarden.
Getting into your bitwarden account should be at least as hard as getting into your authenticator app or stealing your hardware key, though, if you're using it as intended, so I think it's ok for 2FA
2FA keys are easily stolen from a desktop password manager by a malicious executable running in the background; stealing keys from a 2FA app on a phone requires running a malicious app on the phone.
That's why my bitwarden account is protected with 2FA! If an adversary has gotten into my bitwarden secrets, my second factor is already compromised.
And if I lose my phone, I only need to do the recovery flow with the printed codes for one account, rather than for all of my accounts.
Great, this is a universal solution. Let's all make it an integral part of our digital security, and in 5 years or so hope that bitwarden doesn't leverage it!
the good news is that you can self-host bitwarden pretty easily and so it doesn't have to be a hassle/risk
Grandma is self-hosting what???
I am going to be honest, Grandma is already compromised.
that's where you come in sonny
This.
Grandma, and Uncle Rob, and your cousins, and anyone else you have a long standing relationship with, can use your VaultWarden instance if you let them.
But! You now get to maintain uptime (Rob travels and is frequently awake at 3am your time) and make sure that the backups are working... and remember that their access to their bank accounts is now in your hands, so be responsible. Have a second site and teach your niece how to sysadmin.
FYI you can go `oathtool --totp -b "that secret code"` and never need a third party vendor again
Exactly. The only financial stuff on my phone is Google Wallet and I don't even live in a high threat area. The devices that can accept payment from Google Wallet are always in observed locations, it would be very hard for a mugger to use it maliciously. All the easy money transfer options are an attack surface I see no need to expose.
> whereas with passwords I can just log into Bitwarden wherever and I'm golden
Good luck. For some arcane reason, Bitwarden turned on email-based 2FA for my account last night and all of a sudden I'm locked out of my account for half a day. …mostly because I have greylisting enabled on my mail server, so emails don't arrive right away, but as it so happens I also had all my hardware stolen from me last weekend. Bootstrap is a real bitch.
> More like abuelita gets robbed at gunpoint and made to unlock and clear out her bank account, then has no recourse at home because her device was taken.
You are describing the current status quo, without passkeys. This is already possible.
Well, except maybe for the "without recourse" part, because there are some legal and policy avenues available for dealing with this situation.
The without recourse is the part that matters... With passkeys or 2FA she's at risk of having to wait a day or more to go to the physical location (if there even is one, digital banks are huge in Latin America), with passwords she can just check her notebook the same night and start the recourse through official channels. I know she could just call the hotline, but if 24hr customer service guy can get you in your account same night then the bank is too insecure anyways
> The without recourse is the part that matters...
Yes, and I'm saying that part isn't accurate either for the story you're portraying with passkeys or for the status quo. That's not how account recovery flows work.
With passwords, no account was even lost in the scenario for a recovery flow to start. An account recovery flow is only necessary because of the superfluous extra security, which will almost inevitably introduce more attack vectors than before (such as a social engineering attack through customer service) if the banks want to service customers like grandmas.
> With passwords, no account was even lost in the scenario for a recovery flow to start
Given how common mandatory SMS 2FA is for banks, if thieves stole your unlocked phone, they have stolen your account too.
Isn't the SMS just 1 factor, and for 2FA they will also need the other F (e.g. password)?
Relying on only SMS sounds like 1FA?
> Passkeys is the way to go. Password manager support for passkeys is getting really good.
I set up a passkey for github at some point, and apparently saved it in Chrome. When I try to "use passkey for auth" with github, I get a popup from Chrome asking me to enter my google password manager's pin. I don't know what that pin is. I have no way of resetting that pin - there's nothing about the pin in my google profile, password manager page, security settings, etc.
Passkeys are the pinnacle of bad UX. It just works, until the user tries to switch devices, accounts or platforms. The slogan of passkeys should be something like "I don't have a password, it usually just works, but now I changed X and it doesn't work anymore". Even worse is hardware-based 2FA built into smartphones (also FIDO), as you lose your phone in a lake and now you can't access anything anymore.
The way to go is an encrypted password manager + strong unique random passwords + TOTP 2FA. It's human-readable. Yes, that makes it susceptible to phishing, but it also provides very critical UX that makes it universal and simple.
Apple’s works fine, including when I’m logging on to my Windows machine. Opening the camera app is a little annoying, but I don’t have to do it frequently. 1Password works well too, and it runs on everything. There are open source options, but I can’t attest to their UX.
Apple's works fine until you don't have access to your apple devices.
That's fine, but Chrome has 67% market share, and the majority of people will pick the default option for passkeys if prompted. For passkeys to replace passwords it's got to be seamless and easily recoverable without compromising security.
> the majority of people will pick the default option for passkeys if prompted
Especially since Google doesn’t allow you to change your personal default which is what convinced me to go and switch all my accounts off of Google SSO
I’m not sure what you mean; I have multiple passkeys on different platforms for my Google account (and a few similarly important ones).
So we need to make a new open standard, and then somehow prevent Google from implementing it? Too badly they implemented TOTP too. I’m not sure what you’re proposing here.
Did you see a proposal? I'm merely pointing out that there's disastrously poor UX lurking in the #1 platform that users may encounter passkeys in. It's not ready to send out to normies without more work on it.
Yes, it really is a shame that Google Chrome has dominated the market since the very first browser was created.
Bitwarden is really good for passkeys, better than apple's password manager imo
I use protonpass and it’s great, carried across all my devices and browsers.
I really dislike how passkeys have generally been used. Once KeepassXC got proper support of them and in the browser plugin its been a bit more sensible. KeepassXC means I can transfer them between devices and its protected the same way my passwords are so no additional pins and logins I don't want, it solves a lot of the issues I have around them. Now its just a long random password.
I wouldn't have minded if we moved to a scheme like SSH logins with public and private keys I own either, that I can store securely but load as I please and again would work well with a local password manager.
KeepassXC's passkey integration has been excellent for me. No vendor lock-in is important to me.
passkeys are public/private keys. it's just a new pair for every login.
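Right, and the core of any such login is a challenge-response signature. A toy illustration with textbook RSA and deliberately tiny, insecure parameters, purely to show the shape (real authenticators use proper elliptic-curve keys and much more):

```python
# Toy challenge-response with textbook RSA: each registration creates a
# key pair, the server stores only the public key (e, n), and a login is
# just the client signing a fresh server-chosen nonce.
# Parameters are deliberately tiny and insecure; this is a sketch only.
import hashlib, secrets

p, q = 61, 53                      # toy primes; real keys are 2048+ bits
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def sign(message, d, n):
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message, signature, e, n):
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

challenge = secrets.token_bytes(16)  # server sends a fresh nonce
sig = sign(challenge, d, n)          # client signs with the private key
print(verify(challenge, sig, e, n))  # True: server checks with public key
```

Since the server only ever sees the public key and signatures over its own nonces, there is nothing phishable to replay elsewhere, which is the whole point.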
That is unfortunate, but that sounds more like a chrome problem than a passkey problem. You would have the same issue if chrome saved your password.
Passkey is a great example of how five kitchen chefs can't make scrambled eggs. Horrible user experience, terrible marketing, no mental model like "your phone is THE key," no tangible or even symbolic presentation of the key.
That’s a lot of anger without a substantial argument. For Apple users, for example, the user experience is very smooth and the mental model is “I use iCloud to store my passcode just like I use iCloud to store my passwords”. If you use 1Password, you’re changing iCloud for 1Password instead.
"You lost me the moment you mentioned iCloud". At least that's the way the majority of people I know react to this line of thinking. The "cloud" is still mysterious and complicated to a good number of people. Passwords are easy to understand.
Most Apple users are used to using their account for everything - that’s how they buy apps, use things like music or photos, etc. and, of course, passwords. Switching to passkeys doesn’t change that much other than being a bit faster.
Yeah that's right! If you simply say that this syncs to all my devices, it papers over, or abstracts if you will, the complexity of: secure enclaves/TPMs, symmetric sync keys wrapping asymmetrically encrypted passkeys, resident keys that support backup, keys that do NOT support backup, how biometrics are used, etc. etc.
With a password, I can write it down on a piece of paper and put it in my safe.
One of these systems is not like the other.
Most people use the cloud without even knowing it. If you instead say it’s seamlessly replicated among all your devices, that is a good enough explanation and conveys the benefits to customers.
the "the app tries to trick me into using the service of the company behind it so that they can consolidate the market" problem
it's not quite new; as a dumb example, depending on where in Android contacts you click on an address, it might force-open Google Maps (2/3 of cases) or (1/3 of cases) properly go through the intent system and give users a choice
stuff like that has been constantly getting worse with Google products, but it's not like Microsoft or Apple are foreign to it
Google password manager's pin?
On my Windows laptop that is Windows Hello PIN, not sure about other OSs. And it can be disabled.
> I'd rather granny needs to visit the bank to get access to her account again, than someone phishes her and steals all her money.
The problem is that I can physically show up at my local bank branch or at my job's IT helpdesk to get my account back, but I can't show up at the Googleplex or at Facebook's or Xitter's HQ and do the same. Device bound passkeys are very error prone for the latter scenario, since users will fail to account for that case.
To add, services account for that failure by introducing something worse: a customer service backdoor where you can get into an account with very weak or nonexistent authentication.
With Amazon's live chat, someone was able to get into my account by providing an address in the same city as the destination of my latest Amazon order.
You see this with 2FA since "sorry lol you've lost your account forever" isn't an option, and it's trivial for users to lose their 2FA key unlike, say, access to their email.
Services that use passwords for login need to do that too, because people lose passwords.
Even services that use login via emailed link need to do it because people do lose email access. Far too many people use the email provided by their ISP as their only email service, which can be very bad if they move to someplace that ISP does not serve or simply want to switch to another ISP in their current area.
The forgot-my-password email link has a customer support load very different from "I can't do 2fa because I lost my device".
And once you set up a customer service pipeline for it, you might accidentally create a backdoor that's far worse than forgot-my-password email verification: https://medium.com/@espringe/amazon-s-customer-service-backd...
Email account access is the closest thing we have to ubiquitous identity on the web. Users that truly lose access to their email account are in a catastrophic situation before they even think of whether they can access your service.
The solution is what's already happening, but throughly enforced: allow designated users to restore your access to your account.
Heh, that is kinda interesting and I've never heard of it before. What are some services that have this set up?
So, I guess you set up some "emergency users". And maybe if you lose access to your account, you get customer support to mark your account as lost which sends an email to the address that you have on file (in case it's an attack started by someone other than the user).
And I suppose if N days pass without any login, one of your emergency users can generate a credential that they can pass to you to recover your account?
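One way such a scheme could be built is Shamir secret sharing: split a recovery secret 2-of-3 among designated contacts, so any two together (but neither alone) can restore access. A toy Python sketch under that assumption (the field size and thresholds are illustrative):

```python
# Toy 2-of-3 Shamir secret sharing over a prime field: the secret is the
# constant term of a random degree-1 polynomial; each trusted contact
# holds one point on it, and any two points recover the constant term.
import secrets

P = 2**127 - 1  # a Mersenne prime; the field for polynomial arithmetic

def split(secret, k=2, n=3):
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 gives back the constant term.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

secret = secrets.randbelow(P)
shares = split(secret)
assert reconstruct(shares[:2]) == secret             # any two shares suffice
assert reconstruct([shares[0], shares[2]]) == secret
```

A single share leaks nothing about the secret, which is what makes it safe to hand to Uncle Rob.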
Apple accounts have had it for years. You can set up a legal successor if you die, and a couple of people who can vouch for you to regain access.
That and Apple will give you a very long one-time password meant to be printed that can restore access as well. This one is in a third undisclosed location for me.
> Passkeys is the way to go
No, at least not on its own. Let's not repeat the mistakes.
Password managers are the way to go, and ONLY FOR RARE EXCEPTIONS should we use dedicated MFA, such as for email accounts and financial stuff. And the MFA should ask you to set up at least 3 factors and require you to use 2 or more. And if it doesn't support more or less all factors, like printed codes, OS-independent authenticator apps, and hardware keys like YubiKey, then it should not be used.
We need to go further. If a service doesn't include 197 factors including blood samples, showing up at a physical location 50 miles from your home, and sending a picture of yourself in a specific posture, and doesn't require you to use at least 53 of them (determined randomly) to login, then it's insecure and should not be used.
That's the French postal service's "identité numérique".
Sounds like something the UK gov would love to implement, plus extra.
Such as finding a dinosaur fossil bearing your family clan's name.
And then we can compress all of that into a single piece of plastic and call it the Ident-i-eeze.
I mean, the Scots technically still use a rock to authenticate their king.
https://en.wikipedia.org/wiki/Stone_of_Scone
Passkeys are more like password managers, and less like MFA tokens, despite the fact that many passkey implementations can function as MFA tokens as well.
Bitwarden the password manager includes a full passkey implementation, which doesn't involve any MFA.
> Passkeys are more like password managers, and less like MFA tokens
No:
- I can always export and import all my passwords from/into my password manager
- My passwords always work independently of a password manager or any specific app/OS/hardware
That is not true for passkeys and makes them much more like tokens. Of course they don't have to be used in MFA, just like passwords.
I just exported my Bitwarden vault and the resulting .json file has my passkeys in it. I'm not going to try to test import, but if it doesn't work that would obviously be more "bug" than anything else. Clearly "export" is the high concern functionality and once exported, importing them is not a big deal.
This is only about your first paragraph, it doesn't affect your second.
Indeed. The Credential Exchange Protocol (CXP) is already being worked on, and all major vendors are planning to support it. There was also talk at Apple's WWDC 2025 about passkey-related APIs, including exporting them.
Unfortunately just because it's possible with Bitwarden doesn't mean it is always possible.
Are you saying that it's not always possible to import/export passkeys because you can manage them with some program that doesn't allow it, but the same is not true for passwords?
Counter-example: I can write a password manager that will not allow you to export/import passwords.
No, that's not what I meant.
There are cases where bitwarden doesn't work but chrome for example does. Easy to Google up.
For passwords however, I never heard of a case where a website only accepts passwords from a specific password manager - and how could they even do that right?
I don't think your reasoning holds. You say "I know situations where one passkey client works with some websites and not others, but I don't know situations where a website works with some clients and not others".
If the website accepts a password, then it can't prevent you from using the password manager you want. But if the website accepts FIDO2 passkeys, it's the same thing, isn't it?
> but I don't know situations where a website works with some clients and not others
For example: https://www.w3.org/TR/webauthn-2/#dictdef-authenticatorselec...
> If the website accepts a password, then it can't prevent you from using the password manager you want. But if the website accepts FIDO2 passkeys, it's the same thing, isn't it?
Unfortunately not...
If you like password managers, you'll love passkeys!
Passkeys are an interface between your password manager and a website, without all the fluff of filling or copy-pasting passwords.
No need to write like that. I know, understand, and have used passkeys for quite a while now.
I don't love them. I don't love passwords either.
But while I don't fear passwords, I fear passkeys. The reason is that they make the tech even more opaque. My password manager stops working, completely dies, or I can't use it anymore for some other reason? No problem, I can fall back to a paper list of passwords if I really have to. This transparency and compatibility is more important than people think.
Passkeys lack that. They can be an interface like you described, but only if everyone plays along and they can be exported. But since there is no guarantee (and in practice, they often cannot be exported either) they are not a replacement for passwords. They are a good addition though.
Unfortunately, many people don't understand that and push for passwords to be gone entirely.
I have yet to see passkeys used as a sole method of logging in. There's always a traditional username and password setup first. There's always a recovery code set up for the passkey. I have yet to see passkeys offered as the only means of MFA. Which means that your backup methods still work. You can use them for recovering your access. I see passkeys as an optional convenience. It works well for me by that measure.
I agree, but there is no guarantee that it will stay like that. In fact, there are many people who argue to completely get rid of passwords.
This would be an argument to support keeping the passwords, instead of pushing for not adding passkeys in the first place.
And I would agree with that argument.
Which is exactly what I said.
You're right, I misunderstood the context. I agree with you then :-).
What about server-generated passwords, like API keys? That would solve the main problem with passwords, namely, that people reuse the same weak password everywhere. I doubt it would be as popular as user-selected passwords, but I still wonder why no website has tried it.
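The idea is essentially what API-key issuance already does. A minimal sketch using only the Python standard library (the function name is mine, not from any particular site):

```python
import secrets

def issue_server_password(nbytes: int = 24) -> str:
    """Server-side credential generation: the user never picks the
    password, so it cannot be weak or reused across sites. They just
    save the returned string in their password manager."""
    return secrets.token_urlsafe(nbytes)

# 24 random bytes -> a 32-character URL-safe string, unique per account.
pw = issue_server_password()
```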
How is that different from a passkey's private key, apart from being less secure?
It's literally something like a long random API-key-style string, stored in the password manager.
Why not keep passwords AND passkeys? Most of the time I want to use passkeys, for various reasons, but if I lose my passkeys I can go back to my printed list of passwords.
Exactly! That is what I would like to have too and that is in fact how I currently use passkeys.
A passkey import/export standard is in the works. Once I know I can backup everything in a keepass database I'll be much happier.
True. Still, the difference is that with passwords, no one can stop you from "exporting" it. With passkeys, it could be changed, and the power for that lies in the hands of only a few vendors. It's still a bit concerning if they replace passwords forcefully.
Let me decide for myself what I must love.
I love password managers. I dislike passkeys. So clearly that's not the case.
Also without all that pesky privacy and choice of what you run on your own computer.
Password managers are those proprietary programs that you need to install, give full access to your computer, register an account and trust their word that your passwords are uploaded to the cloud securely? No thanks.
Also they are too complicated for an ordinary user. A physical key is much simpler and doesn't require any setup or thinking, and can be used on multiple devices without any configuration. And doesn't require a cloud account.
Password managers are both significantly simpler to use than just passwords and more secure.
Passwords have always been bad. The problem is that users can't remember them. So they rotate, like, 3 passwords.
Which means if fuckyou.com is breached then your bank account will be drained. Great.
On top of that, the three passwords they choose are usually super easy to guess or brute force.
With a password manager, users only need to remember one password, which means they can make said password not stupid. You can automatically log in too with your new super secure passwords you never need to see.
It's the perfect piece of software. Faster, easier, more secure, with less mental load.
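The mechanics behind that claim can be sketched in a few lines, assuming only the standard library (the iteration count is lowered for illustration; real managers use far stronger parameters plus authenticated encryption on top):

```python
import hashlib
import secrets

def vault_key(master_password: str, salt: bytes) -> bytes:
    # The one memorized password is stretched into an encryption key
    # that protects the vault of per-site secrets.
    return hashlib.pbkdf2_hmac(
        "sha256", master_password.encode(), salt, 100_000
    )

def new_site_password() -> str:
    # Per-site passwords are random and unique, so one breached site
    # leaks nothing reusable anywhere else.
    return secrets.token_urlsafe(16)

salt = secrets.token_bytes(16)
key = vault_key("correct horse battery staple", salt)
```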
uh no - a password manager is an open-source application you can compile and install yourself if you want. It's nothing more than a small specialised database with an Excel-like interface. Personally, I think that the argument that things are "too complicated for the average user" eventually gets you users that find breathing and sphincter function too complicated.
I’ve been observing this space for two decades and haven’t come across a single open-source password manager that actually works, is properly maintained, has an acceptable security track record, and comes with a similarly well-maintained browser extension that protects both my clipboard and myself from phishing.
I've been using Keepass for two decades and have never had a single issue. I would never recommend a browser plug in (too much attack surface area), and instead simply check the URL before having KeePass autotype. No clipboard.
I think you're rejecting good solutions out of hand.
Meanwhile...millions of users trusted LastPass. Twice.
> simply check the URL before having KeePass autotype.
I’m not going to rely on myself never making a mistake. I want a solution that protects me even during stressful moments where I have a lapse of judgement and forget to check.
I don't think fixing this at the browser-level is the right place. In general, I'm very vigilant, but I know I can be tricked. So I have a policy about not clicking links in emails from companies I already know the address for. I also aggressively right click / long tap links to examine the URL before opening.
In general, opening a malicious URL exposes the user to unnecessary risk, so the correct solution is not to assume the user has visited a malicious site (since that would already be high-risk), but rather to prevent opening of malicious URLs. The most obvious solution is to treat any untrusted content as questionable. So I very carefully examine every domain I visit - as I say to my kids: have a model about who owns the computer you're talking to. Domains matter.
Now, this works for me. I'm not cognitively impaired, I have high conscientiousness, probably from working in military and classified defense contexts way back when, but I'm not really sure to be honest, could just be my personality. But it works for me.
I get that you want that extra safeguard, but it's just not a dealbreaker for me, especially since I'm highly suspicious of browser add-ons and the security implications they bring in. I guess I'm just extremely selective about what add-ons I'll use.
If you're not using KeepassXC's browser plugin (or are using KeePassX, which -IIRC- never had a browser plugin), then its autotype feature will check the title of the window that has keyboard focus when deciding which entry to use. If one or more matches are found, it will [1] also ask you to confirm which entry you're about to have the software punch in. If no matches are found, it will alert you to that fact.
You might find the KeePassXC docs about the feature [0] to be informative.
If you're going to complain that all a phisher has to do to capture a password is create a website with the same title as the official one, then my reply would be something like "Duh. That's what the browser plugin is for.".
[0] <https://keepassxc.org/docs/KeePassXC_UserGuide#_auto_type>
[1] ...optionally, and on by default...
Not sure how you got the impression that I was unwilling to use a browser plugin.
I’m absolutely looking for a browser plugin. I would refuse to use an auto-type feature that only checks the window title instead of, as a browser plugin would do, the site’s domain.
I'm not sure how you got the impression that I had the impression that you're unwilling to use a browser plugin. I have absolutely no idea whether or not you're willing to use a browser plugin.
I was mentioning how auto-type worked because it's useful information for those who either are unwilling to use a browser plugin, or are like myself and simply have no need for one.
What about Bitwarden?
rpdililon mentioned KeePass. What have you (that is, Hackbraten) found wrong with the KeePassXC offshoot of it?
/me wonders if this is a "recommend me a nice open source, offline password manager" question in disguise.
I don’t remember why KeePassXC didn’t make my list last time I checked.
That was years ago, so I’m going to check it out again. Thanks for the pointer.
Update: One thing that stands out immediately is a confusing mess of three different projects, two of them unmaintained, which all call themselves KeePassX or KeePassXC, sometimes linking to each other’s documentation. How do I even tell I’m facing the correct KeePass(X(C)?)? project?
Yes, I’ll figure it out eventually but until then, it’s confusing. Also, if a password manager project needs to be forked over and over and over again (how can a holder of the keys to the kingdom possibly go MIA on three different occasions in basically the same project?), then does that tell us something about how the project is governed?
> How do I even tell I’m facing the correct KeePass(X(C)?)? project?
Well, [0] lists a single project called KeePassXC, with [1] as its homepage. Search engines list [1] and [2] as the top results for the query KeePassXC, for whatever that's worth. [3]
> Also, if a password manager project needs to be forked over and over and over again ... then does that tell us something about how the project is governed?
No?
KeePass is Windows-only software. So, some folks decided to write KeePassX, which ran on Linux, OSX, and Windows. They got bored of that after a decade or so, called it quits, and one of the preexisting forks [4] became the widely-used one.
> how can a holder of the keys to the kingdom possibly go MIA on three different occasions in basically the same project?
In addition to the history I wrote above, you are aware that KeePass is still receiving stable releases? According to [5], it looks like 2.59 was released just last month.
EDIT: Actually, where are you getting this "confusing mess of three different projects" from? When I search for "keepass", I get the official home pages for KeePass and KeePassXC as the top two results, the Wikipedia page, and then the Keepass project's SourceForge downloads page. When I search for "keepassx", I get the official homepages for KeePassX and KeePassXC, the wikipedia page, the KeePassXC Github repo, and an unofficial SourceForge project page for KeePassX.
[0] <https://keepass.info/download.html>
[1] <https://keepassxc.org/>
[2] <https://github.com/keepassxreboot/keepassxc/releases>
[3] And -because I'm a Linux user- not only do I have KeePassXC in my package manager, I also know that [1] is listed as its project homepage.
[4] ...which started like four years before KeePassX's final stable release...
[5] <https://sourceforge.net/projects/keepass/files/KeePass%202.x...>
Thanks for taking the time to follow up.
When I searched for `keepassxc`, my search engine ranked eugenesan/keepassxc [0] higher than keepassxreboot/keepassxc [1], so the former was the first that I’d visit. GitHub says that eugenesan/keepassxc is 2693 commits ahead of keepassx/keepassx:master, so I assumed that eugenesan/keepassxc was a legitimate and meaningful fork of keepassx/keepassx. Maybe I’m entirely mistaken, and I was just tricked by a blunder of my search engine and eugenesan/keepassxc is just a random person’s fork? (But then again, if it’s just a random fork, then why does it show up at the top, and why so many commits ahead of keepassx?)
To add even more to the confusion, not only is eugenesan/keepassxc unmaintained, it also points to www.keepassx.org (why?), which in turn says it’s unmaintained, too.
If I was just mistaken and eugenesan/keepassxc is really just a random fork, then my earlier allegations are all moot. Thank you for clearing this up, and also for clarifying that the other (legitimate?) KeePassXC was a preexisting fork (so it would have been difficult for them and possibly even more confusing to users if they had taken over the abandoned KeePassX project).
[0]: https://github.com/eugenesan/keepassxc
[1]: https://github.com/keepassxreboot/keepassxc
What search engine are you using?
I've tried DDG, Google, Bing, and Yandex. All of them rank official KeepassXC stuff in the top five results, and -with the exception of Bing- rank it above any other non-Wikipedia results. I didn't see this weird keepassx GitHub fork in the results from any of the search engines I tried.
> When I searched for `keepassxc`, my search engine ranked eugenesan/keepassxc [0] higher than keepassxreboot/keepassxc...
With the greatest of respect, I would expect someone who's sufficiently savvy to know what to do with a GitHub repo in their search result to also be sufficiently savvy to -at minimum- visit the homepage listed in the repo's About blurb and notice that [0] is the very first item in the list of "Latest News". I'd also expect that savvy someone to know to visit the repo's Releases page, notice that there are no published releases, and consider even more intensely that they might not be looking at the software they expected to see.
I can't explain why your search system is ranking this misleadingly-named GitHub repo so highly. AFAICT, no one with the repo owner's email address was ever involved in any public development on KeePassXC.
[0] <https://www.keepassx.org/index.html%3Fp=636.html>
> What search engine are you using?
I’m using Kagi. They say they rely on several third-party search indexes. I can’t see which one they are using for which particular search request. What I do know is that the backends are of varying quality. However, after years and years of using Google (back when their search was still good), I got used to the fact that if they return a GitHub project as a top search result, then that project was usually meaningful.
> With the greatest of respect, I would expect someone who's sufficiently savvy to know what to do with a GitHub repo in their search result to also be sufficiently savvy to -at minimum- visit the homepage listed in the repo's About blurb and notice that [0] is the very first item in the list of "Latest News".
Forks sometimes don’t update the About blurb that they inherit, and I think that that’s exactly what happened in the bogus repo.
> I'd also expect that savvy someone to know to visit the repo's Releases page, notice that there are no published releases, and consider even more intensely that they might not be looking at the software they expected to see.
In this case, however, the Releases section said “13 tags.” Some projects don’t use GitHub’s Releases feature at all, and rely only on Git tags. It’s sometimes difficult to spot.
>Password managers are those proprietary programs that you need to install, give full access to your computer, register an account and trust their word that your passwords are uploaded to the cloud securely? No thanks.
cf. pass(1)[0][1]
[0] https://www.passwordstore.org/
[1] No, it's not hosted in the cloud (i.e., on someone else's servers) and that's a good thing. It's FOSS and can be compiled for Android/iOS (and has been, see [2][3][4], at least for Android). The DB (just a GPG store) can also be shared across multiple devices.
[2] https://f-droid.org/packages/app.passwordstore.agrahn/
[3] https://play.google.com/store/apps/details?id=dev.msfjarvis....
[4] Not sure about iOS versions, I don't have any Apple devices.
> Passkeys is the way to go.
I wish there was a stronger differentiation between syncable and device-bound passkeys. It seems like we're now using the same word for two approaches which are very different when it comes to security and user-friendliness.
And yes, giving granny unsyncable passkeys is a really bad idea, for so many reasons.
> I wish there was a stronger differentiation between syncable and device-bound passkeys.
But there is no difference. I'd prefer if services just let me generate a passkey and left it entirely up to me how I manage it. Whoever set up granny's device should have done so with a cloud-based manager.
I think Google tries to make some confused distinction, or maybe that has more to do with FIDO U2F vs FIDO2. There you can add either a "passkey" or a "security key", but iirc I added my passkey on my security key so... yeah
Passkeys are a usability nightmare. No two experiences are ever the same. I have passkeys saved in 1Password and in Apple Passwords. I have a YubiKey. I have Duo on my work computer.
A common experience is Chrome telling me to scan a QR code. But I know this is not a legitimate method to sign in on any service _I_ use. I also never know WHY I'm being told to "scan this QR code". I scan it, and my phone also has no idea what to do with it! The site has decided, by not finding a passkey where it expects it, that it MUST be on my phone.
That's but one example of the horrible implementation, horribly usability, and horrible guidance various sites/applications/browsers/implementations use.
I don’t like passkeys. Before my process to login was:
- open website
- if not already logged in, log in to 1Password
- autofill password
- autofill TOTP
Now:
- open website
- if logged in to 1Password the Use Passkey usually shows up
- if not:
Passkeys would be great if they actually made anything simpler on a computer. They work fine on the phone, but that's not where I spend most of my time.
And if I'm not using a passkey, but the web site detects I'm using a passkey-compatible browser or password manager, the site takes over and tries to "sell" me a passkey anyway. No, I don't want it!
It’s also confusing whether I’m being prompted to use an existing passkey or being prompted to create a passkey.
Now that I’m so paranoid about this, and not remembering which sites I have them for, I always dismiss the passkey prompt, then have to click several more times to get to the password login and fill it in with my password manager.
I forget which site it is but there is one site I try to use with passkeys that somehow bypasses my BitWarden and rigidly insists on a passkey tied to Google and/or my phone, which I do not want. (My BitWarden stack is fully owned by me, as I self-host a VaultWarden instance, with daily backups of it, and I don't want my passkeys anywhere else.) That's definitely annoying.
Passkeys work very smoothly with Safari and Apple Passwords.
Apple Passwords is now sufficiently good to replace 1Password for me, and I’m slowly transitioning.
I don’t mind subscription models per se but there was something about subscription for your own passwords that made me refuse to jump the fence when 1Password switched to that model.
Would be a bit faffy if you’re a Chrome user.
It works fine until you dare to have TWO accounts for the same website. Safari will just randomly pick one of them and always try to log you in with that passkey every time you visit, and the interface for using a different one is really annoying.
Maybe I'm misremembering, but I feel like it gave me an option between two accounts recently?
Let me see if I can get it again
Apple handles it cleanly in Safari (you get a list of the accounts you're registered with on macOS, and iOS gives you the two most-recently-used accounts for that website with a button to reveal more).
The implementation in Chromium browsers (I use Arc, so I can't speak to Chrome itself) is basically a chunkier-looking 1Password.
Well if that’s what’s meant to happen, it does not happen for me. All I get is the same account over and over again that isn’t the one I want to log in with. No matter how many times I tap the little x and then select the account I want, carefully avoiding the gaze of Face ID which will automatically use the selected passkey if it spots me.
Ahhh I see. Typical Apple, honestly
I've never gotten passkeys to work on the Mac. Every time I try it with either Firefox or Safari, it says I need to log into iCloud, which I really don't want.
I stick with 1Password, because I don’t want my password manager to be part of the barrier to using other platforms.
I also have a bunch of stuff in 1Password that doesn’t have a home in Apple Passwords, which would be a problem.
And yes, Chrome with Apple Passwords is annoying. At work I’m forced to use Chrome for some things, and I’ve been dabbling with Apple Passwords. Every time I launch the browser I have to put in a code to link the extension with Passwords. It’s very annoying.
... or like most people, and not a Mac user.
Or anyone that thinks a monoculture is bad and that perhaps we shouldn't trust a single vendor with everything important.
That just sounds like you made a poor choice of password manager that doesn't put a priority on good ux...
1Password used to be decent until they enshittified about five years ago, decided to rewrite their app from scratch in Electron, replaced their support staff with non-technical staff who are unable to write any meaningful response to critical bug reports, and hired developers who allowed the app to degrade beyond recognition.
that says more about 1Password than about passkeys. With 1Password I often get "does nothing" when trying to autofill good old regular passwords
1. I don't get that with 1Password
2. If you get this often, why do you use 1Password, honest question.
Vendor lock-in and lack of alternatives.
1Password used to work decently well before 2020. Now I have ~ 2k items in 1Password, distributed among two accounts (work and personal). Additionally, my spouse and I have a shared 1Password vault via the Family plan.
There’s no way I’m going to migrate 2k items and two dozen devices to another vendor. If there were one that met my requirements to begin with.
Every vendor implements export and import. Why do you think you would need to manually migrate?
1Password has tons of features. No two vendors have exactly the same data model. Any of them might break on migration or worse, doesn’t exist on the target system.
For example, are my 2FA seeds going to migrate properly? How about the tags, attachments, sections, subsections, security questions and answers, inline Markdown notes, the HIBP integration, built-in overrides to fix known broken websites, workarounds I’ve learned for unfixed websites, shared vaults, recovering lost access to shared vaults, syncing, templates, custom integrations that I maintain [0], personal scripts, etc. etc.
Will it still be able to auto-fill into a web page? Into shitty, broken web pages? On Linux? On my Linux phone?
At the scale and depth at which 1Password is currently integrated into my spouse’s and my life, it’s difficult to consider migration anything less than a full weekend project.
I regret letting my spouse and myself lock into 1Password before it enshittified.
[0]: https://github.com/claui/aws-credential-1password
> “Click a link in the email” is a tiny bit better because it takes the user straight to the GOOD website, and passing that link to BAD is more tedious and therefore more suspicious.
"Click a link in the email" is really bad because it's very difficult to know that the mail and the link in it are legitimate. Trusting links in emails opens the door to phishing attacks.
How would a phishing attack against a website which doesn't use passwords, only magic links, be performed?
I know not to click links on random emails but comfortably click links on emails I initiated from a website.
How do you know the email comes from that website? There are known cases of phishing mails being sent when people expect a legitimate mail.
If someone hacks my account and starts ordering stuff on bol, it's not my problem but the company's, so I don't lose sleep over it.
The company doesn't care either, because fraud is just the cost of doing business - ease of ordering > security.
The website is abc.com, and the link in the email is abc.com - unless the product manager decided the link in the email should be track.monkey.exe/sus/path/spyware?c=behhdywbsncocjdb&b=ndbejsudndbd&k=uehwbehsysjendbdhjdodj or something 2x–3x longer.
The question was "How do you know the email comes from that website? "
And the answer was, I can find out if the email is from abc.com by looking at the link, which should also be abc.com
I don't click in "track.monkey.exe". I don't click tracking links. I pay a lot of money for my newsletter provider because I can turn off (most) tracking links.
The link in the email is a Mailchimp-wrapped tracking link with a gibberish URL. What now?
Or the email is rendered HTML, where the expected URL is used as the text for an anchor whose href is the malicious site.
Pro tip: Hover over the link and check the URL. Or look at the source.
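That hover check can even be automated. A sketch, using only the standard library, of flagging anchors whose visible text is a URL on a different host than the actual href (the hostnames here are made up):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class DeceptiveLinkFinder(HTMLParser):
    """Flag anchors whose visible text names a different host than their href."""

    def __init__(self):
        super().__init__()
        self._href = None
        self.deceptive = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")

    def handle_data(self, data):
        text = data.strip()
        # Only compare when the link text itself looks like a URL.
        if self._href and text.startswith("http"):
            shown = urlparse(text).hostname
            actual = urlparse(self._href).hostname
            if shown and actual and shown != actual:
                self.deceptive.append((text, self._href))

    def handle_endtag(self, tag):
        if tag == "a":
            self._href = None

finder = DeceptiveLinkFinder()
finder.feed('<a href="https://evil.example/login">https://abc.com/login</a>')
```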
I don't click the link. Simple.
Track someone else.
You do, but does the average user? Security's reliance on people's behaviour / knowledge / discipline should be minimal.
Yeah, I was frowning when I read that. It is not any better at all, not even a tiny bit.
> I’d rather granny needs to visit the bank to get access to her account again
Visiting the bank is fine. But who do you visit to recover your Gmail password?
For the record, if you're in the EU you can make a GDPR request to their Data Protection Officer - since it's your data that's being kept away from you, you have the right to at least a backup.
It can take months and it only guarantees a backup, not full access, but it's better than nothing.
Even if they provide you with a dump of their database records on you, you will not be able to recover your password from the salted iterated hash - PBKDF2, bcrypt, Argon2, or whatever other irreversible function they used to store it.
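A sketch of why that is, using salted PBKDF2 from the standard library (iteration count lowered for illustration): the stored value supports verification of a guess, never recovery of the original.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = b"") -> tuple[bytes, bytes]:
    # One-way: what the database stores lets the server check a guess,
    # never reconstruct the original password.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)

salt, stored = hash_password("hunter2")
```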
How would you prove to their data protection officer that you are the owner of that Gmail account?
In theory: they can ask for ID, sworn affidavit, or whatever other means their local laws determine to be valid. At the end of the day, proving that someone owns something is not a new problem. I've also seen "here's some evidence that I know what the contents of the account are, my legal name matches the account and my legal address matches some emails there".
In practice: in my case, anecdotally, they just did it. For some reason owning the backup email account was not enough for the automated workflow to unlock my account, but sending a letter threatening to sue under the GDPR somehow changed their minds.
Or I could keep using passwords in my password manager, where I DON’T lose all my passwords if I lose my phone? Passkeys just seem to solve no real problems and create a black box dependency. Everything I’ve seen about them just makes no damn sense.
It sounds good, unless granny needs to visit Google or Microsoft to get a new password after losing her phone. Then what??
The scary part is not about losing her phone. It's about having to keep the old, no-longer-secure Android phone alive just for passkeys after getting a shiny (and secure) new iPhone.
You can add the new phone as an additional passkey. I don't see how this would be scary.
I have 531 logins for varied websites and services. Would you enjoy having to change 531 passkey devices? Me neither. But default login flows in all these sites prompt you to use your current device as passkey by default, so people who don't know better (i.e. a general "everybody") are being gently pushed to do so.
No, which is why there is the cross platform standard CXF which allows for cross platform sharing of passkeys. Apple has announced that support for this is shipping later this year with iOS 26. Google hasn't announced when they are shipping it yet.
So until then you have to do what parent said? Change each one individually when you switch devices? Thanks but no.
I keep all my Passkeys in Bitwarden, it works fine across different devices and I use all major platforms regularly (iOS, Android, Windows, MacOS, ChromeOS). As a backup I've also added some extra duplicate Passkeys in the Chrome and iCloud password manager for the most important accounts in case I lose access to Bitwarden somehow.
Would’ve been nice if the basic UX had been figured out before passkeys were shoved down everyone’s throats.
It just wasn't an important consideration, unlike the attestation anti-feature.
AFAIK, there is no requirement for websites to support multiple passkeys nor, if they do, to support them in a sensible way. Some sites do this well, most don't.
The problem here is really more with Google and Microsoft than anything else. It's not like this problem doesn't occur already for other reasons.
She follows the same reset flow as before. Passkeys are identical in this respect to the passwords of yore.
If granny forgets her password, she looks it up on the last page of her notebook where it is written down. Granny cannot write down her passkey.
To avoid getting locked out you could add 2-3 passkeys from different providers to each account. And/or use a passkey provider that allows backups, and back up your keys. But I doubt many people will have the discipline to do either of that.
Then that's worse, it's now two authentication flows to remember. It's only made the situation more complicated.
Honest question: isn't that introducing some weaknesses, allowing the attacker to either reactivate password auth or add its own passkey, e.g. by tricking the user into accepting that change after receiving a mail with a link to accept it? That would make the passkey unbreakable, but leave other, easier-to-exploit weaknesses.
No. You always need that flow.
The problem with passkeys is they’re very unfamiliar and it’s easier therefore for less experienced users to get confused or tricked.
Passkeys are more like 2FA, and many services disable password resets without 2FA if it's enabled.
Do you have any examples of such services? How do they handle the lost-phone case? Tell people to go pound sand?
Good points, but don't underestimate "granny needs to visit the bank to get access to her account again" as a problem.
For a lot of people, dealing with (now mostly digital) bureaucracies is a major stress in life. The biggest one, for some.
It's not just about inconvenience. It's sometimes about losing access to some accounts, and just not having access for a while.
In terms of practical effect, a performance metric for a login system could be "% of users that have access at a given point." There can be a real tradeoff, irl, between legitimate access and security.
On the vendor side, the one-time password fallback has become a primary login method for some. Especially government websites.
Customer support is costly and limited in capacity. We are just worse at this than we used to be.
Digital identity is turning out to be a generational problem.
That’s right.
How many HN denizens are the de facto tech support for family members when they can’t login, can’t update, can’t get rid of some unwanted behavior, or just can’t figure stuff out?
I don’t blame them one bit. The tech world has presented them with hundreds of different interfaces, recovery processes, and policies dreamed up by engineers and executives who assume most of their user base is just like them.
> Passkeys is the way to go. Password manager support for passkeys is getting really good. And I assure you, all passkeys being lost when a user loses their phone is far, far better than what’s been happening with passwords. I’d rather granny needs to visit the bank to get access to her account again, than someone phishes her and steals all her money.
I am waiting for the era when using passkeys doesn't depend on some big tech company.
We're in that era. BitWarden supports them natively, and you can even self-host.
> I am waiting for the era when using passkeys doesn't depend on some big tech company.
You can choose any credential manager you want to store your passkeys.
The maker of the credential manager is still a "big tech company", and there is still lock-in. Before I ever use any passkey solution, I would need to be guaranteed the ability to export and backup my passkeys and migrate them wherever I want.
I expect to be alive and using the internet for far longer than I expect any particular passkey management software to keep operating.
I already had to jump ship from LastPass after they were hacked. Imagine if they hadn't allowed me to migrate my passwords.
Bitwarden is a great option for you. You can even self host it!
I use them without being dependent on big tech.
This comment explains why passkeys are nothing more than a way to shift responsibility.
If you lose all your data and your entire life because you lost your phone, no company is responsible.
But if you get hacked they are.
So they’ve come up with a solution that can destroy your entire life, but reduces the risk of corporate liability.
But yeah, keep carrying water for the entities that won’t come up with actual user focused solutions because it may cost them 0.01% of their profits.
Yes that’s what 99% of modern security is. Anything that requires O(N) spending for N users must be thrown away in the name of stock prices.
Passkeys will be the way to go if we get them to remove the "attestation object" field from the protocol. Until then there's no way for Jimbob to tell the difference between:
> Website: is this Jimbob's phone?
> Hardware: yes
And
> Website: I'll give you a dollar if you tell me something juicy about this user
> Hardware: Give this token to Microsoft and ask them
> Microsoft: Jimbob is most likely to click ads involving fancy cheeses, is sympathetic to LGBTQ causes, and attended a protest last week
With passwords and TOTP codes, I am in control of what information is exchanged. Passkeys create a channel that I can't control and which will be used against me.
(I chose Microsoft here because in a few months they're using the windows 10->11 transition to force people into hardware that locks the user out of this conversation, though surely others will also be using passkeys for similarly shady things).
> Passkeys will be the way to go if we get them to remove the "attestation object" field from the protocol.
I don't think you understand the protocol. The attestation object does not mean there is an authenticator attestation.
There is no authenticator / credential manager attestation in the consumer synced passkey ecosystem. Period.
Is this not the protocol we're talking about? https://w3c.github.io/webauthn/#sctn-attestation
It seems pretty clear that "where possible" parties besides the user are provided with information about the user (ostensibly about their device, but who knows what implementers will use this channel for)... so they can make a trust decision.
It's going to end up being a root-of-trust play, and those create high-value targets which don't hold up against corruption. So you're going to end up with a cabal of auth-providers who use their privileged position to mistreat users (which they already do, but what'll be different is that this time around nobody will trust that you're a real human unless you belong to at least one member of this cabal).
Just because an API or protocol has a certain capability, does not mean it is implemented for all use cases.
Folks seem to be hung up on the term "attestation" being in the response of a create call. If you look inside that object, there is another carve out for optional authenticator attestation, which is not used for consumer use cases.
I will keep repeating what I've said in the other comments. There is no credential manager attestation in the consumer synced passkey ecosystem. Period.
OK, so suppose you and I were bad guys. You work on the code that interfaces with the TPM on a windows device, and I work at an insurance provider and write code that authenticates users.
Suppose we hatch a conspiracy to take our users out of the "consumer synced passkey system". And into one where you can use the authentication ritual as a channel where you can pass me unique bits re: this user such that we can later compare notes about their behavior.
What about passkeys prevents us from doing this? How do we get caught, and by whom?
A while ago, I implemented a signin approach that looks similar to this "send a link/code" mode but (I believe) can't be exploited this way - https://sriku.org/blog/2017/04/29/forget-password/ - appreciate any thoughts on that.
Btw this predates passkeys which should perhaps be the way to go from now on.
One problem is you are requiring users to trust and click on a link in an email which is historically frowned upon. So you are undercutting phishing education.
If an attacker fooled me at one website, he would get that one account (possibly forever) and that's it. If it is a bank-connected account, I can intervene and change the email/account by writing a physical request to the bank, for example, or calling the bank, doing something. And likely it will be only a single bank account. But it may even be some unrelated account. Maybe it will be my Amazon account and all the attacker gets is some ebooks. Or a Steam account. Or some email without important links. Etc.
Point is, the damage will be likely local to a single or a handful of accounts.
If all the accounts are protected by two factor on my phone and I lose it or it bricks, then I'm done. It will be a total mess with no paths to recover, except restarting literally everything from scratch.
I have Google Auth app on my phone and every few months I consider using it, but then reconsider and stay with passwords.
I think the "click a link in the email" solution is more than a "tiny" bit better isn't it? It almost completely solves the attack pattern you laid out. Passing the whole link to BAD is not only more tedious but totally ridiculous. That is not the kind of thing that even totally naive users would do.
And there is a significant benefit of not needing to worry about weak or repeated passwords, password leaks etc.
Overall that pattern feels significantly better to me than a normal password system, and MUCH better than the "we'll send you six digits to copy and paste" solution.
How is it worse than using a password? I think I'm missing something, please explain.
1) User goes to BAD website.
2) BAD website says “Please enter your email and password”.
3) BAD’s bots start a “Log in with email and password” on the GOOD website using the user’s email and password.
4) BAD now has full access to the user’s GOOD account.
In your example, the user is logging in to BAD.com, thinking it is GOOD.com.
In the OP's example, the user is logging in to BAD.com intentionally, but his GOOD.com account is still hacked into.
This is a lot harder for the user to catch on to.
Specifically, what OP describes sounds like a plausible log-in-with-big-tech-company flow, which is really common these days.
I think GP has the following in mind:
In this case autofilled passwords are safe and convenient, since they alert the user that she isn't at GOOD.COM. A clickable link sent in email mostly works too; it ensures that the user arrives at GOOD.COM. (If BAD sends an email too, then there is a race condition, but it is very visible to the user.)
Pin code sent in email is not very good when the user tries to log in to BAD.COM.
You are.
There is no password in these new flows. They just ask for email or phone and send you a code.
Bad website only needs to ask for an email. It logs into Good with a bot using that email. Good sends you the code. You put the code in bad. Bad finishes the login with that code.
At no point in time is a password involved in these new flows. It's all email/txt + code.
Many sites work like this now. Resy comes to mind.
People hopefully won’t reuse the username/password they use on GOOD to log into BAD, so the login that BAD does in step 3 will fail.
Some percent of people will reuse their password. This is all but guaranteed.
There is no password. That's the point.
It's just an email, and a six digit code they text you.
Password managers can catch this case by not autofilling, hinting the user to take a step back and pay attention.
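For the curious, the exact-domain match a password manager performs before filling can be sketched like this (a toy vault with hypothetical names; real managers use more careful eTLD+1 logic):

```python
from urllib.parse import urlparse

# Toy vault mapping the domain seen at signup time to a credential.
# Real password managers use proper public-suffix (eTLD+1) matching.
VAULT = {"good.com": ("alice@example.com", "hunter2")}

def autofill(current_url: str):
    """Return a credential only if the page's host matches the stored domain."""
    host = urlparse(current_url).hostname or ""
    for domain, cred in VAULT.items():
        if host == domain or host.endswith("." + domain):
            return cred
    return None  # no match: refuse to fill, which is itself a warning sign

print(autofill("https://good.com/login"))         # fills
print(autofill("https://good.com.bad.io/login"))  # lookalike domain: no fill
```

The refusal to fill on `good.com.bad.io` (or a Cyrillic lookalike domain) is exactly the "take a step back" hint; the email-code flow has no equivalent check.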
This is a 100x better explanation than what's in the blog post. The blog post is practically a tweet.
Please help me understand the passkey flow that solves this problem.
1) BAD actor tries to create account at GOOD website posing as oblivious@example.com.
2) GOOD website requests public key from BAD.
3) BAD provides self-generated public key.
4) GOOD later asks BAD to prove that they control the private key.
5) BAD successfully proves they control the private key.
Unless you have step 3b where GOOD can independently confirm that the public key does indeed belong to oblivious. But even that is easily worked around.
that's just a strawman bad account creation flow that has nothing to do with passkeys. you verify the email address first.
passkeys use a unique keypair per account, there's no single public key that represents you.
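That per-site scoping is the key property. A toy model of it, with no real crypto and purely illustrative names, might look like:

```python
# Toy model of why a passkey can't be relayed: the authenticator scopes each
# credential to the relying party (rp_id) that created it, and only produces
# an assertion when the requesting site matches. Names are illustrative.
import secrets

credentials = {}  # rp_id -> per-site key handle (stand-in for a real keypair)

def register(rp_id: str) -> None:
    credentials[rp_id] = secrets.token_hex(16)  # unique key per site/account

def get_assertion(rp_id: str, challenge: str) -> str:
    key = credentials.get(rp_id)
    if key is None:
        raise LookupError(f"no credential scoped to {rp_id}")
    return f"signed({challenge}, {key})"  # stand-in for a real signature

register("good.com")
get_assertion("good.com", "nonce-1")  # works on the genuine site
try:
    get_assertion("bad.com", "nonce-1")  # a phishing page has nothing to ask for
except LookupError as e:
    print(e)
```

Unlike an emailed code, there is nothing the user can read out and hand to BAD: the browser simply has no credential for BAD's origin.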
Indeed, I was illustrating that DecoPerson was proposing that passkeys solve an account creation flow problem. They do not.
But as DecoPerson points out, in the realm of account creation, your "verify the email address first" solution has its limits.
It is easy to conflate different aspects of trust and think they have the same solution.
> Passkeys is the way to go.
No, please, not as long as attestation is in the spec. I firmly believe that passkeys are intended to facilitate vendor lock-in and reduce the autonomy of end users.
Frankly, I do not trust any passkey implementation as much as I trust a GPG-encrypted text file.
There is no credential manager attestation in the consumer synced passkey ecosystem. Period.
I use a FIDO2 security key, I fail to see how I am locked in. Can you elaborate?
Re: Granny - Pew Research about Online Scams, granny is far less likely to lose her account (15%) than an 18-29 year old (26%). If she's upper income and white even less likely. Similar trends with falling for large numbers of scams. People who've had 3+, granny (19%), 18-26 year olds (24%). [1] The survey notably has the same perception results. Society views the youth as mostly immune from scams (believe only 22%), yet fall victim to them (26%), while worrying about old people (believe 84% fall for them), who actually don't fall victim that often (15%).
Most of the time, re: granny, women are targeted much more because of supposed weakness and vulnerability (report 2/3, victim 2/3), yet males send much larger amounts of money ($112 vs $205). [2] To be fair though, old people do tend to lose more to scams. Granny would probably lose $300 on average vs $113 for an 18-24 year old. Conflicting numbers on the money figures though, so some of that depends on which survey you ask.
Old people also tend to write each other a lot of cautionary warning stories such as the AARP article on Stan Lee's swindling in old age (security guard, "senior adviser", "protector", and daughter). [3]
Old people get a bunch of grief, yet old people are actually less likely to fall for the scams.
Also, if she's a retiree in Miami Beach, more likely to be targeted (Adak, AK; Deepwater, NJ; then Miami Beach, FL are the worst for scams.)
[1] https://www.pewresearch.org/internet/2025/07/31/online-scams...
[2] https://bbbmarketplacetrust.org/wp-content/uploads/2025/02/N...
[3] https://www.aarp.org/entertainment/celebrities/stan-lee-elde...
I hate passkeys because even as a savvy user, I don’t know what to do with them. Do they replace my password? Do I need to generate one passkey per account and per device? How do I login on a new device? Are password managers still relevant with passkeys?
They’re too opaque for my taste and I don’t like them.
I am afraid that this flaw is present, to a certain extent, in almost all phishable methods (SMS, TOTP, email OTP, app push), with passkeys and mTLS being the exceptions.
"Click a link in the email" isn't much more secure either, for the most part. You might end up following a link blindly, which can lure you into revealing even more information.
Passkeys aren't that great either, because almost everyone has to provide an account recovery flow, which uses these same phishable methods.
The language used in communication is probably the most important deterrent here, second to using signals in the flow to present more friction to the abuser. A simple check like presenting a CAPTCHA-like challenge when the user is not authenticating from their usual machine can go a long way toward preventing these kinds of attacks at scale.
> “Click a link in the email” is a tiny bit better because it takes the user straight to the GOOD website, and passing that link to BAD is more tedious and therefore more suspicious.
Somehow this makes me think of Pascal's Wager...
You just got through describing an attack where the victim was not aware that a bad actor can trigger a bona fide password reset code at an arbitrary time. For your little table of threats, you posit that at least clicking the link goes to the bona fide web site.
But there's a separate little table of threats for the case where an attacker controls the timing of sending a fake email. I believe realtors have this problem-- an attacker hacks their email and hangs back until the closing date approaches, then sends the fake email when the realtor tells the client to expect one with the wire transfer number/etc.
> The attack pattern is:
There are lots of attack patterns. That is one. I am not certain I believe it is very likely, because (a) I think "sign-in partner" is obvious bullshit, and (b) I don't understand why I would never enter a code into the wrong website. I believe it can be possible, but...
> Passkeys is the way to go. ... I’d rather granny needs to visit the bank to get access to her account again, than someone phishes her and steals all her money.
... I do not agree your story is justification for passkeys, or for letting banks trust passkeys for authentication purposes. I'd rather she not lose access to banking services in the first place: I don't think banks should be allowed to do that, and I do not think it should be possible for someone to "steal all her money" so quickly -- Right now you should have at least several days to fix such a thing with no serious inconvenience beyond a few hours on the phone. I think it is important to keep that, and for banking consumers to demand that from their bank.
A "granny" friend of mine got beekeeper'd last year[1] and her bank reversed/cancelled the transfers when she was able to call the next say and I (local techdude) helped backup/restore her laptop. I do not think passkeys would helped and perhaps made things much worse.
But I don't just disagree with the idea that passkeys are useful, or even the premise of a decision here between losing all their money and choosing passkeys, I also disagree with your priors: Having to visit a bank branch is a huge inconvenience for me because I have to fly to my nearest. I don't know how many people around here keep the kind of cash they would need on-hand if they suddenly lost access to banking services and needed to fly to recover them.
I think passkeys are largely security theatre and should not be adopted, if only so that it will be harder for banks to convince people that whoever holds the passkey should be able to take all their money/access. This is just nonsense.
[1]: seriously: fake antivirus software invoice and everything, and her and her kid who is my age just saw the movie in theatres in like the previous week. bananas.
> I am not certain I believe it is very likely, because (a) I think "sign-in partner" is obvious bullshit
It looks almost the same as the log-in-with-big-tech flow that users are already used to.
> and (b) I don't understand why I would never enter a code into the wrong website. I believe it can be possible, but...
You enter it on the website you are trying to log into and where you initiated the action, which in this scenario is the BAD website.
> I am not certain I believe it is very likely, because (a) I think "sign-in partner" is obvious bullshit, and (b) I don't understand why I would never enter a code into the wrong website. I believe it can be possible, but...
You and I think they are bullshit, but ... the problem is that bullshit is sometimes genuine.
I have gotten tired of how many times in recent years I have seen things that looked like phishing or had obvious UX-security flaws, reported them, and only gotten a reply from customer service saying the emails and sites were genuine and that they have no intention of improving.
If janky patterns are the norm, then regular users will not be able to distinguish the good-but-janky from the scams.
> I am not certain I believe it is very likely, because (a) I think "sign-in partner" is obvious bullshit, and (b) I don't understand why I would never enter a code into the wrong website. I believe it can be possible, but...
Now replace the email with a text message sent from a short-code.
> I don't understand why I would never enter a code into the wrong website
That's what phishing is predicated on, and it seems to be successful enough.
It is all bananas. The old way with a local key on the computer and some silly Java program, a physical dongle to validate transactions with a number printed on a display, was way more fool proof.
Then the banks wanted you to use the dongle to verify yourself on phone and it all went downhill from there.
>(a) I think "sign-in partner" is obvious bullshit
Nearly every website tries to offer Google or Microsoft based sign in, "sign in partners" are commonplace.
Mobile phone App/Passkey authentication is just a way to pass the responsibility down to users. Losing a phone today is not just losing the passkey, there are "login with QR-code" schemes too, which do not need a password at all. It is a bad trend to pass all security onto the physical phone.
And good luck when your account is closed by the company, e.g. Microsoft or Apple or Google.
But you could replace #2 with "Enter your password from GOOD, as they are our sign-in partner". I'm not in favor of emailing 6 digit codes either, but your scenario presupposes that users will be willing to trust that two services have intermingled their auth, and in that case their password can be wrangled from them too.
My password manager won’t allow autofilling in the latter case, because it remembers the domain I used at sign-up time.
On the rare occasion that my password manager refuses to autofill, I take a step back and painstakingly try to understand why. This happens about once a year or twice.
Passkeys are still a shared secret, aren't they? Asymmetric cryptography would have been amazing. Barring that I would actually recommend Oauth or something like it, to limit the number of parties who manage shared secrets to a smaller set of actors who have more experience doing so.
They are in fact public/private keys and use signing a challenge for authentication.
But in practice they usually rely on attestation by an approved vendor, and the vendor won't let you control your private key, so they'll leverage it for lock-in.
No, they're just resident webauthn credentials which use asymmetric crypto.
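For anyone curious what "asymmetric crypto" buys here, a toy textbook-RSA sketch of challenge-response sign-in. The tiny key and unpadded signing are strictly for illustration, never for real use; the point is that the server stores only the public key, so there is no shared secret to phish:

```python
# Toy textbook RSA: the private exponent d never leaves the "device"; the
# server keeps only (n, e) and verifies a signed challenge. Illustration only.
import hashlib

p, q, e = 61, 53, 17                 # demo-sized primes, never real-world sizes
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                  # private exponent (stays on the device)

def sign(challenge: bytes) -> int:
    m = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(m, d, n)

def verify(challenge: bytes, sig: int) -> bool:
    m = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(sig, e, n) == m

chal = b"nonce-from-server"
assert verify(chal, sign(chal))                 # device proves possession
assert not verify(chal, (sign(chal) + 1) % n)   # an altered signature fails
```

Because the server sends a fresh nonce each time, even a captured signature is worthless for a later login, unlike a password or an emailed code.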
> I’d rather granny needs to visit the bank to get access to her account again, than someone phishes her and steals all her money.
But granny can't go to a bank, because they closed down most of their offices. Since 99% of what you need a bank for can be done using their app, it no longer made financial sense to have a physical presence in most smaller towns and villages.
Lots of elderly were complaining about this when it happened because they were too lazy to learn how to use the bank apps. Hell, they already started complaining when you could no longer withdraw money at the desk even before they closed down the offices. Apparently even learning to use something as simple as an ATM was too much effort for them.
With luck, one day you'll be old and then perhaps you'll understand how unkind your last paragraph is.
You do realise the average granny is in cognitive decline and dealing with a myriad of health issues? You can judge a society (or a company) by how they treat their elderly
I suppose the GOOD site should say "do not enter this code on any other sites, we are NOT a login partner for any other sites" but a lot of people would probably not read that. Still, it would help. The very tricky thing about this scam is that it gets people to react to an email that they are expecting. Which means they will not be as guarded as if they got an email out of the blue.
Instead of showing a code, why not give a link that the user can click on?
I don't understand your example.
> 2) BAD website says “We’ve sent you an email, please enter the 6-digit code! The email will come from GOOD, as they are our sign-in partner.”
Does that mean that GOOD must be a 3rd party identity provider like Facebook, Apple, Google etc?
They don't have to actually be a 3rd party identity provider, just the user has to find it plausible that they might offer 3rd party login. Which, to be honest, pretty much any big or even medium-sized tech company might be doing these days.
No, BAD just inserts your email address on GOOD’s login page which sends you the login code, and they lie to prime you into thinking it’s not suspicious that the email came from someone other than BAD.
When you insert the login code on BAD, BAD uses it to finish the login process on GOOD that they started “on your behalf”.
BAD is lying about GOOD and presenting GOOD's legitimate service as a mere IdP for BAD, such that the user provides their code for GOOD to BAD so that the latter can then automatically log into GOOD.
There are sites that immediately send you a 6-digit code just by entering your email on their sign-in page; they don't even request a password. That means you could be phished on a fake website: when you enter your email there, they enter it on the real site, then you receive the real code from GOOD and enter it on the fake site.
It is just the same old stuff as with the username & password combination. I used to duplicate websites; they looked exactly like the original, except I was storing the entered username and password combination. I did this when I was a kid. The process is the same (or very similar) with everything else that is not a password.
True, they do it to facilitate access to their site without a password, but personally I don’t like getting an email just because I entered my username to sign in (my password manager takes care of filling the form so that email with a code is unnecessary to me).
I agree, I do not want an email either.
This attack technique is called "real time phishing". If you need a diagram or a more detailed explanation, look it up
BAD assumes:
1. you got login credentials at GOOD
2. you're using the same email address there
They then tell you GOOD will send you a code that you have to enter on their website.
Then they enter your Email on GOOD and request a reset, which sends a mail with a code to you.
You then enter the code on their website.
Now that they have the code they can enter it on GOOD and they have your account.
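The relay in the steps above can be sketched as a small simulation (all names illustrative; GOOD's code is genuine, BAD only adds social engineering):

```python
# Toy simulation of the real-time OTP relay. GOOD behaves correctly throughout;
# the attack works because the victim types GOOD's code into BAD's page.
import secrets

class Good:
    def __init__(self):
        self.pending = {}  # email -> outstanding one-time code

    def start_login(self, email: str) -> str:
        code = f"{secrets.randbelow(10**6):06d}"
        self.pending[email] = code
        return code  # in reality, emailed to the address's real owner

    def finish_login(self, email: str, code: str) -> bool:
        return self.pending.get(email) == code

good = Good()
victim = "granny@example.com"
emailed_code = good.start_login(victim)   # BAD's bot triggers a genuine email
code_typed_into_bad = emailed_code        # victim trusts BAD's prompt
assert good.finish_login(victim, code_typed_into_bad)  # BAD is now logged in
```

Note there is no step GOOD could tighten server-side: every message it sent was legitimate, which is why the fix has to be in the flow itself.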
Good explanation! GOOD's email should contain "Never give this code to others", and the user should understand this clearly. I like phone and email OTP, it's easy to log in.
> Passkeys is the way to go.
My problem with passkeys is that there is no hardware attestation like there is with Yubikeys and similar.
This means for security conscious applications you have no way of knowing if the passkey you are dealing with is from an emulator or the real-deal.
Meanwhile with Yubikeys & Co you have that. And it means that, for example people like Microsoft can (and do) offer you the option to protect your cloudy stuff with AAGUID filtering.
And similar if you're doing PIV e.g. as a basis for SSH keys, you can attest the PIV key was generated on the Yubikey.
You can't do any of that with passkeys.
> You can't do any of that with passkeys.
Device-bound passkeys which are used in workforce / enterprise scenarios are typically attested.
Attestation does not exist for consumer synced passkeys by design. It is an open ecosystem.
Isn’t clicking on a link in an email also problematic? It gets users in the habit of trusting links in emails. There is a history of those being used in bad ways as well.
I still don’t really understand what recovery looks like for a lost passkey… especially if I lose all of them. Not everything has a physical location where an identity can be validated, like a bank. Even my primary bank isn’t local. I’d have to drive about 6 hours to get to a branch office.
The recovery of a lost passkey is the same as a lost password.
How often do you lose the password for every account you have simultaneously?
There will not be a bank to visit.
> “Click a link in the email” is a tiny bit better because it takes the user straight to the GOOD website
I don't know, some would say taking an attack from trivial to virtually impossible is a bit more than a "tiny bit".
I work on a product that emails administrators with a list of actions they can take related to certain authorization requests. We got feedback from a customer that all requests were simultaneously approved and then denied. It turns out their Microsoft-provided email server follows all links and runs all JavaScript before showing the message to the user.
I like capability URLs. I know an URL isn't a secret, but it works in practice and it works well.
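A minimal sketch of issuing and redeeming such a capability URL, assuming a hashed, single-use, expiring token (all names and the URL format are illustrative):

```python
# Capability URL sketch: a long random token in the URL is the credential.
# Store only its hash server-side, make it single-use, and give it an expiry.
import hashlib, secrets, time

store = {}  # sha256(token) -> (user, expiry timestamp)

def issue(user: str, ttl: int = 900) -> str:
    token = secrets.token_urlsafe(32)  # ~256 bits of entropy, unguessable
    digest = hashlib.sha256(token.encode()).hexdigest()
    store[digest] = (user, time.time() + ttl)
    return f"https://good.example/login#{token}"

def redeem(token: str):
    digest = hashlib.sha256(token.encode()).hexdigest()
    entry = store.pop(digest, None)  # pop makes the token single-use
    if entry and time.time() < entry[1]:
        return entry[0]
    return None

url = issue("alice")
token = url.split("#", 1)[1]
print(redeem(token))   # logs alice in
print(redeem(token))   # replay fails: token already consumed
```

Hashing the stored token means a database leak doesn't leak live login links, and single-use consumption limits the damage if a mail scanner follows the link.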
A bad practice is to shorten the code validity to a few minutes. This cannot really be justified, and it puts users under stress, which lessens security.
The discussion around passkeys, who is and isn't allowed to store them, almost killed them for me personally. I use them for very, very few services and I don't want to extend it.
Please log into BAD.com - we're a login provider to GOOD.com with a higher security level, from now on use BAD.com to log into GOOD.com
Why would I put a secret code from GOOD.com into BAD.com? That's the core of the problem.
If you put a code you get from GOOD.com into BAD.com, it's like you put a password from GOOD.com into BAD.com - don't do that.
> If you put a code you get from GOOD.com into BAD.com, it's like you put a password from GOOD.com into BAD.com - don't do that.
A password manager will protect me from doing the latter. There’s no way it can protect me from doing the former.
Any human can be tricked, no matter how smart they are. A bad actor just has to wait for the right moment. No amount of “don’t do that” can change that fact.
If a website says "Do this" and you're the person who follows random websites against security practices, because you believe in authority, a password manager does not help. You will open the password manager, search for GOOD.com and put it into BAD.com and be angry that your password manager can't do that for you.
"Any human can be tricked, no matter how smart they are."
and
"A password manager will protect me from doing the latter."
Don't work together. Either everyone can be tricked or not.
It says "Everyone can be tricked" but I can't be tricked because I use a password manager.
> you're the person who follows random websites against security practices, because you believe in authority
There are many reasons why such lapses of judgements happen, even to people who don’t believe in authority. For example, the fact that any human can be tricked.
> Don't work together. Either everyone can be tricked or not.
The password manager protects me from filling my password into the wrong site.
The password manager will not protect me from BAD.com tricking me into handing them out a one-time code that GOOD.com sent me via email.
So everyone can be tricked except you, because you use a password manager.
I literally wrote in my last sentence that I can still be tricked.
Then excuse me, I read the last sentence differently.
I'm not sure what kind of websites are vulnerable to these attacks, but websites that have double authentication seem pretty safe to me. And if you forgot your password, then you receive an e-mail to change it with a secure link.
This point means the user is not paying attention: 1) User goes to BAD website and signs up. Steps 2-7 wouldn't be possible without 1.
"User not paying attention" is ultimately the reason for most phising attacks. It happens a lot, and we're trying to solve it as the known problem it is. Everybody, and I say everybody are human beings at the end of the day (so far...) and so by definition, can have a bad day and lower their defenses. It has ironically even happened to reputated security specialists.
How is it different from plain old password?
1) User goes to BAD website and enters credentials
2) BAD website uses GOOD website to check if the credentials are valid
3) Pwned
It is just a MITM attack. The moment you go to BAD and enter a credential (password or one-time code), you are done.
A similar flow can still happen with passwords. Granted, the user may be confused if they use a password manager and the password doesn't populate.
Would it be a viable and simple solution to only enter 6-digit codes into the specific website that requested it?
Isn't this the same thing as BAD asking, let us know the code i.e. password that GOOD gave you? Why would one be inclined to give BAD (i.e. someone else) this info?
If you're phished, you're probably not checking the domain too carefully anyway.
You get an email, providing you with a phishing link for miсrosoft.com (where the apparent c is actually the cyrillic "s", so BAD). In the background, they initiate a login to microsoft.com (GOOD), who then send you a 6 digit code from the actual microsoft.com. If you were fooled by the original phishing website, you have no reason to doubt the code or not enter it.
This came to my mind too. But by using a password manager it will be able to differentiate between the GOOD and BAD site. So I think the point is valid only if the user is not using a password manager.
Or copy pasting passwords manually. In that case the password manager is equivalent to a list of passwords on a sheet of paper.
Passkeys are just passwords that require a password manager.
So this is only relevant to websites that do not use passwords and only do a temporary one time code... this is not 2FA correct?
If the target was not actively trying to log into GOOD at that exact moment, why would they treat this as anything other than one of a phishing attempt or spam?
Because target WAS trying to login to BAD.
Imagine a "free porn, login here" website, when you put in your gmail address it triggers the onetime code from gmail (assuming it did that type of login) - thousands would give it up for the free porn.
Oh I see. Misread the whole scenario.
Links are worse than OTP, but both can easily be secure if users check the domain, which users never do, so links and OTP are both terrible. Long live passkeys.
> if users check domain which users never do
To be fair, can we blame them? There are so many legitimate flows that redirect like it’s a sport. Especially in payments & authn, which is where it’s most important. Just random domains and ping pong between different partner systems.
It’s still possible to use a button in the email if you include a copypasteable variant in the mail itself.
> Passkeys is the way to go. Password manager support for passkeys is getting really good.
And how do you access your password manager when your computer is locked ?
How do you access the phishing site when your computer is locked?
On your phone
Use password manager on your phone?
I don't store my passwords on the phone. Phones are fundamentally less secure than desktop, which can use strong virtualization [0] for security.
[0] My daily driver OS is https://qubes-os.org
If we are talking about real-time phishing, then sending a code to the email is as secure as 2FA authentication with a password and a Google Authenticator code.
My password manager will protect me from entering my password into a website on the wrong domain. It won’t protect me in the passwordless case where the code is sent via email.
Can you explain this more, I don't understand Google authenticator completely? Could a bad actor spoof a 2FA as they can with an email, and capture your input?
The attacker would just ask you for the TOTP code and forward that to Google.
In practice it's maybe slightly harder, because they'd have to convince a user to enter their google 2fa code into a site that isn't obviously google?
I'd imagine a convincing enough modal would do the trick though, in a lot of cases.
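For reference, TOTP is simple enough to sketch with only the standard library, which also shows why a code the victim reads out can simply be replayed within its ~30-second window (demo secret only, not a real account's):

```python
# Minimal RFC 6238 TOTP (SHA-1, 6 digits) using only the standard library.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, t=None, step=30):
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**6
    return f"{code:06d}"

secret = "JBSWY3DPEHPK3PXP"  # demo secret
now = time.time()
user_code = totp(secret, now)      # what the victim reads off their app
attacker_code = totp(secret, now)  # what the relay submits moments later
assert user_code == attacker_code  # same window, same code: fully relayable
```

The code depends only on the shared secret and the clock, not on who is typing it in or where, which is exactly the gap evilginx-style real-time proxies exploit.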
> convince a user to enter their google 2fa code into a site that isn't obviously google?
if the BAD site itself looks legit, and has convinced the user to do the initial login in the first place, they won't hesitate to lie and say that this 2-factor code is part of their partnership with Google etc., and tell you to trust it.
A normal user doesn't understand what a 2-factor code is or how it works. They will easily trust the phisher's site if the phisher first breaks the user down and sets them up to trust the site from the beginning.
What Google does is send a notification to the user's phone telling them someone tried to access their account if this happened (or on any new login from a device you haven't previously used). It's a warning that requires some attention, and depending on your state of mind and alertness, you might not suspect that your account is stolen even with this warning. But it is better than nothing, as the location of the login is shown to you, which should be _your own location_ (and not some weird place like Cyprus!).
What I don't understand is how the site will send the 2FA code request to the bad actor's phone instead of the real user's phone. Is this not part of what makes it more secure than a text or email? Wouldn't the bad actor need to be logged into the authenticator as the user you're trying to hack?
> how the site will send the 2FA code request to the bad actors phone, instead of the real users phone?
the 2FA code in this case is in the email, not via an app. This email is triggered by BAD on their end, but it is sent by GOOD.
If the 2FA is _only_ via the authenticator app, then BAD will need to convince the user to type the 2FA code from the app into the BAD site (which is harder, as nobody else does this, so it should at least raise suspicions from the user).
If we are talking about TOTP, there is a time limit to that, which makes it harder, yeah.
Not much harder. The state of the art of phishing right now is proxy based setups like evilginx which pass along credentials in real time. Then you just save the session cookie or change/add the 2fa mechanisms so you can get in whenever you want with the stolen credentials.
> 1) User goes to BAD website and signs up.
I think this is what Raymond Chen calls the other side of the airtight hatch.
The game is already over. The user is already convinced the BAD website is the good website. The BAD website could just ask the user for the email and password already and the user would directly provide it. The email authentication flow doesn’t introduce any new vulnerability and in fact may reduce it, if the user actually signs in via a link in the email.
I haven't been able to get into my Oracle (free) account for 2 years because I lost 2fa... Unless I start needing to pay them for something, they'll probably never answer my emails. There are consequences for losing your phone when using alternative authentication methods (be careful).
If I have a password manager, what good are passkeys to me? No thanks.
you can do the same with text messages, right? That's even more scary.
edit: kam corrected me below.
The browser that initiated the request is under the control of BAD in step 3.
ah, you are correct. thanks for pointing that out.
What about the 99 other places granny needs to regain access to after the much more common broken or lost phone, many of which don't have a meaningful amount of customer service?
I see no reason not to use password + one of multiple 2FA methods so the user can regain control.
I guess this flow is even worse for authenticators like Duo, or even Apple’s own iCloud logins with 2FA. You log on to a phishing site mimicking the real one, and your phone asks if it is you trying to log in. Yes, of course it’s you logging in, but you don’t realize you’re logging the bad guys in by proxy.
The prompts that show where the login is coming from are useless, too, because mapping from IP addresses to geographical locations is far from perfect. For example, my legit login attempts showed me all over my country map. If I’m in a corporate VPN already, its exit nodes may also be all over the map, and your legitimate login from, say, Germany may present itself as coming from Cyprus, which is shady as fuck.
If I seek to implement 2fa for my own service and have it be not theater and resistant to such phishing attacks, it gets difficult real fast.
[dead]
This is also the same problem with TOTP 2fa, passkeys are definitely the way to go for most people.
Easy to mitigate by only allowing the device that requested the 6-digit code to use the code.
Edit: See first reply, this is not a mitigation at all!
but the device is under the control of BAD. They fake a sign-in on their backend to GOOD. Your computer never touches GOOD at all, except for seeing the email from GOOD (which you're told about by BAD, and lied to about it being a partner sign-in thing).
The problem being exploited by BAD is that your login account identifier (email in this case) is used in both GOOD (and BAD - accidentally or deliberately orchestrated), and 2-factor does not prevent this type of phishing.
The scheme is impossible, because the GOOD site says in the email "NEVER SHARE THIS ONE TIME CODE WITH 3RD PARTY APPS OR INDIVIDUALS"
You left out the /s tag. People don't read that bit.
/s tag?
People do read, if the email is short
They only read what they need to finish what they are currently trying to do, which in this case is the code they need to log in.
I know from experience that well designed messages with secure code are very understandable and make it virtually impossible to miss the warning.
On what grounds do you say people don't read? Any evidence?
> I know from experience that well designed messages with secure code are very understandable
This premise seems flawed.
How can you possibly know from experience that something is “very understandable” if the only brain you have is your own?
How do you anticipate how other people with brains different from yours are going to behave in situations of cognitive impairment or extreme stress, things that happen in the real world?
There are common properties of psychology shared by people. UI design and ergonomics rely on such properties; in particular, how people read text.
But I am speaking of myself only: from my experience receiving well designed messages, compared to the experience with badly designed ones.
I am a data point of evidence supporting my view. The opinion that "people don't read" is complete speculation, without convincing evidence.
The real problem is that many services simply don't include the warning in the message.
OP’s claim was not that “people don’t read.”
It was that “[t]hey only read what they need to finish what they are currently trying to do.”
Those are two different claims.
Ok. When they need the code they will have to scan through a message like
and will read the words, because they read left to right. The code should be in the same font as the rest of the text.
I can assure you that by now, my brain is conditioned to lock into the four-digit code as soon as it can, entirely ignoring everything around it, including the words to the left.
I’m an avid reader. But there are limits to what I can process, and our world has become so full of noise that it has become a coping strategy for brains to selectively ignore stuff if they feel it’s not important at the moment. That effect becomes even more pronounced as the brain deteriorates with age.
I do not believe that receiving such a message you will not notice the phrase.
And more so if you receive them constantly.
But of course, you are entitled to your opinion, even if it's wrong.
Phising = pretending you're the first party
Tuesday follows Monday
I don't know if you're being sarcastic or just missing the problem, which is that people will be presented with, like, a Facebook login page on a site with a URL like `facebook.quick-login.com` or `facebock.com`, and they'll enter the passcode since, as far as they were concerned, they did everything correctly. The disclaimer does shit to prevent that; they »obviously« didn't share the code with any other website, they entered it on Facebook as they were told!
I am being sarcastic because this discussion is about a different attack, not about phishing.
(The OP says one-time codes are worse than passwords. In the case of phishing, passwords fail the same way as one-time codes.)
I was also being sarcastic/provocative in the previous comment, saying the GOOD site always includes a warning with the code, making the attack impossible. A variation of the attack is very widely used by phone scammers: "Hello, we are updating the intercom in your apartment block. Please tell us your name and phone number. OK, you will receive a code now; tell it to us." Yet many online services and banks still send one-time codes without a warning to never share them!
The phishing point may also be used in defence of one-time codes: if the GOOD service was using passwords instead of one-time codes, BAD could just initiate a phishing attack, redirecting the user to a fake login page; people today are used to the "Login with" flow.
I don't see how that's worse than user-password authentication. For password without 2FA the attack pattern is
1) User goes to BAD website and signs up (with their username and password). The BAD website captures the username and password.
2) BAD website shows a fake authentication error and redirects to the GOOD website. The user is not very likely to notice.
3) BAD uses the username and password to log in to GOOD’s website as the user. BAD now has full access to the user’s GOOD account.
OK, with a password manager the user is more likely to notice they are on the BAD website. Is that the advantage?
Most password managers are fussy about which websites they fill the password in on. It’s partly a convenience feature to only show relevant accounts but it’s also a security feature to avoid phishing.
Passkeys are stronger here because you can’t copy and paste a passkey into a bad website.
It can happen, but on the BAD website the password manager will not offer the saved password, so the user has a higher chance of noticing something is wrong. Of course, only if they use a password manager with complex passwords...
Four times a day, I get an email notification that someone requested a password reset for my Microsoft account, which gives me a six-digit number to recover my account. So every day, an attacker has four shots in 1,000,000 of stealing my account by just guessing the number. They've been doing this for years.
If the attacker's doing this to thousands of accounts - which I'm sure they are - they're going to be stealing accounts for free just by guessing.
I wrote up a security report and submitted it and they said that I hadn't sufficiently mathematically demonstrated that this is a security vulnerability. So your only option is to get spammed and hope your account doesn't get stolen, I guess.
I have added what I think they call login alias to my account. This blocks logins using the normal account username (which is my public email address), and only allows them via the alias (which is not public and just a random string). Not a single foreign login attempt since I enabled the alias.
You can enable it on account.microsoft.com > Account Info > Sign-in preferences > Add email > Add Alias and make it primary. Then click Change Sign-in Preferences, and only enable the alias.
I hadn't thought of this use case for aliases.
I had to make my Outlook email primary again on my Microsoft account, unfortunately, because of how I use OneDrive. I send people share invitations and there are scenarios (or at least there were the last time I checked) where sending invitations from the primary account email is the only way to deliver the invite. If your external email alias is primary, they'll attempt to send an email from Outlook's servers that spoofs the alias email :/
I just tested it, and it looks as if that was fixed. It seemed to work for me.
This sounds a lot like Steam, where the name on your profile page is a vanity string that you can change whenever you want, but the actual username in their system is an unrelated (and immutable) ID.
Then, is the login alias sort of a password? In that, it is something you know.
In a way, yes. I don't count on it being private though. But it appears nowhere online, so it's not used by credential stuffers or other bots.
Yep, back to passwords, but less secure ones.
joe@smith.com, joe.smith@bigcompany.com
...those will get "drive by" attacks no matter what.
Interesting that they're letting you alias it back to "coolkid5674321" again...
I had to do this as well. My account got spammed daily in such a way I had to verify my account and change my password on every login.
With the alias I no longer have this issue.
This is what I do. The crucial thing is to only use the alias for logging in.
I get a similar message constantly for an old Instagram account - "sorry you're having trouble logging in, click here to log in and change your password!"
The code length should ideally be adaptive and increase if this happens.
I get it too! I always assumed it was some hangover from that time I had to use *crosses self* Microsoft Teams.
If they are doing this to 125,000 accounts, they should get an average of one account per day, right? So on average it would take them 342 years to get any specific account, but as long as they aren't trying for any particular account, they've got a pretty good ROI.
I guess the fix for this would be exponential backoff on failed attempts instead of a static quota of 4 a day?
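A minimal sketch of what per-account exponential backoff could look like (the class, field names, and thresholds here are all hypothetical; this is not Microsoft's actual quota logic):

```python
class ResetCodeThrottle:
    """Delay the next allowed guess exponentially per consecutive failure."""

    def __init__(self, base_delay=1.0, cap=24 * 3600):
        self.base_delay = base_delay  # seconds to wait after the first failure
        self.cap = cap                # never wait longer than a day
        self.failures = {}            # account id -> consecutive failure count

    def next_delay(self, account_id):
        n = self.failures.get(account_id, 0)
        if n == 0:
            return 0.0                # first attempt is free
        return min(self.base_delay * 2 ** (n - 1), self.cap)

    def record_failure(self, account_id):
        self.failures[account_id] = self.failures.get(account_id, 0) + 1

    def record_success(self, account_id):
        self.failures.pop(account_id, None)


throttle = ResetCodeThrottle()
for _ in range(10):
    throttle.record_failure("alice")
print(throttle.next_delay("alice"))  # 512.0 seconds after 10 straight failures
```

After ten straight failures the attacker waits over eight minutes for the next guess, instead of getting a flat four guesses per day forever.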
Why would doing this to 125K accounts give them access to one account per day? The chance of guessing the 6-digit PIN code for each account is the same (1 in 10^6) regardless of how many accounts you are attacking.
It's never truly guaranteed and the numbers aren't quite one account per day at 125k accounts, but:
10^6 digits = 1,000,000 possibilities
125,000 accounts x 4 attempts per account per day = 500,000 attempts per day
---
1-(1-1/1,000,000)^500,000 ≈ 39%
So every day they have a roughly 39% chance of success at 125,000 accounts.
---
At a million accounts:
1-(1-1/1,000,000)^(4×1,000,000) ≈ 98%
Pretty close to 1 account per day
Off by a factor of 4 but the concept stands.
---
And 125k accounts will be close to guaranteed to getting you one each week:
1-(1-1/1,000,000)^(7×4×125,000) ≈ 97%
> 1-(1-1/1,000,000)^(4×1,000,000) ≈ 98%
> Pretty close to 1 account per day
No, this means there is a 98% chance you get _at least_ 1 account.
`1-1/1,000,000` is the probability you fail 1 attempt. That probability to the 4millionth is the probability you fail 4 million times in a row. 1 minus _that_ probability is that the probability that you _don't_ fail 4 million times in a row, aka that you succeed at least once.
The expected number of accounts is still number of attempts times the probability of success for 1 try, or: 4 accounts.
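The thread's numbers check out; a quick sketch of the probability versus the expectation (pure arithmetic, no assumptions about Microsoft's real limits):

```python
# Probability of at least one correct guess per day, versus the expected
# number of compromised accounts (a 6-digit code has 10^6 possibilities).
p = 1 / 1_000_000

def p_at_least_one(attempts):
    return 1 - (1 - p) ** attempts

# 125,000 accounts x 4 guesses/day = 500,000 attempts/day
print(round(p_at_least_one(500_000), 2))    # 0.39
# 1,000,000 accounts x 4 guesses/day = 4,000,000 attempts/day
print(round(p_at_least_one(4_000_000), 2))  # 0.98
# Expected accounts per day is attempts * p, not the probability above:
print(4_000_000 * p)                        # 4.0
```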
What are the chances of getting 500,000 guesses (4 each for 125,000 accounts) wrong ? My math says 60%, so probably not one account per day, but if they keep it up for a week and everything else holds, there's only a 3% chance they haven't gotten any codes right.
Guess the same code for every account.
Imagine the extreme case, where they pinged one million accounts and then tried the same code (123456) for each one. Statistically, 1 of those 1,000,000 six-digit codes will probably be 123456.
I had the same issue on a useless old account. Could see the IP addresses of the sign-in attempts, they came from all over the world, all different ISPs, mostly residential. Nearly every request was from a unique /16! If botnets are used for something this useless, I dread to think what challenges at-risk people face
Adding 2FA was the solution
I couldn't find the method they were using in the first place, because for me it always asks for the password and then just logs me in (where were they finding this 6-digit email login option?!), but this apparently blocked that mechanism completely because I haven't seen another sign-in attempt from that moment onwards. The 2FA code is simply stored in the password manager, same as my password. I just wanted them to stop guessing that stupid 6-DIGIT (not even letters!) "password" that Microsoft assigns to the account automatically...
Microsoft allows you to create a second "login only" account username to access your e-mail and other services. I was having the same problem as you, but much worse. Check into it; it only takes a few minutes to set up.
Does adding MFA not protect you against this? If you are secured by a TOTP on top of your password, it should not matter if they manage to reset your password.
Somewhat, but imho the Microsoft MFA is also full of similar flaws.
As an example: I've disabled the email and sms MFA methods because I have two hardware keys registered.
However, as soon as my account is added to an azure admin group (e.g. through PIM) an admin policy in azure forces those to 'enabled'.
It took me a long time debugging why the hell these methods got re-enabled every so often, it boils down to "because the azure admin controls for 'require MFA for admins' don't know about TOTP/U2F yet"
Imho it's maddening how bad it is.
Or you could enable MFA?
Four times a day, times say 5 years = 7_300 tries. Times 10_000 accounts ≈ 73_000_000 tries. They should have access to ~70 accounts by now.
Cheapest VPS is $5/month, residential proxies are $3/1Gb, which equals ~$200 / 5 years.
$3 per hacked account — is it good unit economics?
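The back-of-envelope arithmetic, written out (the $200 cost figure is the estimate from the comment above):

```python
tries_per_account = 4 * 365 * 5             # four guesses a day for 5 years
accounts = 10_000
total_tries = tries_per_account * accounts  # 73,000,000 guesses overall
expected_hits = total_tries / 1_000_000     # ~73 accounts on average
cost_dollars = 200                          # parent's ~$200-per-5-years estimate
print(total_tries, expected_hits, round(cost_dollars / expected_hits, 2))
```

About $2.74 per compromised account, which rounds to the ~$3 figure above.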
I was authenticating a set of scripts five times for each run with MFA. Once, it asked me for six MFA prompts with no disambiguating info.
Did I click “Yes” to the attack the fifth time, or was the sixth the attack? Or was it just a “hiccup” in the system?
Do I cancel the migration job and start from the beginning or roll the dice?
It’s beyond idiotic asking a Yes/No question with zero context, but that was the default MFA setup for a few hundred million Microsoft 365 and Azure users for years.
“Peck at this button like a trained parrot! Do it! Now you are ‘secure’ according to our third party audit and we are no longer responsible for your inevitable hack!”
> “Peck at this button like a trained parrot!
All of the prompts users get these days in an effort to add "security" have trained users to mindlessly say "yes" to everything just so they can access the thing they're trying to do on their computer; we've never had less secure users. The cookie tracking prompts should probably take most of the blame.
I know with the last major macOS update, nearly every app is now repeatedly asking if it can connect to devices on my network. I don't know? I've been saying yes just so I don't have stuff mysteriously break, and I assume most people are too. They also make apps that take screenshots or screen record nag you with prompts to continue having access to that feature. But how many users are really gonna do a proper audit, as opposed to the amount that will just blindly click "sure, leave me alone"?
On my phone, it keeps asking if I want to let apps have access to my camera roll. Those stupid web notifications have every website asking if it can send notifications, so everyone's parents who use desktop Chrome or an Android have a bunch of scam lotto site ad notifications and don't know how to turn them off.
Untold billions towards cyber security theater and there's still hackers. No one saw that coming!
I'd make a joke about cybersecurity theatre but I think zscaler will block the comment from being submitted
Should be using app registrations for that, not user accounts.
The worst part about this is it just further reinforces horrible habits and expectations.
Using a modern password manager, like 1Password, is _easier_, safer, and faster than the stupid email-token flow. It takes a little bit of work and attention at first to set it up across a couple of devices and verify it works... but it's really about the same amount of effort as keeping track of a set of keys for your house, car, and maybe a workplace.
If you make a copy of a door key when you move into a new place, you test the key before assuming it works. Same thing with a password manager. Save a password on your phone, test it on a different device, and verify the magic sync works. Same as a key copier or some new locks a locksmith may install.
Humans can do this. You don't need to understand crypto or 2fa, but you can click 'create new password' and let the app save some insanely secure password for a new site. Same with a passkey, assuming you don't save to your builtin device storage that has some horrible, hidden user interface around backing that up for when your phone dies.
And the irony is the old flow just works better! You let the password manager do the autofill, and it takes a second or two, assuming there is an email _and_ a password input. Passkeys can be even faster.
that little bit of work and attention is too much for most people.
I'm as frustrated about this as you are, but there is a large class of people who will not or can not understand and implement the password-manager workflow.
Of the people I know who are not in a tech career, I'd say about 80% have nothing but contempt and ignorant fatalism toward security. The only success I've had is getting one older relative to start writing account credentials down in a little paper notebook and making sure there are numbers and letters in the passwords.
I get the point. However, from my own experience this type of one-time passcode is unfortunately the 2nd well-understood authentication method for non-tech people surrounding me. The 1st is the password, of course.
I don't know the general situation, but, at least in our small town, people would go to the phone service shop just for account setup and recovery, since it's just too complicated. Password managers and passkeys don't make things simpler for them either –– I've never successfully conveyed the idea of a password manager to a non-tech person; the passkey is somehow even harder to explain. From my perspective it's both the mental model and the extra, convoluted UX that's very hard to grasp for them.
Until one day we come up with something intuitive for general audience, passwords and the "worse" one-time code will likely continue to be prominent for their simplicity.
just stick with passwords then
I guess the problem is such people will mostly use passwords that are as weak as they can get away with.
Good luck finding a suite of modern, convenient services that will allow you to do that nowadays. I wish we could opt-in with some sort of I-know-what-I'm-doing-with-passwords-and-take-full-responsibility option.
You vastly underestimate the number of people who should not pick this option but would (because doing otherwise would be admitting their incompetence / ignorance) -- thus handily continuing the problem.
If you have password reset via email, as almost every service using passwords does, there’s no security gain over magic links/codes.
It’s actually worse, since now the email account or the password get you in, vs. just the email account.
> If you have password reset via email, as almost every service using passwords does, there’s no security gain over magic links/codes.
I disagree. The problem with the magic code is that you've trained the user to automatically enter the code without much scrutiny. If one day you're attempting to access malicious.com and you get a google.com code in your email, well you've been trained to take the code and plug it in and if you're not a smarty then you're likely to do so.
In contrast, email password recovery is an exception to the normal user flow.
Password reset also has phishing potential. I do see your point, but if a user doesn’t check domains, I think they can be easily phished through either route.
And even if proper passwords are used, many sites/apps use this pattern for account recovery if the password is forgotten so effectively this is the only security as an attacker has “forgotten” the password and just uses this flow to login.
I've got a little generic login tool that bits I write myself use for login, using this method, but it is not for anything sensitive or otherwise important (I just want to identify the user, myself or a friend, so the correct preferences and other saved information can be applied to the right person, and the information is not easily scraped). I call it ICGAFAS, the “I Couldn't Give A Factor” Auth System, to make it obvious how properly secure it isn't trying to be!
Another issue with email-based “authentication” like this (though one for the site/app admins more than the end user) is the standard set of deliverability issues inherent in modern SMTP mail handling. You end up having to use a 3rd-party relay service to reduce the amount of time you spend fighting blocklists as your source address gets incorrectly flagged as a potential spam source.
> And even if proper passwords are used, many sites/apps use this pattern for account recovery if the password is forgotten so effectively this is the only security as an attacker has “forgotten” the password and just uses this flow to login.
Was about to post just this. This is the flow they use for account recovery so it's the weakest link in the chain anyway.
What's quite annoying is how aggressive most products are in forcing this method over regular email+password / social logins. Let me use my 100-char password!
You are not the target audience, you are not even an outlier, it's probably time to accept this and look for long-term solutions that allow you to interface with the "mainstream".
Many (most?) people I know in the "target audience" want to keep their email+password logins.
The UX of having to switch apps or websites is terrible when I have auto fill available via the Web browser or a password manager.
Such long passwords are silly, they will be effectively truncated by the key length of the underlying cryptography.
Agreed. But since every character gives you around 6 bits (26*2 letters + 10 numbers + some special characters ≈ 64 = 2^6), you'd need 256/6 ≈ 43 characters to exhaust the checked entropy, so up to that level it makes sense.
If you use sentences instead of randomly generated characters, the entropy (in bits/character) is lower, so 100 characters might well make sense.
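The estimate above, written out (assuming a ~64-symbol alphabet and a 256-bit key):

```python
import math

alphabet = 26 + 26 + 10 + 2          # letters, digits, a couple of specials
bits_per_char = math.log2(alphabet)  # log2(64) = 6.0 bits per character
chars_for_256_bits = math.ceil(256 / bits_per_char)
print(bits_per_char, chars_for_256_bits)  # 6.0 43
```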
Which is why sha+bcrypt is always better than just bcrypt
Passwords are (or, rather, SHOULD be) cryptographically hashed rather than encrypted. It's possible to compute a hash over data which is longer than the hash input block size by feeding previous hashes and the next input block back in to progressively build up a hash of the entire data.
bcrypt, one of the more popular password hashing algorithms out there, allows the password to be up to 72 bytes in length. Any bytes beyond that limit are ignored and the password is silently truncated (!!!). It's actually a good method of testing whether a site uses bcrypt or not. If you set a password longer than 72 characters, but can sign in using just the first 72 characters of your password, they're in all likelihood using bcrypt.
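The truncation can be illustrated without installing bcrypt itself; the `toy_bcrypt` function below only mimics the silent 72-byte cut-off (real bcrypt is a salted, cost-parameterized algorithm, not SHA-256), and a SHA-256 pre-digest shows one common way around it:

```python
import hashlib

def toy_bcrypt(password: bytes) -> str:
    # Simulates ONLY bcrypt's silent 72-byte truncation; real bcrypt is a
    # salted, cost-parameterized Blowfish-based hash, not SHA-256.
    return hashlib.sha256(password[:72]).hexdigest()

pw1 = b"x" * 72 + b"these bytes are silently ignored"
pw2 = b"x" * 72 + b"a completely different tail"
print(toy_bcrypt(pw1) == toy_bcrypt(pw2))  # True: identical first 72 bytes

def predigest(password: bytes) -> bytes:
    # Hex-encoding avoids NUL bytes, which some bcrypt bindings mishandle,
    # and the 64-byte result always fits under the 72-byte limit.
    return hashlib.sha256(password).hexdigest().encode()

print(toy_bcrypt(predigest(pw1)) == toy_bcrypt(predigest(pw2)))  # False
```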
Yeah, that's why bcrypt is broken and shouldn't be used today. It had a good run, but nowadays we have better options like scrypt or argon2.
It's not broken. It's just potentially less helpful when it comes to protecting poor guessable passwords. bcrypt isn't the problem, weak password policies/habits are. Like bcrypt, argon2 is just a bandaid, though a tiny bit thicker. It won't save you from absurdly short passwords or silly "correct horse battery staple" advice, and it's no better than bcrypt at protecting proper unguessable passwords.
Also, only developers who have no idea what they're doing will feed plain-text passwords to their hasher. You should be peppering and pre-digesting the passwords, and at that point bcrypt's 72-character input limit doesn't matter.
Bcrypt alone is unfit for purpose. Argon2 does not need its input to be predigested.
It's easy for somebody who knows this to fix bcrypt, but silently truncating the input was an unforced error. The fact that it looks like and was often sold as the right tool for the job but isn't has led to real-world vulnerabilities.
It's a classic example of crypto people not anticipating how things actually get used.
(Otherwise, though, I agree)
Peppering is for protecting self-contained password hashes in case they leak. It's a secondary salt meant to be situated 1) external to the hash, and 2) external to the storage component the hashes reside in (i.e. not in the database you store accounts and hashes in). The method has nothing to do with trying to fix anything with bcrypt. You should be peppering your input even if you use Argon2.
Right, but peppering was not part of my comment. You can't always pepper, and there are different ways to do it. It's (mostly) orthogonal to the matter.
You do not have to do any transformations on the input when using Argon2, while you must transform the input before using bcrypt. This was, again, an unnecessary and dangerous (careless) design choice.
I don't understand your responses here. Clearly you are not familiar with what problem peppering solves, or why it's a recommended practice, no matter what self-contained password hashing you use. bcrypt, scrypt, Argon2; they are all subject to the same recommendation because they all store their salt together with the digest. You can always use a pepper, you should always use a pepper, and there's only one appropriate way to do it.
There are at least as many ways to pepper as there were to salt before salts became integral to the definition of a good KDF. To wit:
And no, you cannot always pepper. To use a pepper effectively, you have to have a secure out-of-band channel to share the pepper. For a lot of webapps, this is as simple as setting some configuration variable. However, for certain kinds of distributed systems, the pepper would have to be shared either in the same way as the salt, or completely publicly, defeating its purpose. Largely these are architectural/design issues too (and in many cases, bcrypt is also the wrong choice, because a KDF is the wrong choice). I already alluded to the Okta bcrypt exploit, though I admit I did not fully dig into the details. The HMAC-SHA256 construction I showed above, and similar techniques, accomplish both transforming the input and peppering the hash. However, the others don't transform the input at all or, in one case, transform it in a way even worse for bcrypt's use.
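The referenced construction didn't survive formatting, so here is a sketch of the general technique being described, assuming a pepper delivered out-of-band (the names and the way the pepper is loaded are illustrative, not the parent's exact code):

```python
import hashlib
import hmac

# Illustrative only: a real pepper comes from config or a secrets manager,
# never from the database that holds the password hashes.
PEPPER = b"example-pepper-loaded-out-of-band"

def prehash(password: str) -> bytes:
    # HMAC-SHA256(pepper, password); hex-encoding avoids NUL bytes, and the
    # fixed 64-byte result also sidesteps bcrypt's 72-byte input limit.
    return hmac.new(PEPPER, password.encode(), hashlib.sha256).hexdigest().encode()

# The result is then fed to bcrypt/scrypt/Argon2 as the "password".
print(len(prehash("correct horse battery staple")))  # 64
```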
Why is the "correct horse battery staple" advice silly?
Using memorable passphrases online is always a bad option because they're easily broken with a dictionary attack, unless you bump the number of words to the point where it becomes hard to remember the phrase. Use long strings of random characters instead, and contain the use of passphrases to unlocking your password manager.
To wit, each word drawn from a 10,000-word dictionary adds about 13 bits of entropy. At 4 words, you have (a little over) 52 bits of entropy, which is roughly equivalent to a 9-character alphanumeric (lower and upper) password. The going recommendation is 14 such characters, which would mean you'd need about 7 words.
The average person will create a passphrase from their personal dictionary of most-used words, amounting to a fraction of that. An attacker will start in the same way. Another problem with passphrases is that you'll have a hard time remembering more than a couple of them, and which phrase goes to what website.
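The word-versus-character comparison above, written out (10,000-word dictionary, 62-symbol alphanumeric alphabet):

```python
import math

bits_per_word = math.log2(10_000)  # ~13.3 bits per dictionary word
bits_per_char = math.log2(62)      # ~5.95 bits per alphanumeric character

print(round(4 * bits_per_word, 1))   # 53.2 bits: a 4-word passphrase
print(round(9 * bits_per_char, 1))   # 53.6 bits: 9 random characters
# Words needed to match the recommended 14 random characters:
print(math.ceil(14 * bits_per_char / bits_per_word))  # 7
```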
Yes, in this case it would be easier to brute-force the key instead of the password, so the additional characters don't really help.
Brute-forcing the underlying key doesn’t help if you’re trying to mount a credential stuffing attack. Brute-forcing the password does.
For years (and way more recently than is appropriate), the financial institution Schwab would silently truncate your password to 8 characters.
If your password was 123lookatme, you could type 123lookaLITERALLYANYTHING and it would succeed.
And there is _NOTHING_ worse than being locked out of an account because, without asking, they reverse the password and second-factor authentication while you're traveling and don't have access to a phone/etc.
Never mind that pretty much all services treat the second factor as more secure than my 20-character random password saved in a local password safe. And those second factors are, let's see: plain text over SMS, plain text over the internet to an email address, etc., etc., etc.
What percentage of people reuse the same password as opposed to use a password manager?
I would say it is very high. In my experience password managers are rarely used by nontechnical people.
I just deleted my GoFundMe account because they kicked me into this cycle today. Somehow I've managed to have an account there and make contributions over the years, but now they wanted my phone number and an MFA code to proceed, and there was no opt-out. I went through it, but then deactivated my account. I need less of this in my life, and GoFundMe is not essential to it.
I'm in the rental market right now, and Zillow not only has a log-in for the app, but to read messages in your inbox, you have to MFA again each time, and the time-out period is about an hour.
We're being annoyed to death.
This is madness.
Ticketmaster did the same. They don't accept Google Voice numbers, yet my only number is Google Voice. The number tied to my SIM is an implementation detail that changes depending on where I am, but it's the only way I can get into that account now. My choices are to not go to events that are ticketed by them, or accept that I'll probably be locked out whenever I change SIMs.
SMS is literally the least secure form of authentication, because numbers expire after mere weeks, and get re-assigned shortly, within months, because of number shortage in many area codes.
Nothing like this could happen with any mainstream mail service like Gmail, where it's officially advertised that the accounts could never be reused.
The worst part about SMS is that not only is there the potential to be locked out permanently, but also you never know whether or not the service would allow login or password reset via SMS, thus, you never know if you're opening yourself to account takeover.
Google turned on 2FA on my account themselves, but the phone number on the account is outdated, so I was permanently locked out.
I read this sentence 4 times and I still can't parse it:
> An attacker can simply send your email address to a legitimate service, and prompt for a 6-digit code. You can't know for sure if the code is supposed to be entered in the right place.
Because the sentence makes no sense, but what the author wanted to say was:
- You are in front of the attacker's site, which looks like a legitimate site where you have an account (you arrived there in any way: WhatsApp link, SMS, email, whatever). The address bar of your browser probably shows something like microsoft.minecraft-softwareupdate.com or similar, but the average user can't tell it's fake. The page asks you to log in (in order to steal your account).
- You enter the email address to login. They enter your email address in the legitimate site where you actually have an account.
- Legitimate site (for example Microsoft) sends you an email with a six digit code, you read the code, it looks legit (it is legit) and you enter it in the attacker site. They can now login with your account.
I read it as just some website that is bad, not necessarily one imitating a good site. For example, some new gaming forum pops up, which is bad, and uses its login flow to get people to send it six-digit codes, which it then uses for whatever sites it sees fit. The people who run the gaming forum are now stealing your Etsy account.
I think one can also understand it as the attacker being the one to enter the email first.
> An attacker can simply send your email address to a legitimate service, and prompt for a 6-digit code. You can't know for sure if the code is supposed to be entered in the right place.
Replace "can simply send your email address" with "can simply input your email address". An attacker inputs your email at login.example.com, which sends a code to your email. The attacker then prompts you for that code (e.g. via a phishing SMS), so you pass them the code that lets them into the account.
I believe (and the article should make it clear) that the article is criticizing specifically the use of the code that user must enter into a box, which invites man-in-the-middle attacks.
The article is not advocating against e-mail-driven URL-based password reset/login, whereby the user doesn't enter any code, but must follow a URL.
The six digit code can be typed into a phony box put up by a malicious web site or application, which has inserted itself between the user and the legitimate site.
The malicious site presents phony UI prompting the user to initiate a coded login. Behind the scenes, the malicious site does that by contacting the genuine site and provoking a coded login. The user goes to their inbox and copies the code into the malicious site's UI. The site then uses it to obtain a session with the genuine site, taking over the user's account.
An SSL-protected URL cannot be intercepted so easily. The user clicks on it and it goes to the domain of the genuine site.
So there are two complaints about this authn scheme that I'm seeing in this thread:
1. It's pretty phishable. I think this is mostly solved, or at least greatly mitigated, by using a Slack-style magic sign-in link instead of a code that you have the user manually enter into the trusted UI. A phisher would have to get the user to copy-paste the URL from the email into their UI, instead of clicking the link or copy-pasting it into the address bar. That's an unusual enough action that most users probably won't default to doing it (and you could improve this by not showing the URL in HTML email, instead having users click an image, but that might cause usability problems). It's not quite fully unphishable, but it seems about as close as you can get without completely hiding the authentication secret from the user, which is what passkeys, Yubikeys, etc., do. I'd love to see the future where passkeys are the only way to log into most websites, but I think websites are reluctant to go there as long as the ecosystem is relatively immature.
2. It's not true multi-factor authn because an attacker only needs to compromise one thing (your inbox) to hijack your account. I have two objections to this argument:
a. This is already the case as long as you have an email-based password reset flow, which most consumer-facing websites are unwilling to go without. (Password reset emails are a bit less vulnerable to phishing because a user who didn't request one is more likely to be suspicious when one shows up in their inbox, but see point 1.)
b. True multi-factor authn for ordinary consumer websites never really worked, and especially doesn't work in the age of password managers. As long as those exist, anyone who possesses and is logged into the user's phone or laptop (the usual prerequisites for a possession-based second factor) can also get their password. Most websites should not be in the business of trying to use knowledge-based authentication on their users, because they can't know whether the secret really came from the user's memory or was instead stored somewhere, the latter case is far more common in practice, and only in the former case is it truly knowledge-based. Websites should instead authenticate only the device, and delegate to the device's own authentication system (which includes physical possession and likely also a lock secret and/or biometric) the task of authenticating the user in a secure multi-factor way.
Two problems I’ve encountered with magic links:
* Mobile email clients that open links in an embedded browser. This confuses some people. From their perspective they never stay logged in, because every time they open their regular browser they don’t have a session (because it was created in the embedded browser) and have to request a login link again.
* Some people don’t have their email on the device they want to log in on.
Sending codes solves both of these problems (but then has the issues described in the article, and both share all the problems with sending emails)
Magic links can be used to authorize the session rather than the device. That is, starting the sign in process on your laptop and clicking the link on your phone would authorize your laptop's sign in request rather than your phone's browser. It requires a bit more effort but it's not especially difficult to do.
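A minimal sketch of what "authorize the session rather than the device" could look like (all names and the in-memory store are illustrative, not any particular implementation):

```python
import secrets

# In-memory store of pending sign-in requests; a real service would use
# a database with expiry. link_token -> session id awaiting authorization.
pending = {}
authorized = set()

def start_login(session_id: str) -> str:
    """Called when the laptop begins sign-in: the service emails the user
    a link containing link_token, but remembers which session asked."""
    link_token = secrets.token_urlsafe(32)
    pending[link_token] = session_id
    return link_token

def click_link(link_token: str) -> bool:
    """Called from whatever device opens the email. It authorizes the
    *originating* session, not the clicking device's browser."""
    session_id = pending.pop(link_token, None)  # single-use token
    if session_id is None:
        return False
    authorized.add(session_id)
    return True
```

Note that the link handler needs more than this naive version (for instance, a confirmation page that shows details of the originating session and submits a POST), because otherwise anyone who tricks the user into clicking authorizes the attacker's pending session.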
Wouldn't that be incredibly insecure? Attacker would just need to initiate a login, and if the user happens to click the link they've just given the attacker access to their account..
The reason why magic links don't usually work across devices/browsers is to be sure that _whoever clicks the link_ is given access, and not necessarily whoever initiated the login process (who could be a bad actor)
> Wouldn't that be incredibly insecure?
If done naively with a simple magic link, yes.
> and if the user happens to click the link they've just given the attacker access to their account
Worse: if the user's UA “clicks the link” by making the GET request to generate a preview. The user might not even have opened the message for this to happen.
> Wouldn't that be incredibly insecure?
It can be mitigated somewhat by making the magic link go to a page that invites the user to click something that sends a post request. In theory the preview loophole might come into play here if the UA tries to be really clever, but I doubt this will happen.
Another option is to give the user the choice to transfer the session to the originating UA, or stay where they are, if you detect that a different UA opened the magic link, but you'd have to be careful wording this so as not to confuse many users.
> Worse: if the user's UA “clicks the link” by making the GET request to generate a preview.
You mean something like a popover preview that appears when the user hovers over a link?
Isn’t there a way to configure the `a` element so the UA knows that it shouldn’t do that?
> You mean something like a popover preview
That, or a background process that visits links to check for malware before the user even sees the message.
> Isn’t there a way to configure the `a` element so the UA knows that it shouldn’t do that?
If sending just HTML you could include rel="nofollow" in the a tag to discourage such things, but there is no way of enforcing that, and no way of including it at all if you are sending plain-text messages. This has been a problem for single-use links of various types as well. So yes, but not reliably, so effectively no.
And we get back to the original point of the article (sort of). Opening a magic link should authenticate the user who opened the magic link, not the attacker who made the application send the magic link.
This is what makes securing this stuff so hard when you don't have proper review. What seems like a good idea from one perspective opens up another gaping hole somewhere else.
Off the cuff suggestions for improving UX in secure flows just make things worse.
> think this is mostly solved, or at least greatly mitigated, by using a Slack-style magic sign-in link instead of a code that you have the user manually enter into the trusted UI.
Magic links are better than codes, but they don't work well for cross-device sign-in. What Nintendo does is pretty great: if I buy something on my Switch, it shows me a QR code I take a picture of with my phone, and I complete the purchase there.
I agree it is "mostly solved" in that there are good examples out there, but this is a long way from the solution being "best practices" that users can expect the website/company to take security seriously.
> a. This is already the case as long as you have an email-based password reset flow
I hard-disagree:
If I get an email saying "Hi you are resetting your password, follow these directions to continue" and I didn't try to reset my password I will ignore that email.
If I have to type in random numbers from my email every few days, I'm probably going to do that on autopilot.
These things are not the same.
> anyone who possesses and is logged into the user's phone or laptop (the usual prerequisites for a possession-based second factor) can also get their password.
I do not know what kind of mickey-mouse devices you are using, but this is just not true on any device in my house.
Accessing the saved-password list on my computer or phone requires an authentication step, even if I am logged-in.
I also require second-authentication for mail and a most other things (like banking, facebook, chats, etc) since I do like to let my friends just "use my phone" to change something on spotify or look up an address in maps.
> Most websites should not be in the business of trying to use knowledge-based authentication on their users, because they can't know whether the secret really came from the user's memory or was instead stored somewhere
They can't know that anyway, and pretending they do puts people at risk of sophisticated attackers (who can recover the passkey) and unsophisticated incompetence on behalf of the website (who just send reset links without checking).
> Websites should instead authenticate only the device, and delegate to the device's own authentication system
I disagree: Websites have no hope of authenticating the device and are foolishly naive to try.
> Websites should instead authenticate only the device
except I'm a user, not a device
>>I<< want to be authenticated, not my specific device that I'm going to switch at some point
You want to be authenticated specifically on the device that you're using to access the website. Not some arbitrary other device.
If you enter your username, password, and totp, and the website tells you you've logged in from some device halfway across the planet you've never heard of, you probably have a problem.
offering TOTP MFA auth is important for people who actually keep at least some minimal boundary between their passwords and TOTP seeds.
I don't like any of the methods used today. Passwords are OK for me since I pick strong passphrases and use a different email per site, but the superior option for me is IP/CIDR restrictions. A small handful of sites support it, and some of those don't advertise that they do, because some people think a long DHCP lease is a static IP, and that can cause a customer support ticket. It was a battle, but I have managed to get some financial institutions to enable it for me. Every bank big and small can do this, but tellers and bankers have no idea; only their IT person does. When that fails, I just disable internet access to my account at the financial institution and go talk to a real person face to face. If that isn't an option, I just don't do business with them. Simple as. I do 99.999999% of my internet access from home, but if I depended on mobile I would have a VPN back to my home to use my static IP from a Linux laptop. I do not browse the internet from a cell phone and never will. Not perfect, nothing is.
wow your threat model is very different to me
If what you mean is that you have no online accounts then you are a few steps ahead of me. I will get there eventually but have some things to take care of first. Congrats on disconnecting from the internet though. I assume this site is your last holdout? I am envious if so. This site will also be my last online presence.
Can OP tell us how they implement one-time-code email? Ever heard of the PKCE flow applied to OTP auth, where there is a guarantee that the OTP flow can only be completed using the device/browser on which the user initiated the request?
Consider this scenario: the user initiates login on your site.
1. You generate a code_verifier (random) and a code_challenge = SHA256(code_verifier) and store the code_verifier in the browser session (e.g., local/session storage, secure cookie, etc.).
2. You send the code_challenge to the server along with the email address.
3. Server sends the email with a login code to the user, recording the challenge (associated with the email).
4. User receives the email and enters the code on the same device/session.
Client sends the code + code_verifier to the server. Server verifies: Code is correct. SHA256(code_verifier) == stored code_challenge.
The end result is that the code cannot be used from another device or browser unless that device/browser initiated the flow and has the code_verifier.
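The steps above can be sketched as follows (a toy, single-user version; the variable names follow the comment, everything else is illustrative):

```python
import hashlib
import secrets

# Client side: generate verifier + challenge (PKCE-style). The verifier
# stays in the browser session; only the challenge is sent to the server.
code_verifier = secrets.token_urlsafe(32)
code_challenge = hashlib.sha256(code_verifier.encode()).hexdigest()

# Server side: record the challenge alongside the emailed one-time code.
otp = f"{secrets.randbelow(10**6):06d}"
server_record = {"otp": otp, "challenge": code_challenge}

def verify(submitted_otp: str, submitted_verifier: str) -> bool:
    """Both checks must pass: the emailed code is correct AND the
    submitting client can prove it is the one that started the flow."""
    return (
        secrets.compare_digest(submitted_otp, server_record["otp"])
        and hashlib.sha256(submitted_verifier.encode()).hexdigest()
            == server_record["challenge"]
    )
```

A phisher who relays the victim's code still fails step two, because the code_verifier never left the victim's browser.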
A combination of the above and a login link might help. But ultimately, the attacker is relying on the gullibility of the user; the user would have to not check the URLs.
Assuming the bot knows to send a code_challenge and send the code_verifier together with the verification code
But then again, GOOD can just also ensure that their otp can only be completed from the GOOD domain/origin. That would shore things up at least.
What about all those services that force you to enable 2FA with SMS? How could you possibly know whether or not that opens you up to these SIM swap attacks?
This is a fundamental flaw with any login flow that is not phishing resistant. There is nothing novel about this attack.
An attacker can register a domain like office375.com, clone Microsoft's login page, and relay user input to the real site. This works even with various forms of MFA, because the victim willingly enters both their credentials and second factor into the fake site. Push-based MFA is starting to show IP and location data, but a non-technical user likely won't notice or understand the warning, and a sophisticated attacker will just use a VPN matching the user's location anyway.
Passkeys solve this problem through origin enforcement. Your browser will not let you use a passkey for an origin that the passkey was not created for. If they did, you could relay those challenges as well (still better than user + pass as the challenges are useless after first use).
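A heavily simplified sketch of the origin check a relying party performs on a WebAuthn assertion. The field names follow the spec's clientDataJSON, but signature verification over this data (the part that makes the check trustworthy) is omitted, and the origin value is hypothetical:

```python
import json

EXPECTED_ORIGIN = "https://login.example.com"  # hypothetical relying party

def check_assertion(client_data_json: bytes, expected_challenge: str) -> bool:
    """Simplified shape of the WebAuthn origin check.

    The browser, not the user, fills in the origin field, and the
    authenticator signs over this data, so a phishing domain relaying
    the challenge cannot forge a matching origin.
    """
    data = json.loads(client_data_json)
    return (
        data.get("type") == "webauthn.get"
        and data.get("challenge") == expected_challenge
        and data.get("origin") == EXPECTED_ORIGIN
    )
```

In a real implementation this check happens alongside verifying the authenticator's signature over the client data hash; without the signature, the origin field alone proves nothing.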
Very short, badly written article. It can't even describe phishing correctly... At least label your threat model correctly.
While the premise is correct -- it's easy to complain but the author also provides zero recommendations on what is a better form of MFA.
You misread the short article.
It's about email as single-factor auth, which has become very trendy of late. You just enter your email address, no password, and they email you a code. Access to your email is the only authentication.
Clearly I didn't misread that. It's literally the very first bullet point?
The first bullet point is "Enter an email address or phone number".
That's not MFA. MFA stands for multi-factor authentication. If the authentication only requires a code sent to an email OR phone number, that's just a single factor.
But then, email always was the only authentication. On any site, click "forgot password" and they promptly send you a password reset link. Very few sites have a challenge question.
Could be worse, I still sometimes get my password emailed in plain text by companies when I do that.
The first bullet point mentions phone number.
- Enter an email address or phone number
That's not just email; that's also SMS.
Email OR SMS is still one factor. It's not multiple factors. How are you not getting that? Do you know what MFA means?
Even if it were email OR password, that would still be one factor because of the OR. I do not think they are discussing in good faith.
> It's about email as single factor auth, which has become very trendy of late
I must be in the wrong bubble, I have not encountered any site that does this since the 2000s. It was a minor trend around then IIRC.
Anthropic is the main one. It's pushing a lot of others to do the same. I was literally arguing against this two weeks ago, and the person pushing it said, "Claude does that. It's really slick, no password to remember."
Patreon can do that too, depending on how you sign up.
It’s not slick at all. Passwords and MFA autofill; their emailed codes don’t, so I have to close the browser, go to email, copy the code, delete the email, go back to the browser, and paste the code just to log in.
The entire email login flow is completely braindead. It’s not even secure.
A lot of services just do this de facto, where you only need an email code to reset the password, which is equivalent to single-factor auth with email.
An email link to reset is better; an email link plus another auth factor (usually SMS) is even better.
Only in an abstract threat model sense. In real world phishing its pretty different.
It's super odd if you land on facebook.com-profilesadfg.info/login thinking it's just Facebook, try to log in, and get a "password reset" email instead. Most people would be confused, as they didn't want to reset their password.
Having it for every login means that, unless you check the website URL, everything else looks 100% legit.
It’s not just slick; it is “secure” from the get-go by thwarting any password-stuffing attempts (if your email is not pwned already).
I believe Slack popularized this back then and still do it.
In India, almost all websites and apps send an OTP to either mobile or email and ask you to enter it to log in. Most of them have even disabled password-based login flows. Really grinds my gears.
Spotify just started doing this. I even have a password saved in my password manager but instead of asking me they just sent an email with a code.
Booking does it and it frustrates me to no end.
Trip.com does this.
The first factor is access to your email. The second factor is…?
The article is not about multiple factor authentication.
It’s about single factor, password logins, using a one-time-token
The article is not about MFA. It is about using email as a single factor.
That's simply a lie, or you didn't read the article.
The very first bullet point states: Enter an email address or phone number.
That insinuates email OR SMS.
It doesn't mention email only.
The following is copied from wikipedia.
The authentication factors of a multi-factor authentication scheme may include:
1. Something the user has: any physical object in the possession of the user, such as a security token (USB stick), a bank card, a key, a phone that can be reached at a certain number, etc.
2. Something the user knows: certain knowledge only known to the user, such as a password, PIN, PUK, etc.
3. Something the user is: some physical characteristic of the user (biometrics), such as a fingerprint, eye iris, voice, typing speed, pattern in key press intervals, etc.
Email and phone are both in category one, comprising only one unique factor.
What is the minimum number of things you need access to in order to log in?
If you have access to the phone, you can log in. OR if you have access to the email account, you can log in.
You don't need to know the user's password, you only need access to one of these inboxes and nothing else. One-factor authentication, but worse, because there are multiple attack surfaces.
Half factor authentication, then, since either one will work.
It's still a single factor.
Agree with you
My main frustration with this sort of system, beyond the security risks is the terrible UX of a system like Spotify.
I appreciate that most people log in and stay logged in, but I frequently switch Spotify accounts and use passwords to log in. Instead of letting me choose between a password and a 6-digit code, every time I try to change accounts a needless 6-digit code is generated and sent to a shared inbox: a huge waste of resources and storage, in addition to being a security concern, as flagged throughout this thread.
Spotify has terrible UX in general. You can't even copy the fucking track title.
Try multi-account containers so no need to log out? (Or Island on Android?)
> This is terrible for account security
It's "terrible" because the author can describe exactly one phishing vector?..
Have you ever tried resetting a password before? Passwords have a similar phishing vector, plus many other problems that magic links and one-time login codes don't have.
If six-digit login codes are less secure than passwords, the reasons why are certainly not found in this article.
There are some shortcomings to using email codes, but I fail to see how this is worse than passwords, when the exact same kind of attack would work for passwords. If anything it would be worse with passwords, which can be stored, reused later, or sometimes changed directly on the service.
My password manager will never fill my password into the wrong site. I would need to do so manually, which sets off so many alarm bells in my head.
With email, pasting the number into a random website is the expected flow, and there is basically no protection (some phones have basic protections for SMS auth, but even that only works if you are signing in on the same device).
OP is just ragebait for nerds. How many articles have been published in the last 20 years about the issues with passwords? Now we're saying that the small chance some user ends up on bad-minecraft.com with a login form is actually worse than using "L3tmein!" as a password everywhere? Please find something more worthy to spend your time thinking about.
I came to the comments to say this. Nevor and I are one.
One additional annoyance with this type of login:
With a username and password field, these are automatically correctly filled by Safari.
With sites that only offer an email field, I have to manually fill it.
(Note that I tend to use different emails for different sites; if you only ever use one email this might not be a problem).
I thought this was going to be about Passkeys. Maybe if the FIDO Alliance can stop being obstinate and allow real backups, I'd be all in on them.
But if you could back up a passkey, wouldn't the key just be a password?
(I do agree with you about backups being essential, but my conclusion was "the idea is fundamentally flawed," rather than "it's one tweak away from greatness.")
No, because unlike a password you never provide the private key for a passkey to the site you’re logging into, which is how many password breaches occur.
This is the irreducible problem. It's the Emperor's New Clothes™. So either the secrets get generated and stored in tamper-protected hardware, or they are stored somewhere else that can be made portable. For the latter, then they ought to be serializable into some standard form.
Passkeys solve phishing by being domain bound and never exposing the private key. It's a huge improvement!
What do you mean by real backups? What's stopping you from backing up your keys? It's up to the passkey "provider" to allow passkey backup/sync.
Well, having your passkey provider blocked for doing that might stop you.
https://github.com/keepassxreboot/keepassxc/issues/10407
Of course, they might just block you for not being on a whitelist of approved providers anyway.
The objection there was not to providing passkey backup. It was to doing it in plain text.
Since you keep posting this link, I'll just keep saying it: there is no credential manager attestation in the consumer synced passkey ecosystem. Period. There is no way to build an allowlist, by design. The consumer synced passkey ecosystem is open.
That's such a strawman argument. Read the link you pasted again
Strawman? We are talking about this link, right, the one that says:
> I've already heard rumblings that KeepassXC is likely to be featured in a few industry presentations that highlight security challenges with passkey providers, the need for functional and security certification, and the lack of identifying passkey provider attestation (which would allow RPs to block you, and something that I have previously rallied against but rethinking as of late because of these situations).
> The reason we're having a conversation about providers being blocked is because the FIDO Alliance is considering extending attestation to cover roaming keys.
> From this conversation it sounds like the FIDO Alliance is leaning towards making it possible for services to block roaming keys from specific providers.
Yes, read the quotes you took again. Attestation is not currently a thing. There is legitimate discussion about how to handle shoddy password managers. If LastPass shits the bed again, it would be great to have a mechanism for others to block it, or at least to know that, due to a major incident, keys from that tool are weak. Debian OpenSSL keys were vulnerable for a long time, and being able to know about, alert on, or block private keys generated on a Debian machine is reasonable if not desirable. If KeePassXC is insecure or promotes insecure practices, whose fault is that, and what do you suggest we do?
The entire issue is about doing the bare minimum: not exporting it in plaintext. Nothing is stopping you from decrypting it and posting it on your Twitter if you so wish. Just don't have the password manager encourage bad practices. How is that unreasonable?
> If LastPass shits the bed again, it would be great to have a mechanism for others to block it
And by the way, if and when something like that does happen, what's the user supposed to do if they suddenly find their passkey provider has been blocked?
Yes, we've seen you repeat that we have to read it again. I reread this morning before the post, but really just found more things supporting my position.
> To be very honest here, you risk having KeePassXC blocked by relying parties (similar to #10406).
From the linked https://github.com/keepassxreboot/keepassxc/issues/10406:
> | no signed stamp of approval from on high
> see above. Once certification and attestation goes live, there will be a minimum functional and security bar for providers.
> | RPs blocking arbitrary AAGUIDs doesn't seem like a thing that's going to happen or that can even make a difference
> It does happen and will continue to happen because of non-spec-compliant implementations and authenticators with poor security posture.
Is your argument that despite being doused with gasoline I can't complain because I'm not currently on fire?
So you’re just not going to respond to any of the points explaining your strawman. Yeah, you should read it again, and read my explanation again, and let me know if you have any questions or responses. Don’t douse yourself in gasoline and you won’t have to worry about being on fire.
(You have every right to douse yourself in gasoline. No one is taking that away from you. Just stay away from everyone else.)
Maybe you can let us know what definition of "strawman" you are using in this context?
KeePassXC is at risk of being blocked for making it easy to back up the passkeys. I don't see where that's been disproven or explained, other than saying "well attestation isn't enforced yet" -- that is, the metaphorical gasoline (provider AAGUIDs) hasn't yet been ignited (blocking of provider AAGUIDs)
> The entire issue is about doing the minimum possible of not exporting it in plaintext. Nothing is stopping you from decrypting it and posting it on your Twitter if you so wish. Just don't have the password manager encourage bad practices.
I don't disagree with this in principle, but it does warn you and realistically, what is the threat model here? It seems more like a defense-in-depth measure rather than a 5-alarm fire worthy of threatening to blacklist a provider. Maybe focus energy instead on this? (3+ year workstream now I guess?)
>> Sounds like the minimal export standard for portability needs to be defined as well.
> This is all part of the 2+ year workstream.
--
The more I get exposed to this topic, the less I'm convinced it was designed around people in the real world, e.g. https://news.ycombinator.com/item?id=44821601. Sure is convenient that it's so so easy to get locked into a particular provider, though!
Not really, they have no incentive to provide such a thing nor is it mandatory for them to do so.
Are you saying password managers don't have an incentive to provide a feature users want? That describes literally their entire featureset.
What incentive do they have to make it easy to migrate to a different provider?
People want the ability to back their passkeys up. The fact that this can be used for migrating is an unfortunate side-effect, as far as the provider is concerned.
Same. Sacrificing security by selling superficial convenience and limited security advantages for crucial inconveniences.
Perhaps there ought to be a well-known, "ACME"-style password-changing API, such that a password manager could hypothetically change every password for every service it contains in an automated fashion.
Even with backups, the attestation issue makes them awful.
I'm not familiar with this issue and a quick search didn't turn up anything obvious. Would you mind elaborating?
They are referring to the ability of a site you are logging into forcing you to use a client from a specific list or having a list of clients to deny.
It's copied over from FIDO hardware keys where each device type needed to be identifiable so higher tier ones could be required or unsecured development versions could be blocked.
This is what I was referring to, and we already have seen this happen in the wild with PayPal at one point (possibly still) blocking passkeys from e.g. Firefox. For now the argument against this seems to be that "Apple zeroes this out so service providers can't do it without risking issues for their many users who use Apple to store their keys", but clearly this is so precarious of a situation it may as well not be a thing. You can't depend on one trillion-dollar company not changing their minds on that tomorrow.
Even with the current flimsy "What about iPhones?" defense against attestation, is there anything stopping say Microsoft from just forcing you to install a different app to use Microsoft services?
It's like DRM: it will annoy legitimate users and keep them from obviously legit usecases, and be circumvented by people who are motivated.
Thanks, yes, I see this just came up in a similar comment thread (https://news.ycombinator.com/item?id=44823752)
What a crock, to not bother coming up with a way to make passkeys portable and then threaten to ban providers who actually thought about how humans might use them in the real world
Specifically they are referring to synced passkeys (passkeys generated by services like Google password manager/1Password/Apple and are linked to that account).
Because these passkeys are stored in the Cloud and synced to your providers account (i.e. Google/Apple/1Password etc), they can't support attestation. It leads to a scenario where Relying Parties (the apps consuming the passkey), cannot react to incidents in passkey providers.
For example: If tomorrow, 1Password was breached and all their cloud-stored passkeys were leaked, RP's have no way to identify and revoke the passkeys associated with that leak. Additionally, if a passkey provider turns out to be malicious, there is no way to block them.
Why?
Passkeys on cryptocurrency wallets such as the Trezor and Ledger are tied to the device's seed phrase and can be backed up.
Did you try that? I can't find any confirmation on if it's actually working for classical keys but it's for sure not supported for resident keys on Ledger.
https://www.ledger.com/blog/strengthen-the-security-of-your-...
https://github.com/LedgerHQ/app-security-key/issues/6
https://github.com/LedgerHQ/app-security-key/issues/7
Is Trezor implementation more mature?
Damn I misremembered, I've only tried it on Trezor, not Ledger. What I do know is I've successfully used my Trezor for signing in to Google and several other sites through one or both of Chrome or Firefox.
Erm... Passkeys _are_ backupable/syncable WebAuthn keys. You can get the clear-text Passkey private keys by just looking into your storage (Keychain on iOS).
What's missing is a standardized format for the export.
Wholeheartedly agree, however The Changelog Podcast helped shift my perspective on this. It's really about not having the responsibility of storing and maintaining passwords.
You should never store passwords anyways. You store hashes. I don’t see the issue. If you don’t trust yourself to keep a hash, maybe don’t store user information at all.
That's still not perfect though!
Most leaked passwords online come initially from leaked hashes, which bad actors use tools like hashcat to crack.
If your user has a password like "password123" and the hash gets out, then the password is effectively out too, since people can easily lookup the hash of previous cracked passwords like "password123".
No. This is why salts[0] are used.
[0] https://en.wikipedia.org/wiki/Salt_(cryptography)
This is how it should be done. But it still doesn't protect users fully, because an attacker can try to brute-force the passwords they're interested in. It requires much more effort though.
And compute-intensive hash functions. Computers these days are powerful enough to hashcat each individual pwd+salt if a fast hashing function is used.
Salting already fixed this decades ago, and most modern password libraries will automatically generate and verify against a hash like <method>$salt$saltedhash if you use them instead of rolling your own.
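A minimal sketch of that self-describing `<method>$salt$saltedhash` format using only the standard library (the iteration count is illustrative; in production, prefer a maintained library such as argon2/bcrypt bindings rather than rolling your own):

```python
import hashlib
import hmac
import secrets

def hash_password(password: str, iterations: int = 600_000) -> str:
    """Return a self-describing 'method$iterations$salt$hash' string."""
    salt = secrets.token_hex(16)  # fresh random salt per password
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt.encode(), iterations)
    return f"pbkdf2_sha256${iterations}${salt}${dk.hex()}"

def verify_password(password: str, stored: str) -> bool:
    """Recompute using the parameters embedded in the stored string."""
    method, iters, salt, expected = stored.split("$")
    if method != "pbkdf2_sha256":
        return False
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt.encode(), int(iters))
    return hmac.compare_digest(dk.hex(), expected)  # constant-time compare
```

Because the salt is random per user, two users with "password123" get different hashes, which is what defeats the precomputed-lookup attack described above.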
So if they don't want to store your passwords because they do not want the responsibility of keeping it safe, should you trust your credit card and other personal information with them?
I feel like this is going to bite me in the ass 15 years from now but like bcrypt is really really hard to screw up
Latacora, 2018: In order of preference, use scrypt, argon2, bcrypt, and then if nothing else is available PBKDF2.
So even 7 years ago bcrypt was only the 3rd recommended option.
You'll find that opinion is still divided among these three options. And bcrypt is harder to mess up. It has fewer parameters (it doesn't fall apart as easily) and salting is built in, whereas it's not for scrypt and argon2. If, knowing nothing else about the competency of the programmer, I had to choose between an application using scrypt, argon2, and bcrypt, I'd pick bcrypt any day.
They follow with:
"But, seriously: you can throw a dart at a wall to pick one of these... In practice, it mostly matters that you use a real secure password hash, and not as much which one you use."
Kinda weird when they secure shop sites where you enter your payment information into. IKEA does this, for example.
So? They don’t want to store my password, so instead they immensely weaken the security of my account?
This is not good for the user.
This also doesn't address my biggest concern, google controls the chrome password manager and probably controls your email address. At a bureaucratic sneeze you can be denied access to your entire life.
We need something equivalent to "Americans will use anything but the metric system" but "Sites will force users to use anything but a password manager."
Related: SMS 2FA is not secure https://news.ycombinator.com/item?id=27447206
Some sites make this into a problem accessing their site by having an unsubscribe flow that doesn't account for this login method. Unsubscribing from marketing means I can no longer log in.
Wow, that's some joined-up thinking.
Where are the big tech founders who are proponents of this method on their platforms? I would like to hear their justification.
Public Shaming: Ally Bank made this mandatory. I'm leaving them as soon as I can find another bank with 3.x% on savings, bill pay that automatically retrieves bill amounts, and support for _at least_ TOTP.
Suggestions welcome if anyone has them.
I use Schwab (bank and brokerage). Their money market funds yield 4.x% with just a few more clicks to move into and out of the MMF. The Bill Pay retrieves the amount on my BofA credit card just fine. And it supports TOTP via Symantec VIP Access (it doesn't seem like you can use a standard TOTP app).
This is why I think people ending up locked into vendor implementations of passkeys will be a thing. We had a totally open standard, TOTP, and there were still (somewhat successful) efforts to make it non-standard, like the Symantec VIP Access you mentioned. How many authenticator apps do I have to install? I was hoping for one!
FWIW when I was researching this for my own accounts I believe I saw in passing that someone had figured out a way to extricate the TOTP secret from VIP Access to use in a standard TOTP app. I didn't look into it much though since none of my current accounts require it and it just seemed something to avoid.
Thanks! that's actually much closer to what I'm looking for.
pip install python-vipaccess looks like it'll provision a new token, from which you can then use the secret in a regular TOTP app.
Wonder if that could be used to sidestep the proprietary app
https://news.ycombinator.com/item?id=27692315
looks like you can!
Services love it because it hands off the risk and responsibility to… Google/Gmail in most personal cases. This was why the pattern was adopted so quickly.
Lots of services realized that users would use the reset password form for login.
To state the obvious, there the code is part of the URL they have to visit.
I went hunting in the NIST documentation to see if this is even an approved authentication method and, technically, I can't find anything wrong with it (if we consider it to be a "Look-up Secret Authenticator", see NIST 800-63b section 5.1.2.1). They're technically abusing what is supposed to be a collection of pre-distributed authenticators (think recovery codes), but there's nothing prohibiting these look-up codes from being sent on-demand and there only being a single selection.
As for the method itself.. IMO they're certainly phishable, but I don't think they're any more phishable than a typical username/password prompt.
> An attacker can simply send your email address to a legitimate service, and prompt for a 6-digit code. You can't know for sure if the code is supposed to be entered in the right place.
An attacker can also simply present a login prompt, say a Google-looking one, and a user will just enter their credentials.
This is why phishing-resistant authentication is the one true path forward.
I need to make a version of https://neal.fun/password-game/ with increasingly ridiculous second factors.
Wholeheartedly agree. It's not more secure if you only use the second factor of two-factor auth.
Codes that are provided on demand by a service will always be far less secure than proper TOTP. Because in the case of proper TOTP, no secret ever leaves the service after initial configuration, but in the case of discount 2FA through email or especially SMS, a fresh secret has to be delivered to me each time, where it can easily be intercepted by all manner of attacks.
Absolutely, a shocking amount of email traffic is still unencrypted. Any hop along the SMTP way could be compromised.
Anthropic/Claude does this and it is a shame. They have the ability to implement a proper authenticator flow and yet don't.
[insert joke about vibe coding a shit auth service]
hilariously that is just what I tried to do the other day and oh boy are we safe from AI taking over just yet.
You're gonna have to take my passwords from my cold dead hands.
I recently set up passkey-only sign ins for a webapp I'm writing using Authentik [0](Python OIDC provider, with quite a nice docker-compose run-up, took only minutes to stand up.) It was surprisingly easy to configure everything so that passkeys are the only thing ever used.
If anyone would be interested I could write it up? I was surprised what a nice user flow it is and how easy it was to achieve.
[0] https://goauthentik.io/
so many of these Authentication providers have a hockey stick pricing scheme, where the first few users are near free and when you grow you are going to get mugged and kicked in the groin.
it's open source, if you self-host it's free
Still seems far, far more likely that the average user will have their account stolen via password theft/reuse than the more complicated scheme the author is describing. Links instead of codes also fix the issue.
Links are not trustworthy and can leak to compromise.
*lead, oops!
As the author points out, email OTP can be phished if the user is tricked into sending their OTP to an attacker.
Email magic links are more phishing resistant - the email contains a link that authenticates the device where the link was clicked. To replicate the same attack, the user would have to send the entire link to the attacker, which is hopefully harder to socially engineer.
But magic links are annoying when I want to sign in from my desktop computer that doesn't have access to my email. In that case OTP is more convenient, since I can just read the code from my phone.
I think passkeys are a great option. I use a password manager for passkeys, but most people will use platform-provided keys that are stuck in one ecosystem (Google/Apple/MS). You probably need a way to register a new device, which brings you back again to email OTP or magic link (even if only as an account recovery option).
It's also a lot less convenient. Because I need to have access to my email, wait for the code, copy it etc. I hate companies that dump this extra work on me, like booking.com and all the AI companies.
Passkeys would be so much easier, convenient and so much more secure. I really don't understand why they go for this.
They aren’t ideal but are they actually worse than passwords? I’d bet that on net, more compromises happen with previously-leaked passwords
I haven't actually seen these being used as passwords like TFA states; they're usually a form of 2FA.
If they actually are passwords, yes, my password manager is a better UX than having to fetch my phone, open SMS, wait for the SMS, like good grief it's all so slow.
(In the 2FA form, I'd prefer TOTP over SMS-OTP, but the difference is less there.)
The largest site where I've seen this flow (username + email) is hotels.com.
Most people don't use a password manager. They just have shit passwords.
IF everyone switches to PASSKEYS, hackers are going to focus exclusively on them and they WILL find a way; then everyone will be FUBAR. Worse, BIG TECH is not going to take responsibility. THAT and AI? PROBLEMATIC. Nothing is foolproof. However... Proper password protocol must be taught. Once upon a time, people had password hints. Why not teach a combination of that and proper passwords?
There is a way to fix this. Don't just require a 6 digit code. Require a 6 digit code and a long random string (an expiring token), which is only present on the page the user visited, or in the email they were sent.
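A rough sketch of that binding, assuming an in-memory store and hypothetical names (a real service would use a shared server-side store and rate limiting):

```python
import secrets
import time

PENDING = {}  # token -> (code, expiry); illustrative in-memory store

def start_login(ttl: int = 300) -> tuple[str, str]:
    """Issue a long page token and a short email code, bound together."""
    token = secrets.token_urlsafe(32)         # stays on the page / in the email link
    code = f"{secrets.randbelow(10**6):06d}"  # the 6-digit code shown to the user
    PENDING[token] = (code, time.time() + ttl)
    return token, code

def verify(token: str, code: str) -> bool:
    """Both pieces must match, within the expiry window; single-use."""
    entry = PENDING.pop(token, None)
    if entry is None:
        return False
    expected, expiry = entry
    return time.time() < expiry and secrets.compare_digest(expected, code)
```

A phisher who relays only the 6-digit code cannot finish the flow, because the long token was issued to the page that started it, which their fake site never received.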
The actual weak link here is not the procedure itself. It’s the fact that your email services will happily accept phishing mails into your inbox.
I’m pretty sure we can prevent this by issuing some kind of proof of agreement (with sender and recipient info) thru email services. Joining a service becomes submitting a proof to the service, and any attempt to contact the user from the service side must be sealed with the proof. Mix in some signing and HMAC and this should be doable. I mean, IF we really want to extend the email standard.
The email is coming from the legitimate service, it's a man-in-the-middle attack.
How does this scheme stop you from putting a legitimate code from a legitimate sender into an illegitimate website?
I've conscientiously ignored every attempt by every service in the past decade to bully me into giving up a phone number for 2FA. Authenicator apps and passkeys, fine. But never over SMS.
SMS isn't just insecure, it's a pain when you're out of the country.
Indeed, such a bad design where instead of a simple and quick one-shortcut login from a phishing-resistant password manager, users have to waste time switching back and forth between different apps/devices
Who thought having no passwords would be a good idea? Microsoft? Why am I not surprised.
https://support.microsoft.com/en-us/account-billing/how-to-g...
I think I have said the following till I go blue in the face:
1. Mobile phone numbers are not secure. SIM jacking is a thing, and a 6 digit code is not impossible to guess (it's only 1 in a million).
2. Sending codes/links via email is problematic as described by the article.
3. Inconsistent "best practices" confuse users, and frustrate them.
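On point 1, a back-of-the-envelope helper makes the "1 in a million" odds concrete once attempt limits enter the picture (the function is mine, purely illustrative):

```python
def guess_probability(digits: int = 6, attempts: int = 1) -> float:
    """Chance an attacker guesses a uniformly random numeric code
    within the allowed number of attempts (distinct guesses)."""
    space = 10 ** digits
    return min(attempts / space, 1.0)
```

With a lax limit of 10 tries per code, that's 1 in 100,000 per account; spray that across a large user base and some accounts will fall, which is why strict rate limiting and lockouts matter as much as code length.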
Relatedly with respect to passkeys, it seems we have the following tradeoff (simplified):
1. authentication via password: accounts stolen by criminals and then inaccessible to the user.
2. authentication via passkey: accounts lost by users because passkeys have friction, to say the least, when devices are lost/stolen/transferred.
It seems that big providers would much rather scenario 2.
There's a saying, isn't there? Cryptography fundamentally reduces to a key management problem
Yeah probably because stolen accounts are more of a hassle for them than lost accounts.
This is one of those problems for which a simple solution must exist, but which never gets solved.
Just like sending large files over the internet.
This article really opened my eyes to how phishing can exploit email verification codes. The shift towards using keys instead of passwords sounds promising for security. It's scary how easily users can be tricked, but your point about preferring security over convenience is spot on. Great read!
I'm always baffled that magic links is the only way to sign in to Anthropic Console or Claude. No passwords, no passkeys, nothing.
They do provide Google sign-in, but I've had issues with Google sign-in during traveling too often to consider it a legitimate option.
Some days when I'm tired of receiving a new authentication code, I half-jokingly think: we must surely be reaching a point where at least half of all SMS messages sent are authentication codes (with a small payload, for now).
This has been driving me nuts. Ever since implementation. This method has been the biggest disappointment of login procedures and quickness. I dont want to go through, three to five steps just to login and in the meantime I forget what I came to the service for in the first place. There's gotta be a better method for security and streamlining sign in's. I should not have to do the work of security for the service and every other week I hear about the same service being hacked and millions of accounts are now affected.
Sometimes it just feels like they are trying to block you from using script/code to use their services.
> This is terrible for account security
It's also terrible UX. Emails don't always arrive right away. (Especially if you've enabled greylisting.)
All the talk about passkeys boils down to:
A passphrase is basically like a password in the sense that I can lose it, but it's not like a password in the sense that I can actually memorise it. (Or rather, all of them)
I prefer my passwordstore workflow.
I remember two passwords, the rest is kept safe for me and unlocked when I need them.
It's not perfect, but it's by far the least worse solution of them all.
I don't get it. How is mistakenly giving a one-time login to a malicious actor worse than giving it a permanent login (aka your password) ?
Also super annoying if you haven’t set up email on a device (like my iPad), now I have back and forth with my phone instead of going through my password manager.
Also the 6-digit codes tend to appear on the lock screen of my phone, which means anybody can see them. I can turn that off, I know, but many people will not.
To be fair, it can be even funnier:
The "Simply" music apps use a four digit code, sent by mail. And that never changes.
Easy account sharing! It is a feature!
I'd much rather have passkeys than the endless "email me a code" or "text me a code" crap we deal with today.
I can't be the only person here who is familiar with the word "attestation" in everyday life but had no idea what it means in the context of login security.
So I asked my friend Miss Chatty [1] about it. Hopefully this will help anyone who is as confused as I was.
https://chatgpt.com/share/68947f35-0a10-8012-9ae9-adadc3df8b...
[1] Siri and Alexa get to have cool names, so why can't ChatGPT?
I guess this is one reason why magic links are slightly better than being emailed a code.
We need a security standard that disallows using email as ID.
I think the registration pattern should be - user enters email to register. email is sent to that email with a link to verify. user clicks link. user gets email with username and password to login in to the profile created for them.
This reveals the user's password (even if temporary) in plain text in an unencrypted email. Basically the last thing you want.
A better workflow is to send the user a link where they can set their initial password themselves.
Same thing in blue, which additionally opens the door for someone else to change their password and lock them out, never mind the quality of passwords users set initially, etc. Looking at you, mum, registering a new account every time you forget the last password.
then maybe for auth: login link > password > one time code ? hard to be 100% confident
Now that all the big companies have changed to SMS code authentication, we've realized this is a bad pattern. Please drop all that and go back to the "click a link in this mail" pattern; it looks more secure.
Author seems confused. 2FA isn't about securing your account, it's about harvesting your phone number.
I agree thank you
I'm technical and I didn't understand this article.
From my experience, OTAC is typically associated with sites that want to prevent automation and scraping. By that I mean I have seen it being used at EACH log in to create extra friction. Interesting that this is being used as part of "regular" security.
How's that different from the trivial phishing, when a malicious site looks like the target site and asks for the password?
From a design perspective, the reason this flaw exists is because the code can be typed on any machine and sent through any intermediary. More secure schemes are possible without much effort. Magic links have some pros/cons but overall I think they are better.
Here is what I do when the user logs in and email verification is needed:
1. Generate a UUID on the server.
2. Save the UUID on the client using the Set-Cookie response header.
- The cookie value is symmetrically encrypted and authenticated via HMAC or AES-GCM by the server before it is set, such that it can only be decrypted by the server, and only if the cookie value has not been tampered with. This is very easy to do in hapi.js and any other framework that has encrypted cookies.
- Use all the tricks to safeguard the cookie from being intercepted and cloned. For example, use a name with the __Host- prefix and these attributes: Secure; HttpOnly; SameSite=Lax;
3. The server sends an email to the user with a link like https://site.com/verify?code=1234, where 1234 is the UUID.
4. The user clicks the link and has their email verified.
- When the link is clicked, the browser sends the Cookie header automatically, the server decrypts it and compares it to the UUID in the URL and if that succeeds, the email has been verified. Again, this is very easy in hapi.js, as it handles the decryption step.
- Including the UUID in the magic link signals that there is _supposed_ to be a cookie present, so if the cookie is missing or it doesn't match, we can alert the user. It also proves knowledge of the email, since only the email has access to the UUID in unencrypted form.
5. The server unsets the cookie, by responding with a Set-Cookie header that marks it as expired.
6. The server begins a session and logs the user in, either on the page that was opened via the link or the original page that triggered the verification flow (whichever you think is less likely to be an attacker, probably the former).
Note that there are some tradeoffs here. The upside is that the user doesn't need to remember or type anything, making it harder to make mistakes or be taken advantage of. The downside is that the friction of having to use the same device for email and login may be a problem in some situations. Also, some email software may open a different browser when the link is clicked, which will cause the cookie to be missing. I handle this by detecting the missing cookie and showing a message suggesting the user may need to copy-paste the link to their already open browser, which will work even if they open a new tab to do it (except for incognito mode, where some browsers use a per-tab cookie jar).
Lastly, no cookie is 100% safe from being stolen and cloned. For example, a social engineering attack could involve tricking the user into sharing their link and Set-Cookie header. But we've made it much more difficult. They need two pieces of information, each of which generally can't be intercepted, or used even if intercepted, by intermediary sites.
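The flow above could be sketched roughly like this in Python. For brevity this signs the cookie with an HMAC rather than fully encrypting it as the comment describes (hapi.js does authenticated encryption); that preserves the tamper-proofing but leaves the UUID readable client-side. Names and the domain are hypothetical:

```python
import hashlib
import hmac
import secrets
import uuid

SERVER_KEY = secrets.token_bytes(32)  # per-deployment secret

def issue_verification() -> tuple[str, str]:
    """Steps 1-3: mint the UUID, the tamper-proof cookie value, and the magic link."""
    code = str(uuid.uuid4())
    sig = hmac.new(SERVER_KEY, code.encode(), hashlib.sha256).hexdigest()
    # Served as: Set-Cookie: __Host-verify=<cookie_value>; Secure; HttpOnly; SameSite=Lax
    cookie_value = f"{code}.{sig}"
    link = f"https://site.example/verify?code={code}"
    return cookie_value, link

def verify_click(cookie_value: str, url_code: str) -> bool:
    """Step 4: the cookie must be one we issued AND match the UUID in the URL."""
    try:
        code, sig = cookie_value.rsplit(".", 1)
    except ValueError:
        return False  # malformed or missing cookie -> alert the user
    expected = hmac.new(SERVER_KEY, code.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and secrets.compare_digest(code, url_code)
```

On success the server would then expire the cookie and start the session (steps 5-6); the two-piece requirement (browser cookie + emailed link) is what a phisher has to capture both halves of.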
> An attacker can simply send your email address to a legitimate service, and prompt for a 6-digit code. You can't know for sure if the code is supposed to be entered in the right place. Password managers (a usual defense against phishing) can't help you either.
Roughly the same security for password-login with email recovery. The only difference is that this makes the attack surface larger because the user is frequently using email.
The only secure login is through 1. a hardware device and 2. a solution where both the user/service are "married" and can challenge each other during the login process. This way, your certificate of authentication will also check that the site you are connecting to is who it says it is.
I would have agreed to this if it weren’t for the fact that, for various reasons, you occasionally need to copy and paste passwords manually from password managers. This phishing scenario is no worse.
Passwordless is fine.
Let’s be honest all forms of auth suck and have pros and cons.
The real solution is detect weird logins because users cannot be trusted. That’s why we build for them!
MS can also call you, then you only have to press # to log in. Makes it even easier for a spoof website.
I like passkeys on the Apple ecosystem
I've flipped my stance on this. I used to be pretty pro passkey, but after using them for a while what I've observed is:
1. There's very low consistency in implementation, so while I understand the problems passkeys solve, it seems like every vendor has chosen different subproblems of the problem space to actually implement. Does it replace your password? Does it replace MFA? Can I store multiple passkeys? Can I turn off other forms of MFA? Do I still need to provide my email address when I sign in (Github actually says No to this)?
2. The experience of passkeys was supposed to be easier and more secure than passwords for users who struggle to select good passwords, but all I've observed is: Laypeople whose passwords have never been compromised, in 20 years of computing, now deeply struggling to authenticate with services. Syncing passwords or passkeys between devices is something none of these people in my life have figured out. I still know two people in their late 20s who use a text file on their computer and Evernote to manage their passwords. What is their solution for passkeys? They don't know. They're definitely using them though. The average situation I've seen is: "What the heck is this how do I do this I guess I'll just click save passkey on this iOS prompt" and then they can never get back into that service. The QR code experience for authenticating on desktop using mobile barely works on every Windows machine I've seen.
3. There is still extremely low support among password managers for exporting passkeys. No password managers I've interacted with can do it. Instead it's, to my eyes, become another user-hostile business decision; why should we prioritize a feature that enables our users to leave my product? "Oh, FIDO has standardized the import/export system, it's coming." Yeah, we've also standardized IPv6. Standards aren't useful until they're used. "Just create new passkeys instead of exporting": as someone who has recently tried to migrate from 1Password to custom-hosted Bit/Vaultwarden, this is the reason why I gave up. By the way, neither of these products supports exporting passkeys.
It might end up being like USB-C where it's horrible for the first ten years, but slowly things start getting better and the vision becomes clear. But I think if that's the case: We The Industry can't be pulling a Jony Ive Apple 2016 Macbook Pro and telling users "you have to use these things and you have no other option [1]". Apple learned that lesson. I'm also reasonably happy with how Apple has implemented Passkeys (putting aside all the lock-in natural to using Apple products, at least it's expected with them). But no one else learned that lesson from them.
[1] https://www.cnet.com/tech/your-microsoft-passwords-will-vani...
I'm having difficulty understanding what it means for an attacker to "send your email to a legitimate service"...
I think this means:
1. You go to evil.example.com, which uses this flow.
2. It prompts you to enter your email. You do so, and you receive a code.
3. You enter the code at evil.example.com.
4. But actually what the evil backend did was automated a login attempt to, like, Shopify or some other site that also uses this pattern. You entered their code on evil.example.com. Now the evil backend has authenticated to Shopify or whatever as you.
The site is comparing this method to plain username + password though. Doesn't that miss the obvious point that evil.example.com could do the exact same thing with the username + password method, except it's even easier to phish because they just get your username + password directly (when you type them in) and then an attacker can log in as you via a real browser?
evil.example.com can be a legitimate-looking website (e.g. a new tool a person might want to try). If it has a login with email code, it can try to get the code from a different website (e.g. aforementioned Shopify).
For the username + password hack to work, the evil.example.com would have to look like Shopify, which is definitely more suspicious than if it's just a random legitimate-looking website.
I assume it's a phishing scenario, given the note about password managers. Evil site spoofs the login page, and when you attempt to log in to the malicious site, it triggers an attempt from the real site, which will duly pass you a code, which you unwittingly put into the malicious site
TOTP is vulnerable to the same attack, though. If you are fooled into providing the code, it doesn't matter whether it's a fresh one to your email or a fresh one from your authenticator.
They are, which is one major issue with TOTP and most current MFA methods. There is an implicit assumption that you only get the full benefit if you're using a password manager.
1. A password manager shouldn't be vulnerable to putting your password in a phishing site.
2. If your password is leaked, an attacker can't use it without the TOTP.
Someone who doesn't use a password manager won't get the benefits of #1, so they can be phished even with a TOTP. But they will get the benefits of #2 (a leaked password isn't enough)
Passkeys assume/require the use of a password manager (called a "passkey provider")
Passkeys do largely solve this issue. I love to use them whenever I can.
Sure, but you would have needed to input a password first, which autofill wouldn't have put into a spoofed site
Man in the middle attack basically.
Confused deputy is you?
It's a constant small annoyance in my life that "email" can mean either
* Electronic mail (the technology)
* An email message
* An email address
* An email inbox
In this example they mean email address.
It means that you go to foo.com and enter your e-mail to sign up. But foo.com routes that request on to bank.com, hoping you have an account there.
bank.com sends you verification email, which you expect from foo.com as part of the sign-up verification process. For some bat shit crazy reason, you ignore that the email came from bank.com and not foo.com and you type in the secret code from the email into the foo.com to complete the sign up process.
And bam! the foo.com got into your bank account.
A complete nonsense but because it works in 0.000000000000001% of the time for some crazy niche cases in the real world, let's talk about it.
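The relay described above can be sketched as a toy simulation (all names are hypothetical; `Bank` stands in for the legitimate site and the script plays the part of foo.com):

```python
import secrets

class Bank:
    """Legitimate site offering 'email me a one-time code' login."""
    def __init__(self):
        self.pending = {}  # email -> outstanding one-time code

    def start_email_login(self, email: str) -> str:
        code = f"{secrets.randbelow(10**6):06d}"
        self.pending[email] = code
        return code  # in reality, delivered to the user's inbox

    def finish_login(self, email: str, code: str) -> bool:
        return self.pending.pop(email, None) == code

bank = Bank()

# 1. Victim gives their address to foo.com, which relays it to bank.com.
victim_email = "granny@example.com"
emailed_code = bank.start_email_login(victim_email)

# 2. Victim reads the code from the bank's genuine email and types it
#    into foo.com, which replays it -- and is now logged in as the victim.
attacker_logged_in = bank.finish_login(victim_email, emailed_code)
print(attacker_logged_in)  # True: the code proves nothing about who typed it
```

The code authenticates whoever submits it, not whoever received the email, which is exactly the confused-deputy property being complained about.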
The evil site usually says something like "enter the code from our identity partner x" or something, which is a lot more believable when it's a service like Microsoft that does provide services like that.
That is not how OAuth works.
That's the point: this isn't OAuth. It's just a way to phish the code.
If it is not OAuth, where does Microsoft come from then?
It's correct and to the point. What are you missing?
The author couldn't even be bothered to write about the supposed examples of these practices being wrong. The whole thing lacks detail and actual arguments, instead we get "please stop" like it's some sort of a reddit or twitter shitpost.
Look at this - https://news.ycombinator.com/item?id=44822267 - is this what this site is supposed to be now? Writing the article in the place of the author because the author couldn't be bothered to even form their own argument correctly? What the fuck?
The fact that this has been upvoted so high and allowed to stay on the front page is also a clear signal to others that this low-effort garbage is welcome here, which will only encourage others to post similarly worthless blogposts, lowering the overall quality of this site.
There are multiple comments in this very thread that are longer than this "article". My own comment is longer!
Is this what this site is supposed to be now? People ranting, complaining, and swearing about how a post submission is not what they think should be on the site?
The post spawned an interesting conversation; that's worth something in itself. Go put replies like this on reddit where they belong.
Interesting conversations can also happen under articles that have actual substance, there's no need to tolerate such short blogposts just because these might spawn an interesting discussion.
Funny that you mention reddit because this is the exact same type of spam that pollutes /r/programming.
The idea of needing to provide extremely personal information that’s somehow tailored for me just to use a service is so incredibly dystopian to me. I’d much rather use a password.
Sure, it being a 6-digit code that leaves room for social engineering can be an issue.
It's similar to getting a "was this your login?" yes/no prompt in an authenticator app, but a bit less easy to social-engineer, and in turn also susceptible to brute-force attacks (similar to how TOTP is).
Though on the other hand:
- some things need so little security that it's fine (like the settings page for an email newsletter, or anything else with email-only unlock)
- if someone has your email, they can do a password reset anyway
- if you replace the emailed code with a login link, you get some cross-device hurdles but fix some of the social-engineering vectors (i.e. it's like a password reset on every login)
- you can still combine it with 2FA, which, with a link instead of a PIN, is basically the password-reset flow => should be reasonably secure
=> Either way, that login flow was designed for very low-security use cases where you also wouldn't bother with 2FA because losing the account doesn't matter. IMHO, don't use it for anything else :smh:
Did you mean to post this comment at https://news.ycombinator.com/item?id=44819917 ?
yes, that is embarrassing
We moved the comment for you.
I think you misplaced this comment and it belongs here: https://news.ycombinator.com/item?id=44819917