> While OpenAuth tries to be mostly stateless, it does need to store a minimal amount of data (refresh tokens, password hashes, etc). However this has been reduced to a simple KV store with various implementations for zero overhead systems like Cloudflare KV and DynamoDB. You should never need to directly access any data that is stored in there.
Written like someone who's never actually maintained an identity provider used in B2B contract work. You will inevitably come into contact with people who cannot make things work on their side and are doing things so unexpectedly that logging is insufficient to track down the error. Sooner or later you will need to look at the data actually in storage to figure out what your customers who are threatening to cancel their contract are doing wrong.
I've been there. Many times.
KV Stores aren't magical... and you do need to store this data somewhere.
So what's different between this (or any of these new-age Auth services) and something more traditional? If anything, these new-age services make it easier to access your data if you need to, since you often control the backing database, unlike auth0, etc.
Both DynamoDB and Cloudflare KV are queryable.
I guess I don't understand the negativity in your comment. If anything, your complaint sounds like an issue you and your team cooked up on your own.
I don’t think it’s the architecture or technology the commenter is reacting to, it’s this line: “You should never need to directly access any data that is stored in there.”
Statements like that are a huge red flag that the designers of the product are not particularly experienced with operating this type of system at meaningful scale.
Eh, the technology stack they discuss is directly accessible, though.
I read this as an advertisement, meaning if everything is working well you don't need to manage the database. Which is probably how it works 99% of the time, in fairness.
The circumstances are immaterial; if the creators of a system are blasé enough to imagine you’ll neither need nor want to manage or query the underlying data storage, then they’re telegraphing naiveté and inexperience. Capacity, throughput, latency, consistency concerns exist in any nontrivial system and the pointy end/bottleneck of these is very often at the level of persistence. Auth services can add privacy and integrity to that pile. And so on. Consequently, glossing over it with a handwave is a bright red flag.
To me this statement read very differently. I read it as saying that the amount of state is small and portable enough that I wouldn’t have to worry about scalability issues or be married to a particular database product. I think the original complaint about it is overly critical and nitpicking.
yes this was the intent
99% of the time is a rather high rate of failure: 1% of a year is still over 3 days, and 1% of a million is still a lot of incidents.
Sure but you are allowed to put the 99% front and center in marketing.
A good chunk of the other 1% are square-peg-in-round-hole situations.
It is good to support all the various edge cases, but it is also fine to focus on the happy path.
> since you often control the backing database unlike auth0, etc.
Auth0 has an option where you can use your own database for identities
Starting at $240 a month...
all of the data in there will be exposed via an API and also directly queryable since it's in your infra
but the idea here is openauth does not handle things like user storage - it just gives you callbacks to store the data in whatever database you're using
precisely because of what you're talking about - eventually you need access to it
Can we not call any authentication scheme/protocol/service starting with "Open" and even "O" anymore? We already have OAuth, OATH, OpenID, OpenIDConnect, and Okta; it's getting out of hand.
"Auth" is also super overloaded. OP is an authentication or AuthN tool which is not the same nor does it encompass authorization or AuthZ. I'm partial to using the terms "identity" and "permissions" instead.
The example in the video even has the name `authorizer` in the default export, but really it's an `authenticator` or `identifyer` or `isThisUserWhoTheySayTheyArer` – not a `shouldTheyHaveAccessizer`.
I'm in total agreement with you but I guess the AuthN/AuthZ train left the station ages ago and no one outside of the business of selling these tools actually care. Oh well.. :o)
yeah this is a naming mistake i realized but thankfully in beta so can rework it
it used to be called authenticator! mixed it up unintentionally
Don't forget "Auth0". I've definitely talked to devs who were confused about the difference between Auth0 and OAuth.
Which, by the way, is a testament to how great the Auth0 brand is.
Cool project!
OAuth-based auth providers are nice, but they can have a weakness. When you have just one app, OAuth can be overkill: protocol is complex, and users suffer jarring redirects¹.
This is not surprising, because OAuth / OIDC is fundamentally designed for (at least) three parties that don't fully trust each other: user, account provider and an app². But in a single app there are only two parties: user and app itself. Auth and app can fully trust each other, protocol can be simpler, and redirects can be avoided.
I'm curious what OpenAUTH authors think about it.
¹ Except for Resource Owner Password Credentials (ROPC) grant type, but it's no longer recommended: https://datatracker.ietf.org/doc/html/draft-ietf-oauth-secur...
² In addition, OAuth is mostly designed for and by account providers, and follows their interests more than interests of app developers.
It's fair to say that with OAuth the resource owner can choose to display a consent screen or not. For example, when consent is granted already, it can be skipped if the resource owner does not need it. Likewise, Google Workspace and other enterprise services that use OAuth can configure in advance which apps are trusted and thus skip permission grants.
Not to say the concern about redirects isn't legitimate, but there are other ways of handling this. Even redirects aren't necessary if OAuth is implemented in a browser-less or embedded browser fashion, e.g. SFAuthenticationSession for one non-standard example. I haven't looked this up in a while, but I believe the OAuth protocol has been extended more and more to contexts beyond the browser - e.g. code flow or new app-based flows and even QR auth flows for TV or sharing prompts.
(Note I am not commenting on OpenAUTH, just OAuth in general. It's complex, yes, but not as bad as it might seem at first glance. It's just not implemented in a standard way across every provider. Something like PassKeys might one day replace it.)
> Even redirects aren't necessary if OAuth is implemented in a browser-less or embedded browser fashion, e.g. SFAuthenticationSession
Can you please expand on that or give me some hints on what to look at? I have never heard of this before and I work with OAuth2 a lot.
When I look for SFAuthenticationSession it seems to be specific to Safari and also deprecated.
I always share this article because people overimplement OAuth2 for everything, it’s not a hammer: https://www.ory.sh/oauth2-openid-connect-do-you-need-use-cas...
For browserless, I was referring to a 2019 article that I could have sworn was newer than that, on the need for OAuth 2.1 that also covers how they added OAuth for Native Apps (Code Flow) and basically a QR code version for TVs: https://aaronparecki.com/2019/12/12/21/its-time-for-oauth-2-...
As for SFAuthenticationSession, again my info might be outdated, but the basic idea is that there are often native APIs that can load OAuth requests in a way that doesn’t require you to relogin. Honestly most of those use cases have been deprecated by PassKeys at an operating system level. There’s (almost) no need for a special browser with cookies to your favourite authenticated services if you have PassKeys to make logging in more painless.
Thanks for sharing!
I agree that passkeys would solve all that, but they have their own set of problems (mainly being bound to a device) and they are still very far from being universally adopted.
I’m looking forward to OAuth2.1 - at the moment it is still in draft stage, so it will take a couple more years until it’s done and providers start implementing.
My prediction is that passwords will be around for a long time, at least the next 10 years.
PassKeys are definitely the future; they aren't just device-specific, they can be synced as well. https://www.corbado.com/blog/nist-passkeys talks about this, though I'll admit I haven't read anything on the subject yet. But I can say that most implementations of PassKeys seem to cloud sync, including 1Password, Apple, Google, Edge, etc.
I should also add that PassKeys that are tied to devices are like FIDO2 security keys, you should be able to add more than one to your account so that you can login with a backup if your primary FIDO2 token is unavailable.
Likewise, SSO should ideally be implemented such that you can link more than one social network - and a standard email address or backup method - in addition to the primary method you might use to login with. It has always bugged me that Auth0 makes it much harder than it should be to link multiple methods of login to an account, by default.
The biggest issue I've seen organisations facing with PassKeys is that neither iOS nor Android requires biometrics to unlock one - this seems like a massive drawback.
Most apps wanting extra authentication implement biometrics which fall back to an app-specific knowledge based credential like a PIN or password. As far as I can tell, PassKeys on those devices fall back to the device PIN which in the case of family PCs/iPads/tablets is known to the whole household.
I've seen several organisations give up on them for this reason.
The article by Ory's Aeneas Rekkas perfectly describes the problems with OAuth / OIDC. The only thing it misses is a suggestion for an alternative protocol for first-party auth. It does suggest that it's preferable to use simpler systems like Ory Kratos, but OAuth / OIDC is a set of protocols, not an implementation. Is there an effort to specify a simple auth protocol for when third-party auth is not needed?
It can vary by implementation need. Can you send a time-limited secret as a login link to someone's email as a replacement to entering or managing passwords? Can you use PassKeys? Or a simple username and password? (Password storage and management is left as an exercise to the reader.)
Part of the question is - why present a login? Do you need an identity? Do you need to authorize an action? How long should it last?
Generally, today, PassKeys are the "simple" authentication mechanism, if you don't need a source of third-party identity or can validate an email address yourself. (Though once you implement email validation, it is arguable that email validation is a perfectly simple and valid form of authentication, it just takes a bit more effort on the part of the user to login, particularly if they can't easily access email on the device they are trying to login on, though even then you can offer a short code they could enter instead.)
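For illustration, here's a minimal sketch of the email short-code idea in TypeScript. The in-memory `codes` map and the `sendEmail` helper are hypothetical stand-ins; a real implementation would persist codes server-side, rate-limit attempts, and compare in constant time:

```typescript
import { randomInt } from "node:crypto";

// Hypothetical stand-ins for a persistent store and a mail sender.
const codes = new Map<string, { code: string; expires: number }>();
declare function sendEmail(to: string, body: string): Promise<void>;

// Issue a short-lived, single-use login code and mail it to the user.
async function startEmailLogin(email: string): Promise<void> {
  const code = randomInt(0, 1_000_000).toString().padStart(6, "0");
  codes.set(email, { code, expires: Date.now() + 10 * 60 * 1000 }); // 10 minutes
  await sendEmail(email, `Your login code is ${code}`);
}

// Verify the code; deleting it on first use makes it single-use.
function finishEmailLogin(email: string, attempt: string): boolean {
  const entry = codes.get(email);
  if (!entry || entry.expires < Date.now()) return false;
  codes.delete(email);
  return entry.code === attempt;
}
```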
Frankly, the conclusion about "how to log in" that I draw today is that you will inevitably end up having to support multiple forms of login: in apps, in browsers, and by email. You will need more than one approach as a convenience to the end user, depending on the device they are signing in to and their context (how necessary is it that they sign in manually vs. using a magic link, secret, QR code, or just clicking a link in their email).
I should also note that I haven't discussed much about security standards here in detail. Probably because I'm trying to highlight that login is primarily a UX concern, and security is intertwined but can also be considered an implementation detail. The most secure system is probably hard to access, so UX can sometimes be a tradeoff between security and ease-of-access to a system. It's up to your implementation how secure you might need to be.
For some setups, you can use a web-based VPN or an authenticating proxy in front of your app and just trust the header that comes along. Or you could put your app behind Tailscale or another VPN that requires authentication and never log the user in. It all comes down to the app's requirements and the context of the user/device accessing it.
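As a sketch of that trusted-header pattern, assuming a Hono app (the framework this thread's project is built on) deployed so it is only reachable through the authenticating proxy; the header name is made up for illustration, and the proxy must overwrite it on every request:

```typescript
import { Hono } from "hono";

type Env = { Variables: { user: string } };
const app = new Hono<Env>();

// Trust the identity header set by the proxy in front of us.
// This is only safe if clients cannot reach the app directly.
app.use("*", async (c, next) => {
  const user = c.req.header("x-authenticated-user"); // illustrative name
  if (!user) return c.text("Unauthorized", 401);
  c.set("user", user);
  await next();
});

app.get("/me", (c) => c.text(`hello ${c.get("user")}`));
```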
It's probably going to be vendor-specific or you will implement your own auth. At ZITADEL we decided to offer all the standards like OIDC and SAML, and offer a session API for more flexible auth scenarios. You will also be able to mix.
My personal gripe with OAuth is that the simple case can be implemented with like 2 redirects and 2 curls, but docs are often incomprehensibly complicated by dozens of layers of abstractions, services, extensions, enterprise account management and such.
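For the record, here's roughly what that simple case looks like, sketched in TypeScript with fetch standing in for the curls; every URL and credential below is a placeholder:

```typescript
// Redirect 1: send the browser to the provider's authorize endpoint.
const authorizeUrl =
  "https://provider.example/authorize" +
  "?response_type=code&client_id=CLIENT_ID" +
  "&redirect_uri=https://app.example/callback&state=RANDOM_STATE";

// Redirect 2: the provider sends the browser back to
// https://app.example/callback?code=...&state=...

// Curl 1: exchange the code for tokens.
async function exchangeCode(code: string) {
  const res = await fetch("https://provider.example/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code,
      redirect_uri: "https://app.example/callback",
      client_id: "CLIENT_ID",
      client_secret: "CLIENT_SECRET",
    }),
  });
  return res.json(); // { access_token, refresh_token?, expires_in, ... }
}

// Curl 2: use the access token against the userinfo endpoint.
async function userInfo(accessToken: string) {
  const res = await fetch("https://provider.example/userinfo", {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  return res.json();
}
```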
Good for them for trying! I've been in the auth space for a few years and am surprised to see a stateless AWS Lambda doing the exchange. (At least I haven't seen any before.) So it is nice to see some serverless innovation.
Thoughts from a quick scan:
- They support PKCE (yay!)
- They suggest storing access tokens in localstorage (boo!)
- They support JWKS (yay!)
- JWT algorithm is not configurable: good!
- JWT Algorithm is RS512: uggh.
Huge access tokens, slow validation AND bad security at the same time, boy we must be lucky.
- Encrypted JWT when saving sensitive information in cookie? Good start...
- ... Using Asymmetric encryption? Oh no...
- RSA-OAEP: At least it's not PKCS#1.5 amirite?
- Same RSA key is used for encryption and signature(?) Not great.
- Stateful Access Tokens: well...
I'm not sure how I feel about using stateful access tokens here at all. Since there is already a KV dependency, there are some advantages to storing stateful access tokens in the KV, most importantly you can easily revoke the tokens directly by deleting them. Revoking stateless tokens, on the other hand, is quite hard and not something that most web applications would care to implement.
The most common trade-off (and indeed, half of the raison d'être for OAuth 2.0 refresh tokens) is to have a very short-lived (e.g. 5 minutes) stateless access token and a long-lived stateful refresh token (which OpenAUTH already does). Revocation would still come with some delay, but you won't be able to use an outdated token for a long time after the user logs out or changes password. This is an acceptable trade-off for many applications, but I'm not sure if it's right to offer it as the only solution in a software package that wants to be generic: many applications will have compliance requirements that could rule out accepting logged out session tokens for such a period.
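A sketch of that trade-off, assuming the `jose` library for the short-lived stateless access token and a hypothetical KV interface for the stateful refresh token:

```typescript
import { SignJWT } from "jose";
import { randomBytes } from "node:crypto";

// Hypothetical KV with TTL support (Cloudflare KV, DynamoDB, Redis, ...).
declare const kv: {
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
};

// Assumes JWT_SECRET holds 32 random bytes, not a password.
const secret = new TextEncoder().encode(process.env.JWT_SECRET!);

async function issueTokens(userId: string) {
  // Stateless: verifiable anywhere, revocable only by its 5-minute expiry.
  const accessToken = await new SignJWT({ sub: userId })
    .setProtectedHeader({ alg: "HS256" })
    .setIssuedAt()
    .setExpirationTime("5m")
    .sign(secret);

  // Stateful: an opaque handle stored server-side, revocable by deletion.
  const refreshToken = randomBytes(32).toString("base64url");
  await kv.set(`refresh:${refreshToken}`, userId, 30 * 24 * 3600); // 30 days

  return { accessToken, refreshToken };
}
```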
- JWT in any form or manner
The elephant in the room. Since JWT allows you to choose your own algorithm and offers some rather bad options, using JWT can be considered "Rolling your own crypto Lite". You have a lot of choices here, and if you are not an expert you're bound to make some wrong ones: such as the ones I've listed above. If OpenAUTH had used PASETO for its tokens, it wouldn't be running into these issues, since no clearly insecure options are available.
If you do use JWT, for the love of all that is good, never stray away from this path:
1. For symmetric tokens, use the HS* family of algorithms. That's the only available option anyway.
2. When using HS256/384/512, you should use randomly generated secrets from /dev/urandom [1]. The secret should be as long as the hash output (i.e. 32 bytes for HS256, 48 bytes for HS384 and 64 bytes for HS512). In any case, you should NEVER use passwords as the HS256/384/512 algorithm secret.
3. Do not use asymmetric tokens unless there are multiple token-verifying parties which are separate from the token issuer. If the token issuer is the same as the verifier, you should use a symmetric token. If you've got one issuer and one verifier, you should probably still use a symmetric token with a shared secret, since there is no issue of trust. Asymmetric cryptography is always an order of magnitude easier to screw up than symmetric cryptography.
4. If you do use asymmetric cryptography, always use Ed25519. If you are forced to use something else for compatibility, use ES256/384/512. It still has some issues (especially if your random number generator is unreliable), but it's still better than RSA. You really want to use Ed25519 though.
5. If you want to encrypt JWT, don't. JWE is too hard to use correctly, and the only reliably secure option that is usually implemented by libraries (A256GCMKW) is slow and it's not very popular so I'm not sure how much analysis the algorithm (AES Key Wrap) has seen.
6. The best and easiest option for encryption if you really must use JWT (and can't use PASETO): Just take your payload, encrypt it with NaCl/Libsodium (secretbox[2]), base64 encode the result and stuff it inside a JWT claim. This will be faster, easier and more secure than anything JWE can offer. (A sketch of this follows the footnotes below.)
[1] https://www.latacora.com/blog/2018/04/03/cryptographic-right...
[2] https://libsodium.gitbook.io/doc/secret-key_cryptography/sec...
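To make item 6 concrete, here's a hedged sketch using libsodium-wrappers and jose. Keys are generated inline purely for illustration; real code would load both from configuration and reuse them:

```typescript
import sodium from "libsodium-wrappers";
import { SignJWT } from "jose";

// Encrypt the payload with secretbox, then carry it as an ordinary JWT claim.
async function sealedJwt(payload: object): Promise<string> {
  await sodium.ready;
  const encKey = sodium.crypto_secretbox_keygen(); // 32-byte encryption key
  const signKey = sodium.randombytes_buf(32);      // separate random HS256 secret
  const nonce = sodium.randombytes_buf(sodium.crypto_secretbox_NONCEBYTES);
  const box = sodium.crypto_secretbox_easy(
    sodium.from_string(JSON.stringify(payload)),
    nonce,
    encKey
  );

  // The nonce must travel with the ciphertext so the receiver can open the box.
  return new SignJWT({
    data: sodium.to_base64(box),
    nonce: sodium.to_base64(nonce),
  })
    .setProtectedHeader({ alg: "HS256" })
    .setExpirationTime("5m")
    .sign(signKey);
}
```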
thanks for writing this up! i have been looking at switching to PASETO instead of jwt
one thing though - the reason we use asymmetric encryption is to allow other clients to validate tokens without calling a central server
eg if you use AWS API Gateway they specifically have jwt authorization support where you can point them to a jwks url and it will validate requests
i need to look into the algorithm again - another constraint was trying to work across everywhere JS runs and i need to check if a better algorithm can be used that still works everywhere
> thanks for writing this up! i have been looking at switching to PASETO instead of jwt
I'm glad to hear that! PASETO would solve all the cryptography issues I've described above.
> one thing though - the reason we use asymmetric encryption is to allow other clients to validate tokens without calling a central server
There seem to be two different usages of asymmetric JWT in OpenAUTH:
1. Asymmetric RSA signatures for access tokens. These tokens can be validated by any third party server which supports JWT. The tokens are SIGNED, but not ENCRYPTED. If you did encrypt them, then third party servers will not be able to verify the token without having the private key — which is obviously insecure.
This type of token would usually be asymmetric if you want to support multiple audiences ("resource servers" in OAuth 2.0 terms) with the same token. If you have just one audience, I would still make this token symmetric, unless key distribution is a problem. AWS JWT authorizer sucks[1], but you could write your own lambda authorizer (a sketch follows below). Google Cloud (Apigee)[2] and Azure API Management[3] natively support HS256/384/512, so this is mostly an AWS problem.
2. Asymmetric signature AND encryption for cookies. I guess these cookies are used for saving SSO state and PKCE verifier, but I didn't dive deeply into that. This cookie seems to be only read and written by the OpenAUTH server, so there is no reason for using asymmetric encryption, let alone using it with the same RSA keypair for both encryption and signature[4].
Since the cookie is only read by OpenAUTH, you can just use PASETO v4.local for this cookie.
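Since a custom Lambda authorizer came up above, here's a minimal sketch of one, assuming an API Gateway HTTP API with the "simple" authorizer response format enabled and an HS256 secret shared with the token issuer:

```typescript
import { jwtVerify } from "jose";

// Assumes JWT_SECRET holds the same random bytes the issuer signs with.
const secret = new TextEncoder().encode(process.env.JWT_SECRET!);

// API Gateway HTTP API Lambda authorizer, "simple" response format.
export async function handler(event: { headers?: Record<string, string> }) {
  const token = (event.headers?.authorization ?? "").replace(/^Bearer /i, "");
  try {
    const { payload } = await jwtVerify(token, secret, {
      algorithms: ["HS256"], // pin the algorithm; never trust the token header
    });
    return { isAuthorized: true, context: { sub: payload.sub } };
  } catch {
    return { isAuthorized: false };
  }
}
```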
---
[1] I wouldn't trust the security of a product which ONLY allows RSA, when they could have enabled safer protocols with a single line of code.
[2] https://cloud.google.com/apigee/docs/api-platform/security/o...
[3] https://learn.microsoft.com/en-us/azure/api-management/valid...
[4] https://crypto.stackexchange.com/questions/12090/using-the-s...
There is already a closed ticket about JWT usage. Unfortunately the ticket author did not point at any specific weaknesses besides revocation to which the app author answered:
"JWTs aren't innately problematic - they come with the tradeoff of not being able to be revoked. The ideal setup is setting a short expiry time on the access_token (eg 5min) and setting a long expiry on the refresh token (which is not a JWT and can be revoked)."
The fact that this tradeoff is acceptable to some apps is correct. But I disagree that JWT tokens aren't innately problematic. Stateless tokens aren't innately problematic. Even stateless access tokens aren't innately problematic if you're willing to live with the tradeoffs (revocation delay or a centralized/propagated revocation list). But JWT is innately problematic, in the most literal sense possible: the very nature of the JWT specification is to permit the largest amount of cryptographic flexibility, including supporting (and even recommending) insecure cryptographic algorithms that may cause security problems if chosen. In other words: it is innately (= in its very nature) problematic (= has the potential of causing problems). JWT is the poster child of being "innately problematic".
I think the app author confuses "innately problematic" with "impossible to implement securely". JWT is not impossible to implement securely. After all, even SAML with the nefarious XMLdsig is possible to implement securely and we do, in fact, have secure implementations of SAML. The real problem is that JWT is hard to implement securely if you do not have the right expertise. And OpenAUTH is giving us (yet another) clear example where the JWT implementation is less than ideal.
I will be the first to admit that I am not expert enough to know how to hack this kind of cryptography, but I am also not good enough to prove that it is safe to use.
> we do, in fact, have secure implementations of SAML.
Do we?
I thought we only had implementations with no currently known security problems.
> no currently known security problems
To be fair, that is the layman's definition of "secure"
Yes, that was my usage of "secure" here. I obviously didn't mean that we should blindly trust SAML implementations. SAML should be avoided if possible, due to its inherently complicated implementation. The same holds true for JWT. Both standards have better alternatives which are viable for the majority of use cases.
> If you do use asymmetric cryptography, always use Ed25519
What about secp256k1 / ES256K?
btw what's wrong with 512-bit keys?
What's wrong with tokens in local storage?
Less secure than HttpOnly cookies, which are not accessible to JavaScript. LocalStorage also doesn't have automatic expiration.
Tradeoff is all the edge cases of cookies, CSRF etc. It's not a simple "cookies are better"
But when you can use them, cookies are demonstrably better. XSS is the main argument against localstorage. Even this article[0], which pillories cookies, starts off with:
"...if your website is vulnerable to XSS attacks, where a third party can run arbitrary scripts, your users’ tokens can be easily stolen [when stored in localstorage]."
The reasons to avoid cookies:
* APIs might require an authorization header in the browser fetch call.
* APIs might live on a different domain, rendering cookies useless.
CSRF is a danger, that's true, but it can be worked around. My understanding is that XSS has a wider scope and that many modern frameworks come with CSRF protection built in[1], whereas XSS is a risk any time you (or anyone in the future) includes any JS code on your website.
0: https://pilcrowonpaper.com/blog/local-storage-cookies/
1: https://cheatsheetseries.owasp.org/cheatsheets/Cross-Site_Re...
> * APIs might live on a different domain, rendering cookies useless.
That's when you implement a BFF which manages your tokens and shares a session cookie with your frontend while proxying all requests to your APIs. And as said, you "just" have to setup a way for your BFF to share CSRF tokens with your frontend.
Yup, big fan of the BFF. Philippe de Ryck did a presentation on the fundamental insecurity of token storage on the client that he allowed us to share: https://www.youtube.com/watch?v=2nVYLruX76M
If you can't use cookies (which as mentioned above, have limits) and you can't use a solution like DPoP (which binds tokens to clients but is not widely deployed), then use the BFF. This obviously has other non-security related impacts and is still vulnerable to session riding, but the tokens can't be stolen.
> Philippe de Ryck
Almost certain it is one of those presentations which got me on the BFF bandwagon. Really awesome speaker.
CSRF is not as big of an issue as it used to be, and when it is an issue it can be solved more easily and comprehensively than XSS:
1. The default value for the SameSite attribute is now "Lax" in most browsers. This means that unless you explicitly set your authentication cookies to SameSite=None (and why would you?), you are generally not vulnerable to cookie-based CSRF (other forms of CSRF are still possible, but not relevant to the issue of storing tokens in local storage or cookies). A concrete cookie setup is sketched after this list.
2. Most modern SSR and hybrid frameworks have built-in CSRF protection for forms and you have to explicitly disable that protection in order to be vulnerable to CSRF.
3. APIs which support cookie authentication for SPAs can be deployed on another domain and use CORS headers to prevent CSRF, even with SameSite=None cookies.
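Concretely, a session cookie with the attributes discussed above might be set like this (a minimal sketch; the cookie name and lifetime are arbitrary):

```typescript
// HttpOnly: not readable by scripts. Secure: sent over TLS only.
// SameSite=Lax set explicitly rather than relying on browser defaults.
function sessionCookie(token: string): string {
  return [
    `session=${token}`,
    "HttpOnly",
    "Secure",
    "SameSite=Lax",
    "Path=/",
    `Max-Age=${7 * 24 * 3600}`, // 7 days
  ].join("; ");
}

// Usage with a fetch-style handler:
// new Response("ok", { headers: { "Set-Cookie": sessionCookie(token) } });
```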
On the other hand, there are no mechanisms which offer comprehensive protection from XSS. It's enough for a single JavaScript dependency that you use to have a bug and it's game over.
For this reason, OAuth 2.0 for Browser-Based Applications (draft)[1] strongly recommends using a HttpOnly cookie to store the access token:
"This architecture (using a BFF with HttpOnly cookies) is strongly recommended for business applications, sensitive applications, and applications that handle personal data."
With regards to storing access tokens and refresh tokens on local storage without any protection it says:
"To summarize, the architecture of a browser-based OAuth client application is straightforward, but results in a significant increase in the attack surface of the application. The attacker is not only able to hijack the client, but also to extract a full-featured set of tokens from the browser-based application.This architecture is not recommended for business applications, sensitive applications, and applications that handle personal data."
And this is what it has to say about storing the refresh token in a cookie, while keeping the access token accessible to JavaScript:
"When considering a token-mediating backend architecture (= storing only access token in local storage), it is strongly recommended to evaluate if adopting a full BFF (storing all tokens in a cookie) as discussed in Section 6.1 is a viable alternative. Only when the use cases or system requirements would prevent the use of a proxying BFF should the token-mediating backend be considered over a full BFF."
In short, the official OAuth WG stance is very clear:
1. HttpOnly cookies ARE better in terms of security.
2. Storing Refresh Tokens in local storage is only recommended for low-security use cases (no personal data, no enterprise compliance requirements).
3. Storing short-lived Access Tokens in local storage should only be considered if there are technical complexities that prevent you from using only cookies.
[1] https://datatracker.ietf.org/doc/html/draft-ietf-oauth-brows...
enable same-site??
if you're doing things not from same-site, I'd posit you're doing something I want blocked anyways.
Malicious access to these tokens is malicious access to the service.
https://www.keycloak.org/ is pretty great too, if you need a little more.
What do you mean a little more? More like several truckloads more. :) Keycloak is great, but it's a beast.
It's a typical speech pattern here in NZ: a little more when you mean a lot. Sorry, I forgot to translate for international audiences. :)
Keycloak is a lot less than some other OAuth servers though.
What's it missing?
Happy to see the effort! Fresh blood in the authn space is always welcomed.
Without rehashing the other good points commenters have made already, I’ll just say that every project starts out immature. What makes a project great is how willing the maintainers will be to grow along with the project and incorporate feedback. I’m excited to see future evolutions of this one.
2024 and we still are not sure how to implement auth. Webdev is fantastic.
Is this like a Passport.js alternative?
https://www.passportjs.org/
Does it only support username/password + OAuth? I didn't see much information on if it supports SAML. I'm interested in how it compares to things like https://github.com/zitadel/zitadel and https://github.com/ory/kratos
It looks like it’s strictly for OAuth 2.0 flows. No SAML, no ldap, no Kerberos, so it’s just a basic key exchange for those who can’t be bothered. Auth is hard and consumes too much sprint cycles, as is, so anything is welcome in this space. I personally will stick to keycloak.
The people who require SAML, LDAP and Kerberos are often catering towards a specific userbase (ie. internal business customers).
The needs for Auth & Auth are different for public-facing apps/services. It's not entirely surprising that many newer Auth solutions don't even attempt to implement SAML et al.
With all of the recent steep price hikes in the Auth SaaS space, it seems it's becoming increasingly important to actually own your user account data. By own, I mean have access to the database and be capable of migrating it somewhere else (even at a large inconvenience) if necessary.
KeyCloak seems awesome for this - but I am liking the "explosion" of new Auth providers that seem to be popping up everywhere these days.
Disclosure: I work for FusionAuth.
You should check out FusionAuth if you are looking at KeyCloak. We play in a similar space (self-hostable, support for SAML, OIDC, OAuth2). I'd say KeyCloak has wider coverage for some of the more esoteric standards and is open source, while we have a more modern API, dev-friendly docs, and great (paid) support.
FusionAuth is not open source, but you can self-host it for free and own your data[0]. Or let us run it for you. In the latter case, you still own your data--get it all from our cloud if you want to migrate.
I'm proud that the team wrote an offboarding doc[1]. It's your darn customer data, and every provider should support out-migration.
0: https://fusionauth.io/download
1: https://fusionauth.io/docs/lifecycle/migrate-users/offboard
Maybe I’m not your target audience but Yikes! Your pricing was unexpectedly high.
Also, it's not clear what the premium features are, or why MFA is a premium feature only available at top tiers.
Hiya, thanks for the response.
Our pricing is kinda complicated as discussed before[0]. We're working on simplifying things.
Here's a list of features[1] which hopefully are clearer about what you get on what plans.
Where things get complex is that we sell both features/support and hosting, and you can buy both, either or neither from us. Our hosting isn't multi-tenant SaaS, but rather dedicated infrastructure providing network and database level isolation. That means what we offer is different than say a Stytch that offers a multi-tenant logical isolation.
Most folks that are price conscious run it themselves on EC2 or Render or elsewhere[2].
0: https://news.ycombinator.com/item?id=41269197
1: https://fusionauth.io/feature-list
2: Here's render instructions: https://fusionauth.io/blog/fusionauth-on-render
To be fair, the pricing there is not out of line with other hosted SaaS auth services. The segmentation is also not out of line either.
However, the paywall (for all of these auth services) ends up being quite steep for the couple features that matter for a non-hobby app, such as custom domain name and MFA (totp or hooking up to an external SMS service). Unfortunately it makes these features expensive when you are starting out (paying ~$40 a month for only a handful of users, sort of thing...).
It is nice to see more and more of these services allow you to take out your data and migrate though - including the self-hosted options. Being vendor-locked for your user account data is a really big deal in my opinion. It often means having zero good options if the vendor decides to rake you over the coals one day.
Hiya, thanks for the feedback.
TOTP based MFA is included in the free, community plan.
As I mentioned elsewhere, for folks who are price conscious, self-hosting is the better option.
But I get it! The story I tell internally all the time is that when I was working at a startup, our entire hosting bill was on the order of $200/month (on Heroku; it was a while ago). There's no way we would have paid $350 just for an identity solution. But we would probably have chosen to host FusionAuth community on Heroku for much, much less and operated it ourselves.
Anyway, thanks for your feedback.
Most b2b products are going to need SAML auth. Any reasonably sized tech business will want to onboard their employees into the software through SSO and the easiest way to do that is usually SAML if they are using something like Okta or JumpCloud.
Along with that, if they have compliance requirements like SOC2 then they really want the whole flow including offboarding any employees that have left the company.
You are describing enterprise, not normal b2b. Majority of businesses out there buying SaaS/PaaS products are not big enterprise with SSO needs nor compliance requirements. The SMB market is huge.
Enterprise types of users are their own beast.
we will add SAML adapters as well
but the flow between your apps and openauth will always be oauth
How does this compare to supertokens https://supertokens.com/, which supports fastify, express, hono, bun, koa, nuxt, react, vue, and angular with email/password + social login + OTP-based login + single sign-on, all wrapped in a nice self-hostable package?
OpenAuthJs is literally a hono app.
Trying to understand where this fits into the current ecosystem. Looks like it's sort of like Passport, but it runs in a separate server and apps use OAuth2 to communicate with it? The example video looks like it's only doing authentication. Does it do any authorization? Do apps need to use OpenAuth libraries directly, or can they get by with basic cookies/redirects/etc?
I'm guessing this is for service providers and not identity providers. Just a suggestion, but that could be more clear in the description.
https://youtu.be/mKKx8uXw5ak