
For years, OAuth (OAuth 2.0, actually) has been the go-to protocol for delegated access on the web. It allowed applications to act on a user's behalf, whether fetching emails, accessing profile information, or posting on social media, without requiring the user’s password. It was a noble effort to improve security, and for a time, it seemed like the right direction.
But today, I can no longer endorse OAuth as a foundation for long-term security.
Not because it failed in its intent, but because its architecture has become a patchwork of compromises, stacked atop an inherently brittle foundation. OAuth is now a protocol that depends on trust it cannot verify, tokens it cannot protect, and flows it cannot fully secure.
A Great Protocol for Bygone Times
Don’t get me wrong!
OAuth was a great solution for its time. It addressed a real and pressing need, eliminating the dangerous practice of password sharing between apps and services, by introducing a standardized way to delegate access. It brought much-needed structure to web authorization, enabling a generation of applications to interoperate securely (or at least more securely than before).
But like many transitional technologies, OAuth now shows its age. Its patchwork evolution has become increasingly difficult to manage, and its foundational reliance on bearer tokens and centralized flows no longer meets the demands of today’s cryptographically capable, decentralized, and adversarially aware ecosystem. It’s time to acknowledge the role OAuth played in getting us here, and to let it go as we build what comes next.
Security by Indirection
OAuth relies on opaque bearer tokens, blobs of data that, once issued, can be used by *anyone* who possesses them. There is no cryptographic binding between the token and the holder. This means that if a token is intercepted, leaked, or misused, the server cannot distinguish between a legitimate client and an attacker. It's security by possession, not proof.
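The problem is easiest to see from the resource server's side. A minimal sketch (all names here are hypothetical; real servers also validate signatures or introspect tokens, but the possession property is the same):

```python
# Sketch of bearer-token checking as a resource server sees it.
# Possession of the token string is the only credential checked.

VALID_TOKENS = {"tok_8f3a"}  # tokens this server has issued (hypothetical)

def authorize(request_headers: dict) -> bool:
    """Grant access to whoever presents a known token.

    Nothing here proves the caller is the client the token was
    issued to; possession alone is sufficient.
    """
    auth = request_headers.get("Authorization", "")
    return auth.removeprefix("Bearer ") in VALID_TOKENS

# The legitimate client and an attacker replaying a leaked token
# send byte-identical requests, so the server cannot tell them apart:
assert authorize({"Authorization": "Bearer tok_8f3a"})  # legitimate client
assert authorize({"Authorization": "Bearer tok_8f3a"})  # attacker, same token
assert not authorize({"Authorization": "Bearer forged"})
```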
PKCE: Better, But Still a Band-Aid
The Proof Key for Code Exchange (PKCE, pronounced "pixie") was introduced to improve OAuth security for public clients like mobile apps. It works by having the client generate a random secret, called a code verifier, and send a hashed version (the code challenge) during the initial authorization request. Later, when redeeming the authorization code, the client must present the original verifier, which the server can check against the challenge.
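In code, the S256 variant of PKCE (RFC 7636) boils down to a simple hash commitment. A minimal sketch:

```python
import base64
import hashlib
import secrets

def make_verifier() -> str:
    # RFC 7636 requires 43-128 characters from the unreserved set;
    # token_urlsafe(32) yields a 43-character verifier.
    return secrets.token_urlsafe(32)

def make_challenge(verifier: str) -> str:
    # S256 method: base64url(SHA-256(verifier)), padding stripped.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# The client sends the challenge with the authorization request...
verifier = make_verifier()
challenge = make_challenge(verifier)

# ...and later presents the verifier when redeeming the code.
# The server recomputes the hash and compares:
assert make_challenge(verifier) == challenge
```

Note what is being protected: only the binding between the authorization request and the code redemption. The access token that comes back is still a plain bearer token.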
Yes, this is an improvement. It closes a known attack where malicious apps could intercept an authorization code and immediately redeem it. But it doesn't fundamentally fix OAuth's problems; it just adds another layer of shared-secret transmission over the same fragile infrastructure.
At the end of the day, PKCE still requires:
A shared secret to pass through the same untrusted path.
Reliance on the server correctly enforcing validation rules.
TLS to protect the integrity of both the challenge and the verifier.
If either the verifier or the code is intercepted (via a rogue proxy, a compromised client, or mishandled logs), PKCE cannot protect the flow. It reduces certain risks, but it does not eliminate OAuth’s reliance on bearer tokens or its exposure to ambient credential theft. PKCE is clever, but it’s still lipstick on a fundamentally flawed model.
Too Many Moving Parts, Too Much Centralization
OAuth depends on trusted third-party authorization servers, often centralized identity providers. This creates a fragile dependency chain:
If the authorization server is compromised, all access is compromised.
If the server enforces weak policies, every connected app inherits that weakness.
If the provider revokes your access or changes their policies, your app may break without warning.
In short: you're only as secure as the least secure party in the OAuth chain.
Delegating Security to TLS Is Not Enough
OAuth assumes that the transport layer, namely TLS (HTTPS), will protect the data in motion. But in practice, this assumption breaks down quickly in real-world deployments. Every time a reverse proxy handles a request, every time an HTTP redirect occurs, or every time an authorization code or token is passed via URL parameters or headers, sensitive data gets dumped out into the clear at the application layer.
That means:
Reverse proxies can log tokens and authorization codes.
Load balancers can see everything in the request stream.
URL-based tokens can be captured in browser history, Referer headers, or logs.
This is not end-to-end encryption. This is hop-by-hop exposure. And OAuth’s reliance on TLS to “secure the channel” means it has no built-in resilience when that channel is made porous by real infrastructure. The moment the token leaves your client, it’s effectively unguarded.
OAuth Is Hard to Implement Securely
Even experienced developers regularly implement OAuth incorrectly. Why? Because:
The specifications are sprawling and ambiguous. (‘It’s a framework, not a protocol!’)
Different providers interpret flows differently.
Most implementations require nuanced knowledge of redirect URIs, scopes, state parameters, refresh tokens, expiry, token introspection, and more.
It’s not just hard to use, it’s hard to audit. You cannot look at an OAuth-based system and easily verify its security guarantees.
We Need Cryptographic Guarantees, Not Tokens
The internet has evolved. We now have protocols that build in cryptographic authentication, capability-based security, and verifiable credentials from the outset. These offer:
Holder-bound tokens that can’t be replayed or stolen.
Self-generated cryptographic keys that live outside the systems they are used with.
End-to-end verifiability without trusting intermediaries or underlying transport mechanisms.
OAuth cannot adapt to this world without abandoning its foundational assumptions. And clinging to bearer tokens in an age of cryptographic identity is not just outdated, it’s dangerous.
It’s About Protecting Payloads, Not Pipes
TLS has done a great job securing the pipes between network junctions. TLS encrypts data in transit, protects against eavesdropping, and authenticates connections between network nodes. For decades, it’s been the backbone of secure communication on the web.
But in today’s distributed and proxy-laden infrastructure, securing the pipe is no longer enough. Data passes through reverse proxies, load balancers, edge nodes, and third-party services, all of which terminate TLS and expose the payload at the application layer. What we need now is payload-level security: protection that travels with the data itself, ensuring confidentiality, integrity, and verifiability end-to-end, regardless of how many intermediaries handle or inspect the stream.
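What "protection that travels with the data" means is that integrity is checked against the payload itself, not the connection it arrived on. A toy sketch with a shared key (hypothetical; real systems would use asymmetric signatures or authenticated encryption, e.g. JOSE/COSE envelopes):

```python
import hashlib
import hmac
import json

# Sender and receiver share this key end to end; intermediaries do not.
shared_key = b"end-to-end-demo-key"  # illustrative only

def protect(payload: dict) -> dict:
    # Canonicalize the payload and attach an integrity tag that
    # travels with it through any number of TLS-terminating hops.
    body = json.dumps(payload, sort_keys=True)
    tag = hmac.new(shared_key, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def check(envelope: dict) -> bool:
    expected = hmac.new(shared_key, envelope["body"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["tag"])

env = protect({"to": "alice", "amount": 10})
assert check(env)                              # intact end to end
env["body"] = env["body"].replace("10", "99")  # a middlebox tampers with it
assert not check(env)                          # tampering is detected
```

A proxy can still read this particular envelope (it signs rather than encrypts), but it can no longer alter it undetected, and encryption can be layered on the same principle.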
It’s time to evolve from trusting the transport to securing the content.
It’s OK for now…
While OAuth may no longer represent the ideal, it remains deeply embedded in our current infrastructure and will likely continue to serve a practical role in the short to medium term. We shouldn't hesitate to use it where it makes sense, especially when alternatives are not yet mature or widely supported. But as we look toward the future, we must be willing to explore and adopt more resilient, cryptographically secure approaches that better align with the needs of a decentralized, privacy-conscious internet.
This evolution doesn’t require abandoning OAuth overnight. It simply means acknowledging its limitations and preparing for what comes next. That’s not a rejection of progress, it’s a continuation of it. And that’s okay.
But in the Future, No More Tolerating Inherent Insecurity
OAuth was a bridge technology. It helped us transition away from sharing passwords, but it’s not where we need to stay. Continuing to rely on OAuth in 2025 is like encrypting a message but mailing the decryption key along with it.
I can no longer support a model that entrusts our most sensitive data and permissions to a design that cannot fundamentally guarantee its own security.
It’s time to move on, from bearer tokens to bound credentials, from opaque tokens to verifiable claims, from OAuth to protocols designed for a decentralized, cryptographically secure internet.
The future is not OAuth.
If you're working on decentralized identity, cryptographic access control, or protocols beyond OAuth, I'd love to hear what you're building.