Every Authentication Method Is Another Way In
How the OR problem in authentication design turns every new login button into an attacker's shortest path
Josh Jones understood cryptocurrency security better than most people alive. He had built Bitcoin Builder, a trading platform where users could buy and sell bitcoins trapped inside the collapsing Mt. Gox exchange — work that required him to think precisely about custody, key management, and trust boundaries. He understood what could go wrong, because he had spent years building systems for people who had watched it go wrong. So when it came to his own T-Mobile account — the account tethered to his phone number, which was tethered to his two-factor authentication, which was tethered to his cryptocurrency wallets — he took the step that security-conscious people take. He requested T-Mobile’s highest protection tier: an eight-digit PIN that was supposed to block any changes to his account.
On February 21, 2020, at some point that Jones would only reconstruct later, his phone went dark. Not dead — dark. The screen read “No Service.” Somewhere, a T-Mobile employee had transferred his phone number to a SIM card controlled by someone else. The eight-digit PIN — the one security measure Jones had specifically requested — was never entered. It was simply bypassed. The attacker now received every call and text message meant for Jones, including the two-factor authentication codes protecting his crypto wallets. Within minutes, over 1,500 Bitcoin and roughly 60,000 Bitcoin Cash — $38 million at the time — were transferred out. The attacker, it turned out, was a seventeen-year-old who had learned about SIM swapping from friends online. Law enforcement later linked him to associates who hijacked 45 Twitter accounts, including those of Joe Biden, Bill Gates, Jeff Bezos, and Elon Musk, using the same technique.
Jones had done the thing you’re supposed to do. He had added the extra layer. He had requested the AND gate — the control that should have required his PIN and his identity before any change was authorized. But T-Mobile’s system treated that PIN as an OR — one possible check among several, skippable by an employee who didn’t ask for it or a process that didn’t require it. The strongest lock on the front door didn’t matter, because the system had a side entrance that nobody was watching.
It took five years of litigation, twelve days of arbitration testimony, and an 89-page interim award before T-Mobile paid $33 million — the largest SIM-swap arbitration on record. Then they moved to seal the findings, blocking public access to the details of their security failures.
The same window, forty-nine years apart
On the morning of October 19, 2025, four men in yellow high-visibility vests parked a truck on the Seine side of the Louvre. It was a furniture lift — the kind you can rent to move a couch into a third-floor apartment. Two of them raised the platform to a second-floor balcony of the Galerie d’Apollon, home to the French Crown Jewels, while the other two waited below on motor scooters. One used an angle grinder to cut through a window. They entered the gallery, smashed two display cases, grabbed pieces of jewelry, descended the lift, and all four escaped on the scooters. The two thieves were inside the museum for less than four minutes. In their haste, they dropped the Crown of Empress Eugénie on the street — 1,354 diamonds and 56 emeralds, damaged on the pavement.
The Louvre funnels eight million visitors a year through its hardened glass pyramid entrance — bag checks, ticket scans, security personnel. But the thieves didn’t use the front door. They used a second-floor window on the river side of the building — a window that had been used by masked thieves in 1976 to steal a jeweled sword belonging to King Charles X. That sword was never recovered. The same weak point, exploited twice, forty-nine years apart. A 2014 audit had warned about security flaws in the building. A decade later, Cour des Comptes data showed only 39 percent of rooms were covered by cameras. The CCTV camera in the Apollo Gallery was facing the wrong direction. The eight pieces the thieves escaped with were valued at an estimated €88 million.
A SIM swap in February. A jewel heist in October. A crypto entrepreneur in his office and four men in yellow vests on the banks of the Seine. These stories have nothing in common — except the design failure that made both of them inevitable.
The OR problem
Jones had an eight-digit PIN protecting his T-Mobile account. But the PIN was one of several paths an employee could use to authorize changes — and the employee who processed the SIM swap used a path that didn’t require it. The Louvre had a hardened front entrance processing millions of visitors. But the building had dozens of other access points, and the one the thieves chose had been compromised before, flagged in audits, and left unhardened.
In both cases, the institution invested heavily in security at the expected entrance and left alternative paths unexamined. The security of the entire system was determined not by the strength of the strongest control, but by the weakness of the weakest. This is the OR problem: when multiple paths lead to the same asset and any single path is sufficient, the attacker doesn’t need to defeat your best security. They need to find the one door you forgot to lock.
Now look at your login page.
A typical SaaS application in 2026 offers five ways to sign in: email and password, Google, Facebook, Apple, and maybe GitHub or Microsoft. Five doors into the same house. These paths are configured in an OR relationship — an attacker who compromises any one of them gains access to the account. The effective security is not the strength of the strongest method. It is the strength of the weakest. Every social login button is an additional door you don’t control the lock to, and the odds are that nobody in your organization has ever counted the doors.
The same pattern repeats one layer up. A typical enterprise user has several MFA options active simultaneously: push notifications to their phone, push notifications to their tablet, SMS codes, email codes, a TOTP authenticator app, and a hardware security key. Six second factors, configured as OR alternatives, where any single one satisfies the requirement. The attacker targets SMS — vulnerable to SIM swapping, SS7 exploits, and social engineering of carrier employees — and the hardware key’s superior security becomes irrelevant. The FBI documented $26 million in SIM-swap losses in the U.S. in 2024. A 2020 Princeton study tested the defenses of major carriers and found an 80 percent success rate for fraudulent SIM-swap attempts on the first try. Groups like Scattered Spider used SIM swapping and MFA fatigue attacks to breach Uber, Cisco, and Rockstar Games — organizations that had MFA in place and believed it was working.
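The arithmetic of OR composition fits in a few lines. A minimal sketch, assuming an illustrative numeric scale for each path (the names and scores below are not drawn from any real standard):

```python
# OR composition: any enabled path suffices, so the effective assurance
# of the composite is the MINIMUM over its paths. Scores are illustrative.
PATH_ASSURANCE = {
    "hardware_key": 5,
    "totp_app": 4,
    "push_phone": 3,
    "push_tablet": 3,
    "email_code": 2,
    "sms_code": 1,  # SIM swapping, SS7 exploits, carrier social engineering
}

def effective_assurance(enabled_paths: list[str]) -> int:
    """The attacker picks the weakest enabled path, not the strongest."""
    return min(PATH_ASSURANCE[p] for p in enabled_paths)

# Adding a hardware key does nothing while SMS stays enabled.
print(effective_assurance(["hardware_key"]))              # 5
print(effective_assurance(["hardware_key", "sms_code"]))  # 1
```

Adding a stronger method never raises the minimum; only removing the weaker one does.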
This is Josh Jones’s story, repeating at scale. He had the PIN. Uber, Cisco, and Rockstar had MFA. In both cases, the strongest control was undermined by a weaker parallel path that nobody modeled as part of the same security posture.
Predicted, in detail, thirteen years ago
Here is what makes the Louvre story useful beyond metaphor. Nobody needs a research paper to understand that a building with ten doors is only as secure as the weakest one. That principle is obvious in physical space. You can see the doors. You can count them. When four men with a rented furniture lift reach a second-floor window, every person reading the story immediately thinks: why wasn’t that window hardened?
But in digital authentication, the same principle has been invisible for over a decade — despite someone writing it down.
In 2012, Joseph Bonneau, Cormac Herley, Paul van Oorschot, and Frank Stajano published “The Quest to Replace Passwords: A Framework for Comparative Evaluation of Web Authentication Schemes” at the IEEE Symposium on Security and Privacy. The paper evaluated thirty-five authentication schemes across twenty-five properties spanning security, usability, and deployability. It remains the most comprehensive comparative framework for authentication design ever published.
Buried in the analysis is a finding that should have reshaped how the industry thinks about login pages: when you compose authentication methods in an OR relationship, the composite scheme inherits the worst security properties of any individual component, not the best. The framework made this structural, not anecdotal. It wasn’t a warning about a specific vulnerability. It was a formal demonstration that OR composition itself — the architecture, not any particular implementation — guarantees degradation.
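The composition rule can be stated as code. This is a sketch in the spirit of the framework, not its actual ratings: the benefit names echo the paper's "Resilient-to-…" properties, but the specific sets assigned to each scheme here are illustrative assumptions.

```python
# Illustrative benefit sets per scheme, loosely in the spirit of the
# framework's "Resilient-to-..." security properties.
BENEFITS = {
    "password":     {"resilient_to_theft"},
    "hardware_key": {"resilient_to_phishing",
                     "resilient_to_guessing",
                     "resilient_to_internal_observation"},
}

def compose_or(*schemes: str) -> set:
    """Either scheme logs you in: the composite keeps only the benefits
    EVERY component has, i.e. the intersection: the worst of both."""
    return set.intersection(*(BENEFITS[s] for s in schemes))

def compose_and(*schemes: str) -> set:
    """All schemes required: a benefit held by ANY component survives,
    roughly the union: the best of both."""
    return set.union(*(BENEFITS[s] for s in schemes))

print(compose_or("password", "hardware_key"))        # set(): nothing survives
print(len(compose_and("password", "hardware_key")))  # 4
```

Adding a login option is `compose_or`; the composite's benefit set can only shrink, which is the degradation guarantee the paragraph describes.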
The paper is thirteen years old. In that time, the industry has responded by adding more doors. More social login integrations. More MFA options. More fallback channels. More OR paths to the same identity. Each one evaluated against its own spec, its own security checklist, its own review — and none of them evaluated as part of the composite.
Bonneau and his colleagues didn’t predict a specific SIM swap or a specific account takeover. They predicted something worse: that the design pattern the industry was adopting would, by mathematical certainty, produce security outcomes weaker than any individual method. The paper exists. The industry kept building.
Why the doors opened in the first place
To understand why login pages look the way they do, you have to understand what they replaced — and why.
Passwords failed the industry for two reasons, and both were business problems before they were security problems. The first is that users reuse passwords. The same string that protects someone’s bank account protects their pizza delivery app, which means that every breach at some other service is functionally a breach at yours. The second is that users forget passwords. Constantly. And every forgotten password is a support ticket, a help desk call, a lost session, a churned customer. Account lockout isn’t just a security event. It is an operational cost that scales with your user base and never stops.
Social login solved both problems — for the business. Google handles the credential. Google handles the lockout. You never staff the help desk. The security rationale was real — Google is genuinely better at authentication than most applications will ever be. But the driving force was economic. Each social login button on the registration page replaced a password the user would forget and a support ticket the business would pay for.
This is why the doors proliferated. Not because teams were careless. Because each door closed a business case. And the pattern compounds in a way that feels like security but functions as exposure. Most applications use email as the canonical identity anchor. When someone authenticates via Google OAuth with the same email as an existing password-based account, the industry default is to silently link them — to assume that a matching email means the same person, without requiring the user to prove ownership through the method they originally used. This feels like a convenience feature. What it actually does is allow anyone who controls that email through any provider to walk into the account through a door the user never opened. Security researchers call this “account pre-hijacking.” Avinash Sudhodanan and Andrew Paverd demonstrated it across 75 popular online services in 2022, finding at least 35 vulnerable — including Dropbox, Instagram, LinkedIn, and Zoom. A year later, Salt Labs showed the inverse: in their “Oh-Auth” research, they demonstrated that sites like Grammarly, Vidio, and Bukalapak failed to verify OAuth access tokens at all, meaning an attacker who harvested a user’s Facebook token on any site could reuse it to take over accounts on dozens of others — even ones the user never signed into with Facebook. The system treats a shared token or a shared email as proof of identity, when it is only evidence of access.
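The industry-default linking flow fits in a dozen lines. Everything here is hypothetical (the function name, the in-memory account dict); the point is the single line of silent trust.

```python
# Sketch of the default "silent linking" pattern. The bug is one line:
# a matching email is treated as proof of identity.
def link_on_email_match(provider: str, email: str, accounts: dict) -> str:
    account = accounts.get(email)
    if account is None:
        return "create_account"
    # VULNERABLE: whoever controls this email at ANY provider
    # walks into the account through a door the user never opened.
    account["linked_providers"].add(provider)
    return "logged_in"

accounts = {"victim@example.com": {"linked_providers": {"password"}}}
print(link_on_email_match("facebook", "victim@example.com", accounts))
# logged_in, and "facebook" is now a permanent door on the account
```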
Now watch how the failure hides inside a competent design review. A product manager adds Google login and conversion lifts 20 percent. An engineer implements it against the spec, validates the state parameter, checks the token audience. A security engineer reviews the implementation, confirms it follows OAuth best practices, approves it. Three months later, the same sequence happens for Facebook. Then Apple. Then a magic-link flow. Each review is scoped to the method being added. Each method passes. And at no point does anyone step back and ask: how many OR paths now lead to the same identity, and what is the assurance level of the weakest one?
There is no design review template with that field. No threat model with that column. No ticket in the backlog for “composite authentication posture.” Product owns conversion. Engineering owns implementation. Security owns each method’s correctness. Nobody owns the composite. The OR relationship between methods lives in the gap between all three teams — visible to each, owned by none.
The Louvre’s curators didn’t leave that window unhardened because they were negligent. They hardened the entrance they expected visitors to use and didn’t model the building as a composite of every possible entry point. Authentication teams do the same thing, for the same reason: each decision is locally rational — even economically optimal — and the failure only becomes visible when you stop evaluating methods and start counting doors.
The door nobody built
The path forward starts with a single question, applied to every authentication decision a team makes: does this add an AND, or does it add an OR?
An AND makes the system stronger. Requiring a password and a hardware key means an attacker must compromise both. An OR makes the system weaker. Allowing a password or a Google login or a Facebook login means the attacker can choose the easiest path. Jones’s eight-digit PIN was designed as an AND — a gate every path had to clear. T-Mobile’s internal process implemented it as an OR — one of several ways to authorize a change, skippable by an employee who didn’t ask for it. That single design decision cost $38 million. The Louvre hardened its front entrance as if it were the only way in, while a window on the Seine, exploited in 1976 and flagged in a 2014 audit, remained an OR that nobody closed. That cost €88 million and four minutes.
Passkeys are the best answer the industry has produced to the problem that opened all those doors. Each site gets a unique credential, cryptographically bound to the domain, phishing-resistant by design, protected by a biometric the user already unlocks fifty times a day. No password reuse. No phishing. No help desk tickets for forgotten passwords. Passkeys solve the two business problems — reuse and lockout — that drove the social login explosion in the first place. They are not another door. They are a better door that can replace the weaker ones.
But only if teams actually close the old doors behind them. The industry’s instinct, predictable by now, is to add passkeys as another button on the login page — another OR, alongside the passwords and social logins and SMS fallbacks that were already there. This is the exact pattern the entire article has been about. The correct adoption strategy is not to add passkeys alongside everything else. It is to add passkeys and remove every path that can’t justify its risk. Sunset password-only login. Remove SMS as a standalone second factor. Every remaining path should meet a minimum assurance threshold — and any path that falls below it comes out. When MFA is required, it is layered as AND — passkey AND device trust — not offered as a menu of interchangeable options where the attacker picks the weakest.
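The AND layering in that last sentence can be sketched as a policy check. The floor value, the factor names, and the `AuthResult` shape are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AuthResult:
    method: str
    assurance: int  # illustrative scale; higher = harder to compromise

MIN_ASSURANCE = 3     # hypothetical policy floor for any single factor
REQUIRED_FACTORS = 2  # AND: all required, not any-of

def authorize(presented: list[AuthResult]) -> bool:
    """AND composition: enough distinct factors must be present, and
    every one of them must clear the floor. No menu of interchangeable
    options for the attacker to pick the weakest from."""
    if len({r.method for r in presented}) < REQUIRED_FACTORS:
        return False
    return all(r.assurance >= MIN_ASSURANCE for r in presented)

print(authorize([AuthResult("passkey", 5)]))                                 # False
print(authorize([AuthResult("passkey", 5), AuthResult("device_trust", 3)]))  # True
print(authorize([AuthResult("passkey", 5), AuthResult("sms", 1)]))           # False
```

Note the third case: under AND composition a weak factor fails the whole request, where under OR composition it would have been sufficient on its own.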
Account linking must require authenticated consent — if someone arrives via a new login method with the same email as an existing account, the system should require them to prove ownership through the method they originally used before the link persists. CISA has warned explicitly that enrolling in an authenticator app does not unenroll you from SMS, and the same principle applies everywhere: a stronger method added alongside a weaker one doesn’t raise the floor. It just adds a door the attacker will ignore.
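A minimal sketch of linking with authenticated consent. The store, function names, and flow states are assumptions, not any real provider's API; the invariant is that the link never persists until the user proves ownership through an existing method.

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    id: int
    email: str
    linked_providers: set = field(default_factory=set)

# In-memory stand-in for an account store.
ACCOUNTS = {1: Account(1, "user@example.com", {"password"})}

def find_by_email(email: str):
    return next((a for a in ACCOUNTS.values() if a.email == email), None)

def handle_oauth_login(provider: str, email: str, session: dict) -> str:
    """A matching email never links silently; it only parks a pending link."""
    account = find_by_email(email)
    if account is None:
        return "create_new_account"
    if provider in account.linked_providers:
        return "login"
    session["pending_link"] = (provider, account.id)
    return "reauthenticate_with_existing_method"

def confirm_link(session: dict, reauth_succeeded: bool) -> str:
    """Persist the link only after the user proved ownership through a
    method already on the account."""
    provider, account_id = session.pop("pending_link")
    if not reauth_succeeded:
        return "link_rejected"
    ACCOUNTS[account_id].linked_providers.add(provider)
    return "link_established"

session = {}
print(handle_oauth_login("google", "user@example.com", session))
# reauthenticate_with_existing_method: not a silent link
print(confirm_link(session, reauth_succeeded=True))  # link_established
```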
This is buildable. Teams can start Monday morning by auditing every authentication path to every identity, counting the ORs, and asking whether each one earns its risk. Authentication systems carry real migration debt — the Real ID Act took twenty years to enforce after Congress recognized this same OR pattern across state driver’s licenses. Every state issued IDs under different standards, but any state’s ID was accepted at every airport — fifty doors, and the attackers found the one with the weakest lock. Eighteen of the nineteen hijackers held 30 state-issued IDs between them — seven obtained fraudulently from Virginia, where a stranger at a 7-Eleven could sign a residency affidavit on your behalf. Three of those Virginia IDs were used to board planes at Dulles on the morning of September 11. Honest timelines matter more than optimistic roadmaps.
But even if teams execute all of this perfectly — passkeys adopted, weaker paths deprecated, ORs reduced, MFA layered as AND — there is a void at the end of the trajectory that the industry has not yet faced. Passkeys sync through cloud accounts. An Apple passkey lives in iCloud Keychain. A Google passkey lives in Google Password Manager. If a user loses access to that foundation account — forgot their Apple ID password, lost their only device, got SIM-swapped out of their Google recovery flow — every passkey stored in it becomes inaccessible simultaneously. The user doesn’t need to reset one password on one site. They need to recover one account to recover everything. The lockout problem hasn’t been solved. It’s been concentrated.
The entire trajectory of authentication — from passwords to social login to passkeys — has been an attempt to engineer around a question the industry finds expensive and inconvenient: how do you verify that a human being is who they say they are? Each layer of abstraction delegates that question to someone else’s system. Passwords delegated it to the user’s memory. Social login delegated it to Google and Facebook. Passkeys delegate it to Apple and Google’s cloud infrastructure. The technology gets better at every step. But none of these layers eliminate the moment where a person has lost access to everything and needs to prove, to another human or a process that actually checks, that they are the person who owns the account.
Identity verification — actually confirming the human — is the floor that the system needs. Not as the daily authentication method. Not as something users encounter in the normal flow. As the backstop. The recovery path that works when every technology layer has failed. The authentication industry has spent twenty years optimizing the happy path and treating the recovery path as an afterthought — a security question, an SMS fallback, a “contact support” link that routes to a chatbot. When the wave of foundation-level lockouts arrives, and passkey adoption guarantees that it will, every service built on that foundation will face the same question the password era faced: how do you let someone back in?
The framework for answering that question already exists. NIST SP 800-63A has defined identity proofing levels since 2017: IAL1, where identity is self-asserted and never verified; IAL2, where real-world identity is confirmed through evidence, remotely or in person; IAL3, where physical presence is required. The revision finalized in July 2025 updated these standards for the age of passkeys and deepfakes. Nearly every consumer authentication system in production today operates at IAL1 — self-asserted identity, never verified. The blueprint has been on the shelf for eight years. The industry looked at the cost of IAL2 and decided that self-assertion was good enough.
Congress didn’t solve the OR problem across state driver’s licenses by inventing better IDs. They mandated minimum standards for verifying the human before issuing one. The authentication industry has the technology answer. It has the architecture answer. It is still avoiding the hardest question — whether verifying the actual human, at the foundation, is a cost worth bearing.
The systems aren’t failing because the locks are weak. They’re failing because nobody is counting the doors. And behind every door, eventually, is a person who needs to be recognized — not by a token, not by a provider, not by a synced credential, but as themselves.



