The Integration Layer Is You
Why the smartest interface you’ve ever used still can’t buy you a book.
I was three hours into building a model for a physics ontology when Claude told me to read Bondi.
The recommendation was specific. I’d been working through a problem in relativistic kinematics, and the AI had identified a gap in my reasoning that mapped precisely to an argument Hermann Bondi made in Relativity and Common Sense in 1964. The recommendation was grounded in the structure of the work I was doing; Claude knew why this particular book would fill this particular hole. I knew the book. I’d read parts of it at school years ago. I needed the full text in front of me.
I was sitting at what is arguably the most sophisticated human-computer interface ever built. A system that could co-develop theoretical physics, identify gaps in my reasoning, and connect them to sixty-year-old texts. And it could not buy me the book.
I picked up my phone. Opened Amazon. Typed “Bondi Relativity and Common Sense” into a search box. Tapped a button. The book arrived two days later.
The feeling was absurd. Like being pulled a decade out of the future to feed a paper card to a mainframe. One moment I was engaged in the highest-bandwidth intellectual collaboration I’d ever experienced with a machine. The next I was typing five words into a search field designed in 2005, on a platform that had no idea I existed until thirty seconds earlier, that couldn’t possibly know why I wanted this book or what I planned to do with it. The smartest interface I’ve ever used had handed me off to a search box. And between those two systems, carrying the entire context of why and what and how, was me.
Nate Baber is a partner at a personal injury firm in Connecticut. His firm uses AI tools for case analysis, document review, contract drafting. The technology can synthesize thousands of pages of medical records, identify patterns across depositions, surface precedents from decades of case law. When the analysis is done and the motion is drafted, Baber needs to file it with the court.
He faxes it.
Not always. Not everywhere. But often enough that he considers fax capability non-negotiable. “It doesn’t matter how modern my firm’s systems are,” Baber has said. “The infrastructure I have to work within often defaults to fax.” He has sent documents to court clerks from a courthouse parking lot at 8:15 in the morning, walked inside with the confirmation page minutes later, and made his hearing. The AI that helped him draft the motion in hours has no connection to the system that files it. The fax machine that files it has no knowledge of the analysis that preceded it. Baber carries the context between them.
The pattern should look familiar. A revolutionary interface that understands reasoning but cannot execute the action that follows. An institutional system that processes transactions but cannot accept the context that precedes them. A human being crossing the gap alone, carrying everything.
Both failures happen at the same seam: the assumption that a new interaction paradigm will be self-contained, when every paradigm in history has been partial. The conversational interface models dialogue as the complete interaction. The court filing system models the document submission as the complete interaction. Neither models the user’s actual workflow, which begins in one system and ends in the other. The gap between them has no designer. It has only a user.
Ben Shneiderman saw this coming in 1983. He identified what makes an interface feel direct: you see the thing you’re working with, you act on it physically, and you get immediate feedback. Drag a file into a folder. You see it move. It lands. The interface disappears. You feel like you’re touching the thing itself.
Conversational UI violates all three principles for anything that isn’t language. For some purchases, you need to see options, compare prices, evaluate alternatives — spatial tasks that dialogue handles badly. But my case was simpler than that. I didn’t need to browse. I didn’t need to compare editions. I knew exactly which book I wanted. The AI that recommended it knew which book I wanted. The entire context of the transaction was already inside the conversation. The interface just couldn’t do anything with it. For reasoning, dialogue is extraordinary. For doing the thing that follows the reasoning, it has no surface at all. Researchers confirmed the broader pattern in 2024, testing LLM interfaces directly against Shneiderman’s framework. Same failure. Forty-one years later.
But Shneiderman explains only half of the problem. He explains why I couldn’t buy the book through Claude. He doesn’t explain why the attorney can’t file a motion without a fax machine. For that, you need Susan Leigh Star.
Star was a sociologist who spent her career studying the invisible infrastructure of institutions. In 1989, she and James Griesemer introduced the concept of “boundary objects,” artifacts that sit between different communities and hold different meanings for each. A fax confirmation page is a boundary object. So is a PDF with specific margin requirements. Each one looks like a technical specification. Each one is actually an encoding of institutional authority.
The court doesn’t require a fax because courts are old-fashioned. The court requires a fax because a fax solves a governance problem. It provides a sender ID, a timestamp, a point-to-point transmission record, a confirmation of receipt. It answers the questions the court needs answered: who filed this, when, and can we prove it? Email doesn’t answer those questions reliably. Messages get filtered. Servers bounce files. Delivery is probabilistic. The fax is deterministic. It’s not a technology preference. It’s an accountability infrastructure. Replace the artifact without replacing the governance function, and the institution will reject the replacement. Rationally.
So the gap between your AI tool and the court filing system persists. Not because nobody has built a bridge. Because the institution on the other side of the bridge has structural reasons to keep it closed. The bridge would dissolve the accountability model the court’s entire process depends on.
Every interface is built as a closed world. Claude models interaction as dialogue. When the dialogue reaches a point where the user needs to act — buy a book, schedule a meeting, file a document — the interface has no surface for it. The conversation is the product. What happens after the conversation is someone else’s problem.
Amazon models interaction as transaction. When the user arrives with a search query, the system assumes the query is the beginning. The two hours of physics research, the AI’s specific recommendation, the reason this book matters to this project: all of it gets compressed into keywords. Amazon doesn’t want the context. Amazon wants the search term.
The court filing system models interaction as submission. When the attorney arrives with a document, the system assumes the document is self-contained. The months of analysis, the AI-assisted synthesis, the reasoning that shaped the brief: none of that travels with the filing. The court wants the paper. Formatted correctly. On time.
Each system was designed by reasonable people solving a real problem within the boundaries they drew. Claude’s designers built an extraordinary dialogue system. Amazon’s designers built an extraordinary transaction system. The court system’s architects built a filing process that maintains accountability across millions of cases. Within their own boundaries, each works. The failure is between them.
Edwin Hutchins spent years studying how Navy navigation teams distribute cognitive work across people and tools. His 1995 book Cognition in the Wild made a simple, devastating argument: when you model a single tool as the complete cognitive system, you miss the cognitive work the human is doing to bridge between tools. The unit of analysis isn’t the tool. It’s the whole system, including the human labor that stitches the tools together.
I am performing cognitive labor that Claude’s designers never accounted for. When I carry the Bondi recommendation from the conversation to Amazon, I’m translating between two systems that don’t share context, don’t share data models, and don’t know the other exists. I am the integration layer. The attorney filing via fax is performing the same labor. The motion that an AI helped draft in hours gets printed, carried to a fax machine, transmitted to a court clerk, and re-entered into a case management system. At every transition, the human carries the context. That labor is invisible because no one designed it. It exists in the negative space between systems that each believe they are complete.
Amazon doesn’t just fail to accept context from conversational AI. Amazon has no incentive to accept that context. If I could buy a book without leaving Claude, Amazon loses the browsing session, the recommendation algorithm touchpoints, the cross-sell opportunities, the advertising impressions. Amazon’s entire revenue model depends on me being inside Amazon’s interface. The context transfer isn’t just unbuilt. It’s structurally unwelcome.
The same logic applies to every platform that monetizes attention. Publishers block AI crawlers because their business model requires page views. Retailers restrict API access because their conversion funnel requires browsing. Courts mandate specific filing formats because their accountability model requires institutional control of the submission process. Every one of these is a rational decision by the institution that owns the other side of the boundary. The gap stays open. The human keeps performing uncompensated integration labor between systems that are, for their own reasons, invested in not talking to each other.
Nobody designed this standoff. It emerged from two systems optimizing for incompatible goals. The conversational interface was designed to synthesize information on the user’s behalf. The commercial internet was designed to prevent synthesis, because synthesis disintermediates the platforms that monetize the user’s presence. The user lives in the space between them.
What changes when you design for the workflow instead of the interface?
The first thing that changes is the unit of design. Stop designing interfaces as closed worlds. Start modeling the user’s actual task, which almost always begins in one system and ends in another. This sounds obvious. It requires confronting a decomposition philosophy so deeply embedded in how we build that most teams never question it.
MECE. Mutually Exclusive, Collectively Exhaustive. Anyone who has sat through a strategy engagement knows the framework. It’s how consultants decompose problems, how organizations decompose responsibility, how platforms decompose into services. Clean partitions. No overlaps. No gaps. Everything accounted for. The lie is in “Collectively Exhaustive.” MECE accounts for everything inside the boundaries and nothing between them. The context I carried from Claude to Amazon lives in no partition. The attorney’s cognitive labor between the AI tool and the fax machine belongs to no service. Hutchins would say MECE draws the boundaries of the cognitive system too tightly. The fix isn’t looser boundaries. It’s shared ownership at the seams — responsibility for what happens when work crosses from one system to another, explicitly assigned rather than silently outsourced to the user.
Conversational AI systems today have no concept of “next action.” The dialogue ends and the user leaves. Building a next-action surface means the conversational interface needs to know what systems exist downstream and how to hand off context to them. When Claude recommends a book, the interface should know that the next likely action is acquisition and offer a path that carries the full context forward: this book, recommended for this reason, in this edition, at this price, available from these sources. The user evaluates options visually, spatially, in the mode Shneiderman identified as necessary for selection tasks. The reasoning stays conversational. The selection becomes direct manipulation. Two modes in one interface, switching at the task boundary.
Some early versions of this exist. AI assistants that can search the web, present product cards, even initiate purchases. But most bolt transactional capability onto a dialogue interface without changing the interaction model. They’re chatbots with buy buttons. The deeper design challenge is recognizing when the task shifts from linguistic to spatial and changing the modality accordingly. That requires the interface to model the user’s workflow, not just the user’s words.
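What would it mean for the handoff to carry the full context forward rather than collapsing it to a search term? A minimal sketch of a hypothetical handoff payload, in Python. Every name here is an assumption for illustration, not any vendor’s API; the point is only that the fields exist in the conversation and are discarded at the boundary today.

```python
from dataclasses import dataclass, field

@dataclass
class NextAction:
    """Hypothetical payload a conversational interface could hand to a
    downstream transactional system instead of handing off the user."""
    kind: str                 # e.g. "acquire", "schedule", "file"
    item: str                 # what the action targets
    reason: str               # why the conversation produced this action
    constraints: dict = field(default_factory=dict)  # edition, format, deadline...

def handoff(action: NextAction) -> dict:
    """Serialize the whole context. Today the user performs this step
    manually, and everything except `item` is lost at the search box."""
    return {
        "kind": action.kind,
        "item": action.item,
        "reason": action.reason,
        "constraints": action.constraints,
    }

# The Bondi purchase, as a structured handoff instead of five keywords.
book = NextAction(
    kind="acquire",
    item="Bondi, Relativity and Common Sense (1964)",
    reason="fills an identified gap in a relativistic kinematics argument",
    constraints={"edition": "Dover", "format": "print"},
)
payload = handoff(book)
```

The design choice worth noticing: `reason` and `constraints` are first-class fields, not metadata. A retailer that accepted this payload could rank editions by fitness for the stated purpose, which is exactly the work the search box forces back onto the user.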
The institutional side is harder, and honesty matters here. Courts will not abandon fax because a startup builds a better filing tool. They will abandon fax when something provides the same governance properties — deterministic delivery, sender authentication, timestamped proof, chain of custody — in a form that the institution trusts. That’s not a technology problem. It’s a trust infrastructure problem. The technology to provide cryptographically verified, timestamped, tamper-proof document submission exists. Blockchain-based filing, verifiable credentials, zero-knowledge proofs of identity. The institutional willingness to accept these as equivalent to a fax confirmation page does not exist yet. Building that trust takes years of pilot programs, regulatory engagement, and demonstrated reliability. No shortcut exists.
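To make those governance properties concrete, here is a toy sketch of a tamper-evident filing receipt: a hash of the document (what was filed), a signed sender identity (who), and a timestamp (when). This illustrates the principle only; it is not any court’s protocol. A real system would use public-key signatures issued by a credentialed identity authority, not the shared secret used here for brevity.

```python
import hashlib
import hmac
import json

# Stand-in for a credentialed identity; real systems would use
# asymmetric keys, not a secret shared with the clerk.
SENDER_KEY = b"demo-only-key"

def submit(document: bytes, sender: str) -> dict:
    """Produce the fax confirmation page's governance bundle
    (who, what, when) as a signed, tamper-evident record."""
    record = {
        "sender": sender,
        "digest": hashlib.sha256(document).hexdigest(),  # what was filed
        "timestamp": 1700000000,  # when (fixed here for a deterministic demo)
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SENDER_KEY, payload, "sha256").hexdigest()
    return record

def verify(document: bytes, record: dict) -> bool:
    """The clerk's side: does the receipt match this document and signer?"""
    claimed = {k: record[k] for k in ("sender", "digest", "timestamp")}
    payload = json.dumps(claimed, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        record["signature"],
        hmac.new(SENDER_KEY, payload, "sha256").hexdigest(),
    )
    return ok_sig and record["digest"] == hashlib.sha256(document).hexdigest()

motion = b"MOTION TO COMPEL ..."
receipt = submit(motion, sender="attorney-of-record")  # hypothetical identity
assert verify(motion, receipt)                # receipt matches the filing
assert not verify(motion + b"x", receipt)     # any alteration is detected
```

The technical half of the problem is this simple. The institutional half, getting a court to treat `receipt` as equivalent to a fax confirmation page, is the part with no shortcut.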
The commercial moats are a different problem with a different solution. Amazon will accept context from a conversational AI when someone builds an economic model that makes integration more valuable to the retailer than the browsing session it replaces. That model doesn’t exist yet.
The fax machine will eventually disappear from law firm workflows. The search box will eventually stop being the only bridge between reasoning and purchasing. These are engineering and institutional design problems with visible, if difficult, paths forward. The deeper question is whether we’ll design the next interface transition any differently than we designed the last four. Every paradigm shift — CLI to GUI, desktop to web, web to mobile, and now keyboard to conversation — has produced the same structural failure: builders who modeled their new interface as the complete world, institutions that defended the old interface as the only trustworthy one, and users carrying the context between them, performing cognitive labor that nobody acknowledged, designed for, or compensated.
Shneiderman told us in 1983 what makes interfaces feel direct. Star told us in 1989 that institutional infrastructure encodes power and resists replacement. Hutchins told us in 1995 that the cognitive system is larger than any single tool. We had the research. We built conversational AI without any of it.
The integration layer is still you.
Sources
Research & Academic Works
Ben Shneiderman, “Direct Manipulation: A Step Beyond Programming Languages,” Computer 16, no. 8 (1983): 57–69.
Damien Masson, Sylvain Malacria, Géry Casiez, and Daniel Vogel, “DirectGPT: A Direct Manipulation Interface to Interact with Large Language Models,” Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (2024).
Edwin Hutchins, Cognition in the Wild (Cambridge, MA: MIT Press, 1995).
Edwin Hutchins, James D. Hollan, and Donald Norman, “Direct Manipulation Interfaces,” Human–Computer Interaction 1, no. 4 (1985): 311–338.
James D. Hollan, Edwin Hutchins, and David Kirsh, “Distributed Cognition: Toward a New Foundation for Human-Computer Interaction Research,” ACM Transactions on Computer-Human Interaction 7, no. 2 (2000): 174–196.
Susan Leigh Star and James R. Griesemer, “Institutional Ecology, ‘Translations’ and Boundary Objects: Amateurs and Professionals in Berkeley’s Museum of Vertebrate Zoology, 1907–39,” Social Studies of Science 19 (1989): 387–420.
Susan Leigh Star, “This Is Not a Boundary Object: Reflections on the Origin of a Concept,” Science, Technology & Human Values 35, no. 5 (2010): 601–617.
Geoffrey C. Bowker, Stefan Timmermans, Adele E. Clarke, and Ellen Balka, eds., Boundary Objects and Beyond: Working with Leigh Star (Cambridge, MA: MIT Press, 2015).
Legal Industry Data
American Bar Association, “2024 Solo and Small Firm TechReport,” ABA Legal Technology Survey (2024). Source of the 49% solo practitioner electronic fax usage and 85% electronic court filing statistics.
American Bar Association, “ABA Releases Its Newest Survey on Legal Tech Trends,” (March 2025). Overview of the 2024 survey methodology and findings.
ABA Journal, “The Facts About the 21st-Century Fax — and How Lawyers Can Use It to Their Advantage,” (February 2019). On why lawyers are still required to fax by courts and government offices.
FAXAGE, “Why Online Faxing Is Imperative in the Legal Field,” (2025). Includes Nate Baber quotes on fax as institutional infrastructure. Note: FAXAGE is a fax service vendor; Baber’s quotes are used as a first-person account, not as an independent source.
IAPP, “US Federal Judges Discuss the Intersection of Emerging Technology, AI with the Legal System,” (April 2026). Judge Burroughs on legal technology gaps and AI features disabled in judicial tools.
Legal AI Adoption
MyCase, “2025 Guide to Using AI in Law,” (January 2026). 85% of lawyers using generative AI daily or weekly.
Artificial Lawyer, “Predictions 2026,” (January 2026). On AI hallucination rates in court filings and the widening gap between internal AI adoption and external filing systems.
Background Reading
Hermann Bondi, Relativity and Common Sense: A New Approach to Einstein (New York: Dover, 1964).