
The Human API Is Broken
And AI is making it worse.
You are not your LinkedIn profile.
You know it. I know it. The recruiter copy-pasting "I came across your profile and was impressed" into 600 inboxes knows it too.
But here's the thing nobody is saying out loud: the infrastructure we use to understand and allocate human value hasn't been redesigned in decades, and the world it was built for no longer exists. Every company, platform, and AI tool building on top of it is constructing a skyscraper on sand.
The Human API Is Returning Garbage
When people find each other through relationships, it works. A warm intro, a conversation at a bar. The capture layer is a living human brain. The signal is rich. The matching is contextual. That's why the best hires, the best co-founder pairings, the best deals still come through networks.
The problem is that doesn't scale. So the economy built infrastructure: resumes, profiles, job boards, pitch decks, talent platforms. A pipeline. Capture your identity, transmit it, match it to a need, attempt a connection. This pipeline governs how founders find investors, how companies find vendors, how buyers find agencies. It's the human API, the interface through which the economy queries who you are and what you're worth.
That API is returning stale data, missing fields, and wrong answers. And nobody is fixing the interface. They're just querying it faster.
The compression was intentional. Resumes and titles reduce complex humans to comparable signals, built for recruiters scanning 200 applications with limited time and no compute. For most of the 20th century, that worked. It doesn't anymore. AI can process rich context on every candidate in seconds. But that context doesn't exist in any structured form. So we automated the query and never upgraded the data.
Meanwhile, the structures these profiles describe are dissolving underneath them. Companies are collapsing management layers, redefining roles around AI, and reorganizing faster than any profile can track. "Software engineer" now means three different jobs: writing systems code, prompting AI, or architecting systems without writing a line of code. Titles are changing faster than the systems that store them.
LinkedIn has 1.3 billion members. Yet 76% of employers still can't fill roles. Think about that. The largest professional database in human history, and it can't answer the basic question it was built to answer. Because it stores job titles and employer names, not what people actually do or want. LinkedIn knows where you worked and what title someone gave you in 2021. It doesn't know what you're building now, how you work with AI, what you shipped last month, or what you've decided to stop doing. Those are the signals that matter. They change weekly. No platform captures them.
So what does predict whether two people should work together? Depends on the match. For co-founders, it's values alignment and how they fight. For a key hire, it's capability fit at this stage, not the last one. For an investor, it's thesis alignment and how they act when the numbers go sideways. Different signals, but they share three properties: they're contextual (depends on the specific match), current (right now, not three years ago), and relational (about the fit between two people, not either one in isolation).
Profiles are the exact opposite. Individual. Historical. Static. We built the entire infrastructure around describing people in isolation and have almost nothing for describing whether two people would actually work well together.
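The contrast between what profiles store and what matching actually needs can be pictured as two data shapes. This is a minimal sketch; every field, type, and method name here is illustrative, not a real schema:

```python
from dataclasses import dataclass
from datetime import date

# What the current human API stores: individual, historical, static.
@dataclass
class Profile:
    name: str
    title: str         # whatever an employer called the role, years ago
    employer: str
    updated: date      # freshness measured in years

# What matching needs: contextual, current, relational.
@dataclass
class MatchSignal:
    current_focus: str   # what this person is building right now
    open_to: set[str]    # intent: what they'd actually move for
    observed: date       # freshness measured in weeks

    def fit(self, other: "MatchSignal") -> bool:
        # Relational: defined over the pair, not either person alone.
        return bool(self.open_to & other.open_to)
```

Note that `fit` cannot even be expressed on `Profile`: a static individual record has no pairwise, intent-bearing fields to compare.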
The useful fragments exist. Scattered across GitHub, Substacks, deal histories, Slack groups, community reputations. Non-portable, unstructured, impossible to query. And the most important signal, what someone actually wants right now, what they'd move for, lives nowhere except inside their head.
The Outreach Death Spiral
Cold email used to have friction. Someone sat down, wrote a message, clicked send. The friction was the filter.
That friction is gone. AI can generate 10,000 personalized emails for the cost of a coffee. Outreach is now a fully automatic machine gun.
Here's the result: the more AI automates outreach, the less any outreach works. Response rates have dropped from 8.5% to under 5% in five years. Nineteen out of twenty cold emails get no response at all.
And here's what's counterintuitive: the emails look better than ever. AI has made surface personalization trivially easy. Reference someone's title, their latest post, their company news. But knowing someone's job title isn't the same as knowing their intent. The outreach looks personal. It's still blind.
Inboxes drown. Walls go up. AI was supposed to connect us. Instead, it's making us unreachable. Not because we lack tools, but because we lack current, permissioned, decision-relevant signal. Neither side knows the other's intent. So one side sprays, and the other side hides.
Somewhere right now, a founder and an engineer who should be building together are two degrees apart. A VC is looking for a deal that a bootstrapped company two time zones away would be perfect for. Two researchers working on the same problem don't know the other exists. None of them will connect. The signal they need doesn't exist in any queryable form, and whatever outreach might have found them got buried under ten thousand messages that had no business being sent.
What AI Can't Eat
If AI keeps absorbing more of what humans do, where does human value actually live?
AI is eating the execution layer from the bottom up. Rote tasks. Skilled outputs. Now judgment and strategy. At each stage, the thing that used to differentiate you becomes table stakes.
But there's a ceiling. Human value is concentrating into three layers:
Ownership. Someone has to deploy capital, bear consequences, and be legally accountable.
The edge. The moving frontier where AI capability runs out and you need a specific human to close the gap.
Trust. The relationships between humans at those layers. The co-founder chemistry and advisor judgment that determine whether companies live or die.
As the execution layer gets cheaper, these three layers become more valuable. When fewer things require humans, each remaining human decision carries more weight. Human matching becomes more critical in an AI world, not less.
And the edge moves. What AI couldn't do six months ago, it can do now. Matching for the edge means knowing what someone is capable of right now, against a frontier that shifted last month.
If your value lives in these layers, and increasingly it does, you can't capitalize on it. The market can't see it. An engineer whose superpower is architectural taste looks, on paper, like every other senior engineer. A founder whose edge is recruiting in a crisis has no legible way to signal that. Their real differentiation is dark matter. It exists, it's enormous, and it's economically invisible.
The infrastructure built for the execution layer was never designed for ownership, the edge, or trust. And the gap between where value lives and where the system looks for it grows wider every day.
The Silo Problem
Everything above describes the infrastructure as it exists today. But something else is changing the equation entirely: people are getting personal AI agents.
Not chatbots. Agents. OpenClaw went from obscurity to 200,000 GitHub stars in weeks. People are running it on their laptops, connected to email, calendar, messaging, files. A 24/7 assistant that manages their day, drafts messages, writes code, takes actions while they sleep. The trajectory is clear. Within a few years, most people will have a personal AI agent that knows them better than any colleague does.
Here's what makes this relevant: that agent will know you extraordinarily well. It's reading your emails, seeing your calendar, learning what excites you. It's building deep, real-time context about who you are right now.
But it has absolutely no idea who everyone else is.
Your agent is brilliant about you and blind to the world. When it needs to find you a co-founder, an investor, a vendor, where does it go? Back to LinkedIn. Back to the dead profiles. Back to the sand.
A thousand agents, each knowing their human intimately, none able to see each other. Smart assistants trapped in very small rooms.
The future isn't agent-to-LinkedIn. It's agent-to-agent, inside a shared system of living context.
Restoring Signal
The fix isn't better algorithms on bad data. It isn't AI that writes faster cold emails. It isn't another LinkedIn competitor with a cleaner UI.
The fix is better proof, better intent signals, and better permissioned trust edges, available at the moment a specific decision needs to be made.
What has this person actually built? What are they working on right now? What are they open to? Who have they worked well with, and in what context? These are answerable questions. If the infrastructure exists to answer them.
Won't this just become LinkedIn with more fields? It would, if built on self-reporting. The critical difference is building signal from revealed behavior: what you actually respond to, who you choose to meet, what you spend your time building, what your agent observes about your real priorities. Behavioral signal isn't truth. It's a proxy, shaped by constraints and context. But it's a meaningfully better proxy than self-reported claims, because you can't fake what you do as easily as what you say.
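The distinction between revealed behavior and self-report can be sketched in a few lines. This is purely illustrative; the function, thresholds, and event format are invented for the example, not a description of any real pipeline:

```python
from collections import Counter

def revealed_interests(events, min_count=2):
    """Infer interests from repeated actions rather than stated claims.

    `events` is a list of records from things a person actually did:
    replies sent, meetings taken, code shipped. Repetition, not
    assertion, is what counts as signal here.
    """
    counts = Counter(tag for e in events for tag in e["tags"])
    return {tag for tag, n in counts.items() if n >= min_count}

claimed = {"machine learning", "public speaking"}  # self-reported
did = [
    {"type": "replied", "tags": ["infra"]},
    {"type": "meeting", "tags": ["infra", "hiring"]},
    {"type": "shipped", "tags": ["infra"]},
]
# Behavior surfaces "infra", a signal the self-reported claims never mention.
```

Even this toy version shows why the proxy is harder to fake: inflating it requires actually doing the work repeatedly, not editing a profile field once.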
When that signal exists, the downstream effects compound. A founder doesn't cold-email 200 VCs hoping one bites. The network surfaces the two whose thesis matches their problem. Outreach loses its grip, not because we blocked it, but because better signal makes most of it unnecessary.
When agents enter the picture, they finally have somewhere to go. Not the void, not LinkedIn, but a network where every node carries real, current, permissioned context, maintained by its own agent, updated continuously. Agent-to-agent coordination becomes meaningful because both sides are operating on real signal.
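One way to picture agent-to-agent coordination on permissioned context, as a hypothetical sketch only (`Agent`, `grant`, and `query` are invented names, not any real protocol):

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    owner: str
    context: dict                             # living context, updated continuously
    grants: set = field(default_factory=set)  # (requester, key) pairs allowed

    def grant(self, requester: str, key: str) -> None:
        # The human decides what is queryable, and by whom.
        self.grants.add((requester, key))

    def query(self, other: "Agent", key: str):
        # Current signal is returned only if permissioned; otherwise nothing.
        if (self.owner, key) in other.grants:
            return other.context.get(key)
        return None

founder = Agent("ada", {"raising": "seed, agent infra"})
investor = Agent("vc", {"thesis": "agent-native networks"})
investor.grant("ada", "thesis")

founder.query(investor, "thesis")   # permissioned: returns current signal
investor.query(founder, "raising")  # no grant: returns nothing
```

The point of the sketch is the asymmetry with cold outreach: nothing is broadcast, nothing is scraped, and every read is scoped to a grant the owner made.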
This is what we're building at Expreso.
Not a better profile. Not a smarter algorithm. Better proof of what you've built. Better signal of what you want.
The world doesn't need more connections. It needs the right ones.