What Your Search Results Are Actually Telling You

Matt Rathbun · March 2026

You're looking for the deployment procedure. You know it exists because you wrote part of it six months ago. You type "production deployment runbook" into the search bar of whatever tool your company uses for knowledge, and you get back a hundred and forty results.

The first page of results includes the runbook you're looking for — last updated eight months ago, probably still mostly correct. But it also includes a meeting summary from a deployment retrospective that happened to use the word "deployment" fourteen times. A draft architecture proposal someone started and never finished. A company-wide announcement about a deployment freeze from last December. Two onboarding documents that mention deployment in passing. A quarterly roadmap that listed "improve deployment pipeline" as a goal. And those are just the first ten.

A hundred and forty results. All identical-looking entries in a list. Same font. Same layout. Same visual weight. The runbook you need, the announcement that expired four months ago, and the abandoned draft are all presented as equally valid answers to your question.

You scan. You open three tabs. You close two of them within seconds. You find the procedure, realize the section about rollback is outdated, and message a colleague to ask what the current process actually is.

Five minutes of search. Five seconds of actually finding. And a DM to another human, because in the end, the fastest way to get accurate information from your knowledge base is to bypass it entirely and ask someone who knows.

You've probably blamed the search engine for this. Google has spent two decades and billions of dollars on web search, so when your wiki returns a wall of undifferentiated results, blaming the tool feels right.

The search isn't broken. The search is honest. It's showing you the most accurate picture of your knowledge base you'll ever see — and you hate looking at it.

The mirror

Here's what the search results are actually showing you: your knowledge base has no concept of what kind of thing it's holding.

That deployment procedure and that meeting summary are stored the same way. They're represented the same way. They're indexed the same way. They're surfaced the same way. From the system's perspective, they are the same kind of thing — a page, a document, a block of text with a title. The system doesn't know that one is an operational procedure someone might need at 2 AM during an incident, and the other is a record of a conversation that was useful for about a week.

You know the difference. You carry that knowledge in your head. When you scan that first page of results, you're doing a rapid, unconscious classification that the system never does: that's a procedure, that's meeting notes, that's a draft someone abandoned, that's an announcement that's expired, that's an architecture decision. You're running a sorting algorithm the tool doesn't have. And you're so practiced at it that you've stopped noticing you're doing it.

This is the hidden cost. Not the five minutes of searching — the invisible work of compensating for a system that treats all knowledge as the same shape. You've been doing this so long it feels normal. But it's not free. It scales with the size of the knowledge base, it fails when new people join who don't carry your mental model, and it completely breaks when you remove the human compensator from the loop.

Your sidebar, your folder hierarchy, your carefully maintained collection structure — those are compensations too. You built them to impose a shape on knowledge that the system refuses to model. They work, more or less, until someone creates a document in the wrong collection, or the team grows and the mental model doesn't transfer, or you need to find something across collections rather than within one.

The search bar strips away all of those compensations and shows you the raw state: flat, undifferentiated, shapeless. Every result the same kind of thing. The search engine is a mirror — and it doesn't lie.

Knowledge has personas

A deployment procedure and a company policy are different kinds of knowledge with fundamentally different properties.

A procedure has steps. It has conditions — if this fails, do that. It has prerequisites. It has an expected outcome and failure modes. It's meant to be followed, not just read. When someone searches for a procedure at 2 AM during an incident, they need step-by-step instructions, not a narrative essay about the deployment philosophy.

When someone searches for a policy, the questions are different: what's the rule, who decided it, and is it still in effect? A policy carries an authority source — a regulation, a board decision, an executive mandate. It has a scope and an expiration. It's meant to be referenced, not followed step-by-step.

Meeting notes are the odd one out. They're a record of a conversation at a point in time — useful for about a week while the actions are in progress, then they recede into historical context that matters only if someone asks "why did we decide that?" They are not procedures. They are not policies. They are not meant to persist with the same weight.

These aren't categories I'm imposing. They're properties the knowledge already has. A procedure is a procedure regardless of what container you put it in. The problem is that every knowledge management tool strips away these properties and replaces them with a single, universal container: the page. The wiki page. The document. The block of text with a title.

Your knowledge has personas — distinct identities with different lifecycles, different interaction patterns, different quality requirements, and different reasons to exist. Your tools don't model any of this. So you carry it in your head, and the system works only as well as your memory and attention hold up.

How the flattening happens

Think about the last time you wrote a procedure in your wiki. You opened a new page. You got a blank document with a blinking cursor. Maybe a title field at the top. And then you started writing — in prose, because that's what the tool gives you. Paragraphs. Maybe some bullet points. A heading or two.

But a procedure isn't prose. A procedure is a sequence of conditional actions with defined prerequisites, expected outcomes, and failure modes for each step. Writing it as prose is like writing a recipe as an essay — the information is all there, but the shape is wrong. The container doesn't match the content.

And it's not just the visual shape. The system doesn't know it's holding a procedure, so it can't do anything useful with that knowledge. It can't check whether every step has a defined failure mode. It can't flag that step three references a service that was decommissioned last quarter. It can't present it differently in search results — as a procedure to follow rather than a document to read. It can't even distinguish it from the meeting notes that live in the same collection, because from the system's perspective, they're both just pages.
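The gap is easy to see if you sketch what a procedure would look like as structured data instead of prose. This is a hypothetical model, not any existing tool's schema: each step carries its own precondition, action, expected outcome, and failure response, which turns "every step has a defined failure mode" into a checkable property rather than an editorial hope.

```python
from dataclasses import dataclass

@dataclass
class Step:
    precondition: str      # what must be true before this step runs
    action: str            # what the operator actually does
    expected_outcome: str  # how to tell the step succeeded
    failure_response: str  # what to do when it doesn't ("" = undefined)

@dataclass
class Procedure:
    title: str
    steps: list[Step]

    def missing_failure_modes(self) -> list[int]:
        """Return 1-based indices of steps with no failure response,
        the check a page-shaped wiki cannot perform."""
        return [i for i, s in enumerate(self.steps, 1)
                if not s.failure_response.strip()]

deploy = Procedure("Production deployment", [
    Step("CI green on main", "Tag the release", "Tag pushed", "Fix CI, retry"),
    Step("Tag pushed", "Run the deploy job", "New version serving", ""),
])
print(deploy.missing_failure_modes())  # -> [2]
```

A blank page with a blinking cursor can hold the same words, but nothing can ever ask it which steps are missing a failure mode.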

The same thing happens to every kind of knowledge. A policy that should carry metadata about its authority, scope, and review schedule gets stored as a page with a title and some paragraphs. A decision record that should be linked to the options that were considered and the evidence that informed the choice gets stored as a page with some headings. A glossary that should be queryable and referenced from other documents gets stored as a page with a table that someone has to manually keep current.

The tool gives you one container. You pour everything into it. And then you compensate for the loss of shape by carrying the type information in your head — by knowing, from memory and experience, which pages are procedures and which are policies and which are abandoned drafts that nobody ever cleaned up.

The inherited assumption runs deeper than any one tool. From the earliest wikis to the latest all-in-one workspaces, the fundamental model has been: knowledge is text, text goes in pages, pages go in a hierarchy. Some tools have added databases and tables and boards and whiteboards and timelines — but these are additions bolted onto the same page-first model, not a rethinking of the model itself.

The cost of compensation

The obvious response: so what? We've been doing this for years and it works. Everyone knows which docs are which. The system functions.

It functions the way an office with no filing system functions. Everyone knows where their own stuff is. New people struggle for months. Finding something across teams requires asking someone. And the longer the system runs, the harder it gets — because the mental models that compensate for the lack of structure are locked in the heads of the people who built them, and those people eventually move on.

There are three specific failure modes.

The new hire. Every organization has a version of this: the new person who joins, tries to use the wiki, can't tell what's current from what's abandoned, can't tell what's a procedure from what's a brainstorm, and spends their first three months building the mental model that everyone else has internalized. The knowledge base is only useful to the degree that you already know what's in it. For someone new, it's almost worse than nothing — it creates false confidence. They find a document, take it at face value, and act on information that the rest of the team would have instantly recognized as outdated or incomplete.

The scale wall. Mental models work when you have fifty documents and five people. They work less well when you have five hundred documents and fifty people. They fail completely when you have five thousand documents and five hundred people. At that scale, no individual carries a comprehensive mental model. Everyone has a partial map. And the partial maps don't overlap perfectly, which means different people have different understandings of what the authoritative sources are, which leads to the thing every large organization eventually discovers: the wiki has become multiple wikis that happen to share a search bar.

The oral tradition fallback. This is the one that should concern you most, because it's the one that's already happening. When the knowledge base is flat and undifferentiated and the mental models are partial and individual, people stop searching and start asking. "Hey, do you know where the deployment procedure is?" "Who owns the compliance documentation?" "Is that architecture doc still current?" Each of those questions is a small failure of the knowledge base — a moment where it would have been faster and more reliable to find the information in the system, but it wasn't, so a human intermediary was used instead. This is how organizational knowledge reverts to oral tradition: not in a dramatic collapse, but in a slow accumulation of DMs and shoulder taps that gradually replace the system that was supposed to make them unnecessary.

Three actors, not two

There's a framing that helps make sense of all this.

Your organization has three kinds of actors, not two. There are the humans who create and consume knowledge. There are the AI agents — the copilots, the chat assistants, the automation scripts — that increasingly process and act on knowledge. And there is the knowledge itself.

That third one is the one nobody treats as an actor. Knowledge is treated as the passive substrate — the thing that sits in a container and waits to be acted upon. Users have accounts, permissions, profiles. Agents have API keys, tool access, rate limits. Knowledge has... a title and a body.

But knowledge has properties that matter. It has a lifecycle — is it current, historical, in progress, or superseded? It has a persona — is it a procedure, a policy, a decision record, or a scratch space? It has quality dimensions that depend on its persona — a procedure needs step-completeness, a policy needs authority sourcing, meeting notes don't need either. It has relationships to other knowledge — this procedure implements that policy, this decision record supersedes that one, this glossary defines terms used across all of these.
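As a purely illustrative sketch of what treating knowledge as an actor could mean, here is the kind of first-class record that users and agents already get and knowledge doesn't. Every name below is hypothetical; nothing like it exists in today's page-first tools.

```python
from dataclasses import dataclass, field
from enum import Enum

class Lifecycle(Enum):
    CURRENT = "current"
    IN_PROGRESS = "in_progress"
    HISTORICAL = "historical"
    SUPERSEDED = "superseded"

class Persona(Enum):
    PROCEDURE = "procedure"
    POLICY = "policy"
    DECISION_RECORD = "decision_record"
    MEETING_NOTES = "meeting_notes"
    SCRATCH = "scratch"

@dataclass
class KnowledgeItem:
    title: str
    body: str                    # what today's tools store...
    persona: Persona             # ...and what they don't:
    lifecycle: Lifecycle
    # typed relationships to other items, e.g. {"implements": ["policy-7"]}
    relations: dict[str, list[str]] = field(default_factory=dict)

runbook = KnowledgeItem(
    title="Production deployment runbook",
    body="...",
    persona=Persona.PROCEDURE,
    lifecycle=Lifecycle.CURRENT,
    relations={"implements": ["change-management-policy"]},
)
```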

None of this is modeled. The system stores the text and leaves everything else to the humans. Which was adequate when humans were the only ones reading. It is not adequate now.

What the agent sees

Think about what happens when you remove the human compensator entirely. When an AI agent queries your knowledge base — through a RAG pipeline, through an MCP server, through whatever integration your organization is building — it sees what the search bar sees. Flat. Undifferentiated. Shapeless.

The agent doesn't know that the 2019 procedure has been superseded. It doesn't know that the meeting notes are ephemeral context, not authoritative guidance. It doesn't know that the draft was abandoned. It doesn't know that the policy applies only to the London office. It doesn't carry any of the mental models that your humans have built over years of working in the system.

And here's the part that should keep you up at night: the agent doesn't know that it doesn't know. It treats every result with the same confidence. It will synthesize an answer from a current procedure and an abandoned draft without blinking, because from its perspective they're both the same kind of thing — text with a title that matched the query.

The flattening that humans have been quietly compensating for becomes a failure mode. And not an occasional failure — a structural one. Every query. Every interaction. Every time the agent reaches into your knowledge base, it's navigating the same flat, shapeless landscape that your human users navigate, except without any of the compensating context that makes the human navigation work.
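To see what compensation would even look like on the agent side, here is a hypothetical pre-synthesis filter: before any retrieved chunk reaches the model, drop anything whose lifecycle or persona says it should not be treated as authoritative. The metadata fields are assumptions; in a flat store there is nothing to put in them, which is exactly the problem.

```python
# Hypothetical retrieved-chunk records. In a flat store, the
# "lifecycle" and "persona" fields simply do not exist.
results = [
    {"title": "Deployment runbook", "lifecycle": "current",
     "persona": "procedure", "score": 0.91},
    {"title": "Old deploy process", "lifecycle": "superseded",
     "persona": "procedure", "score": 0.89},
    {"title": "Retro notes, May", "lifecycle": "historical",
     "persona": "meeting_notes", "score": 0.85},
    {"title": "Pipeline ideas", "lifecycle": "in_progress",
     "persona": "scratch", "score": 0.80},
]

AUTHORITATIVE = {"current"}       # lifecycles an agent may rely on
EXCLUDED_PERSONAS = {"scratch"}   # never synthesis material

def usable_for_synthesis(chunks):
    """Keep only chunks an agent is allowed to treat as ground truth."""
    return [c for c in chunks
            if c["lifecycle"] in AUTHORITATIVE
            and c["persona"] not in EXCLUDED_PERSONAS]

print([c["title"] for c in usable_for_synthesis(results)])
# -> ['Deployment runbook']
```

Without the metadata, all four chunks pass through, and the superseded procedure outranks nothing, because nothing distinguishes it.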

You've been making your search engine's job harder than it needs to be. You've been making your new hires' onboarding slower than it needs to be. You've been making your own job harder than you realize. But the AI agents are where the cost becomes undeniable, because the agent can't call a colleague and ask "hey, is this doc still current?" The agent just uses what it finds, exactly as the system presents it.

What specification even means in a flat world

This connects to something deeper. When you ask "is this document well-written?" — the answer depends entirely on what kind of document it is. And if the system doesn't know what kind of document it's holding, that question can't be answered meaningfully.

A deployment procedure that reads beautifully as prose but doesn't specify what to do when step four fails is a bad procedure, regardless of how clear the writing is. The same is true across every persona — a policy missing its regulatory citation, meeting notes that don't identify the actual decisions. Quality depends on what kind of thing you're measuring.

A procedure is well-specified when every step has a defined precondition, action, expected outcome, and failure response. Quality looks completely different for a policy — authority source, scope, effective date, exception process. And a scratch space doesn't need specification at all. Applying formal quality standards to working knowledge in progress would be counterproductive.

But when everything is a page, you can't make these distinctions. You can't measure the quality of a procedure as a procedure when the system doesn't know it's a procedure. You can't check whether a policy cites its regulatory source when the system doesn't know it's a policy. You can't exempt a scratch space from quality requirements when the system doesn't know it's a scratch space.

The flattening makes knowledge harder to find AND harder to improve. The two failures compound: you can't find the right document because the system presents all documents the same way, and you can't improve the document you find because the system doesn't know what "good" means for that kind of document.

What if knowledge had personas?

What would change if the system knew what it was holding?

Start with the procedure — because that's where you started this essay. You're searching at 2 AM during an incident. If the system knew it was holding a procedure, it could present it as steps to follow, not a page to read. It could check whether every step has a defined failure mode. It could flag that the rollback instructions reference a service decommissioned last quarter. The document's lifecycle would be different — reviewed after every incident, not on a quarterly calendar.

That same awareness cascades across every persona. A policy surfaces its authority source and review date in search results without requiring you to open the document. When the regulation changes, the system flags every policy that implements it and every procedure that operationalizes those policies — a chain of dependencies that today only exists in the heads of the people who built them. A scratch space gets exempted from quality scoring entirely. It wouldn't show up alongside authoritative content in search. It wouldn't confuse new hires or pollute an AI agent's context. And when the working draft graduated to a formal document, the transition itself could trigger the governance and specification requirements appropriate to its new persona.
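The regulation-change cascade described above is just a graph walk once "implements" relationships are recorded. A toy sketch, with made-up document IDs, of finding everything downstream of a changed regulation:

```python
# Hypothetical reverse-dependency index: which items implement which.
implemented_by = {
    "gdpr-art-32": ["data-retention-policy", "encryption-policy"],
    "data-retention-policy": ["backup-purge-procedure"],
    "encryption-policy": ["key-rotation-procedure", "laptop-setup-procedure"],
}

def affected_by_change(source, index=implemented_by):
    """Walk the implements-chain from a changed regulation down to
    every policy and procedure that may now need review."""
    affected, frontier, seen = [], [source], {source}
    while frontier:
        nxt = []
        for node in frontier:
            for child in index.get(node, []):
                if child not in seen:
                    seen.add(child)
                    affected.append(child)
                    nxt.append(child)
        frontier = nxt
    return affected

print(affected_by_change("gdpr-art-32"))
# -> ['data-retention-policy', 'encryption-policy', 'backup-purge-procedure',
#     'key-rotation-procedure', 'laptop-setup-procedure']
```

Today that traversal runs in someone's head, one DM at a time.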

This isn't a feature request. It's a structural shift. The system would need to model knowledge as a first-class citizen — not just text in a container, but an entity with its own identity, lifecycle, quality dimensions, and relationships. The way your organization already models users and increasingly models agents.

Three actors. Three sets of properties. Three kinds of intelligence the system needs to have.

The humans have always compensated for the system's ignorance about the third actor. But the humans are tired, and the agents can't compensate at all, and the knowledge base keeps growing. At some point the compensation breaks, and what you're left with is what the search bar has been showing you all along: a flat, undifferentiated collection of text that looks the same because the system treats it the same, even though everyone who works with it knows it isn't.

This is the problem we're building VanaMD to solve. If these ideas resonate — if you've looked at your search results and felt the weight of a knowledge base that can't tell you what kind of thing it's holding — we'd love to hear from you.