Spam didn't disappear because people stopped sending it. It disappeared because the inbox got a filter.
Something new is filling the space. AI-generated content is arriving in inboxes, shared documents, vendor proposals, and internal memos at a volume the enterprise hasn't reckoned with. The mechanism is different this time. Spam came from strangers. What's arriving now comes from colleagues, counterparts, and vendors. It looks professional. It often runs two or three times longer than it needs to. And the person who sent it, in most cases, has not read it.
We call the consequence the decoding burden: the cognitive and time cost that transfers from the sender to the recipient every time an AI-generated communication goes out without review. The decoding burden doesn't feel like a cost to the sender. It is entirely a cost to the recipient.
## The editor left the building
Three conditions have to be true for AI slop to land in an inbox, and all three are now simultaneously true.
AI makes it possible to generate confident-sounding text on topics you don't understand well enough to write about yourself. An account manager can produce a technical summary of a product they've used twice. A coordinator can draft a strategic memo about a market they read one article about. The barrier to producing text has dropped to near zero. The barrier to producing accurate, well-calibrated, appropriately scoped text has not.
AI output also reads as plausible at a glance. The register is professional. The sentences are complete. The document has the shape of a document. Unless you know the territory well enough to catch what's off or missing, it passes a quick scan. It only breaks down when someone who actually knows the subject reads it carefully. That person is usually the recipient.
And the edit step has been outsourced along with the writing step. When someone writes, they edit by necessity: deciding what to include, what to cut, what needs more specificity. When someone uses AI to generate content, those choices belong to the model. Sending the output without review means the sender has outsourced not just the writing but the judgment that should accompany it. The document arrives. The judgment didn't.
These three conditions compound. An account manager writing about a technical topic they don't know deeply, using an AI that produces output that looks authoritative, without reviewing before sending, generates something that lands on the recipient as a multi-page document requiring either prior knowledge to interpret or significant time to decode. Neither was obvious to the sender. The sender experienced a time savings.
## The cost doesn't disappear
The sender's experience is efficiency. A document that would have taken two hours to write took twenty minutes to generate and send. By the sender's metric, AI worked.
The recipient's experience is the opposite. They receive a five-page brief containing two paragraphs of useful information, surrounded by paragraphs that don't follow, conclusions that don't connect to the premises, and claims they'd need to verify before acting on. Or they receive an eight-sentence email where three sentences would have done it, padded with context they didn't ask for and caveats that contradict each other. They spend time they don't have decoding something they didn't request.
The time didn't disappear. It transferred. Twenty minutes saved at the sending end appeared as thirty minutes of overhead at the receiving end. The decoding burden is this transfer: the cost the sender shed lands on the recipient.
At the scale of a team, the math becomes uncomfortable. If a group of people is sending AI-generated communications to other groups, and each communication requires meaningful interpretation effort, the aggregate cost across the receiving end isn't visible anywhere. Each recipient absorbs their portion individually. No line item. No flag. No system to surface it. The cost is real, distributed, and completely untracked.
| Stakeholder | Action | Metric (Tracked) | Cost (Untracked) |
|---|---|---|---|
| Sender | Generates 5-page brief | +20 min saved | Zero |
| Recipient 1 | Decodes brief | Zero | -15 min overhead |
| Recipient 2 | Decodes brief | Zero | -15 min overhead |
| Organization | Aggregate | High "productivity" | Negative net ROI |
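The ledger arithmetic can be made explicit. A minimal sketch with hypothetical figures (one sender, two recipients; every number here is an illustrative assumption, not measured data):

```python
# Hypothetical ledger for one AI-generated brief sent without review.
# All figures are illustrative assumptions, not measurements.
minutes_saved_by_sender = 20       # drafting time the sender avoided
decode_minutes_per_recipient = 15  # interpretation effort per reader
recipients = 2

# What the productivity dashboard sees: only the sender's side.
tracked_productivity = minutes_saved_by_sender

# What the organization actually nets once decoding time is counted.
recipient_cost = decode_minutes_per_recipient * recipients
net_minutes = minutes_saved_by_sender - recipient_cost

print(f"tracked: +{tracked_productivity} min")  # looks like a win
print(f"net: {net_minutes} min")                # 20 - 30 = -10
```

The gap between `tracked_productivity` and `net_minutes` is the decoding burden: positive on every dashboard, negative in aggregate, and the difference lives nowhere.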
People have always sent bad emails. What's changed is the rate. A person writing badly is constrained by how fast they can write. A person generating badly is constrained by nothing. The decoding burden doesn't scale with the number of bad writers in the organization. It scales with the throughput of the tools they're using.
## The invisible ledger
Spam was visible because it came from outside. The inbox filter worked because the signal was strong: unknown sender, no prior relationship, suspicious domain. Something in the message declared itself as noise.
The decoding burden is invisible because it comes from inside. The communication is from a person you work with, in a format you recognize, on a topic that's legitimate. Nothing triggers a filter, because there is no filter, because nothing about the signal looks like noise.
The productivity story makes this structural. AI adoption inside most enterprises is measured at the generation end: time saved drafting emails, documents produced per employee, summaries generated across functions. These metrics capture the sender's experience. They don't capture what happens downstream. The person who saved twenty minutes generating a vendor brief doesn't appear in the same dashboard as the three colleagues who each spent twenty minutes making sense of it. The org's AI productivity number went up. The aggregate cost went up too. Only one of those is being tracked.
## No filter is coming
Spam became a manageable problem because it was legible to a system. It had detectable properties: unfamiliar senders, characteristic vocabulary, structural patterns that distinguished it from wanted communication. A filter could learn those properties and catch spam before the inbox saw it.
The decoding burden doesn't have those properties. AI-generated content sent without review isn't distinguishable from good content by an automated system. It's distinguishable by a knowledgeable reader, but that reader is the one absorbing the cost, not the one positioned to prevent it at the source. There is no filter that sits between the sender and the recipient and catches the five-page document that should have been two paragraphs.
What would prevent the cost at the source is an organizational norm: something like "review it before you send it," understood not as a suggestion but as an expectation with accountability behind it. That norm doesn't exist in most enterprises. The technology has simply moved faster than the culture around it. The expectation that AI-assisted communication requires human judgment before it leaves someone's outbox hasn't solidified into something most teams treat as non-negotiable.
Until it does, the decoding burden goes where it always goes: to the recipient. Not because the sender chose to impose a cost. Not because of policy. Because the cost transferred automatically the moment the message was sent, and nobody had built the norm that would have kept it from transferring.
Spam got filtered. The filter worked because the signal was legible and the cost was external. The decoding burden has neither property. It will keep arriving until the organizations receiving it decide it costs enough to stop.
Trackmind helps enterprises design AI workflows and the operational discipline to run them. Learn about our AI and ML practice.