The New Shape
"Tool Shaped Objects" asks the wrong question
Will Manidis published an essay this week called “Tool Shaped Objects” that gave a lot of people a phrase for something they already felt. His argument: most of what passes for AI usage today is performative. We’re not doing work with these systems. We’re experiencing the nice sensation of work. The token budget is the capex. The dashboard is the deliverable. The number goes up, but nobody asks what the number means (or cares).
He’s right about what’s happening, but he’s wrong about what it means.
The Right Observation, the Wrong Question
Manidis frames the current AI moment as a question: is this a tool, or a tool-shaped object? Are we doing real work, or are we playing Cow Clicker at institutional scale?
It’s a clean binary, and a satisfying one. It sorts the world into people who get it and people who are being duped. It lets you feel like you’ve seen through the hype. And it is, I think, completely the wrong question.
“Tool or tool-shaped object” only works as a frame if you know what the tool is supposed to do. You can tell whether a hammer is a real hammer because you know what hammers are for. But what are large language models for? Manidis assumes we know. He measures the current mess (the agentic workflows that go nowhere, the viral essays nobody fact-checks, the token budgets spent on memos nobody reads) against a standard of productive output borrowed from the world before these systems existed.
He’s asking whether the telephone is a good cuneiform tablet.
What General-Purpose Technologies Actually Do
There’s a pattern in the history of technology that’s so consistent it should be boring at this point. When a truly general-purpose technology arrives, not a better tool but a new medium, it doesn’t improve existing categories of work. It makes those categories inadequate descriptions of what’s now possible. And before anyone figures out the new categories, there’s a period of spectacular, expensive, embarrassing fumbling that looks like waste if you’re measuring it against the old frames.
Electricity is the clearest example. When factories first electrified in the 1880s, they did the obvious thing: they took the big steam engine out of the middle of the factory, put an electric motor in the same spot, and kept everything else the same. The power still flowed through a single central shaft. The machines still had to be arranged around it. The building was still designed for steam.
It didn’t work very well. For about thirty years, electrified factories showed almost no productivity gains over steam-powered ones. Critics (and there were many) pointed to this as evidence that electricity was overhyped. A tool-shaped object, if you will. The spending was real and the output gains were not.
Then a generation of managers and engineers who had grown up with electricity, who understood it not as a substitute for steam but as a fundamentally different thing, redesigned the factories from scratch. They realized that electric power didn’t need a central shaft. Every machine could have its own motor. Which meant machines didn’t need to be arranged in a line. Which meant the factory floor could be organized by workflow instead of by proximity to the power source. Which meant the resistance and line-shaft friction that had been eating thirty percent of the energy just disappeared. Which meant you could put the factory in a single-story building with windows and daylight instead of a cramped multi-story tower built around a vertical shaft.
Output nearly doubled. But the point is: none of this was visible from within the steam frame. If you asked “is the electric motor a better steam engine?” the answer was basically no. The right question was: what does a factory look like when you stop thinking about centralized power? That question took thirty years to answer. The fumbling was the only way to learn what the technology was actually for.
The internet went through the same thing. The first commercial use of the internet was basically “digital brochures,” companies taking their print advertisements and putting them on a website. The metrics were borrowed from print: page views as circulation, banner ads as display ads, hits as newsstand sales. For about a decade, the serious money was in trying to make the internet work as a better magazine or a better catalog or a better Yellow Pages.
Most of those efforts failed. The ones that succeeded (Amazon, Google, eventually Facebook) didn’t succeed by being better versions of what came before. They succeeded by discovering entirely new categories of action that the medium made possible. Search as a behavior. Social networking as a behavior. One-click purchasing as a behavior. Online reviews as a behavior. They were net new things that had no pre-internet equivalent and couldn’t have been predicted from within the magazine-and-catalog frame.
The fumbling phase (the digital brochures, the pointless Flash animations, the visitor count plugin) looked like waste at the time. It was. But it was also the mechanism by which an entire culture learned to think natively in a new medium. You couldn’t skip it. You couldn’t read your way to the insight. You had to let millions of people use the thing badly until a few of them stumbled onto the uses that mattered.
The Fumbling Is the Mapping
This is what’s happening with AI right now. The agentic workflows that produce only their own existence. The token budgets spent on reports nobody reads. The teams of smart engineers building systems of breathtaking complexity whose primary output is the experience of complexity itself. Manidis looks at this and sees FarmVille. I look at it and see people exploring a space of cognitive action that didn’t exist eighteen months ago, without a map, in the only way a genuinely new space can be explored: by wandering around in it and seeing what happens.
The outputs of this exploration will not be “better memos” or “faster legal briefs” or “cheaper code.” Some of that will happen and it will be the least interesting part. The interesting part will be net new categories of thinking and doing that don’t have names yet because the actions that produce them didn’t previously exist.
The interesting use of the telephone wasn’t faster telegraph messages. It was the 911 call. The conference call. The long-distance call with your grandparents. These were new social behaviors that emerged from the medium itself, that nobody predicted, and that couldn’t have been evaluated by the criteria of the telegraph era, because the telegraph era had no frame for “idle conversation over distance” or “instantaneous emergency contact with authorities.” Those concepts didn’t exist. The telephone made them possible, and the fumbling phase (the years of people using phones badly, of critics asking “why would anyone want to talk to someone they could just send a wire to?”) was how the new possibilities got discovered.
That’s where we are with AI. Not at the end of a hype cycle, but in the early, messy, expensive, embarrassing phase of exploring what becomes possible when the cost of a unit of cognitive work drops toward zero. We’re in the “putting an electric motor where the steam engine was” phase. The factories haven’t been redesigned yet. When they are, the current productivity debate will look as quaint as arguing about whether electrified factories were more efficient than steam.
The Kanna Proves the Opposite of What He Thinks
Manidis opens his essay with the story of Chiyozuru Korehide, a kanna-maker in Kyoto whose family has been forging blades for three hundred years. The blade takes days to set up. The shaving is transcendent. And a power planer does the same work in a fraction of the time. Manidis uses this to illustrate beautiful waste: the kanna exists so that the setup can exist. It’s aesthetic, not economic.
But follow the metaphor to where it actually leads.
The temples at Higashi Hongan-ji, the ones those blades were forged for, are still standing. The tradition of Chiyozuru kanna-making is still alive. The power planer exists and is more efficient by every conventional measure. And yet the kanna practice persisted, not because it’s pretty, but because it encodes something the power planer cannot. Three centuries of knowledge about metallurgy, wood behavior, blade geometry, and craft judgment, transmitted through practice, through the ritual of the setup itself. The days of hand-fitting the dai, flattening the blade back, mating the chipbreaker: that’s not wasted time. It’s a knowledge-transfer mechanism. It’s how you produce a craftsman who understands materials at a level that can’t be written down.
The kanna produced a tradition. And “tradition” isn’t the right word either, because it sounds soft. What it produced was a form of knowledge so durable that it outlasted the economic logic meant to replace it. The power planer is more efficient. The kanna is more generative. These are not the same axis, and confusing them (measuring the kanna by the planer’s criteria) is exactly the error Manidis is making with AI.
The people building complex agentic systems that produce nothing visible aren’t wasting time. They’re developing knowledge about failure modes, latency costs, prompt architecture, integration pain, and the actual behavior of these models under pressure that can’t be acquired any other way. This will show up in what is built next.
Two Sermons, One Congregation
The week’s other main character is Matt Shumer, whose essay “Something Big Is Happening” reached forty million readers. Manidis calls it slop. It isn’t. It’s a sermon, and I mean that descriptively, not dismissively.
Shumer is writing to his friends and family. He’s trying to compress a messy, fast-moving, genuinely disorienting reality into a felt sense of urgency. The COVID framing is overwrought. Some of the specific claims are inflated. But the underlying observation, that AI capability is improving faster than public perception is updating, isn’t wrong. He’s a guy who watched his own job transform and is trying to warn the people he cares about. That’s not slop. It’s a sincere attempt at translation.
But Shumer’s essay is built on the same assumption as Manidis’s critique. Shumer says: AI is an incredibly productive tool, wake up. Manidis says: AI is a tool-shaped object, it’s not productive at all, wake up. They’re having a fight about whether the telephone is a good telegraph. Shumer says it’s amazing; Manidis says it’s not; neither one stops to notice that it’s not a telegraph.
This is how culture processes the arrival of a new category before it has language for it. The evangelist and the skeptic aren’t opposed. They’re complementary. The sermon gets people to pick up the phone. The critique gets them to ask whether the call was worth making. Both are necessary. Neither is above the process it participates in. And neither can see what the phone is actually for, because that hasn’t been discovered yet. It will be discovered by the people who are currently mashing their palms into the keypad.
The Medium Is the Message (In Plain English)
Marshall McLuhan named this in 1964: the medium is the message. When a new medium arrives, everyone argues about the content. Is it good? Is it real? Is it productive? But the content is a distraction. The transformation is how the medium restructures the patterns of human life around it. The content of early television was I Love Lucy. The message was the suburbanization of attention. AI’s content right now is memos and code and agentic workflows. The message is that the interface between human intention and executed action is changing shape, and the cost structure of cognitive effort is being rewritten. You can’t see that by evaluating the output. The reshaping is the whole point.
What Becomes Possible
But this raises the fair objection: how do you know? Every failed technology had a fumbling phase too. 3D TV had evangelists. The Segway had breathless press coverage. The distinguishing pattern isn’t whether the early phase looks stupid. It always looks stupid. The pattern that separates technologies that reshape everything from technologies that fade is generality plus falling cost curves. 3D TV added depth to video and never got cheaper. The Segway solved one narrow problem and never expanded beyond it. Electricity and the internet could be applied to domains their creators never imagined, and they rode cost curves that kept the exploration space expanding faster than the waste accumulated. LLMs have both properties: they work across every domain that involves language and thought, and the cost per unit of cognitive work is dropping by roughly an order of magnitude per year.
The embryonic evidence is already visible if you look outside the productivity frame. I recently built a Claude Code plugin implementing a structured memory system for AI collaboration, not a wrapper or a shortcut, but a new category of object (tiered persistence, human-controlled promotion gates, correction propagation). It solves a problem that could not have been articulated two years ago, because the substrate it addresses didn’t exist. In medicine, clinicians are using these models to synthesize patient histories across fragmented record systems, producing not faster diagnoses but a different kind of clinical picture, one that surfaces patterns across years of data that no individual doctor had the time or cognitive bandwidth to hold in their head at once. And outside professional work entirely, people are using extended AI dialogues to process grief, articulate trauma, and work through emotional knots they couldn’t untangle with another human, not because the AI is a therapist, but because the absence of social consequence creates a space for honesty that didn’t previously exist. None of these are “better memos.” They are new categories of action in wildly different domains. That is what generality looks like before it has a name.
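To make the “new category of object” concrete: the core idea of tiered persistence with a human promotion gate can be sketched in a few lines. Everything here is illustrative, a minimal toy assuming invented names (the tier labels, the MemoryStore class, its methods), not the plugin’s actual code.

```python
from dataclasses import dataclass, field

# Tiers run from ephemeral to long-lived. Names are hypothetical.
TIERS = ["session", "project", "durable"]

@dataclass
class MemoryItem:
    text: str
    tier: str = "session"                        # new items start ephemeral
    corrections: list = field(default_factory=list)

class MemoryStore:
    def __init__(self):
        self.items = []

    def remember(self, text):
        """Record a new observation at the lowest tier."""
        item = MemoryItem(text)
        self.items.append(item)
        return item

    def promote(self, item, approved_by_human):
        """Promotion gate: nothing moves to a longer-lived tier
        without explicit human approval."""
        if not approved_by_human:
            return False
        idx = TIERS.index(item.tier)
        if idx < len(TIERS) - 1:
            item.tier = TIERS[idx + 1]
        return True

    def correct(self, item, correction):
        """Correction propagation: attach the fix to the original
        so later sessions see the corrected form, not the stale one."""
        item.corrections.append(correction)

store = MemoryStore()
fact = store.remember("Deploys happen on Fridays")
store.promote(fact, approved_by_human=True)      # session -> project
store.correct(fact, "Deploys moved to Tuesdays")
```

The design point is the gate itself: the model can propose memories freely, but durability is a human decision, which is what makes the object a collaboration structure rather than a cache.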
If the capability curve plateaus and cost stops falling, the exploration space stops expanding, and much of the current spending becomes a bubble, not an investment. That’s a real possibility, not a remote one. If I’m wrong about AI being electricity, that’s how we’ll know.
The Mess Is the Mechanism
Manidis ends his essay with a bumper sticker: “Ask what the number is before making it go up.” It’s good advice for a procurement meeting. It’s terrible advice for navigating a transformation.
When the landscape is new and unmapped, you don’t navigate by asking whether each step was productive. You navigate by walking. The mess (the FarmVille-at-institutional-scale, the agentic workflows that produce nothing, the viral essays nobody evaluates, the token budgets that buy the feeling of progress) isn’t what’s in the way. The mess is how a new space gets explored by a civilization that doesn’t yet have the categories to describe what it’s finding.
Every technology that ever mattered looked exactly this stupid, this wasteful, this performative, right before it changed everything. The factories were arranged around a shaft that didn’t need to be there. The websites were digital brochures for companies that didn’t need brochures. The phone calls were bad telegraphs.
And then someone redesigned the factory. Someone built Amazon. Someone called 911.
The question isn’t “is this number real?” The question is “what are the new numbers?” The only way to find out is to keep fumbling, embarrassingly, more honestly, more curiously, and with less concern about whether it looks like work, until the new shape reveals what it’s for.