Scaffolding, Not Provenance
This is a bonus essay - something I've been chewing on about how I actually use AI to think.
The Inversion
We spent decades mining ourselves to make machines smarter. Attention mechanisms borrowed from how humans focus. Reward learning borrowed from behavioral psychology. The whole field was an extraction operation: study cognition, formalize it, ship it.
Nobody talks about running the arrow backward.
We made sand intelligent - and doing so laid the learning loop bare. Targets, attempts, feedback, revision, testing on cases you haven't seen. The loop was always there. Musicians know it. Athletes know it. Anders Ericsson - whose research on deliberate practice got flattened into the pop-famous "10,000 hours" idea - wrote a whole book about it. AI just made it impossible to ignore.
Which means you can steal it back.
Show Your Work
The piece that’s most valuable to me - and that I always pilfer unapologetically - is the reasoning trace.
Force a model to show its steps and performance jumps - dramatically. Researchers gave AI models math problems. When the models just answered, they got 18% right. When forced to show their work: 57%. Same model - the only difference was making the steps visible. Because steps create handholds. They turn a leap into a ladder.
Education researchers have known this works on humans forever. They call it self-explanation. One study: physics students who explained their reasoning out loud solved 82% of problems. Students who didn’t: 46%. That’s the difference between an F and a B.
Why? Explaining out loud forces reconstruction. You can't coast on that warm, fuzzy sense that something looks right - that's surface vibes, your brain stamping "ok, got it" without doing any work. Reconstruction is actually rebuilding the chain. Either you can or you can't. No faking it.
Most people never actually reconstruct anything. They read, they nod, they move on. They mistake familiarity for knowledge and confidence for competence. But that isn’t true understanding, and it is certainly not reproducible.
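The step-forcing move itself is tiny - it's a prompt wrapper, nothing more. Here's a minimal sketch; the wrapper wording and the `force_steps` name are my own illustration, not any particular API:

```python
# Sketch: turn a bare question into a show-your-work prompt.
# The wrapper wording is illustrative; any model API would receive the result.

def force_steps(question: str) -> str:
    """Wrap a question so the model must externalize its reasoning."""
    return (
        f"{question}\n\n"
        "Before answering: list each reasoning step on its own numbered line, "
        "state any assumption you are making, and give the final answer last."
    )

prompt = force_steps("If a train leaves at 3pm going 60 mph, when is it 150 miles away?")
```

The point isn't the wrapper text; it's that the visible steps hand you - not just the model - the handholds.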
The Homework Machine
Now you can generate those externalized steps on demand.
Before, the path to real understanding was brutal. Grind through confusion. Hope for a good teacher. Stumble into the right explanation at the right time. Or - more often - never get there at all. Just walk around with half-understood ideas dressed up as knowledge, hoping nobody asks a follow-up question.
Now you paste in your mess. Half-formed intuition, vague pattern-matching, the uneasy sense that something’s off but you can’t say what.
You ask the machine: “Walk me through your thinking.”
That trace shows you what you were skipping. Names the steps you treated as "obvious." Exposes the assumptions you didn't know you were making. Suddenly you can see the shape of the thing: the parts you'd been hand-waving, or pushing aside, or covering with a tarp, never to be mentioned. Or worse: the load-bearing steps you'd been walking on but never actually checked.
And when you finally see them - when you can hold the whole structure in your head and poke at each bit - that’s the click. The AI didn’t hand you the understanding; it handed you the scaffolding for your brain to build its own.
Climbing the Ladder
A founder sent me a deck. Ambitious, but murky - four product lines, a consulting arm, blockchain underneath. I knew something was wrong. Couldn’t say what.
Fed it to Claude with my half-baked concerns. Asked it to walk me through how a VC would see this.
What came back made it click. The problems I’d sensed were now named. The structure I’d been groping for was laid out in steps. I fed that back to the AI to write a comprehensive memo - but now I could verify it. I knew what should land. Wasn’t simply trusting the output - I was now reconstructing the logic from first principles.
She came back a week later with something meaningfully better. Single wedge, real evidence, vision subordinated to something testable. The memo was the scaffolding; she climbed it.
That’s the dynamic. Trace creates structure. Human supplies motion. Neither works alone.
The Catch
Sometimes, traces lie.
I was drafting an essay, wanted a historical anchor. The trace handed me a perfect quote: Max More, 1993, defining morphological freedom. Specific person. Specific year. Exactly the right philosophical register for early-90s transhumanism.
I typed five words into the chat box that caught it: "Double check this is real?"
Searched for the exact phrase. Nothing. Dug into More’s actual 1993 writing. What he wrote was clunkier, more technical, listed specific technologies. My quote was synthesized - plausible, well-formed, nonexistent.
The trace wasn’t malicious. It pattern-matched to “what a 1993 transhumanist would sound like.” Fluent, authoritative, wrong.
This is why scaffolding is not provenance. The trace helps you think. Doesn’t mean the thought is true. When a step is load-bearing - when the whole thing depends on it - you go outside the trace. Find the source. Run the numbers. Check it against something that bites back.
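The cheapest version of that outside-the-trace check is mechanical: does the exact phrase actually appear in the primary source? A sketch - the function and the sample text are mine, not More's actual writing:

```python
import re

def phrase_in_source(quote: str, source_text: str) -> bool:
    """True if the quoted phrase appears verbatim in the source,
    after normalizing curly quotes, whitespace, and case."""
    def norm(s: str) -> str:
        s = s.replace("\u201c", '"').replace("\u201d", '"').replace("\u2019", "'")
        return re.sub(r"\s+", " ", s).strip().lower()
    return norm(quote) in norm(source_text)

# Placeholder source text - not a real quotation:
source = "The original essay is clunkier, more technical, and lists specific technologies."
phrase_in_source("lists specific technologies", source)    # -> True: a real phrase survives
phrase_in_source("a perfect quotable definition", source)  # -> False: a synthesized one doesn't
```

A failed check doesn't prove the idea is wrong - only that the citation can't carry weight until you find where it actually lives.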
Structure Is Cheap Now
We mined human cognition to build machines that reason. Strange if we learned nothing in return.
Here’s what they taught us: the learning loop works. Targets, attempts, feedback, revision. And reasoning traces - the externalized steps that turn leaps into ladders - are now embarrassingly cheap to generate.
But the path to actually understanding something - the real click, where the logic reconstructs in your own head - that path is more accessible than it’s ever been. If you’re willing to push against the scaffolding instead of just leaning on it.
The trace is the ladder; you still have to climb it.


The game-changing hacks you taught me on top of this scaffolding:
1) Force the model to surface the meta-details (emotional state + unstated assumptions/context) around each request.
2) Call it out when it's serving 50th-percentile slop, then ask it to define - and achieve - 99th-percentile results.
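For what it's worth, both hacks reduce to reusable prompt add-ons. The wording below is my own sketch of them, not a canonical formula:

```python
# Illustrative prompt add-ons for the two hacks above; the phrasing is a sketch.

META_PROBE = (
    "Before responding: restate what you think I actually want, "
    "my likely emotional state, and any unstated assumptions or context "
    "behind this request. Ask one clarifying question if anything is ambiguous."
)

RAISE_THE_BAR = (
    "That reads like a 50th-percentile answer. Define what a 99th-percentile "
    "answer to this request would contain, then produce it."
)

def with_meta(request: str) -> str:
    """Prepend the meta-details probe to a request (hack #1)."""
    return f"{META_PROBE}\n\n{request}"
```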