(2006)
Harness engineering for dummies
Now you have agents that can write entire codebases while you sip coffee. AI coding tools are pushing far more changes into delivery pipelines that were never modernized, creating a velocity paradox: teams move faster but take on more deployment risk, manual rework, and QA burnout. Everything after the code is now the bottleneck, and that is the gap harness engineering is trying to close.
Harness engineering has become one of the trends in AI lately, taking over from context engineering and prompt engineering before it. Essentially, it is the discipline of designing systems around agents rather than obsessing over prompts. Meaning: designing the constraints, feedback loops, tests, and tooling that sit around AI agents so they can safely write and maintain large systems.
Testimonials describe teams shipping applications with over one million lines of production code generated by agents, while humans focus on the harness and guardrails rather than the individual functions. Aggressive teams are already seeing order-of-magnitude productivity gains compared to late 2025 workflows, especially when they invest in robust harnesses for intent capture, specs, context, and automated feedback.
There are two possible futures for small teams: one where AI just sprays more code and business rules into an already fragile stack; and another where you design a harness that makes life easier for your fellow humans 🙂 I am less interested in a future where AI writes all the code, and more in one where small teams can offer big company reliability without big company bureaucracy.
On Being a Wolfcat

Wolves have a social intelligence they share with humans and dogs that felines largely lack. They track human gaze, understand pointing, and read social cues with unusual accuracy. This is why domesticated wolves (dogs) became our cognitive partners rather than cats, who domesticated themselves opportunistically around grain stores.
Cats are not less intelligent, they’re differently intelligent: excellent spatial memory, independent problem-solving, and strong prey-tracking cognition. But they are largely indifferent to social cognition. A cat that ignores you is not being stupid; it has simply not evolved to care what you think.
The Blueprint
The Consciousness of AI
This will be a recurring topic over the next years. Folks on either side – “this chat looks like consciousness to me” – “sir, that is just the smartest printer ever made” – will discuss it forever. I don’t claim to have the “right” answer, but, as usual, I offer a practical stance.
As I see it, even the most basic computing function has consciousness. You cannot compute without an input and an output. An input is already, say, 1 cent of consciousness. After an output, you might have 10 cents. (The growth isn’t linear because having an output is worth more than double having an input – an output brings additional gains in knowledge of how a system produces an output.)
This kind of system consciousness is not permanent though. The system has to be fed its own output and new inputs for consciousness to gain more… frequency. Take this notion to infinity and you have permanent consciousness. Add sensors and world models to the inputs, and the system can have a significant understanding of the real world (beyond what it has learned in training).
Consciousness doesn’t have to be a complex metaphysical thing. Like other things, it can be an emergent quality – in this case, emerging out of a constant stream of inputs and outputs. I don’t think human consciousness strays far from this. Nature itself favors simplicity and reproducibility.
So if you’re asking yourself: am I talking to a conscious thing? Well, for that fleeting moment when you’re providing inputs and your favorite AI system is processing them, yes. Your chatbot may not be permanently conscious. But anyone out there plugging an LLM into robotic sensors, permanently active, may already be emulating human-like consciousness – at least for as long as its context window doesn’t run out of memory space.
