Justin Heath

I build software that removes friction and keeps people moving.

I work across legacy and modern .NET stacks, usually where the process is broken, manual, or both. I fix it by building systems people can actually use. I'm also deep into collaborative AI architecture — designing how humans and agents build real software together.


What I'm Working On

  • The Discovery Journal

    A guided self-examination tool for discovering your cognitive architecture. Interactive web app, browser-native, no backend.

  • Family Coordination App

    Meal planning to recipes to shopping lists. Built for my actual household with Blazor, .NET 8, PostgreSQL, and Docker. GitHub

  • Claude Code Plugin

    Custom configuration tuned for .NET, C#, and PowerShell workflows so AI assistance fits real production constraints. GitHub

  • Schedule I Mod

    MelonLoader mod for Schedule I with C# support for both Mono and IL2CPP runtimes. GitHub

Visual Work / Generated Art

AI-generated, 2026

My automated workflows generate images from task context. I also have my assistant mark milestones with special art tied to the work itself.

Writing / Thinking

Friction as Telemetry

Most productivity advice assumes friction is a character flaw.

Distracted? Focus harder.
Inconsistent? Try a better system.
Struggling? Apply more willpower.

I accepted that premise for twenty years.

New tools. New frameworks. New Monday-morning reinventions. Each system worked briefly, then collapsed. The failure felt personal — not dramatic, just a constant background hum: this shouldn't be this hard for you.

Recently, I stopped optimizing systems and started profiling the operator.

Diagnostics.

How does my energy actually behave?
How does attention move?
What sustains engagement in practice, not theory?

The results were inconvenient.

I don't recharge by stopping — I recharge by switching modes.
I don't read sequentially — I sample, traverse, build mental maps.
I don't experience completion as rewarding — momentum is the signal.

Every productivity structure I had forced myself through was designed for a different cognitive architecture.

Linear workflows for a nonlinear thinker.
Completion-driven systems for a progress-driven brain.
Rest-recovery assumptions for a nervous system that experiences rest and boredom identically.

The real tax wasn't inefficiency. It was misalignment.

And layered beneath that misalignment was something more corrosive: the shame response. Not "this is difficult," but "this is difficult and therefore something is wrong with me."

You can't willpower your way out of that loop.

You can, however, stop building systems that trigger it.

That was the unexpected shift. The problem was never discipline. It was design.

Once I mapped the actual operating characteristics, the productivity question changed form entirely:

Not Which system works best?
But Which system matches reality?

Friction isn't always resistance.

Sometimes it's telemetry.

The Specification Shift

For twenty years, the bottleneck in software was implementation speed: how fast can you write it, debug it, ship it? That's not the bottleneck anymore.

When a human writes code, ambiguity in the spec gets resolved through judgment, context, and a Slack message that says "did you mean X or Y?" The human fills in gaps with reasonable guesses. When an AI writes code, ambiguity gets resolved with silent guesses — plausible-looking implementations that compile, pass linting, and have nothing to do with what the customer actually needs. The AI builds exactly what you described. If what you described was incomplete, you get incomplete software that looks finished.

The skill that matters now isn't "Can I build this?" It's "Can I describe this precisely enough that a machine builds it correctly?" That requires systems thinking, deep customer understanding, and a precision of language that most of us never had to develop — because we could always course-correct while we coded. That safety net is gone. Implementation complexity used to camouflage how few people were actually good at the specification work. The machines have stripped away that camouflage.

Cognitive Architecture as a Design Input

I spent five weeks building AI infrastructure at full velocity — 836 commits, features shipping daily. Everything worked. And it all felt wrong. Not broken-wrong. Arbitrary-wrong. Like building a house without knowing who was going to live in it. Turns out the answer was "me," and I'd never bothered to ask myself how I actually move through the spaces I build for myself.

So I stopped building and started examining the operator. Not the code. Me. I treated my own brain like a poorly documented legacy system and ran a structured analysis: How does my mind sustain momentum? What fragments it? Where does friction actually come from?

The answer wasn't discipline problems or focus issues — it was architectural mismatch. I was building systems designed for a brain I don't have. Linear workflows for a nonlinear thinker. Recall-dependent navigation for a recognition-dominant memory. Rest-recovery assumptions for a nervous system that recharges by switching, not stopping. Once I mapped the actual cognitive architecture, every design decision that had felt uncertain suddenly had a governing principle. The highest-leverage act in building personal software isn't gathering requirements — it's discovering how the person using it actually thinks. That process turned out to be transferable, and I built a tool for it at discover.heathdev.me.

You Don't Trust — You Instrument

People who insist they'll "never trust AI-generated code" are asking the wrong question. Trust isn't the mechanism. Instrumentation is.

Here's the shift: when agents write your code, the code itself becomes opaque. You stop reading it the way you'd review a pull request. Instead, you define scenarios — end-to-end descriptions of what "good" looks like, stored outside the codebase where the agent can't touch them. You let the agent build. Then you evaluate behavior against those scenarios. It's the same way machine learning model quality comes from evaluation metrics, not from reading the weights. Nobody opens a neural network and checks the numbers. You check what it does.

The critical principle is separation: the thing being validated cannot also control the validation criteria. Same reason students don't write their own exams. Your role shifts from code reviewer to scenario designer — and that turns out to be a more interesting job, because you're defining what matters instead of arguing about implementation details.
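The shape of that separation can be sketched in a few lines. This is an illustrative Python sketch, not my production setup — the scenario format, the `evaluate` harness, and the discount logic are all invented for the example. The point is structural: scenarios are plain data a human owns, the implementation is a black box, and the harness only observes behavior.

```python
# Scenarios are data, defined by a human and stored outside the codebase
# the agent modifies (in practice: a read-only file or a separate repo).
SCENARIOS = [
    {"name": "applies 10% bulk discount",
     "input": {"qty": 20, "unit": 5.0}, "expect": 90.0},
    {"name": "no discount below threshold",
     "input": {"qty": 5, "unit": 5.0}, "expect": 25.0},
]

def agent_built_total(qty: int, unit: float) -> float:
    """Stand-in for agent-generated code. We never read it — only test it."""
    subtotal = qty * unit
    return subtotal * 0.9 if qty >= 10 else subtotal

def evaluate(fn, scenarios):
    """Run every scenario against the implementation and report pass/fail.

    The implementation has no access to SCENARIOS, so it cannot adapt
    itself to the validation criteria — the separation the text describes.
    """
    results = {}
    for s in scenarios:
        got = fn(**s["input"])
        results[s["name"]] = abs(got - s["expect"]) < 1e-9
    return results

report = evaluate(agent_built_total, SCENARIOS)
```

The harness doesn't care whether the implementation is elegant, idiomatic, or even readable — only whether `report` comes back all green.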

The Capability Overhang

There is a massive gap right now between what AI development tools can do and what most people are actually doing with them. The frontier is agents writing entire codebases autonomously. The median is developers using autocomplete and occasionally pasting code into ChatGPT. That gap is temporary arbitrage — the people and teams who close it first capture enormous value.

But here's what nobody tells you when they hand you a new AI tool: you will get slower before you get faster. It's called the J-curve, and it's a well-documented adoption pattern. You spend time evaluating suggestions instead of just writing code. You spend time correcting "almost right" output. You context-switch between your mental model and the AI's. One senior engineer put it this way: "Copilot makes writing code cheaper but owning it more expensive."

The organizations seeing 25-30% productivity gains aren't the ones that installed Copilot and called it done. They went back to the whiteboard and redesigned their entire workflow around AI capabilities. The mistake most teams make is interpreting the initial dip as evidence that the tools don't work. It isn't. It's evidence the workflow hasn't adapted yet. And the distance between teams who push through the curve and teams who don't is getting wider every month.

Externalized Intent

If it only exists in the code, it doesn't exist.

Code captures what and how. But intent — the why, the constraints you considered, the alternatives you rejected, the shape of the problem as you understood it when you made the decision — that evaporates unless you deliberately put it somewhere. And every context switch forces you to pay for that loss. Sleep. Project switches. Weeks passing. You come back and re-derive intent from implementation, which is the most expensive activity in any momentum-driven workflow. You're doing archaeology on your own decisions.

Documentation isn't overhead. Tests aren't bureaucracy. They're the system's memory. Three forms, each serving a different moment: documentation captures why it's this way, tests capture what should be true, and visualization captures what the structure looks like. Without all three, every pause becomes a dig site. With them, every pause is resumable. The default failure mode for builders is building without externalizing — shipping code and leaving the reasoning in your head, where it has approximately a 72-hour half-life. The system has to resist that, because your future self is the one who pays.
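Here's what "tests as the system's memory" can look like in miniature — a hypothetical Python example (the function, the order-ID scenario, and the mainframe detail are invented for illustration). The assertion records what should be true; the docstring records why, including the alternative that was rejected.

```python
def normalize_order_id(raw: str) -> str:
    """Stand-in implementation: normalize an order ID at the boundary."""
    return raw.strip().upper()

def test_order_ids_are_case_insensitive():
    """Why: one upstream system uppercases IDs, the web form doesn't,
    so matching must survive both sources.

    Decision: normalize once at the boundary rather than compare
    case-insensitively everywhere (rejected: too easy to miss a call site).
    """
    assert normalize_order_id("  ab-123 ") == "AB-123"

test_order_ids_are_case_insensitive()
```

A future reader gets the constraint, the reasoning, and the rejected alternative without doing archaeology on the implementation.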

Background

  • Started in a hospital basement in 2000, routing calls through a switchboard database. Twenty-five years later, I still end up in the same kind of work: systems that should talk to each other but don’t, and processes running on patience instead of automation.
  • Eight years at Horizon Hobby, from support to building their B2B ordering platform and bridging .NET systems to IBM mainframes. That’s where I learned the interesting problems are usually at the seams.
  • Thirteen years at Volition, the studio behind Saints Row and Red Faction. I led IT development across HRIS, payroll automation, SharePoint migrations, and internal tools, and cut HR payroll processing time in half. The studio closed in August 2023.
  • Now at Veson Nautical, building analytics platforms with hundreds of structured fields per document, real-time dashboards for fleet-scale reporting, and pipelines that make compliance workflows survivable. I’m also running a governed AI-assisted development environment with specialized agents and verification frameworks.
  • The constant: find the manual, fragile thing held together by institutional knowledge, then replace the fragility with a system that lasts.