← Blog

I Used Claude Code and Obsidian to Build a B2B SEO Content Strategy

A complete SEO content strategy build from first discovery prompt to publishable drafts, including the actual prompts, outputs, what worked, and what didn't.

Heads up: we set up a system using Claude Code and Obsidian in part one. If you’re already onboarded to Claude Code, carry straight on. Otherwise, go back and get set up first.

After telling my friends in product marketing that they could likely create a very powerful workflow for SEO content using Claude Code and Obsidian, I thought I’d better test my theory by actually developing a workflow and creating a demo. I wanted something that couldn’t be confused for a real company, so I worked in my someday goal of being a spec-fic writer: I invented a fictional futuristic tech product company called Continuum Scheduler and set out to build some SEO content for it.

This is the full SEO content strategy build for my fictional company, from the first discovery prompt to critiqued, publishable drafts. I’ll show you the actual prompts I used, the outputs Claude returned, what worked, and what didn’t. It took about three and a half hours (across multiple sessions) and burned through a meaningful chunk of my monthly token budget. But burning tokens in the name of experimentation is what we’re doing right now.

Is this a tutorial on how to create AI slop? I’ve been around sloppy SEO content before, back when content farms were producing hundreds of unusable articles a week and someone (me) had to read them all. I once worked for a horse medication company doing exactly that (triggered). This is a long way from that, but the content is not “done for you.” The AI amplifies what you bring to it, and a sharp marketer still makes the difference between content that passes a brief and content that’s actually good.

How to read this: this is a full field report, so you don't have to read it in order.

  • Want the workflow? Start at Phase 1 and copy the prompts as you go.
  • Want the prompts only? Open the collapsed prompt blocks.
  • Want the scale of the experiment? Jump to The Big Picture.
  • Want the reusable version? Download the companion workflow file.

| Phase | Goal | Time | Main output |
|---|---|---|---|
| Phase 1 | Strategy & discovery | ~37 min | Strategy, pillars, voice guide |
| Phase 2 | SEO research | ~24 min | Keyword plan |
| Phase 3 | Content map & briefs | ~41 min | 23 briefs |
| Phase 4 | Interlinking | ~20 min | Link architecture |
| Phase 5 | Writing & drafting | ~84 min | Pillar and cluster drafts |
| Phase 6 | Critical review | ~10 min | Reader-specific critique |

A short detour into my (fake) product: Continuum Scheduler

What is it: Continuum Scheduler is a coordination system for distributed teams. It handles scheduling for teams where time doesn’t behave consistently: time zones, async work, deep-space latency.

Why this company: It’s fictional, and the weird angle is the point. A typical B2B SaaS company would have produced generic output. Continuum Scheduler forced real strategic decisions: Claude had to find genuinely unclaimed keyword territory, build positioning that actually differentiated, and produce a voice with real character. The strangeness made the workflow harder in interesting ways, and it kept the demo fun.

The goal: Build a content strategy with pillar and cluster articles, then draft a few pieces to show how the workflow handles SEO structure, interlinking, and critical feedback. I’m using a pillar and cluster model because that’s what I’m familiar with, and I will note that I am, in fact, not a product marketer.

If you read the previous article, you’ll know that this workflow doesn’t make you faster, but it makes you more powerful. I went into this expecting to learn more about creating an SEO content strategy; I came out of it genuinely wondering if my fictional tech company could be a product. The idea that distributed team coordination is a systems design problem, not a behavior problem or a tools problem, was genuinely revealing to me.

You can play along with me

You can work through this with the prompts as written, copying them one by one. Or drop the companion workflow file into your project and let Claude run the whole thing.

Download the companion workflow file
Drop it into your Obsidian vault and run:
Read @CONTENT_STRATEGY_WORKFLOW.md and let's begin.

A note on tracking time and building instinct around tokens

I’ve been tracking time spent per phase throughout this demo so you can see the real effort behind the workflow. When it comes to tokens, what matters is building your own instinct for which phases are expensive, which prompts are efficient, and when you’re being wasteful. I’ve been able to develop that by checking my usage regularly as I work.


Ready to begin? Open both windows

Before you start Phase 1, you need both tools pointing at the same vault:

  1. Obsidian: Open your target vault or folder. You’ll use this to read and edit files as they’re created.
  2. Terminal: Navigate to your vault / folder and start a Claude Code session with claude. This is where you’ll interact with Claude.

When Claude creates or edits files, you’ll see them appear in Obsidian. When you make edits in Obsidian, you can reference them in Claude Code with @filename.md.

Phase 1: Strategy & Discovery

Phase 1 at a glance

Goal
Build the strategic foundation: product, audience, pillars, and voice.
Tool
Claude Code, Sonnet
Time
~37 minutes
Output
STRATEGY.md, VOICE_GUIDE.md, and content architecture
Worth copying?
Yes. This is the part that makes the rest of the workflow useful.

This is where Claude turns messy positioning into a reusable strategic foundation. Don’t rush this one; every later phase inherits its assumptions.

Phase 1A: Initial Discovery (The Conversation)

What to do

Start with a discovery interview before asking Claude for content ideas. Make it ask questions one section at a time, then synthesize the answers back into a strategic foundation before moving on.

Prompt: Initial discovery interview

You are a senior content strategist specialising in B2B/B2C tech marketing. Your job is to help me build a content strategy that drives SEO traffic and answers the questions my target audience is actually asking.

Before we build anything, I need you to interview me. Ask me questions one section at a time — wait for my answer before moving to the next section. Cover:

1. The company and product — what we do, who we serve, what problem we solve
2. Our target audience — who they are, their job titles, their pain points, what they search for
3. Our competitive landscape — who else is creating content in this space, what they do well, where the gaps are
4. Our current content situation — what we already have, what’s working, what’s not
5. Our goals — what does success look like? Traffic? Leads? Brand authority?

After I’ve answered all sections, synthesise what I’ve told you and reflect it back to me as a “strategic foundation” summary before we move to building the content pillars. Push back on anything that seems unclear or contradictory.

I fed Claude the Continuum Scheduler website I’d been building, which already had some product copy and feature descriptions. Then I ran the discovery prompt and let it interview me.

I should say upfront: my strategy going in was flaky at best. I vaguely knew what Continuum Scheduler was (a coordination system for teams where time doesn’t behave consistently, including the bit where your colleague might technically be in the past). But I hadn’t thought hard about who would actually buy it or why, and Claude was going to have to drag that out of me.

Claude was apprehensive about the relativistic angle, perhaps reasonably. When I described the core product problem (how do you schedule a meeting when two people aren’t in the same temporal frame of reference, and what if you accidentally book a call with someone who’s in their Friday when you’re in your Tuesday?), there was a noticeable pivot in how it responded. It could see the earthbound problems clearly: timezone cognitive load, the async black hole, the ambiguous non-response. Those were familiar territory. The relativity material it handled more carefully, and it felt like it was giving some very “shit sandwich” style feedback about a business plan it clearly had concerns about.

It organised the whole thing into two tiers. Tier 1: the earthbound problems - high search volume, emotionally resonant, the “what time is that for you?” tax that every distributed team knows. Tier 2: the relativistic problems - zero competition, entirely unclaimed territory, the place where Continuum gets to be genuinely weird. As for “the grandfather paradox as a scheduling conflict”, it said “no existing tool has a resolution pathway for this”, so we can be a category king here, nice. It told me, directly, that leading with the relativistic angle and letting earthbound content follow is a harder content job but a more defensible long-term position. I thought this was pragmatic advice, which I noted and proceeded to ignore anyway.

But the category frame it found was genuinely useful: coordination systems for teams where time itself is the variable. That’s not a product category that exists, which means no one owns it. It said there is no universal “now.” Sufficiently distributed teams don’t share a present. They share a coordination system. This started to scratch my brain and had me thinking there might be something I could actually build. But let’s not get ahead of ourselves; we’re currently staking SEO claim to these terms. The actual solution can come later.

Phase 1B: Build the Content Pillars

What to do

Ask Claude to turn the discovery work into 3-5 content pillars, including strategic value, reader type, cluster examples, risks, and a clear recommendation for which pillar to start with.

Prompt: Content pillar strategy

Based on everything we’ve discussed, I want you to propose a content pillar strategy.

A content pillar is a broad topic we can own — it should be:
- Directly relevant to what [COMPANY NAME] does
- Something our target audience actively searches for
- Defensible — not so broad that we’re competing with Wikipedia, not so narrow we run out of things to say
- Connected to our product or service without being purely promotional

Propose 3–5 content pillars. For each one, give me:
- The pillar name and a one-sentence description
- Why this topic is strategically valuable for us
- The type of reader this pillar attracts and where they are in the buying journey
- 3 example cluster topics that would sit under it
- Any risks or weaknesses (e.g. high competition, commoditised content)

Then give me your recommendation on which pillar to prioritise first and why.

Be direct. If a pillar idea is weak, say so.

Claude came up with four pillars, three of which made obvious strategic sense: async coordination (crowded, but high volume: the traffic engine), timezone and scheduling intelligence (closer to the product, higher intent, where the real buyers are), and coordination systems design (systems-level thinking for the enterprise buyers who actually sign contracts).

Pillar 3 was my favourite: The Physics of Coordination. What happens to scheduling when time itself stops being consistent. This had zero competition and was entirely unclaimed territory. This is the one I’d been hoping for. The cluster topics Claude proposed under it included “There is no universal ‘now’: what general relativity means for distributed teams” and “How NASA schedules around communication blackouts.” And the one I’m most excited for: “The grandfather paradox as a merge conflict: causality and calendar systems.” I don’t know exactly what that article is yet, but I want to read it.

Claude was clear-eyed about the trade-off: Pillar 3 has no direct search demand. It won’t drive signups. But it does something the other three pillars can’t: it makes someone following an idea land on Continuum rather than looking for a scheduling tool. “They’re not searching for a scheduling tool,” it said, “they’re following an idea.” Now it was getting on board my spec-fic train of thought.

Its actual recommendation was to start with Pillar 2, which was closest to conversion and had the most bounded scope, and to publish one Pillar 3 piece a month from day one. Pillar 3 is not for traffic, but it establishes the thing that makes Continuum different.

Phase 1C: Voice & Style Guide

What to do

Create a reusable voice guide before drafting any articles. This gives later writing prompts a standard to check against, instead of relying on taste and memory.

Prompt: Voice and style guide

I need to define the editorial voice and style for all content we produce. This will become a reference document every piece of content is checked against.

Based on what you know about [COMPANY NAME] and our audience, draft a voice and style guide. Include:

1. Voice — 3–4 adjectives that describe our tone, each with a brief explanation and a “we sound like this / we don’t sound like this” example
2. Audience respect — how technical can we be? What do we assume they already know?
3. Things we always do — e.g. use active voice, lead with the reader’s problem, use real examples
4. Things we never do — e.g. use jargon without explanation, make claims we can’t back up, write long intros that bury the point
5. A note on promotional balance — how do we mention our product without every post feeling like an ad?

Format this so it can be dropped directly into a VOICE_GUIDE.md file.

The brief I gave Claude here was roughly: this is a fictional B2B company making something that sits at the intersection of physics and workplace ops. The reader is a head of ops or an ops-adjacent buyer: smart, pressed for time, probably skeptical. The writing should be digestible for them but not dumbed down. It should feel vaguely futuristic without being alienating, and both numbers and factual grounding matter.

What came back was better than I expected, and I’ll admit I went back and forth with it a bit on tone before it landed. The thing it got exactly right was the humour:

“We have a sense of humour, but we don’t perform it. The comedy in our content comes from stating genuinely absurd things with complete seriousness. We never wink at the reader.”

The whole product is a bit absurd, you know? Scheduling for teams operating across relativistic drift is genuinely funny if you say it with a straight face. The moment you start nudging the reader (“get it? because time travel?”), it collapses. Claude got it.

The full output is below. The voice guide is used in later prompts when it comes time to actually write blog posts.

Output: full voice guide

This document defines the editorial voice for all Continuum content. Every piece — blog post, landing page, social copy, newsletter — should be checked against it before publishing.

Voice

Precise We say exactly what we mean. We don’t reach for approximations when the specific word exists. We don’t write “a lot of teams struggle with this” when we can write “a team of 12 sending 20 meeting invitations a week spends roughly 40 minutes per person per week on timezone arithmetic.”

  • We sound like this: “Time zones are a political hack — a human agreement to carve the Earth’s rotation into convenient offsets. They work because everyone is on Earth. Beyond Earth, they fail immediately.”
  • We don’t sound like this: “Time zones can be really challenging for globally distributed teams, especially in today’s remote-first world.”

Dry We have a sense of humour, but we don’t perform it. The comedy in our content comes from stating genuinely absurd things with complete seriousness. We never wink at the reader.

  • We sound like this: “The grandfather paradox as a scheduling conflict. If a future version of your team sends back information that causes the cancellation of a meeting which would have produced that information — you have a scheduling conflict that no existing tool has a resolution pathway for. This is on the roadmap.”
  • We don’t sound like this: “Okay, hear us out — what if your calendar had to deal with time travel? Wild, right?”

Direct We lead with the point. We don’t warm up, we don’t explain what we’re about to say before we say it, and we don’t summarise what we just said after we’ve said it. The reader’s time is the constraint we’re designing around.

  • We sound like this: “Your async message didn’t get ignored. It arrived at 11pm on a Friday. There’s a difference, and your coordination system should know it.”
  • We don’t sound like this: “In this post, we’re going to explore some of the common challenges that distributed teams face when it comes to asynchronous communication, and why it might not always be your colleagues’ fault when they don’t respond.”

Grounded We make big claims only when we can back them up. We use concrete examples, real scenarios, and specific numbers. We don’t gesture at complexity — we describe it. When we venture into speculative territory (relativistic scheduling, deep space operations), we’re clear about what is established physics and what is extrapolation.

  • We sound like this: “General relativity is unambiguous on this point: there is no universal present moment. ‘Now’ is local. This is not a philosophy problem for teams at lunar distance — it is an engineering constraint.”
  • We don’t sound like this: “As teams become more distributed than ever, the nature of time itself starts to become a real issue for collaboration.”

Audience Respect

Our readers are smart non-specialists. They don’t need a physics degree to follow our relativistic content, and they don’t need a management consulting background to follow our coordination systems content. But they are not beginners, and we don’t write for beginners.

What we assume they already know:

  • The basics of distributed work — async vs sync, time zones, remote team tools
  • How to use a calendar and why it’s failing them
  • General familiarity with workplace productivity concepts

What we don’t assume:

  • Any knowledge of physics or relativity beyond the cultural baseline
  • Familiarity with aerospace operations or space mission logistics
  • That they’ve heard of [COMPANY NAME] before

The calibration: Explain the concept, not the vocabulary. If we use “reference frame,” we define it once and move on — we don’t avoid the term, and we don’t over-explain it across three paragraphs. We trust the reader to follow if we’re clear.

Things We Always Do

  • Lead with the reader’s problem. The first paragraph should make the reader feel seen.
  • Use active voice. “The calendar assumes a shared present” not “a shared present is assumed by the calendar.”
  • Use specific numbers and scenarios. “A team of 12” not “a mid-sized team.”
  • Name the mechanism. Don’t just say something is hard — explain why it’s hard.
  • Earn the transition to product. If we mention [COMPANY NAME], the article has to establish the problem clearly enough that a solution feels like relief.
  • End on something useful. A reframe, a framework, a concrete next step.

Things We Never Do

  • Use filler adjectives. Seamless, robust, powerful, innovative, cutting-edge.
  • Write long introductions. If the first paragraph doesn’t contain useful information, cut it.
  • Make claims we can’t back up. No invented statistics, no unattributed research.
  • Explain the joke. If we’re being dry, we’re being dry.
  • Use jargon without definition. Industry terms are fine; undefined insider language is not.
  • Write for search engines first. Our content should read like it was written for a person and happen to rank well.
  • Over-reference competitors. We can acknowledge they exist. We don’t position ourselves against them unless genuinely informative.
  • Bury the point. If the most interesting thing is in paragraph six, move it to paragraph one.

Promotional Balance

[COMPANY NAME] content exists to help people think more clearly about coordination — not to sell them something.

The rule: Every piece should be fully useful to someone who never becomes a customer.

How we mention the product:

  • Naturally, at the point where a reader would reasonably ask “so what does that look like in practice?”
  • With specificity — what does [COMPANY NAME] actually do here, not vague features
  • Once per piece

The ratio to aim for: Roughly 80% of posts should be product-agnostic. 20% can be product-adjacent or product-focused. That 20% earns its place because of credibility built by the 80%.

Phase 1D: Create STRATEGY.md

What to do

Turn the discovery conversation, approved pillars, and voice guide into one reference document. This becomes the source of truth for later SEO, brief, and drafting prompts.

Prompt: strategy document

Based on everything we’ve discussed — the discovery conversation, the content pillars we’ve approved, and the voice guide — create a comprehensive STRATEGY.md file.

Include:

1. Project context — what [COMPANY NAME] does, why content strategy matters

2. Target audience — who we’re writing for, their pain points, where they search

3. Content pillars — the [PILLAR COUNT] pillars we’ve approved, with:
- Pillar name and one-sentence description
- Why it’s strategically valuable
- The type of reader it attracts
- Priority order (which pillar first, and why)

4. Competitive positioning — how we differentiate, what gaps we’re filling

5. Success metrics — what does good look like?

Create this file as STRATEGY.md in the project root so it can be referenced throughout the workflow.

This was the payoff prompt for everything in Phase 1: one instruction to pull the discovery conversation, the pillars, and the voice guide into a single reference document. I fed it a lot of material about Continuum and it ran with it, and my full output is below.

Output: full strategy document

Product

Continuum Scheduler is a coordination system for teams operating across time that no longer behaves consistently — time zones, async work, and relativistic drift. It reframes scheduling as a systems problem, not a calendar problem: accounting for latency, drift, and asynchronous flow so teams can collaborate without booking meetings with someone’s past — or future — self.

Tagline: “Keeping your team on schedule, relatively.”

Target Audience

Primary buyer: Ops, HR, or anyone who procures productivity software at the enterprise level.
Primary users: Everyone in the company.
Secondary audience: Individual employees who want to recommend Continuum upward to leadership.

Pain Points — Two Tiers

Tier 1 (Earthbound):

  • The “what time is that for you?” tax
  • The async black hole
  • Ambiguous non-response (unread vs. read-but-not-yet vs. their Friday ended)
  • Context collapse across time

Tier 2 (Relativistic):

  • Simultaneity is frame-dependent; calendars assume otherwise
  • Which version of Tuesday? — time dilation makes the date ambiguous
  • Meeting invitations that arrive before they were sent
  • Scheduling someone during their objective past
  • Hard communication blackouts
  • Time zones fail beyond Earth
  • The grandfather paradox as a scheduling conflict
  • No universal “now”

Competitive Landscape

Known competitors: Google Calendar, Calendly, and scheduling startups.
Key insight: No competitor is operating in the relativistic/deep-space coordination space. That territory is unclaimed.

Goals and Strategy Decisions

| Element | Decision |
|---|---|
| Primary goal | SEO traffic |
| Secondary goal | Brand authority |
| Publishing cadence | 1–2 posts/week |
| Content lead | Relativistic angle (brand differentiation) |
| Volume engine | Earthbound distributed team content (SEO) |

Strategic Position

Continuum does not compete with Google Calendar and Calendly on their terms. It stakes out a category of its own: coordination systems for teams where time itself is the variable. The relativistic content earns brand authority and makes Continuum memorable. The earthbound content does the SEO heavy lifting.

Content Pillars

Pillar 1: Async Coordination Highest search volume. Traffic engine. Highest competition — win on precision and opinion, not volume.

Pillar 2: Timezone & Scheduling Intelligence More specific than Pillar 1. Higher intent. Closest to product fit. Start here.

Pillar 3: The Physics of Coordination Zero competition. Category creation. Brand authority. Publish monthly from week one.

Pillar 4: Coordination Systems Design Systems-level thinking. Enterprise buyers. Hold for month 2–3.

Phase 1 Summary

By the end of Phase 1 (about 37 minutes across four prompts) I had three new documents: a strategy, a voice guide, and a content architecture.

This whole process felt very collaborative. I came in with a lot of raw material, Claude asked the right questions, pushed back when things didn’t add up, and reflected my own thinking back at me in a form I could actually use. It wasn’t making decisions for me, it was making it possible for me to make better decisions faster.

If I were doing this for a real product, I’d spend longer here. But even in demo mode, Phase 1 did what it’s supposed to: by the time I moved to SEO research, I knew what Continuum stood for, who we were speaking to, and what it sounded like.

Phase 2: SEO & Keyword Research

Phase 2 at a glance

Goal
Research keywords, search intent, competition, and content gaps for each pillar.
Tool
Claude Code, Sonnet with web search
Time
~24 minutes
Output
Keyword strategy, priority matrix, and 19 approved cluster ideas
Token load
Moderate

This is the phase that checks whether the strategy can meet search behaviour in the real world. I ran the same prompt against all four pillars in parallel, and each one wrote its findings to a separate keywords.md file.

Phase 2A: Keyword Research Per Pillar

What to do

Run keyword research for each approved pillar. Ask Claude to assess intent, competition, likely searcher, competitor coverage, and the gaps where your content can realistically win.

Prompt: keyword research per pillar

Read STRATEGY.md and identify the pillar called [PILLAR NAME].

I want you to do keyword research for this pillar. Your goal is to find:

1. A primary keyword for the pillar itself — the broadest, highest-intent term that a pillar article could target
2. 10–15 related keywords and phrases that cluster around this topic — these will inform cluster article topics
3. For each keyword: your assessment of search intent (informational, navigational, commercial, transactional), competition level (low/medium/high — reason this out from what you find), and who is likely searching for it

Use web search to:
- Look at what content currently ranks for the main topic
- Identify what questions people are asking (look for “people also ask” patterns, forums, Reddit, community sites)
- Note which competitors from STRATEGY.md are ranking and for what

When you’re done, write your findings to /pillars/[PILLAR NAME]/keywords.md using this structure:

# Keywords: [PILLAR NAME]

## Primary keyword
[keyword] — [intent] — [competition assessment] — [who’s searching]

## Keyword clusters
| Keyword | Intent | Competition | Notes |
|---|---|---|---|

## Competitor content notes
[What’s ranking, who owns it, where the gaps are]

## Opportunities
[Your top 3 observations about where we can win]

Repeat this process for all [PILLAR COUNT] pillars in STRATEGY.md, writing a separate keywords.md for each one.

I ran this prompt, and Claude spun up four parallel research passes, one per pillar, each searching, assessing, and writing findings to its own file. The earthbound pillars came back roughly as expected: async coordination is crowded (Notion, Loom, every remote-work publication owns this space), timezone intelligence has more breathing room at the analytical end, and coordination systems design competes with management consulting content.

And then the Pillar 3 results came back:

  • Relativistic coordination: zero. Entirely unclaimed term — Continuum can define it.
  • Scheduling without shared time reference: zero. Entirely unclaimed — direct Continuum positioning.
  • No universal now relativity: very low. Nearly unclaimed as a phrase.

Look. Are we all going to Mars in twenty years? Maybe. Elon thinks so. And if we are, these are going to be actual real problems… how do you schedule a standup when your colleague is in a different gravitational field and their Thursday is arriving slightly later than yours? The keyword research confirmed that nobody is writing about this from a coordination angle yet. Not NASA, not the scheduling software companies, not anyone.

So yes, I am claiming this keyword territory now. It’s a long game. A very long game.

The full Pillar 3 findings are below. I’m showing this one in full because it’s the most interesting example of what the research phase can surface when the topic is genuinely weird.

Output: full Pillar 3 keyword findings

Key findings: Pillar 3 (The Physics of Coordination)

Primary keyword: time dilation coordination — informational — zero competition on the coordination/scheduling angle — technical readers, aerospace-adjacent engineers, science journalists, curious generalists following science writing

Keyword clusters:

| Keyword | Intent | Competition | Notes |
|---|---|---|---|
| time dilation explained | informational | very high | Wikipedia, Britannica, LiveScience dominate; compete only on the “what this means for teams” angle |
| GPS time dilation | informational | medium | Engineering/CS audience; bridging physics to technology is Continuum’s entry point |
| Mars communication delay | informational | medium | NASA, Space.com rank; the “what enterprise teams can learn” angle is unclaimed |
| deep space communication blackout | informational | low | Aerospace enthusiasts; NASA technical docs dominate but the coordination lesson isn’t drawn |
| solar conjunction Mars | informational | low | Space enthusiasts; very specific, earns links from space publications |
| no universal now relativity | informational | very low | Philosophy/physics readers; nearly unclaimed as a phrase |
| frame-dependent simultaneity | informational | very low | Advanced physics audience; the distributed systems tie-in is unclaimed |
| relativistic coordination | informational | zero | Entirely unclaimed term — Continuum can define it |
| space mission scheduling | informational | low | Aerospace community; NASA docs rank but don’t speak to coordination lessons |
| causality distributed systems | informational | medium | CS engineers (Lamport clocks territory); connecting this to calendar systems is open |
| scheduling without shared time reference | informational | zero | Entirely unclaimed — direct Continuum positioning |
| interstellar communication delay | informational | low | Sci-fi readers, futurists; earns links from space/tech publications |

Competitor content notes: Physics terms (time dilation, relativity, simultaneity) are dominated by Wikipedia, Britannica, LiveScience, and physics textbook sites. NASA’s own technical reports cover Mars communication delays exhaustively. The connection between these ideas and coordination/scheduling exists nowhere in published content. No competitor from the scheduling software space has entered this territory. It is entirely open.

Opportunities:

  1. Every keyword in this cluster is unclaimed from a scheduling/coordination angle. Continuum can own “what this means for teams” without competing against science publishers on the physics itself.
  2. NASA Mars communication delay content ranks on Space.com and NASA — a piece connecting those ideas to enterprise coordination is a genuine open lane for earning inbound links from those publications.
  3. “Causality and distributed systems” lives in CS content (Lamport’s clock papers, distributed systems textbooks) — a piece connecting Lamport logical clocks to calendar systems would earn links from engineers and be genuinely novel.

When you do this for your own project:

  • Which pillars have the clearest, lowest-competition keyword opportunities?
  • Where is the gap between what ranks and what actually answers the reader’s question?
  • Which pillars can you own vs. which ones will require you to out-write much larger competitors?
  • Where can you earn inbound links by connecting concepts no one else has connected?

Phase 2B: SEO Plan Overview

What to do

Synthesize the separate keyword files into one SEO plan. Make Claude rank the opportunities, group search intent, call out competitive gaps, and tell you which keywords to avoid.

Prompt: SEO plan synthesis

Read all the keywords.md files across /pillars/.

I want you to produce a master SEO_PLAN.md that synthesises the keyword research into a coherent SEO strategy. Include:

1. Keyword priority matrix — rank all primary keywords by opportunity (a combination of relevance, competition, and our realistic ability to rank given we’re a new blog with no existing authority)

2. Search intent map — group our target keywords by intent stage. Which keywords attract people early in their research? Which attract people close to a decision?

3. Competitive gaps — where are competitors weak or absent that we can move into quickly?

4. Quick wins vs long-term plays — which keywords could we realistically rank for in 3–6 months vs which are 12+ month bets?

5. Risks — any areas where the content landscape is too crowded, or where we’d be writing into a black hole

Write this to SEO_PLAN.md in the repo root. Be direct — if a pillar has weak keyword opportunity, say so and suggest what to do about it.

The SEO plan synthesised all four keyword files into a single priority matrix, 30+ keywords scored A through D. The most useful thing it did was tell me what to skip. Several keywords came back flagged as D or “avoid entirely”, e.g. async vs sync communication (high competition, dominated by Notion and every remote-work publication), time dilation explained (Wikipedia owns this and will for the foreseeable future). Having an AI tell you that some of your ideas aren’t worth pursuing is oddly liberating. Now I know where to best spend my time.

Output: priority matrix excerpt
| Keyword | Pillar | Competition | Score |
|---|---|---|---|
| scheduling as a systems problem | Coordination Systems Design | zero | A |
| time zone cognitive load | Timezone Intelligence | low | A |
| relativistic coordination | Physics of Coordination | zero | A |
| how to design team handoffs | Coordination Systems Design | low | A |
| async code review best practices | Async Coordination | medium | B |
| world time buddy alternative | Timezone Intelligence | low | B |
| GPS time dilation | Physics of Coordination | medium | C |
| async vs sync communication | Async Coordination | high | D |
| time dilation explained | Physics of Coordination | very high | D |
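
The letter grades are Claude’s judgment call rather than a formula, but the shape of the rubric is easy to approximate. Here’s a minimal Python sketch of an opportunity scorer (my own illustrative heuristic, not the one Claude actually used):

```python
# Illustrative keyword-opportunity scorer -- my own heuristic, not Claude's rubric.
COMPETITION_PENALTY = {"zero": 0, "low": 1, "medium": 2, "high": 3, "very high": 4}

def grade(relevance: int, competition: str) -> str:
    """relevance: 0-3, how close the keyword sits to the product.
    competition: the label from the keyword research."""
    opportunity = relevance - COMPETITION_PENALTY[competition]
    if opportunity >= 2:
        return "A"
    if opportunity == 1:
        return "B"
    if opportunity == 0:
        return "C"
    return "D"

print(grade(3, "zero"))  # relativistic coordination -> A
print(grade(2, "high"))  # async vs sync communication -> D
```

The point isn’t the exact weights; it’s that relevance and competition pull in opposite directions, and a new blog with no authority should only fight where the penalty is low.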

Phase 2C: Cluster Topic Identification

What to do

Turn the SEO plan into cluster article ideas. Keep this as a review step rather than writing to file immediately, so you can approve the topics before the workflow starts generating briefs.

Prompt: cluster topic identification

Read STRATEGY.md, SEO_PLAN.md, and all /pillars/[PILLAR NAME]/keywords.md files.

For each pillar, I want you to define the cluster articles that will sit underneath it. Each cluster article should:

- Target one specific keyword or question from the keyword clusters we identified - Serve a specific reader need (not just stuff keywords) - Be distinct enough from other clusters that there’s no cannibalisation - Connect logically back to the pillar topic

For each pillar, propose 4–6 cluster article topics. For each cluster, give me:

- Proposed article title (written as a reader would search for it) - Primary keyword it targets - One sentence on what the article covers and why someone would read it - Where in the buying journey this reader sits

Present this as a structured list I can review before we build briefs. Do not write to file yet — I want to approve the cluster list first.

The first ideas AI gives you in a brainstorm are always the obvious ones. Ask for content ideas and you’ll get 10 tips for distributed teams and how to run a better meeting. That’s not Claude being bad at its job, that’s how brainstorming works (humans do this, too). You have to move through the generic ideas to get to the good ones. In a human brainstorm, after 20 minutes you start finding the less obvious angles. After 40 minutes you’re somewhere new. After two hours of deep work you might be in genuinely weird territory.

That’s where we were now.

The earthbound clusters were real problems I’d actually seen in workplaces: Cross-Timezone Meeting Fatigue Is Real — Here’s What’s Causing It. The Hidden Cost of Timezone Math. Why Teams Default to Meetings (Even When Everyone Wants Fewer). These aren’t generic, they’re the specific things that make distributed work genuinely hard, named precisely.

And then the Pillar 3 clusters: Causality Is a Scheduling Problem: What Distributed Systems Got Right That Calendar Software Missed. The research behind that one surfaced Lamport’s 1978 paper on logical clocks, which solved the problem of causality in distributed systems without a shared clock. Calendar software, in 2026, still hasn’t asked the same question. That’s not just a content gap. That’s a product gap.
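
If you’ve never met Lamport clocks, the idea is small enough to sketch. A logical clock orders events by causality instead of wall time: every message carries a timestamp, and a receiver jumps its own clock past the sender’s. A minimal Python sketch (mine, for illustration; Lamport’s 1978 paper is the real reference):

```python
# Minimal Lamport logical clock: orders events by causality, not wall time.
class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self) -> int:
        # A local event advances our own logical time.
        self.time += 1
        return self.time

    def send(self) -> int:
        # Stamp an outgoing message with our logical time.
        return self.tick()

    def receive(self, msg_time: int) -> int:
        # Jump past the sender's clock so the receive is ordered
        # after the send -- no shared "now" required.
        self.time = max(self.time, msg_time) + 1
        return self.time

alice, bob = LamportClock(), LamportClock()
stamp = alice.send()   # Alice sends an invite at logical time 1
bob.receive(stamp)     # Bob's clock jumps to 2: provably "after" the send
```

That’s the whole trick: ordering without a shared clock. Calendar software still assumes the shared clock.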

By the end of Phase 2C I was increasingly (perhaps delusionally) convinced that this SEO strategy might cause me to build a real product. This is where I think it gets exciting for marketers: extended deep work with AI isn’t just faster brainstorming. It’s brainstorming that keeps going until it reaches somewhere genuinely new.

Output: full cluster list across four pillars

Pillar 1: Async Coordination

  • 1.1 When Async Breaks Down: The Failure Modes Nobody Talks About — when async breaks down — Awareness
  • 1.2 How to Write Async Updates That Actually Get Read — how to write async updates — Consideration
  • 1.3 What Is a Response Time Culture (and How Do You Build One Without Micromanaging)? — response time culture — Consideration
  • 1.4 Async Standup Formats That Actually Replace the Daily Meeting — async standup format — Consideration
  • 1.5 Async Code Review: How to Give Feedback Without Starting a Week-Long Thread — async code review best practices — Consideration

Pillar 2: Timezone & Scheduling Intelligence

  • 2.1 Cross-Timezone Meeting Fatigue Is Real — Here’s What’s Causing It — cross-timezone meeting fatigue — Awareness
  • 2.2 The Hidden Cost of Timezone Math — time zone cognitive load — Awareness
  • 2.3 How to Rotate Meeting Times Fairly Across Time Zones — rotating meeting times fairly — Consideration
  • 2.4 Distributed Team Scheduling: A Systems-Level Guide — distributed team scheduling best practices — Consideration
  • 2.5 What World Time Buddy Doesn’t Tell You (and What to Look for Instead) — world time buddy alternative — Decision

Pillar 3: The Physics of Coordination

  • 3.1 There Is No Universal “Now”: What General Relativity Means for Distributed Teams — relativistic coordination — Discovery
  • 3.2 How NASA Schedules Around Mars Communication Blackouts — Mars communication delay — Discovery
  • 3.3 What GPS Taught Us About Time (and Why Your Calendar Doesn’t Know It Yet) — GPS time dilation — Discovery
  • 3.4 Causality Is a Scheduling Problem: What Distributed Systems Got Right That Calendar Software Missed — causality distributed systems — Discovery

Pillar 4: Coordination Systems Design

  • 4.1 Your Scheduling Problem Is a Systems Problem (Not a Calendar Problem) — scheduling as a systems problem — Awareness/Consideration
  • 4.2 What Is Coordination Overhead? (And How to Measure It on Your Team) — coordination overhead — Awareness
  • 4.3 How to Design Team Handoffs That Don’t Require a Meeting — how to design team handoffs — Consideration
  • 4.4 The Meeting Audit: How to Tell Which Meetings Actually Need to Exist — meeting audit — Consideration/Decision
  • 4.5 Why Teams Default to Meetings (Even When Everyone Wants Fewer) — why teams default to meetings — Awareness

This is a demo, so I approved the list as-is. If you’re working on a real thing… do your due diligence.

Phase 2 Summary

By the end of Phase 2 I had a keyword strategy, a priority matrix, and 19 approved cluster article ideas, all in about 24 minutes.

But the more interesting outcome was the shift in the work itself. Phase 1 was strategy and foundations. Phase 2 was where the research started generating genuine curiosity: ideas I wanted to follow, questions I hadn’t thought to ask, territory that felt genuinely unclaimed. That’s the thing about going deep with AI: you have to earn the interesting output. The obvious stuff comes first. Keep going.

Phase 3: Content Map & Briefs

Phase 3 at a glance

Goal
Turn the SEO plan into a structured content map and writer-ready briefs.
Tool
Claude Code, Sonnet
Time
~41 minutes total: ~31 minutes Claude execution, plus ~10 minutes audit review
Output
4 pillar briefs, 19 cluster briefs, posting schedule, and quality audit
Token load
Very high: ~323,743 tokens, mostly from 23 parallel brief agents

This is where the workflow gets big. The content map turns scattered keyword ideas into a publishable system, then the brief generation turns that system into execution-ready work.

Phase 3A: Build the Content Map

What to do

Turn the approved pillar and cluster topics into a content architecture. Ask Claude to map each pillar, cluster, reading journey, entry point, conversion-adjacent article, and cross-pillar bridge.

Prompt: content map

Read STRATEGY.md, SEO_PLAN.md, and all keywords.md files in /pillars/.

I have approved your cluster topics.

Build a CONTENT_MAP.md that maps the full content architecture. Structure it as follows:

For each pillar:
- Pillar name, primary keyword, and one-line description
- List of cluster articles sitting under it, each with their primary keyword
- A note on the logical reading journey — how does someone move from a cluster article to the pillar and back?

Then add a section called “Content Relationships” that notes:
- Which clusters serve as entry points (high search volume, top of funnel)
- Which clusters are conversion-adjacent (reader is close to a decision)
- Any clusters that bridge two pillars

Format this so it’s easy to scan — use tables where it helps. Write to CONTENT_MAP.md in the repo root.

Claude created CONTENT_MAP.md, the master architecture document, but the useful thing wasn’t the file structure. It was that the system started noticing relationships between articles before we had even reached the interlinking phase.

Output: CONTENT_MAP.md structure

For each pillar it included:

  • Primary keyword and strategic role (e.g. “traffic engine,” “brand differentiation”)
  • A table of the 4–5 cluster articles with title, primary keyword, and journey stage
  • A reading journey paragraph explaining how a reader moves between cluster and pillar (and across pillars)

The most useful section was Content Relationships, where Claude grouped articles three ways:

  • Entry points — top-of-funnel pieces likely to be a reader’s first touch (e.g. 1.1 When Async Breaks Down, 2.1 Cross-Timezone Meeting Fatigue, all four Physics articles via inbound links)
  • Conversion-adjacent — articles where the reader has buying authority or is actively evaluating (2.5 World Time Buddy Alternative, 4.4 The Meeting Audit, 4.1 Scheduling as a Systems Problem)
  • Pillar bridges — clusters that sit at the intersection of two pillars and link both ways (1.3 Response Time Culture bridges Async ↔ Timezone; 4.3 Team Handoffs bridges Async ↔ Coordination Systems; 3.4 Causality Is a Scheduling Problem bridges Physics ↔ Coordination Systems)

This bridging logic became the seed of the interlinking plan in Phase 4 — Claude had effectively pre-mapped the cross-pillar links while building the architecture.
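
If it helps to picture the architecture, here’s a toy model of the map (my own sketch, not a file Claude produced): each cluster points at its home pillar, and a bridge is simply a cluster attached to more than one.

```python
# Toy model of the content map: bridge clusters attach to two pillars.
# Names are from the map above; the structure is the point.
clusters = {
    "1.3 Response Time Culture":             {"Async Coordination", "Timezone Intelligence"},
    "4.3 Team Handoffs":                     {"Async Coordination", "Coordination Systems"},
    "3.4 Causality Is a Scheduling Problem": {"Physics of Coordination", "Coordination Systems"},
    "2.1 Cross-Timezone Meeting Fatigue":    {"Timezone Intelligence"},
}

# Bridges become the cross-pillar links in the Phase 4 interlinking plan.
bridges = {name for name, pillars in clusters.items() if len(pillars) > 1}
print(sorted(bridges))
```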

Phase 3B: Generate All Briefs

What to do

Generate a brief for every pillar and cluster article from the content map. If you use parallel agents, watch your token usage closely: this is the phase where the workflow becomes expensive.

Prompt: parallel brief generation

Read CONTENT_MAP.md, SEO_PLAN.md, STRATEGY.md, and VOICE_GUIDE.md.

I want you to generate a brief for every piece of content in the content map — every pillar article and every cluster article.

For each piece, spawn a sub-agent that:

1. Reads the relevant keywords.md file for that pillar
2. Writes a complete brief to the correct file path

Pillar briefs go to: /pillars/[PILLAR NAME]/brief.md
Cluster briefs go to: /clusters/[CLUSTER NAME]/brief.md

Every brief must follow this exact structure:

# Brief: [Article Title]

## Overview
- **Type:** Pillar / Cluster
- **Pillar:** [which pillar]
- **Primary keyword:** [exact keyword]
- **Secondary keywords:** [2-3 supporting terms]
- **Target reader:** [specific description]
- **Search intent:** [what is this person trying to do]
- **Funnel stage:** Top / Middle / Bottom
- **Estimated word count:** [range]

## The reader's problem
[2-3 sentences from their perspective]

## What this article must do
[3-5 bullet points]

## Recommended structure
[Proposed H1, H2s, and H3s]

## Key points to cover
[Substance of the article - arguments, explanations, examples, data]

## What to avoid
[Anything that would make this generic, off-brand, or cannibalise another piece]

## Internal linking opportunities
[TBD]

## SEO notes
- Meta description: [150-160 characters including primary keyword]
- H1 suggestion: [strong H1 with primary keyword naturally included]
- Competitor positioning: [any specific SEO considerations]

After all briefs are written, report back with a summary: how many briefs were created, any gaps or conflicts you noticed, any briefs where the keyword opportunity seemed weak.

I did not expect this. I asked Claude to generate briefs for every piece in the content map, and it spawned 23 agents simultaneously — one for each pillar and cluster article — and ran them all in parallel. I watched them go. Thirteen minutes later, 23 complete brief files had been created. It was genuinely impressive to watch. It was also, I knew immediately, going to hurt my token budget. I checked my usage and confirmed: it did.

The scale:

  • Per agent: ~14,000 tokens
  • Total across all 23 agents: ~323,743 tokens
  • Wall-clock time: 13 minutes

Running sequentially would take the same total tokens but ~4 hours of wall-clock time. Parallel costs real money but buys real time. If content is a significant part of your work, it’s worth it, but watch your usage dashboard while it runs. This is a good moment to build your instinct for what parallel agents actually cost versus single-session work.
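
To make the trade-off concrete, here’s the back-of-envelope arithmetic from my run (the per-agent runtime is my estimate; the token figures are from the usage dashboard):

```python
# Back-of-envelope cost/time arithmetic for the parallel brief run.
agents = 23
tokens_per_agent = 14_000    # approximate, from the usage dashboard
minutes_per_agent = 11       # my rough estimate of one brief's runtime

total_tokens = agents * tokens_per_agent         # ~322,000, close to the ~323,743 observed
parallel_minutes = 13                            # observed wall-clock time
sequential_minutes = agents * minutes_per_agent  # ~253 minutes, roughly 4 hours

print(f"tokens ~{total_tokens:,} | parallel {parallel_minutes} min | "
      f"sequential ~{sequential_minutes / 60:.1f} h")
```

Same tokens either way; what parallelism buys is the wall clock.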

Example brief (Cluster 1.5: Async Code Review):

Output: full brief for Cluster 1.5

Overview

  • Type: Cluster
  • Pillar: Async Coordination
  • Primary keyword: async code review best practices
  • Secondary keywords: code review process distributed teams, async engineering workflow, pull request review
  • Target reader: Engineering manager at a distributed team whose code review cycles routinely collapse into long comment threads or sync meetings — someone who wants async review to work but keeps watching it fail in predictable ways
  • Search intent: Finding a concrete system for async code review that actually stays async — not a list of tools, but a set of practices that prevent the comment threads that require a call to resolve
  • Funnel stage: Middle
  • Estimated word count: 1800–2200

The reader’s problem

My team’s async code review process keeps becoming synchronous — not because we’re missing a tool, but because the reviews themselves generate the kind of ambiguity that can only be resolved in a call. Reviewers ask questions the PR description should have answered. Authors can’t tell which comments are blockers and which are suggestions. By the time a thread hits 20 comments, everyone agrees a 15-minute call would have been faster — which defeats the entire point. I need a system that prevents this, not a post-mortem process for after it happens.

What this article must do

  • Identify the specific structural failures that turn async code review into synchronous work — not “communication breakdown” but named, concrete mechanisms
  • Give reviewers a labelling system they can adopt immediately, with real examples of each label in use
  • Define what a PR description must contain for async review to function, framed as a coordination artifact rather than documentation
  • Address the timezone problem directly: how to design the review queue so a reviewer 8 hours away doesn’t block or get blocked
  • Tell the reader exactly when to pull a thread into sync — and when the urge to call is actually avoidance of writing clearly

Recommended structure

H1: Async Code Review Best Practices: How to Give Feedback Without Starting a Week-Long Thread

H2: Why async code review threads turn into sync meetings (it’s not the reviewers)

  • H3: Ambiguous PR scope — author and reviewer disagree on what the PR is doing
  • H3: Missing context — reviewer has to reconstruct decisions the author already made
  • H3: Conflated blocking and non-blocking feedback — no signal for what actually needs to change

H2: A feedback labelling system that reduces back-and-forth

  • H3: Blocking — must be resolved before merge
  • H3: Non-blocking — suggestion, take or leave
  • H3: Nit — style; fixable or ignorable at the author’s discretion
  • H3: Question — seeking understanding, not requesting a change

H2: The PR description as a coordination artifact

  • H3: What the description must contain for async review to function
  • H3: What to explicitly exclude (what the reviewer should NOT spend time on)

H2: Designing the review queue across time zones

  • H3: What “ready for review” actually means at a timezone boundary
  • H3: Review queue structure that prevents stalling

H2: When to pull async code review into sync

  • H3: Signals a thread has exceeded what written communication can resolve
  • H3: Signals the call is avoidance, not efficiency

H2: Templates

  • H3: PR description template
  • H3: Feedback labelling reference card

Key points to cover

Structural failure modes (not reviewer failure):

  • Ambiguous PR scope: when the author and reviewer have different mental models of what the PR is trying to accomplish, every comment risks being about the wrong thing. This is an authoring problem, not a review problem.
  • Missing context: reviewers spend time reconstructing decisions — why this approach, what alternatives were rejected, what constraints applied. That reconstruction generates questions, which generate threads.
  • Conflated feedback types: when a reviewer writes “I’d consider extracting this into a helper function,” the author doesn’t know if that’s a blocker, a strong suggestion, or a passing thought. Every ambiguous comment risks a round trip.

Feedback labelling system:

  • Labels must be part of the comment text, not a separate UI mechanism
  • [blocking] — the change must happen before merge. A security issue, a broken contract, a correctness error.
  • [non-blocking] — a genuine suggestion. The author should consider it, but merge is not contingent on it.
  • [nit] — style, naming, minor formatting. The author can fix it or ignore it; reviewer won’t follow up.
  • [question] — seeking understanding. Not a request for change.

PR description as coordination artifact:

  • The problem being solved (not the code change, but the underlying problem)
  • The approach chosen — and why this approach over alternatives
  • Alternatives considered and why they were rejected
  • What specifically to review (scope the reviewer’s attention)
  • What NOT to review — explicit callouts of known issues, deferred work, or areas outside scope

Timezone considerations:

  • “Ready for review” at a timezone boundary means the PR is in a state where a reviewer 8 hours away can do a complete review without a synchronous handoff
  • Review queues should have explicit states: Draft, Ready for Review, Changes Requested, Approved
  • First-pass review window: define an expected turnaround (e.g., one working day), not a real-time response expectation

When to go sync:

  • Signals a thread has exceeded written communication: more than 3 back-and-forth exchanges on a single comment; the thread has spawned sub-threads; a comment has been misread twice
  • Signals the call is avoidance: the author or reviewer hasn’t written out their actual position; “let’s just hop on a call” is the first response, not a last resort

SEO notes

  • Meta description: Learn async code review best practices that keep distributed teams out of sync meetings — including a PR template and feedback labelling system. (155 characters)
  • H1 suggestion: Async Code Review Best Practices: How to Give Feedback Without Starting a Week-Long Thread
  • Competitor gap: Atlassian and Google engineering blog content on this keyword focuses on process and tooling. The failure-mode framing is the differentiator — lead with it.

Phase 3C: Posting Schedule

What to do

Use the content map and briefs to sequence publication. Make Claude explain the trade-offs so you can catch contradictions between schedule logic and the original strategy.

Prompt: posting schedule

Read CONTENT_MAP.md and all brief.md files across /pillars/ and /clusters/.

I want you to build a posting schedule. Here are my constraints:

- Publishing cadence: [CADENCE]
- Start date: [START DATE]
- Total pieces to schedule: [TOTAL PIECES] ([PILLAR COUNT] pillars + [CLUSTER COUNT] clusters — Claude can count from the content map)

When sequencing, apply this logic:
1. Clusters that are entry points (high volume, top of funnel) should publish early — they drive traffic that validates the strategy
2. Pillar articles benefit from having at least 2–3 clusters live first — so internal links exist when the pillar launches
3. Conversion-adjacent clusters should publish after the pillar they sit under
4. Spread pillars evenly — don’t cluster all content from one pillar in consecutive weeks

Build the schedule as a table with these columns:
| Week | Publish date | Content type | Article title | Pillar | Primary keyword | Notes |

Add a brief rationale section after the table explaining the sequencing logic you applied and any trade-offs you made.

Write to POSTING_SCHEDULE.md in the repo root.
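
The four sequencing rules in that prompt are effectively a small scheduling algorithm. Here’s a minimal sketch of rule 2, the one doing the most work (my own simplification; the real schedule also weighs entry points and cross-pillar spread):

```python
# Sequencing rule 2, simplified: a pillar article publishes only after
# at least two of its cluster articles are live.
def ready_pillars(pillars: dict, live_clusters: set, min_live: int = 2) -> list:
    return [
        pillar for pillar, clusters in pillars.items()
        if sum(c in live_clusters for c in clusters) >= min_live
    ]

pillars = {
    "Timezone Intelligence": ["2.1", "2.2", "2.3", "2.4", "2.5"],
    "Async Coordination":    ["1.1", "1.2", "1.3", "1.4", "1.5"],
}
live = {"2.1", "2.2", "1.1"}          # published by end of week 1
print(ready_pillars(pillars, live))   # -> ['Timezone Intelligence']
```

Which matches what Claude actually scheduled: the Timezone pillar goes out in week 2, once two of its clusters are live.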

A 6-week schedule, week-by-week, with rationale and an explicit trade-offs section. The decisions Claude made on its own were the interesting part — most weren’t directly specified in the prompt:

  • Pillar order P2 → P1 → P4 → P3 was based on cluster readiness, not stated importance.
  • Week 1 has no pillar because no pillar had enough supporting clusters live yet.
  • Conversion-adjacent clusters land after their pillar, so decision-stage pieces have context before they publish.
  • Bridge clusters are flagged for retroactive linking, so links can be added once both target pillars are live.

The trade-offs section was sharp. Claude flagged that the Physics pillar cadence in this schedule (4 pieces in 6 weeks) directly contradicts STRATEGY.md, which specifies “1 piece per quarter” for that pillar. That’s the kind of self-flagged contradiction I’d otherwise have missed.

Output: posting schedule excerpt
| Week | Publish date | Content type | Article title | Pillar | Notes |
|------|--------------|--------------|---------------|--------|-------|
| 1 | Mon 27 Apr | Cluster | When Async Breaks Down | Async Coordination | Entry point |
| 1 | Tue 28 Apr | Cluster | Cross-Timezone Meeting Fatigue Is Real | Timezone | Entry point |
| 1 | Wed 29 Apr | Cluster | The Hidden Cost of Timezone Math | Timezone | Entry point |
| 1 | Thu 30 Apr | Cluster | Why Teams Default to Meetings | Coord. Systems | Entry point |
| 2 | Mon 4 May  | Cluster | How to Write Async Updates That Get Read | Async | Builds P1 depth |
| 2 | Tue 5 May  | **Pillar** | Timezone & Scheduling Intelligence | Timezone | First pillar — 2 clusters live |

Phase 3D: Brief Quality Audit

Optional but the highest-leverage prompt in Phase 3. After 23 briefs and three planning documents, things drift. The audit is the moment to catch contradictions before any writing happens.

What to do

Run an adversarial audit before writing starts. Ask Claude to look for keyword conflicts, coverage gaps, weak briefs, strategic drift, and specific fixes you can act on.

Prompt: brief quality audit

Read all brief.md files across /pillars/ and /clusters/, plus STRATEGY.md and SEO_PLAN.md.

Audit every brief against these criteria and give me a report:

1. Keyword conflicts — are any two briefs targeting the same or near-identical primary keyword? Flag these as cannibalisation risks.

2. Coverage gaps — are there topics in SEO_PLAN.md that no brief addresses? Are there questions our audience is asking that the content map misses?

3. Brief quality — for each brief, flag if any of these are true: - The reader problem is vague or brand-centric rather than audience-centric - The recommended structure is too generic (could apply to any article on this topic) - The word count estimate seems wrong for the complexity of the topic - The funnel stage seems misclassified

4. Strategic coherence — does the full set of briefs, taken together, tell a coherent story about what [COMPANY NAME] knows and cares about?

Be specific. Name the brief, name the problem, suggest the fix. Do not give me a general assessment — I want a problem list I can act on.

Write your findings to /briefs-audit.md.

A 15KB audit. Four sections (keyword conflicts, coverage gaps, brief quality, strategic coherence) and a prioritised action list at the end. The honesty was the surprise — but maybe it shouldn’t have been. One thing I’ve noticed the more I use Claude: ask it to critique something another Claude session wrote and it will find problems, because finding problems is the job you gave it. It’s a bit like a human that way. Ask someone to find fault and they’ll find fault. Ask someone to build and they’ll build. The framing of the request shapes what comes back. The audit prompt is explicitly adversarial — find the contradictions — and it did.

The genuinely critical finding:

Pillar 2’s primary keyword contradicts the SEO plan. The pillar brief sets “scheduling across time zones” as the target, but SEO_PLAN.md classifies that exact keyword as D-grade / SKIP (“Calendly, Doodle dominate — avoid direct competition”). The pillar brief and the SEO plan cannot both stand.

This is exactly the kind of failure that’s easy to miss when you’ve been generating documents in sequence. By Phase 3B, the SEO plan had been written 30+ minutes earlier, and 23 parallel agents had no shared memory of which keywords were marked skip. The audit caught it.

Other findings worth calling out:

  • Pillar/cluster cannibalisation across 4 pillars. Multiple pillar briefs listed their own clusters’ primary keywords as secondaries — the pillar and cluster were optimising for the same query.
  • An A-grade keyword with no owner. “scheduling without shared time reference” was flagged in SEO_PLAN.md as “entirely unclaimed; A-grade quick win” but appeared only as a secondary on two briefs. Nobody was set up to actually rank for it.
  • Three different pillar priority orders across STRATEGY.md, SEO_PLAN.md, and CONTENT_MAP.md. A writer reading any one of those documents would come away with a different view of which pillar leads.
  • Audience gap: the procurement evaluator. STRATEGY names ops/HR as the primary buyer, but every brief targets practitioners. There’s no piece a procurement-stage reader could land on and find directly useful.
  • Continuum-mention pattern is mechanical. Almost every brief contains the same instruction: “mention Continuum once, naturally, mid-to-late.” Read in sequence across the content set, this produces a detectable rhythm. The fix Claude suggested — vary it: some pieces have zero mentions, some have two — is exactly right.

The action priority Claude gave at the end:

  1. Change P2 pillar’s primary keyword (direct contradiction with SEO strategy)
  2. Demote cluster primary keywords from pillar secondaries (cannibalisation across 4 pillars)
  3. Decide who owns “scheduling without shared time reference” (A-grade keyword unclaimed)
  4. Resolve the P4 pillar vs 4.1 cluster strategic tension (cluster carries the stronger keyword)
  5. Pick one pillar priority order and propagate to all planning docs
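
For what it’s worth, the keyword-conflict half of this audit is deterministic enough to script as a first pass before the adversarial prompt. A minimal sketch, assuming each brief.md carries “Primary keyword:” and “Secondary keywords:” lines; that format is my assumption, so adjust the patterns to your briefs:

```python
import pathlib
import re
from collections import defaultdict

# Collect primary/secondary keywords from every brief.md in the vault.
primaries, secondaries = {}, defaultdict(list)
for brief in pathlib.Path(".").glob("**/brief.md"):
    text = brief.read_text()
    if m := re.search(r"^Primary keyword:\s*(.+)$", text, re.M):
        primaries[brief] = m.group(1).strip().lower()
    if m := re.search(r"^Secondary keywords:\s*(.+)$", text, re.M):
        for kw in m.group(1).split(","):
            secondaries[kw.strip().lower()].append(brief)

# Two briefs sharing a primary keyword = cannibalisation risk.
owners = defaultdict(list)
for brief, kw in primaries.items():
    owners[kw].append(brief)
for kw, briefs in owners.items():
    if len(briefs) > 1:
        print(f"CONFLICT: {kw!r} is primary in {len(briefs)} briefs: {briefs}")

# A primary keyword that also appears as a secondary elsewhere is the
# pillar/cluster overlap the audit flagged.
for kw, briefs in secondaries.items():
    if kw in owners:
        print(f"OVERLAP: {kw!r} owned by {owners[kw]} but secondary in {briefs}")
```

The judgement calls (strategic coherence, audience gaps) still need the prompt; this only catches the mechanical collisions.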

Phase 3 Summary

By the end of Phase 3, every piece of content (4 pillars + 19 clusters) had a complete, writer-ready brief; a 6-week posting schedule sequenced all 23 pieces; and a strategic audit flagged the contradictions to fix before writing starts.

Phase 4: Interlinking Plan

Phase 4 at a glance

- **Goal:** Map the internal link architecture across all 23 pieces.
- **Tool:** Claude Code, Sonnet
- **Time:** ~20 minutes
- **Output:** Interlinking plan with structural, contextual, and forward-link guidance
- **Why it matters:** This is what turns a pile of articles into a content system.

This phase decides where authority should flow, what readers should discover next, and which links have to wait until later pieces exist.

What to do

Ask Claude to map structural and contextual links before writing starts. Keep this as a review step first, so you can approve the architecture before it becomes the working interlinking file.

Prompt: interlinking architecture

Read CONTENT_MAP.md, POSTING_SCHEDULE.md, and all brief.md files across /pillars/ and /clusters/.

I want you to define the interlinking architecture for our content set. Work in two layers:

## Layer 1: Structural links

These are mandatory links that appear in every article regardless of what else is covered.

For each pillar article:
- It must link down to every cluster article in its pillar
- List the recommended anchor text for each cluster link (use natural language — not “click here”, not keyword-stuffed)

For each cluster article:
- It must link up to its parent pillar
- It must link to at least one sibling cluster in the same pillar
- List recommended anchor text for each

## Layer 2: Contextual link opportunities

Read each brief and identify where the topic naturally overlaps with another article’s territory. For example: a cluster article about [X] will likely mention [Y] in passing — and we have an article about [Y].

For each contextual opportunity, note:
- Source article (the one containing the link)
- Target article (the one being linked to)
- The likely context — what sentence or paragraph would this link appear in
- Suggested anchor text

Present Layer 1 as a table and Layer 2 as a structured list grouped by source article. Do not write to file yet — present this for my review first.

Claude presented both layers but in a different format than the prompt suggested — it went article-by-article instead of layer-by-layer, which turned out to be more useful. Each article got a section with “Outbound: Mandatory” and “Outbound: Contextual” lists, with context notes for every single link.

The most interesting part was how it described the contextual links — it didn’t just name the target article, it named the specific section where the link belonged and explained why. Example from 1.1 (When Async Breaks Down):

→ 1.2 in Information Rot failure-mode section, where send-and-receive decoupling is the cause. Anchor: how to structure updates so they actually get acted on

This is the specificity that makes a brief usable. A writer working on 1.1 doesn’t have to guess where the link to 1.2 belongs — it belongs in that exact section, covering that exact concept, with that exact anchor text.

Phase 4B: Write the Interlinking Plan to File

What to do

Once the architecture looks right, write it to a durable plan. Structure it both by article for writers and as a master map for anyone reviewing the full content system.

Prompt: write interlinking plan

The interlinking architecture looks good. Write it to INTERLINKING_PLAN.md in the repo root.

Structure the file so it can be used in two ways:

1. By article — a writer working on any given article can look up that article and see exactly which links to include, where, and what anchor text to use

2. As a master map — a table showing every link relationship in the content set at a glance

Add an introduction section that explains the interlinking logic to anyone picking up this document cold — what structural links are, what contextual links are, and how to use this file when writing.

Also add a section called “Links to add retrospectively” — these are contextual links that reference articles not yet published. Flag them so writers know to go back and add these links once the target article is live.

Write to INTERLINKING_PLAN.md.

Claude wrote a 38KB interlinking plan. The important thing was not the size of the document, but the operating model: a writer could open one article section and know which links belonged there, why, and what anchor text to use.

Output: interlinking plan structure and example

The document had four sections:

  1. Introduction (with “two link types,” anchor text rules, and density target of ~1 link per 300 words)
  2. By Article — each of the 23 articles gets a section with outbound links listed, mandatory first, then contextual
  3. Master Map — a single sortable table showing every link relationship
  4. Links to Add Retrospectively — flagged forward-references (e.g., 1.1 linking to 4.1, but 4.1 doesn’t publish until week 4, so the link has to be added after publication)

The introduction was polished — it explained the architecture and gave clear instructions for writers on how to use the document. Key passage:

A well-linked content set compounds; a loosely-linked one stays a collection of articles.

Excerpt — Pillar 1’s mandatory links:

#### P1 — Async Coordination

**Outbound: Mandatory (down to clusters)**
- → 1.1 — anchor: where async tends to break down
- → 1.2 — anchor: how to write updates that actually get read
- → 1.3 — anchor: building a response-time culture without micromanaging
- → 1.4 — anchor: the trade-offs between async standup formats
- → 1.5 — anchor: keeping async code review from collapsing into a meeting

Phase 4 Summary

By the end of Phase 4, the full content set had a linked architecture ready for writers to execute. Every article had:

  • Mandatory structural links (up to pillar, down to clusters, sibling connections)
  • Contextual links with specific section placement and anchor-text guidance
  • A clear forward-link schedule (which links can be added at publish, which need to be added retrospectively when their targets go live)

The interlinking plan is often the output that gets skipped under time pressure, but it’s one of the highest-leverage SEO decisions in the whole workflow. This one linked 23 pieces together in a way that passes authority from cluster up to pillar, makes cross-pillar discovery possible, and prevents readers from getting lost.

Key insight: This phase exists so writers can stop worrying about link placement and just follow the spec. The hard work of finding the overlaps and thinking about the architecture is done. The file is a blueprint — execute it exactly.
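
Because the plan names exact targets and anchors, “execute it exactly” can also be verified mechanically once drafts exist. A sketch, assuming you transcribe an article’s required links from INTERLINKING_PLAN.md into (target, anchor) pairs; the transcription step is manual here, since the plan’s exact file format is whatever Claude wrote:

```python
import pathlib
import re

LINK = re.compile(r"\[([^\]]+)\]\(([^)]+)\)")   # markdown [anchor](target)

def check_draft(draft_path, required):
    """required: list of (target_slug, anchor_text) pairs for this article,
    taken from the interlinking plan. Prints anything missing."""
    text = pathlib.Path(draft_path).read_text()
    links = {(target, anchor.lower()) for anchor, target in LINK.findall(text)}
    for target, anchor in required:
        if (target, anchor.lower()) not in links:
            print(f"MISSING in {draft_path}: [{anchor}]({target})")
    # Rough check against the plan's ~1 link per 300 words density target.
    words = len(text.split())
    if words and len(links) < words // 300:
        print(f"SPARSE: {draft_path} has {len(links)} links for {words} words")
```

It won’t judge whether an anchor reads naturally, but it catches dropped links and anchor-text drift before publish.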

Phase 5: Writing & Drafting

Phase 5 at a glance

- **Goal:** Draft pillar and cluster articles from the briefs, then run SEO and formatting checks.
- **Tools:** Opus for pillar drafts; Sonnet for clusters, SEO, and final formatting
- **Time:** ~27 minutes shown for Pillar 2; ~84 minutes estimated for the full run
- **Output:** 1 pillar article, 5 cluster articles, SEO pass, and final.md handoff shown for Pillar 2
- **Token load:** High. Pillar 2 alone used ~44k tokens; the full run scales across all four pillars.

This is the execution phase: the briefs become drafts, then the drafts get checked for SEO structure, formatting, and readiness to move into final files.

Phase 5A: The pillar article

Which pillar: Timezone & Scheduling Intelligence

What to do

Draft the pillar article from the approved brief, but make Claude interpret the reader problem and article throughline before writing. Confirm that direction before it creates the full draft.

Prompt: pillar article drafting

Read the following files before writing anything:
- CLAUDE.md
- VOICE_GUIDE.md
- /pillars/[PILLAR SLUG]/brief.md
- /pillars/[PILLAR SLUG]/keywords.md
- INTERLINKING_PLAN.md

You are writing the pillar article for [PILLAR NAME]. This is a flagship piece — it is the most comprehensive treatment of this topic we will publish. It needs to be genuinely useful to an expert reader, not just a surface-level overview.

Before you draft, tell me:

1. Your interpretation of what the reader’s core problem is
2. The argument or throughline of the article — what is the intellectual spine that holds it together?
3. Any gaps you notice in the brief that we should resolve before writing

Wait for my confirmation before drafting.

When confirmed, write the full article following the structure in the brief. Apply these rules:

- Open with the reader’s problem, not with context-setting or company positioning
- Every H2 should advance an argument, not just introduce a new subtopic
- Use concrete examples — invented but plausible is fine, clearly labelled
- Include all internal links from the brief’s interlinking section, with the anchor text specified
- End with a clear “what to do next” — not a sales pitch, a genuine next step for the reader
- Write the meta description as specified in the brief

Write to /pillars/[PILLAR SLUG]/draft.md

After writing, give me a brief self-assessment: what you think works well, what feels weakest, and what you’d want a human editor to pay most attention to.

Claude produced a 3400-word pillar draft with a strong spine: scheduling is a systems problem, not a calendar problem. The structure moved from diagnosis (what breaks in distributed teams) to design (what the systems view requires) to implementation (how to actually fix it).

Opening paragraph:

The complaint is always the same. You’ve given your team the tools for scheduling across time zones — Calendly for external meetings, World Time Buddy bookmarked, a shared calendar with three timezone columns running down the side — and the calls are still happening at the wrong times.

The first section named three structural failures rather than listing tips:

  1. Time zone cognitive load — the invisible tax of mental arithmetic
  2. Synchrony bias — why teams default to meetings even when async would work
  3. Unequal burden distribution — why someone always takes the bad slot and why it compounds

Each failure mode was diagnosed with mechanism (why it happens), not just symptom (how it feels).

The systems-view section made the abstract concrete:

Meeting policy as infrastructure means writing the rules down. Which meetings recur, and why. Which require synchronous attendance, and which can be missed without consequence. What async substitutes exist when sync isn’t possible.

The meta description matched the brief spec exactly: “Scheduling across time zones fails because it’s a systems problem, not a calendar problem. Here’s the framework distributed teams actually need.” (157 characters)

Self-assessment Claude offered: “The diagnosis section (the three failure modes) is the strongest part — it names things that were previously unnamed. The implementation guidance is functional but thinner than the diagnosis. A human editor should push the systems-design section deeper and add one or two concrete examples of what ‘meeting policy as infrastructure’ actually looks like in practice.”

The draft hit the brief’s structure exactly as specified. This is the payoff of Phase 3 — Claude’s job in Phase 5 was execution, not design.

This draft took 9 minutes, used 19,000 tokens with Opus, and consumed about 12% of a 5-hour window allowance. Brief-driven drafting is efficient, but Opus pillar work is still expensive enough to notice.

Phase 5B: Single cluster drafting

In this demo I skipped the single-article approach and went straight to batch drafting all five Timezone pillar clusters using Phase 5C below. If you’re working on one article at a time rather than a full pillar set, the workflow file covers this — the rules are identical to 5C, just without the sub-agent scaffolding.

Phase 5C: Batch cluster drafting (same pillar)

What to do

Draft all clusters for one pillar in parallel. Give each sub-agent its own brief and keywords, then have it write the draft and report back against a shared quality checklist.

Prompt: batch cluster drafting

Read the following files:
- CLAUDE.md
- VOICE_GUIDE.md
- INTERLINKING_PLAN.md
- /pillars/[PILLAR SLUG]/brief.md

I want to draft all cluster articles for the [PILLAR NAME] pillar in one pass.

The clusters are: [LIST CLUSTER NAMES AND THEIR FOLDER PATHS]

Spawn a sub-agent for each cluster. Each sub-agent should:

1. Read its own brief.md and keywords.md
2. Draft the full article following the rules below
3. Write to its own draft.md
4. Report back a one-paragraph summary of what it wrote

Rules every sub-agent must follow:
- Follow VOICE_GUIDE.md
- Open with the reader’s problem within the first 3 sentences
- Include all internal links from the interlinking section of its brief
- Be specific — no generic advice
- Write the meta description from the brief
- Do not mention [COMPANY NAME] more than twice unless the brief specifically calls for it

When all sub-agents are done, give me a summary table: | Cluster | Word count | Primary keyword used in H1? | Internal links included? | Any issues? |

Claude drafted five cluster articles across the Timezone pillar in parallel. Sonnet handled all five; each sub-agent worked independently from its brief.md and keywords.md, wrote to its own draft.md, and reported back.

The interesting part was that the system did not just produce drafts. One sub-agent noticed that 2.2’s funnel-stage classification in the brief (“Awareness”) didn’t match the article’s actual content — the reader building a case to leadership is mid-funnel, not top. This is exactly the kind of alignment check that parallel drafting surfaces: brief vs. actual output.

Output: cluster drafts and stats

The five clusters drafted:

  1. 2.1 — Cross-Timezone Meeting Fatigue (1900 words) — diagnosed five mechanisms of fatigue, linked to the pillar and to 2.2
  2. 2.2 — Hidden Cost of Timezone Math (1850 words) — quantified the cognitive load with worked calculations, linked to 4.2 (coordination overhead)
  3. 2.3 — Rotating Meeting Times Fairly (2200 words) — defined three models of “fair,” provided a worked example with a 4-week rotation template
  4. 2.4 — Distributed Team Scheduling Guide (2100 words) — systems-level treatment of scheduling across async and sync, connected to 4.1 and 4.3
  5. 2.5 — World Time Buddy Alternative (1800 words) — comparison piece for readers evaluating tools, positioned as a decision-stage article

Opening of 2.3 (Rotating Meeting Times Fairly):

If the same three people are always on your weekly sync at 6 a.m. or midnight, the rotation is broken — or missing entirely. Rotating meeting times fairly across time zones is not a complicated problem, but it requires a deliberate design choice most teams skip: deciding what “fair” actually means before building anything.

All interlinking targets were included. The metadata (titles, meta descriptions, primary/secondary keywords) matched the briefs exactly.

Drafting stats:

| Cluster | Word count | Primary keyword in H1 | Internal links | Status |
|---------|------------|-----------------------|----------------|--------|
| 2.1 | 1900 | ✓ | ✓ (all targets) | Clean |
| 2.2 | 1850 | ✓ | ✓ (all targets) | Funnel-stage note flagged |
| 2.3 | 2200 | ✓ | ✓ (all targets) | Clean |
| 2.4 | 2100 | ✓ | ✓ (all targets) | Clean |
| 2.5 | 1800 | ✓ | ✓ (all targets) | Clean |

This took about 12 minutes in parallel, compared with roughly 45+ minutes sequentially. Sonnet was significantly cheaper than Opus here: about 5k tokens per agent, or 25k total across five cluster drafts.

Phase 5D: SEO Pass

What to do

Run a mechanical SEO audit on a draft before final review. Let Claude fix keyword placement, metadata, and missing internal links, but only flag readability issues for a human editor.

Prompt: SEO audit and corrections

Read the following:
- /clusters/[CLUSTER SLUG]/draft.md
- /clusters/[CLUSTER SLUG]/keywords.md
- /clusters/[CLUSTER SLUG]/brief.md
- CLAUDE.md

Run an SEO audit on this draft and then make the corrections directly in the file.

Check and fix:

1. Primary keyword — appears in H1, within the first 100 words, in at least two H2s, and in the meta description. If missing from any of these, add it naturally — do not keyword-stuff.

2. Secondary keywords — each appears at least once in the body. If missing, find a natural place to include it.

3. Meta description — 150–160 characters, includes primary keyword, written to earn a click not just describe the article. Rewrite if needed.

4. H1 — contains primary keyword, written as a reader would search or as a compelling headline. Rewrite if it reads like an internal label.

5. Internal links — every link specified in the brief’s interlinking section is present with the correct anchor text. Add any that are missing.

6. Readability — flag any paragraph longer than 5 sentences. Flag any sentence longer than 35 words. Do not rewrite these — just flag them for the human editor.

After making changes, write a brief changelog: what you changed and why.

Save the updated version back to draft.md.

I ran this once on 2.1 (Cross-Timezone Meeting Fatigue). Claude verified the keyword placement, secondary keywords, meta description, and internal links, then flagged two long paragraphs and one 41-word sentence for human editorial attention. Useful, but mostly mechanical. A proper human SEO review would still need to decide whether the flagged sections actually hurt the read.
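
Most of these checks are deterministic, which means you can also run them yourself as a guardrail before or after Claude’s pass. A minimal Python sketch of the mechanical half (the paragraph-length check is omitted for brevity; the function and argument names are mine):

```python
import re

def seo_flags(markdown, primary_kw, meta):
    """Mechanical checks mirroring the SEO audit prompt.
    Returns flags only — the fixes still need judgement."""
    flags = []
    kw = primary_kw.lower()
    h1 = re.search(r"^# (.+)$", markdown, re.M)
    if not h1 or kw not in h1.group(1).lower():
        flags.append("primary keyword missing from H1")
    first_100 = " ".join(markdown.split()[:100]).lower()
    if kw not in first_100:
        flags.append("primary keyword missing from first 100 words")
    h2_hits = [h for h in re.findall(r"^## (.+)$", markdown, re.M)
               if kw in h.lower()]
    if len(h2_hits) < 2:
        flags.append(f"primary keyword in only {len(h2_hits)} H2s (want 2+)")
    if not 150 <= len(meta) <= 160:
        flags.append(f"meta description is {len(meta)} chars (want 150–160)")
    if kw not in meta.lower():
        flags.append("primary keyword missing from meta description")
    # Readability: flag, don't rewrite — matching the prompt's rule.
    for sent in re.split(r"(?<=[.!?])\s+", markdown):
        if len(sent.split()) > 35:
            flags.append(f"long sentence ({len(sent.split())} words): {sent[:60]}…")
    return flags
```

Running this before the prompt means Claude only spends tokens on the fixes, not on finding the problems.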

Phase 5E: Move to Final

What to do

Move an approved draft into final form only after a formatting check. This step should verify structure and links, not rewrite the article.

Prompt: move draft to final

The draft at /clusters/[CLUSTER SLUG]/draft.md has been approved.

Do the following:

1. Copy the content to /clusters/[CLUSTER SLUG]/final.md
2. Do a final check — confirm the meta description is present at the top of the file, all H tags are correctly formatted in markdown, and all internal links are formatted correctly as markdown links
3. Report any formatting issues you found and corrected

Do not make any content edits — only formatting and structural checks.

This step was mostly a file hygiene check: Claude copied the approved draft to final.md, confirmed the meta description was in frontmatter, checked the headings, and verified the internal links were markdown-formatted correctly. No formatting issues found.
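
If you’d rather keep a pure hygiene step out of the token budget entirely, it scripts easily. A sketch against the vault layout shown later in this post; the checks are my approximations of the prompt’s checklist, not a faithful port:

```python
import pathlib
import re
import shutil

def promote(slug):
    """Copy an approved draft to final.md and run formatting checks only.
    No content edits, matching the Phase 5E rule."""
    folder = pathlib.Path("clusters") / slug
    draft, final = folder / "draft.md", folder / "final.md"
    shutil.copy(draft, final)
    text = final.read_text()
    if not text.startswith("---"):          # frontmatter block at the top?
        print(f"{slug}: no frontmatter block at top of file")
    for line in text.splitlines():          # malformed markdown headings
        if line.startswith("#") and not re.match(r"#{1,6} \S", line):
            print(f"{slug}: malformed heading: {line[:40]}")
    if re.search(r"\]\s+\(", text):         # broken [anchor] (url) links
        print(f"{slug}: whitespace between ] and ( in a link")
```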

Phase 5 Summary

By the end of Phase 5, the content set had:

  • 1 pillar article drafted (Timezone & Scheduling Intelligence) — 3400 words, Opus, 19k tokens, 9 minutes
  • 5 cluster articles drafted in parallel (Pillar 2 full cluster set) — ~2000 words each, Sonnet, ~5k tokens per agent, 12 minutes total
  • 1 cluster article SEO-audited and readability-flagged (Cross-Timezone Meeting Fatigue) — 4 minutes
  • 1 article moved to final.md after formatting check (Cross-Timezone Meeting Fatigue) — 2 minutes

Note: The demo above walks through Pillar 2 in detail. The same prompts ran for all four pillars at roughly the same time and token cost each.

Phase 6: Critical Review

Phase 6 at a glance

- **Goal:** Run a harsh reader-specific editorial review.
- **Tool:** Claude Code, Sonnet
- **Time:** ~10 minutes for the demo review
- **Output:** Specific critique of the drafted content, including what to fix before publishing
- **Worth copying?** Absolutely. This was one of the highest-leverage prompts.

This is the quality gate. The point is not to ask Claude whether it did well; it is to give it a specific skeptical reader and make the work survive that person’s attention span.

The critical reader prompt

What to do

Give Claude a specific skeptical reader and ask for only the problems. The sharper the reader persona, the more useful the critique.

Prompt: critical reader review

I’m going to share a draft article with you. Read it as if you are: [READER PERSONA — e.g. a Head of People Operations at a Series B startup who found this article through a Google search for “[TARGET KEYWORD]”]. You did not seek out this company. You have no loyalty to them. You are busy and skeptical. You will stop reading the moment the article stops being useful.

After reading, give me:

1. The verdict — one paragraph. Would you finish reading this article? Would you remember anything from it tomorrow? Would you share it or come back to this site? Be honest. 2. Where you would have stopped reading — identify the exact moment the article loses a reader like you. Quote the line or section. Explain why. 3. The generic problem — identify every place where the advice, explanation, or example could have appeared in any article on this topic. These are the parts that add no value. List them. 4. Claims that need proof — identify every assertion that a skeptical reader would push back on. “This approach saves time” — prove it. “Most companies struggle with X” — how do you know? List every claim that isn’t backed up. 5. The structure problem — does the article build toward something, or is it just a list of related points? If the H2s could be rearranged in any order without losing meaning, the article has no spine. Tell me if that’s true here. 6. The intro — read only the first paragraph. Does it earn the second paragraph? Would a busy reader keep going? Rewrite it if it doesn’t. 7. The ending — does it end or does it just stop? What should the reader do or think after finishing? Is that clear?

Do not tell me what works. Tell me what doesn’t. The article and the brief it was written against are attached.

One note: when I shared the sci-fi Continuum articles, Claude immediately identified them as fictional. So I ran the critical review against a less sci-fi piece, the async communication article, instead. Here’s what it said:

Output: full critical review

The verdict: I’d finish it, but reluctantly, and I’d skim the last third. I’d remember the “decision alignment vs. information routing” frame and maybe the phrase “protocol failure vs. routing failure” — that’s a useful diagnostic I haven’t seen elsewhere. Everything else would blur. I wouldn’t share it because there’s nothing in it concrete enough to point a colleague at; it’s diagnostic, not prescriptive, and the diagnosis is wordier than it needs to be.

Where I’d stop:

First wobble, around the “tools-and-norms” section:

“An analogy: deciding everyone will file their own taxes without designing a tax form.”

This analogy doesn’t land. Filing taxes already has a form. The analogy is supposed to illustrate a missing structure but the example I’m being given is a structure that already exists. I read it twice, decided it didn’t work, and my trust dropped.

Where I’d actually stop:

”### Visibility collapse — Work happens, but its status is invisible to the people who need to track it. Nobody lied; nobody deliberately withheld information.”

By this point I’ve been told three times that the problem is “structural not cultural” and I’m being introduced to a third taxonomy. The article is now stacking frameworks on top of frameworks without ever showing me a worked example of a team applying any of them. I’d skim from here to the audit section, read the three questions, and leave.

The generic problem: Lines that could appear in any of the 200 articles ranking for this keyword — “Meetings are expensive and widely disliked.” “Async doesn’t replicate this automatically.” “None of these are culture problems. They’re protocol gaps.” The article’s best idea is buried in a sentence generic enough that a skimming reader would miss that it’s the thesis.

Claims that need proof: The article’s entire argumentative move — asserting that everyone is wrong about culture and the real problem is design — rests on nothing but confident assertion. “Most teams skip that step” — how many? Based on what? “The failure mode is consistent across teams” — consistent implies the author has data. The author does not appear to have data.

The structure problem: The diagnostic and prescriptive sections alternate rather than build. “Where async coordination breaks down” introduces three failure modes that aren’t referenced in the architecture section that immediately precedes them — so the architecture isn’t being shown to prevent the failure modes. They’re parallel lists.

The intro: The first paragraph earns the second, marginally — the specificity signals the author has been in this room. What it doesn’t do is promise a payoff. Suggested rewrite:

Three months after the async rollout, your team is still relitigating decisions in the next sync meeting. The async standup bot posts at 9am, gets filled in, and gets read by no one. The diagnosis you’ll hear is cultural: people aren’t bought in, they prefer meetings. That diagnosis is comfortable and wrong — and it’s why the next round of norms you roll out won’t work either. The actual problem is that you have a communication style without a coordination system underneath it, and those are different things.

The ending: It stops. It doesn’t end. “Fix the protocol, not the person” is a slogan, not a takeaway. What’s missing: the reader needs to know what to do first, on Monday. Something like: Pick the next decision your team needs to make in a thread. Before posting it, write down four things — who decides, who has input rights, the deadline, and what silence means. That single thread is the smallest version of the system.

Feedback is (almost) always useful, and this feedback was surprisingly good. The useful thing is that you don’t have to agree with all of it. The critique is a perspective, not a verdict. What it gives you is specific, named problems to react to, and reacting to specific problems is much easier than staring at a draft wondering why it doesn’t quite work. Even if you push back on half the feedback, the other half is probably right. For a real project, I’d spend much longer than 10 minutes here.

The Big Picture

This is the outcome of the workflow: what the vault looked like at the end, what the run cost in time and tokens, and what I learned from directing the whole thing.

Vault structure

Here’s what the final vault looks like:

/[COMPANY NAME]
├── STRATEGY.md
├── VOICE_GUIDE.md
├── SEO_PLAN.md
├── CONTENT_MAP.md
├── INTERLINKING_PLAN.md
├── POSTING_SCHEDULE.md
│
├── /pillars
│   ├── /[pillar-1-slug]
│   │   ├── brief.md
│   │   ├── keywords.md
│   │   ├── draft.md
│   │   └── final.md
│   ├── /[pillar-2-slug]
│   │   ├── brief.md
│   │   ├── keywords.md
│   │   ├── draft.md
│   │   └── final.md
│   └── [more pillars...]
│
└── /clusters
    ├── /[cluster-1-slug]
    │   ├── brief.md
    │   ├── keywords.md
    │   ├── draft.md
    │   └── final.md
    └── [more clusters...]
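
If you want to pre-create this skeleton before a run, it’s a one-off script. A sketch with placeholder values; the vault name and slug lists here are mine, so swap in your own and the slugs Claude proposes in Phase 3:

```python
import pathlib

ROOT = pathlib.Path("continuum-vault")   # hypothetical vault name
PLANNING = ["STRATEGY.md", "VOICE_GUIDE.md", "SEO_PLAN.md",
            "CONTENT_MAP.md", "INTERLINKING_PLAN.md", "POSTING_SCHEDULE.md"]
PER_ARTICLE = ["brief.md", "keywords.md", "draft.md", "final.md"]

ROOT.mkdir(parents=True, exist_ok=True)
for name in PLANNING:                     # empty planning docs at the root
    (ROOT / name).touch()

# Replace the placeholder slugs with your real pillar and cluster slugs.
for kind, slugs in {"pillars": ["pillar-1-slug"],
                    "clusters": ["cluster-1-slug"]}.items():
    for slug in slugs:
        folder = ROOT / kind / slug
        folder.mkdir(parents=True, exist_ok=True)
        for name in PER_ARTICLE:          # the four files every article gets
            (folder / name).touch()
```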

Time & Token Breakdown

These numbers are directional, not a billing report. I tracked wall-clock time closely, but only checked token usage at a few points, so the token column is most useful for comparing which phases were cheap, moderate, or expensive.

| Phase | What happened | Time | Token load |
|-------|---------------|------|------------|
| Phase 1: Strategy | Interview, positioning, pillars, voice guide | ~37 min | Low |
| Phase 2: SEO Research | 4 parallel web-search passes across the pillar set | ~24 min | Moderate |
| Phase 3: Briefs & Content Map | 23 parallel brief agents, then audit and content-map synthesis | ~41 min | Very high |
| Phase 4: Interlinking | One session reading the briefs and mapping link relationships | ~20 min | Low |
| Phase 5: Writing | 4 pillar drafts with Opus, plus 4 Sonnet cluster batches | ~84 min | High |
| Phase 6: Critical Review | Reader-specific critique of the drafted articles | ~10 min | Low |
| **Total** | Full workflow, excluding human editing after critique | ~3.5 hrs | Very high |

The expensive bit was not “using Claude.” It was parallelising the work. Phase 3B alone created 23 brief agents at roughly 14,000 tokens each. Phase 5 then added the heavier drafting work. Phases 1, 2, 4, and 6 were useful, but they were not where the token budget went.

My takeaways

Prompt engineering made a huge difference

Not “give me feedback on this article”, but “read this as a Head of People Operations at a Series B startup who found it through Google, is busy and skeptical, and will stop reading the moment it stops being useful.” That precision is what made the feedback useful: Claude can’t give you honest criticism in the abstract. It needs a reader with a specific problem and a specific reason to bail.

The other high-leverage moment: asking Claude to critique its own output. The Phase 3D audit, asking Claude to find contradictions in documents it had itself generated, surfaced issues that might have stayed invisible otherwise. Claude doesn’t notice its own drift unless you explicitly ask it to look. The stricter you are about building in these review prompts, the more useful the output becomes.

As a non-professional content writer, going back and forth between different Claude sessions and models — asking each to critique the same piece — also helped me develop real instincts about what good content writing actually requires. The critique wasn’t just useful for the content, it was educational for me.

Prompting is good and automating is better

Running this prompt-by-prompt, session by session, was more cognitively expensive than I expected. Every new session meant re-establishing context, re-orienting to where I was in the workflow, reconstructing what decisions had been made. I was spending mental energy managing the process rather than thinking about the content.

The copy-paste approach is worth doing once, because it forces you to understand what each prompt is doing and why. But once you’ve run through it and refined the prompts, I’d want the system to carry itself. That’s why I created the companion workflow file.

Publish early, then edit like a human

The drafts are what I’d call “AI flavoured”: you know it when you read it. The structure is solid, the briefs did their job, but there’s a faint signature of content written to a spec rather than content written with instinct. It harks back to the content farms of the 2010s I mentioned earlier (horse medication!), though the style is different: just distinctly… not written by an actual person inside the actual business.

I’ve started publishing some of it, and it still needs sharp human editing before it really sings. That’s fine. A marketer once gave me advice I think about a lot: stake your SEO claim early (get the page up, get the slug indexed) and improve the article over time. If your goal is SEO, a good-but-not-great article that exists beats a perfect article that doesn’t. I’m not making any promises about Google’s algorithm here, but the principle makes sense to me. Publish, then refine.

The real gain was scale, not speed

I created Continuum Scheduler as a sci-fi project, not a real product. I did make a website for it, which is live at continuumscheduler.com, and you can see the blog posts going up there. Working through the content strategy surfaced thinking I hadn’t consciously done before, even though I’ve worked on distributed teams. I’ve been on the 6am call and the 10pm call; I know the cognitive overhead of trying to meet with a colleague in New Zealand, of waking up at 5am for a call when I’d rather be asleep. But I’d never really thought about what the problem was at a deeper level, and this workflow helped me do just that: go deeper. Designing a content strategy requires you to have opinions, and having those opinions tested makes them sharper.

This isn’t a “work faster” workflow, it’s a “work more powerfully” workflow. You’re not in the weeds; you’re thinking at a different level. Having now run through the whole thing, I want to be more precise about that. It is faster. Three and a half hours to go from a flakey strategy for a fictional sci-fi company to a complete content architecture, 23 writer-ready briefs, and 23 finalised articles is faster than any solo human alternative. A small agency would take weeks and charge accordingly. But this was done with minimal oversight from me, for the sake of a demo and trialling a workflow. If you’re doing this for real, please do it with more finesse. Or else I’ll be thinking about horse medication all over again.

It was also, genuinely, cognitively taxing in a way I didn’t expect. Directing a system like this (making the strategic calls, reviewing the outputs, catching the contradictions) is a different kind of mental effort than writing. It’s architecture work, not execution work, and as more of us are getting on board with Claude Code, we’re getting used to this kind of cognitive overhead. You might need to go and touch grass a little bit more often with this kind of work.

Download the companion workflow file, drop it into your Obsidian vault, and run the whole thing from a single Claude session. And let me know what you create.