
Building an MCP Server for an Endurance App

by The Next Race

We shipped TheNextRace's Model Context Protocol (MCP) server this week, in two flavors at once: a stdio server distributed via npm, and a fully OAuth 2.1-authenticated remote server at https://www.thenextrace.app/mcp. Together they let Claude Desktop, Claude Code, Cursor, and claude.ai all call into your TheNextRace account.

This post is for anyone thinking about doing the same thing for their own product. It's not a tutorial — it's a frank account of what worked, what didn't, and where the time actually went.

The starting point

Before this week, TheNextRace had no public REST API. The web app and the mobile app both talked directly to Supabase using the anonymous key plus session cookies. That's a fine architecture for two first-party clients, but it's a non-starter for an MCP server: agents need stateless authentication, ideally a token they can paste, and Supabase's session cookies don't fit that shape.

So the first chunk of work — about a third of the total time — was writing what we should have written months ago:

  1. A personal_access_tokens table with hashed token storage (SHA-256, never plaintext).
  2. A "Settings → API Tokens" panel where users can generate and revoke tokens.
  3. Six /api/v1/* routes (me, races, plans, plans/[id], workouts, profile) that wrap the existing Supabase queries behind a clean REST surface.
  4. A reusable bearer-auth wrapper that scopes every query to the authenticated user.
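The shape of that bearer-auth layer can be sketched as follows. This is a hedged illustration, not the real implementation: the names (`tokenStore`, `AuthedUser`, `authenticateBearer`) are hypothetical, and in the actual app the lookup would be an async Supabase query against the `personal_access_tokens` table rather than an in-memory map.

```typescript
import { createHash } from "node:crypto";

type AuthedUser = { id: string };

// Stand-in for the personal_access_tokens table: keyed by token *hash*,
// so plaintext tokens never touch storage.
export const tokenStore = new Map<string, AuthedUser>();

export function hashToken(token: string): string {
  // Tokens are stored and compared as SHA-256 hashes, never plaintext.
  return createHash("sha256").update(token).digest("hex");
}

export function authenticateBearer(
  authorization: string | null
): AuthedUser | null {
  if (!authorization?.startsWith("Bearer ")) return null;
  const token = authorization.slice("Bearer ".length);
  // Hash the presented token and look up the stored hash; every
  // downstream query is then scoped to the returned user's id.
  return tokenStore.get(hashToken(token)) ?? null;
}
```

The point of centralizing this is that every `/api/v1/*` route calls the same wrapper first, so user scoping lives in exactly one place.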

None of that is MCP-specific. But it's the layer the MCP server sits on top of, and skipping it would have meant duplicating the user-scoping logic and the Supabase access patterns into the MCP code itself. We've seen that approach before; it ages badly.

The two transports

Modern MCP clients fall into two camps:

Stdio. Claude Desktop, Cursor, and Claude Code spawn a process locally and talk to it over stdin/stdout. The "server" is a binary the user installs (in our case, via npm install -g @thenextrace/tnr). Auth is whatever the binary stored locally — for us, the same personal access token used by the CLI.

Streamable HTTP. claude.ai and similar web clients can't spawn processes. They connect to an HTTPS URL, do an OAuth dance, and then issue JSON-RPC over HTTP. Authentication has to be OAuth 2.1 with PKCE.

We built both. It's the same tool definitions either way; the only difference is the transport plumbing.

For stdio, we used the official @modelcontextprotocol/sdk and let it handle JSON-RPC framing. For the URL-based version we hand-rolled a small JSON-RPC dispatcher inside a Next.js route handler, because the SDK's HTTP transport assumes Node http server semantics that don't translate cleanly to Next.js's Request/Response model. ~150 lines of dispatch code; cleaner than wrestling with an adapter.
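A minimal sketch of that hand-rolled dispatcher, working directly with the web `Request`/`Response` model that Next.js route handlers use. The method table here is illustrative (`ping` is a placeholder); the real dispatcher would cover `initialize`, `tools/list`, `tools/call`, and friends.

```typescript
type JsonRpcRequest = {
  jsonrpc: "2.0";
  id?: string | number | null;
  method: string;
  params?: unknown;
};

// Method table: the real server registers the MCP lifecycle and tool
// methods here. "ping" is a hypothetical stand-in.
const methods: Record<string, (params: unknown) => Promise<unknown>> = {
  ping: async () => ({ ok: true }),
};

export async function POST(req: Request): Promise<Response> {
  const rpc = (await req.json()) as JsonRpcRequest;
  const handler = methods[rpc.method];
  if (!handler) {
    // JSON-RPC 2.0 "method not found"
    return Response.json({
      jsonrpc: "2.0",
      id: rpc.id ?? null,
      error: { code: -32601, message: `Method not found: ${rpc.method}` },
    });
  }
  try {
    const result = await handler(rpc.params);
    return Response.json({ jsonrpc: "2.0", id: rpc.id ?? null, result });
  } catch (err) {
    // JSON-RPC 2.0 "internal error"
    return Response.json({
      jsonrpc: "2.0",
      id: rpc.id ?? null,
      error: { code: -32603, message: String(err) },
    });
  }
}
```

Because everything is plain `Request` in, `Response` out, there's no adapter layer to fight: the handler drops straight into a Next.js route file.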

Tool design

We exposed exactly ten tools: whoami, list_races, list_plans, list_library_workouts, get_profile, create_training_plan, create_race, delete_plan, create_library_workout, update_profile. That covers the headline demo end-to-end and nothing else.

The single biggest lever for tool quality, by a wide margin, is the descriptions. Agents pick tools based on what the description says — not the name, not the schema. We rewrote ours three times, getting more specific each time. Examples:

  • For list_races: "ALWAYS call this before create_race or delete_plan — agents must reference races by id, not name."
  • For create_race: "Every race must belong to a training plan — call list_plans first (or create_training_plan if the user has none). raceType accepts canonical enum values (run_marathon, tri_full_distance, …) AND friendly aliases (marathon, ironman, 70.3, 5k)."
  • For update_profile: "thresholdPace is run threshold pace as 'mm:ss' per km. css is swim CSS as 'mm:ss' per 100m. Updating any of these recalculates training zones."
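To make that concrete, here's roughly how two of those definitions look as plain tool objects, in the shape a tools/list response carries. The wording paraphrases the examples above and the schemas are trimmed; this is a sketch, not the shipped definitions.

```typescript
type ToolDef = {
  name: string;
  description: string;
  inputSchema: {
    type: "object";
    properties: Record<string, unknown>;
    required?: string[];
  };
};

export const tools: ToolDef[] = [
  {
    name: "list_races",
    // The description carries the workflow constraint the agent needs.
    description:
      "List the user's races. ALWAYS call this before create_race or " +
      "delete_plan: races must be referenced by id, not name.",
    inputSchema: { type: "object", properties: {} },
  },
  {
    name: "create_race",
    description:
      "Create a race inside an existing training plan; call list_plans " +
      "first (or create_training_plan if the user has none). raceType " +
      "accepts canonical enum values and friendly aliases " +
      "(marathon, ironman, 70.3, 5k).",
    inputSchema: {
      type: "object",
      properties: {
        planId: { type: "string" },
        raceType: { type: "string" },
      },
      required: ["planId", "raceType"],
    },
  },
];
```

Note that the ordering constraints ("call list_plans first") live in the description text itself, because that's the only part the agent reliably reads when choosing a tool.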

Vague descriptions translate directly to agents picking the wrong tool or hallucinating arguments. Detailed descriptions translate to agents that just work.

OAuth from scratch

The HTTP transport needs OAuth 2.1 with PKCE plus Dynamic Client Registration (DCR). We considered using a library — there are several — and decided against it for a few reasons:

  1. The OAuth 2.1 spec for public clients is small and well-defined. The actual code is ~400 lines.
  2. Libraries bring opinions about session storage, token formats, refresh strategies. Most of those don't fit cleanly with our existing Supabase auth.
  3. We wanted to understand exactly what we were shipping. OAuth bugs tend to be subtle, and "we trusted the library" isn't a comforting incident retro.

What we built:

  • /.well-known/oauth-authorization-server and /.well-known/oauth-protected-resource for discovery.
  • /api/oauth/register for DCR — clients self-register and get a client_id.
  • /oauth/authorize page for user consent.
  • /api/oauth/token for the auth-code-with-PKCE and refresh-token grants.
  • A unified bearer.ts that accepts either a personal access token or an OAuth access token, transparently, on every /api/v1/* route.
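The PKCE half of that token endpoint reduces to one comparison: the S256 `code_challenge` stored at authorize time must equal `base64url(sha256(code_verifier))` presented at token time. A sketch, with hypothetical helper names:

```typescript
import { createHash, randomBytes } from "node:crypto";

export function makeVerifier(): string {
  // 32 random bytes encode to a 43-character base64url string, inside
  // RFC 7636's allowed 43-128 character range for code_verifier.
  return randomBytes(32).toString("base64url");
}

export function challengeFor(verifier: string): string {
  // S256 method: base64url(sha256(verifier)), no padding.
  return createHash("sha256").update(verifier).digest("base64url");
}

export function pkceMatches(
  storedChallenge: string,
  presentedVerifier: string
): boolean {
  // The client sent challengeFor(verifier) to /oauth/authorize; now it
  // must produce the matching verifier at /api/oauth/token.
  return challengeFor(presentedVerifier) === storedChallenge;
}
```

This check is most of what makes OAuth 2.1 safe for public clients: an intercepted authorization code is useless without the original verifier.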

Tokens are opaque (random base64url) rather than JWTs. Easier to revoke, simpler to reason about, and JWTs were the wrong tradeoff for our scale.
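Minting such a token is a few lines. The `tnr_` prefix matches what the post describes; the 32-byte length is an assumption for illustration, not the real value.

```typescript
import { randomBytes } from "node:crypto";

export function mintToken(): string {
  // Opaque token: a recognizable prefix plus random base64url. Only its
  // SHA-256 hash is stored, so revocation is just deleting the row.
  return "tnr_" + randomBytes(32).toString("base64url");
}
```

The prefix also makes the format easy to register with secret scanners, which pays off later in the Secret Scanning webhook.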

What we cut

A bunch of things, deliberately:

  • Plan generation. This is the actual product feature, and it's a multi-day project on its own. The MCP can read and write training data; it can't yet generate a periodized block. That comes next.
  • Streaming responses. The MCP spec supports SSE for long-running tools. None of ours are long-running, so we ship synchronous JSON. We can add streaming when there's a tool that needs it.
  • Edits and deletes for races and workouts. v0.1 ships the create paths only. Edits land in v0.2.
  • Activity / connect commands. OAuth callbacks for Strava, Wahoo, Oura already exist for the web app — wiring them into the CLI is multi-day work and a separate release.

Time breakdown

Roughly:

  • 2 hours: REST API + PAT system + Settings panel
  • 1 hour: CLI scaffolding, auth commands, domain commands
  • 1 hour: stdio MCP server with tools
  • 30 min: npm publish, GitHub Actions release CI
  • 3 hours: URL-based MCP with full OAuth 2.1
  • 30 min: GitHub Secret Scanning webhook

Call it eight hours total, plus testing. It's possible because the API + the tool surface are small. If you're building this for a product with hundreds of endpoints, the math changes.

What we'd skip if we did it again

  • Provenance attestation on a private repo. npm's --provenance flag requires a public GitHub repo. Our CI swallowed a "publish failed: provenance unsupported on private repos" error once before we noticed.
  • Embedded type discriminators in token prefixes. We started with tnr_pat_ and tnr_oauth_ then realized we only have two token types and they live in distinct tables. One tnr_ prefix; check both tables on lookup. Simpler, easier to register with secret scanners.
  • Skipping the consent UI. It's tempting, especially for a personal-use launch. Don't. The "first time someone external connects this" scenario is when an explicit consent screen earns its keep.

What this enables

For users: a way to manage TheNextRace from inside whatever AI tool they already live in. For us: a foundation that makes plan generation, when it lands, immediately reachable from every MCP-compatible surface — not gated behind another integration project.

If you're building something similar for your own app, the takeaway is: the MCP plumbing is the easy part. The hard part is having a clean, well-scoped API to expose. Build that first; the protocol layer is straightforward.

The CLI and stdio server are available on npm; the URL-based MCP server is at https://www.thenextrace.app/mcp.
