Software & Technology · Architecture · Integration · Digital Transformation · Performance

AI-Native Project Management via MCP

The engineering and product team at a growth-stage SaaS company (internal tooling)

Small business · $6K–10K initiative

Weekly PM overhead per engineer
3.5 hours → 45 minutes

Time to done
7 days → 4 days

Time and Effort
40 hours → 28 hours

The Problem

The engineering team was not underperforming. They were spending a disproportionate share of their time on the coordination overhead that surrounds engineering work rather than on the work itself. Tickets needed researching before they could be estimated. Documentation needed writing before context could be handed off. Research spikes that should take an hour were stretching across days because the engineer doing them was also fielding standup questions and chasing down decisions scattered across three tools. The 3.5 hours of PM overhead per engineer per week was the visible symptom; the underlying cost was a 7-day average sprint cycle that left too little room for quality checks.

The Approach

The core insight was that most of the repetitive cognitive work (research, summarisation, status synthesis, documentation drafts) followed predictable patterns that an AI assistant with the right context could handle reliably. A Model Context Protocol server gives that assistant structured, real-time access to the data it needs: open tickets and their history from Linear, commit and PR activity from GitHub, and decision logs from Confluence. The assistant doesn’t replace engineering judgment; it handles the retrieval and translation work so engineers don’t have to.

We built the MCP server in two weeks and focused on wiring the three existing tools into a single queryable surface with appropriate access controls. The integration into daily workflow was intentionally low-friction: scheduled AI tasks take the first pass at research spikes, documentation stubs, and pre-meeting briefs. Engineers review and edit rather than starting from scratch.
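The "single queryable surface" idea can be sketched as follows. All names here are hypothetical; the production server maps these shapes onto the live Linear, GitHub, and Confluence APIs via MCP tool definitions rather than in-process objects.

```typescript
// Hypothetical shapes; the real server exposes these as MCP tools
// backed by the Linear, GitHub, and Confluence APIs.
interface ContextItem {
  source: "linear" | "github" | "confluence";
  title: string;
  updatedAt: string; // ISO 8601 timestamp
  body: string;
}

interface ContextSource {
  readonly name: string;
  search(query: string): Promise<ContextItem[]>; // read-only by design
}

// Fan a single query out to every connected source and merge the
// results newest-first, so the assistant sees one unified surface
// instead of three separate tools.
async function queryContext(
  sources: ContextSource[],
  query: string,
): Promise<ContextItem[]> {
  const results = await Promise.all(sources.map((s) => s.search(query)));
  return results
    .flat()
    .sort((a, b) => b.updatedAt.localeCompare(a.updatedAt));
}
```

Keeping `search` as the only method on the interface is what makes the read-only guarantee easy to audit: there is simply no write path to misuse.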

The prompt library, built from real planning sessions and past sprint retrospectives, was the highest-leverage part of the build. Outputs that match the team’s existing communication style get adopted; outputs that sound generic get ignored. The average sprint cycle dropped from 7 days to 4 within the first full sprint after rollout.
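One way to picture a prompt-library entry (the structure here is illustrative, not the actual implementation): each entry pairs a task type with a template distilled from real transcripts, and the assistant fills the template from queried context before taking its first pass.

```typescript
// Hypothetical prompt-library entry; real templates are distilled
// from the team's planning-session and retrospective transcripts.
interface PromptTemplate {
  task: "research-spike" | "doc-stub" | "pre-meeting-brief";
  template: string; // contains {{placeholder}} slots
}

// Fill {{placeholders}} from a context map. Unknown keys are left
// intact so a missing value is visible at review time rather than
// silently rendered blank.
function renderPrompt(
  entry: PromptTemplate,
  context: Record<string, string>,
): string {
  return entry.template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in context ? context[key] : match,
  );
}
```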

The Solution

A Model Context Protocol (MCP) server exposing Linear, GitHub, and Confluence as structured, queryable context for an AI assistant, integrated into the team’s existing daily workflows without adding new tools or login surfaces.

  • MCP server with tri-source context: the assistant has real-time, read access to open tickets and history from Linear, commit and PR activity from GitHub, and decision logs from Confluence.

    Mitigates: engineers spending 3.5 hours every week pulling context together from three different tools before they could start the work they were actually hired to do.

  • Scheduled AI task execution: predictable, repeatable tasks (research spikes, documentation stubs, pre-meeting briefs) are handled by the assistant on a schedule.

    Mitigates: research and documentation stretching across days because engineers kept being pulled away to answer questions and attend meetings before they could finish.

  • Sprint pre-meeting briefs: a synthesised brief is generated and distributed before each sprint planning session, drawing from all active tickets, recent commits, and outstanding decisions.

    Mitigates: planning meetings where nobody had the full picture, leading to estimates that missed and scope that shifted mid-sprint.

  • Prompt library built from real sessions: the assistant’s output is calibrated to the team’s existing communication style using transcripts from actual planning sessions and retrospectives.

    Mitigates: AI output that sounds generic, gets rewritten, and ends up saving no time at all.

  • Role-appropriate access controls: the MCP server enforces read-only access scoped per data source, with no write permissions to any connected system.

    Mitigates: the reasonable concern that giving an AI tool access to production data creates security exposure, which could block adoption entirely if not addressed.
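A minimal sketch of the read-only scoping described in the last bullet (scope and source names are hypothetical; in production the guarantee is also enforced at the API-token level, so write access is never granted in the first place):

```typescript
// Hypothetical per-source permission table. Every connected system
// is scoped read-only; a write request fails before any API call.
type Scope = "read" | "write";

const allowedScopes: Record<string, Scope[]> = {
  linear: ["read"],
  github: ["read"],
  confluence: ["read"],
};

function assertAllowed(source: string, scope: Scope): void {
  const scopes = allowedScopes[source] ?? [];
  if (!scopes.includes(scope)) {
    throw new Error(`${scope} access to ${source} is not permitted`);
  }
}
```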

Project Timeline

  1. Week 1–2

    Workflow audit: mapped every touchpoint between engineers, PM tooling, and stakeholder reporting

  2. Week 3–4

    MCP server built: Linear, GitHub, and Confluence surfaces exposed as structured context

  3. Week 5

    AI assistant integrated into daily standup and sprint review workflows

  4. Week 6–7

    Team onboarded; prompt library built from real meeting transcripts and planning sessions

  5. Week 8

    Baseline metrics confirmed; manual status reporting process retired

Technologies used

Model Context Protocol · Claude API · TypeScript · Linear API · GitHub API · Confluence API · Node.js

Ready to start a similar project?

Let's talk about your specific challenges and what outcomes matter most to your business.

Start the conversation →