
Building a 90-Day Delivery Roadmap From Scratch

No process, no roadmap, no predictability. Here's how to go from zero to a working delivery system in 90 days — even if your team has never followed a structured process.

Alvaro Burga

You just joined a company as CTO. Or maybe you're a founder who finally realized that "figure it out as we go" isn't working anymore. Your team has no defined process, no roadmap, and no way to predict when anything will be done.

Where do you start?

After setting up delivery systems at over a dozen companies, I've found that 90 days is the sweet spot. Short enough to maintain urgency, long enough to build something sustainable. Here's the playbook, week by week.

Before You Start: The Honest Assessment

Before building anything, spend 2-3 days understanding what you're working with. Talk to every person on the team individually. Not group meetings — private conversations. Ask three questions:

"What's the biggest thing slowing you down?" Developers will tell you things they'd never say in front of their manager. Common answers: unclear requirements, too many meetings, constant context-switching, broken development environment, slow code reviews.

"When was the last time the team delivered something on time?" If nobody can remember, you know the problem is systemic, not situational.

"If you could change one thing about how the team works, what would it be?" This tells you what the team is ready to change. Starting with changes the team wants makes adoption much easier than imposing changes from the top.

Write down everything. You'll use these insights to prioritize what to fix first.

Days 1-30: Foundation

The first month is about establishing the basics. Don't try to be sophisticated. The goal is to go from chaos to a simple, repeatable rhythm.

Week 1: Define "done" and set up visibility

Most teams without process have a fundamental problem: nobody agrees on what "done" means. For one developer, done means "I pushed the code." For another, it means "it's deployed and tested." For the product person, it means "the customer can use it."

Write a simple definition of done and post it where everyone can see it. Something like:

"Done means: code written, reviewed by a teammate, tests passing, deployed to production, and verified by someone other than the developer who built it."

Then set up a board. Jira, Linear, Trello, a whiteboard — the tool doesn't matter. What matters is that every piece of work the team is doing is visible in one place with a clear status: To Do, In Progress, In Review, Done.

Week 2: Establish a weekly rhythm

Introduce two meetings and eliminate everything else:

Monday: Weekly planning (30 minutes). The team looks at the board, decides what to work on this week, and commits to a small number of items. Small is key — commit to 60-70% of what you think you can do. Delivering everything you committed to builds confidence. Overcommitting and failing destroys it.

Friday: Weekly review (30 minutes). What did we actually complete? How does it compare to what we planned? What got in the way? One thing to try differently next week.

That's it. Two meetings. One hour total. Everything else can be handled asynchronously or in quick conversations as needed.

Week 3: Limit work in progress

Look at the board. If your team of 5 has 20 items in progress, that's your biggest problem. Nothing is getting finished because everyone is juggling too many things.

Introduce a simple rule: no more than 2 items per person in progress at any time. When something is blocked, the priority is unblocking it — not starting something new.

This will feel uncomfortable. Developers will say "but I can't do anything while I wait for code review." The answer: "Help review someone else's code, or help unblock the stuck item." The point is to finish work, not start work.

Week 4: Measure your baseline

After three weeks of the new rhythm, you have your first real data:

  • How many items did the team complete each week?
  • How long did items take from start to done?
  • How many items were blocked, and for how long?
  • How accurate was the Monday commitment vs. Friday reality?

Write these numbers down. This is your baseline. Everything from here is about improving these numbers.
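If your board tool can export completed items, the first two baseline numbers take only a few lines to compute. A minimal sketch in Python, assuming a hypothetical export where each item carries `started` and `finished` dates (adapt the field names to whatever your tool actually produces):

```python
from datetime import date

# Hypothetical export: one record per item completed during the baseline weeks.
items = [
    {"started": date(2024, 3, 4),  "finished": date(2024, 3, 6)},
    {"started": date(2024, 3, 4),  "finished": date(2024, 3, 8)},
    {"started": date(2024, 3, 11), "finished": date(2024, 3, 12)},
    {"started": date(2024, 3, 11), "finished": date(2024, 3, 15)},
    {"started": date(2024, 3, 18), "finished": date(2024, 3, 21)},
]

# Throughput: completed items grouped by the ISO week they finished in.
weekly = {}
for item in items:
    week = item["finished"].isocalendar().week
    weekly[week] = weekly.get(week, 0) + 1

# Cycle time: calendar days from start to done, averaged across items.
cycle_times = [(i["finished"] - i["started"]).days for i in items]
avg_cycle = sum(cycle_times) / len(cycle_times)

print("items per week:", weekly)
print("avg cycle time (days):", round(avg_cycle, 1))
```

Blocked-item counts and commitment accuracy usually need a note in the Friday review rather than an export, since most boards don't record "blocked" as a first-class state.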

Days 31-60: Optimization

The foundation is in place. Now you make it better.

Week 5: Address the top blocker

Look at your data from the first month. What was the most common reason items got stuck? Common culprits:

Slow code reviews. Fix: Set a rule that all pull requests get reviewed within 4 hours during business hours. Track it.

Unclear requirements. Fix: Before any item enters "In Progress," it must have clear acceptance criteria written in plain language. If the developer doesn't understand what "done" looks like, the item goes back to the product person.

Environment or deployment issues. Fix: Dedicate a developer for one week to fix the CI/CD pipeline, set up proper staging environments, or automate whatever is causing friction. This feels like lost development time, but it pays back tenfold.

Pick the single biggest blocker and fix it this week. Don't try to address three things at once.

Week 6: Introduce stakeholder updates

By now, your team has been delivering in a predictable rhythm for a month. It's time to make that visible to the rest of the company.

Start a weekly one-paragraph email to stakeholders (CEO, product, sales — whoever cares about what engineering delivers). Format:

"This week we completed: (list). Next week we plan to: (list). One thing to be aware of: (risk or blocker, if any)."

Short. Factual. Consistent. This builds trust faster than any dashboard or detailed report. Stakeholders start to feel like they know what's happening without having to ask.

Week 7: Refine planning with data

You now have 6 weeks of throughput data. Use it to plan more accurately:

  • "We've completed an average of 7 items per week over the last 6 weeks."
  • "Our completion rate has been between 5 and 9 items per week."
  • "Items involving the payment system take 2-3x longer than average."

Use these patterns to make smarter commitments. If you know payment items take longer, budget accordingly. If your throughput is 7 per week, commit to 6 and deliver 7 — underpromise, overdeliver.
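The "commit to 6, deliver 7" rule can be made mechanical instead of a gut call. A sketch, using hypothetical weekly totals: commit at a level you have met or beaten in most historical weeks (here, the 25th percentile), and treat the average as what you will likely deliver:

```python
import statistics

# Hypothetical weekly completion counts from the last 6 weeks.
throughput = [5, 7, 9, 6, 8, 7]

average = statistics.mean(throughput)  # what you'll most likely deliver
# Conservative commitment: the 25th percentile of historical throughput,
# i.e. a level you've reached in roughly three out of four past weeks.
commitment = statistics.quantiles(throughput, n=4)[0]

print(f"average: {average:.1f}, commit to about: {commitment:.0f}")
```

The exact percentile is a judgment call; the point is that the commitment comes from your own history, not from optimism.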

Week 8: Add a lightweight retrospective

Now that the team has a rhythm, add a 30-minute retrospective every two weeks. Not a complaint session — a structured conversation:

  • What does the data say about the last two weeks?
  • One thing that worked well (keep doing it)
  • One thing to try differently (run a two-week experiment)

One improvement per retro. After four retros, you've made four concrete changes, each validated by data.

Days 61-90: Scaling

The system works. Now you make it resilient and plan for the future.

Week 9: Build a 90-day roadmap

You now have enough data to make credible predictions. Sit down with the product team and leadership:

  • "Our team completes about 7 items per week."
  • "Here are the 40 items on our backlog, prioritized."
  • "At our current pace, we can complete roughly 28 items per month."
  • "Here's what we can realistically deliver over the next 90 days."

This isn't a promise carved in stone — it's a forecast based on real data. Update it monthly as new data comes in and priorities shift.
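One simple way to turn throughput history into that 90-day forecast is a small Monte Carlo simulation: resample your past weeks many times and report a range instead of a single number. A sketch with hypothetical data; the percentiles form the forecast band you present to leadership:

```python
import random

random.seed(42)  # fixed seed so repeated runs give the same forecast

# Hypothetical weekly completion counts from the last 6 weeks.
throughput = [5, 7, 9, 6, 8, 7]
weeks_ahead = 13  # roughly 90 days

# Simulate 10,000 possible 13-week futures by resampling past weeks.
totals = sorted(
    sum(random.choice(throughput) for _ in range(weeks_ahead))
    for _ in range(10_000)
)

pessimistic = totals[int(0.15 * len(totals))]  # 85% of simulations beat this
likely = totals[len(totals) // 2]              # median outcome
print(f"next 90 days: at least {pessimistic} items (85% confident), "
      f"around {likely} most likely")
```

Presenting a range ("at least X, probably Y") survives contact with reality far better than a single number, and updating it monthly is just a matter of feeding in the newest weeks.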

Week 10: Document the system

Write down how the team works. Not a 50-page process document — a one-pager:

  • Our weekly rhythm (planning Monday, review Friday)
  • How we track work (board, statuses, what "done" means)
  • Our WIP limits (2 per person)
  • How we handle unplanned work (evaluate urgency, trade off against planned work)
  • How we communicate with stakeholders (weekly email)
  • How we improve (retro every 2 weeks, one experiment at a time)

This document does two things: it makes the system survive team changes (new hires can read it on day one), and it makes the implicit rules explicit so nobody has to guess.

Week 11: Plan for sustainability

The biggest risk at this point is regression. The team goes back to old habits when things get stressful. To prevent this:

Protect the rhythm. The weekly planning and review meetings are non-negotiable. They don't get cancelled for "urgent" work. They don't get skipped because "everyone's too busy." They're the foundation — everything else depends on them.

Keep measuring. The moment you stop tracking completion rates and cycle times, you lose visibility. Make data review a 2-minute part of the weekly meetings, not a separate effort.

Celebrate consistency. When the team hits their commitment three weeks in a row, acknowledge it. When throughput improves, point it out. People repeat behaviors that get recognized.

Week 12: Review and set the next 90 days

Compare where you are now to where you started:

  • Week 1 completion rate vs. Week 12 completion rate
  • Average cycle time then vs. now
  • Number of items blocked for more than 2 days then vs. now
  • Stakeholder satisfaction then vs. now (just ask them)

Most teams see a 40-60% improvement in delivery predictability over 90 days. Not because they worked harder, but because they built a system that makes problems visible early and creates a consistent rhythm.

Then plan the next 90 days. What do you want to improve next? Faster cycle times? Better estimation? Automated deployments? More sophisticated metrics?

The first 90 days builds the engine. Everything after that is tuning it.

You Don't Have to Do This Alone

Building a delivery system from scratch while also managing a team and shipping product is hard. You're essentially rebuilding the plane while flying it.

If you want help accelerating this process — someone who's done it a dozen times and can guide your team through the transition — let's have a conversation about it. The first 90 days set the trajectory for everything that follows. Getting them right matters.

Ready to fix your delivery?

Let's talk about your challenges in a free 30-minute call.

Book a Discovery Call