This article was written by Leke Ojo, Product Manager at Rank Capital. Rank Premium is your personal investment banker that gives you access to expert wealth building strategies. Rank Premium is coming soon.
I built a full Investment Dashboard in a few days. PRD, data model, wireframes, tested working prototype. Then I handed it over to engineering for deployment.
The product is a client-facing portal where customers can track their holdings, gains and losses, and cash balances. There is also an internal dashboard for wealth managers to manage client accounts. Auth, email alerts, audit trails, CSV uploads. The full V1 scope.
I used AI as a partner all through the process. Not to avoid thinking, but to reduce the time between having an idea and actually shipping it.
What the process looked like
I started the way most product managers start: with a PRD. But instead of spending a week drafting, circulating, and revising, I drafted and refined it with AI, working through edge cases and data models in real time. Within hours, the spec was tight enough to build against.
From there, I built responsive wireframes and a working React prototype straight from the spec. Product logic got tested end to end, including validation rules, access control, and empty states. Then I tested the full app again and packaged it for deployment.
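Access control was one of the rules tested end to end. A minimal sketch of what such a check can look like, assuming a simple two-role model. Every name here is illustrative, not the prototype's real code:

```typescript
// Hypothetical access-control rule: clients may only read their own
// holdings; wealth managers may read accounts they are assigned to.
type Role = "client" | "wealth_manager";

interface User {
  id: string;
  role: Role;
  managedAccountIds?: string[]; // only populated for wealth managers
}

function canViewAccount(user: User, accountOwnerId: string): boolean {
  if (user.role === "client") {
    // A client sees exactly one account: their own.
    return user.id === accountOwnerId;
  }
  // A wealth manager sees only accounts explicitly assigned to them.
  return user.managedAccountIds?.includes(accountOwnerId) ?? false;
}
```

Small, pure functions like this are also the easiest thing to hand AI for test generation, because the rule fits in one sentence.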
The output was not a document. It was a tested, deployment-ready prototype.
A few things I learned
1. Bring your real work. Use a real PRD, a real user flow, a real problem. That is where AI becomes useful. Not in toy demos or hypothetical exercises.
2. Think of AI as a thinking partner, not an output machine. The best results come from going back and forth, not from a single prompt. I would push back, add constraints, ask it to reconsider. That iteration loop is where the quality comes from.
2. Prompt in markdown. One well-structured markdown prompt can replace several messages and save tokens. Structure your briefs, specs, and requirements in markdown before feeding them in. It significantly cuts the number of prompts you need.
4. Document your design system in markdown with reference images. When AI already knows your colors, typography, and components, the output quality increases and you spend less time correcting. Front-load this context and the returns compound across every screen you build.
5. Get your engineers to prepare a starter markdown of the stack. Framework, database, folder structure, API conventions. When AI works with your actual system specs from the beginning, handoff becomes smooth and migration to internal services is easier. This one step alone can save days of rework.
6. Scope before you prompt. Break the product into modules like auth, dashboard, and reports. Each module gets more attention and issues show up earlier. Trying to build everything in one pass is how you end up with a mess.
7. Review everything like it came from a junior. AI speeds things up, but the product decisions still sit with you. What to build, what to cut, what actually matters. Do not abdicate judgment just because the output looks polished.
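Lessons 3 through 5 all come down to the same move: front-load structured context. As an illustration, a starter brief might look like this. The stack, colors, and conventions below are hypothetical, not the ones from this build:

```markdown
# Project brief: client portfolio portal

## Stack (from engineering)
- Frontend: React + TypeScript
- Database: PostgreSQL
- API style: REST, JSON only, resources under /api/v1/

## Design system
- Primary color: #0B3D2E, accent: #D4AF37
- Type: Inter, 14px body, 24px section headings
- Components: Card, DataTable, EmptyState (reference images attached)

## Module in scope for this session
Auth only: email/password login, session expiry, role check
(client vs wealth manager). Dashboard and reports are out of scope.
```

One file like this, pasted at the start of a session, replaces the dozen corrective messages you would otherwise spend re-teaching the same context.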
The bigger picture
This whole experience reminded me of the first time I saw a computer in primary school. Just clicking around, not really understanding it but knowing it could do anything. Building with AI brought that feeling back. I had not felt that in a long time.
The PM role is not getting smaller. The cycle time is. And the PMs who figure out how to build at this pace will set the standard for what comes next.
What I would do differently: using Get Shit Done (GSD)
After shipping the dashboard, I came across Get Shit Done (GSD), a spec-driven development system built for Claude Code. It is a lightweight context engineering and meta-prompting layer that makes AI coding tools reliable and repeatable. Looking at my process in hindsight, GSD would have solved several friction points I ran into and made the entire build more structured.
Here is what I would change.
Structure the build as phases, not one long session
My process was largely one continuous thread with AI. It worked, but by the time I was deep into the wealth manager dashboard, context from earlier decisions like auth logic, data models, and validation rules was getting diluted. AI responses started losing precision.
GSD calls this context rot, and it solves it with phased execution. Each phase gets its own planning, execution, and verification cycle with a fresh context window. No accumulated noise.
This is exactly the “scope before you prompt” lesson I learned the hard way, but turned into a system. With GSD, I would have run /gsd:new-project to capture the full vision, requirements, and roadmap upfront, then /gsd:plan-phase for each module. Each phase would execute in a fresh context with the full token budget dedicated to that module alone.
Lock implementation decisions before building
One thing I did well was iterate on decisions with AI before building. But those decisions lived in conversation history, not in a structured document. When I revisited a module later, I sometimes had to re-explain constraints I had already worked through.
GSD has a /gsd:discuss-phase command that captures implementation decisions (layout preferences, interaction patterns, empty states, error handling) into a CONTEXT.md file. That file feeds directly into the planning and execution steps. My decisions would not just be somewhere in the chat. They would be structured context that every subsequent agent reads automatically.
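I have not reproduced GSD's actual file format here, but as a hypothetical sketch, the kind of decisions a CONTEXT.md could capture for one phase might read:

```markdown
# CONTEXT.md — Phase: client dashboard (illustrative sketch)

## Layout
- Holdings table first; cash balance card pinned top-right

## Empty states
- New client with no holdings: show an onboarding prompt, not an empty table

## Error handling
- Failed CSV upload: list every bad row with its line number; never partial-import

## Locked decisions
- Gains/losses are computed server-side; the client never derives money values
```

The value is not the format. It is that decisions like these stop living in scrollback and start living somewhere every subsequent agent reads.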
Let parallel agents handle research and verification
I did all the research and testing within the same conversation. The same context window was doing requirement analysis, code generation, debugging, and verification simultaneously. That is a lot of load on one session.
GSD spawns parallel agents for different stages. Researchers investigate implementation approaches. Planners create atomic task plans. Executors build in fresh contexts. Verifiers check the work against goals. The orchestrator stays light. My main session would have stayed fast and responsive while the heavy lifting happened in dedicated sub-contexts.
Get atomic commits from the start
My git history from the dashboard build is functional but not surgical. Some commits bundle multiple changes because the AI was building across concerns in the same pass.
GSD enforces atomic commits per task. Each task gets its own commit with a clear message. If something breaks, git bisect finds the exact failing task. Each change is independently revertable. For a financial product where auditability matters, this discipline would have been worth it from day one.
Use structured verification instead of manual testing
I tested the dashboard manually and caught issues through my own review. But there was no structured step that checked the output against the original requirements systematically.
GSD’s /gsd:verify-work walks you through testable deliverables one at a time. Can a client see their portfolio balance? Does the CSV upload validate column headers? If something fails, it spawns debug agents to diagnose root causes and creates fix plans ready for re-execution. No manual debugging, no guessing where things went wrong.
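The CSV header check is a good example of a deliverable that is trivial to verify mechanically. A minimal sketch in TypeScript, where the column names are assumptions rather than the portal's real schema:

```typescript
// Validate that an uploaded CSV carries exactly the columns the importer
// expects. Column names are illustrative, not the real schema.
const REQUIRED_HEADERS = ["client_id", "symbol", "quantity", "cost_basis"];

interface HeaderCheck {
  valid: boolean;
  missing: string[];    // required columns the upload lacks
  unexpected: string[]; // columns the importer does not recognize
}

function validateCsvHeaders(headers: string[]): HeaderCheck {
  const seen = headers.map((h) => h.trim().toLowerCase());
  const missing = REQUIRED_HEADERS.filter((h) => !seen.includes(h));
  const unexpected = seen.filter((h) => !REQUIRED_HEADERS.includes(h));
  return {
    valid: missing.length === 0 && unexpected.length === 0,
    missing,
    unexpected,
  };
}
```

Returning the specific missing and unexpected columns, rather than a bare boolean, is what lets an error message list exactly what to fix instead of "invalid file".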
The bottom line
My process worked. I shipped a tested prototype in days. But it was powered by my own discipline in structuring prompts, scoping modules, and reviewing output carefully. GSD codifies that discipline into a repeatable system.
The context engineering, phased execution, and spec-driven planning are not nice-to-haves. They are the difference between a good outcome that depended on personal effort and a reliable process that works consistently every time.
Next build, I am running GSD from day one.
Now here is the part that should excite you
Everything you just read — the dashboard, the architecture, the speed of building — all of it exists in service of something bigger.
We built this infrastructure because we believe that the kind of investment expertise that used to be reserved for wealthy clients with private bankers should be available to everyone. Not a generic robo-advisor that moves money around based on a five-question quiz. Not a dashboard full of numbers with no one to help you understand what they mean.
Rank Premium is something different.
Think of it as having your own investment banker — someone (and something) that knows your financial situation, tracks your portfolio, spots opportunities, flags risks, and helps you make decisions with confidence. Not instead of you. Alongside you.
The wealthy have had access to this kind of personalised financial guidance for decades. We are changing that.
Rank Premium is not live yet. But it is coming, and when it launches, the waiting list is going to be the place you want to be. Early access means getting into a system that is built to grow with you: smarter recommendations, deeper insights, and the kind of wealth management that used to require a minimum investment most of us will never see.
If you have ever looked at your savings and thought “I know I should be doing more with this, I just do not know where to start”, Rank Premium is being built for that exact feeling.
