
I love Dragon Quest. For the uninitiated: it was one of the first Japanese RPGs to hit the States (as Dragon Warrior), and it drops you into a world with basically zero guidance. No waypoints. No tutorial pop-ups. A few words from an NPC, and then it’s just you, a sword, and a large map to figure out on your own.
Getting dropped into a new codebase feels a lot like that.
This happens in our work all the time. You're a new hire trying to make a good first impression. You're switching teams inside a company you know well, into a codebase you don't. You're an ambitious junior dev who just got asked what you want to own, and you have absolutely no idea where to start. You're a bug fixer jumping between services, trying to understand just enough to not make things worse. Or you found an open source project on a weekend and now you're staring at a repo wondering where and how to contribute. The personas are different, but the feeling is the same. I talked through a version of this at Boston Code Camp late last year, and I've kept thinking about it ever since.
So let's talk about how to get up to speed: the classic approaches, where foundation AI models help (and this has changed a lot since I gave the talk in late November), and where AI still leaves you on your own. Along the way, I'll highlight the things I've noticed that separate engineers who ramp fast from engineers who stay lost longer than necessary.
The old playbook is a classic because it works.
You start by exploring. Read the README, assuming it's been updated recently, which is never guaranteed. Pick one end-to-end flow: an endpoint, a CLI command, a button in the UI. Follow it all the way through. Trace the call stack. Figure out which functions call which. It's slow, but it builds real mental models.
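If the codebase happens to be Python, you don't even have to trace by hand. Here's a minimal sketch using the standard library's `sys.settrace` hook to print every function call while you exercise one flow. `handle_request` and its helpers are hypothetical stand-ins for whatever entry point you picked:

```python
import sys

DEPTH = 0

def tracer(frame, event, arg):
    """Print an indented line for every function entered under the traced flow."""
    global DEPTH
    if event == "call":
        code = frame.f_code
        print(f"{'  ' * DEPTH}-> {code.co_name}  ({code.co_filename}:{code.co_firstlineno})")
        DEPTH += 1
    elif event == "return":
        DEPTH -= 1
    return tracer  # keep tracing inside nested calls

# Hypothetical flow: swap in the real endpoint or command you're following.
def validate(): pass
def load_user(): pass
def handle_request():
    validate()
    load_user()

sys.settrace(tracer)
handle_request()
sys.settrace(None)
```

Run it once against the flow you picked and you get an indented call tree for free, which is exactly the mental model the manual approach builds, just faster.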
Then you start digging. Recent pull requests. Issue tracker. Integration tests. Runbooks. A grep for TODO/FIXME comments, which is a free tour of what previous engineers knew was wrong but didn't fix.
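To make that TODO tour concrete, here's a minimal sketch of the grep in Python. The markers and file extensions are assumptions; tune them for your stack:

```python
import pathlib
import re
from collections import Counter

# Markers and extensions are assumptions; adjust for your codebase.
MARKERS = re.compile(r"\b(TODO|FIXME|HACK|XXX)\b")
EXTENSIONS = {".py", ".rs", ".go", ".ts", ".java", ".c", ".cpp"}

def scan(repo_root: str = ".") -> Counter:
    """Print each marker hit and count hits per file."""
    counts: Counter = Counter()
    for path in pathlib.Path(repo_root).rglob("*"):
        if not path.is_file() or path.suffix not in EXTENSIONS:
            continue
        try:
            lines = path.read_text(errors="ignore").splitlines()
        except OSError:
            continue
        for lineno, line in enumerate(lines, start=1):
            if MARKERS.search(line):
                counts[str(path)] += 1
                print(f"{path}:{lineno}: {line.strip()}")
    return counts

if __name__ == "__main__":
    counts = scan()
    print("\nFiles carrying the most acknowledged debt:")
    for path, n in counts.most_common(10):
        print(f"{n:4d}  {path}")
```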
If you’re really ambitious (or really responsible), you do some archaeology. Git history. Git blame on interesting but uncommented functions. Old pull requests. PR threads where someone argued about an approach for three days before merging the thing anyway. If your team does postmortems, those are gold, because someone may have already documented the exact landmine you're about to step on.
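You can automate a first pass at the archaeology, too. This sketch shells out to `git log` to rank the most-churned files over a recent window; it assumes `git` is on your PATH and you run it from inside the repo, and the six-month window is an arbitrary choice:

```python
import subprocess
from collections import Counter

def churn(since: str = "6 months ago", top: int = 15) -> None:
    """Rank files by how many commits touched them in the window."""
    # --name-only lists the files touched by each commit;
    # an empty --format= suppresses the commit headers, leaving only paths.
    out = subprocess.run(
        ["git", "log", f"--since={since}", "--name-only", "--format="],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = Counter(line for line in out.splitlines() if line)
    for path, n in counts.most_common(top):
        print(f"{n:4d}  {path}")

if __name__ == "__main__":
    churn()
```

The files at the top of that list are where the action is, and usually where the conversations in old PR threads happened.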
And eventually, you build up enough knowledge to chat with the senior engineer who knows the service and ask them to walk you through it. Yes, you feel like you're wasting their time. Do it anyway. There's context that lives only in their head and nowhere else, and the only way to get it is to ask. If you can get the basics elsewhere, you can spend your time with them on the high-level why instead of the low-level how.
All of this is valuable. But it's also manual detective work, and it doesn't scale well when you're dealing with a large codebase, multiple services, or a situation where the person who knows the answers is booked solid.
At that conference last year, I asked the room how many people had used Claude Code or Codex to explore a codebase. A few hands went up, and most belonged to people who had only done it for personal projects; companies were still just starting to evaluate coding agents. I suspect that if I gave the same talk today, a lot more hands would go up. Agentic development frameworks, and the foundation models they depend on, are getting better fast.
Here's where they shine: when you're learning a new framework or language pattern and want to know whether it works like something you already know from another language, instead of wading through tutorials until you find the answer. A year or two back, I was messing around with Rust and kept running into concepts that felt familiar from C, but I needed more advanced examples than the tutorials offered, short of full operating-system code. Being able to ask directly, and get a concrete comparison from an AI tool, saved me a lot of time.
The same idea applies to a codebase. You can point an agent at your repo and ask where you should start if you want to contribute, and get a reasonable answer. Claude Code’s `explore` subagent does exactly this at the start of most sessions. It's not bad.
But a few things are true at the same time: the tools are genuinely useful, they're much better at explaining what the code does than why it keeps changing, and the signals that matter most often live outside the files entirely.
Here's something I've found after years of watching engineers ramp up in new codebases. Selfishly, I want my team to get up to speed fast because it boosts their confidence and our collective velocity. But I’ve realized the ones who move fast aren't necessarily reading more code. They're reading the codebase in an entirely different way.
Ambitious newer engineers tend to approach a codebase like that Dragon Warrior map: explore every tile, read every file, try to build a complete picture before doing anything. Which is understandable. You don't know what matters yet, so you treat everything as potentially important.
More experienced engineers skip straight to the signals, asking: Where is the work concentrated right now? What kind of work is it, new features or firefighting? Where does it keep breaking?
If you can read these signals early, you make much better decisions about where to focus. You also learn to spot bad hygiene. A senior engineer knows that a PR should do one thing, like a good function. If you see a pull request that’s 5,000 lines long and tries to refactor a module while also adding a feature and fixing two bugs, you’ve found an area where things are likely to break again. You can find the places where a small, confident contribution is possible. You stay away from the areas that are mid-refactor or historically problematic until you understand the context.
Here's what it looks like when you don't. Someone joins a team, scans the repo, finds a module that looks messy and under-documented, and decides that's a great place to make an impact by going above and beyond. They spend their spare cycles over a sprint cleaning it up. What they didn't know is that the module is actively being rewritten by two other engineers. The work was fine. The timing was a complete waste. A quick look at where recent activity was concentrated would have told them everything they needed to know before writing a single line.
The instinct develops slowly, by sitting in incident reviews, attending postmortems, and watching senior engineers ask the same questions at the start of every new project. Eventually you stop asking what the code does and start asking why it keeps changing. The even harder part is that most organizations don't make these signals easy to find. They're scattered across git history, dashboards in three different tools, and the heads of people who've been around long enough to remember. You can assemble a picture, but it takes time, which is exactly what you're trying to save.
I ran into a similar issue myself late last summer, as an engineering leader. I came back from vacation to find the team had been in full scramble mode for a week. A pinned dependency had disappeared. They'd done the right thing by pinning it, but it vanished anyway, and they'd had to drop everything to deal with it. I found out in about 30 seconds by looking at the shape of the work: a big drop in feature development, a spike of KTLO (keeping the lights on) activity, all of it clearly reactive, concentrated in areas that had been quiet the week before. The signals told the story before anyone had to explain it.
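You can approximate that shape check with nothing but git. This sketch buckets recent commit subjects into reactive versus feature work using crude keyword heuristics; the keyword list is an assumption, and this is a stand-in for the signal, not how Flux computes it:

```python
import subprocess
from collections import Counter

# Crude heuristic: commit subjects containing these words count as reactive/KTLO work.
REACTIVE = ("fix", "hotfix", "revert", "bump", "pin", "patch")

def shape_of_work(since: str = "2 weeks ago") -> None:
    """Bucket recent commit subjects into reactive vs. feature/other work."""
    out = subprocess.run(
        ["git", "log", f"--since={since}", "--format=%s"],  # %s = subject line only
        capture_output=True, text=True, check=True,
    ).stdout
    buckets: Counter = Counter()
    for subject in out.splitlines():
        lowered = subject.lower()
        kind = "reactive" if any(word in lowered for word in REACTIVE) else "feature/other"
        buckets[kind] += 1
    total = sum(buckets.values()) or 1
    for kind, n in buckets.most_common():
        print(f"{kind:>14}: {n:4d} commits ({100 * n / total:.0f}%)")

if __name__ == "__main__":
    shape_of_work()
```

If the reactive bucket suddenly dominates a repo that was quiet last week, something happened, and you learned it without reading a single status report.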
That's the core of what Flux does. It analyzes the code, commits, pull requests, and activity across your repositories and surfaces exactly the kind of signals I've been describing: where the work is concentrated, what kind of work it is, and where things keep breaking.
It starts from the code itself. Tickets and status reports can only tell you so much.
For someone new to a codebase, that means having a real map before you start exploring. Something more comprehensive than a wiki diagram or a README that may or may not reflect reality. Ground-truth intelligence from the code itself, based on what's actually been happening.
You don't get fast at ramping up by reading faster. You get fast by learning to ask better questions earlier.
Where is the work? What kind? Where does it break? Those three questions, asked in the first few days, will tell you more about a codebase than a week of file-by-file exploration. They're also the questions that separate engineers who feel confident in new territory from engineers who stay lost longer than they need to.
The signals are there. Most of the time, nobody's told you to look for them.
Aaron Beals is the CTO of Flux, where he assembles high-performing engineering teams and translates business needs into innovative solutions. At Flux, Aaron combines his experience building scalable, secure enterprise-class SaaS apps with cutting-edge AI models to deliver new products to support engineering managers like himself.