Mastering the Balance: A Guide to Prioritizing Technical Debt in Software Development

Explore the art of balancing technical debt with feature development in our latest guide. Perfect for Engineering Managers, it offers practical strategies for marrying code quality with market fit. A must-read for savvy software development teams!

Photo by Loic Leray / Unsplash

In an earlier article I talked about the importance of a prioritized stack of incoming work. Today I want to talk about what sort of work tends to go onto this stack, and how, as Engineering Managers, we can make sure we don’t starve the necessary technical work in favor of an ever-increasing bundle of features.

No More Triangles 

Photo by Viswanath V Pai / Unsplash

There used to be three disciplines involved in Software Engineering: Dev, PM, and QA. In the late 2000s, the industry moved away from QA as a separate discipline, rolling it in with Dev. For PMs this was a windfall because, at least on paper, the QA engineers were now devs who could do dev tasks, meaning overall team capacity increased and more features could get done.

Things didn’t work out that way, of course, because testing was still necessary, and whatever focus came off QA in the transition translated into more bugs shipped to customers. So PMs got their added features, but at the cost of quality.

So now, in this brave new world, we have engineers and PMs. Put simply, engineers tend to pull toward better code and product quality, while PMs tend to pull toward features and market fit. That’s a simplification, of course, but I think if you look long enough at any team, you will see this separation of concerns at play.

That Pesky Tech Debt 

Photo by Ante Hamersmit / Unsplash

As an Engineering Manager I often talk with devs who would love to prioritize a tech debt reduction initiative. (Let’s call it OE, short for operational excellence.) The challenge is how to justify that resource investment against product-market-fit work. Since many Engineering Managers are ex-developers, they instinctively know a turd in the codebase when they see one, but how do you translate that instinct into a data-based justification that PMs and senior leadership can get behind?

First, let me make a bold statement: if you can’t justify an investment in terms of data, you shouldn’t be making it. It doesn’t matter whether that investment is a feature or OE work. It must be designed to affect a measurable metric, and its effect on that metric must be estimated before the investment is made (“this refactor should bring our median cycle time down from nine days to six,” say). This is the same approach we use to decide which features to invest in: PMs are used to justifying their Big Feature Idea by naming the KPIs it’s meant to move, and their performance is evaluated on how accurately that estimate reflects the post-ship reality. Why should Engineering work be any different?

This isn’t a radical idea. Hopefully everyone understands that data-driven approaches yield better results, and everyone knows KPIs are important. The problem is agreeing on which KPIs matter more than others.

So really, the problem with selling tech debt work isn’t that we don’t know what effect it would have on the product, developer productivity, and so on. The problem is the lack of agreement on how important that effect is to the business. And without that agreement, the investment is a tough sell.

Time for a Meta-Conversation 

Photo by C Dustin / Unsplash

Does your organization have KPIs? Are they prioritized? Does the org value developer productivity, as measured by metrics like cycle time, velocity, time-to-production, and bug tail counts?

If not, consider addressing that first. Have a meeting with your manager and discuss what KPIs you consider important and why. Make the case for why engineering-centered metrics are important. If your manager has an engineering background, this should be easy to do. If not, the explanation might be a bit more of a challenge, but ultimately the point is self-evident: if devs work faster, they produce more work. The issue is that without that understanding being explicitly encoded in your org’s charter, you’re going to have to have that conversation every time a new tech debt investment comes up. 
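If none of those metrics are tracked yet, you don’t need a fancy analytics platform to get started. Here’s a minimal sketch, assuming you can export a “work started” and “work completed” date for each ticket from your tracker (the data shape and numbers are hypothetical):

```python
from datetime import datetime
from statistics import median

# Hypothetical export from your tracker: ("work started", "work completed")
# dates for each ticket. In practice these would come from an API call
# or a CSV export.
tickets = [
    ("2024-03-01", "2024-03-05"),
    ("2024-03-02", "2024-03-12"),
    ("2024-03-04", "2024-03-06"),
]

def days_between(start: str, end: str) -> int:
    fmt = "%Y-%m-%d"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days

# Cycle time: elapsed days from "work started" to "work completed".
cycle_times = [days_between(start, end) for start, end in tickets]
print(f"Median cycle time: {median(cycle_times)} days")
```

Even a crude number like this gives the KPI conversation something concrete to anchor on, and you can refine the measurement later.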

Definition of Ready / Definition of Done 

Another essential tool for selling technical debt investments is an upfront agreement on DoR/DoD criteria. To recap briefly for non-Agilists: the Definition of Ready specifies what information must be present in a work item before it’s accepted onto the team’s to-do queue, and the Definition of Done specifies what conditions must be met before the item counts as complete. Both are trust-builders between the team, its stakeholders, and senior leadership. This applies not only to tech-debt work, but to all work a team takes on.

But for tech debt specifically, the item should spell out which KPIs it’s designed to affect. That greatly simplifies the conversation about why the item should be worked on and what priority it should take relative to the other work on the team’s plate. Likewise, the team and its stakeholders need a clearly defined set of acceptance criteria (usually part of the Definition of Ready), so everyone understands precisely what they’re expecting to get.
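To make that concrete, here’s a hypothetical sketch of the kind of information a DoR-compliant OE story might carry. Every field name and value below is illustrative, not a prescription:

```python
# A hypothetical OE story shaped to satisfy a DoR that requires KPI
# targets and acceptance criteria up front. All names and numbers are
# made up for illustration.
oe_story = {
    "title": "Consolidate the three duplicate payment-retry code paths",
    "kpi": "median cycle time for payment-related stories",
    "baseline": "9 days",
    "target": "6 days within two months of shipping",
    "acceptance_criteria": [
        "All payment flows call the single shared retry module",
        "Existing regression suite passes with no new flaky tests",
        "On-call runbook updated to reference the new module",
    ],
}
```

The exact fields matter less than the habit: the KPI, the baseline, and the expected movement are written down before the work is accepted.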

Keep OE work on your backlog 

Photo by Tim Johnson / Unsplash

One request I constantly make of the devs on my team is to capture their OE initiative ideas as work items. (For most of you, that would be JIRA Stories.) Those stories should be edited to conform to our team’s DoR criteria and should live alongside the feature stories.

Why is this important? Because these stories then come up in triage and planning meetings, giving us a chance to rank them against the feature stories. If the OE work lives primarily in the minds of the devs, no such chances materialize, and the work is far, far more likely to get ignored.  

Having the OE work appear in your stack also gives you and your leadership an at-a-glance view of where the team’s time is going. How many OE stories do you have interspersed among the feature stories? Do you carve out a chunk of time where the team works on nothing but OE, or do you tend to pick off one OE story at a time?

All of these approaches can work, as long as the OE work isn’t being starved.
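That at-a-glance view can even be turned into a number. Here’s a minimal sketch, assuming each backlog item carries a label distinguishing OE from feature work (the labels and the backlog itself are hypothetical):

```python
# Hypothetical backlog export: each item tagged as "oe" or "feature".
backlog = ["feature", "feature", "oe", "feature", "oe", "feature"]

oe_share = backlog.count("oe") / len(backlog)
print(f"OE share of backlog: {oe_share:.0%}")  # 33% in this example
```

Track that share over a few planning cycles and “are we starving OE?” stops being a matter of opinion.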

Conclusion: The EM Interview Question 

When I’ve interviewed for EM roles in the past, I’ve often been asked how I, as an Engineering Manager, balance the team’s feature work against OE necessities. Hopefully, you can see how treating OE work exactly as you would feature work makes that question largely answer itself.

There’s nothing inherently different about tech debt. It’s just work: changes that need to be made to code, development time, regression risk. The same calculus that applies to feature work applies here. And just like feature work, it must be justified in terms other than “our code sucks and we need to make it better.” Yes, it takes extra effort to figure out what effects the proposed changes would have, but believe me, the result is well worth it.