CRUFT: A way to quantify technical debt
This is my rephrasing (read: oversimplification) of this article by Ben Rady.
What is CRUFT?
Complexity is a measure of how hard the code is to understand and change. High complexity makes it harder to adapt the software to business needs.
Risks are the unwanted behaviour of the code.
Uses are the wanted behaviour of the code.
Feedback determines how fast we can iterate on changes. This can be through observability (logging/metrics) or user feedback.
Team is the number of people able to effectively support the code. We need people who understand and can maintain the code.
How to use CRUFT?
Most work you do as a software engineer affects the CRUFT factors.
- Adding a new feature to your code base increases the Uses but also the Complexity
- Only implementing the happy path might increase Uses but it also increases Risk
- Onboarding developers increases Team support
Technical debt is just another way to affect the CRUFT factors.
- Adding observability increases Feedback and decreases Risk
- Removing (unused) features reduces the number of Uses your software has, which reduces Complexity and increases Team support
By specifying what the debt is, we can make better decisions about which debt is worth paying off.
Assuming you (can) measure these factors, they can also be used to figure out if paying off the debt was effective.
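To make the before/after comparison concrete, here is a minimal sketch. The `CruftSnapshot` class and the `debt_payoff_effective` helper are hypothetical names, and the "improved without regressions" rule is one possible interpretation of "paying off the debt was effective", not something the article prescribes.

```python
from dataclasses import dataclass


@dataclass
class CruftSnapshot:
    """Point-in-time approximation of the five CRUFT factors.

    Lower is better for complexity and risk; higher is better
    for uses, feedback, and team.
    """
    complexity: float  # e.g. LOC or cyclomatic complexity
    risk: float        # e.g. open bugs weighted by severity
    uses: float        # e.g. number of passing tests
    feedback: float    # e.g. 1 / mean-time-to-detect
    team: float        # e.g. bus factor


def debt_payoff_effective(before: CruftSnapshot, after: CruftSnapshot) -> bool:
    """True if at least one factor improved and none regressed."""
    improvements = [
        after.complexity < before.complexity,
        after.risk < before.risk,
        after.uses > before.uses,
        after.feedback > before.feedback,
        after.team > before.team,
    ]
    regressions = [
        after.complexity > before.complexity,
        after.risk > before.risk,
        after.uses < before.uses,
        after.feedback < before.feedback,
        after.team < before.team,
    ]
    return any(improvements) and not any(regressions)


# Example: a refactor that cut complexity and risk, all else equal.
before = CruftSnapshot(complexity=120, risk=8, uses=50, feedback=0.5, team=2)
after = CruftSnapshot(complexity=95, risk=5, uses=50, feedback=0.5, team=2)
print(debt_payoff_effective(before, after))  # True
```

Taking a snapshot before and after a debt-payoff task turns "was this refactor worth it?" from a gut feeling into a comparison.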
Measuring CRUFT
To make informed engineering decisions, we need to measure CRUFT in a way that reflects the system’s real-world behaviour. While no single metric captures the full picture, approximations can guide improvements over time.
- Complexity: One common proxy is Lines of Code (LOC). Larger codebases tend to be harder to change. However, raw LOC can be misleading, so pairing it with cyclomatic complexity or code churn analysis provides a clearer picture.
- Risk: The number of open issue/bug tickets is one way to measure risk, especially if categorized by severity. Additionally, test coverage gaps, incident frequency, or security vulnerabilities can contribute to risk evaluation.
- Uses: The simplest measure is the number of passing tests, but this has limitations: it only counts behaviour you thought to test, so untested-but-relied-upon behaviour goes unmeasured. Usage analytics or API call counts can complement it.
- Feedback: All feedback is information over time—the key question is how fast you detect and respond to changes. Metrics include MTTD (Mean Time to Detect) for failures and the latency of user-reported issues.
- Team: The Bus Factor (how many developers must leave before the project is in trouble) is a useful heuristic. Additional indicators include onboarding time for new developers and dependency on single points of knowledge.
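Two of these proxies are easy to compute from data most teams already have. This is a minimal sketch, assuming you can export incident start/detect timestamps and a list of commit authors; the "smallest set of authors covering half the commits" rule is just one rough way to approximate the Bus Factor.

```python
from collections import Counter
from datetime import datetime


def mean_time_to_detect(incidents):
    """MTTD in minutes: average gap between failure start and detection.

    `incidents` is a list of (started, detected) datetime pairs.
    """
    gaps = [(detected - started).total_seconds() / 60
            for started, detected in incidents]
    return sum(gaps) / len(gaps)


def bus_factor(commit_authors, threshold=0.5):
    """Smallest number of authors accounting for `threshold` of all
    commits -- a rough proxy for how concentrated knowledge is."""
    counts = Counter(commit_authors)
    total = sum(counts.values())
    covered = 0
    for rank, (_, n) in enumerate(counts.most_common(), start=1):
        covered += n
        if covered / total >= threshold:
            return rank
    return len(counts)


# Two incidents: detected after 30 and 10 minutes.
incidents = [
    (datetime(2024, 1, 1, 10, 0), datetime(2024, 1, 1, 10, 30)),
    (datetime(2024, 1, 2, 9, 0), datetime(2024, 1, 2, 9, 10)),
]
print(mean_time_to_detect(incidents))  # 20.0

# One author owns 8 of 10 commits: knowledge is dangerously concentrated.
authors = ["alice"] * 8 + ["bob", "carol"]
print(bus_factor(authors))  # 1
```

Commit authorship is a crude proxy (reviews and pairing spread knowledge too), but a bus factor of 1 is a clear signal regardless of how precisely you measure it.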