Price Sensitivity – The Missing Dimension

As I consider the different charts, models and diagrams most often used to explain the concept of technical debt, it occurs to me that they are missing an important dimension, which I will call ‘sensitivity.’ In short, does this item/problem/issue matter within the larger context of its (eco)system? This dimension was also missing from the static analysis tools I wrote and used earlier in my career, which severely impacted their usefulness, and therefore their usage.

In particular, I remember deploying FindBugs at Google [1], and the conclusion that the tool could not determine the impact of the defects it found because it had no context. Putting that context into economic terms, what is the price of an item, or perhaps the price of either fixing or ignoring it? We talk about costs and benefits, which to my mind are generally fixed quantities. But the price of an item is variable, and allows us to bring the powerful laws of supply and demand into the picture.


“What’s it worth to you?” I asked myself. When I was facing 100,000 build targets with missing direct dependencies while attempting to disallow transitive build dependencies [2], my price was very low. Once I had automatically fixed all the easy cases, my price rose as the goal got closer. I priced the final few, most difficult fixes very highly, as the value of all the earlier work depended on their timely completion. Once finished, the build system permanently enforced the desired property, essentially making the price infinite from then on. The debt in question had been paid off.

So, I am arguing that we use some analogy of ‘price’ as the measure of what matters in the larger context, and let the ‘market’ set that price for each TD item. Some items will be quite price sensitive, others quite insensitive, to extend the metaphor. This will bring the social aspects of economics into our models, and perhaps cube the quadrants by adding another orthogonal dimension.

[1] Nathaniel Ayewah, William Pugh, J. David Morgenthaler, John Penix, YuQian Zhou. Evaluating Static Analysis Defect Warnings on Production Software. Proc. 7th ACM SIGPLAN-SIGSOFT Workshop on Program Analysis for Software Tools and Engineering, ACM Press, New York, NY, USA (2007).

[2] J. David Morgenthaler, Misha Gridnev, Raluca Sauciuc, Sanjay Bhansali. Searching for Build Debt: Experiences Managing Technical Debt at Google. Proc. Third International Workshop on Managing Technical Debt, IEEE (2012).

TD Indexes – What is missing?

We experimented with different tools that provide some kind of Technical Debt index, tried to identify and analyze their main features, and observed what is missing. Below I outline some observations that Marco Zanoni and I made on these issues.

Different measures have been proposed to estimate Technical Debt by analyzing the artifacts contained in a project. The most recurring characteristics of the proposed measures are:

  • localized hints or issues, with an assigned score;
  • scores are then aggregated by following the containment relationships existing in the software, e.g., modules, and the aggregation operator is usually the sum;
  • scores are assigned to the single issues a priori and arbitrarily; default scores are setup by relying on the experience of tools’ developers and may be customized to adapt to the specific project’s needs.

Most hints contributing to the measure fall into these categories:
1. coding issues regarding violations of best practices;
2. metric values and violations of defined thresholds;
3. detection of more structured issues, e.g., architectural/code smells.
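As a concrete (and deliberately simplified) illustration, the additive model described above can be sketched as follows; the issue categories, scores, and module names are invented for illustration, not taken from any real tool:

```python
# Minimal sketch of the additive TD-index model: localized issues get
# a-priori scores, aggregated by sum along the containment hierarchy.
from dataclasses import dataclass, field

SCORES = {                      # a-priori scores (hours), fixed arbitrarily
    "coding_issue": 0.5,        # violation of a best practice
    "metric_violation": 2.0,    # e.g., complexity over a threshold
    "code_smell": 8.0,          # a detected architectural/code smell
}

@dataclass
class Module:
    name: str
    issues: list = field(default_factory=list)    # issue category names
    children: list = field(default_factory=list)  # contained sub-modules

def td_index(module: Module) -> float:
    """Sum the module's own issue scores plus those of all contained modules."""
    own = sum(SCORES[i] for i in module.issues)
    return own + sum(td_index(c) for c in module.children)

app = Module("app", issues=["code_smell"], children=[
    Module("ui", issues=["coding_issue", "coding_issue"]),
    Module("db", issues=["metric_violation"]),
])
print(td_index(app))  # 11.0 = 8.0 + (0.5 + 0.5) + 2.0
```

The sum operator is what makes the model easy to extend: new issue kinds only need a score entry.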

This approach is surely reasonable and motivated by practical feasibility, scalability (new knowledge about issues can be coded and added to the additive model), and manageability of the analysis (as pointed out by Jean Louis Letouzey in “Questions that are quite systematically raised about TD”).
The final goal of TD indexes is to enable the **Management** of Technical Debt, i.e., to allow developers (we use “developers” to mean all the people designing/developing/operating the software at any level of abstraction) to understand that a choice (conscious or not) has consequences. These consequences are usually non-functional, and can affect both developers (e.g., ease of maintenance and evolution) and users (e.g., performance, security). Given the knowledge about the **risk** derived from each choice, developers should also know how many resources are needed to transform their software in a way that removes or mitigates the risk. So, there are the two widely recognized aspects of the problem: the cost of keeping the system as it is, and the cost of fixing it.

Do the current TD indexes allow estimating these aspects?

Since measures are implemented by summing scores that are assigned to each issue recognized by the analysis tool, the precision of this estimation is tied to two factors:
1. the precision of the single scores
2. the appropriateness of the aggregation model

1. As for the precision of the single scores, in all the models we know of, scores are arbitrarily fixed. They are fixed by experts, but they are fixed. Depending on the index definition, scores represent the cost/time of fixing the issue or its relative contribution (a penalty, usually) to the overall quality of the system. Both aspects lack empirical evidence, as do other details, like the thresholds applied to metrics when detecting, e.g., size or complexity violations. A sounder result would be obtained, in our opinion, if the maintenance costs and the impact on the quality of the system could be fitted from empirical data, and customized per domain/technology/organization. This would allow choosing which issues are relevant and which are not on a statistical basis, and obtaining an estimation, e.g., of their *actual* cost of fixing, or quantifying the existing relations with maintenance times or defects.
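A minimal sketch of what such empirical fitting could look like, assuming (hypothetically) that past maintenance tasks were recorded with the counts of issues they touched and the effort they took; the data and the least-squares choice are ours, not part of any existing TD index:

```python
# Fit per-issue costs from (invented) maintenance history instead of
# fixing them a priori: regress observed effort on issue counts.
import numpy as np

# Rows: past maintenance tasks; columns: counts of issue kinds involved
# (coding issue, metric violation, code smell). All data are invented.
X = np.array([[4, 1, 0],
              [2, 0, 1],
              [0, 3, 2],
              [5, 2, 1],
              [1, 1, 3]], dtype=float)
hours = np.array([6.0, 9.5, 22.0, 14.5, 27.0])  # observed effort per task

# Least-squares estimate of the per-issue costs; these fitted values
# would replace the arbitrary a-priori scores of the additive model.
coef, *_ = np.linalg.lstsq(X, hours, rcond=None)
print(np.round(coef, 2))
```

With real project history, the same regression could also be stratified per domain, technology, or organization, as argued above.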

2. As for the appropriateness of the aggregation model, when trying to estimate the cost of fixing a certain set of TD issues, one should consider that any change applied to a system has consequences that are not obvious. Software systems are structured as complex graphs, where each single change impacts every neighboring node recursively, both at design time and at runtime. In this context, aggregation by sum is simplistic. Especially when dealing with design/architecture-level issues, fixing one issue may remove an entire set of related issues, or generate new ones. An ideal MTD tool should be able to understand these inter-relations and exploit them to suggest the sets of fixes that maximize the obtained quality, by following the structure of the system rather than a superimposed estimation model driven essentially by quality aspects. Quality aspects are relevant, but should be used to understand, at coarse granularity, which parts of a system should receive more attention.
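To make the point concrete, here is a toy sketch (issue names, costs, and the resolution relation are all invented) of how sum aggregation can overestimate the fixing cost compared with an estimate that follows issue inter-relations:

```python
# Cost to fix each issue in isolation (invented figures).
fix_cost = {"god_class": 10, "dup_code_a": 2, "dup_code_b": 2, "long_method": 3}

# resolves[x] = issues that disappear for free when x is fixed,
# e.g., splitting the god class also removes a duplication and a long method.
resolves = {"god_class": {"dup_code_a", "long_method"}}

def naive_total(issues):
    """The simplistic aggregation: plain sum of individual fix costs."""
    return sum(fix_cost[i] for i in issues)

def structure_aware_total(issues):
    """Greedy: pay for an issue only if an earlier fix did not remove it."""
    remaining, total = set(issues), 0
    while remaining:
        # prefer the fix that resolves the most still-open issues
        i = max(remaining, key=lambda x: len(resolves.get(x, set()) & remaining))
        total += fix_cost[i]
        remaining -= {i} | resolves.get(i, set())
    return total

all_issues = set(fix_cost)
print(naive_total(all_issues))            # 17: sum over all issues
print(structure_aware_total(all_issues))  # 12: the god-class fix pays for three
```

A real MTD tool would derive the `resolves` relation from the system's actual structure; here it is hard-coded purely to show the gap between the two estimates.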

Moreover, with respect to point 1, some issues may be underestimated due to their rarity in a project's history. This is where another, complementary approach could be used, one that cares about rare (but not negligible) issues that can potentially lead to out-of-scale risks. We can borrow this approach from security analysis, where a very small vulnerability can disclose extremely important information. People working on these issues do not think in terms of tradeoffs, but try to follow practices that are proven and *do not allow* certain bad things to happen. When dealing with security, a lot of effort is spent to be sure that, e.g., a user cannot enter a system without successful authentication. Failure there would have extremely bad consequences, whose costs are exponential with respect to the effort spent in avoiding the issue.

This complementary approach consists of collecting the issues that have (even anecdotally) generated “extreme” external consequences, such as system shutdown, data loss, or project failure, and (when they are detectable) removing them from the project with maximum priority. Do not even associate a score with them, because their effect is out of scale. If you multiply the chance of suffering this risk by the cost it carries, the result is probably high anyway; the real problem is that, if it happens, no one would be able to pay that cost. This goes beyond the usual debt/interest metaphor, and resembles more how “black swans” behave.

**A possible research agenda**

What we would like to see and work on in future research is:

  • estimating the relative relevance and/or absolute time/costs associated to hints/issues detectable by software analysis tools, with the aim of providing a TD estimation index with an **empirical base**;
  • collecting evidence regarding the root causes of known large-scale failures in both operations and development, with the aim of generating a blacklist of the issues to absolutely avoid in any project;
  • exploring the existing structural and statistical inter-relations among different TD issues;
  • generating alternative estimation models that rely on the structure of a software, and that allow the simulation of changes to estimate with higher precision the effort needed to implement fixes and their consequences.

The Agile Alliance Technical Debt Initiative: First deliverables

A little more than one year ago, I got the opportunity to lead an Agile Alliance program focused on Technical Debt. The purpose of the program (also called the Initiative) is to provide practical recommendations and answers to questions like:
• What is Technical Debt?
• How do I make informed decisions on when and how to address it?
• How can I start to manage Technical Debt?

We formed a group of six people and had a kick-off meeting in May 2015 to organize our work around different objectives. Since then, we have collaborated mostly by Hangouts, as we are located in different countries on both sides of the ocean. We also held a workshop in July 2015 in Washington DC, to present and discuss our first results and share ideas and thoughts.

Some of the first results (available here) of our work have just been published on the Agile Alliance website. These are:
• An introduction to the concept of Technical Debt
• A set of recommendations: Project Management and Technical Debt
• The Agile Alliance Debt Analysis Model (A2DAM)

The last item is a very basic set of code-related rules, which should be the minimum to comply with in order to produce “right code”. This is not a large inventory of best practices or requirements to comply with. On the contrary, it is a short and simple “ground level” set of good practices to help teams get started. All the rules are characterized by fields like impact, remediation model, etc.

An initial version was built by the AA Program Team. This initial list was sent for review to experts in 16 companies developing static code analysis tools. Eleven of them provided feedback about the relevance and clarity of the practices, the associated thresholds, and their tool implementation. Their feedback was taken into account in delivering the final version.
Now that this list is public, we welcome all kinds of feedback (in order to improve and enhance it on a regular basis), and we hope that it will help the community to better manage technical debt.

These are our first deliverables. There will be more this year.

During my consulting engagements within…

During my consulting engagements within large organizations, I meet senior managers and exchange with them on the topic of technical debt. I would like to share with you:
• The questions that are quite systematically raised by my contacts
• My current answers to their questions.
• My suggestions and proposals about what needs to be done to make progress.

Question 3:
What can I do with all the Technical Debt accumulated in my legacy applications?

My current answer:
As you do not have the entire budget necessary to pay off your debt, you have to compromise. I propose an approach based on the use of two types of data for each application:

  • Two external data points: the estimated relative added-value level and the estimated annual maintenance charge.
  • Two internal data points: the amount of technical debt and interest, as measured by a tool that implements the SQALE method.

The principle is simple. If an application has a low added value and has no maintenance activity, then its technical debt has no impact and improving its debt is not necessary.
If an application provides high added value and has high maintenance activity, then its technical debt should be very low. Such applications must be given priority in the allocation of the improvement budget. This approach has been applied successfully for some time by very large organizations. I have explained it in more detail in a recent article published in the Cutter IT Journal.
Here is a copy (please don’t distribute or copy, for copyright reasons) of this article.
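The decision principle just described can be sketched as a simple two-by-two classification; the function name and the coarse low/high levels below are our illustration, not part of the SQALE method itself:

```python
# Sketch of the two-dimensional prioritization: external business data
# (added value, maintenance activity) decide whether the measured debt matters.
def improvement_priority(added_value: str, maintenance: str) -> str:
    """Both inputs are 'low' or 'high', estimated per application."""
    if added_value == "high" and maintenance == "high":
        return "priority for improvement budget"  # debt here must be very low
    if added_value == "low" and maintenance == "low":
        return "no improvement needed"            # debt has no impact
    return "assess case by case"

print(improvement_priority("high", "high"))  # priority for improvement budget
print(improvement_priority("low", "low"))    # no improvement needed
```

In practice, the two inputs would come from business estimates and maintenance records, while the debt and interest themselves are measured by the SQALE tooling.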

I will be happy to present (and demonstrate) this approach in more detail during our seminar.

Defining the financial aspects of technical debt

The concept of technical debt is closely related to the financial domain, not only due to the metaphor that bonds it with financial debt, but also because technical debt represents money. On the one hand, it represents money saved by developing at lower quality, or money earned by delivering the product on time; on the other hand, it represents money spent when applying a refactoring.

As a result, financial terms are broadly used in TD literature. In order to work towards a framework for managing technical debt, we have attempted to organize a glossary of the most common financial terms that are used in the state of the art. The glossary presents these terms and a definition for each one. The definitions are a result of synthesizing the way that the terms are used in literature and in some cases they reflect our understanding of how these notions could prove beneficial for technical debt management.



Next, we illustrate our view on how financial terms are used in the technical debt literature by employing three explanatory figures. In Figure 1, we assume a software system composed of 7 artifacts (e.g., software components). Artifacts 2 and 3 have been developed at the desired level of design-time quality, whereas in all other artifacts several compromises have been made. This difference in quality is shown by the distance δquality. The immature software artifacts are named technical debt items, or liabilities. While developing the aforementioned artifacts, the development team spent less effort than was required in the optimal case. The effort required to address this difference in the levels of quality is termed the principal of the TD. The principal can be considered an asset for the company, since it can be used as financial leverage to invest in other activities, such as the development of by-products or decreased time to market. The earnings from these activities divided by the principal represent the return on investment (ROI) of investing the principal.


Figure 1

In Figure 2, we suppose that the same system, after its deployment, requires the addition of a feature. For simplicity, we assume that this feature is global and that the same effort needs to be spent on every artifact. However, in the artifacts where technical debt has accumulated (TD items), additional effort (δeffort) is required because of their deteriorated design-time quality (e.g., low maintainability, or incomplete documentation that leads to low understandability). This additional effort is the interest that the development team has to pay, due to the accumulated amount of debt. If the interest becomes so high that maintenance is not financially feasible or beneficial, the project becomes bankrupt.


Figure 2

In Figure 3, we consider the evolution of a system with accumulated technical debt. The two series in the line chart represent the evolution of the system without any repayment activity (red line) and with one repayment activity performed at revision-4 (blue line). We suppose that the system starts with an amount of debt that increases over time (the interest rate is represented by the slope of each line, e.g., θ1 and θ2). The interest rate is not presented as stable but as floating, since the slope of the lines increases over time. This increase is expected, since the design-time quality of decayed projects deteriorates more quickly than the quality of better-designed products (the poor getting poorer). Therefore, the TD risk for low design-time-quality products is higher than that of high design-time-quality products. We assume that the interest rate increases due to the increased difficulty of resolving existing problems, which in turn is caused by the gradual increase in size and functionality. Due to the repayment actions at revision-4, some artifacts’ design-time quality is increased, i.e., technical debt is decreased (δTD1). The effort spent during this repayment activity is the value of the repayment. Furthermore, because of the floating interest rate, we can see that at revision-7 the value of the repayment increases, as illustrated by the distance between the two lines (δTD2 > δTD1). The value of the repayment performed at revision-4 is named the future value of the repayment at the timestamp of revision-7. Assuming that the present time is revision-2 and that a future repayment activity (e.g., the one performed at revision-4) should be valuated at present conditions, one can assess at revision-2 the present value of the repayment to be performed at revision-4. Finally, accepting that software value is related to product quality (de Groot et al.) can lead us to the conclusion that the enhancement of design-time quality achieved through the repayment at revision-4 represents the value added of TD repayment.


Figure 3
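The relation between the present and future value of a repayment used in the discussion of Figure 3 follows the standard discounting formula; the per-revision rates r_k below are an assumption we introduce to express the floating rate:

```latex
% Future value at revision n of a repayment valued PV at the present revision,
% under (possibly floating) per-revision interest rates r_1, ..., r_n:
FV_n = PV \cdot \prod_{k=1}^{n} (1 + r_k)
\qquad\Longleftrightarrow\qquad
PV = \frac{FV_n}{\prod_{k=1}^{n} (1 + r_k)}
```

With a constant rate, the product collapses to the familiar (1 + r)^n factor.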

The question that arises, and has still to be answered, is whether and how the metaphor and the aforementioned financial terminology can assist the technical debt community in building a concrete conceptual model for technical debt and, eventually, in defining methodologies for effectively managing TD.


  • A. Ampatzoglou, A. Ampatzoglou, A. Chatzigeorgiou, P. Avgeriou, “The financial aspect of managing technical debt: A systematic literature review”. Information & Software Technology 64: 52-73 (2015)
  • J. de Groot, A. Nugroho, T. Back, and J. Visser, “What is the value of your software?”, 3rd International Workshop on Managing Technical Debt (MTD ‘12), IEEE Computer Society, pp. 37 – 44, (2012)


Questions that are quite systematically raised about TD – (cont.)

During my consulting engagements within large organizations, I meet senior managers and exchange with them on the topic of technical debt. I would like to share with you:
• The questions that are quite systematically raised by my contacts
• My current answers to their questions.
• My suggestions and proposals about what needs to be done to make progress.

Question 2:
What’s the definition of technical debt?

My current answer:
I don’t have a definition of technical debt, because it is a metaphor. You don’t define a metaphor; you just use it (if you find it useful).
The metaphor was introduced by Ward Cunningham, one of the authors of the Agile Manifesto.
Ward first used it while developing a financial application in Smalltalk. He wanted to justify to his boss the refactoring they were doing, so he used a financial analogy.
If you want to know more about the metaphor, you can:
• Look on the web for some videos, interviews of Ward on the topic.
From what I know, Ward never provided a precise “definition” of his metaphor. I personally think that this is a fully deliberate position.
• Look for a recent document on the Agile Alliance web site (here). It is an introduction (not a definition) to the concept, which has been reviewed and completed by Ward.

What needs to be done to make progress about the topic raised by this question?
I personally think that experts and researchers should stop providing definitions of technical debt. Their posts and articles provide added value when they share experience, studies, recommendations, etc., on the topic. Providing or referencing a personal definition of technical debt is not fully ethical.
Let me explain my position.
As a comparison, Tom McCabe introduced Cyclomatic Complexity in 1976. After that, nobody tried to give a personal, improved definition of the concept introduced (and thus owned) by Tom. The research community focused on studying the concept, finding correlations between the measure and external quality aspects of software, etc. Of course, he got feedback and comments on his concept and its definition. In 1996, Tom published a revised definition of his measure.
The Technical debt metaphor has been “invented” and is “owned” by Ward. If there is one day a definition of the concept it should come from Ward.

There is still plenty of room for work on the concept. The community can bring added value by identifying and defining other complementary concepts like technical debt items, obsolescence, and other IT debts, and by providing methods, practices, recommendations, tools, etc., for managing technical debt.

Community-Based Repository of Tools to Support Empirical Research

As with much of the rest of software engineering, current state-of-the-art research in the area of architectural technical debt, and in software architecture more broadly, is impeded by a myriad of disjoint research and development environments. The resulting “one-off” solutions inhibit further advances, and make it difficult to systematically synthesize novel research techniques on top of existing ones and to cross-validate those techniques. As a result, researchers and practitioners needing to build cutting-edge architecture-based tools must often create their building blocks (e.g., software components and frameworks) from scratch. In doing so, they tend to unnecessarily repeat each other’s efforts and even to revert to solutions that have already been tried and established as ineffective.

We have identified the following five key challenges faced by the software engineering community when conducting research in the area of technical debt as well as software architecture:

1 – MTD Tool Accessibility and Reusability of Research Techniques. Implementations of research techniques and tools are often not easily accessible, are unavailable or defective, or are no longer supported by their original creators. Even for tools that do work, it is common for them not to operate as advertised, resulting in major effort required to adapt these tools for further MTD research.

2 – Lack of Benchmarks and Datasets. Access to and construction of public artifacts, case studies, and benchmark datasets are challenges shared across the field of software engineering. For research in the area of technical debt, these challenges are particularly pronounced because many of the factors contributing to technical debt (e.g., design decisions) tend to be undocumented. For instance, software architecture artifacts embody significant amounts of expert knowledge. In practice, developing such artifacts is expensive, organizations are reluctant to share them, and monopolizing architectural knowledge may create a perception of job security, further disincentivizing the construction and maintenance of such artifacts. As a result, the research community often relies on small datasets, lacking the domain- or application-specific knowledge needed to accurately reconstruct most factors contributing to or impacting technical debt. It is thus necessary to develop an instrument that can aid in the generation, storage, and sharing of such artifacts by reverse engineering existing large-scale open-source systems. In earlier work, we examined the practicality of using various reverse-engineering approaches to obtain ground-truth architectures to partly address this challenge.

3 – Interoperability of Tools. Technical Debt research is hampered by distributed research environments and stove-piped solutions emerging from different research groups. This, in turn, inhibits research advances, makes it difficult to synthesize techniques and tools in new and exciting ways, and complicates comparisons of research solutions. Researchers and practitioners in need of cutting-edge technical debt analysis must often recreate tools or their major elements, including basic code analysis, reverse-engineering functions, and frameworks. Furthermore, different assumptions that these tools make (e.g., about the execution environments, formats used, implementation languages, etc.) prevent their combined use, further inhibiting breakthroughs.

4 – Reproducibility of Experiments and Analyses. Due to inaccessible, non-reusable, or defective tools, datasets, and case studies, and incompatible underlying tool assumptions, it is difficult to reproduce the results of many previous software architecture-oriented and technical debt research studies. In software engineering research, reproducibility is often rendered too difficult or impossible, even for studies designed to be repeatable.

5 – Technology Transfer. Despite the fact that practitioners understand, appreciate, and emphasize the criticality of software architecture, as well as of managing technical debt, to the success of software systems, technology transfer in this area is hindered by the fact that most prototype tools are not sufficiently mature to support production-level or industrial usage. An overwhelming majority of software-engineering research groups lack the resources (e.g., personnel and hardware) needed to build tools that are robust and scalable enough to be easily and effectively used by other researchers, let alone by industry-grade software projects.

I would like to acknowledge my collaborators on this project, Nenad Medvidovic, Sam Malek, and Josh Garcia, for their contributions in summarizing the community-based challenges and formulating solutions.

Questions that are quite systematically raised about TD

During my consulting engagements within large organizations, I meet senior managers and exchange with them on the topic of technical debt. I would like to share with you:
• The questions that are quite systematically raised by my contacts
• My current answers to their questions.
• My suggestions and proposals about what needs to be done to make progress.

Question 1:
Is technical debt just a new fad that will pass in a few years?

My Current answer:
No, the concept of technical debt is a true paradigm shift. Once you have adopted this measurement concept, you won’t go back to traditional measurement systems for code quality. This is due to at least two reasons that are purely mathematical.
The first reason is that technical debt is measured on a ratio scale (principal and interest are unbounded numbers; a file with 30 days of debt has 3 times more debt than a file with 10 days of debt).
Almost all the code quality measurement systems that the community has proposed during the last 30 years produced measures and indexes on limited intervals like [1 to 5], [0 to 10], or [0 to 100] (I remember the MI3 and MI4 maintainability indices). Such measurement systems have representation issues and should be replaced by ratio-scale measures. In fact, if we leave the software world and look at the measures we have used every day for centuries (weight, distance, area, …), they are all ratio-scale measures.
The second reason is that technical debt is aggregated by addition. This is the only aggregation rule compatible with a representative measurement system (which means that, if the analysis tools are accurate, the system will not produce false positives when aggregating file-level measures to build a module- or application-level measure).
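These two properties can be illustrated with a small sketch; the file names and debt figures are invented:

```python
# Debt measured in days per file (invented figures).
file_debt_days = {"parser.c": 30.0, "lexer.c": 10.0, "main.c": 5.0}

# (1) Ratio scale: a true zero makes ratio statements meaningful.
assert file_debt_days["parser.c"] / file_debt_days["lexer.c"] == 3.0

# (2) Additive aggregation: file -> module -> application, with no
#     information loss along the way.
module_debt = sum(file_debt_days.values())
print(module_debt)  # 45.0

# By contrast, a bounded index (say, a [0, 100] quality score) cannot be
# summed: 80 + 80 = 160 falls outside the scale, so tools typically
# average instead, which loses the ratio property argued for above.
```

The same addition applies unchanged at every level of the containment hierarchy, which is exactly what bounded interval scores cannot offer.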
These are quite short explanations of these two reasons. If you want more details, I invite you to read an article I published in 2010, which covers in more detail the measurement theory applied to code measures: the Valid-2010 conference paper, downloadable here.

What needs to be done to make progress about the topic raised by this question?
I personally think that this paradigm shift is a huge step forward, a major step toward the systematic measurement of code quality. I think that the TD expert community does not communicate enough about this major breakthrough achieved by the concept of technical debt.

Technical Debt Conceptual Model

We, the technical debt research community, agree that a common conceptual model of technical debt, which we collectively improve and validate, would increase the pace of technical debt research. Therefore, as organizers, we felt it was important to tease this apart together during the workshop. Early conceptual models offered by Martin Fowler (the debt quadrants) and Steve McConnell (intentional versus unintentional debt) provided useful starting points, but they do not suffice to guide answering the hard questions of eliciting, quantifying, and reducing debt, and of transitioning validated, easy-to-adopt practices to developers.

Different technical debt enthusiasts refer to this semantic model in different ways: “technical debt framework”, “technical debt landscape”, “conceptual model”, “empirical model”, “financial model”, “quality model”, “measurement model”. The concepts discussed in these models are not consistent either. Is design debt the same as architectural debt? If defects are not technical debt, what are postponed defects? Does the principal of the debt map to all code quality violations? Does the principal change? What are the attributes of interest?

The underlying goal of all these models is common: to guide the definition of technical debt concepts and the creation of methods to control the inputs and outputs for managing it. Several blog posts here already refer to the conceptual model. In addition, there are several papers already published that can help shape a strawman conceptual model of technical debt. We compiled a reading list to help us all prepare for the workshop sessions in which we will discuss the conceptual model.

We believe that a baseline model will help the technical debt community make collective progress, rather than coming up with yet another model variation. The reading list is meant to be representative rather than all-inclusive. If we have skipped a fundamental work that should be included, please comment and we will add it.

All the papers referred to are here: Ipek TD papers (in a zip file).

Systematic literature reviews and technical debt landscape

Chen Yang, Peng Liang, Paris Avgeriou:
A systematic mapping study on the combination of software architecture and agile development. Journal of Systems and Software 111: 157-184 (2016)

Areti Ampatzoglou, Apostolos Ampatzoglou, Alexander Chatzigeorgiou, Paris Avgeriou:
The financial aspect of managing technical debt: A systematic literature review. Information & Software Technology 64: 52-73 (2015)

Zengyang Li, Paris Avgeriou, Peng Liang:
A systematic mapping study on technical debt and its management. Journal of Systems and Software 101: 193-220 (2015)

Edith Tom, Aybüke Aurum, and Richard Vidgen. 2013. An exploration of technical debt. J. Syst. Softw. 86, 6 (June 2013), 1498-1516.

Nicolli S. R. Alves, Thiago Souto Mendes, Manoel Gomes de Mendonça Neto, Rodrigo O. Spínola, Forrest Shull, Carolyn B. Seaman:
Identification and management of technical debt: A systematic mapping study. Information & Software Technology 70: 100-121 (2016)

Clemente Izurieta, Antonio Vetro, Nico Zazworka, Yuanfang Cai, Carolyn B. Seaman, Forrest Shull:
Organizing the technical debt landscape. MTD@ICSE 2012: 23-26

Philippe Kruchten, Robert L. Nord, Ipek Ozkaya:
Technical Debt: From Metaphor to Theory and Practice. IEEE Software 29(6): 18-21 (2012)

Comparative studies on debt identification:

Nico Zazworka, Antonio Vetro, Clemente Izurieta, Sunny Wong, Yuanfang Cai, Carolyn B. Seaman, Forrest Shull: Comparing four approaches for technical debt identification. Software Quality Journal 22(3): 403-426 (2014)

Griffith I., Reimanis D., Izurieta C., Codabux Z., Deo A., Williams B., “The Correspondence between Software Quality Models and Technical Debt Estimation Approaches,” IEEE ACM MTD 2014 6th International Workshop on Managing Technical Debt. In association with the 30th International Conference on Software Maintenance and Evolution, ICSME, Victoria, British Columbia, Canada, September 30, 2014.

Case studies

Griffith I., Izurieta C., Taffahi H., Claudio D., “A Simulation Study of Practical Methods for Technical Debt Management in Agile Software Development,” Winter Simulation Conference WSC 2014, Savannah, GA, December 7-10, 2014.

Antonio Martini, Lars Pareto, Jan Bosch:
A multiple case study on the inter-group interaction speed in large, embedded software companies employing agile. Journal of Software: Evolution and Process 28(1): 4-26 (2016)

Ariadi Nugroho, Joost Visser, and Tobias Kuipers. 2011. An empirical model of technical debt and interest. In Proceedings of the 2nd Workshop on Managing Technical Debt (MTD ’11). ACM, New York, NY, USA, 1-8.

On the Interplay of Technical Debt and Legacy

For each instance of technical debt, the identification, the assessment, and the optimal governance route between short- and long-term yields are unique [1]. Commonalities do, however, exist. One of these is the way in which technical debt is accumulated in a project. McConnell [2] identified intended (i.e., strategic) and unintended (i.e., accidental) accumulation, which describe two variations of the immediate situation in which technical debt is accumulated. Arguably, however, there is also a third way: delayed accumulation.

All software products are static: after they have been developed, and before they are developed again, they remain in exactly the same state (formalized for technical debt by Schmid in [3]). The environment around them, however, is dynamic. Technologies, people, organizational structures, and processes change, and all of these (and many others) link back to the static software product. Consider, as an explicit example, continued updates to a technology used to implement a software product. The product, for which development has stopped, no longer conforms to the latest version of the technology; it becomes detached from the environment's assumptions, which are no longer delivered to it via the technology's updates. Hence, when development of this software product resumes, we find that it has accumulated technical debt in a delayed fashion, as current assumptions no longer apply to it.
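The static-product, dynamic-environment dynamic can be made concrete with a toy model: the product is frozen at the dependency versions it shipped with, the environment's current versions keep advancing, and the delayed debt is the accumulated gap that is discovered only when development resumes. This is purely illustrative; the package names and version numbers below are invented, not drawn from the survey.

```python
# Toy model of delayed technical debt accumulation: a static product
# drifts away from a dynamic environment between development periods.
# All package names and (major) version numbers are illustrative.

def version_gap(frozen: dict, current: dict) -> dict:
    """How many major versions behind each dependency is when
    development resumes; dependencies still current are omitted."""
    return {pkg: current[pkg] - ver
            for pkg, ver in frozen.items()
            if current.get(pkg, ver) > ver}

# Versions the product was built against, then frozen (development stopped).
product = {"web-framework": 2, "orm": 1, "crypto-lib": 3}

# The environment kept moving while the product stayed static.
environment = {"web-framework": 4, "orm": 1, "crypto-lib": 5}

print(version_gap(product, environment))
# {'web-framework': 2, 'crypto-lib': 2}
```

Nothing about the product changed while it sat still, yet it now carries a measurable gap; that gap is the delayed accumulation the text describes.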

From the management perspective, there is a considerable difference between immediate and delayed accumulation. Immediate accumulation is driven mainly by matters that reside within the producing organization and its project. We may look into altering strategies and implementing new processes to manage immediate, intended technical debt. Management of immediate, unintended debt is often more indirect: implementation and design quality issues often arise from practitioners having communication problems or being unaware of all applicable best practices. In these scenarios, exercises to enhance social cohesion and focused training, respectively, can be applied.

As the previous description of delayed technical debt accumulation suggests, its management is not limited to the producing organization. Rather, the whole environment affects it, and the environment cannot be subjected to management; we must accept that it will cause problems in the future. In the fault management domain, efforts in this area fall under fault tolerance. In the software development and maintenance domain, arguably, this is very close to legacy software management.

Legacy software has a variety of definitions, but the term generally captures software artifacts that cannot be subjected to the same maintenance and management efforts as newly created artifacts [4]. In practice, these are often implementation artifacts that are old, undocumented, and/or untested, and for which the original developer is no longer available: either a new team has taken over in the organization, or the implementation has been acquired from elsewhere. If we consider delayed technical debt accumulation to capture the suboptimalities that emerge as the environment progresses around a static software product, we could argue that legacy software is a very close match for it.

In an effort to shed light on the accumulation and composition of technical debt that software organizations face today, and, especially, to probe further the close relation between delayed technical debt accumulation and legacy software, we conducted a practitioner survey. The survey was administered as a web-based questionnaire in Brazil, Finland, and New Zealand. We captured a total of 184 responses from a diverse set of respondents using both agile and traditional development methods, with the practitioners assuming roles ranging from developers to managers and client representatives. We have discussed the results of the Finnish survey in more detail before [5], while a forthcoming article reviews the multi-national results. The multi-national set captured 69 descriptions of concrete technical debt instances. Let us look at the distribution of the instances' origins.


Figure 1: Origins of technical debt instances (N=78 as multiple origins were indicated for some instances)

We see from Figure 1 that over 75% of the captured technical debt instances indicated origins in software legacy. Whilst acknowledging that a number of limitations affect, for example, the generalizability of the results, we would like to use this distribution as a basis for discussing the interplay of technical debt and legacy further.

It is evident that there is a strong connection between technical debt and legacy, as most technical debt instances are affected by the latter. Hence, technical debt management could benefit from integrating legacy software management procedures, as that field has a very established status. However, there are matters that should be explored when legacy software methods are integrated into technical debt management. First, is legacy software, given the close similarity, only a component of delayed technical debt accumulation? Arguably not, as the overall current state of the software product, to which legacy can be counted, has an effect on technical debt accumulation and management [6].

Second, legacy software is generally a negative term for “derelict code,” while technical debt implies pursuing asset management of suboptimalities in various software artifacts. Reviewing Figure 1, one sees a potential danger of legacy being rebranded under the more favorable technical debt concept. The asset management possibilities remain unexplored if legacy is not diligently converted into technical debt instances that enable full management. Given the unobtrusive nature of legacy software, this is not an easy task, and technical debt instances with varying levels of accuracy are bound to deteriorate technical debt management efforts overall.

Either way, as per the limited view provided by our survey, legacy is a very close companion of technical debt, and we should pursue narrowing the gap between these fields. While total control over technical debt is extremely challenging to attain, the recognition that technical debt management is key to sustainable and efficient software development should still motivate us to pursue it.

[1] N. Brown, Y. Cai, Y. Guo, R. Kazman, M. Kim, P. Kruchten, E. Lim, A. MacCormack, R. Nord, I. Ozkaya et al., “Managing technical debt in software-reliant systems,” in Proceedings of the FSE/SDP Workshop on Future of Software Engineering Research. ACM, 2010, pp. 47–52.
[2] S. McConnell, “Technical debt,” 10x Software Development Blog, Construx Conversations, Nov. 2007. URL: http://blogs.construx.com/blogs/stevemcc/archive/2007/11/01/technical-debt-2.aspx
[3] K. Schmid, “A formal approach to technical debt decision making,” in Proceedings of the 9th International ACM SIGSOFT Conference on Quality of Software Architectures. ACM, 2013, pp. 153–162.
[4] M. Feathers, Working effectively with legacy code. Prentice Hall, 2004.
[5] J. Holvitie, V. Leppänen, and S. Hyrynsalmi, “Technical debt and the effect of agile software development practices on it: an industry practitioner survey,” in Sixth International Workshop on Managing Technical Debt. IEEE, 2014, pp. 35–42.
[6] A. Nugroho, J. Visser, and T. Kuipers, “An empirical model of technical debt and interest,” in Proceedings of the 2nd Workshop on Managing Technical Debt. ACM, 2011, pp. 1–8.