Remediation strategies with SQALE

Here are the slides presented during my lightning talk:
Remediations in SQALE-Dagstuhl

The Agile Alliance Technical Debt Initiative: First deliverables

A little more than one year ago, I got the opportunity to lead an Agile Alliance program focused on Technical Debt. The purpose of the program (also called an Initiative) is to provide practical recommendations and answers to questions like:
• What is Technical Debt?
• How do I make informed decisions on when and how to address it?
• How can I start to manage Technical Debt?

We formed a group of six people and had a kick-off meeting in May 2015 to organize our work around different objectives. Then we collaborated mostly via Hangouts, as we are located in different countries on both sides of the ocean. We also held a workshop in July 2015 in Washington DC to present and discuss our first results and share ideas and thoughts.

Some of the first results of our work (available here) have just been published on the Agile Alliance website. They are:
• An introduction to the concept of Technical Debt
• A set of recommendations: Project Management and Technical Debt
• The Agile Alliance Debt Analysis Model (A2DAM)

The last item is a very basic set of code-related rules which should be the minimum to comply with in order to produce “right code”. It is not a large inventory of best practices and requirements to comply with. On the contrary, it is a short and simple “ground level” set of good practices to help teams get started. All the rules are characterized by fields like impact, remediation model, etc.
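To make this concrete, here is a minimal sketch (in Python) of how one such rule entry could be represented. This is purely illustrative and based on my own assumptions: the field names, types and example values below are not the actual A2DAM schema.

from dataclasses import dataclass
from typing import Optional

@dataclass
class CodeRule:
    # Hypothetical fields, loosely inspired by the attributes mentioned above
    # (impact, remediation model, etc.); not the official A2DAM structure.
    name: str                  # short name of the good practice
    impact: str                # quality aspect affected when the rule is violated
    remediation_model: str     # how the remediation effort is estimated
    remediation_cost_min: int  # estimated effort to fix one violation, in minutes
    threshold: Optional[int]   # limit beyond which the rule is violated, if applicable

example_rule = CodeRule(
    name="Limit the cyclomatic complexity of methods",
    impact="maintainability, testability",
    remediation_model="constant cost per violating method",
    remediation_cost_min=60,
    threshold=10,
)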

An initial version was built by the AA Program Team. This initial list was sent for review to experts in 16 companies that develop static code analysis tools. Eleven of them provided feedback on the relevance and clarity of the practices, the associated thresholds and their tool implementation. Their feedback was taken into account in the final version.
Now that this list is public, we welcome all kinds of feedback (in order to improve and enhance it on a regular basis), and we hope that it will help the community better manage technical debt.

These are our first deliverables. There will be more this year.

During my consulting engagements within…

During my consulting engagements within large organizations, I meet senior managers and discuss the topic of technical debt with them. I would like to share with you:
• The questions that are quite systematically raised by my contacts
• My current answers to their questions
• My suggestions and proposals about what needs to be done to make progress

Question 3:
What can I do with all the Technical Debt accumulated in my legacy applications?

My current answer:
Since you do not have the entire budget necessary to pay off your debt, you have to compromise. I propose to them an approach based on two types of data for each application:

• Two external data points: the estimated relative added value level and the estimated annual maintenance effort.
• Two internal data points: the amount of technical debt (principal) and the interest, as measured by a tool that implements the SQALE method.

The principle is simple. If an application has low added value and no maintenance activity, then its technical debt has no impact and improving it is not necessary.
If an application provides high added value and has high maintenance activity, then its technical debt should be very low; such an application must be given priority in the allocation of the improvement budget. This approach has been applied successfully for some time by very large organizations. I have explained it in more detail in a recent article published in the Cutter IT Journal.
Here is a copy of this article (please do not distribute or copy it, for copyright reasons).
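To illustrate this principle, here is a minimal sketch of the prioritization logic in Python. It relies on my own assumptions: the class, the field names and the scoring formula are illustrative and are not prescribed by the SQALE method; the score is only meant to rank applications for the improvement budget, higher meaning more urgent.

from dataclasses import dataclass

@dataclass
class Application:
    name: str
    added_value: float         # external data: estimated relative added value (e.g. 0..1)
    maintenance_effort: float  # external data: estimated annual maintenance effort (person-days)
    debt: float                # internal data: SQALE technical debt (principal), e.g. in days
    interest: float            # internal data: SQALE interest, e.g. in days per year

def remediation_priority(app: Application) -> float:
    # Low added value and no maintenance activity: the debt has no impact,
    # so improving it is not necessary, whatever its amount.
    if app.added_value == 0 and app.maintenance_effort == 0:
        return 0.0
    # Otherwise, the more valuable and the more maintained the application,
    # and the more debt and interest it carries, the higher its priority.
    exposure = app.added_value * app.maintenance_effort
    return exposure * (app.debt + app.interest)

portfolio = [
    Application("legacy-reporting", added_value=0.1, maintenance_effort=2, debt=300, interest=20),
    Application("core-billing", added_value=0.9, maintenance_effort=120, debt=250, interest=40),
]
for app in sorted(portfolio, key=remediation_priority, reverse=True):
    print(app.name, round(remediation_priority(app), 1))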

I will be happy to present (and demonstrate) this approach in more detail during our seminar.

Questions that are quite systematically raised about TD – (cont.)

During my consulting engagements within large organizations, I meet senior managers and discuss the topic of technical debt with them. I would like to share with you:
• The questions that are quite systematically raised by my contacts
• My current answers to their questions
• My suggestions and proposals about what needs to be done to make progress

Question 2:
What’s the definition of technical debt?

My current answer:
I don’t have a definition of technical debt, because it is a metaphor. You don’t define a metaphor; you just use it (if you find it useful).
The metaphor was introduced by Ward Cunningham, one of the authors of the Agile Manifesto.
Ward first used it while developing a financial application in Smalltalk: he wanted to justify to his boss the refactoring his team was doing, so he used a financial analogy.
If you want to know more about the metaphor, you can:
• Look on the web for videos and interviews of Ward on the topic.
From what I know, Ward never provided a precise “definition” of his metaphor. I personally think this is a fully deliberate choice.
• Look for a recent document on the Agile Alliance web site (here). It is an introduction (not a definition) to the concept, which has been reviewed and completed by Ward.

What needs to be done to make progress on the topic raised by this question?
I personally think that experts and researchers should stop providing definitions of technical debt. Their posts and articles add value when they share experience, studies, recommendations, etc. on the topic. Providing or referencing a definition of technical debt is not fully ethical.
Let me explain my position.
As a comparison, Tom McCabe introduced Cyclomatic Complexity in 1976. After that, nobody tried to give a personal, improved definition of the concept he introduced (and thus owns). The research community focused on studying the concept, finding correlations between the measure and external quality aspects of software, and so on. Of course, he received feedback and comments on his concept and its definition, and in 1996 Tom published a revised definition of his measure.
The technical debt metaphor was “invented” and is “owned” by Ward. If one day there is a definition of the concept, it should come from Ward.

There is still plenty of room for work on the concept. The community can add value by identifying and defining complementary concepts such as technical debt item, obsolescence and other IT debts, and by providing methods, practices, recommendations, tools, etc. for managing technical debt.

Questions that are quite systematically raised about TD

During my consulting engagements within large organizations, I meet senior managers and discuss the topic of technical debt with them. I would like to share with you:
• The questions that are quite systematically raised by my contacts
• My current answers to their questions
• My suggestions and proposals about what needs to be done to make progress

Question 1:
Is technical debt just a new fad that will pass in a few years?

My current answer:
No, the concept of technical debt is a true paradigm shift. Once you have adopted this measurement concept, you will not go back to a traditional measurement system for code quality, for at least two reasons that are purely mathematical.
The first reason is that technical debt is measured on a ratio scale (principal and interest are unbounded numbers: a file with X0 days of debt has X times more debt than a file with 10 days of debt).
Almost all the code quality measurement systems that the community has proposed over the last 30 years produced measures and indexes on bounded intervals such as [1, 5], [0, 10] or [0, 100] (I remember the MI3 and MI4 maintainability indices). Such measurement systems have representation issues and should be replaced by ratio scale measures. In fact, if we leave the software world and look at the measures we have used every day for centuries (weight, distance, area…), they are all ratio scale measures.
The second reason is that technical debt is aggregated by addition. This is the only aggregation rule compatible with a representative measurement system (which means that, if the analysis tools are accurate, the system will not produce false positives when aggregating file-level measures to build a module- or application-level measure).
These are quite short explanations of the two reasons. If you want more details, I invite you to read an article I published in 2010 that covers the measurement theory applied to code measures in more detail: the Valid-2010 conference paper, downloadable here.
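As a small, self-contained illustration of these two points (in Python, with invented numbers that are not taken from the paper), the sketch below contrasts the additive aggregation of debt measured in days with the averaging of a bounded per-file index:

# Technical debt in days is on a ratio scale: ratios between values are meaningful,
# and the debt of a module is exactly the sum of the debt of its files.
file_debt_days = {"order.c": 10.0, "invoice.c": 50.0, "payment.c": 0.5}

assert file_debt_days["invoice.c"] / file_debt_days["order.c"] == 5  # 5 times more debt

module_debt = sum(file_debt_days.values())
print("module debt:", module_debt, "days")  # 60.5 days

# By contrast, averaging a bounded index (e.g. 0..100) loses information: a module with
# one very poor file hidden among clean files gets the same index as a uniformly
# mediocre module, which is the kind of representation issue mentioned above.
index_module_a = [100, 100, 100, 10]       # one very poor file among three perfect ones
index_module_b = [77.5, 77.5, 77.5, 77.5]  # uniformly mediocre files
print(sum(index_module_a) / 4, sum(index_module_b) / 4)  # both print 77.5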

What needs to be done to make progress on the topic raised by this question?
I personally think that this paradigm shift is huge progress, a major step toward the systematic measurement of code quality. I also think that the TD expert community does not communicate enough about this major breakthrough achieved by the concept of technical debt.