The cost of detecting and fixing defects in software increases exponentially the later they are found in the development workflow. Fixing bugs in the field is incredibly costly and risky, often by an order of magnitude or two. The cost is not just the time and resources wasted in the present, but also the opportunities lost in the future.

Most defects end up costing more than it would have cost to prevent them. Defects are expensive when they occur, both the direct costs of fixing the defects and the indirect costs because of damaged relationships, lost business, and lost development time. — Kent Beck, Extreme Programming Explained

The following graph, courtesy of NIST, helps visualize how the effort of detecting and fixing defects increases as the software moves through the five broad phases of software development.

[Chart: relative cost of detecting and fixing defects across the five phases of software development (source: NIST)]

To understand why the costs increase in this manner, let’s consider the following points:

  • It is much easier to detect issues in code while developers are still writing it. Since the code is still fresh in their minds, even complex problems are far easier to fix. As time passes and the code moves to later stages, developers first need to rebuild that context and hunt down issues before they can fix them. If an automated system, such as a CQ integration, highlights issues while the developers are still writing the code, they are much more amenable to incorporating the fix for the same reason.

  • Once the software is in the testing phase, reproducing defects in a developer’s local environment is another time-consuming task. Additionally, while it’s very easy to catch something that is obviously broken or does not meet the requirements, it is incredibly difficult to uncover defects that are more fundamental, such as memory leaks and race conditions. If these issues escape the coding phase, they generally don’t present themselves until the software is in production.

  • After the software has been released and is out in the field, finding defects is not just difficult but also incredibly risky. Fixes have to be made while shielding live users from the problem and keeping the service available, which is business-critical. These effects compound, and the cost of a fix at this stage can be as much as 30x higher than if the defect had been fixed early on.

Mitigation

Given the arguments above, it is valuable to implement processes that enable developers to detect early and detect often. Essentially, the development workflow should ensure that defects can be detected as early as possible, preferably while the code is being written by the developer or while it is in code review, before being merged to the main development branch.

Processes like CI help ensure that changes to the code are small and manageable, so it’s easier to detect issues. Tracking code coverage and enforcing a minimum threshold is also helpful, and facilitates quick iterations on the code to fix these issues.
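As a rough illustration of such a coverage gate, here is a minimal sketch in Python. It assumes a JSON report produced by coverage.py’s `coverage json` command; the report path, keys, and the 80% threshold are assumptions to adapt to your own pipeline.

```python
# coverage_gate.py -- a minimal sketch of a CI coverage gate.
# Assumes coverage.py's JSON report, which exposes an overall
# totals.percent_covered figure; adjust the path and keys for the
# coverage tool your pipeline actually uses.
import json
import sys

THRESHOLD = 80.0  # hypothetical minimum acceptable coverage, in percent


def main(report_path: str = "coverage.json") -> int:
    with open(report_path) as fp:
        report = json.load(fp)
    covered = report["totals"]["percent_covered"]
    if covered < THRESHOLD:
        print(f"Coverage {covered:.1f}% is below the required {THRESHOLD:.1f}%")
        return 1  # a non-zero exit code fails the CI job
    print(f"Coverage {covered:.1f}% meets the required {THRESHOLD:.1f}%")
    return 0


if __name__ == "__main__":
    sys.exit(main(*sys.argv[1:]))
```

Failing the build on a dipping coverage number keeps the feedback loop tight, long before the change reaches a reviewer.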

In essence, processes and conventions should be designed around moving defect detection as early in the workflow, and as close to the developer’s coding environment, as possible. This way, the same compounding effects that inflate the negative impact of late defect detection work in favor of increasing software quality and resilience.

Continuous Quality, or CQ, is a software engineering practice in which isolated changes (or deltas) are immediately analyzed for quality and maintainability, and reported on, before they are added to a code base. The goal of CQ is to provide rapid feedback so that issues that can affect the maintainability of the code or add to technical debt are identified and corrected as soon as possible.

Developers have long been inventing tools and implementing processes to deliver software better. Maintaining the health of code is incredibly vital, and practices like peer review of code, static analysis checks, and tracking key metrics like documentation coverage and test coverage help enforce that. Implementing CQ is a formal way to bring all these practices together in the software development workflow. When put together with practices like CI and CD, CQ helps ensure that the team is able to deliver reliable software, faster.

The major benefits of implementing Continuous Quality as part of the development workflow are:

1. More reliable and secure software

One of the core tenets of CQ is detecting defects in code, such as anti-patterns, bug risks, and potential security vulnerabilities, as close to the developer’s workflow as possible. Since the developer is made aware of these issues very early, while they are still amenable to making changes, the chances of these issues being fixed are much higher than when they are raised later. Rinse and repeat, and the most common issues have a high chance of never entering the code base at all, leading to software with far fewer defects that is more reliable and secure.
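For a flavor of what such a system flags, here is a hypothetical Python snippet with a classic bug risk that most static analyzers report, along with the conventional fix:

```python
# A classic bug risk that static analyzers commonly flag: a mutable default
# argument is created only once, so every call without `tags` shares the
# same list.
def add_tag_buggy(tag, tags=[]):
    tags.append(tag)
    return tags

add_tag_buggy("a")  # ['a']
add_tag_buggy("b")  # ['a', 'b'] -- surprising shared state across calls

# The conventional fix: use None as a sentinel and create a fresh list
# for each call.
def add_tag(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags
```

Raised at commit time, an issue like this takes a minute to fix; found in production, it can masquerade as data corruption.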

2. Faster time to market

A direct consequence of implementing CQ is that it automates a lot of the work that senior developers on the team do when reviewing a proposed code change. Since the most trivial issues are now raised by the CQ system to the developer proposing the change, they are typically fixed before the code review happens. This saves considerable time for both junior and senior developers, who can now spend their time on more productive, higher-value work. Due to the increased reliability, the time to manually test and verify software is also reduced, and it becomes easier to deliver software into production faster.

3. Reduced cost of software maintenance

An average software developer spends over 17 hours every week battling technical debt and bad code [1]. Maintenance issues like debugging, refactoring, fixing broken dependencies, and adapting code to new requirements claim a huge amount of developer time and attention. Factors like lack of documentation and lack of quality processes can compound the complexity considerably. CQ ensures that basic hygiene factors are maintained in the code and that indicators of source code health are kept in check, so it becomes easier to add a new module, extend existing functionality, or port the software to a new environment.

4. Better estimation of release timelines

Estimating how long it will take to ship a new feature is business-critical, since it directly impacts the go-to-market strategy and related revenue streams. Trying to estimate without all the necessary information about the current health of the code base is like shooting in the dark. CQ ensures that decision-makers have all the relevant information at their fingertips, so release timelines can be estimated more realistically.

5. Improved customer satisfaction

This is basically a no-brainer. A stable and secure product that does what it says it does and doesn’t crash is all that customers wish for. Building software is hard, and building it at scale is even harder; there’s no denying that. CQ ensures quality control at the earliest phases of writing software, and in the smallest quanta of change. This makes it easier for teams to ensure quality control as a whole and focus on delivering a great product experience to customers.

6. Improved developer happiness

Finally, and most importantly, keep in mind that developers are human. More than 80% of developers in a recent survey said that having to work on bad code and unmanaged technical debt has a negative impact on their productivity and personal morale. Processes like CQ help reduce the grunt work in development workflows, automate what can be automated, and provide certainty to developers. Saving time and increasing productivity boosts morale and enables developers to do their best work. More than 70% of engineering leaders believe developers can bring the most impact to the business by bringing software to market faster. Happy developers are the only way to achieve that.

References

  1. The Developer Coefficient: a $300B opportunity for businesses

Software evolves, and changes to software are inevitable. In general, any work done to change the software after it is in operation is considered maintenance. Maintenance consumes over 70% of the total life-cycle cost of a software project [1]. If you think about it for a while, you will realize how critical maintenance work is to keeping the software alive. Interestingly, the act of reading code is the most time-consuming component of all maintenance activities performed by software developers.

Since readability has such an impact on the maintenance of software, let’s understand how it is defined. In natural languages, readability is defined as how easy a text is to understand. In literature, readability is objectively judged by metrics like average syllables per word, average sentence length, and so on. Raising the readability level of a text from mediocre to good can make the difference between success and failure of its communication goals.
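To make that concrete, here is a minimal sketch of one such classic metric, the Flesch Reading Ease score, which combines average sentence length and average syllables per word; the example counts below are made up.

```python
# Flesch Reading Ease: a classic readability score for natural-language text,
# built from average sentence length and average syllables per word.
def flesch_reading_ease(total_words: int, total_sentences: int,
                        total_syllables: int) -> float:
    words_per_sentence = total_words / total_sentences
    syllables_per_word = total_syllables / total_words
    return 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word

# Example: a 100-word text with 5 sentences and 140 syllables.
print(flesch_reading_ease(100, 5, 140))  # ~68.1, i.e. "plain English"
```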

“Programs must be written for people to read, and only incidentally for machines to execute,” wrote the authors of the classic programming textbook Structure and Interpretation of Computer Programs (SICP). So how do we make sure the communication goals of source code are delivered to developers?

Source code is not documentation

You will often see software developers treat source code as the primary, or at times the only, documentation. For this to work in practice, the code has to be sufficiently detailed and precise. But source code in its original form does not read like plain text. As noted earlier, readability plays a huge part in making software accessible and maintainable. Any documentation that is written must be easy to understand not just by the immediate team members but also by future stakeholders. Some examples of why this is important:

  1. When interfacing with external modules, the consumer should be able to understand the interfaces exposed by the existing module.
  2. To extend a module, existing models and concepts need to be understood in detail.
  3. To identify a bug and patch a fix faster, detailed documentation can be critical.

Of course, for the documentation to be effective, it must be maintained along with the code itself. When refactoring code, make sure that the documentation reflects the change as well. All seasoned engineering teams put the emphasis on tracking changes in documentation when the code is updated.

How to write good documentation?

The three golden rules of writing documentation are to ask yourself these questions while writing comments (a short example follows the list):

  1. What does this piece of code do?
  2. How does it do it?
  3. How does someone use it somewhere else?
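
Here is a minimal, hypothetical docstring that answers all three questions; the function itself is illustrative and not taken from any real code base:

```python
def normalize_scores(scores):
    """Scale a list of numeric scores into the 0..1 range.

    What: returns a new list in which each score is divided by the
    maximum score, so results are comparable across runs.
    How: finds the maximum once, then does a single pass dividing each
    value; an all-zero input is returned unchanged to avoid dividing
    by zero.
    Usage: normalize_scores([2, 4, 8]) returns [0.25, 0.5, 1.0].
    """
    peak = max(scores)
    if peak == 0:
        return list(scores)
    return [s / peak for s in scores]
```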

When you treat comments as part of the source code, make sure they are reviewed along with it in the merge process. If there is one takeaway from this post, it is to treat documentation as an equal of source code in the review process.

Embedded documentation helps the programmer stay within context and understand the code thoroughly. It also exhibits a significant correlation with other conventional metrics such as software quality and code churn. A code base is owned primarily by a team, not an individual. It’s important that developers put in the effort to make sure that the code they write is clear and readable. Some teams may prefer to skip code documentation in order to save time, money, and effort. Keep in mind, though, that this might result in even more significant expenses once the product is transferred to another team or when updates are required down the line.


The inability to change software quickly and reliably means that business opportunities are lost. DeepSource enables teams to keep track of the health of their software documentation with a documentation coverage metric. Get started today.

[Screenshot: documentation coverage metric in the DeepSource metrics dashboard]
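For intuition, documentation coverage can be thought of as the share of documentable objects (modules, classes, functions) that actually carry documentation. Here is a rough, hypothetical sketch of such a metric using Python’s ast module; it is an illustration only, not how DeepSource computes it:

```python
# A rough sketch of a documentation coverage metric: the percentage of
# functions and classes in a Python source file that carry a docstring.
import ast

def documentation_coverage(source: str) -> float:
    tree = ast.parse(source)
    documentable = [
        node for node in ast.walk(tree)
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
    ]
    if not documentable:
        return 100.0
    documented = sum(1 for node in documentable if ast.get_docstring(node))
    return 100.0 * documented / len(documentable)

# Hypothetical usage on a single file:
with open("example_module.py") as fp:
    print(f"Documentation coverage: {documentation_coverage(fp.read()):.1f}%")
```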

References

  1. Software Defect Reduction Top 10 List
  2. CodeAsDocumentation
  3. A Survey of Improving Computer Program Readability to Aid Modification

The term technical debt was coined by Ward Cunningham in 1992. To understand technical debt, let’s compare it with financial debt.

A financial debt is an arrangement in which the borrowing party receives money under the condition that it is paid back at a later date, usually with interest. Financial debt is often the result of overspending, lack of financial knowledge, or a poorly managed budget, to name a few causes. You might also take on financial debt to invest in a capital-intensive business, an education, and so on, expecting returns in the future. Financial debt is not always a bad thing, as long as you’re aware of the consequences and keep it under control.

Now consider a scenario in software development: constraints are placed on the development of a product, primarily in terms of the delivery deadline, or a business decision is made without knowledge of the technical implementation. An ad-hoc decision is then made intentionally to meet the delivery deadline. This results in technical debt. It is leeway taken to ship things today, in the hope that it will bring returns in the future. But the “interest” must be paid: the ad-hoc decisions taken today must be corrected sometime in the future.

Causes and effects

The major contributors to technical debt, however, are unintentional and stem from non-repayment of the interest: lack of objective code-review processes, poor estimation of product releases, and not following best practices or industry-standard patterns, to name a few. Inadvertently, without being aware of the butterfly effect it causes, people move on to the next thing. Like financial debt, technical debt incurs interest, in the form of the extra effort we have to put into future development because of the quick-and-dirty design choices made. In technical debt, refactoring is like repaying the principal, and slower development due to complexity is like paying interest. As long as you’re well aware of the choices you’ve made and repay the debt on time before moving on, it is not a bad thing.
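A toy calculation makes the compounding visible. Assume, purely for illustration, that a shortcut adds a 5% slowdown to every subsequent change in that part of the code, and that the slowdowns compound as more shortcuts pile on top:

```python
# Toy illustration of compounding technical-debt "interest". The 5% rate and
# the compounding assumption are illustrative, not measured values.
interest_rate = 0.05  # assumed extra effort added per change
base_effort = 1.0     # effort of a change in a clean code base

for change in (1, 5, 10, 20):
    effort = base_effort * (1 + interest_rate) ** change
    print(f"change #{change:>2}: {effort:.2f}x the clean-code effort")

# change # 1: 1.05x ... change #20: 2.65x. Refactoring (repaying the
# principal) is what resets this multiplier back toward 1.0x.
```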

The biggest cost of technical debt is that it slows your ability to deliver future features, handing you an opportunity cost in lost revenue. As the debt accumulates over time, bugs start cropping up, hurting software stability and reliability. These factors also lead to developer unhappiness and burnout, resulting in low productivity and thereby compounding the first two effects. The tricky thing about technical debt is that, unlike money, it’s hard to measure effectively. To take action on technical debt, one should have absolute visibility into the current state of their code.


DeepSource’s vision is to help teams identify technical debt, educate them with industry-standard practices and patterns, and provide actionable insights on code health continuously. Though our tools automate these workflows, we firmly believe in educating developers and team managers along the way, in the form of detailed articles about ways to keep technical debt under control, reduce existing debt, and avoid it as much as possible in the future.

We welcome you to be a part of this journey! Tell us what you think by tweeting to us @DeepSourceHQ.
