What to Do When…Gulp…Data Analysis Isn’t an Option…
I’m a quality engineer, so it probably goes without saying that I like gathering and analyzing data. Minitab and I spend a significant amount of time together. Some might say that our relationship is unhealthy—perhaps even co-dependent. But Minitab and I have been together for almost 20 years. That’s a long time for any relationship, especially one between a software application and an engineer.
It’s that 20-year bond that makes it very difficult for me to acknowledge when I can’t gather data for a particular attribute of quality. Sometimes it’s not an option. Sometimes it’s just not the best option. Either way, it’s not easy.
Case in point. Minitab runs an agile development shop. This means we begin testing very early in development and continually iterate between development and test. For this reason, developers are fixing bugs almost as quickly as we find them. That’s very good. Believe me—I don’t miss waterfall development at all.
In addition, our development teams are given a lot of autonomy to determine the most efficient and effective way to develop software.
As a result, many of our teams choose not to log development bugs in our bug-tracking database. It’s faster for them to work side by side and fix bugs as they’re found. That approach is definitely efficient; no arguments there. But there’s a downside for data lovers like me: we lose the ability to track the number of issues found during development, and we no longer have a comprehensive database of bugs to assess and study.
That bugs me.
The number of bugs found during development directly impacts delivery time. With data, I could analyze results and predict delivery and quality impacts. I could look for areas of improvement. I could sleep at night. With data, my world makes sense.
I know the world making sense to me means nothing to anyone else. In fact, some days I’m pretty sure people are hell-bent on ensuring my world does not make sense. Fine. But I still need to ensure that we strike the right balance between quality and delivery. The easy route: make everyone collect data. The better route? Get the information I need without interfering with the team dynamics. That means sometimes, to my chagrin, data gathering for a specific attribute of quality just isn’t going to happen.
So, when data collection isn’t an option, what do we do?
Data Analysis and Quality Improvement—a “Big Picture” Thing
Luckily, understanding product quality is about studying many attributes and their relationships to one another. It’s the totality of the attributes that provides an overall picture of product quality; no single attribute provides a complete view. So many other metrics can offer insight into what’s happening in a “data-deficient” area.
For example, while I can’t track initial code quality, I am tracking many other attributes of performance and quality. Because each of these attributes has relationships with the others, I can see “indicators” for one attribute when looking at the results of another.
Code Quality Metrics
We track various code metrics, including maintainability index, cyclomatic complexity, depth of inheritance and class coupling. Each provides a different view of the code. For example, cyclomatic complexity measures the number of linearly independent paths through the source code. The more complex the code, the more test cases it needs and, typically, the more prone it is to defects.
Our recommended cyclomatic complexity is less than 20. Classes or methods with high complexity numbers are a red flag. A high number doesn’t definitively mean the code quality is bad, but it is an indicator of complex code, and where code is complex, there are often bugs.
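As a rough illustration, a check like this can be sketched as counting decision points per method and flagging anything over the threshold. The method names and decision counts below are entirely hypothetical, not Minitab’s actual code; McCabe’s formula reduces to decision points + 1 for a single-entry, single-exit method.

```python
# Sketch: flag methods whose McCabe cyclomatic complexity exceeds a threshold.
# Complexity = decision points + 1 (single-entry, single-exit method).

THRESHOLD = 20

# Decision points (if, while, for, case, &&, ||) counted per method.
# These names and numbers are made up for illustration.
decision_points = {
    "parse_input": 4,
    "render_report": 27,
    "validate_model": 11,
    "fit_regression": 31,
}

complexity = {name: d + 1 for name, d in decision_points.items()}
flagged = sorted(name for name, c in complexity.items() if c > THRESHOLD)
print(flagged)  # the methods that warrant a closer look
```

In practice the decision-point counts would come from a static-analysis tool rather than being entered by hand.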
The boxplot below shows that most of the methods have a code complexity of less than 20; however, there are clusters outside of the limit. These are indicators of code that should be investigated.
Project Completion Rate
Another indicator is development progress. Our development cycle has phases and review points. As features pass through these phases we track their progress. Projects that are going well move smoothly through these phases and into completion. Projects that aren’t going as well…tend to get “stuck.”
In the figure below, most projects are successfully moving toward the “ready to ship” state. But some appear to be stuck in development, and should be investigated to determine the cause of the delay.
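To make the idea concrete, here is a minimal sketch of how “stuck” features might be surfaced: anything sitting in a non-final phase longer than a chosen limit gets flagged for investigation. The feature names, phases, dates, and 30-day limit are all made up for illustration; they are not our actual process data.

```python
# Sketch: flag features "stuck" in a development phase past a time limit.
from datetime import date

STUCK_AFTER_DAYS = 30
today = date(2014, 6, 1)  # fixed date so the example is reproducible

# (feature, current phase, date it entered that phase) -- illustrative data
features = [
    ("3D scatterplot", "in development", date(2014, 3, 10)),
    ("report export", "in review", date(2014, 5, 20)),
    ("new session window", "ready to ship", date(2014, 5, 25)),
]

stuck = [name for name, phase, entered in features
         if phase != "ready to ship"
         and (today - entered).days > STUCK_AFTER_DAYS]
print(stuck)  # features that have stalled and need a look
```

The interesting design question is the limit itself: too short and every feature looks stuck; too long and real delays surface too late to act on.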
Minitab uses a continuous integration system to manage changes to the code base. So each time a developer changes code, thousands of automated tests—actually, tens of thousands—are run against the change to ensure that no problems were introduced to existing functionality. When a change causes a test to fail, we call that “build breakage,” and we refer to the frequency of build breakage as our “build quality.”
Our continuous integration system treats failures in existing test cases as regressions. It may not include the specific tests for the feature in development, and it may not provide specific details on failures in new feature code; but if build quality starts heading south, that’s a pretty good indicator that the code for one or more features should be investigated.
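One simple way to quantify build quality along these lines is the fraction of recent builds that passed, with a floor below which recent changes get investigated. This is a sketch under my own assumptions (the window size, the floor, and the pass/fail history are invented), not our actual CI tooling.

```python
# Sketch: build quality as the pass rate over a sliding window of CI builds.
from collections import deque

QUALITY_FLOOR = 0.90  # below this, investigate recent changes
WINDOW = 10           # number of recent builds to consider

recent = deque(maxlen=WINDOW)  # True = build passed, False = build broke
for passed in [True, True, False, True, True,
               True, False, True, False, True]:  # illustrative history
    recent.append(passed)

build_quality = sum(recent) / len(recent)
print(build_quality)                  # 7 passes out of 10 builds
print(build_quality < QUALITY_FLOOR)  # heading south: time to investigate
```

Because `deque(maxlen=WINDOW)` silently drops the oldest result as new builds arrive, the metric naturally tracks recent trend rather than all-time history.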
Juggling Cost, Quality and Delivery
Theoretically, I can (and, some may argue, should) find a way to gather the data. You can’t imagine how much I’d like for it to be that easy. But it’s not always easy to collect the data, and when you’re juggling cost, quality and delivery, you’ll find times when you can’t. And so, quality engineers assess many (but sometimes not all) quality attributes, individually and in totality. Even when gathering the exact data we want isn’t an option, we can continually monitor our progress, investigate issues and make adjustments moving forward.