In previous posts, I’ve outlined some reasons why a Lean Six Sigma project might be deemed a failure. We’ve gathered many of these reasons by surveying and talking with our customers.
I’d like to present a few more reasons why projects might fail, and then share some “words of wisdom” from Minitab trainers on how you can avoid these project failures.
Certain quality improvement projects were never meant to be Six Sigma projects that fit neatly into the DMAIC (Define – Measure – Analyze – Improve – Control) methodology. Examples include:
This is not a suitable DMAIC Six Sigma project because it isn’t focused on reducing defects. Selecting a software vendor is certainly a good example of an “improvement” project, but not one that makes sense to complete using the DMAIC structure.
Unlike the previous example, this project does focus on reducing defects. However, one year into the life of the product, the manufacturer no longer has control over the car; the consumer who purchased it does. This makes collecting relevant data extremely challenging, and you’d also have to factor in whether the consumer is treating the car properly by getting it serviced on time, and so on.
Often, equipment installation engineers choose installation projects as their Six Sigma projects. However, this is not a good idea because, again, there are no defects to reduce. This type of project would likely fit better into the DFSS (Design for Six Sigma) methodology.
How can you avoid the trap of accidentally forcing your project into the DMAIC structure if that structure really isn't suitable for the type of project you’re working on? Consider performing a project risk assessment before starting the project to evaluate its likelihood to succeed.
Here’s an example of a project risk assessment form in Minitab Engage that comes pre-loaded with a list of evaluation criteria for your DMAIC project:
To fill out a project risk assessment, go through each of the evaluation criteria and answer “yes,” “no,” or “maybe.” Depending on how you answer, you can also select a risk score for each. In the example above, “yes” was scored with a 1 (a low risk score), “no” was scored with a 10 (high risk), and “maybe” was scored with a 4 (partially or probably risky).
You could refine this assessment by giving each evaluation criterion a different weight. Or, as in the example above, you can give all the criteria equal weight because each is equally important when judging the riskiness of the project.
Once you’ve answered each criterion, add up the scores. If you’re comparing two projects, the one with the lower total risk score is perceived as more likely to succeed (or less risky). Projects with high scores may lack a clearly identified customer or a clearly defined defect, and a project without a definable defect is a red flag that you may be trying to force your project into the DMAIC structure.
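The scoring scheme above is easy to express in a few lines of code. Here is a minimal sketch in Python of the yes/no/maybe scoring (yes = 1, no = 10, maybe = 4) with equal weights; the criteria names and answers are illustrative assumptions, not the pre-loaded list in Minitab Engage:

```python
# Hypothetical project risk assessment scoring, mirroring the
# yes/no/maybe scheme described above: yes = 1, no = 10, maybe = 4.
# Criteria and answers below are illustrative, not from Minitab Engage.

RISK_SCORES = {"yes": 1, "no": 10, "maybe": 4}

def total_risk(answers, weights=None):
    """Sum the weighted risk scores; equal weights (1.0) unless specified."""
    weights = weights or {criterion: 1.0 for criterion in answers}
    return sum(RISK_SCORES[answer] * weights[criterion]
               for criterion, answer in answers.items())

project_a = {
    "Clearly identified customer": "yes",
    "Clearly defined defect": "yes",
    "Data available or obtainable": "maybe",
}
project_b = {
    "Clearly identified customer": "maybe",
    "Clearly defined defect": "no",
    "Data available or obtainable": "no",
}

print(total_risk(project_a))  # 6  -> lower total risk, more likely to succeed
print(total_risk(project_b))  # 24 -> red flag: may not fit DMAIC
```

To weight one criterion more heavily, pass a `weights` dictionary, e.g. `total_risk(project_b, {"Clearly defined defect": 2.0, "Clearly identified customer": 1.0, "Data available or obtainable": 1.0})`.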
Another key aspect of DMAIC is using data to make sound decisions. Customers often tell us that actually getting the data (or getting decent data) is one of their biggest challenges when completing Six Sigma projects.
How could you end up with no data? This seems to be pretty common with service-related projects because it can be difficult or expensive to get good data. Getting service-related data is not as simple as picking products off the manufacturing line and measuring them. But getting decent data can be difficult even in manufacturing, especially when a defect occurs in the customer’s process and not yours. Sometimes it's difficult for improvement teams to access the data, perhaps due to confidentiality or privacy issues. This problem is particularly common in finance, healthcare, and human resources. The data may exist, but the process improvement team might not be able to use it!
If this situation arises, it’s best to focus your attention on areas where the data does exist or where it’s more easily obtained. But there's something that’s even worse than having no data: Having bad data! There’s nothing worse than spending the time to collect your data, assuming that it’s great, and then doing all of the analysis only to realize later that your data was flawed.
How can you avoid having bad data? The quality improvement experts here at Minitab recommend that you check your data before you use it. It’s always worth it to carry out a measurement systems analysis (MSA) in the Measure phase of your DMAIC project so that you can answer the question: “Can I trust my data?” with a confident "Yes."
If you’ve got continuous data, you’ll want to conduct a Gage R&R, and if you’ve got attribute data, you’ll want to conduct an attribute agreement analysis. These analyses can be done in Minitab, and you can use the software’s built-in Assistant menu to guide you through selecting the appropriate MSA for your data:
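To make the attribute case concrete: the core idea of an attribute agreement analysis is to compare each appraiser’s ratings against a known standard (Minitab’s Assistant runs the full analysis, including agreement between and within appraisers). Here is a minimal sketch in Python of just the appraiser-vs-standard piece, with made-up ratings:

```python
# A minimal sketch of one piece of attribute agreement analysis:
# percent agreement between each appraiser and a known standard.
# The ratings below are made-up; Minitab's Assistant does the full analysis.

standard = ["pass", "fail", "pass", "pass", "fail", "fail"]

appraisers = {
    "Appraiser 1": ["pass", "fail", "pass", "pass", "fail", "pass"],
    "Appraiser 2": ["pass", "fail", "pass", "fail", "fail", "fail"],
}

def percent_agreement(ratings, standard):
    """Percentage of items where the appraiser matched the known standard."""
    matches = sum(r == s for r, s in zip(ratings, standard))
    return 100.0 * matches / len(standard)

for name, ratings in appraisers.items():
    print(f"{name}: {percent_agreement(ratings, standard):.1f}% agreement")
```

In practice you’d want far more than six parts, repeated trials per appraiser, and kappa statistics to correct for chance agreement, which is exactly what the full analysis in Minitab provides.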
For more on MSA, check out these posts:
Explaining Quality Statistics So My Boss Will Understand: Measurement Systems Analysis (MSA)
Measurement Systems Analysis Needs a Stratified Random Sample (Even with Gummi Bears)
Accuracy vs. Precision: What’s the Difference?
Ultimately, this is the moral of the story: Until you can trust your data, you should not proceed to the analysis phase. And if you do proceed to the analysis phase anyway, you may be setting yourself up for a project failure.
For previous posts about how to avoid a Lean Six Sigma project failure, take a look at Avoiding a Lean Six Sigma Project Failure, part 1 and Avoiding a Lean Six Sigma Project Failure, part 2.