Quality Improvement

Blog posts and articles about using statistics and data analysis to improve quality through methodologies such as Lean and Six Sigma.

In Parts 1 and 2 of Gauging Gage we looked at the numbers of parts, operators, and replicates used in a Gage R&R Study and how accurately we could estimate %Contribution based on the choice for each.  In doing so, I hoped to provide you with valuable and interesting information, but mostly I hoped to make you like me.  I mean like me so much that if I told you that you were doing... Continue Reading
You run a capability analysis and your Cpk is bad. Now what? First, let’s start by defining what “bad” is. In simple terms, the smaller the Cpk, the more defects you have. So the larger your Cpk is, the better. Many practitioners use a Cpk of 1.33 as the gold standard, so we’ll treat that as the gold standard here, too. Suppose we collect some data and run a capability analysis using Minitab Statisti... Continue Reading
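The post runs the analysis in Minitab; as a rough illustration of the arithmetic behind Cpk, here is a minimal Python sketch. The spec limits and measurements are made up, and overall sample standard deviation is used as a simplification (Minitab's Cpk is based on a within-subgroup sigma estimate):

```python
import statistics

def cpk(data, lsl, usl):
    """Process capability index: distance from the mean to the
    nearer spec limit, in units of 3 standard deviations."""
    mean = statistics.fmean(data)
    # Simplification: overall sample sigma, not within-subgroup sigma
    sigma = statistics.stdev(data)
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Hypothetical measurements against spec limits of 9.5 and 10.5
sample = [9.8, 10.1, 10.4, 9.9, 10.2, 10.0, 10.3, 9.7, 10.1, 10.0]
print(round(cpk(sample, lsl=9.5, usl=10.5), 2))  # -> 0.69, below the 1.33 benchmark
```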


by Kevin Clay, guest blogger In transactional or service processes, we often deal with lead-time data, and usually that data does not follow the normal distribution. Consider a Lean Six Sigma project to reduce the lead time required to install an information technology solution at a customer site. It should take no more than 30 days—working 10 hours per day Monday–Friday—to complete, test and... Continue Reading
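The post's analysis is done in Minitab; as an illustrative sketch of the first step — checking whether lead-time data is normal — here is one way to do it in Python with a normality test. The data below is simulated right-skewed lead-time data, not from the post:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
# Hypothetical install lead times (days): right-skewed,
# as service lead-time data often is
lead_times = rng.lognormal(mean=3.0, sigma=0.5, size=100)

stat, p = stats.shapiro(lead_times)
print(f"Shapiro-Wilk p = {p:.4f}")  # small p => reject normality
if p < 0.05:
    print("Data are non-normal; consider a nonnormal capability "
          "analysis or a Box-Cox/Johnson transformation")
```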
"You take 10 parts and have 3 operators measure each 2 times." This standard approach to a Gage R&R experiment is so common, so accepted, so ubiquitous that few people ever question whether it is effective.  Obviously one could look at whether 3 is an adequate number of operators or 2 an adequate number of replicates, but in this first of a series of posts about "Gauging Gage," I want to look at... Continue Reading
Did you know the most popular diamond cut is probably the Round Brilliant Cut? The first early version of what would become the modern Round Brilliant Diamond Cut was introduced by an Italian named Vincent Peruzzi, sometime in the late 17th century.  In the early 1900s, the angles for an "ideal" diamond cut were designed by Marcel Tolkowsky. Minor changes have been made since then, but the angles... Continue Reading
In its industry guidance to companies that manufacture drugs and biological products for people and animals, the Food and Drug Administration (FDA) recommends three stages for process validation: Process Design, Process Qualification, and Continued Process Verification. In this post, we will focus on that third stage. Stage 3: Continued Process Verification Per the FDA guidelines, the goal of... Continue Reading
To make objective decisions about the processes that are critical to your organization, you often need to examine categorical data. You may know how to use a t-test or ANOVA when you’re comparing measurement data (like weight, length, revenue, and so on), but do you know how to compare attribute or count data? It’s easy to do with statistical software like Minitab.  One person may look at this bar... Continue Reading
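The post compares counts in Minitab; a standard way to test whether two categorical variables are associated is a chi-square test of independence. Here is a minimal sketch with a made-up contingency table:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: inspection outcome by production line
#               pass  fail
observed = [[10, 20],   # line A
            [20, 10]]   # line B

chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi-square = {chi2:.3f}, p = {p:.4f}")
if p < 0.05:
    print("Evidence that outcome depends on production line")
```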
Genichi Taguchi is famous for his pioneering methods of robust quality engineering. One of the major contributions that he made to quality improvement methods is Taguchi designs. Designed experiments were first used by agronomists during the last century. This method seemed highly theoretical at first, and was initially restricted to agronomy. Taguchi made the designed experiment approach more... Continue Reading
In its industry guidance to companies that manufacture drugs and biological products for people and animals, the Food and Drug Administration (FDA) recommends three stages for process validation. While my last post covered statistical tools for the Process Design stage, here we will focus on the statistical techniques typically utilized for the second stage, Process Qualification. Stage 2: Process... Continue Reading
Have you ever wished your control charts were better?  More effective and user-friendly?  Easier to understand and act on?  In this post, I'll share some simple ways to make SPC monitoring more effective in Minitab. Common Problems with SPC Control Charts I worked for several years in a large manufacturing plant in which control charts played a very important role. Thousands of SPC... Continue Reading
In the first part of this series, we saw how conflicting opinions about a subjective factor can create business problems. In part 2, we used Minitab's Assistant feature to set up an attribute agreement analysis study that will provide a better understanding of where and when such disagreements occur.  We asked four loan application reviewers to reject or approve 30 selected applications, two... Continue Reading
Previously, I discussed how business problems arise when people have conflicting opinions about a subjective factor, such as whether something is the right color or not, or whether a job applicant is qualified for a position. The key to resolving such honest disagreements and handling future decisions more consistently is a statistical tool called attribute agreement analysis. In this post, we'll... Continue Reading
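The post uses Minitab's attribute agreement analysis; a building block of that analysis is agreement between a pair of appraisers beyond what chance would produce, commonly measured with Cohen's kappa. Here is a self-contained sketch with made-up approve/reject calls from two hypothetical loan reviewers:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Agreement between two raters beyond chance: (po - pe) / (1 - pe)."""
    n = len(rater1)
    po = sum(a == b for a, b in zip(rater1, rater2)) / n  # observed agreement
    c1, c2 = Counter(rater1), Counter(rater2)
    # Chance agreement from each rater's marginal category frequencies
    pe = sum(c1[c] * c2[c] for c in set(c1) | set(c2)) / n ** 2
    return (po - pe) / (1 - pe)

# Hypothetical approve/reject calls on the same 10 applications
a = ["approve", "approve", "reject", "approve", "reject",
     "approve", "reject", "reject", "approve", "approve"]
b = ["approve", "reject", "reject", "approve", "reject",
     "approve", "reject", "approve", "approve", "approve"]
print(round(cohens_kappa(a, b), 2))  # -> 0.58: moderate agreement
```

A kappa near 1 means the reviewers agree far more often than chance; values this low would flag the appraisal process for clearer operational definitions.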
In my last post on DMAIC tools for the Define phase, we reviewed various graphs and stats typically used to define project goals and customer deliverables. Let’s now move along to the tools you can use in Minitab Statistical Software to conduct the Measure phase. Measure Phase Methodology The goal of this phase is to measure the process to determine its current performance and quantify the problem.... Continue Reading
People frequently have different opinions. Usually that's fine—if everybody thought the same way, life would be pretty boring—but many business decisions are based on opinion. And when different people in an organization reach different conclusions about the same business situation, problems follow.  Inconsistency and poor quality result when people being asked to make yes / no, pass / fail, and... Continue Reading
Ahoy, matey! Ye’ve come to the right place to learn about Value Stream Maps (VSM).  Just as a treasure map can lead a band o’ pirates to buried treasures, so too can the VSM lead a process improvement bilge rat to the loot buried deep inside a process! Companion by Minitab has an easy-to-use VSM tool to guide yer way. Use a value stream map to illustrate the flow of materials and information as a... Continue Reading
The line plot is an incredibly agile but frequently overlooked tool in the quest to better understand your processes. In any process, whether it's baking a cake or processing loan forms, many factors have the potential to affect the outcome. Changing the source of raw materials could affect the strength of plywood a factory produces. Similarly, one method of gluing this plywood might be better... Continue Reading
If you’re familiar with Lean Six Sigma, then you’re familiar with DMAIC. DMAIC is the acronym for Define, Measure, Analyze, Improve and Control. This proven problem-solving strategy provides a structured 5-phase framework to follow when working on an improvement project. This is the first post in a five-part series that focuses on the tools available in Minitab Statistical Software that are most... Continue Reading
A member of Minitab's LinkedIn group asked how to create a chart to monitor change by month, specifically comparing last year's data to this year's data. My last post showed how to do this using an Individuals Chart of the differences between this year's and last year's data.  Here's another approach suggested by a participant in the group.  Applying Statistical Thinking An individuals chart of the... Continue Reading
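The chart itself is built in Minitab; as a sketch of the arithmetic behind an individuals chart of the year-over-year differences, the limits are the mean of the differences plus or minus 2.66 times the average moving range. The monthly differences below are made up:

```python
import statistics

def i_chart_limits(diffs):
    """Center line and control limits for an individuals chart:
    mean +/- 2.66 * average moving range (2.66 = 3 / d2, d2 = 1.128 for n=2)."""
    mr_bar = statistics.fmean(abs(a - b) for a, b in zip(diffs[1:], diffs))
    center = statistics.fmean(diffs)
    return center - 2.66 * mr_bar, center, center + 2.66 * mr_bar

# Hypothetical month-over-month differences (this year minus last year)
diffs = [2.0, 3.5, 1.0, 4.0, 2.5, 3.0, 1.5, 2.0, 3.5, 2.5, 3.0, 2.0]
lcl, cl, ucl = i_chart_limits(diffs)
print(f"LCL={lcl:.2f}  CL={cl:.2f}  UCL={ucl:.2f}")
```

A difference falling outside these limits signals a month-to-month change larger than the usual variation between the two years.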
Reliability analysis is the perfect tool for calculating the proportion of items that you can expect to survive for a specified period of time under identical operating conditions. Light bulbs—or lamps—are a classic example. Want to calculate the number of light bulbs expected to fail within 1000 hours? Reliability analysis can help you answer this type of question. But to conduct the analysis... Continue Reading
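The post performs the reliability analysis in Minitab; as a sketch of the underlying survival calculation, once a lifetime distribution has been fitted you can read off the probability of surviving a given time. The Weibull shape and scale below are assumed values, not fitted from real bulb data:

```python
import math
from scipy.stats import weibull_min

# Hypothetical Weibull parameters for bulb failure times:
# shape (beta) = 2, scale (eta) = 1500 hours
shape, scale = 2.0, 1500.0

# Probability a bulb survives past 1000 hours: exp(-(t/eta)^beta)
surv = weibull_min.sf(1000, shape, scale=scale)
print(f"P(survive 1000 h) = {surv:.3f}")
# Cross-check against the closed-form Weibull survival function
assert math.isclose(surv, math.exp(-(1000 / 1500) ** 2))
```

Equivalently, about 1 - surv of the bulbs would be expected to fail within 1000 hours under these assumed parameters.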
In Parts 1 and 2 of this blog series, I wrote about how statistical inference uses data from a sample of individuals to reach conclusions about the whole population. That’s a very powerful tool, but you must check your assumptions when you make statistical inferences. Violating any of these assumptions can result in false positives or false negatives, thus invalidating your results.  The common... Continue Reading