Gage R&R | Minitab
Blog posts and articles about Gage Repeatability and Reproducibility (Gage R&R) studies for quality improvement.
http://blog.minitab.com/blog/gage-randr/rss
Sat, 01 Oct 2016 17:09:25 +0000
FeedCreator 1.7.3

Those 10 Simple Rules for Using Statistics? They're Not Just for Research
http://blog.minitab.com/blog/understanding-statistics/those-10-simple-rules-for-using-statistics-theyre-not-just-for-research
<p><span style="line-height: 1.6;">Earlier this month, PLOS.org published an article titled "<a href="http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004961" target="_blank">Ten Simple Rules for Effective Statistical Practice</a>." </span><span style="line-height: 20.8px;">The 10 rules are good reading for </span><em style="line-height: 20.8px;">anyone </em><span style="line-height: 20.8px;">who draws conclusions and makes decisions based on data</span><span style="line-height: 20.8px;">, whether you're trying to extend the boundaries of scientific knowledge or make good decisions for your business. </span></p>
<p><span style="line-height: 20.8px;">Carnegie Mellon University's Robert E. Kass and several co-authors</span><span style="line-height: 20.8px;"> </span><span style="line-height: 1.6;">devised the rules in response to the increased pressure on scientists and researchers—many, if not most, of whom are <em>not</em> statisticians—to present accurate findings based on sound statistical methods. </span></p>
<p><span style="line-height: 20.8px;">Since </span><span style="line-height: 1.6;">the paper and the discussions it has prompted focus on scientists and researchers, it seems worthwhile to consider how the rules might apply to </span><span style="line-height: 20.8px;">quality practitioners or business decision-makers as well</span><span style="line-height: 1.6;">. </span><span style="line-height: 1.6;">In this post, I'll share the 10 rules, some with a few modifications to make them more applicable to the wider population of all people who use data to inform their decisions. </span></p>
<img alt="questions" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/d2c0571a-acbd-48c7-84f4-222276c293fe/Image/36fa08b0c862c669f4e41596fbb76ddd/question_mark_signs.jpg" style="width: 200px; height: 200px; float: right; margin: 10px 15px; border-width: 1px; border-style: solid;" />1. Statistical Methods Should Enable Data to Answer <span style="color:#FF0000;"><s>Scientific</s> Specific</span> Questions
<p>As the article points out, new or infrequent users of statistics tend to emphasize finding the "right" method to use—often focusing on the structure or format of their data, rather than thinking about how the data might answer an important question. But choosing a method based on the data is putting the cart before the horse. Instead, we should start by clearly identifying the question we're trying to answer. Then we can look for a method that uses the data to answer it. If you haven't already collected your data, so much the better—you have the opportunity to identify and obtain the data you'll need.</p>
2. Signals Always Come With Noise
<p>If you're familiar with <a href="http://blog.minitab.com/blog/understanding-statistics/control-chart-tutorials-and-examples">control charts</a> used in statistical process control (SPC) or the Control phase of a Six Sigma DMAIC project, you know that they let you distinguish process variation that matters (special-cause variation) from normal process variation that doesn't need investigation or correction.</p>
<p style="margin-left: 40px;"><img alt="control chart" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/d2c0571a-acbd-48c7-84f4-222276c293fe/Image/632a05ec67ddca317eb4bc1f4daabe9a/i_chart_of_ph.gif" style="line-height: 20.8px; width: 573px; height: 172px;" /><br />
<em style="line-height: 20.8px;">Control charts are one common tool used to distinguish "noise" from "signal." </em></p>
<p>The same concept applies here: whenever we gather and analyze data, some of what we see in the results will be due to inherent variability. Measures of probability for analyses, such as confidence intervals, are important because they help us understand and account for this "noise." </p>
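The idea of separating signal from noise can be sketched numerically. Below is a minimal Individuals-chart calculation using the standard moving-range estimate of sigma; the pH readings are hypothetical, not data from the post:

```python
def i_chart_limits(data):
    """Return (LCL, center, UCL) for an Individuals control chart.

    Sigma is estimated as MRbar / d2, with d2 = 1.128 for moving
    ranges of size 2 -- the textbook approach for I-MR charts.
    """
    n = len(data)
    center = sum(data) / n
    moving_ranges = [abs(data[i] - data[i - 1]) for i in range(1, n)]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    sigma = mr_bar / 1.128
    return center - 3 * sigma, center, center + 3 * sigma

ph = [7.1, 7.0, 7.2, 6.9, 7.1, 7.0, 7.3, 7.1]  # hypothetical pH readings
lcl, center, ucl = i_chart_limits(ph)
print(f"LCL={lcl:.3f}  center={center:.3f}  UCL={ucl:.3f}")
```

Any point outside [LCL, UCL] would be flagged as a potential special-cause "signal"; everything inside is treated as common-cause "noise."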
3. Plan Ahead, Really Ahead
<p>Say you're starting a DMAIC project. Carefully considering and developing good questions right at the start of a project—the DEFINE stage—will help you make sure that you're getting the right data in the MEASURE stage. That, in turn, should result in a much smoother and less stressful ANALYZE phase—and probably more successful IMPROVE and CONTROL phases, too. The alternative? You'll have to complete the ANALYZE phase with the data you have, not the data you wish you had. </p>
4. Worry About Data Quality
<p><img alt="gauge" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/b82fc879fa26a76f2b00424550aafe9e/gage.jpg" style="width: 250px; height: 173px; float: right; margin: 10px 15px;" />"Can you trust your data?" My Six Sigma instructor asked us that question so many times, it still flashes through my mind every time I open Minitab. That's good, because he was absolutely right: if you can't trust your data, you shouldn't do anything with it. Many people take it for granted that the data they get is precise and accurate, especially when using automated measuring instruments and similar technology. But how do you <em>know </em>they're measuring precisely and accurately? How do you <em>know </em>your instruments are calibrated properly? If you didn't test it, <em>you don't know</em>. And if you don't know, you can't trust your data. Fortunately, with measurement system analysis methods like <span><a href="http://blog.minitab.com/blog/meredith-griffith/fundamentals-of-gage-rr">gage R&R</a></span> and <a href="http://blog.minitab.com/blog/understanding-statistics/got-good-judgment-prove-it-with-attribute-agreement-analysis">attribute agreement analysis</a>, we never have to trust <span style="line-height: 20.8px;">data</span><span style="line-height: 20.8px;"> </span><span style="line-height: 1.6;">quality to blind faith. </span></p>
5. Statistical Analysis Is More Than a Set of Computations
<p>Statistical techniques are often referred to as "tools," and that's a very apt metaphor. A saw, a plane, and a router all cut wood, but they aren't interchangeable—the end product defines which tool is appropriate for a job. Similarly, you might apply ANOVA, regression, or time series analysis to the same data set, but the right tool depends on what you want to understand. To extend the metaphor further, just as we have circular saws, jigsaws, and miter saws for very specific tasks, each family of statistical methods also includes specialized tools designed to handle particular situations. The point is that we select a tool to <em>assist </em>our analysis, not to <em>define </em>it. </p>
6. Keep it Simple
<p>Many processes are inherently messy. If you've got dozens of input variables and multiple outcomes, analyzing them could require many steps, transformations, and some thorny calculations. Sometimes that degree of complexity is required. But a more complicated analysis isn't always better—in fact, overcomplicating it may make your results less clear and less reliable. It also potentially makes the analysis more difficult than necessary. <span style="line-height: 20.8px;">You may not </span><em style="line-height: 20.8px;">need </em><span style="line-height: 20.8px;">a complex process model that includes 15 factors if you can improve your output by optimizing the three or four most important inputs. </span><span style="line-height: 1.6;">If you need to improve a process that includes many inputs, </span><a href="http://blog.minitab.com/blog/statistics-and-quality-improvement/create-a-doe-screening-experiment-with-the-assistant-in-minitab-17" style="line-height: 1.6;">a short screening experiment</a><span style="line-height: 1.6;"> can help you identify which factors are most critical, and which are not so important. </span></p>
7. Provide Assessments of Variability
<p>No model is perfect. No analysis accounts for all of the observed variation. Every analysis includes a degree of uncertainty. Thus, no statistical finding is 100% certain, and that degree of uncertainty needs to be considered when using statistical results to make decisions. If you're the decision-maker, be sure that you understand the risks of reaching a wrong conclusion based on the analysis at hand. If you're sharing your results with stakeholders and executives, especially if they aren't statistically inclined, make sure you've communicated that degree of risk to them by offering and explaining confidence intervals, margins of error, or other appropriate measures of uncertainty. </p>
8. Check Your Assumptions
<p>Different statistical methods are based on different assumptions about the data being analyzed. For instance, many common analyses assume that your data follow a normal distribution. You can check most of these assumptions very quickly using functions like a normality test in your statistical software, but it's easy to forget (or ignore) these steps and dive right into your analysis. However, failing to verify those assumptions can yield results that aren't reliable and shouldn't be used to inform decisions, so don't skip that step. <a href="http://www.minitab.com/products/minitab/assistant/">If you're not sure about the assumptions for a statistical analysis, Minitab's Assistant menu explains them</a>, and can even flag violations of the assumptions before you draw the wrong conclusion from an errant analysis. </p>
9. <span style="color:#FF0000;"><s>When Possible, Replicate</s> Verify Success!</span>
<p><span style="line-height: 1.6;">In science, replication of a study—ideally by another, independent scientist—is crucial. It indicates that the first researcher's findings weren't a fluke, and provides more evidence in support of the given hypothesis. Similarly, when a quality project results in great improvements, we can't take it for granted those benefits are going to be sustained—they need to be verified and confirmed over time. Control charts are probably the most common tool for making sure a project's benefits endure, but depending on the process and the nature of the improvements, hypothesis tests, capability analysis, and other methods also can come into play. </span></p>
10. <span style="color:#FF0000;"><s>Make Your Analysis Reproducible</s> Share How You Did It</span>
<p>In the original 10 Simple Rules article, the authors suggest scientists share their data and explain how they analyzed it so that others can make sure they get the same results. This idea doesn't translate so neatly to the business world, where your data may be proprietary or private for other reasons. But just as science benefits from transparency, the quality profession benefits when we share as much information as we can about our successes. <span style="line-height: 20.8px;">Of course you can't share your company's secret-sauce formulas with competitors</span><span style="line-height: 20.8px;">—but i</span><span style="line-height: 1.6;">f you solved a quality challenge in your organization, chances are your experience could help someone facing a similar problem. If a peer in another organization already solved a problem like the one you're struggling with now, wouldn't you like to see if a similar approach might work for you? Organizations like <a href="http://asq.org/index.aspx" target="_blank">ASQ</a> and forums like <a href="https://www.isixsigma.com/" target="_blank">iSixSigma.com</a> help quality practitioners network and share their successes so we can all get better at what we do. And here at Minitab, we love sharing <a href="http://www.minitab.com/company/case-studies/">case studies and examples of how people have solved problems using data analysis</a>, too. </span></p>
<p>How do you think these rules apply to the world of quality and business decision-making? What are <em>your </em>guidelines when it comes to analyzing data? </p>
<p> </p>
Data Analysis, Lean Six Sigma, Quality Improvement, Six Sigma, Statistics, Statistics Help, Statistics in the News, Stats
Wed, 29 Jun 2016 12:00:00 +0000
http://blog.minitab.com/blog/understanding-statistics/those-10-simple-rules-for-using-statistics-theyre-not-just-for-research
Eston Martz

Are You Putting the Data Cart Before the Horse? Best Practices for Prepping Data for Analysis, ...
http://blog.minitab.com/blog/meredith-griffith/are-you-putting-the-data-cart-before-the-horse-best-practices-for-prepping-data-for-analysis%2C-part-1
<p>Most of us have heard a backwards way of completing a task, or doing something in the conventionally wrong order, described as “putting the cart before the horse.” That’s because a horse pulling a cart is much more efficient than a horse pushing a cart.</p>
<p><img alt="cart before horse" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/ec1fbea4785510ea0e0a9997c1669c68/cart_horse.png" style="margin: 10px 15px; float: right; width: 350px; height: 206px;" />This saying may be especially true in the world of statistics. Focusing on a statistical tool or analysis before checking out the condition of your data is one way you may be putting the cart before the horse. You may then find yourself trying to force your data to fit an analysis, particularly when the data has not been set up properly. It’s far more efficient to first make sure your <a href="http://blog.minitab.com/blog/understanding-statistics/the-single-most-important-question-in-every-statistical-analysis">data are reliable</a> and then allow your questions of interest to guide you to the right analysis.</p>
<p>Spending a little quality time with your data up front can save you from wasting a lot of time on an analysis that either can’t work—or can’t be trusted.</p>
<p>As a quality practitioner, you’re likely to be involved in many activities—establishing quality requirements for external suppliers, monitoring product quality, reviewing product specifications and ensuring they are met, improving process efficiency, and much more.</p>
<p>All of these tasks will involve data collection and statistical analysis with software such as Minitab. For example, suppose you need to perform a <a href="http://blog.minitab.com/blog/meredith-griffith/fundamentals-of-gage-rr">Gage R&R</a> study to verify your measurement systems are valid, or you need to understand how machine failures impact downtime.</p>
<p>Rather than jumping right into the analysis, you will be at an advantage if you take time to look at your data. Ask yourself questions such as:</p>
<ul>
<li>What problem am I trying to solve?</li>
<li>Is my data set up in a way that will help answer my question?</li>
<li>Did I make any mistakes while recording my data?</li>
</ul>
<p>Utilizing process knowledge can also help you answer questions about your data and identify data entry errors. A focus on preparing and exploring your data prior to an analysis will not only save you time in the long run, but will help you obtain reliable results.</p>
<p>So then, where to begin with best practices for prepping data for an analysis? Let’s look no further than your data.</p>
Clean your data before you analyze it
<p>Let’s assume you already know what problem you’re trying to solve with your data. For instance, you are the area supervisor of a manufacturing facility, and you’ve been experiencing lower productivity than usual on the machines in your area and want to understand why. You have collected data on these machines, recording the amount of time a machine was out of operation, the reason for the machine being down, the shift number when the machine went down, and the speed of the machine when it went down.</p>
<p>The first step toward answering your question is to ensure your data are clean. Cleaning your data before you begin an analysis can save time by preventing rework, such as reformatting data or correcting data entry errors, after you’ve already begun the analysis. Data cleaning is also essential to ensure your analyses and results—and the decisions you make—are reliable.</p>
<p>With the <a href="https://www.minitab.com/en-us/support/minitab/minitab-17.3.1-update/" style="line-height: 20.8px;">latest update to Minitab 17</a><span style="line-height: 20.8px;">, an improved data import helps you identify and correct case mismatches, fix improperly formatted columns, represent missing data accurately and in a manner that is recognized by the software, remove blank rows and extra spaces, and more. When importing your data, you see a preview of your data as a reminder to ensure it’s in the best possible state before it finds its way into Minitab. This preview helps you spot mistakes you have made in your data collection, and automatically corrects mistakes you don’t notice or that are difficult to find in large data sets.</span></p>
<p><img alt="Data Import" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/dae6c7b7-fc22-4616-9d65-f04909c20ab1/Image/b1c679056c60ac2fa82f37e1f1de406b/data_import.jpg" style="width: 775px; height: 655px;" /></p>
<p><em>Minitab offers a data import dialog that helps you quickly clean and format your data before importing into the software, ensuring your data are trustworthy and allowing you to get to your analysis sooner.</em></p>
<p><span style="line-height: 20.8px;">If you’d rather copy and paste your data from Excel, Minitab will ensure you paste your data in the right place. For instance, if your data have column names and you accidentally paste your data into the first row of the worksheet, your data will all be formatted as text—even when the data following your column names are numeric! With </span><a href="https://www.minitab.com/en-us/products/minitab/whats-new/" style="line-height: 20.8px;">Minitab 17.3</a><span style="line-height: 20.8px;">, you will receive an alert that your data is in the wrong place, and Minitab will automatically move your data where it belongs. This alert ensures your data are formatted properly, preventing you from running into the problem during an analysis and saving you time manually correcting every improperly formatted column.</span></p>
<p><img alt="Copy Paste Warning" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/dae6c7b7-fc22-4616-9d65-f04909c20ab1/Image/5df941ffaa491a0072261aef075a19d6/copy_paste_warning.jpg" style="width: 431px; height: 299px;" /></p>
<p><em>Pasting your Excel data in the first row of a Minitab worksheet will trigger this warning, which safeguards against improperly formatted columns.</em></p>
<p><span style="line-height: 1.6;">This is only the beginning! Minitab makes it quick and painless to begin exploring and visualizing your data, offering more insights and ease once you get to the analysis. If you’d like to learn additional best practices for prepping your data for any analysis, stay tuned for my next post where I’ll offer tips for exploring and drawing insights from your data!</span></p>
Data Analysis, Statistics
Wed, 30 Mar 2016 14:05:04 +0000
http://blog.minitab.com/blog/meredith-griffith/are-you-putting-the-data-cart-before-the-horse-best-practices-for-prepping-data-for-analysis%2C-part-1
Meredith Griffith

Gage R&R Metrics: What Do They All Mean?
http://blog.minitab.com/blog/starting-out-with-statistical-software/gage-rr-metrics%3A-what-do-they-all-mean
<p>When you analyze a Gage R&R study in <a href="http://www.minitab.com/products/minitab/">statistical software</a>, your results can be overwhelming. There are a lot of statistics listed in Minitab's Session Window—what do they all mean, and are they telling you the same thing?</p>
<p>If you don't know where to start, it can be hard to figure out what the analysis is telling you, especially if your measurement system is giving you some numbers you'd think are good, and others that might not be. I'm going to focus on three different statistics that are often confused when <span><a href="http://blog.minitab.com/blog/meredith-griffith/fundamentals-of-gage-rr">reading Gage R&R output</a></span>. </p>
<p>The first thing to look at is the %Study Variation and the %Contribution.</p>
<p style="margin-left: 40px;"><img alt="gage r&R output" src="https://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/f7e1af57-c25e-4ec3-a999-2166d525717e/Image/be2a9d9d311b9fad9b00eacdd73abff5/gage2.png" style="width: 618px; height: 404px;" /></p>
<p>You could look at either of them, as they are both telling you the same thing, just in a different way. By definition, the %Contribution for a source is 100 times the variance component for that source divided by the Total Variation variance component. This calculation has the benefit of making all of your sources of variability add up to 100%, which can make things easy to interpret.</p>
<p>The %Study Variation does not sum up to 100% like %Contribution, but it does have other benefits. %Contribution is based on the variance component that is specific to the values you observed in your study, not what the population of values might be. In contrast, the %Study Variation, by taking 6*standard deviation, extrapolates out over the entire population of values (based on the observed values, of course).</p>
<p>The bottom line is that both % Study Variation and %Contribution are telling you, in simple terms, about the percentage of variation in your process attributable to that particular source. </p>
<p>What about %Tolerance? What does <em>that </em>allow us to look at? While %StudyVar and %Contribution compare the variation from a particular source to the total variation, the %Tolerance compares the amount of variation from a source to a specified tolerance spread. This can lead to seemingly conflicting results, such as getting a low %StudyVar while having a high %Tolerance. In this case, your gage system may be introducing low levels of variability compared to other sources, but the amount of variation is still too much based on your spec limits. The %Tolerance column may be more important to you in this case, as it's more specific to your actual product and its spec limits. </p>
<p>So, a short summary:</p>
<p><strong>%Contribution: </strong>The percentage of variation due to the source compared to the total variation, but with the added benefit that all sources will sum to 100%</p>
<p><strong>%StudyVar:</strong> The <span style="line-height: 20.8px;">percentage of variation due to the source compared to the total variation, but with the added benefit of extrapolating beyond your specific data values. </span></p>
<p><strong>%Tolerance:</strong> The percentage of variation due to the source compared to your specified tolerance range.</p>
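To make the three definitions concrete, here is a small sketch that computes all three metrics from made-up variance components and a hypothetical tolerance spread. These are the textbook formulas, not Minitab's internals, and the numbers are illustrative only:

```python
import math

def gage_metrics(var_components, tolerance):
    """var_components: dict mapping source name -> variance component.
    tolerance: upper spec limit minus lower spec limit.
    Returns the three Gage R&R percentage metrics for each source."""
    total_var = sum(var_components.values())
    total_sd = math.sqrt(total_var)
    results = {}
    for source, var in var_components.items():
        sd = math.sqrt(var)
        results[source] = {
            "%Contribution": 100 * var / total_var,   # sums to 100% across sources
            "%StudyVar": 100 * sd / total_sd,         # does NOT sum to 100%
            "%Tolerance": 100 * (6 * sd) / tolerance, # vs. the spec spread
        }
    return results

# Hypothetical components: measurement system vs. part-to-part variation
metrics = gage_metrics({"Total Gage R&R": 0.04, "Part-to-Part": 0.96},
                       tolerance=4.0)
for source, m in metrics.items():
    print(source, {k: round(v, 1) for k, v in m.items()})
```

Note how the same source can look small by %Contribution (4%) yet large by %Tolerance (30%): the first compares it to total variation, the second to the spec window.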
<p>The %StudyVar is perhaps more reliant on having a good-quality study, and can be used when your goal is improving the measurement system. On the other hand, %Tolerance can be used when the focus is on whether the measurement system can do its job and classify parts as in or out of spec.</p>
<p>Each of these statistics provides valuable information, and how you weigh them largely depends on what you're looking to get out of your study.</p>
Lean Six Sigma, Project Tools, Quality Improvement
Mon, 21 Mar 2016 12:00:00 +0000
http://blog.minitab.com/blog/starting-out-with-statistical-software/gage-rr-metrics%3A-what-do-they-all-mean
Eric Heckman

Improving Recycling Processes at Rose-Hulman, Part III
http://blog.minitab.com/blog/real-world-quality-improvement/improving-recycling-processes-at-rose-hulman-part-iii
<p><img alt="" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/ccb8f6d6-3464-4afb-a432-56c623a7b437/Image/fa7a4559e547be217d5fa38f61c978c1/landfill.jpg" style="float: right; width: 350px; height: 253px; margin: 10px 15px;" />In previous posts, I discussed the results of a recycling project done by Six Sigma students at Rose-Hulman Institute of Technology last spring. (If you’re playing catch up, you can read <a href="http://blog.minitab.com/blog/real-world-quality-improvement/a-little-trash-talk3a-improving-recycling-processes-at-rose-hulman" target="_blank">Part I</a> and <a href="http://blog.minitab.com/blog/real-world-quality-improvement/a-little-trash-talk%3A-improving-recycling-processes-at-rose-hulman%2C-part-ii" target="_blank">Part II</a>.)</p>
<p>The students did an awesome job reducing the amount of recycling that was thrown into the normal trash cans across all of the institution’s academic buildings. At the end of the spring quarter (2014), 24% of the trash (by weight) consisted of recyclable items. At the beginning of that quarter, the figure was 36%, so you can see that they were very successful in reducing this percentage!</p>
<p>The fall quarter (2015) brought a new set of Six Sigma students to Rose-Hulman who were just as dedicated to reducing the amount of recycling thrown into normal trash cans, and I want to cover their success in this post, as well as some of the neat statistical methods they used when completing their project.</p>
Fall 2015 goals
<p>This time around, the students wanted to at least maintain or improve on the percentage spring quarter (2014) students were able to achieve. They set out with a specific goal to reduce the amount of recycling in the trash to 20% by weight.</p>
<p>In order to further reduce the recyclables in the academic buildings in fall 2015, the standard “Define, Measure, Analyze, Improve, Control” (DMAIC) methodology of Six Sigma was once again implemented. The main project goal focused on standardizing the recycling process within the buildings, and their plan to reduce the amount of recyclables focused on optimizing the operating procedure for collecting recyclables in all academic building areas (excluding classrooms) where trash and recycling are collected.</p>
<p>Many of the same DMAIC tools that were used by spring 2014 students were also used here, including—<a href="http://support.minitab.com/quality-companion/3/help-and-how-to/run-projects/brainstorming/ct-tree/" target="_blank">Critical to Quality Diagrams</a>, <a href="http://support.minitab.com/quality-companion/3/help-and-how-to/run-projects/maps/process-map/" target="_blank">Process Maps</a>, <a href="http://blog.minitab.com/blog/real-world-quality-improvement/spicy-statistics-and-attribute-agreement-analysis" target="_blank">Attribute Agreement Analysis</a>, <a href="http://blog.minitab.com/blog/marilyn-wheatleys-blog/evaluating-a-gage-study-with-one-part-v2" target="_blank">Gage R&R</a>, Statistical Plots, <a href="http://blog.minitab.com/blog/adventures-in-software-development/risk-based-testing-at-minitab-using-quality-companions-fmea" target="_blank">FMEA</a>, <a href="http://blog.minitab.com/blog/adventures-in-statistics/regression-analysis-tutorial-and-examples" target="_blank">Regression</a>—among many others.</p>
Making and measuring improvements
<p>The spring 2014 initiative added recycling bins to every classroom, which created a measurable improvement. The fall 2015 effort focused on improvement through <em>standardization of operation</em>. For example, many areas in the academic buildings suffer from random placement and arrangement of trash cans and recycling bins. The students thought standardization of bin areas (one trash, one plastic/aluminum recycling, and one paper recycling) would lessen the confusion of recycling, and clear signage and stickers on identically shaped trash cans and recycling bins would be better visual cues of where to place waste of both kinds.</p>
<p>For fall 2015, there were seven teams, and they were assigned different academic building floors (not including classrooms) and common areas. Unlike the spring 2014 data collection, the teams did not combine the trash from their assigned areas. They treated each recycling station as a unique data point.</p>
<p>After implementing the improvements to standardize the bins, the teams collected data for four days across twenty-nine total stations. Thus, there were a total of 116 fall 2015 improvement percentages. The fall 2015 students used the post-improvement percentage of recyclables in the trash from spring 2014 (24%) as their baseline for determining improvement in fall 2015.</p>
<p>The descriptive statistics for the percentage of recyclables (by weight) in the trash were as follows:</p>
<p><img src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/ccb8f6d6-3464-4afb-a432-56c623a7b437/Image/5c77690aaaff21d0b33eb5083f82074e/descriptive_stats.jpg" style="border-width: 0px; border-style: solid; width: 550px; height: 67px;" /></p>
<p>Below, the students put together a histogram and a boxplot of the data using <a href="http://www.minitab.com/products/minitab/features/" target="_blank">Minitab Statistical Software</a>. Over half of the stations (61 out of 116) had less than 5% of recyclables in the trash. Forty-six of the 116 recycling stations had no recyclables. The value of the third quartile (16.6%) meant that 75% of the stations had less than 16.6% recyclables. The descriptive statistics above showed that the sample mean was much larger than the sample median. The graphs confirmed that this must be the case because of the strong positively skewed shape of the data.</p>
<p><img src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/ccb8f6d6-3464-4afb-a432-56c623a7b437/Image/4e730181a9288e531ff9caf69a347dd0/histogram.jpg" style="border-width: 0px; border-style: solid; width: 624px; height: 206px;" /></p>
<p>Even though the 116 data points didn’t follow a normal distribution and there was a large mound of 0’s as part of the distribution from collection spots that had no recyclables, the students trusted that the <a href="http://blog.minitab.com/blog/understanding-statistics/how-the-central-limit-theorem-works" target="_blank">Central Limit Theorem</a> with a sample size of 116 would generate a sampling distribution of the means that was normally distributed. Because of the large sample size and unknown standard deviation, they used a <em>t</em> distribution to create a 95% confidence interval for the true mean percentage of recyclables in the trash for fall 2015.</p>
<p>Also using Minitab, they constructed the 95% confidence interval:</p>
<p><img src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/ccb8f6d6-3464-4afb-a432-56c623a7b437/Image/2ccf17f68f0055c32282c2020f2c9108/one_sample_t.jpg" style="border-width: 0px; border-style: solid; width: 423px; height: 48px;" /></p>
<p>The 95% confidence interval meant that the students were 95% certain that the interval [9.94, 18.22] contains the true mean percentage of recyclables in the trash for fall 2015. At an alpha level equal to 0.025, they were able to reject the null hypothesis, where H0: μ = 24% versus Ha: μ < 24%, because 24% was not contained in the two-sided 95% confidence interval. (Remember that 24% was the mean percentage of recyclables in trash after the spring 2014 improvement phase.) The null hypothesis for H0: μ = 20% versus Ha: μ < 20% was likewise rejected, since 20% also falls above the interval's upper bound. This meant that they had met their goal to reduce the percentage of recyclables in the trash to below 20% for this project!</p>
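The interval itself comes from the standard one-sample t formula, xbar ± t*·s/√n. Here is a stdlib-only sketch on hypothetical percentages (the students' 116 raw measurements aren't given in the post, and the t critical value is hardcoded rather than looked up):

```python
import math
import statistics

def t_confidence_interval(data, t_crit):
    """Two-sided CI for the mean: xbar +/- t_crit * s / sqrt(n).

    t_crit must be the two-sided critical value for df = n - 1
    (about 1.981 for df = 115, as in the post's study); it is passed
    in explicitly to keep this sketch standard-library-only.
    """
    n = len(data)
    xbar = statistics.mean(data)
    s = statistics.stdev(data)  # sample standard deviation
    half_width = t_crit * s / math.sqrt(n)
    return xbar - half_width, xbar + half_width

# Hypothetical skewed percentages with many zeros, like the real data
pcts = [0, 0, 0, 0, 5, 8, 12, 0, 30, 45, 2, 0, 16, 60, 0, 10]
lo, hi = t_confidence_interval(pcts, t_crit=2.131)  # t for df = 15
print(f"95% CI for mean % recyclables: [{lo:.2f}, {hi:.2f}]")
```

The decision rule then mirrors the students' logic: a hypothesized mean that falls above the interval's upper bound is rejected in favor of "less than."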
Continuing to analyze the data
<p>The students also subgrouped their data by collection day. Each day consisted of data from 29 recycling stations. The comparative boxplots and individual value plots below show the percentage of recyclables in the trash across the four collection dates. (The horizontal dotted line in the boxplot is the mean from spring 2014’s post-improvement data.)</p>
<p><img src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/ccb8f6d6-3464-4afb-a432-56c623a7b437/Image/664e8bf0f443d278376e71a70817e727/ivp.jpg" style="border-width: 0px; border-style: solid; width: 624px; height: 207px;" /></p>
<p>Though all four collection days have sample means less than 24%, the boxplots show that the first three collection days are clearly below 24%, and the medians from all four days are less than 11%. The individual value plots reveal the large number of 0’s on each day, representing collection spots that had no recyclables. Both graphs display the positive skew of the data, which is why each day’s mean is much larger than its median.</p>
How capable was the process?
<p>Next, the students ran a <a href="http://blog.minitab.com/blog/real-world-quality-improvement/using-statistics-to-show-your-boss-process-improvements" target="_blank">process capability analysis</a> for the seven areas where trash was collected over four days:</p>
<p><img src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/ccb8f6d6-3464-4afb-a432-56c623a7b437/Image/8f9b85a55164f9e957809a8be1eef1c0/process_cap.jpg" style="border-width: 0px; border-style: solid; width: 465px; height: 347px;" /></p>
<p>The process capability indices were Pp = 0.48 and Ppk = 0.42. (The Pp value corresponds to a 1.44 Sigma Level, while the Ppk value corresponds to a 1.26 Sigma Level.) Recall that the previous Ppk value after improvements in <a href="http://blog.minitab.com/blog/real-world-quality-improvement/a-little-trash-talk%3A-improving-recycling-processes-at-rose-hulman%2C-part-ii" target="_blank">spring 2014</a> was 0.22. The fall 2015 index is almost double that value!</p>
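The sigma levels quoted above follow directly from the definition of the indices (Pp = (USL − LSL)/6σ, so the corresponding Z-score is three times the index); a minimal sketch:

```python
def sigma_level(capability_index):
    """Convert a capability index (Pp or Ppk) to a sigma level:
    Z = 3 * index, which follows from Pp = (USL - LSL) / (6 * sigma)."""
    return 3 * capability_index

print(round(sigma_level(0.48), 2))  # Pp  -> 1.44
print(round(sigma_level(0.42), 2))  # Ppk -> 1.26
```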
<p>The students knew that they still needed to account for the total weight of the trash and recyclables by calculating the percentage of recyclables per station. Some collection stations with the highest percentage of recyclables had the lowest total weight, while some stations with the lowest percentage of recyclables had the highest total weight. Instead of strictly using a capability index to indicate their improvement, they incorporated a <a href="http://blog.minitab.com/blog/adventures-in-statistics/regression-analysis-tutorial-and-examples" target="_blank">regression</a> model for the trash weight versus the total weight of trash and recyclables to show that the percentage of recyclables in the trash was less than 20%.</p>
<p>The 95% confidence interval for the true mean slope of the regression line was [0.856, 0.954]. The students were 95% confident that the trash weight was somewhere between 0.856 and 0.954 of the total weight of the collection. Hence, the recycling weight was between 0.046 and 0.144 of the total weight. This entire range is clearly below 20% with 95% confidence! From this, they were able to state through yet another type of analysis that there was a statistically significant improvement over the spring 2014 recycling project, and that they met their goal of reducing the percentage of recyclables in the trash to below 20%. Compared to the spring 2014 project where 24% of the trash was recyclables, the fall 2015 students saved <em>at least</em> 4% more recyclables from ending up in the local landfill!</p>
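The step from the slope interval to the recyclable fraction is just complement arithmetic, which a couple of lines make explicit:

```python
# Slope CI for (trash weight) / (total weight) reported in the post.
slope_lo, slope_hi = 0.856, 0.954

# The recyclable fraction is everything that isn't trash, so the
# interval flips: 1 - upper bound up to 1 - lower bound.
recycle_lo = 1 - slope_hi
recycle_hi = 1 - slope_lo
print(round(recycle_lo, 3), round(recycle_hi, 3))  # 0.046 0.144
```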
<p>For even more on this topic, be sure to check out Rose-Hulman student Peter Olejnik’s blog posts on how he and the recycling project team at the school used regression to evaluate project results:</p>
<p><a href="http://blog.minitab.com/blog/statistics-in-the-field/using-regression-to-evaluate-project-results%2C-part-1" target="_blank">Using Regression to Evaluate Project Results, part 1</a></p>
<p><a href="http://blog.minitab.com/blog/statistics-in-the-field/using-regression-to-evaluate-project-results%2C-part-2" target="_blank">Using Regression to Evaluate Project Results, part 2</a></p>
<p><em>Many thanks to Dr. Diane Evans for her contributions to this post!</em></p>
Data Analysis, Fun Statistics, Hypothesis Testing, Lean Six Sigma, Learning, Six Sigma, Statistics, Stats | Fri, 08 May 2015 12:00:00 +0000 | http://blog.minitab.com/blog/real-world-quality-improvement/improving-recycling-processes-at-rose-hulman-part-iii | Carly Barry

Fundamentals of Gage R&R
http://blog.minitab.com/blog/meredith-griffith/fundamentals-of-gage-rr
<p>Before cutting an expensive piece of granite for a countertop, a good carpenter will first confirm he has measured correctly. Acting on faulty measurements could be costly.</p>
<p><img alt="gauge" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/b82fc879fa26a76f2b00424550aafe9e/gage.jpg" style="width: 300px; height: 208px; float: right; margin: 10px 15px;" />While no measurement system is perfect, we rely on such systems to quantify data that help us control quality and monitor changes in critical processes. So, how do you know whether the changes you see are valid and not just the product of a faulty measurement system? After all, if you can’t trust your measurement system, then you can’t trust the data it produces.</p>
<p>Performing a Gage R&R study can help you to identify problems with your measurement system, enabling you to trust your data and to make data-driven decisions for process improvement. </p>
What Can Gage R&R Do for Me?
<p>Gage R&R studies can tell you if inconsistencies in your measurements are too large to ignore—this could be due to a faulty tool or inconsistent operation of a tool.</p>
<p><strong>Reveal an inconsistent tool</strong></p>
<p>Let’s look at an example to better understand how Gage R&R studies work.</p>
<p>Suppose a company wants to use a control chart to monitor the fill weights of cereal boxes. Before doing so, they conduct a Gage R&R study to determine whether the system that measures the weight of each cereal box produces precise measurements.</p>
<p>One key to ensuring that measurements are valid is repeatability: the variation in measurements taken by the same operator on the same part. If we weigh the same cereal box under the same conditions a number of times, will we observe the same weight every time? Weighing the same box over and over again shows us how much variation exists in our measurement system.</p>
<p><img alt="plot" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/497eee17946361547404397e1c606de6/gage_fundamentals_1.jpg" style="width: 400px; height: 267px;" /></p>
<p>For this experiment, we can look at repeatability based on two different operators’ measurements. The Gage R&R results show that even when the same person weighs the same box on the same scale, the measurements can vary by several grams. Most likely, the scale is in serious need of recalibration. The faulty scale would have rendered a control chart for these measurements virtually useless. Although the average measurements for each operator are not far apart, the spread of the measurements is huge!</p>
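Repeatability can be quantified as the spread of repeated measurements of the same item under identical conditions. Here is a minimal sketch with hypothetical weights in grams (the scale data behind the plot aren't published):

```python
import statistics

# Hypothetical: one operator weighs the same cereal box five times.
repeats = [498.2, 501.9, 495.4, 503.1, 499.8]

# The standard deviation of the repeats estimates repeatability;
# several grams of scatter would make a control chart useless.
repeatability_sd = statistics.stdev(repeats)
print(round(repeatability_sd, 2))  # 3.05
```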
<p><strong>Highlight operator differences</strong></p>
<p>But the variation that exists in the measurement system is just one aspect of a Gage R&R study. We must also look at reproducibility, or the variation due to different operators using the measurement system. A Gage R&R study can tell us whether a measurement differs from one operator to the next and by how much.</p>
<p>Suppose the same company who wishes to monitor fill weights of cereal boxes hires new employees to help record measurements. The company uses a Gage R&R to evaluate both the new operators and experienced operators.</p>
<p><img alt="gage R&R " src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/25b59ba04a57260bab6b3195f5dfaded/gage_fundamentals_2.jpg" style="width: 400px; height: 267px;" /></p>
<p>The study reveals that when employees weigh the same cereal box, the measurements of new hires are too high or too low more often than the measurements of experienced employees. This finding might indicate that the company should conduct more training for the new hires.</p>
How to Analyze a Gage R&R Study in Minitab
<p>Awareness of how well you can measure something can have substantial financial impacts. Minitab <a href="http://www.minitab.com/products/minitab">Statistical Software</a> makes it easy to analyze how precise your measurements are.</p>
<p>In the case of the company evaluating cereal box fill weights, over- and under-filling have different implications. Overfilling cereal boxes costs the company money it could save with a calibrated measurement system and properly trained staff, while underfilling angers customers because they don’t get the amount of product they paid for. </p>
Getting started
<p>Preparing to analyze your measurement system is easy because Minitab’s Create Gage R&R Study Worksheet can generate a data collection sheet for you. The dialog box lets you quickly specify who takes the measurements (the operators), which item they measure (the parts), and in what order the data are to be collected.</p>
<p><img src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/bd575431b6841c8c30a15eb7862ef85b/gage_fundamentals_3.jpg" style="width: 400px; height: 375px;" /></p>
<ol>
<li>Choose <strong>Stat > Quality Tools > Gage Study > Create Gage R&R Study Worksheet</strong>.</li>
<li>Specify the number of parts, number of operators, and the number of times the same operator will measure the same part.</li>
<li>Give descriptive names to the parts and operators so they’re easy to identify in the output.</li>
<li>Click <strong>OK</strong>.</li>
</ol>
The main event
<p>After you create your data collection sheet and record the measurements you observe, you can use Gage R&R Study (Crossed) to analyze the measurements.</p>
<p><img src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/3abd7c04618ffe20125ce7c70121c9d9/gage_fundamentals_4.jpg" style="width: 400px; height: 253px;" /></p>
<ol>
<li>Choose <strong>Stat > Quality Tools > Gage Study > Gage R&R Study (Crossed)</strong>.</li>
<li>In Part Numbers, enter <em>Parts</em>.</li>
<li>In Operators, enter <em>Operators</em>.</li>
<li>In Measurement Data, enter <em>'Fill Weights'</em>.</li>
<li>Click <strong>OK</strong>.</li>
</ol>
<p><img alt="Gage R&R Output" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/586798e925a3e524022899e185750b04/gage_fundamentals_5.jpg" style="width: 500px; height: 384px;" /></p>
<p>The study reveals that Jordan’s measurements are lower than Pat’s or Taylor’s. In fact, the %Study Variation for our total Gage R&R is high—90.39%—indicating that our measurement system is unacceptable. Identifying and eliminating the source of the difference will improve the measurement system.</p>
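The "unacceptable" verdict reflects a widely used rule of thumb for %Study Variation (under 10% acceptable, 10% to 30% marginal, over 30% unacceptable); the thresholds your organization applies may differ. A sketch:

```python
def gage_acceptability(pct_study_var):
    """Common rule of thumb for total Gage R&R %Study Variation."""
    if pct_study_var < 10:
        return "acceptable"
    if pct_study_var <= 30:
        return "marginal"
    return "unacceptable"

print(gage_acceptability(90.39))  # unacceptable
```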
<p>Some of my colleagues offer <a href="http://blog.minitab.com/blog/quality-data-analysis-and-statistics/how-to-interpret-gage-output-part-2">more information on Gage R&R tools and how to interpret the output</a>.</p>
Putting Gage R&R Studies to Use
<p>Taking measurements is like any other process—it’s prone to variability. Assessing and identifying where to focus efforts for reducing this variation with Minitab’s Gage R&R tools can help you ensure your measurement system is precise. </p>
Data Analysis, Quality Improvement, Statistics | Fri, 01 May 2015 12:00:00 +0000 | http://blog.minitab.com/blog/meredith-griffith/fundamentals-of-gage-rr | Meredith Griffith

Creating a New Metric with Gage R&R, part 2
http://blog.minitab.com/blog/understanding-statistics/creating-a-new-metric-with-gage-rr-part-2
<p style="line-height: 20.7999992370605px;">In my previous post, I showed you <a href="http://blog.minitab.com/blog/understanding-statistics/creating-a-new-metric-with-gage-rr-part-1">how to set up data collection for a gage R&R analysis</a> using the Assistant in Minitab 17. In this case, the goal of the gage R&R study is to test whether a new tool provides an effective metric for assessing resident supervision in a medical facility. </p>
<p style="line-height: 20.7999992370605px;"><span style="line-height: 20.7999992370605px;">As noted in that post, I'm drawing on one of my favorite bloggers about health care quality, David Kashmer of the Business Model Innovation in Surgery blog, and specifically his</span><span style="line-height: 20.7999992370605px;"> column "</span><a href="http://www.surgicalbusinessmodelinnovation.com/statistical-process-control/how-to-measure-a-process-when-theres-no-metric/" style="line-height: 20.7999992370605px;" target="_blank">How to Measure a Process When There's No Metric</a><span style="line-height: 20.7999992370605px;">." </span></p>
An Effective Measure of Resident Supervision?
<p style="line-height: 20.7999992370605px;">In one scenario Kashmer presents, state regulators and hospital staff disagree about a health system's ability to oversee residents. In the absence of an established way to measure resident <span style="line-height: 20.7999992370605px;">supervision</span><span style="line-height: 20.7999992370605px;">, the staff devises a tool that uses a 0 to 10 scale to rate resident supervision. </span></p>
<p style="line-height: 20.7999992370605px;">Now we're going to analyze the Gage R&R data to test how effectively and reliably the new tool <span style="line-height: 20.7999992370605px;">measures what we want it to measure</span><span style="line-height: 20.7999992370605px;">. The analysis will evaluate whether different people who use the tool </span><span style="line-height: 20.7999992370605px;">(the gauge)</span><span style="line-height: 20.7999992370605px;"> </span><span style="line-height: 20.7999992370605px;">reach the same conclusion (reproducibility) and do it consistently (repeatability). </span></p>
<p style="line-height: 20.7999992370605px;">To get data, three evaluators used the tool to assess each of 20 charts three times each, and recorded their score for each chart in the worksheet we produced earlier. (You can download the completed worksheet <a href="//cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/File/02131d16de689b5864576174e86da023/gage_resident_supervision.MTW">here</a> if you're following along in Minitab.) </p>
<p style="line-height: 20.7999992370605px;"><img alt="" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/96cfa33c2344135d665b93e2c637b017/data_sheet.gif" style="width: 332px; height: 381px;" /></p>
<p>Now we're ready to analyze the data. </p>
Evaluating the Ability to Measure Accurately
<p style="line-height: 20.7999992370605px;">Once again, we can turn to the Assistant in Minitab Statistical Software to help us. If you're not already using it, you can <a href="http://it.minitab.com/products/minitab/free-trial.aspx">download a 30-day trial version</a> for free so you can follow along. Start by selecting <strong>Assistant > Measurement Systems Analysis...</strong> from the menu: </p>
<p style="line-height: 20.7999992370605px;"><img alt="measurement systems analysis " src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/10b2080fd1ed8b3e1337e7838fd85313/assistant_msa.gif" style="width: 345px; height: 258px;" /></p>
<p style="line-height: 20.7999992370605px;">In my earlier post, we used the Assistant to set up this study and make it easy to collect the data we need. Now that we've gathered the data, we can follow the Assistant's decision tree to the "Analyze Data" option. </p>
<p style="line-height: 20.7999992370605px;"><img alt="measurement systems analysis decision tree for analysis" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/f5fd33e7322a966327a9a5dc659294b6/gage_dialog_analyze.gif" style="width: 600px; height: 450px;" /></p>
<p style="line-height: 20.7999992370605px;">Selecting the right items for the Assistant's Gage R&R dialog box couldn't be easier—when you use the datasheet the Assistant generated, just enter "Operators" for Operators, "Parts" for Parts, and "Score" for Measurements. </p>
<p style="line-height: 20.7999992370605px;"><img alt="gage R&R analysis dialog box" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/b65b3a39837f9875c9748c46d12498f8/grnr_analysis_dialog.png" style="line-height: 20.7999992370605px; width: 600px; height: 397px;" /></p>
<p style="line-height: 20.7999992370605px;"><span style="line-height: 20.7999992370605px;">Before we press OK, though, we need to tell the Assistant how to estimate process variation. When Gage R&R is performed in a manufacturing context, historic data about the amount of variation in the output of the process being studied is usually available. Since this is the first time we're analyzing the performance of the new tool for measuring the quality of resident supervision, we don't have an historical standard deviation</span><span style="line-height: 20.7999992370605px;">, so </span><span style="line-height: 20.7999992370605px;">we will tell the Assistant to estimate the variation from the data we're analyzing. </span></p>
<p style="line-height: 20.7999992370605px;"><span style="line-height: 20.7999992370605px;"><img alt="gage r&r variation calculation options" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/0e55e7fca7d27eeaa22484359e996a8a/grnr_analysis_variation.png" style="width: 546px; height: 110px;" /></span></p>
<p style="line-height: 20.7999992370605px;"><span style="line-height: 20.7999992370605px;">The Assistant also asks for an upper or lower specification limit, or tolerance width</span><span style="line-height: 20.7999992370605px;">, which is the distance from the upper spec limit to the lower spec limit</span><span style="line-height: 20.7999992370605px;">. Minitab uses this to calculate %Tolerance, an optional statistic used to determine whether the measurement system can adequately sort good from bad parts—or in this case, good from bad supervision. For the sake of this example, let's say in designing the instrument you have selected a level of 5.0 as the minimum acceptable score. </span></p>
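For a two-sided spec, %Tolerance compares the measurement system's study variation to the spec width; a sketch of that standard formula is below. (With only one spec limit, as in this example, Minitab's calculation differs, so treat this strictly as the two-sided case.)

```python
def pct_tolerance(sd_gage_rr, usl, lsl, study_var_mult=6.0):
    """Two-sided %Tolerance: study variation (6 * SD by default)
    as a percentage of the tolerance width (USL - LSL)."""
    return 100.0 * study_var_mult * sd_gage_rr / (usl - lsl)

# Hypothetical numbers: a gage SD of 0.25 against a spec width of 10.
print(round(pct_tolerance(0.25, 10.0, 0.0), 1))  # 15.0
```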
<p style="line-height: 20.7999992370605px;"><span style="line-height: 20.7999992370605px;"><img alt="gage r and r process tolerance" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/81c00cbc50239082a77d7e3e4c522afe/grnr_analysis_dialog_tolerance.png" style="width: 536px; height: 178px;" /> </span></p>
<p style="line-height: 20.7999992370605px;">When we press OK, the Assistant analyzes the data and presents a Summary Report, a Variation Report, and a Report Card for its analysis. The Summary Report gives us the bottom line about how well the new measurement system works. </p>
<p style="line-height: 20.7999992370605px;">The first item we see is a bar graph that answers the question, "Can you adequately assess process performance?" The Assistant's analysis of the data tells us that the system we're using to measure resident supervision can indeed assess the <span style="line-height: 20.7999992370605px;">resident supervision </span><span style="line-height: 20.7999992370605px;">process. </span></p>
<p style="line-height: 20.7999992370605px;"><img alt="gage R&R summary" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/36cd05b806ba3399849a5a7f562b6892/gage_r_r_summary_report.png" style="width: 600px; height: 471px;" /></p>
<p style="line-height: 20.7999992370605px;">The second bar graph answers the question "Can you sort good parts from bad?" In this case, we're evaluating patient supervision rather than parts, but the Analysis shows that the system is able to distinguish charts that indicate acceptable resident supervision from those that do not. </p>
<p style="line-height: 20.7999992370605px;">For both of these charts, less than 10% of the observed variation in the data could be attributed to the measurement system itself—a very good result.</p>
Measuring the "Unmeasurable"
<p style="line-height: 20.7999992370605px;">I can't count the number of times I've heard people say that they can't gather or analyze data about a situation because "it can't be measured." In most cases, that's just not true. Where a factor of interest—"service quality," say—is tough to measure <em>directly</em>, we can usually find measurable indicator variables that can at least give us some insight into our performance. </p>
<p style="line-height: 20.7999992370605px;">I hope this example, though simplified from what you're likely to encounter in the real world, shows how it's possible to demonstrate the effectiveness of a measurement system when one doesn't already exist. Even for outcomes that seem hard to quantify, we can create measurement systems to give us valuable data, which we can then use to make improvements. </p>
<p style="line-height: 20.7999992370605px;">What kinds of outcomes would you like to be able to measure in your profession? Could you use Gage R&R or another form of measurement system analysis to get started? </p>
Health Care Quality Improvement, Quality Improvement, Statistics, Statistics Help | Thu, 26 Feb 2015 13:00:00 +0000 | http://blog.minitab.com/blog/understanding-statistics/creating-a-new-metric-with-gage-rr-part-2 | Eston Martz

Creating a New Metric with Gage R&R, part 1
http://blog.minitab.com/blog/understanding-statistics/creating-a-new-metric-with-gage-rr-part-1
<p>One of my favorite bloggers about the application of statistics in health care is David Kashmer, an MD and MBA who runs and writes for the <a href="http://www.surgicalbusinessmodelinnovation.com/" target="_blank">Business Model Innovation in Surgery</a> blog. If you have an interest in how quality improvement methods like Lean and Six Sigma can be applied to healthcare, check it out. </p>
<p>A while back, Dr. Kashmer penned a column called "<a href="http://www.surgicalbusinessmodelinnovation.com/statistical-process-control/how-to-measure-a-process-when-theres-no-metric/" target="_blank">How to Measure a Process When There's No Metric</a>," in which he discusses how you can use the measurement systems analysis method called Gage R&R (or gauge R&R) to create your own measurement tools and validate them as useful metrics. (I select the term “useful” here deliberately: a metric you’ve devised could be very <em>useful </em>in helping you assess your situation, but might not meet requirements set by agencies, auditors, or other concerned parties.) </p>
<p>I thought I would use this post to show you how you can use the Assistant in Minitab Statistical Software to <span style="line-height: 20.7999992370605px;">do this</span><span style="line-height: 1.6;">.</span></p>
How Well Are You Supervising Residents?
<p>Kashmer posits a scenario in which state regulators assert that your health system's ability to oversee residents is poor, but your team believes residents are well supervised. You want to assess the situation with data, but you lack an established way to measure the quality of resident supervision. What to do?</p>
<p>Kashmer says, "You decide to design a tool for your organization. You pull a sample of charts and look for commonalities that seem to display excellent supervision versus poor supervision."</p>
<p>So you work with your team to come up with a tool that uses a 0 to 10 scale to rate resident supervision<span style="line-height: 20.7999992370605px;">, based on various factors appearing on a chart</span>. But how do you know if the tool will actually help you assess the quality of resident supervision? </p>
<p>This is where gage R&R comes in. The gage refers to the tool or instrument you're testing, and the R&R stands for reproducibility and repeatability. The analysis will tell you whether different people who use your tool to assess resident supervision (the gauge) will reach the same conclusion (reproducibility) and do it consistently (repeatability). </p>
Collecting Data to Evaluate the Ability to Measure Accurately
<p>We're going to use the Assistant in Minitab Statistical Software to help us. If you're not already using it, you can <a href="http://it.minitab.com/products/minitab/free-trial.aspx">download a 30-day trial version</a> for free so you can follow along. Start by selecting <strong>Assistant > Measurement Systems Analysis...</strong> from the menu: </p>
<p><img alt="measurement systems analysis " src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/10b2080fd1ed8b3e1337e7838fd85313/assistant_msa.gif" style="width: 345px; height: 258px;" /></p>
<p>Follow the decision tree...</p>
<p><img alt="measurement systems analysis decision tree" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/5f5055a500745c69c183056582dc41a6/msa_decision_tree.gif" style="width: 600px; height: 450px;" /></p>
<p>If you're not sure about what you need to do in a gage R&R, clicking the <strong><em>more...</em></strong> link gives you requirements, assumptions, and guidelines to follow: </p>
<p><img alt="" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/62bf793d1ea0e4bbbdad53ffb70783e5/gager_rassumptions.gif" style="width: 600px; height: 454px;" /></p>
<p>After a look at the requirements, you decide you will have three evaluators use your new tool to assess each of 20 charts 3 times, and so you complete the dialog box thus: </p>
<p><img alt="MSA dialog box" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/067ec3e5997c68e845d4061a8251b862/msa_dialog.gif" style="width: 500px; height: 401px;" /></p>
<p style="line-height: 20.7999992370605px;">When you press "OK," the Assistant asks if you'd like to print worksheets you can use to easily gather your data:</p>
<p><img alt="gage R&R data collection form" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/ffb7215f1d123d17cca1eaa3c11211d8/msa_gage_r_r_data_collection_form.gif" style="line-height: 20.7999992370605px; width: 400px; height: 430px;" /></p>
<p>Minitab also creates a datasheet for the analysis. All you need to do is enter the data you collect in the "Measurements" column:</p>
<p><img alt="worksheet" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/cfd21a76a9cab42059705c67e54e5fdc/gage_r_r_worksheet.gif" style="line-height: 20.7999992370605px; width: 355px; height: 357px;" /></p>
<p>Note that the Assistant automatically randomizes the order in which each evaluator will examine the charts in each of their three judging sessions. </p>
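The randomization the Assistant performs can be mimicked in a few lines. This is a hypothetical sketch, not Minitab's actual algorithm: each evaluator gets all 20 charts in each of 3 sessions, in a shuffled order.

```python
import itertools
import random

def gage_rr_plan(operators, parts, replicates, seed=42):
    """Crossed gage R&R run list: every operator measures every
    part `replicates` times, in a randomized order per operator."""
    rng = random.Random(seed)  # fixed seed so the plan is reproducible
    plan = []
    for op in operators:
        runs = [(op, part) for part, rep in
                itertools.product(parts, range(replicates))]
        rng.shuffle(runs)
        plan.extend(runs)
    return plan

plan = gage_rr_plan(["Eval A", "Eval B", "Eval C"],
                    list(range(1, 21)), replicates=3)
print(len(plan))  # 3 evaluators x 20 charts x 3 reps = 180 rows
```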
<p>Now we're ready to gather the data to verify the effectiveness of our new metric for assessing the quality of resident supervision. Come back for Part 2, where we'll <a href="http://blog.minitab.com/blog/understanding-statistics/creating-a-new-metric-with-gage-rr-part-2">analyze the collected data</a>! </p>
Health Care Quality Improvement, Lean Six Sigma, Six Sigma | Wed, 25 Feb 2015 13:00:00 +0000 | http://blog.minitab.com/blog/understanding-statistics/creating-a-new-metric-with-gage-rr-part-1 | Eston Martz

Crossed Gage R&R: How are the Variance Components Calculated?
http://blog.minitab.com/blog/marilyn-wheatleys-blog/crossed-gage-rr%3A-how-are-the-variance-components-calculated
<p>In technical support, we often receive questions about <span><a href="http://blog.minitab.com/blog/michelle-paret/gage-this-or-gage-that-how-the-number-of-distinct-categories-relates-to-the-study-variation">Gage R&R</a></span> and how Minitab calculates the amount of variation that is attributable to the various sources in a measurement system.</p>
<p>This post will focus on how the variance components are calculated for a crossed Gage R&R using the ANOVA table, and how we can obtain the %Contribution, StdDev, Study Var and %Study Var shown in the Gage R&R output. For this example, we will accept all of Minitab’s default values for the calculations.</p>
<p>The sample data used in this post is available within Minitab by navigating to <strong>File</strong> > <strong>Open Worksheet</strong>, and then clicking the <strong>Look in Minitab Sample Data folder</strong> button at the bottom of the dialog box. (If you're not already using Minitab, <a href="http://it.minitab.com/products/minitab/free-trial.aspx">get the free 30-day trial</a>.) The name of the sample data set is <strong>Gageaiag.MTW</strong>. For this data set, 10 parts were selected that represent the expected range of the process variation. Three operators measured the 10 parts, three times per part, in a random order.</p>
<p>To see the Gage R&R ANOVA tables in Minitab, we use <strong>Stat</strong> > <strong>Quality Tools</strong> > <strong>Gage Study</strong> > <strong>Gage R&R Study (Crossed)</strong>, and then complete the dialog box as shown below:</p>
<p><img alt="" spellcheck="true" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/f6d0da32-ba1d-41d4-ace1-af34dcb51351/Image/48aecbaca4f0bc91fe82c149a6ebe99b/pic1.png" style="border-width: 1px; border-style: solid; width: 892px; height: 353px;" /></p>
<p>Minitab 17’s default alpha to remove the Part*Operator interaction is 0.05. Since the p-value for the interaction in the first ANOVA table is 0.974 (much greater than 0.05), Minitab removes the interaction and shows a second ANOVA table with no interaction.</p>
<p>To calculate the Variance Components, we turn to Minitab’s Methods and Formulas section: <strong>Help</strong> > <strong>Methods and Formulas </strong>> <strong>Measurement systems analysis</strong> > <strong>Gage R&R Study (Crossed)</strong>, and then choose <strong>VarComp for ANOVA method</strong> under <strong>Gage R&R table</strong>.</p>
<p><img alt="" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/f6d0da32-ba1d-41d4-ace1-af34dcb51351/Image/fa381e07132def3c0d6a8e34df7be583/pic2.PNG" style="width: 803px; height: 706px;" /></p>
<p>There are two parts to this section of Methods and formulas. The first provides the formulas used when the Operator*Part interaction is part of the model. In this example, the Operator*Part interaction was not significant and was removed. Therefore we use the formulas for the reduced model:</p>
<p><img alt="" spellcheck="true" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/f6d0da32-ba1d-41d4-ace1-af34dcb51351/Image/227f3bb695f73fe56009f77ae34d46d8/pic3.png" style="border-width: 1px; border-style: solid; width: 535px; height: 311px;" /></p>
<p>The variance components section of the crossed Gage R&R output is shown below so we can compare our hand calculations to Minitab’s results:</p>
<p><img alt="" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/f6d0da32-ba1d-41d4-ace1-af34dcb51351/Image/a0c0882e1f3c94ff2d6a01df0acceb1f/pic4.png" style="border-width: 1px; border-style: solid; width: 309px; height: 169px;" /></p>
<p>We will do the hand calculations using the reduced ANOVA table for each source of variation:</p>
<p><img alt="" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/f6d0da32-ba1d-41d4-ace1-af34dcb51351/Image/2831d91347a6936ad7f3d2fa2fb8bd70/pic5.png" style="border-width: 1px; border-style: solid; width: 368px; height: 118px;" /></p>
<p>Repeatability is estimated as the Mean Square (MS column) for Repeatability in the ANOVA table, so the estimate for <u>Repeatability</u> is <strong>0.03997</strong>.</p>
<p>We can see the formula for Operator above. The number of replicates is the number of times each operator measured each part. We had 10 parts in this study, and each operator measured each of the 10 parts 3 times, so the denominator for the Operator calculation is 10*3. The numerator is the MS Operator – MS repeatability, so the formula for the variance component for the <u>Operator</u> is (1.58363-0.03997)/(10*3) = 1.54366/30 = <strong>0.0514553.</strong></p>
<p>Next, Methods and Formulas shows how to calculate the Part-to-Part variation. The b represents the number of operators (in this study we had 3), and n represents the number of replicates (that is also 3 since each operator measured each part 3 times). So the denominator for the Part-to-Part variation is 3*3, and the numerator is MS Part – MS Repeatability. Therefore, the <u>Part-to-Part</u> variation is (9.81799-0.03997)/(3*3) = <strong>1.08645</strong>.</p>
<p><u>Reproducibility</u> is easy, since it is the same as the variance component for Operator that we previously calculated: <strong>0.0514553</strong>.</p>
<p>For the last two calculations, we’re just adding the variance components for the sources that we previously calculated:</p>
<p><u>Total Gage R&R</u> = Repeatability + Reproducibility = 0.03997 + 0.0514553 = <strong>0.0914253</strong>.</p>
<p><u>Total Variation</u> = Total Gage R&R + Part-to-Part = 0.0914253 + 1.08645 = <strong>1.17788</strong>.</p>
<p>Notice that the Total Variation is the sum of all the variance components. The variances are additive so the total is just the sum of the other sources.</p>
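<p>The hand calculations above can be sketched in a few lines of Python. This is an illustrative sketch, not Minitab code; the mean squares are taken from the reduced ANOVA table shown earlier:</p>

```python
# Variance components for a crossed Gage R&R with the Operator*Part
# interaction removed (reduced model). MS values come from the reduced
# ANOVA table in the post.
ms_operator = 1.58363
ms_part = 9.81799
ms_repeat = 0.03997   # MS for Repeatability (error term)

parts, operators, replicates = 10, 3, 3

repeatability = ms_repeat
operator = (ms_operator - ms_repeat) / (parts * replicates)      # a*n in Methods and Formulas
part_to_part = (ms_part - ms_repeat) / (operators * replicates)  # b*n
reproducibility = operator   # no interaction term, so Reproducibility = Operator
total_gage_rr = repeatability + reproducibility
total_variation = total_gage_rr + part_to_part

print(round(operator, 7))      # 0.0514553
print(round(part_to_part, 5))  # 1.08645
```

Because the code carries full precision while the post rounds intermediate values, the total comes out as 1.17787, matching Minitab's 1.17788 to within rounding.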
<p>The %Contribution of VarComp column is calculated from the variance components: the VarComp for each source is divided by the Total Variation and multiplied by 100:</p>
<table border="1" style="margin-left: auto; margin-right: auto; border-collapse: collapse;">
<tr><th>Source</th><th>VarComp</th><th>Calculation</th><th>%Contribution</th></tr>
<tr><td><strong>Total Gage R&R</strong></td><td>0.0914253</td><td>0.0914253/1.17788*100</td><td>7.76185</td></tr>
<tr><td><strong>&nbsp;&nbsp;Repeatability</strong></td><td>0.03997</td><td>0.03997/1.17788*100</td><td>3.39338</td></tr>
<tr><td><strong>&nbsp;&nbsp;Reproducibility</strong></td><td>0.0514553</td><td>0.0514553/1.17788*100</td><td>4.36847</td></tr>
<tr><td><strong>&nbsp;&nbsp;&nbsp;&nbsp;Operator</strong></td><td>0.0514553</td><td>0.0514553/1.17788*100</td><td>4.36847</td></tr>
<tr><td><strong>Part-To-Part</strong></td><td>1.08645</td><td>1.08645/1.17788*100</td><td>92.2377</td></tr>
<tr><td>Total Variation</td><td>1.17788</td><td>1.17788/1.17788*100</td><td>100</td></tr>
</table>
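<p>As a quick check, the %Contribution column can be reproduced from the variance components (values as rounded in the post):</p>

```python
# %Contribution = (VarComp for each source / Total Variation) * 100,
# using the rounded variance components from the table above.
varcomps = {
    "Total Gage R&R": 0.0914253,
    "Repeatability": 0.03997,
    "Reproducibility": 0.0514553,
    "Part-To-Part": 1.08645,
    "Total Variation": 1.17788,
}
total = varcomps["Total Variation"]
pct_contribution = {src: vc / total * 100 for src, vc in varcomps.items()}

for src, pct in pct_contribution.items():
    print(f"{src}: {pct:.4f}%")
```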
<p><span style="line-height: 1.6;">Now that we’ve replicated the Variance components output, we can use these values to re-create the last table in Minitab’s Gage R&R output:</span></p>
<p><img alt="" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/f6d0da32-ba1d-41d4-ace1-af34dcb51351/Image/0bfe7807f260f4798ce577f55125c1c7/pic6.png" style="border-width: 1px; border-style: solid; width: 389px; height: 128px;" /></p>
<p>The StdDev column is simple: we just take the square root of each value in the VarComp column. The Total Variation value in the StdDev column is the square root of the corresponding VarComp value (it is not the sum of the standard deviations):</p>
<table border="1" style="margin-left: auto; margin-right: auto; border-collapse: collapse;">
<tr><th>Source</th><th>VarComp</th><th>Square Root of VarComp = StdDev</th><th>6 x StdDev = Study Var</th></tr>
<tr><td><strong>Total Gage R&R</strong></td><td>0.0914253</td><td>0.302366</td><td>1.81420</td></tr>
<tr><td><strong>&nbsp;&nbsp;Repeatability</strong></td><td>0.03997</td><td>0.199925</td><td>1.19955</td></tr>
<tr><td><strong>&nbsp;&nbsp;Reproducibility</strong></td><td>0.0514553</td><td>0.226838</td><td>1.36103</td></tr>
<tr><td><strong>&nbsp;&nbsp;&nbsp;&nbsp;Operator</strong></td><td>0.0514553</td><td>0.226838</td><td>1.36103</td></tr>
<tr><td><strong>Part-To-Part</strong></td><td>1.08645</td><td>1.04233</td><td>6.25397</td></tr>
<tr><td><strong>Total Variation</strong></td><td>1.17788</td><td>1.08530</td><td>6.51181</td></tr>
</table>
<p><span style="line-height: 1.6;">Finally, the %Study Var column is calculated by dividing the Study Var for each source by the Study Var value in the Total Variation row. For example, the %Study Var for Repeatability is 1.19955/6.51181*100 = 18.4211%.</span></p>
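<p>The StdDev, Study Var, and %Study Var columns can all be reproduced the same way (a sketch using the rounded variance components from above):</p>

```python
import math

# StdDev = sqrt(VarComp); Study Var = 6 * StdDev; and %Study Var is
# each source's Study Var as a percentage of Total Variation's Study Var.
varcomps = {
    "Total Gage R&R": 0.0914253,
    "Repeatability": 0.03997,
    "Reproducibility": 0.0514553,
    "Part-To-Part": 1.08645,
    "Total Variation": 1.17788,
}
stdev = {src: math.sqrt(vc) for src, vc in varcomps.items()}
study_var = {src: 6 * sd for src, sd in stdev.items()}
total_sv = study_var["Total Variation"]
pct_study_var = {src: sv / total_sv * 100 for src, sv in study_var.items()}

print(round(stdev["Repeatability"], 6))          # 0.199925
print(round(pct_study_var["Repeatability"], 4))  # 18.4211
```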
<p>I hope this post helps you understand where these numbers come from in a Gage R&R. Let’s just be glad that we have Minitab to do the calculations behind the scenes so we don’t have to do this by hand every time!</p>
Statistics | Statistics Help | Fri, 20 Feb 2015 13:00:00 +0000 | http://blog.minitab.com/blog/marilyn-wheatleys-blog/crossed-gage-rr%3A-how-are-the-variance-components-calculated | Marilyn Wheatley
Lessons in Quality from Guadalajara and Mexico City
http://blog.minitab.com/blog/understanding-statistics-and-its-application/lessons-in-quality-from-guadalajara-and-mexico-city
<p><img alt="View of Mexico City" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/8e5ec9217bc8fbc2ca7a6784a1efcdfa/mexico_df_400w.jpg" style="border-width: 1px; border-style: solid; margin: 10px 15px; float: right; width: 400px; height: 235px;" />Last week, thanks to the collective effort from many people, we held very successful events in Guadalajara and Mexico City, which gave us a unique opportunity to meet with over 300 Spanish-speaking Minitab users. They represented many different industries, including automotive, textile, pharmaceutical, medical devices, oil and gas, electronics, and mining, as well as academic institutions and consultants.</p>
<p>As I listened to my peers Jose Padilla and <a href="http://blog.minitab.com/blog/marilyn-wheatleys-blog">Marilyn Wheatley</a> deliver their presentations, it was interesting to see people's reactions as they learned more about our products and services. Several attendees were particularly pleased to learn more about Minitab's ease-of-use and <a href="http://www.minitab.com/products/minitab/assistant/">step-by-step help with analysis</a> offered by the Assistant menu. I saw others react to demonstrations of Minitab's comprehensive Help system, the use of executables for automation purposes, and several of the tips and tricks discussed throughout our presentations.</p>
<p>We also had multiple conversations on Minitab's flexible licensing options. Several attendees who spend a lot of time on the road were particularly glad to learn about our <a href="http://support.minitab.com/installation/frequently-asked-questions/license-fulfillment/borrow-a-license-of-minitab-companion/">borrowing functionality</a>, which lets you “check out” a license so you can use Minitab software without accessing your organization’s license server.</p>
Acceptance Sampling Plans
<p>There were plenty of technical discussions as well. One interesting question came from a user who asked how Minitab's Acceptance Sampling Plans compare to the <a href="http://asq.org/knowledge-center/ANSI_ASQZ1_4-2008/index.html">ANSI Z1.4</a> standard (a.k.a. MIL-STD 105E). The short answer is that the tables provided by the ANSI Z1.4 are for a specific AQL (Acceptable Quality Level), while implicitly assuming a certain RQL (Rejectable Quality Level) based solely on the lot size. The ANSI Z1.4 is an AQL-based system, while Minitab's acceptance sampling plans give you the flexibility to create a customized sampling scheme for a specific AQL, RQL, or lot size using both the binomial or hypergeometric distributions.</p>
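<p>To make the AQL/RQL trade-off concrete, here is an illustrative sketch in Python. The plan below (n = 125, c = 3) is a hypothetical example chosen for illustration, not a value from the ANSI Z1.4 tables, and it uses the binomial distribution:</p>

```python
from math import comb

# Single sampling plan: inspect n units, accept the lot if c or fewer
# are defective. The acceptance probability at a given lot defective
# rate p is the binomial lower tail P(X <= c).
def accept_prob(n, c, p):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

n, c = 125, 3  # hypothetical plan, for illustration only

# A reasonable plan accepts with high probability at the AQL and with
# low probability at the RQL:
print(round(accept_prob(n, c, 0.01), 3))  # near an AQL of 1% defective
print(round(accept_prob(n, c, 0.05), 3))  # near an RQL of 5% defective
```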
Destructive Testing and Gage R&R
<p>Other users had questions about Gage R&R and destructive testing. Practitioners commonly assess a destructive test using Nested Gage R&R; however, this is not always necessary. The main problem with destructive testing is that every part tested is destroyed, and thus each part can be measured by only a single operator. Since the purpose of this type of analysis is to measure the repeatability and reproducibility of the measurement system, one must identify parts that are as homogeneous as possible. Typically, instead of 10 parts, practitioners may use multiple parts from each of 10 batches. If the within-batch variation is small enough, then the parts from each batch can be considered "the same," and the readings measured by all the operators can be used to produce repeatability and reproducibility measures. The key is to have homogeneous units or batches that can give you enough samples to be tested by all operators for all replicates. If this is the case, you can analyze a destructive test with crossed Gage R&R.</p>
Control Charts and Subgroup Size
<p>We also had an interesting discussion about the sensitivity of Shewhart <a href="http://blog.minitab.com/blog/understanding-statistics/control-chart-tutorials-and-examples">control charts</a> to the subgroup size. Specifically, one of the attendees asked our recommendation for subgroup size: 4, or 5? </p>
<p>The answer to this intriguing question requires an understanding of why subgroups are recommended in the first place. Control charts have limits constructed so that if the process is stable, the probability of observing points outside these control limits is very small; this probability is typically referred to as the false alarm rate, and it is usually set at 0.0027. This calculation assumes the process is normally distributed, so if we were plotting the individual data, as in an Individuals chart, the control limits would be effective for detecting an out-of-control situation only if the data came from a normal distribution. To reduce the dependence on normality, Shewhart suggested collecting the data in subgroups: if we plot the means instead of the individual data, the control limits become less and less sensitive to non-normality as the subgroup size increases. This is a result of the Central Limit Theorem (CLT), which states that, regardless of the underlying distribution of the data, if we take independent samples and compute the average (or sum) of the observations in each sample, the distribution of these sample means will converge to a normal distribution.</p>
<p>So, going back to the original question: what is the recommended subgroup size for building control charts? The answer depends on how skewed the underlying distribution may be. For many distributions a subgroup size of 5 is sufficient for the CLT to kick in, making our control charts robust to departures from normality; however, for extremely skewed distributions like the exponential, the subgroup sizes may need to be much larger than 50. This topic was discussed in a paper by Schilling and Nelson titled "<a href="http://asq.org/qic/display-item/?item=5238">The Effect of Non-normality on the Control Limits of Xbar Charts</a>," published in JQT back in 1976.</p>
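<p>A small simulation (my own illustration, not from the discussion above) shows the CLT at work: subgroup means of heavily skewed exponential data are far less skewed than the raw observations, and the effect grows with subgroup size:</p>

```python
import random
import statistics

random.seed(42)

def skewness(xs):
    # Sample skewness: mean cubed deviation over the cubed std dev.
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

# Raw exponential data (theoretical skewness = 2) vs. subgroup means.
raw = [random.expovariate(1.0) for _ in range(50_000)]
means_n5 = [statistics.fmean(random.expovariate(1.0) for _ in range(5))
            for _ in range(10_000)]
means_n25 = [statistics.fmean(random.expovariate(1.0) for _ in range(25))
             for _ in range(10_000)]

# Skewness shrinks roughly like 2/sqrt(n) as subgroup size n grows,
# which is what makes Xbar chart limits less sensitive to a skewed parent.
print(round(skewness(raw), 2),
      round(skewness(means_n5), 2),
      round(skewness(means_n25), 2))
```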
Analyzing Variability
<p>We also had a great discussion about modeling variability in a process. One of the attendees, working for McDonald's, was looking for statistical methods for reducing the variation of the weight of apple slices. An apple is cut in 10 slices, and the goal was to minimize the variation in weight so that exactly four slices be placed in each bag without further rework. This gave me the opportunity to demonstrate how to use the <a href="http://blog.minitab.com/blog/adventures-in-statistics/assessing-variability-for-quality-improvement">Analyze Variability</a> command in Minitab, which happens to be one of the topics we cover in our <a href="http://www.minitab.com/services/training/courses/">DOE in Practice</a> course.</p>
We Love Your Questions
<p>For me and my fellow trainers, there’s nothing better than talking with people who are using Minitab software to solve problems. Sometimes we’re able to provide a quick, helpful answer. Sometimes a question provokes a great discussion about some quality challenge we all have in common. And sometimes a question will lead to a great idea that we’re able to share with our developers and engineers to make our software better. </p>
<p>If you have a question about Minitab, statistics, or quality improvement, please feel free to comment here. And if you use Minitab software, you can always contact our <a href="http://www.minitab.com/support/">customer support</a> team for direct assistance from specialists in IT, statistics, and quality improvement.</p>
<p> </p>
Quality Improvement | Statistics | Statistics Help | Wed, 19 Nov 2014 13:57:00 +0000 | http://blog.minitab.com/blog/understanding-statistics-and-its-application/lessons-in-quality-from-guadalajara-and-mexico-city | Eduardo Santiago
Using the G-Chart Control Chart for Rare Events to Predict Borewell Accidents
http://blog.minitab.com/blog/statistics-in-the-field/using-the-g-chart-control-chart-for-rare-events-to-predict-borewell-accidents
<p><em>by Lion "Ari" Ondiappan Arivazhagan, guest blogger</em></p>
<p><img alt="" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/ac11ba7bc8daa85327ad905ba5dc5f96/borewell_screencap.jpg" style="margin: 10px 15px; width: 400px; height: 283px; float: right;" />In India, we've seen this story far too many times in recent years:</p>
<p>Timmanna Hatti, a six-year-old boy, was trapped in a 160-foot borewell for more than 5 days in Sulikeri village of Bagalkot district in Karnataka after falling into the well. Perhaps the most heartbreaking aspect of the situation was the decision of the Bagalkot district administration to stop the rescue operation, because continuing the digging work might have led to the collapse of the vertical wall created by the side of the borewell within which Timmanna had struggled for his life.</p>
<p><a href="http://timesofindia.indiatimes.com/city/mysore/8-days-on-boys-body-pulled-out/articleshow/40082590.cms?" target="_blank">Timmanna's body was retrieved from the well 8 days after he fell in</a>. Sadly, this is just one of an alarming number of borewell accidents, especially involving little children, across India in the recent past.</p>
<p>This most recent event prompted me to conduct a preliminary study of borewell accidents across India in the last 8-9 years.</p>
Using Data to Assess Borewell Accidents
<p>My main objective was to find out the possible causes of such accidents and to assess the likelihood of such adverse events based on the data available to date.</p>
<p>This very preliminary study has heightened my awareness of a lot of uncomfortable and dismaying factors involved in these deadly incidents, including the pathetic circumstances of many rural children and carelessness on the part of many borewell contractors and farmers.</p>
<p>In this post, I'll lead you through my analysis, which concludes with the use of a G-chart for the possible prediction of the next such adverse event, based on Geometric distribution probabilities.</p>
Collecting Data on Borewell Accidents
<p>My search of newspaper articles and Google provided details about a total of 34 borewell incidents since 2006. The actual number of incidents may be higher, since many incidents go unreported. The table below shows the total number of borewell cases reported each year between 2006 and 2014.</p>
<p><img alt="Borewell Accident Summary Data" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/9e60f3b9c08b0125a38b30d717e1acb8/borewell_g_chart_table_2.jpg" style="width: 189px; height: 289px;" /></p>
Summary Analysis of the Borewell Accident Data
<p>First, I used Minitab to create a histogram of the data I'd collected, shown below.</p>
<p>A quick review of the histogram reveals that out of 34 reported cases, the highest number of accidents occurred in the years 2007 and 2014.</p>
<p><img alt="Histogram of Borewell Accidents" src="http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/76d0288bea7e9a7bd29a9ff501326fea/borewell_histogram_of_accidents.png" style="width: 577px; height: 385px;" /></p>
<p>The ages of children trapped in the borewells ranged from 2 years to 9 years. More boys (21) than girls (13) were involved in these incidents.</p>
<p>What hurts most is that, in this modern India, more than 70% of the children did not survive the incident. They died either in the borewell itself or in the hospital after the rescue. Only about 20% of children (7 out of 34) have been rescued successfully. The ultimate status of 10% of the cases reported is not known.</p>
Pie Chart of Borewell Incidents by Indian State
<p>Analysis of a state-wise pie chart, shown below, indicates that Haryana, Gujarat, and Tamil Nadu top the list of the borewell accident states. These three states alone account for more than 50% of the borewell accidents since 2006.</p>
<p><img alt="" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/8466766e4788ea2d73b7d8672692be4d/borewell_pie_chart.jpg" style="width: 500px; height: 334px;" /></p>
Pareto Chart for Vital Causes of Borewell Accidents
<p>I used a <a href="http://blog.minitab.com/blog/michelle-paret/fast-food-and-identifying-the-vital-few">Pareto chart</a> to analyze the various causes of these borewell accidents, which revealed the top causes of these tragedies:</p>
<ol>
<li>Children accidentally falling into open borewell pits while playing in the fields.</li>
<li>Abandoned borewell pits not being properly closed / sealed.</li>
</ol>
<p><img alt="" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/8012effc2a1aa662d5a276d487e55954/borewell_pareto_chart_w640.jpeg" style="width: 500px; height: 335px;" /></p>
Applying the Geometric Distribution to Rare Adverse Events
<p>There are many different types of control charts, but for rare events, we can use <a href="http://www.minitab.com/products/minitab">Minitab Statistical Software</a> and the G chart. Based on the geometric distribution, the G chart is designed specifically for monitoring rare events. In the geometric distribution, we count the number of opportunities before or until the defect (adverse event) occurs.</p>
<p>The figure below shows the geometric probability distribution of days between such rare events if the probability of the event is 0.01. As you can see, the odds of an event happening 50 or 100 days after the previous one are much higher than the odds of the next event happening 300 or 400 days later.</p>
<p><img alt="" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/1587c05dd9a8d77bcda5be87bb2a748b/borewell_distribution_plot.jpg" style="width: 500px; height: 333px;" /></p>
<p>By using the geometric distribution to plot the number of days between rare events, such as borewell accidents, the G chart can reveal patterns or trends that might enable us to prevent such accidents in the future. In this case, we count the number of days between reported borewell accidents. One key assumption, when counting the number of days between events, is that the probability of an accident on any given day is fairly constant.</p>
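<p>The geometric probabilities behind this idea are easy to compute by hand. A sketch, assuming a constant daily event probability of 0.01 as in the plot above:</p>

```python
# P(next event occurs after exactly d incident-free days) for a
# geometric distribution with daily event probability p = 0.01.
p = 0.01

def pmf(d):
    return (1 - p) ** d * p

# Shorter gaps are always more likely than longer ones, which is why
# the distribution puts far more probability near 50-100 days than
# near 300-400 days.
print(round(pmf(50), 5), round(pmf(300), 5))  # 0.00605 0.00049
```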
A G-Chart for Prediction of the Next Borewell Accident
<p>I now used Minitab to create a G-chart for the analysis of the borewell accident data I collected, shown below.</p>
<p>Although the observations fall within the upper and lower control limits (UCL and LCL), the G chart shows a cluster of observations below the center line (the mean) after the 28th observation and before the 34th observation (the latest event). Overall, the chart indicates an unusually high rate of adverse events (borewell accidents) over the past decade.</p>
<p><img alt="" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/7571156e97822d68efe18af3225902e5/borewell_g_chart_date_between_events.jpg" style="width: 500px; height: 332px; border-width: 1px; border-style: solid;" /></p>
<p>Descriptive statistics based on the Gaussian distribution for my data show 90.8 days as the mean "days between events." But the G-chart, based on geometric distribution, which is more apt for studying the distribution of adverse events, indicates a Mean (CL) of only 67.2 days as "days between events."</p>
Predicting Days Between Borewell Accidents with a Cumulative Probability Distribution
<p>I used Minitab to create a cumulative distribution function for data, using the geometric distribution with probability set at 0.01. This gives us some additional detail about how many incident-free days we're likely to have until the next borewell tragedy strikes: </p>
<p style="margin-left: 40px;"><img alt="" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/77a56196f91723fca7f7e7222a815573/borewell_output.jpg" style="width: 290px; height: 640px;" /></p>
<p>Based on the above, we can reasonably predict when the next borewell accident is most likely to occur in any of the states included in the data, especially in the states of Haryana, Tamil Nadu, Gujarat, Rajasthan, and Karnataka.</p>
<p>The probabilities are shown below, with the assumption that the sample size and the Gage R&R / Measurement errors of event data reported and collected are adequate and within the allowable limits.</p>
<p><strong>Probability of next borewell event happening in...</strong></p>
<ul>
<li>31 days or less: 0.275020 = 27.5% appx.</li>
<li>104 days or less: 0.651907 = 65% appx.</li>
<li>181 days or less: 0.839452 = 84% appx.</li>
<li>488 days or less: 0.992661 = 99% appx.</li>
</ul>
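<p>These cumulative probabilities can be reproduced directly. In this parameterization the geometric variable counts the number of incident-free days before the event, so the chance of the next accident within d days is 1 - (1 - p)^(d + 1) with p = 0.01 (a sketch matching the output above):</p>

```python
# Cumulative geometric probabilities with daily event probability 0.01,
# counting the number of incident-free days before the next event.
p = 0.01

def cdf(d):
    return 1 - (1 - p) ** (d + 1)

for d in (31, 104, 181, 488):
    print(d, round(cdf(d), 6))
```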
<p> </p>
<p>My purpose in preparing this study will be fulfilled if enough people take preventive action before the next such adverse event, which is likely to occur within the next 6 months (p > 80%). NGOs, government officials, and individuals all need to take preventive actions, like sealing all open borewells across India (especially in the above 5 states), to prevent many more innocent children from dying while playing.</p>
<p> </p>
<p><strong>About the Guest Blogger:</strong></p>
<p><em>Ondiappan "Ari" Arivazhagan is an honors graduate in civil / structural engineering from the University of Madras. He is a certified PMP, PMI-SP, PMI-RMP from the Project Management Institute. He is also a Master Black Belt in Lean Six Sigma and has done Business Analytics from IIM, Bangalore. He has 30 years of professional global project management experience in various countries and has almost 14 years of teaching / training experience in project management and Lean Six Sigma. He is the Founder-CEO of International Institute of Project Management (IIPM), Chennai, and can be reached at <a href="mailto:askari@iipmchennai.com?subject=Minitab%20Blog%20Reader" target="_blank">askari@iipmchennai.com</a>.</em></p>
<p><em>An earlier version of this article was published on LinkedIn. </em></p>
Data Analysis | Statistics in the News | Tue, 19 Aug 2014 12:00:00 +0000 | http://blog.minitab.com/blog/statistics-in-the-field/using-the-g-chart-control-chart-for-rare-events-to-predict-borewell-accidents | Guest Blogger
Gage This or Gage That? How the Number of Distinct Categories Relates to the %Study Variation
http://blog.minitab.com/blog/michelle-paret/gage-this-or-gage-that-how-the-number-of-distinct-categories-relates-to-the-study-variation
<p>We cannot improve what we cannot measure. Therefore, it is critical that we conduct a measurement systems analysis (MSA) before we start analyzing our data to make any kind of decisions.</p>
<p>When conducting an MSA for continuous measurements, we typically use a Gage R&R Study. In these Gage R&R Studies, we look at output such as the <a href="http://blog.minitab.com/blog/quality-data-analysis-and-statistics/how-to-interpret-gage-output-part-2">percentage study variation</a> (%Study Var, or %SV) and the <a href="http://blog.minitab.com/blog/quality-data-analysis-and-statistics/understanding-your-gage-randr-output">Number of Distinct Categories</a> (ndc) to assess whether our measurement system is adequate.</p>
<p>Looking at these 2 values to assess a measurement system often leads to questions like "Should I look at both values? Will both values simultaneously indicate if my measurement system is poor? Are these 2 values related?" </p>
<p>The answer to all of these questions is "Yes," and here's why.</p>
How Are NDC and %Study Var Related?
<p>To clearly understand how number of distinct categories and percentage study variation are related, first consider how they are mathematically defined:</p>
<p><img alt="" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/6060c2db-f5d9-449b-abe2-68eade74814a/Image/d840d539abbf0f0cc70c3cb03c823cb1/equation1.jpg" style="width: 401px; height: 72px; margin-left: 50px; margin-right: 50px" /></p>
<p>where sigma represents the square root of the variance components. (Note that where the square root of 2 appears in this formula and those below, Minitab 17 uses 1.41.)</p>
<p>Using substitution, we can express the relationship between ndc and %SV as:</p>
<p><span face=""><img alt="" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/6060c2db-f5d9-449b-abe2-68eade74814a/Image/b8624dccb97d74650d8f3389eef2db64/equation2.jpg" style="width: 350px; height: 152px; margin-left: 50px; margin-right: 50px" /></span></p>
<p>The last equation shows that ndc and %SV are inversely related: the larger %SV is, the smaller the ndc, and vice versa. However, it also shows that the value of ndc depends not only on %SV, but on the variance components as well.</p>
NDC as a Function of %SV
<p>To simplify the equation and represent ndc solely as a function of %SV, we can express the variance components in another way. The total variance is the sum of two variance components, one corresponding to gage repeatability and reproducibility and the other to part-to-part variation:</p>
<p><img alt="" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/6060c2db-f5d9-449b-abe2-68eade74814a/Image/8cdb02ebc3a57a05010fe627dfe8bb45/equation3.jpg" style="width: 222px; height: 36px; margin-left: 50px; margin-right: 50px" /></p>
<p>Solving for sigma-squared for part and dividing each side of the equation by sigma-squared for total yields:</p>
<p><img alt="" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/6060c2db-f5d9-449b-abe2-68eade74814a/Image/cfe7fe6042e0c688436844d14f9c9460/equation4.jpg" style="width: 193px; height: 73px; margin-left: 50px; margin-right: 50px" /></p>
<p>Because %SV / 100 = sigma gage / sigma total, the equation above can be rewritten as:</p>
<p><img alt="" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/6060c2db-f5d9-449b-abe2-68eade74814a/Image/89d53946a51b100b7a4573c9677b3cf7/equation6.jpg" style="width: 350px; height: 82px; margin-left: 50px; margin-right: 50px" /></p>
<p>Substituting this value into the previous equation for ndc gives the following simplified formula:</p>
<p><img alt="" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/6060c2db-f5d9-449b-abe2-68eade74814a/Image/00cecc8f1fff70ad94e17d0c785253b9/equation7.jpg" style="width: 330px; height: 144px; margin-left: 50px; margin-right: 50px" /></p>
<p>This equation clearly shows the relationship between ndc and %SV and can be used to calculate the number of distinct categories for a given percentage study variation. As shown in Table 1, the calculated ndc value is then truncated to obtain a whole number (integer).</p>
<p><img alt="" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/6060c2db-f5d9-449b-abe2-68eade74814a/Image/394a73d4dd88ac618ceb3fe68a18922b/equation8.jpg" style="width: 270px; height: 268px; margin-left: 50px; margin-right: 50px" /></p>
<p>For example, if the calculated value is 15.8, mathematically you are not quite capable of differentiating between 16 categories. Therefore, Minitab <a href="http://www.minitab.com/products/minitab">Statistical Software</a> is conservative and truncates the number of distinct categories to 15. For practical purposes, you can also round the calculated ndc values to obtain the number of distinct categories.</p>
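<p>The simplified formula is easy to check numerically. A short sketch (my own illustration of the equation derived above):</p>

```python
import math

# ndc as a function of %Study Var alone, per the simplified formula:
#   ndc = sqrt(2 * ((100 / %SV)^2 - 1))
def ndc(pct_sv):
    return math.sqrt(2 * ((100 / pct_sv) ** 2 - 1))

# At a 30% study variation (the AIAG cutoff), the exact ndc is about
# 4.497, which truncates to 4 distinct categories.
print(round(ndc(30), 3))    # 4.497
print(math.trunc(ndc(30)))  # 4
```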
Guidelines and Limitations for Evaluating a Measurement System Using NDC
<p>You can evaluate a measurement system by looking only at the number of distinct categories and using the following guidelines (based on the truncation method used by Minitab):</p>
<ul>
<li>≥ <strong>14 distinct categories </strong>– The measurement system is acceptable.</li>
<li><strong>4-13 distinct categories </strong>– The measurement system is marginally acceptable, depending on the importance of the application, cost of measurement device, cost of repair, and other factors.</li>
<li><strong>≤ 3 distinct categories </strong>– The measurement system is unacceptable and should be improved.</li>
</ul>
<p>These guidelines have some limitations. For example, in some cases when the %SV is over 30% the number of distinct categories is 4. Therefore, a measurement system with 32% study variation, which is unacceptable under the AIAG criteria for %SV, is acceptable under the ndc criteria. To avoid this discrepancy, some authors suggest only accepting a measurement system when it can distinguish between 5 or more categories. Although this fixes the original problem, it makes measurement systems with a 28-30% study variation unacceptable, because their corresponding ndc value equals 4.</p>
<p>To resolve this issue you can establish more specific guidelines based on the exact calculated ndc values, without truncating or rounding. For example, you could define an unacceptable measurement system based on an ndc < 4.497.</p>
<p>And that is how the number of distinct categories is related to %Study Var.</p>
Data Analysis | Lean Six Sigma | Learning | Quality Improvement | Six Sigma | Statistics | Statistics Help | Stats | Mon, 19 May 2014 12:00:00 +0000 | http://blog.minitab.com/blog/michelle-paret/gage-this-or-gage-that-how-the-number-of-distinct-categories-relates-to-the-study-variation | Michelle Paret
Gauging Gage Part 3: How to Sample Parts
http://blog.minitab.com/blog/fun-with-statistics/gauging-gage-part-3-how-to-sample-parts
<p>In <a href="http://blog.minitab.com/blog/fun-with-statistics/gauging-gage-part-1-is-10-parts-enough">Parts 1</a> and <a href="http://blog.minitab.com/blog/fun-with-statistics/gauging-gage-part-2-are-3-operators-or-2-replicates-enough">2 of Gauging Gage</a> we looked at the numbers of parts, operators, and replicates used in a Gage R&R Study and how accurately we could estimate %Contribution based on the choice for each. In doing so, I hoped to provide you with valuable and interesting information, but mostly I hoped to make you like me. I mean like me so much that if I told you that you were doing something flat-out wrong and had been for years and probably screwed something up, you would hear me out and hopefully just revert to being indifferent towards me.</p>
<p>For the third (and maybe final) installment, I want to talk about something that drives me crazy. It really gets under my skin. I see it all of the time, maybe more often than not. You might even do it. If you do, I'm going to try to convince you that you are very, very wrong. If you're an instructor, you may even have to contact past students with groveling apologies and admit you steered them wrong. And that's the best-case scenario. Maybe instead of admitting error, you will post scathing comments on this post insisting I am wrong and maybe even insulting me despite the evidence I provide here that I am, in fact, right.</p>
<p>Let me ask you a question:</p>
When you choose parts to use in a Gage R&R Study, how do you choose them?
<p>If your answer to that question required any more than a few words (and it can be done in one word), then I'm afraid you may have been making a very popular but very bad decision. If you're in that group, I bet you're already reciting your rebuttal in your head, without even hearing what I have to say. You've had this argument before, haven't you? Consider whether your response was some variation on the following popular schemes:</p>
<ol>
<li>Sample parts at regular intervals across the range of measurements typically seen</li>
<li>Sample parts at regular intervals across the process tolerance (lower spec to upper spec)</li>
<li>Sample randomly but pull a part from outside of either spec</li>
</ol>
<p>#1 is wrong. #2 is wrong. #3 is wrong.</p>
<p>You see, the statistics you use to qualify your measurement system are all reported relative to the part-to-part variation and all of the schemes I just listed do not accurately estimate your true part-to-part variation. The answer to the question that would have provided the most reasonable estimate?</p>
<p>"Randomly."</p>
<p>But enough with the small talk—this is a statistics blog, so let's see what the statistics say.</p>
<p>In Part 1 I described a simulated Gage R&R experiment, which I will repeat here using the standard design of 10 parts, 3 operators, and 2 replicates. The difference is that in only one set of 1,000 simulations will I randomly pull parts, and we'll consider that our baseline. The other schemes I will simulate are as follows:</p>
<ol>
<li>An "exact" sampling: while not practical in real life, this pulls parts corresponding to the 5th, 15th, 25th, ..., and 95th percentiles of the underlying normal distribution, forming a (nearly) "exact" normal distribution as a means of seeing how much the randomness of sampling affects our estimates.</li>
<li>Parts are selected uniformly (at equal intervals) across a typical range of parts seen in production (from the 5th to the 95th percentile).</li>
<li>Parts are selected uniformly (at equal intervals) across the range of the specs, in this case assuming the process is centered with a Ppk of 1.</li>
<li>8 of the 10 parts are selected randomly, and then one part each is used that lies one-half of a standard deviation outside of the specs.</li>
</ol>
<p>Keep in mind that we know with absolute certainty that the underlying %Contribution is 5.88325%.</p>
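<p>The schemes above can be compared in a quick simulation. This is a simplified sketch, not the post's actual code: with no operator effects simulated, the full Gage model reduces to a one-way ANOVA, and the variance components (part SD 1.0, gage SD 0.25) are hypothetical values chosen so the true %Contribution is about 5.88%. The specs are placed for a centered process with Ppk = 1, and which standard deviation "one-half of a standard deviation outside of the specs" refers to is ambiguous, so the part SD is assumed:</p>

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

SD_PART, SD_GAGE = 1.0, 0.25            # assumed: true %Contribution ~5.88%
SD_TOTAL = (SD_PART**2 + SD_GAGE**2) ** 0.5
SPEC = 3 * SD_TOTAL                     # centered process with Ppk = 1
N_MEAS = 6                              # 3 operators x 2 replicates per part
nd = NormalDist(0, SD_PART)

def pct_contribution(parts):
    """One-way ANOVA estimate of gage %Contribution (no operator
    effects are simulated, so all gage error is repeatability)."""
    data = parts[:, None] + rng.normal(0, SD_GAGE, (len(parts), N_MEAS))
    ms_error = data.var(axis=1, ddof=1).mean()
    var_part = max((N_MEAS * data.mean(axis=1).var(ddof=1) - ms_error)
                   / N_MEAS, 0)
    return 100 * ms_error / (ms_error + var_part)

def parts_for(scheme):
    if scheme == "random":
        return rng.normal(0, SD_PART, 10)
    if scheme == "exact":               # 5th, 15th, ..., 95th percentiles
        return np.array([nd.inv_cdf(p / 100) for p in range(5, 100, 10)])
    if scheme == "uniform-range":       # equal steps, 5th-95th percentile
        return np.linspace(nd.inv_cdf(0.05), nd.inv_cdf(0.95), 10)
    if scheme == "uniform-specs":       # equal steps across the specs
        return np.linspace(-SPEC, SPEC, 10)
    if scheme == "outside-specs":       # 8 random + one beyond each spec
        return np.append(rng.normal(0, SD_PART, 8),
                         [-SPEC - 0.5 * SD_PART, SPEC + 0.5 * SD_PART])

for scheme in ("random", "exact", "uniform-range",
               "uniform-specs", "outside-specs"):
    est = [pct_contribution(parts_for(scheme)) for _ in range(1000)]
    print(f"{scheme:>14}: mean = {np.mean(est):5.2f}%, sd = {np.std(est):.2f}")
```

<p>Under these assumptions, every non-random scheme reports a lower (better-looking) mean %Contribution than random sampling, and the "exact" scheme shows much less spread, which is the pattern the comparisons that follow illustrate.</p>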
Random Sampling for Gage
<p>Let's use "random" as the default to compare to, which, as you recall from Parts 1 and 2, already does not provide a particularly accurate estimate:</p>
<p style="margin-left:40px"><img alt="Pct Contribution with Random Sampling" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/46889f0e-f0a5-4b4a-8a19-2d2b8dce6087/Image/af91c4815469651cc698c3aa7d980c61/histogram_of_10_pctcontribution.gif" style="height:384px; width:576px" /></p>
<p>On several occasions I've had people tell me that you can't just sample randomly because you might get parts that don't really match the underlying distribution. </p>
Sample 10 Parts that Match the Distribution
<p>So let's compare the results of random sampling from above with our results if we could magically pull 10 parts that follow the underlying part distribution almost perfectly, thereby eliminating the effect of randomness:</p>
<p style="margin-left:40px"><img alt="Random vs Exact" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/46889f0e-f0a5-4b4a-8a19-2d2b8dce6087/Image/f2b7c1cc6c3cede482e7251b2b55f28e/random_vs_exact.gif" style="height:384px; width:576px" /></p>
<p>There's obviously something to the idea that the randomness that comes from random sampling has a big impact on our estimate of %Contribution...the "exact" distribution of parts shows much less skewness and variation and is considerably less likely to incorrectly reject the measurement system. To be sure, implementing an "exact" sample scheme is impossible in most cases...since you don't yet know how much measurement error you have, there's no way to know that you're pulling an exact distribution. What we have here is a statistical version of chicken-and-the-egg!</p>
Sampling Uniformly across a Typical Range of Values
<p>Let's move on...next up, we will compare the random scheme to scheme #2, sampling uniformly across a typical range of values:</p>
<p style="margin-left:40px"><img alt="Random vs Uniform Range" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/46889f0e-f0a5-4b4a-8a19-2d2b8dce6087/Image/d8e9f2f7a24a62457a2d517914baef73/random_vs_uniformrange.gif" style="height:384px; width:576px" /></p>
<p>So here we have a different situation: there is a very clear reduction in variation, but also a very clear bias. So while pulling parts uniformly across the typical part range gives much more consistent estimates, those estimates are likely telling you that the measurement system is much better than it really is.</p>
Sampling Uniformly across the Spec Range
<p>How about collecting uniformly across the range of the specs?</p>
<p style="margin-left:40px"><img alt="Random vs Uniform Specs" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/46889f0e-f0a5-4b4a-8a19-2d2b8dce6087/Image/5da456e491792be021485c0e9a514298/random_vs_uniformspecs.gif" style="height:384px; width:576px" /></p>
<p>This scheme results in an even more extreme bias: qualifying this measurement system becomes a certainty, and in some cases it would even be classified as excellent. Needless to say, it does not produce an accurate assessment.</p>
Selectively Sampling Outside the Spec Limits
<p>Finally, how about that scheme where most of the points are taken randomly but just one part is pulled from just outside of each spec limit? Surely just taking 2 of the 10 points from outside of the spec limits wouldn't make a substantial difference, right?</p>
<p style="margin-left:40px"><img alt="Random vs OOS" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/46889f0e-f0a5-4b4a-8a19-2d2b8dce6087/Image/c0821d19873a65162535d231799052ce/random_vs_oos.gif" style="height:384px; width:576px" /></p>
<p>Actually those two points make a huge difference and render the study's results meaningless! This process had a Ppk of 1 - a higher-quality process would make this result even more extreme. Clearly this is not a reasonable sampling scheme.</p>
<strong>Why These Sampling Schemes?</strong>
<p>If you were taught to sample randomly, you might be wondering why so many people would use one of these other schemes (or similar ones). They actually all have something in common that explains their use: all of them allow a practitioner to assess the measurement system across a range of possible values. After all, if you almost always produce values between 8.2 and 8.3 and the process goes out of control, how do you know that you can adequately measure a part at 8.4 if you never evaluated the measurement system at that point?</p>
<p>Those that choose these schemes for that reason are smart to think about that issue, but just aren't using the right tool for it. Gage R&R evaluates your measurement system's ability to measure relative to the current process. To assess your measurement system across a range of potential values, the correct tool to use is a "Bias and Linearity Study" which is found in the Gage Study menu in Minitab. This tool establishes for you whether you have bias across the entire range (consistently measuring high or low) or bias that depends on the value measured (for example, measuring smaller parts larger than they are and larger parts smaller than they are).</p>
<p>To really assess a measurement system, I advise performing both a Bias and Linearity Study as well as a Gage R&R.</p>
<strong>Which Sampling Scheme to Use?</strong>
<p>In the beginning I suggested that a random scheme be used but then clearly illustrated that the "exact" method provides even better results. Using an exact method requires you to know the underlying distribution from having enough previous data (somewhat reasonable although existing data include measurement error) as well as to be able to measure those parts accurately enough to ensure you're pulling the right parts (not too feasible...if you know you can measure accurately, why are you doing a Gage R&R?). In other words, it isn't very realistic.</p>
<p>So for the majority of cases, the best we can do is to sample randomly. But we can do a reality check after the fact by looking at the average measurement for each of the parts chosen and verifying that the distribution seems reasonable. If you have a process that typically shows normality and your sample shows unusually high skewness, there's a chance you pulled an unusual sample and may want to pull some additional parts and supplement the original experiment.</p>
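<p>That after-the-fact reality check can be sketched in a few lines. This is a hypothetical 10-part study (the variance components are illustrative, and the |skewness| > 1 cut-off is an assumed rule of thumb, not a published standard):</p>

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical 10x3x2 study: rows = parts, columns = 6 measurements,
# with part SD 1.0 and gage SD 0.25.
measurements = rng.normal(0, 1, 10)[:, None] + rng.normal(0, 0.25, (10, 6))
part_means = measurements.mean(axis=1)

def sample_skewness(x):
    """Adjusted Fisher-Pearson sample skewness (the statistic most
    stats packages report as 'skewness')."""
    n, m, s = len(x), x.mean(), x.std(ddof=1)
    return (n / ((n - 1) * (n - 2))) * np.sum(((x - m) / s) ** 3)

skew = sample_skewness(part_means)
print(f"skewness of part averages: {skew:.2f}")
if abs(skew) > 1:   # assumed cut-off for "unusually high" on 10 parts
    print("Sample looks unusual; consider pulling additional parts.")
```

<p>If the process typically produces normal data and the part averages come out strongly skewed, that is the signal to supplement the original experiment with extra parts.</p>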
<p>Thanks for humoring me and please post scathing comments below!</p>
<p><a href="http://blog.minitab.com/blog/fun-with-statistics/gauging-gage-part-1-is-10-parts-enough">see Part I of this series</a><br />
<a href="http://blog.minitab.com/blog/fun-with-statistics/gauging-gage-part-2-are-3-operators-or-2-replicates-enough">see Part II of this series</a></p>
Thu, 27 Feb 2014 14:39:00 +0000
http://blog.minitab.com/blog/fun-with-statistics/gauging-gage-part-3-how-to-sample-parts
Joel Smith
Gauging Gage Part 2: Are 3 Operators or 2 Replicates Enough?
http://blog.minitab.com/blog/fun-with-statistics/gauging-gage-part-2-are-3-operators-or-2-replicates-enough
<p>In Part 1 of Gauging Gage, I looked at how adequate a <a href="http://blog.minitab.com/blog/fun-with-statistics/gauging-gage-part-1-is-10-parts-enough">sampling of 10 parts is for a Gage R&R Study</a> and provided some advice based on the results.</p>
<p>Now I want to turn my attention to the other two factors in the standard Gage experiment: 3 operators and 2 replicates. Specifically, what if instead of increasing the number of parts in the experiment (my previous post demonstrated you would need an unfeasible increase in parts), you increased the number of operators or number of replicates?</p>
<p>In this study, we are only interested in the effect on our estimate of overall Gage variation. Obviously, increasing operators would give you a better estimate of the operator term and reproducibility, and increasing replicates would give you a better estimate of repeatability. But I want to look at the overall impact on your assessment of the measurement system.</p>
Operators
<p>First we will look at operators. Using the same simulation engine I described in Part 1, this time I did two different simulations. In one, I increased the number of operators to 4 and continued using 10 parts and 2 replicates (for a total of 80 runs); in the other, I increased to 4 operators and still used 2 replicates, but decreased the number of parts to 8 to get back close to the original experiment size (64 runs compared to the original 60).</p>
<p>Here is a comparison of the standard experiment and each scenario laid out here:</p>
<p style="margin-left:40px"><img alt="Operator Comparisons" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/46889f0e-f0a5-4b4a-8a19-2d2b8dce6087/Image/ab84f3d0ae2d826f47786930ee54c611/operator_comparisons.gif" style="height:384px; width:576px" /></p>
<p style="margin-left:40px"><img alt="Operator Descriptive Stats" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/46889f0e-f0a5-4b4a-8a19-2d2b8dce6087/Image/bc864992dcfd882e2c6066496b79ce19/operators_desc.GIF" style="height:68px; width:524px" /></p>
<p>It may not be obvious in the graph, but increasing to 4 operators while decreasing to 8 parts actually <em>increased</em> the variation in %Contribution seen...so despite requiring 4 more runs this is the poorer choice. And the experiment that involved 4 operators but maintained 10 parts (a total of 80 runs) showed no significant improvement over the standard study.</p>
Replicates
<p>Now let's look at replicates in the same manner we looked at parts. In one run of simulations we will increase replicates to 3 while continuing to use 10 parts and 3 operators (90 runs), and in another we will increase replicates to 3, keep 3 operators, and reduce parts to 7 to compensate (63 runs).</p>
<p>Again we compare the standard experiment to each of these scenarios:</p>
<p style="margin-left:40px"><img alt="Replicate Comparisons" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/46889f0e-f0a5-4b4a-8a19-2d2b8dce6087/Image/f5d14793691f9ad2d39a598ca41e9945/replicate_comparisons.gif" style="height:384px; width:576px" /></p>
<p style="margin-left:40px"><img alt="Replicates Descriptive Statistics" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/46889f0e-f0a5-4b4a-8a19-2d2b8dce6087/Image/1c08fe3733c316f67e904e532c6b3e6e/replicates_desc.GIF" style="height:71px; width:528px" /></p>
<p>Here we see the same pattern as with operators. Increasing to 3 replicates while compensating by reducing to 7 parts (for a total of 63 runs) significantly increases the variation in %Contribution seen. And increasing to 3 replicates while maintaining 10 parts shows no improvement.</p>
<strong>Conclusions about Operators and Replicates in Gage Studies</strong>
<p>As stated above, we're only looking at the effect of these changes to the overall estimate of measurement system error. So while increasing to 4 operators or 3 replicates either showed no improvement in our ability to estimate %Contribution or actually made it worse, you may have a situation where you are willing to sacrifice that in order to get more accurate estimates of the individual components of measurement error. In that case, one of these designs might actually be a better choice.</p>
<p>For most situations, however, if you're able to collect more data, then increasing the number of parts used remains your best choice.</p>
<p>But how do we select those parts? I'll talk about that in my next post!</p>
<p><a href="http://blog.minitab.com/blog/fun-with-statistics/gauging-gage-part-1-is-10-parts-enough">see Part I of this series</a><br />
<a href="http://blog.minitab.com/blog/fun-with-statistics/gauging-gage-part-3-how-to-sample-parts">see Part III of this series</a></p>
Data Analysis, Lean Six Sigma, Six Sigma, Statistics, Stats
Wed, 26 Feb 2014 13:00:00 +0000
http://blog.minitab.com/blog/fun-with-statistics/gauging-gage-part-2-are-3-operators-or-2-replicates-enough
Joel Smith
Gauging Gage Part 1: Is 10 Parts Enough?
http://blog.minitab.com/blog/fun-with-statistics/gauging-gage-part-1-is-10-parts-enough
<p>"You take 10 parts and have 3 operators measure each 2 times."</p>
<p>This standard approach to a Gage R&R experiment is so common, so accepted, so ubiquitous that few people ever question whether it is effective. Obviously one could look at whether 3 is an adequate number of operators or 2 an adequate number of replicates, but in this first of a series of posts about "Gauging Gage," I want to look at 10. Just 10 parts. How accurately can you assess your measurement system with 10 parts?</p>
Assessing a Measurement System with 10 Parts
<p>I'm going to use a simple scenario as an example. I'm going to simulate the results of 1,000 Gage R&R Studies with the following underlying characteristics:</p>
<ol>
<li>There are no operator-to-operator differences, and no operator*part interaction.</li>
<li>The measurement system variance and part-to-part variance used would result in a %Contribution of 5.88%, which falls between the popular guidelines of 1% (below which a system is excellent) and 9% (above which it is poor).</li>
</ol>
<p>So—no looking ahead here—based on my 1,000 simulated Gage studies, what do you think the distribution of %Contribution looks like across all studies? Specifically, do you think it is centered near the true value (5.88%), or do you think the distribution is skewed, and if so, how much do you think the estimates vary?</p>
<p>Go ahead and think about it...I'll just wait here for a minute.</p>
<p>Okay, ready?</p>
<p>Here is the distribution, with the guidelines and true value indicated:</p>
<p style="margin-left:40px"><img alt="PctContribution for 10 Parts" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/46889f0e-f0a5-4b4a-8a19-2d2b8dce6087/Image/af91c4815469651cc698c3aa7d980c61/histogram_of_10_pctcontribution.gif" style="height:384px; width:576px" /></p>
<p>The good news is that it is roughly averaging around the true value.</p>
<p>However, the distribution is highly skewed—a decent number of observations estimated %Contribution to be at least double the true value with one estimating it at about SIX times the true value! And the variation is huge. In fact, about 1 in 4 gage studies would have resulted in failing this gage.</p>
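<p>That distribution can be reproduced with a short simulation. This is a simplified sketch rather than the post's actual code: since no operator or interaction effects are simulated, the Gage model reduces to a one-way ANOVA, and the variance components below are hypothetical values chosen so the true %Contribution is about 5.88%:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed variance components: 0.0625 / (1.0 + 0.0625) ~= 5.88%.
VAR_PART, VAR_GAGE = 1.0, 0.0625
N_PARTS, N_MEAS, N_SIMS = 10, 6, 1000   # 3 operators x 2 replicates

def simulate_pct_contribution():
    """One simulated 10-part study, estimated via one-way ANOVA
    (with no operator effects, repeatability is all the gage error)."""
    parts = rng.normal(0, VAR_PART**0.5, N_PARTS)
    data = parts[:, None] + rng.normal(0, VAR_GAGE**0.5, (N_PARTS, N_MEAS))
    ms_error = data.var(axis=1, ddof=1).mean()
    var_part = max((N_MEAS * data.mean(axis=1).var(ddof=1) - ms_error)
                   / N_MEAS, 0)
    return 100 * ms_error / (ms_error + var_part)

results = np.array([simulate_pct_contribution() for _ in range(N_SIMS)])
print(f"mean = {results.mean():.2f}%, "
      f"share failing (>9%): {(results > 9).mean():.0%}")
```

<p>With these assumptions the simulated failure share lands near the 1-in-4 figure quoted above, and the median falls below the mean, which is what the right-skew in the histogram looks like numerically.</p>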
<p>Now a standard gage study is no small undertaking—a total of 60 data points must be collected, and once randomization and "masking" of the parts are done it can be quite tedious (and possibly annoying to the operators). So just how many parts would be needed for a more accurate assessment of %Contribution?</p>
Assessing a Measurement System with 30 Parts
<p>I repeated 1,000 simulations, this time using 30 parts (if you're keeping score, that's 180 data points). And then for kicks, I went ahead and did 100 parts (that's 600 data points). So now consider the same questions from before for these counts—mean, skewness, and variation.</p>
<p>Mean is probably easy: if it was centered before, it's probably centered still.</p>
<p>So let's really look at skewness and how much we were able to reduce variation:</p>
<p style="margin-left:40px"><img alt="10 30 100 Parts" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/46889f0e-f0a5-4b4a-8a19-2d2b8dce6087/Image/2a6885f40fda396703a0176a030ae332/histogram_of_10_30_100_parts.gif" style="height:384px; width:576px" /></p>
<p>Skewness and variation have clearly decreased, but I suspect you thought variation would have decreased more than it did. Keep in mind that %Contribution is affected by your estimates of repeatability and reproducibility as well, so you can only tighten this distribution so much by increasing the number of parts. But even using 30 parts—an enormous experiment to undertake—still results in this gage failing 7% of the time!</p>
<p>So what is a quality practitioner to do?</p>
<p>I have two recommendations for you. First, let's talk about %Process. Oftentimes the measurement system we are evaluating has been in place for some time and we are simply verifying its effectiveness. In this case, rather than relying on your small sampling of parts to estimate the overall variation, you can use the historical standard deviation as your estimate and eliminate much of the variation caused by the small sample size of parts. Just enter your historical standard deviation in the Options subdialog in Minitab:</p>
<p style="margin-left:40px"><img alt="Options Subdialog" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/46889f0e-f0a5-4b4a-8a19-2d2b8dce6087/Image/cc286906ae0d7171affa92523707f722/options_dialog.png" style="height:422px; width:456px" /></p>
<p>Then your output will include an additional column of information called %Process. This column is the equivalent of the %StudyVar column, but using the historical standard deviation (which comes from a much larger sample) instead of the overall standard deviation estimated from the data collected in your experiment:</p>
<p style="margin-left:40px"><img alt="Percent Process" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/46889f0e-f0a5-4b4a-8a19-2d2b8dce6087/Image/a7c6669b4b10272aac25b420e67d561c/pctprocess_output.GIF" style="height:130px; width:462px" /></p>
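<p>The arithmetic behind that column is a simple ratio. A minimal sketch, assuming %Process is the component's standard deviation as a percentage of the historical standard deviation (Minitab reports study variation as 6 standard deviations, but that factor cancels in the ratio; the numbers here are hypothetical):</p>

```python
def pct_process(sd_source, historical_sd):
    """A variation source as a percentage of the historical (process)
    standard deviation instead of the study's own estimate of the
    total standard deviation."""
    return 100 * sd_source / historical_sd

# Hypothetical gage SD of 0.25 against a historical SD of 1.0:
print(f"{pct_process(0.25, 1.0):.1f}%")   # 25.0%
```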
<p>My second recommendation is to include confidence intervals in your output. This can be done in the <em>Conf Int </em>subdialog:</p>
<p style="margin-left:40px"><img alt="Conf Int sibdialog" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/46889f0e-f0a5-4b4a-8a19-2d2b8dce6087/Image/f3433eb79f2aacded976c9a2d7733e00/conf_int_dialog.gif" style="height:191px; width:381px" /></p>
<p>Including confidence intervals in your output doesn't inherently improve the wide variation of estimates the standard gage study provides, but it does force you to recognize just how much uncertainty there is in your estimate. For example, consider this output from the gageaiag.mtw sample dataset in Minitab with confidence intervals turned on:</p>
<p style="margin-left:40px"><img alt="Output with CIs" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/46889f0e-f0a5-4b4a-8a19-2d2b8dce6087/Image/68bc06ad673742b3f76922b1c31d813a/output_with_cis.GIF" style="height:162px; width:520px" /></p>
<p>For some processes you might accept this gage based on the %Contribution being less than 9%. But for most processes you really need to trust your data, and the 95% CI of (2.14, 66.18) is a red flag that you really shouldn't be very confident that you have an acceptable measurement system.</p>
<p>So the next time you run a Gage R&R Study, put some thought into how many parts you use and how confident you are in your results!</p>
<p><a href="http://blog.minitab.com/blog/fun-with-statistics/gauging-gage-part-2-are-3-operators-or-2-replicates-enough">see Part II of this series</a><br />
<a href="http://blog.minitab.com/blog/fun-with-statistics/gauging-gage-part-3-how-to-sample-parts">see Part III of this series</a></p>
Data Analysis, Quality Improvement, Six Sigma, Statistics, Stats
Mon, 24 Feb 2014 16:31:00 +0000
http://blog.minitab.com/blog/fun-with-statistics/gauging-gage-part-1-is-10-parts-enough
Joel Smith
Applying Six Sigma to a Small Operation
http://blog.minitab.com/blog/understanding-statistics/applying-six-sigma-to-a-small-operation
<p>Using data analysis and statistics to improve business quality has a long history. But it often seems like most of that history involves huge operations. After all, Six Sigma originated with Motorola, and it spread to thousands of other businesses after being adopted by a little-known outfit called General Electric.</p>
<p>There are many case studies and examples of how big companies used Six Sigma methods to save millions of dollars, slash expenses, and improve quality...but when they read about the big dogs getting those kinds of results, a lot of folks hear a little voice in their heads saying, "Sure, but could it work in my <em>small </em>business?" </p>
Can Six Sigma Help a Small Business?
<p><img alt="six sigma for bicycle chain manufacturer" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/189142f85cb097558b782f1d03e95efa/bike_chain_thumb.png" style="border-width: 1px; border-style: solid; margin: 10px 15px; width: 250px; height: 250px; float: right;" />That's why I was so intrigued to find this <a href="http://www.emeraldinsight.com/journals.htm?issn=1754-2731&volume=24&issue=1&articleid=17009714&show=html&PHPSESSID=hur5eiqnc51f6502k0dbl1cve1#id1060240101013" target="_blank">article</a> published in the <em>TQM Journal</em> in 2012: it shows exactly how Six Sigma methods can be used to benefit a small manufacturing business. The authors of this paper profile a small manufacturing company in India that was plagued with declining productivity. This operation made bicycle chains using plates, pins, bushings, and rollers.</p>
<p>The bushings, whose diameter must fall between 5.23 and 5.27 mm, had a very high rejection rate: variation in the diameter led to 8 percent of them being rejected, so the company applied Six Sigma methods to reduce defects in the bushing manufacturing process.</p>
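<p>A rejection rate like that pins down how capable the process is. A minimal sketch, assuming a roughly centered, normal process (the 0.0114 mm standard deviation is a hypothetical value back-solved to reproduce the roughly 8 percent rejection rate; it is not from the article):</p>

```python
from statistics import NormalDist

LSL, USL = 5.23, 5.27   # bushing spec limits in mm, from the article

def rejection_rate(mean, sd):
    """Expected fraction of bushings outside spec for a normal process."""
    d = NormalDist(mean, sd)
    return d.cdf(LSL) + (1 - d.cdf(USL))

# A centered process (mean 5.25 mm) with SD ~0.0114 mm rejects ~8%:
print(f"{rejection_rate(5.25, 0.0114):.1%}")
```

<p>The same function shows how steeply capability improves as spread shrinks: halving that standard deviation drops the expected rejection rate below 0.1 percent.</p>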
<p>The company used the <a href="http://blog.minitab.com/blog/real-world-quality-improvement/dmaic-vs-dmadv-vs-dfss">DMAIC methodology</a>--which divides a project into Define, Measure, Analyze, Improve, and Control phases--to attack the problem. Each step the authors describe in their process can be performed using Minitab Statistical Software and Quality Companion, our collection of "soft tools" for quality projects.</p>
The Define Phase
<p>The Define phase is self-explanatory: you investigate and specify the problem, and detail the requirements that are not being met. In the Define phase, the project team created a process map (reproduced below in Quality Companion) and a <a href="http://blog.minitab.com/blog/understanding-statistics/sipoc-alypse-now">SIPOC (Supplier, Input, Process, Output, Customer) diagram</a> for the bushing manufacturing process.</p>
<p><img alt="Process Map Created in Quality Companion by Minitab" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/85fc9c4bb55b9327a2275fda7a4d447e/process_map_interface_w640.jpeg" style="width: 640px; height: 439px;" /></p>
The Measure Phase
<p>In the Measure phase, you gather data about the process. This isn't always as straightforward as it seems, though. First, you need to <a href="http://blog.minitab.com/blog/understanding-statistics/the-single-most-important-question-in-every-statistical-analysis">make sure you can trust your data</a> by conducting a measurement system analysis.</p>
<p>The team in this case study did <a href="http://blog.minitab.com/blog/understanding-statistics/creating-a-new-metric-with-gage-rr-part-1">Gage repeatability and reproducibility (Gage R&R)</a> studies to confirm that their measurement system produced accurate and reliable data. This is a critical step, but it needn't be long and involved: the chain manufacturer's study involved two operators, who took two readings apiece on 10 sample bushings with a micrometer. The 40 data points they generated were sufficient to confirm the micrometer's accuracy and consistency, so they moved on to gathering data about the chain-making process itself.</p>
The Analyze Phase
<p>The team then applied a variety of data analysis tools, using Minitab Statistical Software. First they conducted a <a href="http://blog.minitab.com/blog/michelle-paret/process-capability-statistics-cpk-vs-ppk">process capability analysis</a>, taking 20 samples produced under similar circumstances (in groups of 5). The graph shown below uses simulated data with extremely similar, though not completely identical, results to those shown in the <em>TQM Journal</em> article.</p>
<p><img alt="process capability curve" src="http://cdn2.content.compendiumblog.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/479b4fbd-f8c0-4011-9409-f4109cc4c745/Image/b76f2437c4f118f957e23f35cbc05cee/capability_analysis_curve.gif" style="width: 577px; height: 385px;" /></p>
<p>One of the key items to look at here is the PPM Total, which equates to the commonly-heard DPMO, or defects per million opportunities. In this case, the DPMO is nearly 80,000, or about 8 percent.</p>
<p>Another measure of process capability is the <a href="http://blog.minitab.com/blog/quality-data-analysis-and-statistics/capability-analysis-part-2">Z.bench score</a>, which is a report of the process's sigma capability. In general terms, a 6 sigma process is one that has 3.4 defects per million opportunities. Adding the conventional 1.5 Z-shift, this appears to be about a 3-sigma process, or a little over 66,000 defects per million opportunities.</p>
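<p>The DPMO-to-sigma conversion is just an inverse normal lookup plus the conventional 1.5 shift, and it can be sketched in a few lines (the function name is mine):</p>

```python
from statistics import NormalDist

def sigma_level(dpmo, z_shift=1.5):
    """Process sigma level from defects per million opportunities,
    with the conventional 1.5-sigma shift added back."""
    return NormalDist().inv_cdf(1 - dpmo / 1e6) + z_shift

print(f"{sigma_level(3.4):.2f}")      # ~6.00: the classic six-sigma level
print(f"{sigma_level(66_807):.2f}")   # ~3.00: "a little over 66,000" DPMO
print(f"{sigma_level(80_000):.2f}")   # ~2.91: the bushing process
```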
<p>Clearly, there's a lot of room for improvement, and this preliminary analysis gives the team a measure against which to assess improvements they make to the process.</p>
<p>At this point, the project team looked carefully at the process to identify possible causes for rejecting bushings. They drew a <a href="http://blog.minitab.com/blog/understanding-statistics/five-types-of-fishbone-diagrams">fishbone diagram</a> that helped them identify four potential factors to analyze: whether the operator was skilled or unskilled, how long rods were used (15 or 25 hours), how frequently the curl tool was reground (after 20 or 30 hours), and whether the rod-holding mechanism was new or old.</p>
<p>The team then used Minitab Statistical Software to do <a href="http://blog.minitab.com/blog/understanding-statistics/guidelines-and-how-tos-for-the-2-sample-t-test">2-sample t-tests</a> on each of these factors. For each factor they studied, they collected 50 samples under each condition. For instance, they looked at 50 bushings made by skilled operators, and 50 made by unskilled operators. They also looked at 50 bushings made with rods that were replaced after 15 hours, and 50 made with rods replaced after 25 hours.</p>
<p>The t-tests revealed whether or not there was a statistically significant difference between the two conditions for each factor; if no significant difference existed, team members could conclude it didn't have a large impact on bushing rejection.</p>
<p>This team's hypothesis tests indicated that operator skill level and curl-tool regrinding did not have a significant effect on bushing rejection; however, 15-hour vs. 25-hour rod replacement and new vs. old rod-holding mechanisms did. Thus, a fairly simple analysis helped them identify which factors they should focus their improvement efforts on.</p>
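<p>A minimal sketch of the kind of 2-sample test involved, using Welch's unequal-variance version. The diameter readings below are deterministic stand-in data, not the study's measurements, and the 1.98 cut-off is the approximate 5% two-sided critical value at around 98 degrees of freedom:</p>

```python
import math
from statistics import mean, stdev

def welch_t(x, y):
    """Welch two-sample t statistic and its degrees of freedom."""
    nx, ny = len(x), len(y)
    vx, vy = stdev(x) ** 2 / nx, stdev(y) ** 2 / ny
    t = (mean(x) - mean(y)) / math.sqrt(vx + vy)
    df = (vx + vy) ** 2 / (vx**2 / (nx - 1) + vy**2 / (ny - 1))
    return t, df

# Stand-in diameter data (mm): two conditions, 50 readings each,
# identical spread but a 0.003 mm shift in the mean.
new_rods = [5.250 + 0.002 * math.sin(i) for i in range(50)]
old_rods = [5.253 + 0.002 * math.sin(i) for i in range(50)]
t, df = welch_t(new_rods, old_rods)
print(abs(t) > 1.98)   # True -> significant difference at alpha = 0.05
```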
<p><a href="http://blog.minitab.com/blog/understanding-statistics/applying-six-sigma-to-a-small-operation-part-2">In my next post, I'll review how the team used Minitab to apply what they learned in the Define, Measure, and Analyze phases of their project to the final two phases, Improve and Control, and the benefits they saw from the project</a>.</p>
Six Sigma
Tue, 04 Feb 2014 16:26:00 +0000
http://blog.minitab.com/blog/understanding-statistics/applying-six-sigma-to-a-small-operation
Eston Martz