
DOE

Blog posts and articles about DOE (Design of Experiments), a statistical method used in quality improvement.

As someone who has collected and analyzed real data for a living, the idea of using simulated data for a Monte Carlo simulation sounds a bit odd. How can you improve a real product with simulated data? In this post, I’ll help you understand the methods behind Monte Carlo simulation and walk you through a simulation example using Companion by Minitab. Companion by Minitab is a software platform that... Continue Reading
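To make the idea feel less odd before you open Companion, here is a bare-bones Monte Carlo sketch in Python. The transfer function, input distributions, and spec limits below are made-up assumptions for illustration, not the Companion example from the post:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical transfer function: Y = X1 * X2. A real model would come
# from process knowledge or a regression/DOE analysis.
n = 100_000                                   # number of simulated runs
x1 = rng.normal(loc=10.0, scale=0.5, size=n)  # input 1: mean 10, sd 0.5
x2 = rng.normal(loc=5.0, scale=0.2, size=n)   # input 2: mean 5, sd 0.2
y = x1 * x2                                   # simulated output

# Estimate how often the output falls outside hypothetical spec limits
lsl, usl = 45.0, 55.0
pct_out = np.mean((y < lsl) | (y > usl)) * 100
print(f"Mean output: {y.mean():.2f}, % out of spec: {pct_out:.2f}%")
```

The point of the method is exactly this: you feed assumed input distributions through a model of the process thousands of times, and the simulated output tells you how the real product is likely to behave.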
Have you ever tried to install ventilated shelving in a closet?  You know: the heavy-duty, white- or gray-colored vinyl-coated wire shelving? The one that allows you to get organized, more efficient with space, and is strong and maintenance-free? Yep, that’s the one. Did I mention this stuff is strong?  As in, really hard to cut?  It seems like a simple 4-step project. Measure the closet, go to the... Continue Reading

Grocery shopping. For some, it's the most dreaded household activity. For others, it's fun, or perhaps just a “necessary evil.” Personally, I enjoy it! My co-worker, Ginger, a content manager here at Minitab, opened my eyes to something that made me love grocery shopping even more: she shared the data behind her family’s shopping trips. Being something of a data nerd, I really geeked out over the... Continue Reading
If you regularly perform regression analysis, you know that R² is a statistic used to evaluate the fit of your model. You may even know the standard definition of R²: the percentage of variation in the response that is explained by the model. Fair enough. With Minitab Statistical Software doing all the heavy lifting to calculate your R² values, that may be all you ever need to know. But if you’re... Continue Reading
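To make that definition concrete, here is a short Python sketch with toy numbers that computes R² exactly as the definition reads: one minus the ratio of unexplained (residual) variation to total variation. The data values are invented for illustration:

```python
import numpy as np

# Toy data for illustration (hypothetical values)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Fit a simple linear model y = b0 + b1*x by least squares
b1, b0 = np.polyfit(x, y, deg=1)
y_hat = b0 + b1 * x

# R^2 = 1 - SS_residual / SS_total: the fraction of variation in the
# response that the model explains
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"R-squared: {r_squared:.3f}")
```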
In Parts 1 and 2 of Gauging Gage we looked at the numbers of parts, operators, and replicates used in a Gage R&R Study and how accurately we could estimate %Contribution based on the choice for each.  In doing so, I hoped to provide you with valuable and interesting information, but mostly I hoped to make you like me.  I mean like me so much that if I told you that you were doing... Continue Reading
Earlier, I wrote about the different types of data statisticians typically encounter. In this post, we're going to look at why, when given a choice in the matter, we prefer to analyze continuous data rather than categorical/attribute or discrete data.  As a reminder, when we assign something to a group or give it a name, we have created attribute or categorical data.  If we count something, like... Continue Reading
You run a capability analysis and your Cpk is bad. Now what? First, let’s start by defining what “bad” is. In simple terms, the smaller the Cpk, the more defects you have. So the larger your Cpk is, the better. Many practitioners use a Cpk of 1.33 as the gold standard, so we’ll treat that as the gold standard here, too. Suppose we collect some data and run a capability analysis using Minitab Statistical Software... Continue Reading
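As a rough illustration of the arithmetic behind that benchmark, here is a minimal Python sketch with simulated data. Note that this uses the overall standard deviation for simplicity; Minitab's Cpk is based on a within-subgroup sigma estimate, so treat this as a sketch of the idea rather than Minitab's calculation:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
data = rng.normal(loc=10.2, scale=0.4, size=100)  # hypothetical measurements

lsl, usl = 9.0, 11.0  # hypothetical spec limits

# Cpk compares the distance from the process mean to the NEARER spec
# limit against three standard deviations; an off-center process is
# penalized even if its spread is small.
mean, sigma = data.mean(), data.std(ddof=1)
cpk = min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))
print(f"Cpk = {cpk:.2f}  (common benchmark: 1.33)")
```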
by Kevin Clay, guest blogger In transactional or service processes, we often deal with lead-time data, and usually that data does not follow the normal distribution. Consider a Lean Six Sigma project to reduce the lead time required to install an information technology solution at a customer site. It should take no more than 30 days—working 10 hours per day Monday–Friday—to complete, test and... Continue Reading
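One common way to handle right-skewed lead-time data is to fit a skewed distribution instead of forcing normality. This Python sketch simulates lead times, fits a lognormal (a frequent choice for lead-time data, though the post should guide the actual distribution selection), and estimates the chance of blowing the 30-day target; all numbers are assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=5)
lead_times = rng.lognormal(mean=3.0, sigma=0.4, size=60)  # days, right-skewed

# Fit a lognormal with the location fixed at zero
shape, loc, scale = stats.lognorm.fit(lead_times, floc=0)

# Probability an install exceeds the 30-day requirement
p_over_30 = 1 - stats.lognorm.cdf(30, shape, loc=loc, scale=scale)
print(f"Estimated P(lead time > 30 days): {p_over_30:.1%}")
```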
"You take 10 parts and have 3 operators measure each 2 times." This standard approach to a Gage R&R experiment is so common, so accepted, so ubiquitous that few people ever question whether it is effective.  Obviously one could look at whether 3 is an adequate number of operators or 2 an adequate number of replicates, but in this first of a series of posts about "Gauging Gage," I want to look at... Continue Reading
In Part 1 of this blog series, I compared Six Sigma to a diamond because both are valuable, have many facets and have withstood the test of time. I also explained how the term “Six Sigma” can be used to summarize a variety of concepts, including philosophy, tools, methodology, or metrics. In this post, I’ll explain short/long-term variation and between/within-subgroup variation and how they help... Continue Reading
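A quick way to see the short-term/long-term distinction numerically is to compare a pooled within-subgroup standard deviation against the overall standard deviation. This Python sketch uses simulated subgroups whose mean drifts over time (all values are made up), so the long-term estimate comes out larger than the short-term one:

```python
import numpy as np

rng = np.random.default_rng(seed=8)

# 20 subgroups of 5 measurements; the subgroup mean drifts over time
subgroups = np.array([rng.normal(loc=10 + 0.05 * i, scale=0.5, size=5)
                      for i in range(20)])

within_sd = np.sqrt(subgroups.var(axis=1, ddof=1).mean())  # short-term
overall_sd = subgroups.std(ddof=1)                          # long-term
print(f"Within-subgroup (short-term) sd: {within_sd:.3f}")
print(f"Overall (long-term) sd:          {overall_sd:.3f}")
```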
In my last post, I wrote about making a cluttered data set easier to work with by removing unneeded columns entirely, and by displaying just those columns you want to work with now. But too much unneeded data isn't always the problem. What can you do when someone gives you data that isn't organized the way you need it to be?   That happens for a variety of reasons, but most often it's because the... Continue Reading
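As a generic illustration of that kind of reorganization (the post itself uses Minitab's worksheet tools), here is how a "wide" table with one column per operator can be stacked into a single measurement column using pandas; the column names and values are hypothetical:

```python
import pandas as pd

# Hypothetical "wide" data: one column per operator, which many
# analyses can't use directly
wide = pd.DataFrame({
    "Part":      [1, 2, 3],
    "Operator1": [10.1, 10.3, 9.9],
    "Operator2": [10.2, 10.4, 9.8],
})

# Reshape to "stacked" (long) form: one column of measurements plus a
# column identifying which operator made each one
long = wide.melt(id_vars="Part", var_name="Operator",
                 value_name="Measurement")
print(long)
```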
In its industry guidance to companies that manufacture drugs and biological products for people and animals, the Food and Drug Administration (FDA) recommends three stages for process validation: Process Design, Process Qualification, and Continued Process Verification. In this post, we will focus on that third stage. Stage 3: Continued Process Verification Per the FDA guidelines, the goal of... Continue Reading
People can make mistakes when they test a hypothesis with statistical analysis. Specifically, they can make either Type I or Type II errors. As you analyze your own data and test hypotheses, understanding the difference between Type I and Type II errors is extremely important, because there's a risk of making each type of error in every analysis, and the amount of risk is in your control. So if... Continue Reading
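One way to see that the Type I risk really is in your control is to simulate it. The Python sketch below repeatedly samples from a world where the null hypothesis is true and counts how often a t-test falsely rejects; the false-rejection rate lands near whatever alpha you set. The scenario and numbers are assumptions for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=11)
n_sims, alpha = 10_000, 0.05

# Type I error: rejecting H0 when it is actually true. Simulate samples
# where the null (population mean = 0) really holds.
false_rejects = 0
for _ in range(n_sims):
    sample = rng.normal(loc=0.0, scale=1.0, size=20)
    t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)
    false_rejects += p_value < alpha

print(f"Observed Type I error rate: {false_rejects / n_sims:.3f} "
      f"(set by alpha = {alpha})")
```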
A recent discussion on the Minitab Network on LinkedIn pertained to the I-MR chart. In the course of the conversation, a couple of people referred to it as "The Swiss Army Knife of control charts," and that's a pretty great description. You might be able to find more specific tools for specific applications, but in many cases, the I-MR chart gets the job done quite adequately. When you're... Continue Reading
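Part of what makes the I-MR chart such a workhorse is that the arithmetic is simple. This Python sketch computes the usual control limits from the average moving range, using the standard d2-based constants for a moving range of size 2; the data points are made up:

```python
import numpy as np

# Hypothetical individual measurements, in time order
x = np.array([9.8, 10.1, 10.4, 9.9, 10.2, 10.0, 9.7, 10.3, 10.1, 9.9])

# Moving ranges between consecutive points
mr = np.abs(np.diff(x))
mr_bar = mr.mean()

# I chart limits: x-bar +/- 2.66 * MR-bar; MR chart UCL: 3.267 * MR-bar
x_bar = x.mean()
ucl_i, lcl_i = x_bar + 2.66 * mr_bar, x_bar - 2.66 * mr_bar
ucl_mr = 3.267 * mr_bar
print(f"I chart:  center={x_bar:.2f}, LCL={lcl_i:.2f}, UCL={ucl_i:.2f}")
print(f"MR chart: center={mr_bar:.2f}, UCL={ucl_mr:.2f}")
```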
Right now I’m enjoying my daily dose of morning joe. As the steam rises off the cup, the dark rich liquid triggers a powerful enzyme cascade that jump-starts my brain and central nervous system, delivering potent glints of perspicacity into the dark crevices of my still-dormant consciousness. Feels good, yeah! But is it good for me? Let’s see what the studies say… Drinking more than 4 cups of coffee... Continue Reading
Statistics can be challenging, especially if you're not analyzing data and interpreting the results every day. Statistical software makes things easier by handling the arduous mathematical work involved in statistics. But ultimately, we're responsible for correctly interpreting and communicating what the results of our analyses show. The p-value is probably the most frequently cited statistic. We... Continue Reading
As a person who loves baking (and eating) cakes, I find it bothersome to go through all the effort of baking a cake when the end result is too dry for my taste. For that reason, I decided to use a designed experiment in Minitab to help me reduce the moisture loss in baked chocolate cakes, and find the optimal settings of my input factors to produce a moist baked chocolate cake. I’ll share the... Continue Reading
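To show what the run list for such an experiment looks like, here is a sketch that enumerates a 2³ full factorial in Python. The factor names and levels are my own stand-ins, not the recipe settings from the post; each of the eight runs would be baked and its moisture loss recorded as the response:

```python
from itertools import product

# Hypothetical factors for the cake experiment, each at two levels
factors = {
    "oven_temp": [325, 375],   # degrees F
    "bake_time": [30, 40],     # minutes
    "flour_amt": [2.5, 3.0],   # cups
}

# A 2^3 full factorial: every combination of the low/high levels
runs = list(product(*factors.values()))
print(f"{'Run':>3}  " + "  ".join(f"{name:>10}" for name in factors))
for i, run in enumerate(runs, start=1):
    print(f"{i:>3}  " + "  ".join(f"{level:>10}" for level in run))
```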
Histograms are one of the most common graphs used to display numeric data. Anyone who takes a statistics course is likely to learn about the histogram, and for good reason: histograms are easy to understand and can instantly tell you a lot about your data. Here are three of the most important things you can learn by looking at a histogram.  Shape—Mirror, Mirror, On the Wall… If the left side of a... Continue Reading
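If you want to experiment with shape yourself, here is a tiny Python sketch that bins simulated right-skewed data and prints a text histogram, so the asymmetry is visible even without plotting software; the data are made up:

```python
import numpy as np

rng = np.random.default_rng(seed=3)
data = rng.gamma(shape=2.0, scale=1.0, size=500)  # right-skewed example

# A symmetric histogram suggests something like the normal distribution;
# a long tail on one side indicates skew
counts, edges = np.histogram(data, bins=10)
for count, left, right in zip(counts, edges[:-1], edges[1:]):
    print(f"{left:5.1f}-{right:5.1f} | {'#' * (int(count) // 5)}")
```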
by Matthew Barsalou, guest blogger.  The old saying “if it walks like a duck, quacks like a duck and looks like a duck, then it must be a duck” may be appropriate in bird watching; however, the same idea can’t be applied when observing a statistical distribution. The dedicated ornithologist is often armed with binoculars and a field guide to the local birds and this should be sufficient. A... Continue Reading
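In the same spirit, a goodness-of-fit test gives you something more objective than eyeballing. Here's a minimal Python sketch applying the Anderson-Darling test for normality to simulated skewed data; the post walks through Minitab's own tools, so this is just a generic illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)
data = rng.exponential(scale=5.0, size=100)  # hypothetical skewed data

# Anderson-Darling test against the normal distribution: compare the
# statistic to the critical value at each significance level
result = stats.anderson(data, dist='norm')
print(f"A-D statistic: {result.statistic:.2f}")
for cv, sl in zip(result.critical_values, result.significance_level):
    verdict = "reject normality" if result.statistic > cv else "fail to reject"
    print(f"  at {sl}% significance: critical value {cv:.2f} -> {verdict}")
```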
Genichi Taguchi is famous for his pioneering methods of robust quality engineering. One of the major contributions that he made to quality improvement methods is Taguchi designs. Designed experiments were first used by agronomists during the last century. This method seemed highly theoretical at first, and was initially restricted to agronomy. Taguchi made the designed experiment approach more... Continue Reading