Power and Sample Size

Blog posts and articles about statistical power and sample size, especially in quality improvement projects.

In Parts 1 and 2 of this blog series, I wrote about how statistical inference uses data from a sample of individuals to reach conclusions about the whole population. That’s a very powerful tool, but you must check your assumptions when you make statistical inferences. Violating any of these assumptions can result in false positives or false negatives, thus invalidating your results.  The common... Continue Reading
In Part 1 of this blog series, I wrote about how statistical inference uses data from a sample of individuals to reach conclusions about the whole population. That’s a very powerful tool, but you must check your assumptions when you make statistical inferences. Violating any of these assumptions can result in false positives or false negatives, thus invalidating your results.  The common data... Continue Reading

If your work involves quality improvement, you've at least heard of Design of Experiments (DOE). You probably know it's the most efficient way to optimize and improve your process. But many of us find DOE intimidating, especially if it's not a tool we use often. How do you select an appropriate design, and ensure you've got the right number of factors and levels? And after you've gathered your... Continue Reading
With another Halloween almost upon us, here's a look back at some of the posts we've written about this holiday specifically, and about various creepy things in general. I hope that you enjoy this roundup of 13 scary statistics posts...and that they won't keep you up at night! 1. How to Make Minitab Wear a Halloween Costume As Halloween nears, you can customize your Minitab interface to match the... Continue Reading
Since the release of Minitab Express in 2014, we’ve often received questions in technical support about the differences between Express and Minitab 17.  In this post, I’ll attempt to provide a comparison between these two Minitab products. What Is Minitab 17? Minitab 17 is an all-in-one graphical and statistical analysis package that includes basic analysis tools such as hypothesis testing,... Continue Reading
True or false: When comparing a parameter for two sets of measurements, you should always use a hypothesis test to determine whether the difference is statistically significant. The answer? (drumroll...) True! ...and False! To understand this paradoxical answer, you need to keep in mind the difference between samples, populations, and descriptive and inferential statistics.  Descriptive Statistics and... Continue Reading
So the data you nurtured, that you worked so hard to format and make useful, failed the normality test. Time to face the truth: despite your best efforts, that data set is never going to measure up to the assumption you may have been trained to fervently look for. Your data's lack of normality seems to make it poorly suited for analysis. Now what? Take it easy. Don't get uptight. Just let your data... Continue Reading
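One quick, informal way to see why a data set might fail a normality test is to check its skewness: symmetric data scores near zero, while a large absolute value is a red flag. The sketch below (an illustrative stdlib-only calculation, not a formal normality test and not how Minitab does it) uses the adjusted Fisher-Pearson sample skewness formula:

```python
from statistics import mean, stdev

def skewness(data):
    """Adjusted Fisher-Pearson sample skewness: near 0 for symmetric
    data; a large absolute value is a quick red flag for non-normality
    (a rough screen, not a formal normality test)."""
    n = len(data)
    m, s = mean(data), stdev(data)
    return (n / ((n - 1) * (n - 2))) * sum(((x - m) / s) ** 3 for x in data)

print(skewness([1, 2, 3, 4, 5]))   # symmetric data -> 0.0
print(skewness([1, 1, 1, 1, 10]))  # right-skewed data -> positive
```

A strongly positive (or negative) result suggests a transformation or a nonparametric method may serve you better than forcing a normal-based analysis.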
See if this sounds fair to you. I flip a coin. Heads: You win $1. Tails: You pay me $1. You may not like games of chance, but you have to admit it seems like a fair game. At least, assuming the coin is a normal, balanced coin, and assuming I’m not a sleight-of-hand magician who can control the coin. How about this next game? You pay me $2 to play. I flip a coin over and over until it comes up heads. Your... Continue Reading
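You can check the "fairness" of games like these by simulation. The sketch below estimates the average winnings of the $1 heads/tails game, and of the repeated-flip game under one assumed payout rule (the classic St. Petersburg setup, where the pot starts at $1 and doubles on every tail before the first head; the excerpt cuts off before stating the actual rule, so treat that as an assumption):

```python
import random

def fair_coin_game(trials=100_000, seed=1):
    """Average winnings of the $1 heads/tails game: close to $0."""
    rng = random.Random(seed)
    return sum(1 if rng.random() < 0.5 else -1 for _ in range(trials)) / trials

def st_petersburg(trials=100_000, seed=1):
    """Average payout assuming the pot starts at $1 and doubles on
    every tail before the first head (classic St. Petersburg rule,
    assumed here, the excerpt does not state the payout)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pot = 1
        while rng.random() < 0.5:  # tail: keep flipping, pot doubles
            pot *= 2
        total += pot
    return total / trials

print(fair_coin_game())  # hovers near 0
print(st_petersburg())   # sample mean is unstable: the true expectation is infinite
```

The first game's simulated mean settles near zero; the second game's mean refuses to settle, which is exactly what makes it a famous paradox.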
Have you ever accidentally done statistics? Not all of us can (or would want to) be “stat nerds,” but the word “statistics” shouldn’t be scary. In fact, we all analyze things that happen to us every day. Sometimes we don’t realize that we are compiling data and analyzing it, but that’s exactly what we are doing. Yes, there are advanced statistical concepts that can be difficult to understand—but... Continue Reading
When you perform a statistical analysis, you want to make sure you collect enough data that your results are reliable. But you also want to avoid wasting time and money collecting more data than you need. So it's important to find an appropriate middle ground when determining your sample size. Now, technically, the Major League Baseball regular season isn't a statistical analysis. But it does kind... Continue Reading
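The "middle ground" sample size can be computed before you collect anything. As one illustration (a normal-approximation formula for comparing two means, a textbook shortcut rather than Minitab's exact power calculation), the required size per group depends only on the effect size you want to detect, the significance level, and the desired power:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample
    comparison of means: n = 2 * (z_alpha + z_beta)^2 / d^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for power = 0.80
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

print(n_per_group(0.5))  # medium effect: 63 per group
print(n_per_group(0.2))  # small effect: 393 per group
```

Note how sharply the requirement grows as the effect shrinks: detecting a small effect needs roughly six times the data of a medium one.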
You often hear the data being blamed when an analysis is not delivering the answers you wanted or expected. I was recently reminded that the data chosen or collected for a specific analysis is determined by the analyst, so there is no such thing as bad data—only bad analysis.  This made me think about the steps an analyst can take to minimise the risk of producing analysis that fails to answer... Continue Reading
In statistics, t-tests are a type of hypothesis test that allows you to compare means. They are called t-tests because each t-test boils your sample data down to one number, the t-value. If you understand how t-tests calculate t-values, you’re well on your way to understanding how these tests work. In this series of posts, I'm focusing on concepts rather than equations to show how t-tests work.... Continue Reading
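For the simplest case, the one-sample t-value really is just signal over noise: the distance between the sample mean and the hypothesized mean, divided by the standard error. A minimal stdlib sketch (illustrative, not Minitab's implementation):

```python
import math
from statistics import mean, stdev

def one_sample_t(data, mu0):
    """One-sample t-value: signal (mean - mu0) over noise (s / sqrt(n))."""
    n = len(data)
    return (mean(data) - mu0) / (stdev(data) / math.sqrt(n))

print(one_sample_t([5, 6, 7, 8, 9], 5))  # -> ~2.828
```

A t-value near zero says the sample mean sits close to the hypothesized value relative to the noise; a large absolute t-value says the difference stands out from the noise.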
When it comes to statistical analyses, collecting a large enough sample size is essential to obtaining quality results. If your sample size is too small, confidence intervals may be too wide to be useful, linear models may lack necessary precision, and control charts may get so out of control that they become self-aware and rise up against humankind. Okay, that last point may have been... Continue Reading
Five-point Likert scales are commonly associated with surveys and are used in a wide variety of settings. You’ve run into the Likert scale if you’ve ever been asked whether you strongly agree, agree, neither agree nor disagree, disagree, or strongly disagree about something. The worksheet to the right shows what five-point Likert data look like when you have two groups. Because Likert item data are... Continue Reading
P values have been around for nearly a century and they’ve been the subject of criticism since their origins. In recent years, the debate over P values has risen to a fever pitch. In particular, there are serious fears that P values are misused to such an extent that it has actually damaged science. In March 2016, spurred on by the growing concerns, the American Statistical Association (ASA) did... Continue Reading
I am a bit of an Oscar fanatic. Every year after the ceremony, I religiously go online to find out who won the awards and listen to their acceptance speeches. This year, I was so chuffed to learn that Leonardo Di Caprio won his first Oscar for his performance in The Revenant in the 88th Academy Awards—after five nominations in previous ceremonies. As a longtime Di Caprio fan, I still remember... Continue Reading
There are many reasons why a distribution might not be normal/Gaussian. A non-normal pattern might be caused by several distributions being mixed together, by a drift over time, by one or several outliers, by asymmetrical behavior, by out-of-control points, and so on. I recently collected the scores of three different teams (the Blue team, the Yellow team and the Pink team) after a laser... Continue Reading
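The "several distributions mixed together" cause is easy to demonstrate by simulation. In the sketch below (an illustrative stdlib example, not the post's laser-game data), each component is perfectly normal, yet the combined sample is bimodal and would fail any normality check:

```python
import random
from statistics import mean

def mixture_sample(n=10_000, seed=7):
    """50/50 mixture of N(0, 1) and N(5, 1): each component is normal,
    but the combined sample has two humps and is far from normal."""
    rng = random.Random(seed)
    return [rng.gauss(0, 1) if rng.random() < 0.5 else rng.gauss(5, 1)
            for _ in range(n)]

data = mixture_sample()
print(mean(data))  # centered near 2.5, between the two humps
```

The overall mean lands between the two component means, a value that almost no individual observation is actually near, which is one more reason to plot your data before summarizing it.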
Did you ever wonder why statistical analyses and concepts often have such weird, cryptic names? One conspiracy theory points to the workings of a secret committee called the ICSSNN. The International Committee for Sadistic Statistical Nomenclature and Numerophobia was formed solely to befuddle and subjugate the masses. Its mission: To select the most awkward, obscure, and confusing name possible... Continue Reading
As Halloween approaches, you are probably taking the necessary steps to protect yourself from the various ghosts, goblins, and witches that are prowling around. Monsters of all sorts are out to get you, unless they’re sufficiently bribed with candy offerings! I’m here to warn you about a ghoul that all statisticians and data scientists need to be aware of: phantom degrees of freedom. These phantoms... Continue Reading
Step 3 in our DOE problem solving methodology is to determine how many times to replicate the base experiment plan. The discussion in Part 3 ended with the conclusion that our 4 factors could best be studied using all 16 combinations of the high and low settings for each factor, a full factorial. Each golfer will perform half of the sixteen possible combinations and each golfer’s data could stand as... Continue Reading
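The 16 runs of a 2^4 full factorial are just every high/low combination of the four factors, and they can be enumerated directly. The sketch below (factor names are hypothetical placeholders, the post's actual golf factors may differ) also shows one standard way to split the runs between two golfers: block on the four-way interaction so each golfer gets a balanced half:

```python
from itertools import product

# Hypothetical factor names for illustration; the post's actual factors may differ.
factors = ["Driver", "Ball", "Tee", "Stance"]
runs = list(product([-1, +1], repeat=len(factors)))  # 2^4 = 16 combinations

# A common way to split 16 runs between two golfers: block on the
# four-way interaction so each golfer performs a balanced half.
golfer_a = [r for r in runs if r[0] * r[1] * r[2] * r[3] == +1]
golfer_b = [r for r in runs if r[0] * r[1] * r[2] * r[3] == -1]
print(len(runs), len(golfer_a), len(golfer_b))  # 16 8 8
```

Confounding the blocks with the highest-order interaction is the usual choice because four-way interactions are rarely of practical interest, so little information is sacrificed.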