P Value

Blog posts and articles about how to use and interpret the P Value statistic in quality improvement efforts.

Back when I was an undergrad in statistics, I unfortunately spent an entire semester of my life taking a class, diligently crunching numbers with my TI-82, before realizing 1) that I was actually in an Analysis of Variance (ANOVA) class, 2) why I would want to use such a tool in the first place, and 3) that ANOVA doesn’t necessarily tell you a thing about variances. Fortunately, I've had a lot more... Continue Reading
I have two young children, and I work full-time, so my adult TV time is about as rare as finding a Kardashian-free tabloid.  So I can’t commit to just any TV show. It better be a good one. I was therefore extremely excited when Netflix analyzed viewer data to find out at what point watchers get hooked on the first season of various shows. Specifically, they identified the episode at which 70% of... Continue Reading
As Halloween approaches, you are probably taking the necessary steps to protect yourself from the various ghosts, goblins, and witches that are prowling around. Monsters of all sorts are out to get you, unless they’re sufficiently bribed with candy offerings! I’m here to warn you about a ghoul that all statisticians and data scientists need to be aware of: phantom degrees of freedom. These phantoms... Continue Reading
In Part 5 of our series, we began the analysis of the experiment data by reviewing analysis of covariance and blocking variables, two key concepts in the design and interpretation of your results. The 250-yard marker at the Tussey Mountain Driving Range, one of the locations where we conducted our golf experiment. Some of the golfers drove their balls well beyond this 250-yard marker during a few of... Continue Reading
By Matthew Barsalou, guest blogger Teaching process performance and capability studies is easier when actual process data is available for the student or trainee to practice with. As I have previously discussed at the Minitab Blog, a catapult can be used to generate data for a capability study. My last blog on using a catapult for this purpose was several years ago, so I would like to revisit... Continue Reading
In Part 3 of our series, we decided to test our 4 experimental factors, Club Face Tilt, Ball Characteristics, Club Shaft Flexibility, and Tee Height in a full factorial design because of the many advantages of that data collection plan. In Part 4 we concluded that each golfer should replicate their half fraction of the full factorial 5 times in order to have a high enough power to detect... Continue Reading
With Speaker John Boehner resigning, Kevin McCarthy quitting before the vote for him to be Speaker, and a possible government shutdown in the works, the Freedom Caucus has certainly been in the news frequently! Depending on your political bent, the Freedom Caucus has caused quite a disruption, for good or ill. Who are these politicians? The Freedom Caucus is a group of approximately 40... Continue Reading
Step 3 in our DOE problem solving methodology is to determine how many times to replicate the base experiment plan. The discussion in Part 3 ended with the conclusion that our 4 factors could best be studied using all 16 combinations of the high and low settings for each factor, a full factorial. Each golfer will perform half of the sixteen possible combinations and each golfer’s data could stand as... Continue Reading
An exciting new study sheds light on the relationship between P values and the replication of experimental results. This study highlights issues that I've emphasized repeatedly—it is crucial to interpret P values correctly, and significant results must be replicated to be trustworthy. The study also supports my disagreement with the decision by the Journal of Basic and Applied Social Psychology to b... Continue Reading
Repeated measures designs don’t fit our impression of a typical experiment in several key ways. When we think of an experiment, we often think of a design that has a clear distinction between the treatment and control groups. Each subject is in one, and only one, of these non-overlapping groups. Subjects who are in a treatment group are exposed to only one type of treatment. This is the... Continue Reading
If you use ordinary linear regression with a response of count data, it may work out fine (Part 1), or you may run into some problems (Part 2). Given that a count response could be problematic, why not use a regression procedure developed to handle a response of counts? A Poisson regression analysis is designed to analyze a regression model with a count response. First, let's try using Poisson... Continue Reading
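As a rough illustration of the idea behind this post (not the post's own Minitab analysis), a one-predictor Poisson regression with a log link can be fit by Newton's method on the log-likelihood. The data below are hypothetical counts invented for the sketch:

```python
import math

# Hypothetical predictor values and count responses (not from the post)
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [0, 1, 1, 2, 4, 5, 9, 13]

# Model: log(mu_i) = b0 + b1 * x_i, with y_i ~ Poisson(mu_i).
# Fit by Newton's method (Fisher scoring; identical here, since the
# log link is the canonical link for the Poisson family).
b0, b1 = math.log(sum(y) / len(y)), 0.0   # start at the overall mean rate
for _ in range(50):
    mu = [math.exp(b0 + b1 * xi) for xi in x]
    g0 = sum(yi - mi for yi, mi in zip(y, mu))                 # score for b0
    g1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))   # score for b1
    h00 = sum(mu)                                              # information matrix
    h01 = sum(mi * xi for mi, xi in zip(mu, x))
    h11 = sum(mi * xi * xi for mi, xi in zip(mu, x))
    det = h00 * h11 - h01 * h01
    db0 = (h11 * g0 - h01 * g1) / det                          # solve the 2x2 Newton step
    db1 = (h00 * g1 - h01 * g0) / det
    b0, b1 = b0 + db0, b1 + db1
    if abs(db0) + abs(db1) < 1e-10:
        break

# Each one-unit increase in x multiplies the expected count by exp(b1)
print(f"b0 = {b0:.3f}, b1 = {b1:.3f}")
```

At the maximum likelihood solution the fitted means satisfy the score equation sum(mu) = sum(y), which is a handy sanity check on convergence.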
My previous post showed an example of using ordinary linear regression to model a count response. For that particular count data, shown by the blue circles on the dot plot below, the model assumptions for linear regression were adequately satisfied. But frequently, count data may contain many values equal or close to 0. Also, the distribution of the counts may be right-skewed. In the quality field,... Continue Reading
Ever use dental floss to cut soft cheese? Or Alka Seltzer to clean your toilet bowl? You can find a host of nonconventional uses for ordinary objects online. Some are more peculiar than others. Ever use ordinary linear regression to evaluate a response (outcome) variable of counts?  Technically, ordinary linear regression was designed to evaluate a continuous response variable. A continuous... Continue Reading
In 2007, the Crayola crayon company encountered a problem. Labels were coming off of their crayons. Up to that point, Crayola had done little to implement data-driven methodology into the process of manufacturing their crayons. But that was about to change. An elementary data analysis showed that the adhesive didn’t consistently set properly when the labels were dry. Misting crayons as they went... Continue Reading
In regression analysis, overfitting a model is a real problem. An overfit model can cause the regression coefficients, p-values, and R-squared to be misleading. In this post, I explain what an overfit model is and how to detect and avoid this problem. An overfit model is one that is too complicated for your data set. When this happens, the regression model becomes tailored to fit the quirks and... Continue Reading
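To make the warning sign concrete, here is a small simulated sketch (hypothetical data, not from the post): when a model has as many coefficients as there are data points, it can chase the noise and fit the training sample essentially perfectly, which is exactly the tailoring-to-quirks problem described above.

```python
import numpy as np

rng = np.random.default_rng(1)

# The true relationship is linear; the noise is what an overfit model chases.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.2, size=10)
x_test = np.linspace(0.05, 0.95, 10)
y_test = 2 * x_test + rng.normal(0, 0.2, size=10)

def sse(degree):
    """Sum of squared errors on the training and test samples
    for a polynomial fit of the given degree."""
    fit = np.poly1d(np.polyfit(x_train, y_train, degree))
    train = float(np.sum((y_train - fit(x_train)) ** 2))
    test = float(np.sum((y_test - fit(x_test)) ** 2))
    return train, test

train1, test1 = sse(1)   # simple model: one slope, one intercept
train9, test9 = sse(9)   # overfit model: 10 coefficients for 10 points

# The degree-9 polynomial interpolates the training data almost exactly,
# but its in-sample "success" says nothing about new observations.
print(f"degree 1: train SSE = {train1:.4f}, test SSE = {test1:.4f}")
print(f"degree 9: train SSE = {train9:.4f}, test SSE = {test9:.4f}")
```

The near-zero training error of the degree-9 fit is the telltale symptom; the test error, computed on data the model never saw, is typically far worse than the simple model's.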
Imagine a multi-million dollar company that released a product without knowing the probability that it will fail after a certain amount of time. “We offer a 2 year warranty, but we have no idea what percentage of our products fail before 2 years.” Crazy, right? Anybody who wanted to ensure the quality of their product would perform a statistical analysis to look at the reliability and survival of... Continue Reading
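As a minimal sketch of the kind of question reliability analysis answers: if time-to-failure follows a Weibull distribution (the shape and scale values below are hypothetical, not from the post), the fraction of product failing before the 2-year warranty is just the CDF evaluated at t = 2.

```python
import math

def weibull_cdf(t, shape, scale):
    """P(failure time <= t) under a Weibull(shape, scale) model."""
    return 1.0 - math.exp(-((t / scale) ** shape))

# Hypothetical parameters, e.g. as estimated from life-test data
shape, scale = 1.8, 10.0   # scale in years

p_fail_2yr = weibull_cdf(2.0, shape, scale)
print(f"Estimated fraction failing under a 2-year warranty: {p_fail_2yr:.1%}")
```

With these illustrative parameters, a bit over 5% of units would be expected to fail within the warranty period; in practice the shape and scale would be estimated from failure-time data rather than assumed.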
If you want to use data to predict the impact of different variables, whether it's for business or some personal interest, you need to create a model based on the best information you have at your disposal. In this post and subsequent posts throughout the football season, I'm going to share how I've been developing and applying a model for predicting the outcomes of 4th down decisions in Big... Continue Reading
by Colin Courchesne, guest blogger, representing his Governor's School research team.   High-level research opportunities for high school students are rare; however, that was just what the New Jersey Governor’s School of Engineering and Technology provided.  Bringing together the best and brightest rising seniors from across the state, the Governor’s School, or GSET for short, tasks teams of... Continue Reading
Just 100 years ago, very few statistical tools were available and the field was largely unknown. Since then, there has been an explosion of tools available, as well as ever-increasing awareness and use of statistics.   While most readers of the Minitab Blog are looking to pick up new tools or improve their use of commonly-applied ones, I thought it would be worth stepping back and talking about one... Continue Reading