Since the release of Minitab
Express in 2014, we’ve often received questions in technical
support about the differences between Express and Minitab 17.
In this post, I’ll attempt to provide a comparison between these
two Minitab products.
What Is Minitab 17?
Minitab 17 is an all-in-one graphical and statistical analysis
package that includes basic analysis tools such as hypothesis
testing,... Continue Reading
True or false: When comparing a parameter for two sets of
measurements, you should always use a hypothesis test to determine
whether the difference is statistically significant.
The answer? (drumroll...) False!
To understand this paradoxical answer, you need to keep in mind the
difference between samples and populations, and between descriptive
and inferential statistics.... Continue Reading
So the data you nurtured, that you worked so hard to format and
make useful, failed the normality test.
Time to face the truth: despite your best efforts, that data set
is never going to measure up to the assumption you may
have been trained to fervently look for.
Your data's lack of normality seems to make it poorly suited for
analysis. Now what?
Take it easy. Don't get uptight. Just let your data... Continue Reading
See if this
sounds fair to you. I flip a coin.
Heads: You win $1.
Tails: You pay me $1.
You may not like games of chance, but you have to admit it seems
like a fair game. At least, assuming the coin is a normal, balanced
coin, and assuming I’m not a sleight-of-hand magician who can
control the coin.
How about this next game?
You pay me $2 to play.
I flip a coin over and over until it comes up heads.
Your... Continue Reading
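The teaser stops before stating the payout, but this setup looks like the classic St. Petersburg game, where the prize doubles with every flip. Under that assumption, here is a quick simulation of your average net winnings; the $2^(k-1) payout rule is my assumption for illustration, not necessarily the post's:

```python
import random

def flips_until_heads(rng):
    """Number of flips of a fair coin up to and including the first heads."""
    flips = 1
    while rng.random() < 0.5:  # treat < 0.5 as tails
        flips += 1
    return flips

def average_net_winnings(trials=100_000, entry_fee=2.0, seed=1):
    """Simulate the game: pay the fee, then win $2**(k-1) for k flips."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        k = flips_until_heads(rng)
        total += 2.0 ** (k - 1) - entry_fee
    return total / trials

print(f"average net winnings per game: ${average_net_winnings():.2f}")
```

Because the payout doubles while the probability halves, the theoretical expected value is infinite, yet any finite simulation gives a modest average. That tension is exactly what makes the game feel unfair at a $2 entry fee.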
Have you ever accidentally done statistics? Not all of us can
(or would want to) be “stat nerds,” but the word “statistics”
shouldn’t be scary. In fact, we all analyze things that happen to
us every day. Sometimes we don’t realize that we are compiling data
and analyzing it, but that’s exactly what we are doing. Yes, there
are advanced statistical concepts that can be difficult to
understand—but... Continue Reading
When you perform a statistical analysis, you want to make sure you
collect enough data that your results are reliable. But you also
want to avoid wasting time and money collecting more data than you
need. So it's important to find an appropriate middle ground when
determining your sample size.
Now, technically, the Major League Baseball regular season isn't
a statistical analysis. But it does kind... Continue Reading
You often hear the data being
blamed when an analysis is not delivering the answers you wanted or
expected. I was recently reminded that the data chosen or collected
for a specific analysis is determined by the analyst, so there is
no such thing as bad data, only bad analysis.
This made me think about the
steps an analyst can take to minimise the risk of producing
analysis that fails to answer... Continue Reading
In statistics, t-tests are a type of hypothesis test that allows
you to compare means. They are called t-tests because each t-test
boils your sample data down to one number, the t-value. If you
understand how t-tests calculate t-values, you’re well on your way
to understanding how these tests work.
In this series of posts, I'm focusing on concepts rather than
equations to show how t-tests work.... Continue Reading
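To make that signal-to-noise idea concrete, here is a minimal sketch of a one-sample t-value in Python. The fill weights and the 20 g target are made-up numbers for illustration only:

```python
import math
from statistics import mean, stdev

def one_sample_t(sample, mu0):
    """t = (sample mean - hypothesized mean) / standard error of the mean."""
    n = len(sample)
    return (mean(sample) - mu0) / (stdev(sample) / math.sqrt(n))

# Hypothetical fill weights, tested against a target of 20 g.
weights = [20.1, 19.8, 21.2, 20.7, 19.5, 20.9]
t_value = one_sample_t(weights, mu0=20.0)
```

The numerator is the signal (how far the sample mean sits from the hypothesized value) and the denominator is the noise (how much sample means are expected to vary), so a large |t| means the signal stands out from the noise.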
When it comes to statistical analyses, collecting a large enough
sample size is essential to obtaining quality results. If your
sample size is too small, confidence intervals may be too wide to
be useful, linear models may lack necessary precision, and
control charts may get so out of control that they become
self-aware and rise up against humankind.
Okay, that last point may have been... Continue Reading
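To see how sample size drives precision, here is a small sketch of the margin of error for a mean under a normal approximation. The standard deviation of 5.0 and the 1.96 critical value are illustrative assumptions:

```python
import math

Z95 = 1.96  # large-sample 95% critical value from the normal distribution

def ci_half_width(sigma, n):
    """Half-width (margin of error) of an approximate 95% CI for a mean."""
    return Z95 * sigma / math.sqrt(n)

for n in (10, 40, 160):
    print(f"n = {n:4d}  margin of error = {ci_half_width(5.0, n):.2f}")
```

Because the width shrinks with the square root of n, quadrupling the sample size only halves the margin of error, which is why collecting ever more data eventually stops paying off.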
Likert scales are commonly associated with surveys and are used in
a wide variety of settings. You’ve run into the Likert scale if
you’ve ever been asked whether you strongly agree, agree, neither
agree nor disagree, disagree, or strongly disagree about something.
The worksheet to the right shows what five-point Likert data look
like when you have two groups.
Because Likert item data are... Continue Reading
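The teaser cuts off there, but one common way to compare two groups of ordinal Likert responses is a nonparametric test such as Mann-Whitney. Here is a minimal sketch of the U statistic with made-up five-point responses; it illustrates the idea, not necessarily the analysis the post performs:

```python
def mann_whitney_u(group_a, group_b):
    """U statistic for group_a: count of (a, b) pairs with a > b, ties as 1/2."""
    u = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Hypothetical five-point Likert responses for two groups.
group_1 = [4, 5, 3, 4, 2, 5, 4]
group_2 = [2, 3, 3, 1, 4, 2, 3]
u = mann_whitney_u(group_1, group_2)
```

The appeal for Likert items is that U depends only on which response is higher in each pair, never on treating the 1-to-5 codes as true interval-scale measurements.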
P values have been around for nearly a century and they’ve been
the subject of criticism since their origins. In recent years, the
debate over P values has risen to a fever pitch. In particular,
there are serious fears that P values are misused to such an extent
that it has actually damaged science.
In March 2016, spurred on by the growing concerns, the American
Statistical Association (ASA) did... Continue Reading
I am a bit of an Oscar fanatic.
Every year after the ceremony, I religiously go online to find out
who won the awards and listen to their acceptance speeches. This
year, I was so chuffed to learn that Leonardo DiCaprio won his first
Oscar for his performance in The Revenant at the Academy Awards, after
five nominations in previous ceremonies. As a longtime DiCaprio fan, I
still remember... Continue Reading
There are many reasons why a distribution might not be
normal/Gaussian. A non-normal pattern might be caused by several
distributions being mixed together, by a drift over time, by one or
several outliers, by asymmetrical behavior, by some out-of-control
points, and so on.
I recently collected the scores of three different teams (the
Blue team, the Yellow team and the Pink team) after a laser... Continue Reading
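As a rough sketch of how mixed distributions show up in pooled data, here is a simulation with made-up team means of 60, 80, and 100 and a common spread; the numbers are illustrative, not the post's actual scores:

```python
import random

rng = random.Random(7)

# Hypothetical scores: each team clusters around its own mean, so the
# pooled data set is a mixture of three distributions, not one population.
blue   = [rng.gauss(60, 5) for _ in range(50)]
yellow = [rng.gauss(80, 5) for _ in range(50)]
pink   = [rng.gauss(100, 5) for _ in range(50)]
pooled = blue + yellow + pink

def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# The between-team differences inflate the pooled variance, one hint that
# a failed normality test reflects a mixture rather than a single bell curve.
print(sample_variance(blue), sample_variance(pooled))
```

Each team on its own is roughly normal; it is only the pooled data that fail, which is why splitting the data by a grouping variable is often the first thing to try after a failed normality test.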
Did you ever wonder why statistical analyses and concepts often have
such weird, cryptic names?
One conspiracy theory points to the workings of a secret
committee called the ICSSNN. The International Committee for
Sadistic Statistical Nomenclature and Numerophobia was formed
solely to befuddle and subjugate the masses. Its mission: To select
the most awkward, obscure, and confusing name possible... Continue Reading
As Halloween approaches, you are probably taking the necessary steps to protect
yourself from the various ghosts, goblins, and witches that are prowling
around. Monsters of all sorts are out to get you, unless they’re
sufficiently bribed with candy offerings!
I’m here to warn you about a ghoul that all statisticians and
data scientists need to be aware of: phantom degrees of freedom.
These phantoms... Continue Reading
Step 3 in our DOE problem-solving methodology is to determine how many
times to replicate the base experiment plan. The discussion in Part 3
ended with the conclusion that our
4 factors could best be studied using all 16 combinations of the
high and low settings for each factor, a full factorial. Each
golfer will perform half of the sixteen possible combinations and
each golfer’s data could stand as... Continue Reading
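The split between golfers can be sketched in code. A standard way to divide a 2^4 full factorial into two half-blocks is to confound the block with the four-way ABCD interaction; the teaser does not spell out the actual assignment, so treat this as one plausible scheme rather than the post's method:

```python
from itertools import product

# All 16 runs of a 2**4 full factorial: -1 = low setting, +1 = high.
runs = list(product([-1, 1], repeat=4))

# Confound the "golfer" block with the four-way interaction ABCD, so
# each golfer gets the 8 runs where the product of the settings matches.
golfer_1 = [r for r in runs if r[0] * r[1] * r[2] * r[3] == +1]
golfer_2 = [r for r in runs if r[0] * r[1] * r[2] * r[3] == -1]

print(len(golfer_1), len(golfer_2))  # 8 runs each
```

Confounding the block with the highest-order interaction sacrifices only the effect least likely to matter, leaving all main effects and two-way interactions estimable across both golfers.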
Step 1 in our DOE problem-solving methodology
is to use process experts, literature, or past experiments to
characterize the process and define the problem. Since I had little
experience with golf myself, this was an important step for me.
This is not an uncommon situation. Experiment designers often
find themselves working on processes that they have little or no
experience with. For example, a... Continue Reading
Repeated measures designs don’t fit our impression of a typical
experiment in several key ways. When we think of an experiment, we
often think of a design that has a clear distinction between the
treatment and control groups. Each subject is in one, and only one,
of these non-overlapping groups. Subjects who are in a treatment
group are exposed to only one type of treatment. This is the... Continue Reading
By Matthew Barsalou, guest blogger.
Many statistical tests assume the data being tested came from a
normal distribution. Violating the assumption of normality can
result in incorrect conclusions. For example, a Z test may indicate
a new process is more efficient than an older process when this is
not true. This could result in a capital investment for equipment
that actually results in higher... Continue Reading
In my previous post, I wrote about the hypothesis testing ban in
the Journal of Basic and Applied Social Psychology. I
showed how P values and confidence intervals provide important
information that descriptive statistics alone don’t provide. In
this post, I'll cover the editors’ concerns about hypothesis
testing and how to avoid the problems they describe.
The editors describe hypothesis testing... Continue Reading
Minitab is the leading provider of software and services for quality
improvement and statistics education. More than 90% of Fortune 100 companies
use Minitab Statistical Software, our flagship product, and more students
worldwide have used Minitab to learn statistics than any other package.
Minitab Inc. is a privately owned company headquartered in State College,
Pennsylvania, with subsidiaries in the United Kingdom, France, and
Australia. Our global network of representatives serves more than 40
countries around the world.