Blog posts and articles about testing hypotheses with the t-test.

This is a companion post for a series of blog posts about
understanding hypothesis tests. In this series, I create a
graphical equivalent to a 1-sample t-test and confidence interval
to help you understand how it works more intuitively.
This post focuses entirely on the steps required to create the
graphs. It’s a fairly technical and task-oriented post designed for
those who need to create the... Continue Reading
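The post above builds its graphs in Minitab, but the underlying 1-sample t-test arithmetic is simple enough to sketch directly. Here's a minimal, hedged illustration (the sample data and target value are made up for the example, not taken from the post):

```python
# Sketch of the 1-sample t-test calculation: does the sample mean
# differ from a hypothesized value mu0? Stdlib only, no Minitab required.
import math

def one_sample_t(sample, mu0):
    """Return the t statistic and degrees of freedom for H0: mean == mu0."""
    n = len(sample)
    mean = sum(sample) / n
    # Sample standard deviation (n - 1 in the denominator).
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    # t measures how many standard errors the sample mean sits from mu0.
    t = (mean - mu0) / (s / math.sqrt(n))
    return t, n - 1

data = [4.9, 5.1, 5.3, 4.8, 5.2, 5.0, 5.4]   # hypothetical measurements
t, df = one_sample_t(data, 5.0)               # test against a target of 5.0
```

The t value is then compared against the t distribution with `df` degrees of freedom, which is exactly what the graphical version in the post visualizes.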

What do significance levels and P values mean in hypothesis
tests? What is statistical significance anyway? In this
post, I’ll continue to focus on concepts and graphs to help you
gain a more intuitive understanding of how hypothesis tests work in
statistics.
To bring it to life, I’ll add the significance level and P value
to the graph in my previous post in order to perform a graphical
version of... Continue Reading

Hypothesis testing is an essential procedure in statistics. A
hypothesis test evaluates two mutually exclusive statements about a
population to determine which statement is best supported by the
sample data. When we say that a finding is statistically
significant, it’s thanks to a hypothesis test. How do these tests
really work and what does statistical significance actually
mean?
In this series of... Continue Reading

It’s safe to say that most people who use statistics are more
familiar with parametric analyses than nonparametric analyses.
Nonparametric tests are also called distribution-free tests because
they don’t assume that your data follow a specific
distribution.
You may have heard that you should use nonparametric tests when
your data don’t meet the assumptions of the parametric test,
especially the... Continue Reading
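To make "distribution-free" concrete, here is a hedged sketch of one of the simplest nonparametric procedures, the sign test. It only asks whether each value falls above or below a hypothesized median, so it needs no distributional assumption (the sample values below are invented for illustration):

```python
# Sign test sketch: a distribution-free test for H0: population median == median0.
from math import comb

def sign_test_p(sample, median0):
    """Two-sided sign-test p-value based only on counts above/below median0."""
    above = sum(1 for x in sample if x > median0)
    below = sum(1 for x in sample if x < median0)
    n = above + below            # ties with median0 are dropped
    k = min(above, below)
    # Under H0, above/below counts follow Binomial(n, 0.5);
    # sum the tail probability and double it for a two-sided test.
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

p = sign_test_p([8, 9, 7, 10, 9, 8, 11, 12], 5)
```

Because every observation here exceeds the hypothesized median of 5, the p-value is tiny, and no bell curve was assumed anywhere.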

In my last post, I outlined how the Six Sigma students at
Rose-Hulman were working on a project to reduce the amount of
recycling thrown in the normal trash cans in all of the academic
buildings at the institution.
Using the DMAIC methodology for completing improvement
projects, they had already defined the problem at hand: how could
the amount of recycling that’s thrown in the normal trash... Continue Reading

by Matthew Barsalou, guest
blogger.
E. E. "Doc" Smith, one of the greatest authors ever, wrote
many classic books such as The Skylark of Space and
his Lensman series. Doc Smith’s imagination knew no
limits; his Galactic Patrol had millions of combat fleets under its command
and possessed planets turned into movable, armored weapons
platforms. Some of the Galactic Patrol’s weapons may be well... Continue Reading

In my recent meetings with people from various companies in the
service industries, I realized that one of the problems they face
is that they collect large amounts of
"qualitative" data: types of product, customer profiles, different
subsidiaries, several customer requirements, etc.
As I discussed in my previous post, one way to look at
qualitative data is to use different types of... Continue Reading

If you’re not a statistician, looking through statistical output
can sometimes make you feel a bit like Alice in
Wonderland. Suddenly, you step into a fantastical world
where strange and mysterious phantasms appear out of nowhere.
For example, consider the T and P in your t-test results.
“Curiouser and curiouser!” you might exclaim, like Alice, as you
gaze at your output.
What are these values,... Continue Reading

"Data! Data! Data! I can't make bricks without clay."
— Sherlock Holmes, in Arthur Conan Doyle's The Adventure
of the Copper Beeches
Whether you're the world's greatest detective trying to crack a
case or a person trying to solve a problem at work, you're going to
need information. Facts. Data, as Sherlock Holmes
says.
But not all data is created equal, especially if you plan to
analyze it as part of... Continue Reading

A
recent study has indicated that female-named hurricanes kill more people than
male-named hurricanes. Of course, the title of that article (and other
articles like it) is a bit misleading. The study found a
significant
interaction between the damage caused by the storm and the
perceived masculinity or femininity of the hurricane names. So
don’t be confused by stories that suggest all... Continue Reading

by Matthew Barsalou, guest blogger
Programs such as Minitab Statistical
Software make hypothesis testing easier, but no program can
think for the experimenter. Anybody performing a statistical
hypothesis test must understand what p values mean in regard to
their statistical results as well as potential limitations of
statistical hypothesis testing.
A p value of 0.05 is frequently used during... Continue Reading

Minitab graphs are powerful tools for investigating your process
further and removing any doubt about the steps you should take to
improve it. With that in mind, you’ll want to know every feature
about Minitab graphs that can help you share and communicate your
results effectively. While many ways to modify your graph are on
the Editor menu, some of the best features become
available when you... Continue Reading

It's all too easy to make mistakes involving statistics.
Powerful statistical software can remove a lot of the difficulty
surrounding statistical calculation, reducing the risk of
mathematical errors—but correctly interpreting the results of
an analysis can be even more challenging.
No one knows that better than Minitab's technical trainers. All of our trainers
are seasoned statisticians with... Continue Reading

The P
value is used all over statistics, from t-tests to regression analysis. Everyone knows that you
use P values to determine statistical significance in a hypothesis test. In fact, P values often
determine what studies get published and what projects get
funding.
Despite being so important, the P value is a slippery concept
that people often interpret incorrectly. How do you
interpret P values?
In... Continue Reading
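One way to keep the interpretation straight is to see a p-value computed by brute force: if the null hypothesis were true, how often would random sampling produce a result at least as extreme as the one observed? The numbers below (target of 100, sd of 10, observed mean of 104) are assumed purely for illustration:

```python
# Empirical p-value by simulation: repeatedly sample under H0 and count
# how often the simulated mean lands at least as far from mu0 as ours did.
import random

random.seed(1)
mu0, sigma, n = 100.0, 10.0, 30   # hypothetical null mean, sd, sample size
observed_mean = 104.0              # hypothetical observed sample mean

trials = 20000
count = 0
for _ in range(trials):
    sim = [random.gauss(mu0, sigma) for _ in range(n)]
    sim_mean = sum(sim) / n
    # Two-sided: a result "at least as extreme" in either direction counts.
    if abs(sim_mean - mu0) >= abs(observed_mean - mu0):
        count += 1
p_value = count / trials
```

Note what the p-value is not: it is the probability of data this extreme *given* the null hypothesis, not the probability that the null hypothesis is true.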

My
previous post examined how an equivalence test
can shift the burden of proof when you perform a hypothesis test of
the means. This allows you to more rigorously test whether the
process mean is equivalent to a target or to another mean.
Here’s another key difference: To perform the analysis, an
equivalence test requires that you first define, upfront, the size
of a practically important difference... Continue Reading

With more options come more decisions.
With equivalence testing added to Minitab 17, you now have more
statistical tools to test a sample mean against a target value or
another sample mean.
Equivalence testing is extensively used in the biomedical field.
Pharmaceutical manufacturers often need to test whether the
biological activity of a generic drug is equivalent to that of a
brand name drug that... Continue Reading

If
you regularly perform regression analysis, you know that
R² is a statistic used to evaluate the fit of your
model. You may even know the standard definition of R²:
the percentage of variation in the response that is explained
by the model.
Fair enough. With Minitab Statistical Software doing all the heavy
lifting to calculate your R² values, that may be all you
ever need to know.
But if you’re... Continue Reading
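The standard definition quoted above translates directly into a few lines of arithmetic. This is a minimal sketch of that definition (the response values and fitted values below are invented for the example):

```python
# R^2 from its definition: the fraction of total variation in the
# response that the fitted model explains.
def r_squared(y, y_hat):
    mean_y = sum(y) / len(y)
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)               # total variation
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))   # unexplained part
    return 1 - ss_res / ss_tot

y     = [2.0, 4.1, 6.2, 7.9, 10.1]   # hypothetical observed responses
y_hat = [2.0, 4.0, 6.0, 8.0, 10.0]   # predictions from some fitted line
r2 = r_squared(y, y_hat)             # close to 1: the line fits well
```

An R² near 1 means the residual (unexplained) variation is small relative to the total variation around the mean.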

Using data analysis and statistics to improve business quality
has a long history. But it often seems like most of that history
involves huge operations. After all, Six Sigma originated with
Motorola, and spread to thousands of other businesses after
it was adopted by a little-known outfit called General
Electric.
There are many case studies and examples of how big companies
used Six Sigma... Continue Reading

Ever
start a fantasy football draft and realize that passing touchdowns
are worth 6 points, not 4? Or how about realizing at the last
minute that the commissioner of your league decided to have a point
per reception (PPR) league? We know that this year running backs
are going to be going early in the draft. But if your league is a
PPR or gives 6 points for a passing touchdown, should you... Continue Reading

Most of the data that one can collect and analyze follow a
normal distribution (the famous bell-shaped curve). In fact, the
formulae and calculations used in many analyses simply take it for
granted that our data follow this distribution; statisticians call
this the "assumption of normality."
For example, our data need to meet the normality assumption
before we can accept the results of a one- or... Continue Reading
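Minitab offers formal normality tests and probability plots for checking this assumption; as a rough, hedged stand-in, here is a sketch that flags obviously non-normal data via sample skewness (a symmetric distribution has skewness near 0; the two data sets are invented for illustration):

```python
# Crude normality screen: the adjusted sample skewness should sit near 0
# for roughly bell-shaped data. Not a substitute for a formal test.
import math

def skewness(data):
    n = len(data)
    mean = sum(data) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    # Adjusted Fisher-Pearson sample skewness.
    return sum(((x - mean) / s) ** 3 for x in data) * n / ((n - 1) * (n - 2))

symmetric = [1, 2, 3, 4, 5, 6, 7]     # balanced around its center
skewed    = [1, 1, 1, 2, 2, 3, 10]    # long right tail
```

Strongly positive or negative skewness is one warning sign that the normality assumption behind a t-test may not hold, which is when the nonparametric alternatives discussed above become attractive.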