Unless you’ve been marooned on a desert island, you've probably been hearing a lot of hullabaloo about the Higgs boson particle over the last few days.

Scientists claim they’ve finally proven the existence of this long-sought-after “God particle,” which supports the Standard Model by identifying the particle that gives mass to elementary particles such as electrons and quarks.

If you’re keen on the statistics behind this discovery, you’ll notice that many news articles cite the fact that the scientists are certain of their results at the 5-sigma level, or a 99.9999% level of confidence.

You might wonder, why are they talking about a sigma-level instead of the alpha-level, which is the way most of us are used to representing the level of significance in our statistical results?

Are the two related? And, if so, how?

## Statistical Terminology: You Say BaNAna, I Say BanaNA

Often what makes concepts seem more complicated than they really are is how we humans tend to give all sorts of names to things that are, at root, expressing the same thing.

In statistics, we use alpha (α) to represent the probability of falsely obtaining a statistically significant result due to random error. An alpha of 0.05 means that, when there is no real effect, there’s a 5% chance of obtaining a statistically significant result through random error alone.

In astronomy and particle physics, researchers use sigma (σ) to express the same risk of falsely obtaining a statistically significant result due to random error. One sigma is one standard deviation from the mean of a distribution, so the sigma level tells you how far out in the tails of the distribution a result falls.

## Using Probability Distribution Plots to Show Error Rates

To see how these two terms are really just different ways to represent similar concepts, I created two probability distribution plots using Minitab Statistical Software.

Both plots show the standard normal distribution, with the chance of random error for a two-tailed test represented by the red shaded areas. On both plots, one tick mark on the X scale represents one unit of standard deviation (σ).

**Disclaimer**: The closest I've come to particle physics is wiping the bread crumbs off my kitchen counter. I'm not privy to the actual data used in this experiment or whether the scientists were using a one-tailed or two-tailed test. My purpose here is simply to show the relationship of how the significance level can be represented in terms of alpha or sigma.

The 0.05 level of significance and the 2σ level of significance are both commonly used benchmarks in their respective fields. As you can see, they carry almost the same risk of error for a two-tailed test.
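If you don't have Minitab handy, you can check how close these two benchmarks are yourself. Here's a quick sketch using Python's `scipy.stats` (not part of the original article's Minitab workflow, just an alternative way to do the same calculation): it finds how many standard deviations from the mean the cutoffs sit for a two-tailed test at alpha = 0.05.

```python
from scipy.stats import norm

# Critical value for a two-tailed test at alpha = 0.05:
# split the 5% error rate between the two tails and ask how
# many sigma out the cutoff falls on the standard normal.
z = norm.isf(0.05 / 2)   # isf = inverse survival function
print(f"{z:.3f}")        # ~1.960 -- close to, but not exactly, 2 sigma
```

So a 0.05 alpha corresponds to roughly 1.96σ, which is why the two benchmarks are nearly, but not quite, interchangeable.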

In making an initial scientific discovery, such as the discovery of a planet outside of our solar system, scientists may report the discovery based on a 2σ level of error. In fact, researchers previously reported the existence of the Higgs Boson particle with a 2σ level of error—about 95.5% confidence. (If they were using a one-tailed test at the 2σ level, you can see from the bottom plot that the error rate would be half that—about 2.275%—and the confidence level would be 97.725%.)
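The one-tailed and two-tailed error rates quoted above can be verified the same way. This sketch (again using `scipy.stats` rather than Minitab, as an assumption of convenience) computes the chance of random error beyond the 2σ cutoff for both kinds of test:

```python
from scipy.stats import norm

# Two-tailed error rate at the 2-sigma level:
# the area in both tails beyond +/- 2 standard deviations.
two_tailed = 2 * norm.sf(2)    # sf(x) = 1 - cdf(x), the upper-tail area
print(f"{two_tailed:.5f}")     # ~0.04550, i.e. ~95.45% confidence

# One-tailed error rate at the same level: half as much,
# since only one tail counts against you.
one_tailed = norm.sf(2)
print(f"{one_tailed:.5f}")     # ~0.02275, i.e. ~97.725% confidence
```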

But what’s an acceptable risk of error depends on the application, and the convention of 0.05 alpha or 2σ may not cut the mustard when the stakes are high—like when you’re trying to “prove” the existence of something like an exoplanet or the God particle. To confirm a new discovery, scientists may require a 5σ or even a 7σ level of significance.

So now, their success at the 5σ level basically means that they seem to have confirmed a signal for the particle present at an amplitude of five times the standard deviation of the "noise." (Signal to noise on a control chart is another way to view this, since the limits on the control chart are simply standard deviations from the mean. Test 1 for special causes in your process identifies a point more than 3σ from the mean, so this confirmation is like finding a special cause 5σ from the mean!)

## Sigma and Alpha: Don't Let Them Be Greek to You

To see how the sigma level translates into alpha level for two-tailed tests, I created a table of equivalent values using values from the probability distribution functions in Minitab.

(Choose **Graph > Probability Distribution Plot > View Probability**. Use the default setting: normal distribution with mean 0 and standard deviation 1. Click **Shaded Area** and click **X value** to enter units of σ.)
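The same table of equivalents can be generated in a few lines of code. Here's a sketch using Python's `scipy.stats` instead of Minitab (an assumption on my part, the article's table was built in Minitab), listing the two-tailed alpha and confidence level for each sigma level:

```python
from scipy.stats import norm

# Two-tailed alpha and confidence level for each sigma level:
# alpha is the combined area in both tails beyond +/- sigma.
alphas = {s: 2 * norm.sf(s) for s in [1, 2, 3, 4, 5]}

print(f"{'sigma':>5}  {'two-tailed alpha':>18}  {'confidence':>12}")
for sigma, alpha in alphas.items():
    print(f"{sigma:>5}  {alpha:>18.8f}  {100 * (1 - alpha):>11.5f}%")
```

The 5σ row works out to an alpha of about 0.00000057, which is where the "less than one in a million" figure for the Higgs result comes from.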

As you can see, the 5σ level of significance means there’s less than a one-in-a-million chance that this significant result is due to random error. However, “random” is the operative word here.

One thing that’s not being widely reported amidst all the hype: The 99.9999% confidence value only tells you the confidence you can have in your results, *assuming that your data are valid.* And that’s assuming a lot: that the instrumentation measured accurately, that the results were recorded accurately, that there’s no flaws in the experimental design itself, and so on and so forth.

In a nutshell, the sigma or alpha level tells you a lot about the risk of random error in your results.

But it can't tell you whether someone accidentally dropped a peanut into your particle collider during the experiment.

**Postscript:** Here's another interesting difference between how particle physicists and statisticians apply the concepts of confidence level and statistical significance in their studies.

In statistics, it's generally considered proper procedure and good form to determine your alpha rate and sample size *before* you perform your experiment. But apparently, in particle physics, the convention is to keep collecting more and more data until you *reach* a certain sigma level of significance in your results. That "test-as-you-go" approach would be considered a no-no by many statisticians. For more information, see Higgs and Significance.