Does Design of Experiments Explain Contradictory Research Results?

Minitab Blog Editor | 04 December, 2012

Topics: Design of Experiments - DOE

Design of experiments, experimental design, or just "gathering some data": whatever you call it, the way you approach it affects the results you get.

Have you ever wondered about all those contradictory studies in the news, especially regarding what's good and bad for you? Coffee is good for you, one headline says. It's bad for you, says the next. And if you read beyond the headlines, each study seems to have been conducted in a reasonable manner. 

Experimental design may be the explanation. 

Designing Experiments Begins with Questions

Science and health writer Emily Anthes recently discussed this issue in a post on her PLOS blog. She sums it up this way: "When researchers are actually designing these studies, they have to make countless small decisions about how to collect and analyze data, each of which could affect the ultimate conclusion."

For example, in medical studies, the control group researchers select may affect the strength of any correlation they discover. It could even determine whether they see any evidence of an effect at all.

She goes on to say:

This...provides a glimpse of how tiny decisions about experimental design can affect the outcome of a study–and begins to illuminate why studies may contradict one another. That’s not to say that researchers are deliberately trying to sway the results through their study designs, though unscrupulous scientists certainly could do so. The point is merely that it is difficult, especially...where there is not yet a consensus about procedures and best practices, to know exactly how to conduct research and analyze data.

In other words, when one study says something's good and the next says the opposite, it seems as if they can't both be true. Yet they very well could both be true (statistically speaking), and the reason they show inconsistent results might come down to experimental design or, as statisticians prefer, "design of experiments."

That's why a solid understanding of experimental design is beneficial for everyone, even if you're not working in biomedical research. Knowing how different types of experiments work, and what their relative strengths and weaknesses are, can help you better parse the stream of research findings reported in the media, even when they seem, as they so frequently do, diametrically opposed.

Design of Experiments for Quality Improvement

Biomedical studies often examine a single factor at a time. When we talk about "designed experiments" in quality statistics, we're talking about a series of runs, or tests, in which we make purposeful, simultaneous changes to the input variables and gather data about the responses. This is a great approach for improving a process, because changing multiple factors at once lets you obtain meaningful results quickly.
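To make that concrete, here is a minimal sketch in Python (the factor names and levels are hypothetical, not taken from any real study): a two-level, three-factor full factorial enumerates every combination of levels as a run, so each run carries information about all three factors at once.

    # A minimal full factorial sketch: every combination of factor
    # levels becomes one experimental run (hypothetical factors).
    from itertools import product

    temperature = [150, 200]   # low/high levels of factor A
    pressure    = [30, 50]     # low/high levels of factor B
    cool_time   = [10, 20]     # low/high levels of factor C

    runs = list(product(temperature, pressure, cool_time))
    for i, (t, p, c) in enumerate(runs, start=1):
        print(f"Run {i}: temperature={t}, pressure={p}, cool_time={c}")
    # 3 two-level factors = 2**3 = 8 runs, and every run varies
    # all three factors rather than one at a time.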

When doing Six Sigma or quality improvement projects, selecting and planning the right experiment can mean the difference between success and failure.

Careful planning can ensure that you collect enough data about the factors you're interested in to reach an actionable conclusion, without wasting time and money on unnecessary data collection that won't yield more insight. 

Well-designed experiments can produce more information with fewer experimental runs than haphazard experiments. Without careful planning, you may not collect enough data to reach any conclusions, or collect much more data than you actually need. 

How Much Do You Need to Know from Your DOE? 

A carefully designed experiment will ensure that you can evaluate the effects you believe are important.

If you suspect that an interaction between variables is affecting your process, including both variables in your design will be more efficient than running a "one factor at a time" experiment.
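Here's a hedged illustration of why. With a 2x2 factorial in coded units (-1 = low, +1 = high), the interaction effect can be estimated from the very same runs used for the main effects; the response values below are invented purely for illustration.

    import numpy as np

    # 2x2 factorial in coded units; responses y are hypothetical
    A = np.array([-1,  1, -1,  1])
    B = np.array([-1, -1,  1,  1])
    y = np.array([10, 14, 12, 22])

    main_A      = y[A == 1].mean() - y[A == -1].mean()          # 7.0
    main_B      = y[B == 1].mean() - y[B == -1].mean()          # 5.0
    interaction = y[A * B == 1].mean() - y[A * B == -1].mean()  # 3.0
    print(main_A, main_B, interaction)

A one-factor-at-a-time plan never varies A and B together, so it cannot estimate that interaction at all.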


Say you work at an electronics assembly company where some customers have complained of components coming loose from printed circuit boards. You might suspect several factors, including solder type, component type, and cooling time.

You want to determine which factors, or combinations of factors, significantly affect how well components stay attached to the boards. The number of individual factors and interactions you want to consider will affect the size and scope of your designed experiment.
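As a rough sketch of how that scope grows (the levels here are hypothetical stand-ins for the circuit board example): three two-level factors give a 2^3 = 8-run full factorial, while a half fraction with defining relation C = AB gets by with 4 runs, at the cost of confounding some effects with each other.

    from itertools import product

    # Hypothetical levels for the circuit board example
    factors = {
        "solder type":    ["Sn-Pb", "lead-free"],
        "component type": ["through-hole", "surface-mount"],
        "cooling time":   ["short", "long"],
    }
    full = list(product(*factors.values()))
    print(len(full), "runs in the full factorial")      # 8

    # Half fraction in coded units: keep runs where A*B*C == +1
    half = [r for r in product([-1, 1], repeat=3) if r[0]*r[1]*r[2] == 1]
    print(len(half), "runs in the 2**(3-1) fraction")   # 4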

Processes, of course, can have any number of inputs, so it can be tough to identify which factors are really important. That's why designed experiments are often carried out in four phases:

  1. Planning: Here you define the problem, your objective, and a plan that will provide meaningful information. 
  2. Screening, or process characterization: In which you conduct initial research to identify the key variables that influence quality ("the critical few"). A sketch of this step follows the list.
  3. Optimization: In this phase, you determine the optimal values for these critical factors depending on your objective, such as maximizing yield or reducing variability. 
  4. Verification: This follow-up experiment assesses the predicted optimal conditions to confirm the optimization results.
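As a sketch of what the screening phase might look like numerically (all responses are invented, and a real analysis would also check interactions and run-to-run noise), you can fit a main-effects model to 2^3 factorial data and rank the coefficients to find the critical few:

    import numpy as np

    # 2**3 full factorial in coded units (columns: A, B, C)
    X = np.array([[a, b, c] for a in (-1, 1)
                            for b in (-1, 1)
                            for c in (-1, 1)])
    y = np.array([29, 41, 33, 45, 31, 43, 35, 47])  # hypothetical responses

    # Least-squares fit of y = b0 + bA*A + bB*B + bC*C
    design = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    for name, b in zip(["intercept", "A", "B", "C"], coef):
        print(f"{name}: {b:+.2f}")
    # Here C dominates (+6.00 vs +1.00 and +2.00), so C is the
    # factor to carry into the optimization phase.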

Designed experiments are typically planned using a statistical software package like Minitab, which automatically randomizes the design's run order (in other words, the ordered sequence of factor combinations) and displays it in a worksheet you can use to carry out your experiment and record the responses. Minitab offers an array of DOE options, including factorial, response surface, mixture, and Taguchi designs.
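For a sense of what that randomization does, here's a small sketch (factor levels hypothetical): shuffling the run order keeps lurking time trends, such as an oven warming up over the day, from masquerading as a factor effect.

    import random
    from itertools import product

    runs = list(product(["Sn-Pb", "lead-free"], ["short", "long"]))
    random.seed(1)        # fixed seed so the example is reproducible
    random.shuffle(runs)  # randomized run order
    for i, (solder, cooling) in enumerate(runs, start=1):
        print(f"Run {i}: solder={solder}, cooling={cooling}")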