People routinely take dietary supplements even though the long-term health consequences of many vitamins and minerals are unknown. A study published in the October 2011 issue of Archives of Internal Medicine made headlines because it concluded that seven common vitamin and mineral supplements may be associated with a higher mortality risk. Are vitamin supplements really dangerous?
This study is representative of a type you hear about on the news frequently: studies that make a splash with surprising, health-related conclusions that affect the average person. Often the findings are overturned by a later study, which causes much confusion.
I want to delve deeper into a study of this type to show you how this uncertainty happens.
To do this, we’ll look at this study in more detail than the standard news reports did. We’ll look under the hood and see how problems arise. I’m not connected with the study in any fashion; I’m just working from the researchers' published article and a solid foundation in research studies and statistics.
Of the 15 supplements included in the study, researchers found 7 that were associated with a significant (p-value < 0.05) increase in the risk of death: multivitamins (risk increase 2.4%), vitamin B6 (4.1%), folic acid (5.9%), copper (18.0%), iron (3.9%), magnesium (3.6%), and zinc (3.0%). Calcium was the only supplement associated with a significant reduction in risk of 3.8%.
The study participants were elderly, white women who tended to live in rural areas. The findings can't necessarily be generalized to other groups.
This study did not randomly assign the subjects either to a treatment or control group, which would have been typical for a design of experiments (DOE). Instead, this was an observational study that surveyed subjects and related their responses to eventual differences in death rates. In an earlier post, I discussed how a non-randomized design introduces the possibility that the groups may not be equal at the beginning, and how these differences may account for differences at the end.
In this study, the different groups are the users vs non-users of different supplements. So, let’s take a look at how supplement users compared to non-users at the beginning. According to the authors:
Whew! That’s quite a list of differences! Supplement users were different from non-users in many ways that are likely to affect their death risk. This fact becomes extremely important.
Survey Issues: Recall Accuracy, Self-Reporting, and Confounding Variables
Surveys measured everything relating to the subjects in this study. Nothing was measured in a medical setting or a lab. No blood tests were done. Everything was self-reported. The surveys did ask questions about potential confounding variables such as demographic information, food intake, basic health, supplement intake, and physical activity. However, these surveys have several problems.
First, these surveys placed a huge burden on the memories of senior citizens, who had to recall a wide variety of behaviors and health indicators many years after the fact. The study period ran 22 years, from 1986 to 2008, yet participants were surveyed only three times: at baseline in 1986 and twice more, in 1997 and 2004. Therefore, recall accuracy is a real concern. Also, I’ve discussed the hazards of self-reporting in this blog post.
Second, the survey may not measure all of the differences between groups. If you can measure all of the relevant differences between groups, you can statistically control and adjust for them in your results. However, if you don’t measure an important attribute, you can’t adjust for it, and your results can be wrong. I’ve discussed confounding variables here and have shown how they can totally flip the results of the analysis 180 degrees.
The issue of confounding variables in this study concerns me even more than the accuracy and self-reporting issues. The surveys only inquired about a handful of health conditions and indicators. Hundreds of health conditions that potentially could influence the mortality rate were not measured.
For example, it is possible that some supplements were taken specifically due to medical conditions that the survey did not track (e.g., iron for cancer-induced anemia). This would make it appear that the supplement usage was related to the death, when it really may have been the unrecorded health condition.
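To see how an unmeasured condition can manufacture a risk out of nothing, here is a small simulation with hypothetical numbers (mine, not the study's): iron has no effect on death at all, but a hidden illness both makes iron use more likely and raises mortality.

```python
import random

random.seed(1)

# Hypothetical scenario: iron supplements have NO effect on death, but an
# unmeasured illness both prompts iron use and raises the death risk.
N = 100_000
users_d = users_n = non_d = non_n = 0
for _ in range(N):
    ill = random.random() < 0.10                        # 10% have the hidden condition
    takes_iron = random.random() < (0.60 if ill else 0.05)
    dies = random.random() < (0.50 if ill else 0.10)    # iron plays no role here
    if takes_iron:
        users_n += 1
        users_d += dies
    else:
        non_n += 1
        non_d += dies

# Iron users die at a much higher rate even though iron is harmless,
# purely because the ill are concentrated among the users.
print(users_d / users_n, non_d / non_n)
```

Because the illness was never measured, no amount of statistical adjustment on the recorded variables can remove this spurious association.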
I love data and numbers, so let’s take a look at the raw data. The authors highlight the iron results as a key finding so we’ll look at those numbers.
A total of 1117 of the 2738 subjects who took iron supplements had died by the end of the study, compared with 13,801 of the 34,443 subjects who did not take iron. These numbers yield a raw death rate of 40.796% for iron users versus 40.069% for nonusers over the course of the study, a minuscule difference of about 0.7 percentage points that is not statistically significant.
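The raw iron numbers above are easy to verify. This sketch recomputes the death rates and runs a simple two-proportion z-test (my calculation, not an analysis from the article):

```python
from math import sqrt, erf

# Raw iron counts reported in the article
users_deaths, users_n = 1117, 2738
non_deaths, non_n = 13801, 34443

p1 = users_deaths / users_n        # ~40.8% for iron users
p2 = non_deaths / non_n            # ~40.1% for nonusers

# Two-proportion z-test on the raw (unadjusted) death rates
p_pool = (users_deaths + non_deaths) / (users_n + non_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / users_n + 1 / non_n))
z = (p1 - p2) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided

print(f"users {p1:.3%}, nonusers {p2:.3%}, p = {p_value:.2f}")
```

The p-value is far above 0.05, which confirms that the raw difference for iron is nowhere near significant.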
In fact, the raw results for folic acid, vitamin B6, magnesium, zinc, copper, and multivitamins are all similarly insignificant. Perhaps most surprising is the fact that the raw results show a significant reduction in the death risk for users of B complex, C, calcium, D, and E!
So, what’s up with these raw results that mostly show no difference or even a reduced risk? After all, the study reports that 7 supplements increased the death risk and only calcium decreased the risk. The answer harks back to the fact that this is not a randomized experiment and that the groups started out with many health-related differences. To control for these initial differences, the researchers use regression analysis to control for additional predictors of the death rate.
The researchers present 3 different regression models. They first adjust only for age and caloric intake. Next are two versions that add additional predictors. Version 1 adds some demographic information and 7 health measures. Version 2 includes everything in version 1 plus several dietary intake measures. In short, each subsequent model suggests an increasingly negative effect for all of the supplements. The researchers present the most adjusted model as the basis for their results.
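The authors' actual regression models are more elaborate than anything that fits in a short sketch, but the core idea of adjustment can be illustrated with direct standardization on hypothetical counts (again, mine) in which supplement users are healthier at baseline:

```python
# Hypothetical counts: supplement users are healthier at baseline, so the
# crude comparison flatters them; adjusting for health status reverses it.
# Each entry is (n, deaths) for a group within a health stratum.
data = {
    "good health": {"users": (900, 108), "nonusers": (100, 10)},
    "poor health": {"users": (100, 42),  "nonusers": (900, 360)},
}

def crude_rate(group):
    # Pooled death rate, ignoring health status entirely
    n = sum(data[s][group][0] for s in data)
    d = sum(data[s][group][1] for s in data)
    return d / n

def adjusted_rate(group):
    # Direct standardization: weight each stratum-specific death rate by
    # that stratum's share of the combined population
    total = sum(data[s]["users"][0] + data[s]["nonusers"][0] for s in data)
    rate = 0.0
    for s in data:
        stratum_n = data[s]["users"][0] + data[s]["nonusers"][0]
        n, d = data[s][group]
        rate += (stratum_n / total) * (d / n)
    return rate

print(crude_rate("users"), crude_rate("nonusers"))        # users look far better
print(adjusted_rate("users"), adjusted_rate("nonusers"))  # users look slightly worse
```

The crude rates show users doing much better (15% vs. 37%), while the adjusted rates show them doing slightly worse (27% vs. 25%) — the same direction of movement the study's sequence of models exhibits.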
The fact that the raw and adjusted rates are so different shows how different the original groups were in important health factors.
This methodology is proper for statistical analysis: if a predictor of the death rate is significant, you should include it in the model so you can adjust for it. The difficulty is that the survey is the only way to capture these differences, and, as discussed, it measured only a handful of health attributes out of hundreds.
Furthermore, the results are not robust. As we saw, the study results are highly dependent on the specific predictors included in the model. It seems probable that the survey did not consider all significant health measures, which could produce biased results.
Small Effect Size
Making the data analysis even more difficult is the fact that the effect of the supplements on the death risk, if there is one, is very small. Generally, the statistically significant changes in death risk range between 2 and 5%. A bias of only a couple of percent for any of the reasons stated in this blog could easily erase this difference. A bias of this size, while not certain, does not seem unlikely given the limitations of the study.
Copper: An Exception?
Copper supplements have the largest effect in this study with a risk increase of 18.0% among copper users. Further, the results are robust. Copper is significant whether you look at the unadjusted results or at a model with any combination of predictors. However, only 0.5% of the subjects take copper supplements. This group is so small that you have to wonder if it is different in some way. A quick Google search indicates that copper use should be determined only by a doctor. It is prescribed for specific medical conditions that increase a patient's need for copper, such as kidney disease, pancreas disease, and stomach removal. It seems probable that the group of copper users was extremely different from the nonusers.
If this is the case, copper usage is measured by the survey but the underlying medical conditions are not measured. Without including the medical conditions, statistical analysis makes it appear that copper supplements are related to the deaths rather than the underlying medical conditions. We simply do not know if some or all of the 18% increase in risk reflects unmeasured health differences between copper users and nonusers.
What these researchers did is actually fairly difficult, and they did it very well. They tracked nearly 40,000 subjects over two decades and tried to attribute small differences in death rates to the consumption of specific pills. Further, they did this entirely with surveys and no medical tests. To avoid the problems identified in this observational study and generate stronger results, future researchers would have to use a randomized controlled trial.
The researchers compare their results to several other studies that looked at supplement consumption and mortality rates. The authors mention that results from other studies have been notably inconsistent. For example, one study found that it was vitamin D and not calcium that decreased mortality rates, whereas this study found a beneficial effect for calcium and none for vitamin D. Like this study, another study found an increased risk for iron users. However, contrary to this study, that other study found the difference in the raw results, which were subsequently reduced after statistical control!
Given the uncertainty within this study and across studies, the jury is still out on the net effects of supplement consumption on mortality rate.
Dietary Supplements and Mortality Rate in Older Women: The Iowa Women's Health Study
Jaakko Mursu, PhD; Kim Robien, PhD; Lisa J. Harnack, DrPH, MPH; Kyong Park, PhD; David R. Jacobs Jr, PhD
Arch Intern Med. 2011;171(18):1625-1633.