Calcium and Vitamin D Supplements Still Don't Work, New Study Says

I think I’ll have to add calcium to my list of the Top Six Supplements You Should Not Take. Here’s why.

A year and a half ago, I reported on a very large study of 26,000 men and women that asked if vitamin D helps to prevent bone fractures, as many people (including some doctors) believe.

Well, it doesn’t. That study found that people who took vitamin D had exactly the same risk of bone fractures as those who didn’t. It didn’t matter how much vitamin D they took, nor did it help if they also took supplemental calcium: either way, vitamin D had no effect.

(Aside: everyone needs vitamin D, but most people get all they need from a normal diet. Alternatively, just 10 minutes of sunlight gives you about 4 times the recommended daily amount of vitamin D.)

Well, now there’s a huge new study, just out in the Annals of Internal Medicine, that followed over 36,000 older women, looking at the effects of a combination of vitamin D and calcium over a 22-year period. (That’s a really long time for a study, and kudos to the authors for their determination and effort.) The scientists leading the study looked not only at the effects of supplements on hip fractures, but also whether supplements changed the risk of dying from cancer or heart disease.

The results? Well, the study found no reduction in the risk of hip fractures, which isn’t surprising given that earlier studies found the same thing. But because it was such a lengthy study, following people for more than 20 years, they could ask something else: did vitamin D and calcium have any effect on mortality? Or to put it more bluntly, did the supplements prevent death?

Well, no. But the report was a bit more nuanced than that. It turns out that deaths from cancer went down a tiny bit, and deaths from heart disease went up a tiny bit.

First, though, let me explain the overall experiment. Approximately half the women in the study, just over 18,000, were assigned to take both vitamin D and calcium every day. They were given pills with 1000 mg of calcium carbonate (400 mg of elemental calcium) and 400 IU of vitamin D3 daily. The other half of the participants took placebo pills, but neither group knew whether their pills were placebos or not.

Over the course of 22 years, 1817 women taking supplements died of cancer, compared to 1943 women in the placebo group who died of cancer. That sounds kind of good, right? The study authors report that this result – 126 fewer deaths – was statistically significant (just barely), but there are good reasons to be skeptical of this “significance” claim.

On the other hand, 2621 women taking supplements died of heart disease, versus 2420 women in the placebo group. So there were 201 more deaths from heart disease among women taking vitamin D and calcium: not so good.

Combining both causes of death, we see that in the women taking supplements, there were 75 more deaths from either cancer or heart disease. The study also reported numbers for all causes of death, and there were still very slightly more deaths in the supplement group. (The annual death rate increased from 2.14% to 2.15% for those taking supplements, a non-significant change.)
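
If you want to check the arithmetic yourself, here’s a quick sketch using only the death counts quoted above (this is just bookkeeping, not a re-analysis of the study, which used more sophisticated statistics):

    # Deaths over ~22 years, as reported in the trial (counts quoted above)
    supplement = {"cancer": 1817, "heart disease": 2621}
    placebo    = {"cancer": 1943, "heart disease": 2420}

    cancer_diff = supplement["cancer"] - placebo["cancer"]                # -126: fewer cancer deaths
    heart_diff  = supplement["heart disease"] - placebo["heart disease"]  # +201: more heart-disease deaths
    combined    = cancer_diff + heart_diff                                # +75: more deaths overall

    print(cancer_diff, heart_diff, combined)   # -126 201 75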

So on the whole, taking supplements didn’t seem to provide any benefit at all, and it certainly didn’t reduce the risk of death.

Why would supplemental vitamin D and calcium increase the rate of heart disease, or decrease the rate of cancer? Well, first I should emphasize that it’s entirely possible that these supplements have no effect at all, and that the difference in death rates is just random variation. There have been multiple studies speculating on how vitamin D might help to prevent cancer, but the effect, if any, is very small. And as for heart disease, maybe, as the authors of the new study speculate, long-term calcium supplements create calcifications in coronary arteries, which would be a bad thing. For now, this is merely a hypothesis.

So here is my new list of the top 7 (no longer 6) supplements that you should not take:

  1. Vitamin C
  2. Vitamin A and beta carotene
  3. Vitamin E
  4. Vitamin B6
  5. Multi-vitamins
  6. Vitamin D
  7. Calcium

You can read more about the first five, some of which can be downright bad for you, in The Top Five Vitamins You Should Not Take.

What’s left? Well, if you don’t have a deficiency, there’s no reason to take any supplemental vitamins at all. If you want to spend a little more money at the grocery, buy some fresh fruit instead. You’ll be healthier for it.

As a final caveat, I should point out that although routine supplementation is worthless and megadoses of vitamins can be harmful, if you think you have a vitamin deficiency, consult with your doctor. Serious vitamin deficiencies might be the result of other health problems that your doctor can help you address, and treatments for specific conditions or diseases may include vitamins.

Why did humans lose our tails? Blame a "jumping gene"


Most animals have tails, including almost all mammals. For some reason, we humans don’t. This difference has been the source of much speculation among scientists over the years, and many arguments have been made about why we don’t have tails.

One line of reasoning goes like this: tails are very useful for animals that live in trees, but once our ancestors came down from the trees and started living on the open plain, they didn’t need those tails any more. But why lose them? Lots of animals don’t live in trees, and they still have tails.

Even among the primates, most species have tails, but chimpanzees, gorillas, orangutans, and bonobos–the great apes–don’t. In fact, one easy way to tell an ape from a monkey is the absence of a tail. We humans are simply great apes without so much hair. Or, as the English scientist Desmond Morris called us in his famous book, humans are “The Naked Ape.”

So why am I writing about this now? Well, in a newly published article in Nature, a group of scientists from NYU, led by Itai Yanai and Jef Boeke, seem to have figured out what made us lose our tails. It’s all due to a piece of DNA that copies itself and jumps around our genome.

It’s a bit geeky, but stay with me and I’ll try to explain.

It seems that sometime around the divergence of the great apes from other primates, about 15-20 million years ago, a “jumping gene” popped into a gene called TBXT in our ancestor. (TBXT is also known as brachyury, which means “short tail.”)

The jumping gene here is just a piece of DNA a few hundred letters long*, not really a gene all by itself. But once that jumping gene got into TBXT, it was in just the right position to make the cells in our ancestor produce a shorter version of TBXT. The shortened gene was missing one of its pieces, but it still worked – well, sort of. Our ancestors managed just fine, but they lost their tails.

(Aside: the piece that’s chopped out is called exon 6, for those who really want to know.)
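
For the programmers out there, here’s a toy sketch of what exon skipping means. The sequences and the exon count are entirely made up (this is not the real TBXT gene); the point is just that splicing out one exon yields a shorter message, and thus a shorter protein:

    # Toy gene: a list of exon sequences (hypothetical, not real TBXT exons)
    exons = ["ATGGCTAAA", "GGTACCTTT", "TTCAAGGGA", "CCAGTACTT",
             "GACTTCAAA", "AAGGCTTTG", "TGGTCATGA"]

    full_mrna    = "".join(exons)                  # normal splicing: all exons included
    skipped_mrna = "".join(exons[:5] + exons[6:])  # "exon 6" (index 5) skipped

    print(len(full_mrna), len(skipped_mrna))       # the skipped transcript is shorter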

Given that this happened over 15 million years ago, how did the scientists prove their hypothesis? Well, other mammals have the same gene, but they make a longer version. So the authors of the new paper created a version of the TBXT gene in mice that included the jumping gene–and, as predicted, some of the mice lost their tails entirely.

Admittedly, this doesn’t exactly prove that one jumping gene caused us to lose our tails. Without a time machine to take us back 15 million years (with a DNA sequencing machine in tow), we can’t truly prove what happened eons ago. But it’s a compelling story, because we know that our genomes, and those of other great apes, have this unique jumping gene that other mammals lack.

So now we know how we lost our tails. We still don’t know exactly why, though. Some scientists speculate that being tail-less might have helped us to walk upright, or that it might have been better to lose the tails once our ancestors stopped living in trees.

On the other hand, guinea pigs don’t have tails either, and they don’t walk on two legs. And koalas don’t have tails, even though they live in trees. Some of these questions may just have to remain a mystery.

*Technically, the jumping genes in this story are called Alu elements, and they occur all over our genome. Famed geneticist Haig Kazazian, a former Hopkins colleague who passed away just two years ago, explained in a 2004 paper that Alus are a form of “nonautonomous retrotransposon.”

Sadly, the Washington Post once again falls for acupuncture pseudoscience

It’s like playing whac-a-mole. No matter how many times I write a column showing that some wildly implausible practice is nonsense, new articles pop up claiming “Hey, look at this! It really works!”

So I’m going to try to whack another mole, because people can be harmed by bad information, especially when it comes in the form of medical advice.

Recently the Washington Post ran a column under the headline, “Does acupuncture work for chronic pain? Here’s what the science says.” (The column first appeared back in July, but the Post’s website promoted it again just last week.)

Before giving you the Post’s answer, let me give you the correct answer. No! Not “maybe” or “sometimes” or “we’re not sure.” Acupuncture doesn’t treat anything, and it carries a real risk of harm, particularly from infections. I’ll get to that below.

I’ve written on this topic many times before (in 2013, in 2012, in 2010, and more), and I’ve even called out the Washington Post for their pro-acupuncture pseudoscience (see this column, which I wrote in 2016). The physicians over at Science-Based Medicine have debunked more acupuncture studies than I can count; they’ve even created a special webpage (which I highly recommend) dedicated to explaining the bogus claims that acupuncture proponents make.

Acupuncture, in case you don’t know this, is a practice where people who call themselves acupuncturists (they are not doctors) stick needles into your body to “treat” various conditions. The claim is that these needles can manipulate your vital life force, or “qi”, which runs along supposed acupuncture lines throughout your body.

That’s just wrong. Modern biology has taught us a whole lot about human physiology, and there just aren’t any lines with mystical forces flowing through them. There are nerve fibers, true, but acupuncturists don’t use those. (And if their needles were piercing nerves, it would hurt like heck.)

Acupuncture and qi are part of Traditional Chinese Medicine, or TCM, a collection of largely ineffective and sometimes very harmful folk beliefs. TCM’s popularity started to grow in the mid-20th century when Chairman Mao launched a propaganda campaign pushing it. Mao himself never used TCM, but his government couldn’t afford real medicine, so they convinced people that inexpensive folk medicine was just as good. It wasn’t.

But I digress.

Acupuncturists claim to treat many conditions, but they especially like to claim that they can treat chronic pain, for at least a couple of reasons. First, pain is inherently subjective, so the only way to measure if a treatment is working is to ask the patient. This makes it hard to study objectively. And second, pain symptoms usually wax and wane, even without any treatment. Patients usually want treatment when the pain is at its worst, which means once the pain subsides, the patients will give credit to whatever they were doing at that time. So pain is fertile ground for people selling quack treatments.

Now let’s get to that column in the Washington Post. The column promises to tell you “what the science says,” and it quickly gets to its answer: yes, acupuncture works! First it puts forward the logically flawed (and non-scientific) claim that hey, the U.S. Medicare system now covers acupuncture for back pain, so it must be effective.

Ugh, where do I start? Well, like it or not, Medicare approval of a treatment doesn’t mean the treatment works. (And conversely, some treatments that do work aren’t covered by Medicare.) So that’s just a logical fallacy. I wish it were true that Medicare was purely science-driven, but both federal and state governments have been lobbied for years by acupuncturists (and other purveyors of dubious therapies) to provide public tax dollars to cover their practices. For a deeper dive into these lobbying efforts, I recommend the lengthy takedown by Jann Bellamy explaining that acupuncture is “legalized quackery.”

The Post article then goes on to discuss the science, for which it relies primarily on a single study, a meta-analysis published in 2019 by Andrew Vickers. (The column was written by Dr. Trisha Pasricha, who has sterling credentials, including training at Johns Hopkins Medicine where I also work. Alas, good credentials don’t always mean that you can trust the holder of those credentials, and this is one of those instances.)

Vickers has published multiple meta-analyses, and if he’s shown anything, it’s how easy it is to cherry-pick from the (extensive) acupuncture literature and find studies that prove whatever point you want to make. The Post column asserts that Vickers used 39 “high-quality” studies, but that is debatable. Many of the studies were done in China, which (as Science-Based Medicine physicians David Gorski and Steven Novella have pointed out) virtually never publishes a negative study of acupuncture.

I’ve done a deep dive into one of Vickers’ meta-analyses of acupuncture–an earlier one–for one of my medical school classes, where I use it to illustrate how bad studies can be misreported by scientists themselves and by the media. I don’t have time to go through it here, but among other problems, Vickers doesn’t seem to understand how placebo controls work.

Here’s what I mean by cherry-picking. Vickers went through hundreds of studies to pick the 39 that he included. One of those supposedly high-quality studies looked at acupuncture for knee arthritis. That study found that both acupuncture and sham acupuncture (the placebo arm) had the same small effect on knee pain, and that patients who received no treatment at all reported more pain than patients in either group. The authors of the study (and Vickers) concluded–wrongly–that because acupuncture was better than nothing, it must be working. Wrong! If you don’t beat the placebo, then your treatment fails.

For a drug trial, failing to beat the placebo means the game is over. But with acupuncture, it means “more studies are needed,” and the whac-a-mole game continues.

Oh, and I should add that as far as knee arthritis goes, the reduction in pain in both the acupuncture and placebo groups was much less than has been reported in studies that use ibuprofen.

That’s right, ibuprofen is far better than acupuncture. Not to mention cheaper and more convenient.

If this weren’t enough, a more recent study has already contradicted the Vickers study, as physician-blogger Steven Novella pointed out in a recent column. Novella wrote that “the evidence is too low quality to conclude that acupuncture works, as desperate as proponents are to say we can reach that conclusion.” So no, Dr. Pasricha, the latest science does not say that acupuncture works. Quite the opposite.

I’m still understating how badly acupuncture has failed every well-designed study to test its effectiveness. Studies have shown that placing the needles in random locations works just as well as using so-called acupuncture points. Other studies showed that sham acupuncture, where the needles don’t pierce the skin but where subjects believe they did, also works just as well. And “expert” acupuncturists can’t agree on the locations of acupuncture points.

And don’t get me started on acupuncture and the risk of infection. Acupuncturists aren’t trained in real medicine, and they don’t use proper sterile procedures. This means that they don’t necessarily sterilize their hands, or your skin at all of those points where they’re plunging needles into you. There have been thousands of reports of infections due to acupuncture (dating back decades), some of them fatal. And because acupuncturists aren’t part of the medical system, we can be virtually certain that infections are under-reported.

Acupuncture isn’t going away any time soon, because people are making money from it, and no matter how many studies show that it’s nothing more than a fiction, those people will keep insisting on more studies. Plus they can point to hundreds of poorly-done studies that claim to show benefits, and argue–as the Post column does too–that “more research is needed.” I’m not making this up: that precise phrase appears in Dr. Pasricha’s article.

There are even scientific journals entirely devoted to acupuncture (here and here, for example), and they make money too, for the for-profit publishers that produce them. So you can be sure that more studies are coming, and some of them will be positive, even though acupuncture is utterly ineffective.

Even so, the Washington Post can and should do better. Here’s my (free) advice for those considering acupuncture: save your money, and just take some ibuprofen.

Good news for "Research Parasites": NEJM takes it back, 8 years later

After years of debate, the National Institutes of Health finally rolled out a data sharing policy early this year, one that should greatly increase the amount of data that biomedical researchers share with the public. This week, three prominent scientists from Yale described, in an op-ed in the New England Journal of Medicine, how “the potential effects of this shift ... toward data sharing are profound.”

For some of us, it’s deliciously ironic that this op-ed appeared in NEJM, which just a few years ago coined the term “research parasites” to describe anyone who wants to make discoveries from someone else’s data. That earlier piece, written in 2016 by the NEJM’s chief editors, was simply dripping with disdain. It caused a huge outcry, including a response from me in these pages and a sharply worded response from the Retraction Watch team, published in Statnews. The editor backed down (slightly) in a follow-up letter just a few days later, but the damage was done.

One interesting consequence was that a group of scientists created a Research Parasite Award, now awarded each year (entirely seriously, despite the tongue-in-cheek name) at a major biomedical conference, for “rigorous secondary data analysis.”

The 2016 op-ed in NEJM was itself a response to a call for greater data sharing published in the New York Times by cardiologists Eric Topol and Harlan Krumholz–and Krumholz, we should note, is a co-author of the latest piece in NEJM. Meanwhile, the former editor of NEJM retired years ago, and it appears that the journal is now ready to join the 21st century, even if it’s a few decades late.

What is all this fuss about? Well, many people outside of the scientific research community probably don’t realize that vast amounts of data generated by publicly-funded research–work that is paid for by government grants–are not usually released to the public or to any other scientists.

On the contrary: in much of biomedical research, data sets collected with government funding are zealously kept private, often forever. The usual reasons for this are simple (although rarely admitted openly): the scientists who collected the data want to keep mining it for more discoveries, so why share it? Sometimes, too, researchers package up the data and sell it, which is completely legal, even though the government paid for the work.

(It’s not just medical research data, either: once I tried to get some data from a paleontologist, only to learn that he treated every fossil he ever collected as his personal property. But that’s a blog for another day.)

Many scientists have been fighting this culture of secrecy for a long time. Our argument is that all data should be set free, at least if it’s the subject of a scientific publication. It’s not just scientists making this argument: in the early 2000s, patient groups began to realize they couldn’t even read the studies about their own diseases unless they paid a for-profit journal to access the paper. Those groups lobbied–successfully, after a years-long fight–for a rule that any publicly-funded research had to be published on a free website, not locked behind the doors of private publishers. Their effort led to an NIH database called PubMed Central, which contains the full text of millions of articles.

The new NIH data sharing policy is one consequence of the Open Science movement (which I’m a part of), which argues that science moves much faster when it’s done in the open. This means sharing data, software, methods, and everything else. There’s now a U.S. government website dedicated to Open Science, open.science.gov, which includes more than a dozen federal agencies including NIH, NSF, and the CDC.

A bit more history: as far as I can tell, the earliest voices for data sharing emerged during the Human Genome Project, an international effort beginning in 1989 that produced the first draft of the human genome in 2001. When a private company (Celera Genomics) emerged in 1998, a dramatic race ensued, and as one strategy for competing, the public groups announced that, in contrast to the private group, they would release all their data openly on a weekly basis, long before publication. That wasn’t how things had worked before.

Very soon after that, scientists in genomics (my own field) realized that all genome data, whether from bacteria, viruses, animals, or plants, ought to be released freely. The publicly-funded sequencing centers received millions of dollars to generate the data, but they weren’t the only ones who could analyze it. NIH and NSF agreed, and pretty soon they required all sequencing data to be released promptly.

This same spirit didn’t touch most medical research, though. Even though far more money–billions of dollars a year in NIH funds–is spent on disease-focused research, data from those studies remained locked up in the labs that got the funds. This is now changing.

As the Yale scientists (Joseph Ross, Joanne Waldstreicher, and Harlan Krumholz) point out in their NEJM editorial, open data sharing has already yielded tremendous benefits. For example, they point out that hundreds of papers have been published using public data from the NIH’s National Heart, Lung, and Blood Institute, including studies that revealed new findings about the efficacy of digoxin, a common drug used to treat heart failure.

The new NIH policy covers all of NIH, not just one institute, and we can hope it will unlock new discoveries by allowing many more scientists to look at the valuable data currently locked away behind firewalls.

But simply requiring scientists to have a “data management and sharing policy,” as the NIH is now doing, might not be enough. Many thousands of scientific papers already say they share data and materials–but as it turns out, the authors don’t always want to share.

A study published last year illustrated how toothless some current policies are. That study identified nearly 1800 recent papers in which the authors said they would share their data “upon request.” They wrote to all of them, only to find that 93% of the authors either didn’t respond at all, or else declined to share their data. That’s right: only 7% of authors shared their data, despite publishing a statement that they would.

The NEJM editorial proposes a different solution, one that could be far more effective: putting scientific data into a government repository. This is something the government itself can enforce (because they control the funding), and once the data is in a public repository, the authors won’t be able to sit on it as (some of them) now do.

It’s good to see NEJM joining the open science movement. Science that is shared openly will inevitably move faster, and everyone–except, perhaps, a few data hoarders–will benefit.

A simple trick to make better coffee: cut the static!


You’d think that coffee aficionados had tried everything by now, and that few if any tricks remained undiscovered. Well, you could be right–but there’s one trick that most ordinary coffee drinkers probably don’t know, and it’s remarkably easy to do.

I’ll jump right to the punchline, and then I’ll explain the (new) science that explains it. To make richer coffee in the morning, simply spritz a little water on your beans before grinding them. That’s it!

So what happens when you do this, and why does it make better coffee? Well, as explained in this new paper in the journal Matter, by Christopher Hendon and colleagues at the University of Oregon, it’s all about reducing the static electricity that the grinding process creates.

Grinding coffee causes triboelectrification. If you’ve never heard of that, not to worry–neither had I, until I read the paper. Basically, when the beans rub together, they create static, and that makes the ground coffee clump together (and sometimes fly into the air).

Then when you make the coffee, the clumping means that the water flows through the grounds unevenly, extracting less of the coffee than it could. Ideally, all the coffee grounds should be evenly and densely packed, and static electricity prevents that.

Water reduces triboelectrification quite a bit, it turns out.

So what happens? Well, after extensive experimentation–and I do mean extensive–the scientists found that the amount of coffee solids in a cup of espresso increased from 8.2% to 8.9% when adding a bit of water to the beans before grinding. That’s a relative increase of 8.5%. Richer coffee!
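
The relative increase is simple to check (a one-liner using only the figures quoted above):

    before, after = 8.2, 8.9                # % coffee solids in the espresso, without and with the water trick
    print((after - before) / before * 100)  # ~8.5% relative increase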

Reading the paper, I realized these scientists had a lot of fun doing these experiments. They measured the water content in 31 types of coffees, and tried a wide range of settings for their grinders, for the water temperature, and more.

They also roasted their own beans to varying degrees of darkness. They tried dozens of combinations of beans and roasting strategies, measuring the water content after roasting and the amount of static electricity generated upon grinding. They observed that darker roast coffees usually generate finer ground particles, and finer particles in turn generate more static electricity.

They drank a lot of coffee to get this right! But hey, sometimes science requires sacrifices, right?

I should mention that the trick of adding a little water to the beans is already known to some experts, although the precise science behind it was unknown until now. It even has a name (as the paper points out): the “Ross Droplet Technique.”

As the paper concludes, “a few simple squirts of water [may] have solved the problems of clumping, channeling, and poor extractions while aiding in the pursuit of attaining the tastiest espresso.” You only need a few drops of water–give it a try.

One important caveat is that if you use the French press method to make coffee, where the grounds are immersed in water, then this trick won’t make any difference.

What’s next? Well, I should point out that this study focused entirely on espresso. Does it work for regular coffee as well? Probably so, but more research is needed.

Does Taurine Really Extend Life? Maybe.

Readers of this column will know that I’m highly skeptical of dietary supplements. So you might imagine my reaction when I saw headlines a few days ago about “Taurine, the elixir of life?” (at CNN) and “Supplement slows aging in mice and monkeys” (NY Times).

Unlikely, I thought. But I read the scientific article behind these reports, and now I’m intrigued.

What is taurine? And could it really slow down aging? Well, it seems like it could, just maybe. A new study published last week in Science (one of the top journals in all of science) seems to show, for the first time, that taking large doses of taurine, an amino acid, might provide a host of benefits that include slowing down the aging process.

First question first: what is taurine? It’s an amino acid, but it’s not one of the 20 amino acids that make up the proteins in your body. It’s a slightly different one, and our bodies naturally produce it in small amounts. We need more than our bodies produce when we’re very young, but we get it from breast milk, and it’s added as a supplement to infant formula.

We also get extra taurine from our diet: the best foods for taurine are meats, especially shrimp and other shellfish, but also beef and the dark meat in chicken and turkey.

What did the new Science paper show? Well, first the authors (from Columbia University, India’s National Institute of Immunology, and the Sanger Institute in the UK) describe how taurine levels clearly decline with age in humans and other mammals. Now, just because taurine declines doesn’t mean that replacing it will reverse the aging process, but at least it establishes plausibility.

They then describe a series of experiments, mostly in mice but also in monkeys, where they fed the animals relatively large amounts of taurine each day, and the results were pretty darned impressive:

  1. Life span in the mice increased by 10-12%.
  2. In mice that started taurine supplements in middle age, life span increased by 18-25%.
  3. Bone density increased in female mice and osteoporosis seemed to be cured.
  4. Muscle strength increased in both males and females compared to mice who didn’t get taurine.
  5. The number of senescent cells–cells that don’t do much except emit damaging inflammatory signals–seemed to be reduced.

Of course, there’s always a big caveat with results in mice: they’re mice, not humans! And many, many times we’ve seen results in mice that just don’t carry over into humans. So the scientists also did a study (a smaller one) in monkeys, which are much closer to humans genetically. This also had some very good results:

  1. Bone density increased in the spine and legs.
  2. Body fat was lower than it was in monkeys that didn’t get taurine.
  3. Several measures of inflammation decreased.

Monkeys live a lot longer than mice, so the scientists don’t yet know if taurine increases the monkeys’ life span, but all the signs are promising. I was skeptical going into this article, but I couldn’t find any obvious flaws.

In an accompanying article in Science, U. Penn’s Joseph McGaunn and Joseph Baur point out that we don’t know for sure what the risks of long-term supplementation with taurine would be, but taurine is already widely consumed, both as an additive in baby formula and in energy drinks, with no known ill effects.

However, the amounts used in the Columbia study were very high, much higher than you’d get from energy drinks or even from standard taurine supplements. I looked up a few, and typical formulations offer 1000 or 2000 mg (which is 1-2 grams) per day. The dose given to the monkeys in the study, converted to a 150-pound person, is equivalent to about 5500 mg (5.5 grams) per day. That’s not very much by weight, and it would be easy enough to take this much taurine, but no one knows the effects in humans of such high doses.
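
Here’s my own back-of-the-envelope comparison, using only the figures quoted above (the 150-pound conversion and the typical supplement sizes); it’s a rough sketch, not dosing advice:

    # Rough dose comparison (my own arithmetic, based on the figures quoted above)
    body_weight_kg  = 150 * 0.4536       # a 150-pound person weighs about 68 kg
    study_dose_mg   = 5500               # human-equivalent daily dose cited above
    typical_dose_mg = 2000               # upper end of a typical taurine supplement

    print(study_dose_mg / body_weight_kg)    # ~81 mg per kg of body weight per day
    print(study_dose_mg / typical_dose_mg)   # ~2.75x a typical 2-gram supplement dose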

The bottom line: this study is really intriguing. More studies are needed, especially to measure the effects of taurine on humans, but all the signs are positive. I’ll be watching closely to see if the effects in mice and monkeys carry over, and if they do, we may all be taking taurine supplements one day. And I just ordered some taurine powder for myself–why not?