Earlier this month the editors of The New England Journal of Medicine invited readers into a more interactive role than we usually expect from a traditional medical journal. They offered us free access to a provocative clinical research article, welcomed us (paying subscribers or not) to read the accompanying editorial, and then, in a remarkable demonstration of the potential of open (or at least free) access, invited medical practitioners to comment on the article in an online forum: Will it change the way you practice? Why or why not?
Their offer received hundreds of thoughtful and articulate responses in the first two days, and many more over the following week. The paper, reporting on the JUPITER trial of rosuvastatin in participants with unremarkable LDL cholesterol levels and an at least moderately elevated high-sensitivity C-reactive protein (hsCRP), seems a particularly good choice. As a practicing internist as well as an editor, I can well appreciate the importance of identifying and treating apparently healthy patients whose normal LDL levels belie a high risk of heart attack or stroke.
Do the study results really tell us how best to help such patients? Many who commented in the online survey were quick to raise insightful criticisms: the study did not enroll completely healthy people but rather, in significant proportions, those with a variety of risk factors (such as metabolic syndrome and smoking); there was no mention of appropriate non-pharmaceutical interventions (such as weight loss, diet, exercise, and smoking cessation); the study drug was produced by the study sponsor and is relatively expensive among statins; the treatment cost per case averted averaged in the hundreds of thousands of dollars; and so on. Should one change practice based on these results, screening widely for elevated CRP and prescribing rosuvastatin (or a less expensive congener) to those with higher levels? Interestingly, with a few days remaining to vote, the tally appears to be running about even.
Many clinicians (myself included) would like to focus on the particular individuals most likely to benefit from an intervention, and conversely to avoid unnecessary testing and treatment in patients for whom these might do more harm than good. The authors indicate that the relative risk reduction with treatment was approximately 50% over 2 years across the board: in smokers and in those with increased Framingham risk scores, hypertension, metabolic syndrome, and so on. But the paper does not directly present the absolute rate of adverse outcomes in each risk group. Were the prevented events concentrated in, or perhaps largely limited to, those with risk factors that one could identify without routine testing of hsCRP?
Again, as commenters have noted, the paper is not entirely clear on this point. Numbers of outcomes are said to be proportional to the size of the little black squares in figure 2, which I suppose one could measure and divide by the number of participants in each group to get risk estimates, but we'd still have to factor in the figure's footnote that "data were missing for some participants in some subgroups." The authors of an editorial in BMJ apparently did try to measure those little squares, and they comment that "A closer look at subgroup analyses (size of plots, exact numbers are not reported) indicates that most events occurred in high risk groups. Wouldn't old fashioned risk estimation by traditional methods have produced similar results?" (Incidentally, if you live in the US and don't happen to have a BMJ subscription, it will cost you $4.00 to read their critique. Not a huge amount, but if you've already been convinced by the article itself, that's $4.00 further away from an hsCRP test for yourself or a loved one. Is it worth the gamble?)
People who make patient care decisions clearly have a lot to say about high-profile papers with debatable results. As a clinician, I welcome this opportunity to see what my colleagues are thinking. That many of us are not deferring our decisions to industry-designated "thought leaders" bearing industry-approved slide presentations, nor to personable drug reps bearing pens and reprints, should come as no surprise. But to have the opportunity to have our say right there on the NEJM site, the very source of those slides and reprints, is a new and important development. As an editor, I believe that the expectation of such public scrutiny could, over time, more sharply focus the design and reporting of clinical trials on the specific questions that practitioners must consider in the interest of patient care. I'm looking forward to seeing how the comments will be incorporated into a permanently accessible record, and what relationship that record will bear to NEJM's "official" correspondence.
As clinician and editor, I applaud the editors of NEJM for providing this forum and hope it will become a regular feature.