Tag Archives: systematic reviews

Standards and Services and PRISMA, Oh My! Systematic Reviews at MLAnet16, Day One

First posted at the MLAnet16 blog: http://www.mlanet.org/blog/standards-and-services-and-prisma,-oh-my!-systematic-reviews-at-mlanet16,-day-one


Toronto Scenery

Wow, wow, wow! What an AMAZING day! I’m at the Medical Library Association Annual Meeting, and trying to get to as many of the systematic review events as I can. Today is the first full day of the conference, and it was a jackpot — PRISMA for searches, a session on EBM/EBHC training, and a session on systematic review services. Lots of posters, too, but I haven’t had a chance to go look at those yet.

I tweeted a screenshot of the special session on systematic reviews this afternoon.

Dean Giustini asked me what’s new, so let me get right to that.

PRISMA

I saw an event in the program, something about PRISMA standards, so I thought I'd poke my head in. When I poked my head back out later, I could not stop talking about it. The gist of it is that PRISMA, which most medical librarians and journal editors know as the source of standards and guidelines for how systematic review data should be reported, is branching out. Me, I've been watching with excitement the various PRISMA extensions that have been added recently. These include standards for reporting protocols, meta-analyses, patient data, abstracts, and more. Well, it turns out there is a pretty substantial team working on developing PRISMA guidelines for reporting search strategies. This is pretty exciting for me! And somehow, I had missed it until today. The group today was opening the results from the original team to a broader audience and asking for reactions. They had come up with 123 guidelines, which they narrowed down to 53, and then we broke into four subgroups (search strategy, grey literature, documenting results, database characteristics) to brainstorm about how to narrow them down even further, into truly actionable points. I tell you, this is a group to watch.

Some of my favorite lines:

“I did this review according to PRISMA standards.” “You can’t. PRISMA is a ‘reporting’ standard, not a ‘doing.'” (Margaret Foster)

“The faculty are asking individual students to do something that is essentially a team sport.” (Ana Patricia Ayala)

“Cochrane says, ‘You will not limit by language.’ PRISMA says, ‘You will report any limits.'” (Margaret Sampson)

Here is just one of the flip boards from the conversation to whet the appetite of the systematic review methods nerds.

Priorities for Systematic Review Search Strategy Reporting
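Speaking of methods nerds: here is a minimal, purely hypothetical sketch (my own illustration, not anything produced by the PRISMA group) of the kind of structured record a search team might keep so that each of those four subgroup areas (search strategy, grey literature, documenting results, database characteristics) actually gets reported. All field names and values are invented for the example.

from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class SearchReport:
    """One record per database searched, so the search can be reported and re-run later."""
    database: str                  # e.g. "MEDLINE" -- database characteristics
    platform: str                  # e.g. "Ovid" -- the interface matters for replication
    date_searched: date            # when the search was actually run
    strategy: str                  # the full, line-by-line search strategy
    limits: List[str] = field(default_factory=list)            # any limits applied (language, dates, ...)
    results: int = 0               # number of records retrieved -- documenting results
    grey_lit_sources: List[str] = field(default_factory=list)  # registries, conference sites, etc.

# A made-up example record, just to show the level of detail involved.
medline = SearchReport(
    database="MEDLINE",
    platform="Ovid",
    date_searched=date(2016, 5, 15),
    strategy="1. exp Hypertension/\n2. (high adj2 blood pressure).ti,ab.\n3. 1 or 2",
    limits=["no language limit applied; any limits must still be reported"],
    results=1243,
    grey_lit_sources=["ClinicalTrials.gov"],
)
print(f"{medline.database} ({medline.platform}), searched {medline.date_searched}: {medline.results} records")

Nothing fancy, but every field in that record is something a reader would need in order to re-run the search or judge how it was limited.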

SYSTEMATIC REVIEW SERVICES

Later in the day, there was a complete session devoted to systematic review services in medical libraries. Yes, this is the same one from the tweet earlier in this post. I was dashing in late from the poster session, so I missed the beginning of the presentation on training needs by Catherine Boden and Hellsten. I was disappointed, because they were citing many wonderful articles I wanted to look into later. I’m sure glad the slides are in the online MLA system, because I’ll have to go find them! Being late also means I didn’t get any photos from their talk. The most provocative concept I pulled from their talk was the idea that systematic reviews are actually “a constellation of related methods rather than a single methodology.” So elegantly put, and so true. It’s a helpful way to reframe how we think about what we do, and is supported by the same drive that is motivating the various PRISMA extensions mentioned above.

MLAnet16 Systematic Review Services

Sarah Visintini presented for her team on scoping reviews, their similarities to and differences from systematic reviews, and the value of being included in the ENTIRE process (which she cleverly described as giving a "better appreciation of all the moving parts"). Sarah showed some very cool evidence mapping (see pic above), dot prioritization, and more. There were glowing recommendations of the 2005 Arksey and O'Malley article on scoping review methodology, and a wonderful link to all the references: bit.ly/visin-2016.

Kate Krause presented for a team primarily from the Texas Medical Center Library about their efforts to launch a new systematic review service, and the resulting "opportunities" (wink, wink, nudge, nudge, we all know what THAT means). The moderator described their presentation as a "collective therapy session," which generated considerable amusement among the audience. The most important parts of her talk were, of course, the solutions! They require systematic review requests to come through an online request form, which gives them solid statistics and lets them manage workflow better. They use a memorandum of understanding (MOU) with faculty to facilitate a discussion of duties, timeline, and expectations. They provide different levels of service, with some interesting requirements for the highest level (if I understood correctly, five mandatory face-to-face meetings with the project lead). One curious nugget, for which they are still seeking the citation, was heard at a prior MLA meeting: the more face-to-face meetings you have with a systematic review researcher, the more likely they are to actually publish on the project. They have a wonderful-sounding information packet for new SR researchers, but I didn't catch everything in it. I did catch bits (Cochrane timeline? list of other review types?) that make me want to know more!

MLAnet16 Systematic Review Services

Lynn Kysh and Robert E. Johnson presented a talk with the awesome title "Blinded Ambition: Misperceptions & Misconceptions of Systematic Reviews." They challenged the assumption that co-authorship and publication are an automatic good for librarians working on systematic review teams. Lynn described constraints on completing publication, and times when librarians there removed their names from articles being submitted for publication because of methodological concerns. Very, very interesting content. Well, and then there were the forest plot kittenz.

Last but not least, Maylene Kefeng Qiu represented a team that did the bulk of the work for a rapid review in … three weeks. Intense! Much of the challenge centered on timing, available expertise, staffing, workflow, and management coordination. The librarians on this team actually did the critical appraisal of the articles before giving the final dataset to the faculty member writing the review. My favorite line from her talk was, "Stick to your inclusion/exclusion criteria." Their slide deck had so many wonderful images illustrating parallels and differences between systematic reviews and rapid reviews. I hope it's OK if I share just one.

MLAnet16 Systematic Review Services

What’s New, What’s Hot: My Favorite Posters from #MLAnet15

Part 3 of a series of blogposts I wrote for the recent Annual Meeting of the Medical Library Association.


I had a particular slant: I was looking for new-technology posters and emerging innovations, but I was so delighted with the richness of the systematic review research being presented that there is a lot of that here, too. The chosen few ran from A to Z, with apps, bioinformatics, data visualization, games, Google Glass in surgery, new tech to save money in ILL operations, social media, YouTube, zombies, and even PEOPLE. What is it with medical librarians and zombies? Hunh. Surely there are other gory, engaging, popular medical monsters? Anyway, here are some of my favorite posters from MLA's Annual Meeting. There were so many more that I loved and tweeted, but I just can't share them all here today. I'll try to put them in a Storify when I get back home. Meanwhile, look these up online or in the app for more details. By the way, they started to get the audio up, so you can use the app to listen to many of the presenters talk about their posters.

My picks: Posters 14, 28, 30, 38, 40 (and that one's title should read "Twitter," not "Titter"), 43, 54, 65, 83, 100, 121, 125, 130, 157, 202, 224, 225, 228, 238, and 243.

Systematic Reviews 101

Systematic!!!

This morning in the Emergent Research Series, my colleagues Whitney Townsend and Mark MacEachern presented to a mix of mostly faculty and other librarians about how medical librarians use the systematic review methodology. They did a brilliant job! Very nicely structured, great sources and examples, excellent Q&A session afterwards. They had planned some activities, but it turned out there wasn't time. I'd like to know more about what they had planned!

I was one of the folks livetweeting. According to my Twitter metrics, this was a popular topic. I assembled a Storify from the tweets and related content; I thought it would be of interest to people here.

Storify: PF Anderson: Systematic Reviews 101: https://storify.com/pfanderson/systematic-reviews-101

Evidence-based? What’s the GRADE?

GRADE Working Group

Personally, I have a love/hate relationship with healthcare’s dependence on grading systems, kitemarks, seals of approval, etcetera, especially in the realm of websites and information for patients or general health literacy. It is rather a different matter when it comes to information for clinicians and healthcare providers (HCPs). There, we typically depend on the peer-review process to give clinicians confidence in the information on which they base their clinical decisions for patient care. Retraction Watch and others have made it clear that simply being published is no longer (if it ever was) an assurance of quality and dependability of healthcare information. As long as I’ve been working as a medical librarian, I’ve been hearing from med school faculty that their students don’t do the best job of critically appraising the medical literature. I suspect this is something that medical faculty have said for many generations, and that it is nothing new. Still, it is welcome to find tools and training to help improve awareness of the possible weaknesses of the literature and how to assess quality.

During some recent excellent and thought-provoking conversations on the Evidence-Based Health list, GRADE was brought up yet again by Per Olav Vandvik. There have been several conversations about GRADE in this group, but I thought perhaps some of the readers of this blog might not be aware of it yet. Here's a brief intro.

GRADE stands for “Grading of Recommendations Assessment, Development and Evaluation.” GRADE Working Group is the managing organization. I like their back history: “The Grading of Recommendations Assessment, Development and Evaluation (short GRADE) Working Group began in the year 2000 as an informal collaboration of people with an interest in addressing the shortcomings of present grading systems in health care.”

GRADE Working Group: http://www.gradeworkinggroup.org/index.htm

Playlist of Presentations on GRADE by the American Thoracic Society:
http://www.youtube.com/playlist?list=PLv3ASQRBkH-NMKAbMYoDIsWuGMF8fUrVL

Free software to support the GRADE process.

Cochrane: RevMan: GRADEpro: http://ims.cochrane.org/revman/gradepro

UpToDate GRADE Tutorial: http://www.uptodate.com/home/grading-tutorial

20-part article series in Journal of Clinical Epidemiology explaining GRADE. These articles focus on:
– Rating the quality of evidence
– Summarizing the evidence
– Diagnostic tests
– Making recommendations
– GRADE and observational studies

GRADE guidelines – best practices using the GRADE framework: http://www.gradeworkinggroup.org/publications/JCE_series.htm

The New York Academy of Medicine is offering a training session on GRADE this coming August. You can find more information at the links below.

Teaching Evidence Assimilation for Collaborative Healthcare: http://www.nyam.org/fellows-members/ebhc/
PDF on GRADE section of the course: http://www.nyam.org/fellows-members/docs/2013-More-Information-on-Level-2-GRADE.pdf

Hashtags of the Week (HOTW): Comparative Effectiveness Research (Week of January 21, 2013)

First posted at THL Blog http://wp.me/p1v84h-125


What is Comparative Effectiveness Research?
What is Comparative Effectiveness Research?: http://effectivehealthcare.ahrq.gov/index.cfm/what-is-comparative-effectiveness-research1/

I've been tracking the Comparative Effectiveness Research hashtag on Twitter for a while; you will have seen tweets from that stream earlier in this HOTW series of posts. The hashtag is #CER, by the way, but unfortunately it is used for many other topics as well — Carbon Emissions Reduction, Corporate Entrepreneurship Responsibility, food conversations in Turkish, and some sort of technology gadget topic that I haven't figured out. Ah.

Luckily, the #CER tag, when used in the health context, is often associated with a number of other hashtags. #eGEMS, #PCOR, #PCORI, and #QI are the most commonly used companions, but there are others as well.

#eGEMS = Generating Evidence and Methods to improve patient outcomes

#PCOR = Patient-Centered Outcomes Research

#PCORI = Patient-Centered Outcomes Research Institute

#QI = Quality Improvement (also “Quite Interesting”)

One of the things that makes it easier to track the health side of the #CER tag is that the CER community has volunteers (the National Pharmaceutical Council) who find the stream so valuable that they curate, collate, and archive the most relevant tweets each week, along with brief comments on the high points.

That JAMA article they mentioned? Was actually a 2009 classic from NEJM.

But there was a JAMA article in the collection from the previous week. And an impressive one, too!

Yesterday, our team here at the Taubman Health Sciences Library had a journal club to talk about a classic article on #CER.

That conversation had us looking beyond the issues of CER as a research methodology and into the foundations of why and how the methodology developed, the purposes it is designed to serve, when and why to choose CER over another methodology such as systematic reviews, the implications of CER for the Evidence-Based Healthcare movement, the strengths and weaknesses of CER compared to other methodologies, and much more. It was a valuable and interesting hour, well spent.

Of course, we aren't the only ones asking these types of questions about #CER — the FDA and the New York Times, among others, are asking them too.

Thus, you see me inspired today to dig into the #CER stream and explore more about what is there. One very timely notice is the webinar on Monday, next week.

And an upcoming conference at UCSF on using CER to make healthcare more relevant.

One of my colleagues also mentioned an upcoming campus event focusing on chronic diseases, so this was interesting and relevant to that.

The #CER stream seems to carry a steady flow of high-quality research articles. Definitely worth exploring.

What’s Wrong With Google Scholar for “Systematic” Reviews

Systematic!!!

Monday I read the already infamous article published January 9th which concludes that Google Scholar is, basically, good enough to be used for systematic reviews without searching any other databases.

Conclusion
The coverage of GS for the studies included in the systematic reviews is 100%. If the authors of the 29 systematic reviews had used only GS, no reference would have been missed. With some improvement in the research options, to increase its precision, GS could become the leading bibliographic database in medicine and could be used alone for systematic reviews.

Gehanno JF, Rollin L, Darmoni S. Is the coverage of google scholar enough to be used alone for systematic reviews. BMC Med Inform Decis Mak. 2013 Jan 9;13(1):7. http://www.biomedcentral.com/1472-6947/13/7/abstract

Screen Shot: "Is the coverage of google scholar enough ..."

Leading the argument from the library perspective is Dean Giustini, who has already commented on the problems of:
– precision
– generalizability
– reproducibility

Giustini D. Is Google scholar enough for SR searching? No. http://blogs.ubc.ca/dean/2013/01/is-google-scholar-enough-for-sr-searching-no/

Giustini D. More on using Google Scholar for the systematic review. http://blogs.ubc.ca/dean/2013/01/more-on-using-google-scholar-for-the-systematic-review/

While these have already been touched upon, what I want to do right now is to bring up what distresses me most about this article, which is the same thing that worries me so much about the overall systematic review literature.

Problem One: Google.

Google Search

First and foremost, “systematic review” means that the methods to the review are SYSTEMATIC and unbiased, validated and replicable, from the question, through the search, delivery of the dataset, to the review and analysis of the data, to reporting the findings.

Let's take just a moment with this statement. Replicable means that if two different research teams do exactly the same thing, they get the same results. Please note that Google is famed for constantly tweaking its algorithms. SEOMOZ tracks the history of changes and updates to the Google search algorithm. Back in the old days, Google would update the algorithm once a month, at the "dark of the moon," and the changes would then propagate through the networks. Now they update more often, with no set schedule. It happens when they choose, with at least 23 major updates during 2012 and 500-600 minor ones, which works out to more than one a day on average. That means you can do exactly the same search later in the same day and get different results.

Google Algorithm Change History: http://www.seomoz.org/google-algorithm-change

That is not the only thing that makes Google search results impossible to replicate. Google personalizes the search experience. That means that when you do a search for a topic, it shows you what it thinks you want to see, based on the sorts of links you've clicked on in the past and your browsing history. If you haven't already seen the Eli Pariser video on filter bubbles and their dangers, now is a good time to take a look at it.


TED: Eli Pariser: Beware Online Filter Bubbles. http://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles.html

If you are using standard Google, it will give you different results than it would give to your kid sitting on the couch across the room. This is usually a good thing. It is NOT a good thing if you are trying to use the search results to create a standardized dataset as part of a scientific study.

People often think this is not a big problem: all you have to do is log out of any Google products, and then it goes back to the generic search, and you get the same things anyone else would get. Right? Actually, no. Even if you switch to a new computer, in a different office or building, and don't log in at all, Google is really pretty good at guessing who you are based on the topics you search and the links you choose. Whether or not it guesses correctly doesn't matter for my concerns; the problem is that it is customizing results AT ALL. If there is any customization going on, then the tool is inappropriate for a systematic review.

Now, Google does provide a way to opt-out of the customization. You have to know it is possible, and you have to do something extra to turn it off, but it is possible and isn’t hard.

Has Google Popped the Filter Bubble?: http://www.wired.com/business/2012/01/google-filter-bubble/

Now, the most important question is whether it actually turns off the filter bubble. Uh, um, well, … No. It doesn't. Even if you turn off personalization, go to a new location, and use a different computer, Google still knows where that computer is sitting and makes guesses based on where you are. That Wired article about Google getting rid of the filter bubble was dated January 2012. I participated in a study done by DuckDuckGo on September 6th, which was reported in November on their blog. Each participant ran the same search strategies at the same time, twice, once logged in and once logged out, grabbed screenshots of the first screen of search results, and emailed them to the research team. The searchers were from many different places around the world. Did they get different results? Oh, you betcha.

Magic keywords on Google and the consequences of tailoring results: http://www.gabrielweinberg.com/blog/2012/11/magic-keywords-on-google-and-the-consequences-of-search-tailoring-results.html
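To make concrete what "different results" means in a study like that, here is a tiny, hypothetical sketch (not DuckDuckGo's actual analysis) of one way to measure how much two captured result lists overlap. The URLs are placeholders.

def result_overlap(run_a, run_b):
    """Fraction of URLs shared between two captured result lists (Jaccard similarity)."""
    a, b = set(run_a), set(run_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Hypothetical first-page results captured by two participants running the same query.
logged_in = ["example.org/a", "example.org/b", "example.org/c"]
logged_out = ["example.org/a", "example.org/d", "example.org/e"]

print(f"Overlap: {result_overlap(logged_in, logged_out):.0%}")  # identical result lists would print 100%

If search results were truly replicable, that number would always be 100%; the whole point of the exercise was that it wasn't.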

Now try to imagine the sort of challenge we face in the world of systematic review searching. Someone has already published a systematic review. You want to do a follow-up study. You want to use their search strategy. You need to test that you are using it right, so you limit the results to the same time period they searched, to see if you get the same numbers. I don't know about you, but I am busting with laughter trying to imagine running a search in Google and saying, "No, I just want the part of the Google results that was available at this particular moment in time five years, three months, and ten days ago, if I was sitting in Oklahoma City." Yeah, right.

Take-home message? Google cannot be used for a systematic review. Period. And not just because you get 16,000 results instead of 3,000 (the precision and recall question), or because Google casts a far wider, less curated net than the scholarly databases that libraries pay for and thus hands you poorer-quality results (also impacting sensitivity and specificity), but purely on methodological grounds.

Problem Two: Process.

Systematic Reviews and Clinical Practice Guidelines to Improve Clinical Decision Making

First and foremost, “systematic review” means that the methods to the review are SYSTEMATIC and unbiased, validated and replicable, from the question, through the search, delivery of the dataset, to the review and analysis of the data, to reporting the findings.

Doing a systematic review is supposed to be SYSTEMATIC. Not just systematic for the data analysis (a subset of which is the focus of the Gehanno Google Scholar article), but systematic for the data generation, the data collection, the data management, defining the question, analysing the data, establishing consensus for the analysis, and reporting the findings. It is systematic ALL THE WAY THROUGH THE WHOLE PROCESS of doing a real systematic review. The point of the methodology is to make sure the review is unbiased (to the best of our ability, despite being done by humans) and replicable. If both of those are true, someone else could do the same study, following your methodology, and get the same results. We all know that replicating results is one of the real challenges in science. That doesn't mean it is OK to be sloppy.

The Gehanno article tries to test a tiny fraction of the SR process: whether you can find the included studies. But they searched for them backwards from the way such a search would normally be done. That the final included studies from specific systematic reviews are discoverable in Google Scholar is also fairly predictable, given that Google Scholar scrapes content from publicly accessible databases such as PubMed, and thus duplicates that content.

It is unfortunate that their own methodology is not reported in sufficient detail to allow their study to be replicated. What they've done is a very small partial validation study showing that certain types of content are available in Google Scholar. That is important for showing the scope of Google Scholar, but it has absolutely nothing to do with doing a real systematic review, and the findings of their study should have no impact on the systematic review process for future researchers. Specifically, this sentence is the most serious misstatement.

“In other words, if the authors of these 29 systematic reviews had used only GS, they would have obtained the very same results.”

All we really know is what happened for the researchers who did these several searches on the days they searched. It might have been possible, but to say that they would have obtained the same results is far too strong a claim. For the statement above to be true, it would have been necessary, first, to find a way to lock in Google search results for specific content at specific times; second, to replicate the search strategies from the original systematic reviews in Google Scholar and to compare coverage; third, to have vastly more sophisticated advanced searching allowing greater precision, control, and focus; and so forth. Gehanno et al. are well aware of these issues, and mention them in their study.

“GS has been reported to be less precise than PubMed, since it retrieves hundreds or thousands of documents, most of them being irrelevant. Nevertheless, we should not overestimate the precision of PubMed in real life since precision and recall of a search in a database is highly dependent on the skills of the user. Many of them overestimate the quality of their searching performance, and experienced reference librarians typically retrieve about twice as many citations as do less experienced users. … . It just requires some improvement in the advanced search features to improve its precision …”

More importantly, in my mind, is that the Gehanno study conflates the search process and the data analysis in the systematic review methodology. These are two separate steps of the methodological process, with different purposes, functions, and processes. Each is to be systematic for what is happening at that step in the process. They are not interchangeable. The Gehanno study is solid and useful, but placed in an inappropriate context which results in the findings being misinterpreted.

Problem Three: Published

Retraction Watch & Plagiarism
Adam Marcus & Ivan Oransky. The paper is not sacred: Peer review continues long after a paper is published, and that analysis should become part of the scientific record. Nature Dec 22, 2011 480:449-450. http://www.nature.com/nature/journal/v480/n7378/full/480449a.html

The biggest problem with the Gehanno article, for me, is that it was published at all, at least in its current form. There would be much to like in the article if it did not make claims about the systematic review methodological process. The research is well done and interesting if looked at in the context of the potential utility of Google Scholar to support bedside or chairside clinical decisionmaking. There are significant differences between the approaches and strategies for evidence-based clinical practice and those for doing a systematic review. While the three authors are all highly respected and expert informaticians, the content of the article illustrates beyond a shadow of a doubt that they have a grave and worrisome lack of understanding of systematic review methodology. It is worse than that. It isn't just that the authors of the study don't understand systematic review methodologies, but that their peer reviewers ALSO did not understand, and the journal editor did not understand. That is not simply worrisome, but flat out frightening.

The entire enterprise of evidence-based healthcare depends in large part on the systematic review methodology. Evidence-based healthcare informs clinical decisionmaking, treatment plans and practice, insurance coverage, healthcare policy development, and other matters equally central to the practice of medicine and the welfare of patients. The methodologies for doing a systematic review were developed to try to improve these areas. As with any research project, the quality of the end product depends to a great extent on selecting the appropriate methodology for the study, understanding that methodology, following it accurately, and appropriately documenting and reporting variances from the standard methodology where they might impact the results or findings.

My concern is that this might be just one indicator of a widespread problem with the ways in which systematic review methodologies are understood and applied by researchers. These concerns have been discussed for years among my peers, both in medical librarianship and among devoted evidence-based healthcare researchers, those with a deep and intimate understanding of the processes and methodologies. There are countless examples of published articles that state they are systematic reviews which … aren't. I have been part of project teams for systematic reviews where I became aware partway through the process that other members of the team were not following the correct process, and the review was no longer unbiased or systematic. While some of those were published, my name is not on them, and I don't want my name associated with them. But the flaws in the process were not corrected or reported, which alarms me both with respect to those particular projects and as an indicator of challenges with published systematic reviews in general.

I used to team-teach systematic review methodologies with some representatives from the Cochrane Collaboration. At that time, I was still pretty new to the process and had a lot to learn, but I did know who the experts really were and who to go to with questions. One of the people I follow closely is Anne-Marie Glenny, who was a co-author on a major study examining the quality of published systematic reviews. Here is what they found.

“Identified methodological problems were an unclear understanding of underlying assumptions, inappropriate search and selection of relevant trials, use of inappropriate or flawed methods, lack of objective and validated methods to assess or improve trial similarity, and inadequate comparison or inappropriate combination of direct and indirect evidence. Adequate understanding of basic assumptions underlying indirect and mixed treatment comparison is crucial to resolve these methodological problems.”
Song F, Loke YK, Walsh T, Glenny AM, Eastwood AJ, Altman DG. Methodological problems in the use of indirect comparisons for evaluating healthcare interventions: survey of published systematic reviews. BMJ. 2009 Apr 3;338:b1147. doi: 10.1136/bmj.b1147. PMID: 19346285 http://www.bmj.com/content/338/bmj.b1147?view=long&pmid=19346285

We have a problem with systematic reviews as published, and the Gehanno article is merely a warning sign. There are serious concerns with the quality of published systematic reviews in the current research base, and equally large concerns with the ability of the peer-review process to identify quality systematic reviews. This is due, in my opinion, to weaknesses in the educational process for systematic review methodologies and in the level of methodological expertise on the part of the authors, editors, and reviewers of the scholarly journals. Those concerns are significant enough to generate doubt about the appropriateness of depending on systematic reviews for developing healthcare policies.

The Power of Post Publication Review, A Case Study

Pic of the day - Libraries

There are many discussions and examples of post-publication review as an alternative to the currently more common peer-review model. While this comes up fairly regularly in my Twitter stream, I don't think I've done more than hint at it within the blogposts here. I've also been watching (but neglecting to mention here) the emergence of data journalists and data journalism as a field, or perhaps I should say co-emergence, since it seems to be tightly coupled with shifts in the field of science communication and communicating risk to the public. Obviously, these all tie in tightly with the ethical constructs of informed consent and shared decisionmaking in healthcare (the phrase from the 1980s), which is now more often called participatory medicine.

That is quite a lot of jargon stuffed into one small paragraph. I could stuff it equally densely with citations to sources on these topics, definitions, and debates. Instead, for today, I’d like to give a brief overview of a case I’ve been privileged to observe unfolding over the weekend. If you want to see it directly, you’ll have to join the email list where this took place.


Part One: Publication

Last week, a new article on hormone replacement therapy (HRT) was published in the British Medical Journal (BMJ).

Schierbeck LL, Rejnmark L, Tofteng CL, Stilgren L, Eiken P, Mosekilde L, Køber L, Jensen JEB. Effect of hormone replacement therapy on cardiovascular events in recently postmenopausal women: randomised trial. BMJ 2012;345:e6409 doi: http://dx.doi.org/10.1136/bmj.e6409 (Published 9 October 2012)

The article reported outcomes from a clinical trial; more information is available in the trial registry.

Danish Osteoporosis Prevention Study http://clinicaltrials.gov/show/NCT00252408?link_type=CLINTRIALGOV&access_num=NCT00252408

Two days later, a message was posted to an evidence-based health care email list (EVIDENCE-BASED-HEALTH@jiscmail.ac.uk [EBH]), asking for discussion of the article.

The same day, a Rapid Response was published by BMJ criticizing the article.

Mascitelli L, Goldstein MR. The flawed beneficial effects of hormone replacement therapy. BMJ. http://www.bmj.com/content/345/bmj.e6409?tab=responses

The Rapid Response closed with this delightful witticism.

“If you torture numbers enough they will say anything you want.”


Part Two: Discussion

Meanwhile, on the EBH list, the conversation was going fast and furious. I’m not going to quote individuals, but I would like to collate an overview of the topics covered.

Methodology:
– blinding (it wasn’t)
– placebo-controlled (nope)
– 8% of eligible patients recruited
– sample size (small, compared to the Women’s Health Initiative (WHI) study)
– age confounding of participants

Ethics / Bias:
– Funding (pharma)
– Authors linked to pharma

Bibliography:
– incomplete?
– does it include the most important portions of the relevant evidence base?
– specifically lacking core references on the “age hypothesis”

Referees:
– Were they the right folk? (Yes, the list was assured by a BMJ editor)
– Did they read the article critically and review it thoroughly, including the bibliography?

Impact:
– implications for future practice
– placing this article appropriately in the context of the larger body of evidence
– implications for participatory medicine, informed consent, shared decisionmaking, and how to inform the public about risk for personal decisionmaking

Recommendations for future analysis:
– pool with similar data from other studies
– include in systematic review or meta-analysis
– strategic genomic analysis (NOTE: this was not available in 1993 when the study started)

Other:
– apparent publication delay (data collection first completed in 2003, then later followup in 2008, published in 2012)
– ghostwriting (specifically the history of it related to HRT)
– ‘System I’ thinking (gut feelings) vs ‘System II’ thinking (transparent methodological approach to decisionmaking)
– “science by sound-bite”

I'm not equipped to judge the article on any of these points. I did find it extremely interesting that the discussants included current and emeritus faculty from major universities in both the UK and the US, patient advocates, medical and health librarians, experts in evidence-based health care methodologies, and an editor of the journal that published the article.


Part Three: The Press

Of course, the press jumped all over this, in part because of the BMJ press release directing attention to the study.

HRT taken for 10 years significantly reduces risk of heart failure and heart attack. BMJ Press Releases, Wednesday, October 10, 2012 – 08:37. http://www.bmj.com/press-releases/2012/10/10/hrt-taken-10-years-significantly-reduces-risk-heart-failure-and-heart-atta

There are a lot of articles out there now in the popular press. Notice the type of language used.

BBC News: HRT reduces risk of heart attack, study suggests: http://www.bbc.co.uk/news/health-19886932

Guardian: HRT can cut heart attack risk, study shows: http://www.guardian.co.uk/lifeandstyle/2012/oct/09/hormone-replacement-therapy-heart-attack

Telegraph: HRT is safe and cuts heart deaths, ‘significant study’ finds: http://www.telegraph.co.uk/health/healthnews/9595745/HRT-is-safe-and-cuts-heart-deaths-significant-study-finds.html

Time: Heart Benefits from Hormone Replacement Therapy?: http://healthland.time.com/2012/10/10/heart-benefits-from-hormone-replacement-therapy/

US News: Health Buzz: HRT May Be More Than Safe, Study Says: http://health.usnews.com/health-news/articles/2012/10/10/health-buzz-hrt-may-be-more-than-safe-study-says

Kind of makes you want to run out and get pills, doesn’t it? This one is not from a major popular press venue, but it has some interesting aspects. Again, look at the language used in the headline.

MedPage: HRT Helps Heart with No Cancer, Clot Risks; By Charles Bankhead, Staff Writer, MedPage Today: http://www.medpagetoday.com/OBGYN/HRT/35236

This one is from a medical news service, and was published the same day as the original article, before even the BMJ press release. What is really interesting is that it says the article was reviewed prior to publication by an MD and medical faculty member.

Published: October 09, 2012
Reviewed by Robert Jasmer, MD; Associate Clinical Professor of Medicine, University of California, San Francisco and Dorothy Caputo, MA, BSN, RN, Nurse Planner

That's awfully fast. I know they need to be fast, it's news, but it seems to me that it would be hard to have enough time to read carefully or think about the implications and context; to have time to do much more than think, "Hmmm, BMJ, a good journal. This article says what the BMJ article said. OK." I'm not saying that's what the doc who signed off on it did; I'm just saying that the process and its speed lend themselves to flaws.

The authors of the article and the BMJ editor both emphasized that this data is exceptional and essentially unrepeatable. Because of the WHI study, there is virtually no chance of generating this type of data again. That the Danish study findings run so contrary to the findings of the WHI study is shocking and noteworthy. Why? Is this significant enough to reopen the question of HRT risks? What does this mean for individual patients and clinicians attempting to make treatment plans and decisions?

Obviously, it is not as simple as the press would make it seem. Open access makes the article accessible, but without open post-publication peer review, the CONTEXT is not made accessible. Open access can only go so far in supporting personal decisionmaking.