Tag Archives: systematic reviews

Standards and Services and PRISMA, Oh My! Systematic Reviews at MLAnet16, Day One

First posted at the MLAnet16 blog: http://www.mlanet.org/blog/standards-and-services-and-prisma,-oh-my!-systematic-reviews-at-mlanet16,-day-one


Toronto Scenery

Wow, wow, wow! What an AMAZING day! I’m at the Medical Library Association Annual Meeting, and trying to get to as many of the systematic review events as I can. Today is the first full day of the conference, and it was a jackpot — PRISMA for searches, a session on EBM/EBHC training, and a session on systematic review services. Lots of posters, too, but I haven’t had a chance to go look at those yet.

I tweeted a screenshot of the special session on systematic reviews this afternoon.

Dean Giustini asked me what’s new, so let me get right to that.

PRISMA

I saw an event in the program, something about PRISMA standards, so I thought I’d poke my head in. When I poked my head back out later, I could not stop talking about it. The gist of it is that PRISMA, which most medical librarians and journal editors know as the source of standards and guidelines for how systematic review data should be reported, is branching out. Me, I’ve been watching with excitement the various PRISMA extensions that have been added recently. These include standards for reporting protocols, meta-analyses, patient data, abstracts, and more. Well, it turns out there is a pretty substantial team working on developing PRISMA guidelines for reporting search strategies. This is pretty exciting for me! And somehow, I had missed it until today. The group today was opening the results from the original team to a broader audience and asking for reactions. They had come up with 123 guidelines, which they narrowed down to 53, and then we broke into four subgroups (search strategy, grey literature, documenting results, database characteristics) to brainstorm about how to narrow down even further, into truly actionable points. I tell you, this is a group to watch.

Some of my favorite lines:

“I did this review according to PRISMA standards.” “You can’t. PRISMA is a ‘reporting’ standard, not a ‘doing.'” (Margaret Foster)

“The faculty are asking individual students to do something that is essentially a team sport.” (Ana Patricia Ayala)

“Cochrane says, ‘You will not limit by language.’ PRISMA says, ‘You will report any limits.'” (Margaret Sampson)

Here is just one of the flip boards from the conversation to whet the appetite of the systematic review methods nerds.

Priorities for Systematic Review Search Strategy Reporting

SYSTEMATIC REVIEW SERVICES

Later in the day, there was a complete session devoted to systematic review services in medical libraries. Yes, this is the same one from the tweet earlier in this post. I was dashing in late from the poster session, so I missed the beginning of the presentation on training needs by Catherine Boden and Hellsten. I was disappointed, because they were citing many wonderful articles I wanted to look into later. I’m sure glad the slides are in the online MLA system, because I’ll have to go find them! Being late also means I didn’t get any photos from their talk. The most provocative concept I pulled from their talk was the idea that systematic reviews are actually “a constellation of related methods rather than a single methodology.” So elegantly put, and so true. It’s a helpful way to reframe how we think about what we do, and is supported by the same drive that is motivating the various PRISMA extensions mentioned above.

MLAnet16 Systematic Review Services

Sarah Visintini presented for her team on scoping reviews, their similarities to and differences from systematic reviews, and the value of being included in the ENTIRE process (which she cleverly described as giving a “better appreciation of all the moving parts”). Sarah showed some very cool evidence mapping (see pic above), dot prioritization, and more. There were glowing recommendations of the 2005 Arksey and O’Malley article on scoping review methodologies and a wonderful link to all the references: bit.ly/visin-2016.

Kate Krause presented for a team primarily from the Texas Medical Center Library about their efforts to launch a new systematic review service and the resulting “opportunities” (wink, wink, nudge, nudge, we all know what THAT means). The moderator described their presentation as a “collective therapy session,” which generated considerable amusement in the audience. The most important parts of her talk were, of course, the solutions! They require systematic review requests to come through an online request form, which gives them solid statistics and lets them manage workflow better. They use a memorandum of understanding (MOU) with faculty to facilitate a discussion of duties, timeline, and expectations. They provide different levels of service, with some interesting requirements for the highest level (such as, if I understood correctly, five mandatory face-to-face meetings with the project lead). One curious nugget, for which they are seeking the citation, was heard at a prior MLA meeting: the more face-to-face meetings you have with a systematic review researcher, the more likely they are to actually publish on the project. They have a wonderful-sounding information packet for new SR researchers, but I didn’t catch everything in it. I did catch bits (a Cochrane timeline? a list of other review types?) that make me want to know more!

MLAnet16 Systematic Review Services

Lynn Kysh and Robert E. Johnson presented a talk with the awesome title: “Blinded Ambition, misperceptions & misconceptions of systematic reviews.” They discussed some of the challenges to the assumption that co-authorship and publication are automatic goods for librarians working on systematic review teams. Lynn described constraints on completing publication, including times when librarians there removed their names from articles being submitted for publication because of methodological concerns. Very, very interesting content. Well, and then there were the forest plot kittenz.

Last but not least, Maylene Kefeng Qiu represented a team that did the bulk of the work for a rapid review in … three weeks. Intense! Much of the challenge centered on timing, available expertise, staffing, workflow, and management coordination. The librarians on this team actually did the critical appraisal of the articles before giving the final dataset to the faculty member writing the review. My favorite line from her talk was, “Stick to your inclusion/exclusion criteria.” Their slide deck had so many wonderful images illustrating parallels and differences between systematic reviews and rapid reviews. I hope it’s ok if I share just one.

MLAnet16 Systematic Review Services

What’s New, What’s Hot: My Favorite Posters from #MLAnet15

Part 3 of a series of blogposts I wrote for the recent Annual Meeting of the Medical Library Association.


I had a particular slant: I was looking for new technology posters and emerging and emergent innovations, but I was so delighted with the richness of the systematic review research being presented that there is a lot of that here, too. The chosen few ran from A to Z, with apps, bioinformatics, data visualization, games, Google Glass in surgery, new tech to save money in ILL operations, social media, YouTube, zombies, and even PEOPLE. What is it with medical librarians and zombies? Hunh. Surely there are other gory, engaging, popular medical monsters? Anyway, here are some of my favorite posters from MLA’s Annual Meeting. There were so many more that I loved and tweeted, but I just can’t share them all here today. I’ll try to put them in a Storify when I get back home. Meanwhile, look these up online or in the app for more details. By the way, they started to get the audio up, so you can use the app to listen to many of the presenters talk about their posters.

My picks, by poster number: 14, 28, 30, 38, 40 (and that one should read “Twitter,” not “Titter”), 43, 54, 65, 83, 100, 121, 125, 130, 157, 202, 224, 225, 228, 238, and 243.

Systematic Reviews 101

Systematic!!!

This morning in the Emergent Research Series, my colleagues Whitney Townsend and Mark MacEachern presented to an audience of mostly faculty and other librarians on how medical librarians use the systematic review methodology. They did a brilliant job! Very nicely structured, great sources and examples, excellent Q&A session afterwards. They had planned some activities, but it turned out there wasn’t time. I’d like to know more about what they had planned!

I was one of the folk livetweeting. According to my Twitter metrics, this was a popular topic. I assembled a Storify from the Tweets and related content. I thought it would be of interest to people here.

Storify: PF Anderson: Systematic Reviews 101: https://storify.com/pfanderson/systematic-reviews-101

Evidence-based? What’s the GRADE?

GRADE Working Group

Personally, I have a love/hate relationship with healthcare’s dependence on grading systems, kitemarks, seals of approval, etcetera, especially in the realm of websites and information for patients or general health literacy. It is rather a different matter when it comes to information for clinicians and healthcare providers (HCPs). There, we typically depend on the peer-review process to give clinicians confidence in the information on which they base their clinical decisions for patient care. Retraction Watch and others have made it clear that simply being published is no longer (if it ever was) an assurance of quality and dependability of healthcare information. As long as I’ve been working as a medical librarian, I’ve been hearing from med school faculty that their students don’t do the best job of critically appraising the medical literature. I suspect this is something that medical faculty have said for many generations, and that it is nothing new. Still, it is welcome to find tools and training to help improve awareness of the possible weaknesses of the literature and how to assess quality.

During some recent excellent and thought-provoking conversations on the Evidence-Based Health list, GRADE was brought up yet again by Per Olav Vandvik. There have been several conversations about GRADE in this group, but I thought perhaps some of the readers of this blog might not be aware of it yet. Here’s a brief intro.

GRADE stands for “Grading of Recommendations Assessment, Development and Evaluation.” GRADE Working Group is the managing organization. I like their back history: “The Grading of Recommendations Assessment, Development and Evaluation (short GRADE) Working Group began in the year 2000 as an informal collaboration of people with an interest in addressing the shortcomings of present grading systems in health care.”

GRADE Working Group: http://www.gradeworkinggroup.org/index.htm

Playlist of Presentations on GRADE by the American Thoracic Society:
http://www.youtube.com/playlist?list=PLv3ASQRBkH-NMKAbMYoDIsWuGMF8fUrVL

Free software to support the GRADE process.

Cochrane: RevMan: GRADEpro: http://ims.cochrane.org/revman/gradepro

UpToDate GRADE Tutorial: http://www.uptodate.com/home/grading-tutorial

A 20-part article series in the Journal of Clinical Epidemiology explains GRADE. These articles focus on:
– Rating the quality of evidence
– Summarizing the evidence
– Diagnostic tests
– Making recommendations
– GRADE and observational studies

GRADE guidelines – best practices using the GRADE framework: http://www.gradeworkinggroup.org/publications/JCE_series.htm

The New York Academy of Medicine is having a training session on GRADE this coming August. You can find more information at the links below.

Teaching Evidence Assimilation for Collaborative Healthcare: http://www.nyam.org/fellows-members/ebhc/
PDF on GRADE section of the course: http://www.nyam.org/fellows-members/docs/2013-More-Information-on-Level-2-GRADE.pdf

Hashtags of the Week (HOTW): Comparative Effectiveness Research (Week of January 21, 2013)

First posted at THL Blog http://wp.me/p1v84h-125


What is Comparative Effectiveness Research?
What is Comparative Effectiveness Research?: http://effectivehealthcare.ahrq.gov/index.cfm/what-is-comparative-effectiveness-research1/

I’ve been tracking the Comparative Effectiveness Research hashtag on Twitter for a while. You will have seen tweets from that stream here earlier in this HOTW series of posts. The hashtag is #CER, by the way, but unfortunately it is used for many other topics as well — Carbon Emissions Reduction, Corporate Entrepreneurship Responsibility, food conversations in Turkish, and some sort of technology gadget topic that I haven’t figured out. Ah.

Luckily, the #CER tag, when used in the health context, has a number of other hashtags with which it is often associated. #eGEMS, #PCOR, #PCORI, and #QI are the most commonly used companion hashtags, but there are others as well (see the little filtering sketch after the list below).

#eGEMS = Generating Evidence and Methods to improve patient outcomes

#PCOR = Patient-Centered Outcomes Research

#PCORI = Patient-Centered Outcomes Research Institute

#QI = Quality Improvement (also “Quite Interesting”)
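For anyone who wants to automate that kind of filtering, here is a minimal sketch in Python. The example tweets are made up, and a real workflow would pull tweets from a Twitter search or archive export; the idea is simply to keep a #CER tweet only if it also carries one of the health-context companion tags listed above.

# A minimal sketch of disambiguating #CER tweets by companion hashtags.
# The tweets below are made-up examples; a real workflow would read them
# from a Twitter search export or archive instead.

HEALTH_COMPANIONS = {"#egems", "#pcor", "#pcori", "#qi"}

tweets = [
    "New #CER study on patient outcomes #PCOR",
    "Cutting our footprint with #CER initiatives",           # carbon emissions reduction
    "Webinar next week on #CER methods and #PCORI funding",
]

def looks_like_health_cer(tweet):
    """Keep a #CER tweet only if a health-context companion tag is also present."""
    tags = {word.lower().strip(".,;:!?") for word in tweet.split() if word.startswith("#")}
    return "#cer" in tags and bool(tags & HEALTH_COMPANIONS)

health_tweets = [t for t in tweets if looks_like_health_cer(t)]
print(health_tweets)   # only the first and third example tweets survive the filter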

One of the things that makes it easier to track the health side of the #CER tag is that the CER community has volunteers (from the National Pharmaceutical Council) who find the stream so valuable that they curate, collate, and archive the most relevant tweets each week, along with brief comments on the high points.

That JAMA article they mentioned? Was actually a 2009 classic from NEJM.

But there was a JAMA article in the collection from the previous week. And an impressive one, too!

Yesterday, our team here at the Taubman Health Sciences Library had a journal club to talk about a classic article on #CER.

That conversation had us looking beyond the issues of CER as a research methodology, and into the foundation of why and how the methodology developed, the purposes it is designed to serve, when and why to choose CER over another methodology such as systematic reviews, the implications of CER for the Evidence-Based Healthcare movement, the strengths and weaknesses of CER compared to other methodologies, and much more. It was a very valuable and interesting hour, well spent.

Of course, we aren’t the only ones asking these types of questions about #CER — the FDA and the New York Times, among others, are asking them too.

Thus, you see me inspired today to dig into the #CER stream and explore more about what is there. One very timely notice is the webinar on Monday, next week.

And an upcoming conference at UCSF on using CER to make healthcare more relevant.

One of my colleagues also mentioned an upcoming campus event focusing on chronic diseases, so this was interesting and relevant to that.

The #CER stream seems to carry a steady flow of high-quality research articles. Definitely worth exploring.

What’s Wrong With Google Scholar for “Systematic” Reviews

Systematic!!!

Monday I read the already-infamous article, published January 9th, which concludes that Google Scholar is, basically, good enough to be used for systematic reviews without searching any other databases.

Conclusion
The coverage of GS for the studies included in the systematic reviews is 100%. If the authors of the 29 systematic reviews had used only GS, no reference would have been missed. With some improvement in the research options, to increase its precision, GS could become the leading bibliographic database in medicine and could be used alone for systematic reviews.

Gehanno JF, Rollin L, Darmoni S. Is the coverage of google scholar enough to be used alone for systematic reviews. BMC Med Inform Decis Mak. 2013 Jan 9;13(1):7. http://www.biomedcentral.com/1472-6947/13/7/abstract

Screen Shot: "Is the coverage of google scholar enough ..."

Leading the argument from the library perspective is Dean Giustini, who has already commented on the problems of:
– precision
– generalizability
– reproducibility

Giustini D. Is Google scholar enough for SR searching? No. http://blogs.ubc.ca/dean/2013/01/is-google-scholar-enough-for-sr-searching-no/

Giustini D. More on using Google Scholar for the systematic review. http://blogs.ubc.ca/dean/2013/01/more-on-using-google-scholar-for-the-systematic-review/

While these have already been touched upon, what I want to do right now is to bring up what distresses me most about this article, which is the same thing that worries me so much about the overall systematic review literature.

Problem One: Google.

Google Search

First and foremost, “systematic review” means that the methods to the review are SYSTEMATIC and unbiased, validated and replicable, from the question, through the search, delivery of the dataset, to the review and analysis of the data, to reporting the findings.

Let’s take just a moment with this statement. Replicable means that if two different research teams do exactly the same thing, they get the same results. Please note that Google is famed for constantly tweaking its algorithms. SEOmoz tracks the history of changes and updates to the Google search algorithm. Back in the old days, Google would update the algorithm once a month, at the “dark of the moon”, and the changes would then propagate through the networks. Now they update more often, with no set schedule. It happens when they choose, with at least 23 major updates during 2012 and another 500-600 minor ones, more than one change a day on average. That means you can do exactly the same search later in the same day and get different results.

Google Algorithm Change History: http://www.seomoz.org/google-algorithm-change

That is not the only thing that makes Google search results unable to be replicated. Google personalizes the search experience. That means that when you do a search for a topic, it shows you what it thinks you want to see, based on the sort of links you’ve clicked on in the past, and your browsing history. If you haven’t already seen the Eli Pariser video on filter bubbles and their dangers, now is a good time to take a look at it.


TED: Eli Pariser: Beware Online Filter Bubbles. http://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles.html

If you are using standard Google, it will give you different results than it would give to your kid sitting on the couch across the room. This is usually a good thing. It is NOT a good thing if you are trying to use the search results to create a standardized dataset as part of a scientific study.

People often think this is not a big problem. All you have to do is log out of any Google products. Then it goes back to the generic search, and you get the same things anyone else would get. Right? Actually, no. Even if you switch to a new computer, in a different office or building, and don’t log in at all, Google is really pretty good at making a guess at who you are based on the topics you search and the links you choose. Whether or not it guesses correctly doesn’t matter for my concerns; the problem is that it is customizing results AT ALL. If there is any customization going on, then that is a tool that is inappropriate for a systematic review.

Now, Google does provide a way to opt-out of the customization. You have to know it is possible, and you have to do something extra to turn it off, but it is possible and isn’t hard.

Has Google Popped the Filter Bubble?: http://www.wired.com/business/2012/01/google-filter-bubble/

Now, the most important question is this: does it actually turn off the filter bubble? Uh, um, well, … No. It doesn’t. Even if you turn off personalization, go to a new location, and use a different computer, Google still knows where that computer is sitting and makes guesses based on where you are. That Wired article about Google getting rid of the filter bubble was dated January 2012. I participated in a study done by DuckDuckGo on September 6th, reported in November on their blog. Each participant ran the same search strategies at the same time, twice, once logged in and once logged out. They grabbed screenshots of the first screen of search results and emailed them to the research team. The searchers were from many different places around the world. Did they get different results? Oh, you betcha.

Magic keywords on Google and the consequences of tailoring results: http://www.gabrielweinberg.com/blog/2012/11/magic-keywords-on-google-and-the-consequences-of-search-tailoring-results.html

Now try to imagine the sort of challenge we face in the world of systematic review searching. Someone already published a systematic review. You want to do a followup study. You want to use their search strategy. You need to test that you are using it right, so you limit the results to the same time period they searched, to see if you get the same numbers. I don’t know about you, but I am busting with laughter trying to imagine a search in Google, saying, “No, I just want the part of the Google results that were available at this particular moment in time five years, three months, and ten days ago, if I was sitting in Oklahoma City.” Yeah, right.

Take home message? Google cannot be used for a systematic review. Period. And not just because you get 16,000 results instead of 3,000 (the precision and recall question), or because Google is a more comprehensive database than the curated scholarly databases that libraries pay for and thus you end up with poor-quality results (also impacting sensitivity and specificity), but purely on methodological grounds.
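To put some purely hypothetical numbers behind the precision and recall point: suppose the same 300 relevant articles sit in both a 3,000-result set from a curated database and a 16,000-result set from Google. Recall is identical, but precision collapses. A quick back-of-the-envelope sketch:

# Back-of-the-envelope precision/recall comparison. The counts are hypothetical,
# not taken from the Gehanno study or any real search.
TOTAL_RELEVANT = 300   # imagined number of truly relevant articles for the question

def precision_recall(relevant_retrieved, total_retrieved):
    precision = relevant_retrieved / total_retrieved
    recall = relevant_retrieved / TOTAL_RELEVANT
    return precision, recall

print(precision_recall(300, 3000))    # curated database: precision 0.10, recall 1.0
print(precision_recall(300, 16000))   # Google-sized result set: precision ~0.019, recall 1.0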

Problem Two: Process.

Systematic Reviews and Clinical Practice Guidelines to Improve Clinical Decision Making

First and foremost, “systematic review” means that the methods to the review are SYSTEMATIC and unbiased, validated and replicable, from the question, through the search, delivery of the dataset, to the review and analysis of the data, to reporting the findings.

Doing a systematic review is supposed to be SYSTEMATIC. Not just systematic for the data analysis (a subset of which is the focus of the Gehanno Google Scholar article), but systematic for the data generation, the data collection, the data management, defining the question, analysing the data, establishing consensus for the analysis, and reporting the findings. It is systematic ALL THE WAY THROUGH THE WHOLE PROCESS of doing a real systematic review. The point of the methodology is to make sure the review is unbiased (to the best of our ability, despite being done by humans) and replicable. If both of those are true, someone else could do the same study, following your methodology, and get the same results. We all know that one of the real challenges in science right now is replicating results. That doesn’t mean it is OK to be sloppy.

The Gehanno article tries to test a tiny fraction of the SR process: whether you can find the included studies. But they searched backwards from the way such a search would normally be done. The finding that the final selected studies from specific systematic reviews are discoverable in Google Scholar is also fairly predictable, given that Google Scholar scrapes content from publicly accessible databases such as PubMed, and thus duplicates that content.

It is unfortunate that their own methodology is not reported in sufficient detail to allow replicating their study. What they’ve done is a very tiny partial validation study showing that certain types of content are available in Google Scholar. That is important for showing the scope of Google Scholar, but it has absolutely nothing to do with doing a real systematic review, and the findings of their study should have no impact on the systematic review process for future researchers. Specifically, this sentence is the most serious misstatement.

“In other words, if the authors of these 29 systematic reviews had used only GS, they would have obtained the very same results.”

All we really know is what happened for the researchers who did these several searches on the days they searched. It might have been possible, but to say that they would have obtained the same results is far too strong a claim. For the statement above to be true, it would have been necessary first to find a way to lock in Google search results for specific content at specific times; second, to replicate the search strategies from the original systematic reviews in Google Scholar and compare coverage; third, to have vastly more sophisticated advanced searching allowing greater precision, control, and focus; and so forth. Gehanno et al. are well aware of these issues, and mention them in their study.

“GS has been reported to be less precise than PubMed, since it retrieves hundreds or thousands of documents, most of them being irrelevant. Nevertheless, we should not overestimate the precision of PubMed in real life since precision and recall of a search in a database is highly dependent on the skills of the user. Many of them overestimate the quality of their searching performance, and experienced reference librarians typically retrieve about twice as many citations as do less experienced users. … . It just requires some improvement in the advanced search features to improve its precision …”

More important, to my mind, is that the Gehanno study conflates the search process and the data analysis in the systematic review methodology. These are two separate steps of the methodological process, with different purposes, functions, and processes. Each is to be systematic for what is happening at that step in the process. They are not interchangeable. The Gehanno study is solid and useful, but it is placed in an inappropriate context, which results in the findings being misinterpreted.

Problem Three: Published

Retraction Watch & Plagiarism
Adam Marcus & Ivan Oransky. The paper is not sacred: Peer review continues long after a paper is published, and that analysis should become part of the scientific record. Nature Dec 22, 2011 480:449-450. http://www.nature.com/nature/journal/v480/n7378/full/480449a.html

The biggest problem with the Gehanno article, for me, is that it was published at all, at least in its current form. There is much to like in the article, if only it didn’t make claims relative to the systematic review methodological process. The research is well done and interesting if looked at in the context of the potential utility of Google Scholar for supporting bedside or chairside clinical decisionmaking. There are significant differences between the approaches and strategies for evidence-based clinical practice and those for doing a systematic review. While the three authors are all highly respected and expert informaticians, the content of the article illustrates beyond a shadow of a doubt that the authors have a grave and worrisome lack of understanding of the systematic review methodology. It is worse than that. It isn’t just that the authors of the study don’t understand systematic review methodology, but that their peer reviewers ALSO did not understand it, and that the journal editor did not understand it. That is not simply worrisome, but flat-out frightening.

The entire enterprise of evidence-based healthcare depends in large part on the systematic review methodology. Evidence-based healthcare informs clinical decisionmaking, treatment plans and practice, insurance coverage, healthcare policy development, and other matters equally central to the practice of medicine and the welfare of patients. The methodologies for doing a systematic review were developed to try to improve these areas. As with any research project, the quality of the end product depends to a great extent on selecting the appropriate methodology for the study, understanding that methodology, following it accurately, and appropriately documenting and reporting variances from the standard methodology where they might impact the results or findings.

My concern is that this might be just one indicator of a widespread problem with the ways in which systematic review methodologies are understood and applied by researchers. These concerns have been discussed for years among my peers, both in medical librarianship and among devoted evidence-based healthcare researchers, those with a deep and intimate understanding of the processes and methodologies. There are countless examples of published articles that state they are systematic reviews which … aren’t. I have been part of project teams for systematic reviews where I became aware partway through the process that other members of the team were not following the correct process, and the review was no longer unbiased or systematic. While some of those were published, my name is not on them, and I don’t want my name associated with them. But the flaws in the process were not corrected or reported, which alarms me both with respect to those particular projects and as an indicator of challenges with published systematic reviews in general.

I used to team teach systematic review methodologies with some representatives from the Cochrane Collaboration. At that time, I was still pretty new to the process and had a lot to learn, but I did know who the experts really were, and who to go to with questions. One of the people I follow closely is Anne-Marie Glenny, who was a co-author on a major study examining the quality of published systematic reviews. Here is what they found.

“Identified methodological problems were an unclear understanding of underlying assumptions, inappropriate search and selection of relevant trials, use of inappropriate or flawed methods, lack of objective and validated methods to assess or improve trial similarity, and inadequate comparison or inappropriate combination of direct and indirect evidence. Adequate understanding of basic assumptions underlying indirect and mixed treatment comparison is crucial to resolve these methodological problems.”
Song F, Loke YK, Walsh T, Glenny AM, Eastwood AJ, Altman DG. Methodological problems in the use of indirect comparisons for evaluating healthcare interventions: survey of published systematic reviews. BMJ. 2009 Apr 3;338:b1147. doi: 10.1136/bmj.b1147. PMID: 19346285 http://www.bmj.com/content/338/bmj.b1147?view=long&pmid=19346285

We have a problem with systematic reviews as published, and the Gehanno article is merely a warning sign. There are serious concerns with the quality of published systematic reviews in the current research base, and equally large concerns with the ability of the peer-review process to identify quality systematic reviews. This is due, in my opinion, to weaknesses in the educational process for systematic review methodologies, and to gaps in the methodological expertise of the authors, editors, and reviewers of the scholarly journals. Those concerns are significant enough to generate doubt about the appropriateness of depending on systematic reviews for developing healthcare policies.

The Power of Post Publication Review, A Case Study

Pic of the day - Libraries

There are many discussions and examples of post-publication review as an alternative to the currently more common peer-review model. While this comes up fairly regularly in my Twitter stream, I don’t think I’ve done more than hint at it within the blogposts here. I’ve also been watching (but neglecting to mention here) the emergence of data journalists and data journalism as a field, or perhaps I should say co-emergence, since it seems to be tightly coupled with shifts in the field of science communication and communicating risk to the public. Obviously, these all tie in tightly with the ethical constructs of informed consent and shared decisionmaking in healthcare (the phrase from the 1980s), which is now more often called participatory medicine.

That is quite a lot of jargon stuffed into one small paragraph. I could stuff it equally densely with citations to sources on these topics, definitions, and debates. Instead, for today, I’d like to give a brief overview of a case I’ve been privileged to observe unfolding over the weekend. If you want to see it directly, you’ll have to join the email list where this took place.


Part One: Publication

Last week, a new article on hormone replacement therapy (HRT) was published in the British Medical Journal (BMJ).

Schierbeck LL, Rejnmark L, Tofteng CL, Stilgren L, Eiken P, Mosekilde L, Køber L, Jensen JEB. Effect of hormone replacement therapy on cardiovascular events in recently postmenopausal women: randomised trial. BMJ 2012;345:e6409 doi: http://dx.doi.org/10.1136/bmj.e6409 (Published 9 October 2012)

The article reported the outcomes from a clinical trial; more information is available in the trial registry.

Danish Osteoporosis Prevention Study http://clinicaltrials.gov/show/NCT00252408?link_type=CLINTRIALGOV&access_num=NCT00252408

Two days later, a message was posted to an evidence-based health care email list (EVIDENCE-BASED-HEALTH@jiscmail.ac.uk [EBH]), asking for discussion of the article.

The same day, a Rapid Response criticizing the article was published on the BMJ site.

Mascitelli L, Goldstein MR. The flawed beneficial effects of hormone replacement therapy. BMJ. http://www.bmj.com/content/345/bmj.e6409?tab=responses

The Rapid Response closed with this delightful witticism.

“If you torture numbers enough they will say anything you want.”


Part Two: Discussion

Meanwhile, on the EBH list, the conversation was going fast and furious. I’m not going to quote individuals, but I would like to collate an overview of the topics covered.

Methodology:
– blinding (it wasn’t)
– placebo-controlled (nope)
– 8% of eligible patients recruited
– sample size (small, compared to the Women’s Health Initiative (WHI) study)
– age confounding of participants

Ethics / Bias:
– Funding (pharma)
– Authors linked to pharma

Bibliography:
– incomplete?
– does it include the most important portions of the relevant evidence base?
– specifically lacking core references on the “age hypothesis”

Referees:
– Were they the right folk? (Yes, the list was assured by a BMJ editor)
– Did they read the article critically and review it thoroughly, including the bibliography?

Impact:
– implications for future practice
– placing this article appropriately in the context of the larger body of evidence
– implications for participatory medicine, informed consent, shared decisionmaking, and how to inform the public about risk for personal decisionmaking

Recommendations for future analysis:
– pool with similar data from other studies
– include in systematic review or meta-analysis
– strategic genomic analysis (NOTE: this was not available in 1993 when the study started)

Other:
– apparent publication delay (data collection first completed in 2003, then later followup in 2008, published in 2012)
– ghostwriting (specifically the history of it related to HRT)
– ‘System I’ thinking (gut feelings) vs ‘System II’ thinking (transparent methodological approach to decisionmaking)
– “science by sound-bite”

I’m not equipped to judge the article on any of these points. I did find it extremely interesting that the discussants included faculty and faculty emeritus from major universities both in the UK and the US, patient advocates, medical and health librarians, experts in evidence-based health care methodologies, and an editor of the journal which published the article.


Part Three: The Press

Of course, the press jumped all over this, in part because of the BMJ press release directing attention to this study.

HRT taken for 10 years significantly reduces risk of heart failure and heart attack. BMJ Press Releases, Wednesday, October 10, 2012 – 08:37. http://www.bmj.com/press-releases/2012/10/10/hrt-taken-10-years-significantly-reduces-risk-heart-failure-and-heart-atta

There are a lot of articles out there now in the popular press. Notice the type of language used.

BBC News: HRT reduces risk of heart attack, study suggests: http://www.bbc.co.uk/news/health-19886932

Guardian: HRT can cut heart attack risk, study shows: http://www.guardian.co.uk/lifeandstyle/2012/oct/09/hormone-replacement-therapy-heart-attack

Telegraph: HRT is safe and cuts heart deaths, ‘significant study’ finds: http://www.telegraph.co.uk/health/healthnews/9595745/HRT-is-safe-and-cuts-heart-deaths-significant-study-finds.html

Time: Heart Benefits from Hormone Replacement Therapy?: http://healthland.time.com/2012/10/10/heart-benefits-from-hormone-replacement-therapy/

US News: Health Buzz: HRT May Be More Than Safe, Study Says: http://health.usnews.com/health-news/articles/2012/10/10/health-buzz-hrt-may-be-more-than-safe-study-says

Kind of makes you want to run out and get pills, doesn’t it? This one is not from a major popular press venue, but it has some interesting aspects. Again, look at the language used in the headline.

MedPage: HRT Helps Heart with No Cancer, Clot Risks; By Charles Bankhead, Staff Writer, MedPage Today: http://www.medpagetoday.com/OBGYN/HRT/35236

This one is from a medical news service, and was published the same day as the original article, before even the BMJ press release. What is really interesting is that it says the article was reviewed prior to publication by an MD and medical faculty member.

Published: October 09, 2012
Reviewed by Robert Jasmer, MD; Associate Clinical Professor of Medicine, University of California, San Francisco and Dorothy Caputo, MA, BSN, RN, Nurse Planner

That’s awfully fast. I know they need to be fast, it’s news, but it seems to me that it would be hard to have enough time to read carefully or think about the implications and context; to have time to do much more than think, “Hmmm, BMJ, a good journal. This article says what the BMJ article said. OK.” I’m not saying that’s what the doc who signed off on it did, I’m just saying that the process and speed lend themselves to flaws.

The authors of the article and the BMJ editor both emphasized that this is an exceptionally rare dataset. Because of the WHI study, there is virtually no chance of generating this type of data again. That the Danish study findings run so contrary to the findings of the WHI study is shocking and noteworthy. Why? Is this significant enough to reopen the question of HRT risks? What does this mean for individual patients and clinicians attempting to make treatment plans and decisions?

Obviously, it is not as simple as the press would make it seem. Open access makes the article accessible, but without open post-publication peer review, the CONTEXT is not made accessible. Open access can only go so far in supporting personal decisionmaking.

Systematic Review Teams, Processes, Experiences

Recently I was privileged to speak with the students of Tiffany Veinot’s course in the School of Information on evidence-based practice and processes. It was an amazing and diverse group of students, with librarians and healthcare professionals from most (if not all) of the healthcare programs on campus! The students had insightful questions, and the conversation went on much longer than it should have, given the time allotted, but it was as richly rewarding for me as I hope it was for them. The approach this year focused more on case studies and storytelling — what is it really like? The slides can’t give you the whole sense of it, but at least they are a start.

Systematic Review Teams, Processes, Experiences

Presentation is also viewable as a Google Presentation.

Systematic Review Teams, Processes, Experiences https://docs.google.com/presentation/d/1NaaYxG15LqxxlahSI2L1pLu7Q8W870B66pox79prtQY/edit

What is Best Available Evidence?

Doctor Reading Articles

Every now and then I take questions I’ve answered in other venues, and copy the answers over here for posterity. This is one of those. While my job is now in Emerging Technologies, I have a long history working in evidence-based medicine and systematic review. I’m starting to feel like I’m choking with content in that area that I haven’t blogged, so I am going to start putting a few bits of it here from time to time.

Q:

I had a question about EBM. The definition of EBM is: “The best available research evidence means evidence from valid and practically relevant research, often from the basic sciences …”

So can we use just basic science to justify a treatment? Can anyone give an example please.

A:

The definition I prefer is this, from David Sackett’s seminal article.

“Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research. By individual clinical expertise we mean the proficiency and judgment that individual clinicians acquire through clinical experience and clinical practice.”
David L Sackett, William M C Rosenberg, J A Muir Gray, R Brian Haynes, W Scott Richardson. Evidence based medicine: what it is and what it isn’t. BMJ 1996;312:71. http://www.bmj.com/content/312/7023/71.full

Or, even simpler, from the same work, “It’s about integrating individual clinical expertise and the best external evidence.”

Their section on the concept of “best available evidence” goes as follows.

“By best available external clinical evidence we mean clinically relevant research, often from the basic sciences of medicine, but especially from patient centred clinical research into the accuracy and precision of diagnostic tests (including the clinical examination), the power of prognostic markers, and the efficacy and safety of therapeutic, rehabilitative, and preventive regimens.”

My EBM / EBHC / EBD mentor and co-teacher, Amid Ismail, often made a big deal about this. The concept refers to the evidence pyramid, which you may have seen illustrated throughout the EB literature. The version I use in teaching is on page two of this PDF:

CHAIN of TRUST: http://www-personal.umich.edu/~pfa/pro/courses/ChainOfTrustLoEVert2.pdf
Levels of Evidence, Updated for the Internet

Amid always emphasized the idea that best available evidence is ALWAYS tightly integrated with clinical judgment and the specific needs of the individual patient. So here is an example.

EXAMPLE

Let’s say there is a patient who is partially edentulous, struggling to eat, losing weight, becoming anorexic, and this is complicating other healthcare issues. The doctor wants to decide on the best way to make it easier for that patient to take in sufficient nutrition. The evidence base seems to suggest that dental implants are the best choice, and there are several systematic reviews in support of that concept. However, the patient has other conditions, of which the most important is rheumatoid arthritis (RA), and also has impaired wound healing. This makes the idea of surgery for dental implants much riskier given the patient’s personal situation. The RA also creates problems for the patient in using their hands, which might make it challenging to manage post-operative care, a very important aspect of successful dental implants.

There are no systematic reviews on the population of partially-edentulous patients with RA, because the topic is too narrow and specific. Indeed, there are no articles at all on this combination of factors! This is no surprise, and is very common with rare conditions as well as less common combinations of conditions, often found among the elderly or persons with chronic health concerns. In this case, the best strategy for searching (and the best evidence) is likely to come from searching the major factors individually or in combination, then trying to integrate and weigh the evidence found to make a decision for that patient.
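To make that component-searching strategy concrete, here is a minimal sketch using Biopython’s Entrez client against PubMed. I am assuming Biopython is available; the MeSH terms and free-text synonyms below are illustrative only, not a validated search strategy for this patient.

# Sketch of the "search the major factors individually or in combination" approach,
# run against PubMed via Biopython's Entrez utilities. Terms are illustrative only.
from Bio import Entrez

Entrez.email = "you@example.org"   # NCBI requests a contact address

components = {
    "partially edentulous": '"jaw, edentulous, partially"[MeSH Terms] OR "partially edentulous"',
    "rheumatoid arthritis": '"arthritis, rheumatoid"[MeSH Terms] OR "rheumatoid arthritis"',
    "dental implants": '"dental implants"[MeSH Terms] OR "dental implant*"',
}

def count_hits(term):
    """Run one PubMed search and return how many records match."""
    handle = Entrez.esearch(db="pubmed", term=term, retmax=0)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

# Each factor on its own, then the narrow combination that matches the patient.
for name, term in components.items():
    print(name, count_hits(term))

narrow = f'({components["partially edentulous"]}) AND ({components["rheumatoid arthritis"]})'
print("partially edentulous AND rheumatoid arthritis:", count_hits(narrow))

When the narrowest combination comes back nearly empty, the component searches are what you integrate and weigh by hand, exactly as described above.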

There is one article on the combination of partially edentulous and RA, based on a population of 6 patients.

Sato H, Fujii H, Takada H, Yamada N. The temporomandibular joint in rheumatoid arthritis–a comparative clinical and tomographic study pre- and post-prosthesis. J Oral Rehabil. 1990 Mar;17(2):165-72. http://www.ncbi.nlm.nih.gov/pubmed/2341957

That single article isn’t exactly on the topic we were searching, but it is as close to it as anything we can find. Instead of implants, it recommends pulling the teeth and placing a prosthesis. Normally, according to the pyramid of evidence, this would not be evidence that ranks very highly. In this situation, it is the BEST evidence available.

The point is that for some questions or in some situations, the best evidence may not be very good; it might not be a systematic review, or an RCT, or even a case-control study. Sometimes it is a case report, or clinical experience. Sometimes it is animal studies, which we know don’t transfer over well to humans. A great deal of the research on dental implants comes from animal studies, so that might have happened with this question if the complicating condition had been diabetes or something other than RA. But a close match from an animal study, compared with no evidence in humans, means that the animal study is, for that question, the best evidence. For conditions with a heavily immunological or microbiological aspect, it is not unusual for the emerging research to be based in labs, not yet tested in animals, much less in people. Sometimes, in exceptional situations, especially if standard treatments have already failed, the best available evidence may even be personal reports. It is the clinician’s responsibility to examine the patient, gather the patient’s relevant history, review the evidence, select the best evidence, and integrate all of these in making a recommendation for a given patient. The point is not that the evidence is always excellent, but that it is, literally, the best available for that question and that patient.

Medlib’s Blog Carnival 2.1: Free Speech in Health Information, and More

WARNING: After this entry was originally posted, it came to my attention that I had not received all of the early entries for this round of the Carnival. The following post was edited to reflect these updates.


In the context of the looming deadline for comments on the FDA’s development of social media guidelines, the Medlib’s Blog Carnival theme this month was free speech in health information. Briefly, the FDA has a long history of establishing and managing guidelines to prevent the unethical publication of inaccurate or misleading health information by persons or corporate entities promoting the use or sale of drugs or medical devices. The flip side of this is encouraging informed decisionmaking based on high-quality, unbiased health information. There were few submissions this month, but those received were sound contributions looking at various aspects of this complicated issue.

Laika provided not one, but TWO excellent posts. The first one, “NOT ONE RCT on Swine Flu or H1N1?! – Outrageous!,” discusses the issue of popular news and hype as opinion influencers in comparison with actual research. Taking H1N1 as an example, she begins with a Twitter post and popular press, then discusses when it is appropriate to expect what kind of evidence in support of a question, simple tips for finding better quality evidence, as well as specific scientific and clinical contextual issues that beautifully illustrate not just issues of scientific research and methodology, access to information and information quality assessment, but also quite a bit of useful information about H1N1 itself! Laika provides a strong voice for clear reason and balanced information, but at the same time respects the importance of scientific dialog and communication in shaping the evolution of what we know about any given topic.

Laika’s MedLibLog: NOT ONE RCT on Swine Flu or H1N1?! – Outrageous!. http://laikaspoetnik.wordpress.com/2009/12/16/not-one-rct-on-swine-flu-or-h1n1-outrageous/

In her second post for this Carnival, Laika again zeroes in on the issue of dialog in science, and the broader issue of respect. This is true not just for dialog between scientists, as in the example she discusses, but even more so among the public and news media. The life lessons learned by Laika in her tale of disrespect and influence among scientists are ones we should all keep in mind when observing disagreements about science. I wanted to cheer when I read her excellent, methodical review of the limits of evidence-based medicine, and of when one should or should not apply its findings to a given situation. While EBM is a very useful tool, I also have encountered worrisome instances in which a useful, low-risk, low-cost intervention is not used because there are not yet sufficient RCTs or because it is being researched for XYZ use but hasn’t yet been approved for it by the FDA. When EBM becomes a barrier to good clinical care, we have a different problem. I particularly liked the example she gave of a systematic review finding insufficient evidence to support the use of parachutes when jumping from a plane, and the selection of quotations from comments. My favorite, succinct and clear, was this line from a clinician at my institution: “RCTs aren’t holy writ, they’re simply a tool for filtering out our natural human biases in judgment and causal attribution. Whether it’s necessary to use that tool depends upon the likelihood of such bias occurring.” Read, read, and read this post again.

Laika’s MedLibLog: #NotSoFunny – Ridiculing RCTs and EBM. http://laikaspoetnik.wordpress.com/2010/02/01/notsofunny-ridiculing-rcts-and-ebm/

Dr. Shock’s post about BioMedSearch focused on “free” as in free access to quality healthcare information. A related concept in his post was the barrier that traditional search methods pose to the discovery of quality health information, and whether it is time for a change. While you are visiting his blog, you might want to take a look at another recent post on “The Hidden and Informal Curriculum During Medical Education,” which talks about overt and covert concepts and communications in medical education. While the specific example was about narratives in a secured online space, the concepts are perhaps even more important when thinking about healthcare communications in unsecured social media spaces.

Dr. Shock, A Neurostimulating Blog: BioMedical Search on BioMedSearch: http://www.shockmd.com/2009/11/28/biomedical-search-on-biomedsearch/

In an oblique connection, Novoseek, the innovative biomedical web search engine covering Medline, grants, and online publications, offered a post on their new feature allowing searchers to limit by publication type. While this doesn’t directly connect to free speech (rather the reverse), it does directly connect to the quality of health information and to control through peer review, both of which are implied contextual issues. Being able to use a health-specific search tool automatically focuses results on a narrower and higher quality subset of the information available on the web. Being able to limit by publication type enables the searcher to slice the search even more finely, focusing on just the highest quality health information available.

Novoseek: Tip #1 to improve searches in novoseek – Filter results by publication type. http://blog.novoseek.com/index.php/resources/tip-1-to-improve-searches-in-novoseek-filter-results-by-publication-type.html/

PS. While you are taking a look at that blogpost, you might want to also take a look at an earlier post from Novoseek called The importance of context in text disambiguation. It is a kind of geeky, technical post, but the fundamental concept is central to how humans (as well as computers) identify quality when they see it.