Tag Archives: evidence-based

ECigs: ETech Meets Public Health Again (Part Two)

[For information on why I’ve been missing-in-action here, please see this post at my personal blog: http://mhistoire.wordpress.com/2013/05/25/breathing-in-memory-of-rose-ann-broussard-cooper-anderson/ I expect to be back in business next week.]


So, in Part One, the eCig conversation was largely framed through health and legislative perspectives, with concerns hooked substantially on potential use by minors and young adults. In Part Two, I want to dig a little deeper into some of these issues and spin off in new directions: workplace use, more about minors, and issues of DIY and unintended uses of e-cigs.

I keep saying how complicated the issue of electronic cigarettes is. This tweet illustrates part of that.

The American Cancer Society (ACS) is one of the organizations most strongly advising caution with respect to e-cigs, and perceived as “the opposition” by the e-cig and vaping communities. Obviously, given that at least one person at their event was using an e-cig, this is not a topic with complete consensus, but it is also close enough to consensus to raise eyebrows and warrant comment. The issues are further complicated by the ACS accepting donations from e-cig manufacturers.

Similarly, despite the prolific and prominent vitriol from the vaping community regarding any suggestion that e-cigs warrant further research or concern or caution, there are elements of that community willing to work with the government and professional medical organizations on exactly those areas.

Growing Electronic Cigarette Manufacturer “Welcomes” FDA’s “Reasonable Regulation” Of Category:
http://www.prnewswire.com/news-releases/growing-electronic-cigarette-manufacturer-welcomes-fdas-reasonable-regulation-of-category-204121851.html

Given that e-cigs are an emerging technological alternative to smoking, and that smoking in public spaces and the workplace has been a major issue over the past few decades, it's no real surprise that guidelines and suggestions are being created to advise employers about best practices for managing e-cigs in the workplace. My own campus, the University of Michigan, only recently went smoke-free (July 1, 2011), and several of my friends are still struggling to make the switch, so I expect this is an issue worthy of local attention.

What employers need to know about electronic cigarettes? Fact Sheet, September 2011. (pdf)
http://www.businessgrouphealth.org/pub/f311fb03-2354-d714-51a9-0b67bb588666
Main points:
Quick Facts About E-Cigarettes
• Not an FDA-approved tobacco cessation device.
• Contain nicotine and detectable levels of known carcinogens and toxic chemicals.
• Look very similar to regular cigarettes (especially from a distance).
• Manufactured using inconsistent or non-existent quality control processes.
Actions for Employers
• Determine whether the use of e-cigarettes is allowed in their jurisdictions, including in the workplace.
• Understand whether unions, works councils, or other laws can raise barriers to implementing workplace policies regulating e-cigarettes.
• Stay informed on any new laws and emerging scientific evidence regarding e-cigarettes.

Please note the date on those tips, and that they haven’t been updated, although the conversation is far from over!

Sullum, Jacob. Boston Bans E-Cigarettes in Workplaces, Just Because. Dec. 2, 2011 http://reason.com/blog/2011/12/02/boston-bans-e-cigarettes-in-workplaces-f

American Society for Quality: Should e-Cigarettes Be Allowed in the Workplace? April 15, 2013 http://asq.org/qualitynews/qnt/execute/displaySetup?newsID=15801

One marketing firm presented a sort of case study of why one life insurance firm in Britain banned e-cigs at work, arguing against each of the firm's points.


Should electric cigarettes be allowed in the workplace http://www.slideshare.net/jackwillis2005/ppt-should-e-cigarettes-be-allowed-in-the-workplace

Here are a couple of links with pro and con information about the Standard Life policy decision. A major point seems to be the psychology of e-cig use: because they resemble real cigarettes, they send the message that smoking is a good thing, or at least permissible. I am not aware of any research into this assumption, although there is substantial evidence on the related concept of candy cigarettes.

The Scotsman: Standard Life bans employees from smoking electronic cigarettes at work (2012): http://www.scotsman.com/the-scotsman/health/standard-life-bans-employees-from-smoking-electronic-cigarettes-at-work-1-2124568

Daily Mail: Safety fears over electronic cigarettes because they are ‘unclean’ and unregulated: http://www.dailymail.co.uk/health/article-2129550/Safety-fears-electronic-cigarettes-unclean-unregulated.html

And a couple of pieces about the psychological impact of candy cigarettes. Consider, though, that the research on candy cigarettes looks explicitly at the impact on children, not adults.

Klein JD, Forehand B, Oliveri J, Patterson CJ, Kupersmidt JB, Strecher V. Candy cigarettes: do they encourage children’s smoking? Pediatrics. 1992 Jan;89(1):27-31. http://www.ncbi.nlm.nih.gov/pubmed/1728016

Klein JD, Clair SS. Do candy cigarettes encourage young people to smoke? BMJ. 2000 Aug 5;321(7257):362-5. http://www.ncbi.nlm.nih.gov/pubmed/10926600

Klein JD, Thomas RK, Sutter EJ. History of childhood candy cigarette use is associated with tobacco smoking by adults. Prev Med. 2007 Jul;45(1):26-30. Epub 2007 Apr 24. http://www.ncbi.nlm.nih.gov/pubmed/17532370

Back to the American Cancer Society, and the issue of minors having access to e-cigs.

Anti-THR Lies and related topics: Who leads the fight against banning e-cigarette sales to minors? Guess again: it is the American Cancer Society: http://antithrlies.com/2013/04/25/who-leads-the-fight-against-banning-e-cigarette-sales-to-minors/

As with everything surrounding the e-cig controversies, it’s never straightforward, and there are always multiple views with value. This tweet was in response to my Part One blogpost on e-cigs.

The links highlight the work of Dr. Michael Siegel, Professor, Department of Community Health Sciences, Boston University School of Public Health.

Dr. Siegel:
“I do not question the need to balance the benefits of enhancing smoking cessation among adult smokers with the costs of youth beginning to use this nicotine-containing product. But show me at least one youth using the product before you call for a ban. This recommendation makes a mockery out of the idea of science-based or evidence-based policy making in tobacco control.”
The Rest of the Story: Tobacco News Analysis and Commentary: American Legacy Foundation Sounds Alarm About Electronic Cigarette Use Among Young People, Calling for a Ban on Flavored E-Cigarettes, But Fails to Document a Single Youth Using These Products http://tobaccoanalysis.blogspot.com/2013/04/american-legacy-foundation-sounds-alarm.html

In response to:

“While most candy-flavors – such as chocolate, vanilla and peach – were banned in 2009 from cigarettes, flavored tobacco products like cigars, hookah, snus and e-cigarettes persist in more than 45 flavors and are still legally on the market,” said Andrea Villanti, PhD, MPH, CHES, Research Investigator for Legacy. “These products can be just as appealing to young people as flavored cigarettes, offering a product appearing to be more like candy to those most at-risk of becoming lifelong tobacco users,” she added.
FDA Should Extend Ban on Flavors to Other Products to Protect Young People, April 3, 2013 http://legacyforhealth.org/newsroom/press-releases/flavored-tobacco-continues-to-play-a-role-in-tobacco-use-among-young-adults

“Overall, 18.5% of tobacco users report using flavored products, and dual use of menthol and flavored product use ranged from 1% (nicotine products) to 72% (chewing tobacco). In a multivariable model controlling for menthol use, younger adults were more likely to use flavored tobacco products (OR=1.89, 95% CI=1.14, 3.11), and those with a high school education had decreased use of flavored products (OR=0.56; 95% CI=0.32, 0.97). Differences in use may be due to the continued targeted advertising of flavored products to young adults and minorities. Those most likely to use flavored products are also those most at risk of developing established tobacco-use patterns that persist through their lifetime.”
Villanti AC, Richardson A, Vallone DM, Rath JM. Flavored Tobacco Product Use Among U.S. Young Adults. American Journal of Preventive Medicine 44(4):388-391, April 2013 http://www.ajpmonline.org/article/S0749-3797(12)00939-7/abstract
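
For readers who want to unpack statistics like "OR=1.89, 95% CI=1.14, 3.11" above, here is a minimal sketch of how an odds ratio and its 95% confidence interval are computed from a 2x2 table, using the standard log-odds method. The counts are invented for illustration; they are not data from the Villanti study.

    import math

    # Hypothetical 2x2 table (invented counts, not from the Villanti study):
    #                      used flavored    did not
    #   younger adults           45            155
    #   older adults             30            270
    a, b = 45, 155   # younger adults: flavored users, non-users
    c, d = 30, 270   # older adults: flavored users, non-users

    odds_ratio = (a * d) / (b * c)                  # cross-product ratio
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)    # standard error of ln(OR)
    low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    print(f"OR = {odds_ratio:.2f}, 95% CI = ({low:.2f}, {high:.2f})")

If the confidence interval excludes 1.0, as it does in the quoted findings, the association is conventionally read as statistically significant.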

Dr. Siegel:
“But I don’t think most anti-smoking groups or advocates care about the actual evidence. They’ve already made up their minds. Vaping looks too much like smoking. So forget about the fact that not a single nonsmoking youth could be found who has even tried the product. The advocates must continue to follow the party line and warn about the danger of electronic cigarettes as a gateway to nicotine addiction. Never mind that the gateway just doesn’t exist.”
The Rest of the Story: Tobacco News Analysis and Commentary: New Study on Electronic Cigarette Use Among Youth Fails to Find a Single Nonsmoking Youth Who Has Even Tried an Electronic Cigarette: http://tobaccoanalysis.blogspot.com/2013/01/new-study-on-electronic-cigarette-use.html

In response to:

“E-cigarettes are battery-powered devices that look like cigarettes and deliver a nicotine vapor to the user. They are widely advertised as technologically advanced and healthier alternatives to tobacco cigarettes using youth-relevant appeals such as celebrity endorsements, trendy/fashionable imagery, and fruit, candy, and alcohol flavors [2], [3]. E-cigarettes are widely available online and in shopping mall kiosks, which may result in a disproportionate reach to teens, who spend much of their free time online and in shopping malls.”
Grana, Rachel A. Electronic Cigarettes: A New Nicotine Gateway? Journal of Adolescent Health 52(2):135-136, February 2013.
http://www.jahonline.org/article/S1054-139X(12)00736-7/fulltext
[NOTE: Check out the bibliography]

“Only two participants (< 1%) had previously tried e-cigarettes. Among those who had not tried e-cigarettes, most (67%) had heard of them. Awareness was higher among older and non-Hispanic adolescents. Nearly 1 in 5 (18%) participants were willing to try either a plain or flavored e-cigarette, but willingness to try plain versus flavored varieties did not differ. Smokers were more willing to try any e-cigarette than nonsmokers (74% vs. 13%; OR 10.25, 95% CI 2.88, 36.46). Nonsmokers who had more negative beliefs about the typical smoker were less willing to try e-cigarettes (OR .58, 95% CI .43, .79)."
Pepper JK , Reiter PL , McRee A-L , Cameron LD , Gilkey MB , Brewer NT . Adolescent males' awareness of and willingness to try electronic cigarettes. J Adolesc Health . 2013;52:144–150. http://www.jahonline.org/article/S1054-139X(12)00409-0/abstract

Wow. All smart people, working in or from the peer-reviewed literature, but with varying interpretations. For more information about flavors in e-cigs, check out these e-cig review and information sites.

Vapor Rater: http://www.vaporrater.com
Vapour Trails: http://www.vapourtrails.tv

The first thing I saw that actually sparked a moment of interest in e-cigs for me personally was the idea that you can make your own at home. I’m not a smoker, but I’m also not much of a drinker. I am, however, addicted to canning, pickling, and otherwise preserving produce and home goods. I go so far as to even make my own fruit shrubs as beverage mixes for my friends who do drink, even though I don’t partake. If you could convince me that e-cigs were safe and healthy and all that, you could tempt me to want to learn how to mix the vaping liquid for my friends, even if I don’t use it myself.

RTS Vapes: Lab Safety when Mixing Liquid Nicotine: http://rtsvapes.blogspot.com/2012/09/lab-safety-when-mixing-liquid-nicotine.html

A brief detour down memory lane. When I was in high school, I vividly remember a change in what and who was "cool" between sophomore and junior years. During freshman and sophomore years, the cool kids, the influencers, were those who snuck off into corners to make out and have sex. In junior and senior years it was no longer sex but drugs that was cool, and a lot of the smartest kids in school adopted drugs, applying intellect, technology, and creativity to explore this "counter-culture" area. In chemistry class, one of the top students used the chem lab to gold-plate a baby marijuana leaf onto a pickle fork. A pair of National Merit Scholars broke into the high school's academic records to do a statistical analysis comparing the IQs of known street drug users with those of street drug 'virgins' among the student population, with the drug users 'proven' to have the highest IQs. There was a perception that drugs weren't just cool, but smart. I don't know, but it would not surprise me to find that high school students today are also inquisitive and creative in exploring new technologies that allow them to buck the status quo. It is with that in mind that I read these next tweets.


Portable Vaporizer – Marijuana Pot Herbal Portable Vaporizers http://www.youtube.com/watch?v=vs6AjEXcOok

For the record, I am a supporter of the legalization of marijuana, and it makes sense, if e-cigs are safer, to want to extend those health benefits to people who smoke anything recreationally. I'm not opposed to e-cigs, either, but I do think there are benefits to information, education, and appropriate legislation. There are really two main questions. One, this is a new technology, and we don't know that much about it. E-cigs only came to market around 2004, and there simply hasn't been time to fully research the technical, physical, and psychological health impacts of use. That is a problem for most new and emerging technologies, and we don't have a solution for it at this point. The other main question is really about minors. So, the argument from Dr. Siegel is that youth don't use e-cigs. Are you sure?

Science Online and the Role of Scepticism #SCIO13 #MedSkep

Last week, Chris Bullin did a lovely post on the tweets at the Science Online 2013 Conference (#SCIO13). I hope it intrigued some of you enough to go look at more of the tweets.

Science Online 2013: http://scienceonline.com/scienceonline2013/

As I was tracking the Twitter stream for the conference, I noticed several hashtags being used for specific sessions. They were all wonderful, but I thought #medskep was perhaps the most important one for information professionals, those curating & sharing science information, science journalists, science communication experts, and those trying to persuade and engage science professionals in social media.

Science Online 2013: Session 5E: How to make sure you’re being appropriately skeptical when covering scientific and medical studies (#MedSkep): http://scio13.wikispaces.com/Session+5E

Faculty are often engaged in trying to teach critical appraisal within their domain, to encourage students to select and cite more authoritative research in their academic products (i.e., homework and research). Librarians share in this effort, teaching strategies for accessing resources of high quality, tips for how to identify high quality, and general skills for critical appraisal and critical thinking. This is a frequent topic of conversation in the profession at large, as well as at our own departmental meetings. We talk about how involved the librarian should be in teaching these skills, how to partner with faculty, how to improve our effectiveness, how to improve both our own skills and our credibility in this area among students and faculty, and much more. Research is done and articles are written, all about how to help students, journalists, and the public better understand the strengths and limitations of science research. Librarians are developing tools and resources to help teach and understand these skills, often in partnership with scientists; at SCIO13 several scientists and faculty were sharing these, and other scientists and librarians were resharing them!

Sometimes there is no substitute for taking the words straight from the source. This is why the #MedSkep conversation was so powerful: real scientists connecting with experts in science and critical appraisal (Ivan Oransky and Tara C. Smith), talking freely and honestly in plain language about why scepticism is essential in science literacy, and sharing tips for communicating with special audiences (hint, hint, journalists?) about science.

So here, I want to share some of my favorite tweets from the conversation. I've organized them into three sections: Thoughts, Tools & Resources, and Debates. FYI, the debates were almost entirely about medicine and healthcare research and problems with peer review; there were some pointed comments about systematic reviews, and overall, they were … intense.

THOUGHTS

TOOLS & RESOURCES

This is so important, I’m going to BRIEFLY distill the key points, but do please go read the original, in full.

#1 Identify costs, both economic & social or personal.
#2 Identify benefits.
#3 Identify the harms.
#4 How good is the evidence?
#5 Avoid “disease-mongering.”
#6 Use independent sources, and identify conflicts of interest.
#7 Compare the new way to existing best (or standard) practice.
#8 Is this new way actually available to the public? How available?
#9 Is this actually novel? As in unique & innovative.
#10 Don’t just crib from a press release.

DEBATES

1) Is Medical Research Really Science?

2) Problems with Peer Review

WANT MORE?

Here is a blogpost on the session from one of the organizers.

Aetiology: Skeptical science and medical reporting (#Scio13 wrap-up) http://scienceblogs.com/aetiology/2013/02/04/skeptical-science-and-medical-reporting-scio13-wrap-up/

And a lovely Storify collecting many more of the tweets. Well worth reading through in its entirety.

#medskep session at #Scio13 http://storify.com/aetiology/medskep-session-at-scio13

Evidence-based? What’s the GRADE?

GRADE Working Group

Personally, I have a love/hate relationship with healthcare’s dependence on grading systems, kitemarks, seals of approval, etcetera, especially in the realm of websites and information for patients or general health literacy. It is rather a different matter when it comes to information for clinicians and healthcare providers (HCPs). There, we typically depend on the peer-review process to give clinicians confidence in the information on which they base their clinical decisions for patient care. Retraction Watch and others have made it clear that simply being published is no longer (if it ever was) an assurance of quality and dependability of healthcare information. As long as I’ve been working as a medical librarian, I’ve been hearing from med school faculty that their students don’t do the best job of critically appraising the medical literature. I suspect this is something that medical faculty have said for many generations, and that it is nothing new. Still, it is welcome to find tools and training to help improve awareness of the possible weaknesses of the literature and how to assess quality.

During some recent excellent and thought-provoking conversations on the Evidence-Based Health list, GRADE was brought up yet again by Per Olav Vandvik. There have been several conversations about GRADE in this group, but I thought perhaps some of the readers of this blog might not be aware of it yet. Here's a brief intro.

GRADE stands for “Grading of Recommendations Assessment, Development and Evaluation.” GRADE Working Group is the managing organization. I like their back history: “The Grading of Recommendations Assessment, Development and Evaluation (short GRADE) Working Group began in the year 2000 as an informal collaboration of people with an interest in addressing the shortcomings of present grading systems in health care.”
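
As a rough illustration of the core idea (a toy sketch of my own, not the Working Group's official algorithm): evidence from randomized trials starts at high quality and observational evidence starts at low, then reviewers rate down for risk of bias, inconsistency, indirectness, imprecision, or publication bias, and rate up for a large effect, a dose-response gradient, or plausible confounding that strengthens confidence.

    # Toy sketch of the core GRADE logic; simplified, not the official criteria.
    LEVELS = ["very low", "low", "moderate", "high"]

    def grade_quality(randomized: bool, downgrades: int, upgrades: int) -> str:
        """Start high for randomized trials, low for observational studies, then adjust.

        downgrades: levels subtracted for risk of bias, inconsistency,
                    indirectness, imprecision, or publication bias.
        upgrades:   levels added for a large effect, a dose-response gradient,
                    or plausible confounding that strengthens confidence.
        """
        start = 3 if randomized else 1                     # index into LEVELS
        score = max(0, min(3, start - downgrades + upgrades))
        return LEVELS[score]

    print(grade_quality(randomized=True, downgrades=2, upgrades=0))   # "low"
    print(grade_quality(randomized=False, downgrades=0, upgrades=1))  # "moderate"

The real framework involves structured judgments at each step; see the resources below for the actual guidance.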

GRADE Working Group: http://www.gradeworkinggroup.org/index.htm

Playlist of Presentations on GRADE by the American Thoracic Society:
http://www.youtube.com/playlist?list=PLv3ASQRBkH-NMKAbMYoDIsWuGMF8fUrVL

Free software to support the GRADE process.

Cochrane: RevMan: GRADEpro: http://ims.cochrane.org/revman/gradepro

UpToDate GRADE Tutorial: http://www.uptodate.com/home/grading-tutorial

20-part article series in Journal of Clinical Epidemiology explaining GRADE. These articles focus on:
– Rating the quality of evidence
– Summarizing the evidence
– Diagnostic tests
– Making recommendations
– GRADE and observational studies

GRADE guidelines – best practices using the GRADE framework: http://www.gradeworkinggroup.org/publications/JCE_series.htm

The New York Academy of Medicine is holding a training session on GRADE this coming August. You can find more information here.

Teaching Evidence Assimilation for Collaborative Healthcare: http://www.nyam.org/fellows-members/ebhc/
PDF on GRADE section of the course: http://www.nyam.org/fellows-members/docs/2013-More-Information-on-Level-2-GRADE.pdf

Hashtags of the Week (HOTW): Comparative Effectiveness Research (Week of January 21, 2013)

First posted at THL Blog http://wp.me/p1v84h-125


What is Comparative Effectiveness Research?
What is Comparative Effectiveness Research?: http://effectivehealthcare.ahrq.gov/index.cfm/what-is-comparative-effectiveness-research1/

I’ve been tracking the Comparative Effectiveness Research hashtag in Twitter for a while. You will have seen tweets from that stream here earlier in this HOTW series of posts. The hashtag is #CER, by the way, but unfortunately it is used for many other topics as well — Carbon Emissions Reduction, Corporate Entrepreneurship Responsibility, food conversations in Turkish, and some sort of technology gadget topic that I haven’t figured out. Ah.

Luckily, the #CER tag, when used in the health context, has a number of other hashtags with which it is often associated. #eGEMS, #PCOR, #PCORI, and #QI are the most commonly used companion hashtags, but there are others as well. (A small filtering sketch follows the list below.)

#eGEMS = Generating Evidence and Methods to improve patient outcomes

#PCOR = Patient-Centered Outcomes Research

#PCORI = Patient-Centered Outcomes Research Institute

#QI = Quality Improvement (also “Quite Interesting”)
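
To make that disambiguation concrete, here is a minimal sketch of the kind of filtering involved: keep a #CER tweet only when a health-related companion hashtag co-occurs. The tweet strings are invented examples, and this is plain string matching rather than a real Twitter client.

    # Keep a #CER tweet only if a health companion hashtag appears alongside it.
    HEALTH_COMPANIONS = {"#egems", "#pcor", "#pcori", "#qi"}

    tweets = [
        "New #CER study on patient outcomes #PCOR",
        "Cutting home energy costs with #CER",        # carbon emissions reduction, not health
        "#CER webinar next Monday via #PCORI",
    ]

    def is_health_cer(text: str) -> bool:
        tags = {word.lower() for word in text.split() if word.startswith("#")}
        return "#cer" in tags and bool(tags & HEALTH_COMPANIONS)

    for tweet in tweets:
        print(is_health_cer(tweet), "|", tweet)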

One of the things that makes it easier to track the health side of the #CER tag is that the CER community has volunteers (the National Pharmaceutical Council) who find the stream so valuable that they curate, collate, and archive the most relevant tweets each week, along with brief comments on the high points.

That JAMA article they mentioned? Was actually a 2009 classic from NEJM.

But there was a JAMA article in the collection from the previous week. And an impressive one, too!

Yesterday, our team here at the Taubman Health Sciences Library had a journal club to talk about a classic article on #CER.

That conversation had us looking beyond the issues of CER as a research methodology, and into the foundation of why and how the methodology developed, the purposes it is designed to serve, when and why to choose CER over another methodology such as systematic reviews, the implications of CER for the Evidence-Based Healthcare movement, the strengths and weaknesses of CER compared to other methodologies, and much more. It was a valuable and interesting hour well spent.

Of course, we aren’t the only ones asking these types of questions about #CER — The FDA, the New York Times, among others.

Thus, you see me inspired today to dig into the #CER stream and explore more about what is there. One very timely notice is the webinar on Monday, next week.

And an upcoming conference at UCSF on using CER to make healthcare more relevant.

One of my colleagues also mentioned an upcoming campus event focusing on chronic diseases, so this was interesting and relevant to that.

The #CER stream seems to contain a regular number of high quality research articles. Definitely worth exploring.

What’s Wrong With Google Scholar for “Systematic” Reviews

Systematic!!!

Monday I read the already infamous article published January 9th which concludes that Google Scholar is, basically, good enough to be used for systematic reviews without searching any other databases.

Conclusion
The coverage of GS for the studies included in the systematic reviews is 100%. If the authors of the 29 systematic reviews had used only GS, no reference would have been missed. With some improvement in the research options, to increase its precision, GS could become the leading bibliographic database in medicine and could be used alone for systematic reviews.

Gehanno JF, Rollin L, Darmoni S. Is the coverage of google scholar enough to be used alone for systematic reviews. BMC Med Inform Decis Mak. 2013 Jan 9;13(1):7. http://www.biomedcentral.com/1472-6947/13/7/abstract

Screen Shot: "Is the coverage of google scholar enough ..."

Leading the argument from the library perspective is Dean Giustini, who has already commented on the problems of:
– precision
– generalizability
– reproducibility

Giustini D. Is Google scholar enough for SR searching? No. http://blogs.ubc.ca/dean/2013/01/is-google-scholar-enough-for-sr-searching-no/

Giustini D. More on using Google Scholar for the systematic review. http://blogs.ubc.ca/dean/2013/01/more-on-using-google-scholar-for-the-systematic-review/

While these have already been touched upon, what I want to do right now is bring up what distresses me most about this article, which is the same thing that worries me so much about the overall systematic review literature.

Problem One: Google.

Google Search

First and foremost, “systematic review” means that the methods of the review are SYSTEMATIC and unbiased, validated and replicable, from the question, through the search and delivery of the dataset, to the review and analysis of the data, to reporting the findings.

Let’s take just a moment with this statement. Replicable means that if two different research teams do exactly the same thing, they get the same results. Please note that Google is famed for constantly tweaking its algorithms. SEOMOZ tracks the history of changes and updates to the Google search algorithm. Back in the old days, Google would update the algorithm once a month, at the “dark of the moon”, and the changes would then propagate through the networks. Now they update more often, with no set schedule. It happens when they choose, with at least 23 major updates during 2012, and 500-600 minor ones. That averages out to more than one change a day. That means you can do exactly the same search later in the same day, and get different results.

Google Algorithm Change History: http://www.seomoz.org/google-algorithm-change
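
The back-of-the-envelope arithmetic behind that rate, using the 2012 figures cited above:

    # Rough arithmetic on algorithm-change frequency (2012 figures from the text).
    major = 23
    minor_low, minor_high = 500, 600
    print((major + minor_low) / 365)    # ~1.4 changes per day
    print((major + minor_high) / 365)   # ~1.7 changes per day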

That is not the only thing that makes Google search results unable to be replicated. Google personalizes the search experience. That means that when you do a search for a topic, it shows you what it thinks you want to see, based on the sort of links you’ve clicked on in the past, and your browsing history. If you haven’t already seen the Eli Pariser video on filter bubbles and their dangers, now is a good time to take a look at it.


TED: Eli Pariser: Beware Online Filter Bubbles. http://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles.html

If you are using standard Google, it will give you different results than it would give to your kid sitting on the couch across the room. This is usually a good thing. It is NOT a good thing if you are trying to use the search results to create a standardized dataset as part of a scientific study.

People often think this is not a big problem. All you have to do is log out of any Google products. Then it goes back to the generic search, and you get the same things anyone else would get. Right? Actually, no. Even if you switch to a new computer, in a different office or building, and don't log in at all, Google is really pretty good at guessing who you are based on the topics you search and the links you choose. Whether or not it guesses correctly doesn't matter for my concerns; the problem is that it is customizing results AT ALL. If there is any customization going on, then that is a tool that is inappropriate for a systematic review.

Now, Google does provide a way to opt-out of the customization. You have to know it is possible, and you have to do something extra to turn it off, but it is possible and isn’t hard.

Has Google Popped the Filter Bubble?: http://www.wired.com/business/2012/01/google-filter-bubble/

Now, the most important question is whether it actually turns off the filter bubble. Uh, um, well, … No. It doesn't. Even if you turn off personalization, go to a new location, and use a different computer, Google still knows where that computer is sitting and makes guesses based on where you are. That Wired article about Google getting rid of the filter bubble was dated January 2012. I participated in a study done by DuckDuckGo on September 6th, reported in November on their blog. Each participant ran the same search strategies at the same time, twice, once logged in and once logged out. They grabbed screenshots of the first screen of search results and emailed them to the research team. The searchers were from many different places around the world. Did they get different results? Oh, you betcha.

Magic keywords on Google and the consequences of tailoring results: http://www.gabrielweinberg.com/blog/2012/11/magic-keywords-on-google-and-the-consequences-of-search-tailoring-results.html

Now try to imagine the sort of challenge we face in the world of systematic review searching. Someone has already published a systematic review. You want to do a followup study. You want to use their search strategy. You need to test that you are using it right, so you limit the results to the same time period they searched, to see if you get the same numbers. I don't know about you, but I am busting with laughter trying to imagine a search in Google, saying, "No, I just want the part of the Google results that was available at this particular moment in time, five years, three months, and ten days ago, if I was sitting in Oklahoma City." Yeah, right.

Take home message? Google cannot be used for a systematic review. Period. And not just because you get 16,000 results instead of 3,000 (the precision and recall question), or because Google casts a wider net than the curated scholarly databases that libraries pay for and thus you end up with poorer quality results (also impacting sensitivity and specificity), but purely on methodological grounds.
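
To make the precision and recall question concrete, here is a minimal sketch with invented numbers (not from the Gehanno study). Precision is the fraction of retrieved results that are relevant; recall is the fraction of the relevant literature that the search actually retrieved.

    # Invented numbers for illustration only, not from the Gehanno study.
    retrieved = 16000          # results returned by the search
    relevant_retrieved = 120   # retrieved results that are actually relevant
    relevant_total = 150       # relevant studies that exist in the literature

    precision = relevant_retrieved / retrieved      # fraction of retrieved that is relevant
    recall = relevant_retrieved / relevant_total    # fraction of the relevant that was found

    print(f"precision = {precision:.4f}")   # 0.0075 -- screen ~133 hits per keeper
    print(f"recall    = {recall:.2f}")      # 0.80   -- 20% of the evidence base was missed

A search can have perfect recall and still be unusable in practice if the precision is low enough that reviewers drown in irrelevant hits.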

Problem Two: Process.

Systematic Reviews and Clinical Practice Guidelines to Improve Clinical Decision Making

First and foremost, “systematic review” means that the methods of the review are SYSTEMATIC and unbiased, validated and replicable, from the question, through the search and delivery of the dataset, to the review and analysis of the data, to reporting the findings.

Doing a systematic review is supposed to be SYSTEMATIC. Not just systematic for the data analysis (a subset of which is the focus of the Gehanno Google Scholar article), but systematic for the data generation, the data collection, the data management, defining the question, analysing the data, establishing consensus for the analysis, and reporting the findings. It is systematic ALL THE WAY THROUGH THE WHOLE PROCESS of doing a real systematic review. The point of the methodology is to make sure the review is unbiased (to the best of our ability, despite being done by humans), and replicable. If both of those are true, someone else could do the same study, following your methodology, and get the same results. We all know that one of the real challenges in science is replicating results. That doesn't mean it is OK to be sloppy.

The Gehanno article tries to test a tiny fraction of the SR process – whether you can find the results. But they searched backwards from the way such a search would normally be done. The idea that the final selected studies of interest in specific systematic reviews will be discoverable in Google Scholar is also fairly predictable, given that Google Scholar scrapes content from publicly accessible databases such as PubMed, and thus duplicates that content.

It is unfortunate that their own methodology is not reported in sufficient detail to allow replicating their study. What they've done is a very tiny partial validation study showing that certain types of content are available in Google Scholar. That is important for showing the scope of Google Scholar, but it has absolutely nothing to do with doing a real systematic review, and the findings of their study should have no impact on the systematic review process for future researchers. Specifically, this is the sentence that most misstates the findings.

“In other words, if the authors of these 29 systematic reviews had used only GS, they would have obtained the very same results.”

All we really know is what happened for the researchers who did these several searches on the days they searched. It might have been possible, but to say that they would have obtained the same results is far too strong a claim. For the statement above to be true, it would have been necessary, first, to find a way to lock in Google search results for specific content at specific times; second, to replicate the search strategies from the original systematic reviews in Google Scholar and compare coverage; third, to have vastly more sophisticated advanced searching allowing greater precision, control, and focus; and so forth. Gehanno et al are well aware of these issues, and mention them in their study.

“GS has been reported to be less precise than PubMed, since it retrieves hundreds or thousands of documents, most of them being irrelevant. Nevertheless, we should not overestimate the precision of PubMed in real life since precision and recall of a search in a database is highly dependent on the skills of the user. Many of them overestimate the quality of their searching performance, and experienced reference librarians typically retrieve about twice as many citations as do less experienced users. … . It just requires some improvement in the advanced search features to improve its precision …”

More important, to my mind, is that the Gehanno study conflates the search process and the data analysis in the systematic review methodology. These are two separate steps of the methodological process, with different purposes, functions, and processes. Each is to be systematic for what is happening at that step in the process. They are not interchangeable. The Gehanno study is solid and useful, but placed in an inappropriate context, which results in the findings being misinterpreted.

Problem Three: Published

Retraction Watch & Plagiarism
Adam Marcus & Ivan Oransky. The paper is not sacred: Peer review continues long after a paper is published, and that analysis should become part of the scientific record. Nature Dec 22, 2011 480:449-450. http://www.nature.com/nature/journal/v480/n7378/full/480449a.html

The biggest problem with the Gehanno article, for me, is that it was published at all, at least in its current form. There would be much to like in the article if it didn't make any claims relative to the systematic review methodological process. The research is well done and interesting if looked at in the context of the potential utility of Google Scholar to support bedside or chairside clinical decisionmaking. There are significant differences between the approaches and strategies for evidence-based clinical practice and those for doing a systematic review. While the three authors are all highly respected and expert informaticians, the content of the article illustrates beyond a shadow of a doubt that the authors have a grave and worrisome lack of understanding of the systematic review methodology. It is worse than that. It isn't just that the authors of the study don't understand systematic review methodologies, but that their peer reviewers ALSO did not understand, and that the journal editor did not understand. That is not simply worrisome, but flat out frightening.

The entire enterprise of evidence-based healthcare depends in large part on the systematic review methodology. Evidence-based healthcare informs clinical decisionmaking, treatment plans and practice, insurance coverage, healthcare policy development, and other matters equally central to the practice of medicine and the welfare of patients. The methodologies for doing a systematic review were developed to try to improve these areas. As with any research project, the quality of the end product depends to a great extent on selecting the appropriate methodology for the study, understanding that methodology, following it accurately, and appropriately documenting and reporting variances from the standard methodology where they might impact the results or findings.

My concern is that this might be just one indicator of a widespread problem with the ways in which systematic review methodologies are understood and applied by researchers. These concerns have been discussed for years among my peers, both in medical librarianship and among devoted evidence-based healthcare researchers, those with a deep and intimate understanding of the processes and methodologies. There are countless examples of published articles that state they are systematic reviews which … aren't. I have been part of project teams for systematic reviews where I became aware partway through the process that other members of the team were not following the correct process, and the review was no longer unbiased or systematic. While some of those were published, my name is not on them, and I don't want my name associated with them. But the flaws in the process were neither corrected nor reported, which alarms me with respect to those particular projects, and which I take as an indicator of broader challenges with published systematic reviews in general.

I used to team-teach systematic review methodologies with some representatives from the Cochrane Collaboration. At that time, I was still pretty new to the process and had a lot to learn, but I did know who the experts really were, and who to go to with questions. One of the people I follow closely is Anne-Marie Glenny, who was a co-author on a major study examining the quality of published systematic reviews. Here is what they found.

“Identified methodological problems were an unclear understanding of underlying assumptions, inappropriate search and selection of relevant trials, use of inappropriate or flawed methods, lack of objective and validated methods to assess or improve trial similarity, and inadequate comparison or inappropriate combination of direct and indirect evidence. Adequate understanding of basic assumptions underlying indirect and mixed treatment comparison is crucial to resolve these methodological problems.”
Song F, Loke YK, Walsh T, Glenny AM, Eastwood AJ, Altman DG. Methodological problems in the use of indirect comparisons for evaluating healthcare interventions: survey of published systematic reviews. BMJ. 2009 Apr 3;338:b1147. doi: 10.1136/bmj.b1147. PMID: 19346285 http://www.bmj.com/content/338/bmj.b1147?view=long&pmid=19346285

We have a problem with systematic reviews as published, and the Gehanno article is merely a warning sign. There are serious concerns with the quality of published systematic reviews in the current research base, and equally serious concerns with the ability of the peer-review process to identify quality systematic reviews. This is due, in my opinion, to weaknesses in the educational process for systematic review methodologies, and in the level of methodological expertise on the part of the authors, editors, and reviewers of the scholarly journals. Those concerns are significant enough to generate doubt about the appropriateness of depending on systematic reviews for developing healthcare policies.

The Power of Post Publication Review, A Case Study

Pic of the day - Libraries

There are many discussions and examples of post-publication review as an alternative to the currently more common peer-review model. While this comes up fairly regularly in my Twitter stream, I don't think I've done more than hint at it within the blogposts here. I've also been watching (but neglecting to mention here) the emergence of data journalists and data journalism as a field, or perhaps I should say co-emergence, since it seems to be tightly coupled with shifts in the field of science communication and communicating risk to the public. Obviously, these all tie in tightly with the ethical constructs of informed consent and shared decisionmaking in healthcare (the phrase from the 1980s), which is now more often called participatory medicine.

That is quite a lot of jargon stuffed into one small paragraph. I could stuff it equally densely with citations to sources on these topics, definitions, and debates. Instead, for today, I’d like to give a brief overview of a case I’ve been privileged to observe unfolding over the weekend. If you want to see it directly, you’ll have to join the email list where this took place.


Part One: Publication

Last week, a new article on hormone replacement therapy (HRT) was published in the British Medical Journal (BMJ).

Schierbeck LL, Rejnmark L, Tofteng CL, Stilgren L, Eiken P, Mosekilde L, Køber L, Jensen JEB. Effect of hormone replacement therapy on cardiovascular events in recently postmenopausal women: randomised trial. BMJ 2012;345:e6409 doi: http://dx.doi.org/10.1136/bmj.e6409 (Published 9 October 2012)

The article reported the outcomes from a clinical trial; the trial registry includes more information.

Danish Osteoporosis Prevention Study http://clinicaltrials.gov/show/NCT00252408?link_type=CLINTRIALGOV&access_num=NCT00252408

Two days later, a message was posted to an evidence-based health care email list (EVIDENCE-BASED-HEALTH@jiscmail.ac.uk [EBH]), asking for discussion of the article.

The same day, a Rapid Response was published by BMJ criticizing the article.

Mascitelli L, Goldstein MR. The flawed beneficial effects of hormone replacement therapy. BMJ. http://www.bmj.com/content/345/bmj.e6409?tab=responses

The Rapid Response closed with this delightful witticism.

“If you torture numbers enough they will say anything you want.”


Part Two: Discussion

Meanwhile, on the EBH list, the conversation was going fast and furious. I’m not going to quote individuals, but I would like to collate an overview of the topics covered.

Methodology:
– blinding (it wasn’t)
– placebo-controlled (nope)
– 8% of eligible patients recruited
– sample size (small, compared to the Women’s Health Initiative (WHI) study)
– age confounding of participants

Ethics / Bias:
– Funding (pharma)
– Authors linked to pharma

Bibliography:
– incomplete?
– does it include the most important portions of the relevant evidence base?
– specifically lacking core references on the “age hypothesis”

Referees:
– Were they the right folk? (Yes, the list was assured by a BMJ editor)
– Did they read the article critically and review it thoroughly, including the bibliography?

Impact:
– implications for future practice
– placing this article appropriately in the context of the larger body of evidence
– implications for participatory medicine, informed consent, shared decisionmaking, and how to inform the public about risk for personal decisionmaking

Recommendations for future analysis:
– pool with similar data from other studies
– include in systematic review or meta-analysis
– strategic genomic analysis (NOTE: this was not available in 1993 when the study started)

Other:
– apparent publication delay (data collection first completed in 2003, then later followup in 2008, published in 2012)
– ghostwriting (specifically the history of it related to HRT)
– ‘System I’ thinking (gut feelings) vs ‘System II’ thinking (transparent methodological approach to decisionmaking)
– “science by sound-bite”

I’m not equipped to judge the article on any of these points. I did find it extremely interesting that the discussants included current and emeritus faculty from major universities in both the UK and the US, patient advocates, medical and health librarians, experts in evidence-based health care methodologies, and an editor of the journal which published the article.


Part Three: The Press

Of course, the press jumped all over this, in part because of the BMJ press release directing attention to this study.

HRT taken for 10 years significantly reduces risk of heart failure and heart attack. BMJ Press Releases, Wednesday, October 10, 2012 – 08:37. http://www.bmj.com/press-releases/2012/10/10/hrt-taken-10-years-significantly-reduces-risk-heart-failure-and-heart-atta

There are a lot of articles out there now in the popular press. Notice the type of language used.

BBC News: HRT reduces risk of heart attack, study suggests: http://www.bbc.co.uk/news/health-19886932

Guardian: HRT can cut heart attack risk, study shows: http://www.guardian.co.uk/lifeandstyle/2012/oct/09/hormone-replacement-therapy-heart-attack

Telegraph: HRT is safe and cuts heart deaths, ‘significant study’ finds: http://www.telegraph.co.uk/health/healthnews/9595745/HRT-is-safe-and-cuts-heart-deaths-significant-study-finds.html

Time: Heart Benefits from Hormone Replacement Therapy?: http://healthland.time.com/2012/10/10/heart-benefits-from-hormone-replacement-therapy/

US News: Health Buzz: HRT May Be More Than Safe, Study Says: http://health.usnews.com/health-news/articles/2012/10/10/health-buzz-hrt-may-be-more-than-safe-study-says

Kind of makes you want to run out and get pills, doesn’t it? This one is not from a major popular press venue, but it has some interesting aspects. Again, look at the language used in the headline.

MedPage: HRT Helps Heart with No Cancer, Clot Risks; By Charles Bankhead, Staff Writer, MedPage Today: http://www.medpagetoday.com/OBGYN/HRT/35236

This one is from a medical news service, and was published the same day as the original article, before even the BMJ press release. What is really interesting is that it says the article was reviewed prior to publication by an MD and medical faculty member.

Published: October 09, 2012
Reviewed by Robert Jasmer, MD; Associate Clinical Professor of Medicine, University of California, San Francisco and Dorothy Caputo, MA, BSN, RN, Nurse Planner

That’s awfully fast. I know they need to be fast, it’s news, but it seems to me that it would be hard to have enough time to read carefully or think about the implications and context; to have time to do much more than think, “Hmmm, BMJ, a good journal. This article says what the BMJ article said. OK.” I’m not saying that’s what the doc who signed off on it did, I’m just saying that the process and speed lend themselves to flaws.

The authors of the article and the BMJ editor both emphasized that this is truly unique data. Because of the WHI study, there is virtually no chance of generating this type of data again. That the Danish study findings run so contrary to the findings of the WHI study is shocking and noteworthy. Why? Is this significant enough to reopen the question of HRT risks? What does this mean for individual patients and clinicians attempting to make treatment plans and decisions?

Obviously, it is not as simple as the press would make it seem. Open access makes the article accessible, but without open post publication peer review, the CONTEXT is not made accessible. Open access can only go so far in supporting personal decisionmaking.

Systematic Review Teams, Processes, Experiences

Recently I was privileged to speak with the students of Tiffany Veinot's course in the School of Information on evidence-based practice and processes. It was an amazing and diverse group of students, with librarians and healthcare professionals from most (if not all) of the healthcare programs on campus! The students had insightful questions, and the conversation went on much longer than it should have, given the time allotted, but it was as richly rewarding for me as I hope it was for them. The approach this year focused more on case studies and storytelling — what is it really like? The slides can't give you the whole sense of it, but at least it is a start.

Systematic Review Teams, Processes, Experiences

The presentation is also viewable as a Google Presentation.

Systematic Review Teams, Processes, Experiences https://docs.google.com/presentation/d/1NaaYxG15LqxxlahSI2L1pLu7Q8W870B66pox79prtQY/edit

What is Best Available Evidence?

Doctor Reading Articles

Every now and then I take questions I’ve answered in other venues, and copy the answers over here for posterity. This is one of those. While my job is now in Emerging Technologies, I have a long history working in evidence-based medicine and systematic review. I’m starting to feel like I’m choking with content in that area that I haven’t blogged, so I am going to start putting a few bits of it here from time to time.

Q:

I had a question about EBM. The definition of EBM is: “The best available research evidence means evidence from valid and practically relevant research, often from the basic sciences …”

So can we use just basic science to justify a treatment? Can anyone give an example please.

A:

The definition I prefer is this, from David Sackett’s seminal article.

“Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research. By individual clinical expertise we mean the proficiency and judgment that individual clinicians acquire through clinical experience and clinical practice.”
David L Sackett, William M C Rosenberg, J A Muir Gray, R Brian Haynes, W Scott Richardson. Evidence based medicine: what it is and what it isn’t. BMJ 1996;312:71. http://www.bmj.com/content/312/7023/71.full

Or, even simpler, from the same work, “It’s about integrating individual clinical expertise and the best external evidence.”

Their section on the concept of “best available evidence” goes as follows.

“By best available external clinical evidence we mean clinically relevant research, often from the basic sciences of medicine, but especially from patient centred clinical research into the accuracy and precision of diagnostic tests (including the clinical examination), the power of prognostic markers, and the efficacy and safety of therapeutic, rehabilitative, and preventive regimens.”

My EBM / EBHC / EBD mentor and co-teacher, Amid Ismail, often made a big deal about this. The concept refers to the evidence pyramid, which you may have seen illustrated throughout the EB literature. The version I use in teaching is on page two of this PDF:

CHAIN of TRUST: http://www-personal.umich.edu/~pfa/pro/courses/ChainOfTrustLoEVert2.pdf
Levels of Evidence, Updated for the Internet

Amid always emphasized that the idea of best available evidence is ALWAYS tightly integrated with clinical judgment and the specific needs of the individual patient. So here is an example.

EXAMPLE

Let’s say there is a patient who is partially edentulous, struggling to eat, losing weight, becoming anorexic, and this is complicating other healthcare issues. The doctor wants to decide on the best way to make it easier for that patient to take in sufficient nutrition. The evidence base seems to suggest that dental implants are the best choice, and there are several systematic reviews in support of that concept. However, the patient has other conditions, of which the most important is rheumatoid arthritis (RA), and also has impaired wound healing. This makes the idea of surgery for dental implants much more risky because of the patient’s personal situation. The RA also creates problems for the patient in using their hands, which might make managing post-operative care a challenge, and post-operative care is a very important aspect of successful dental implants.

There are no systematic reviews on the population of partially-edentulous patients with RA, because the topic is too narrow and specific. Indeed, there are no articles at all on this full combination of factors! This is no surprise, and is very common with rare conditions as well as less common combinations of conditions, often found among the elderly or persons with chronic health concerns. In this case, the best strategy for searching (and the best evidence) is likely to come from searching the major factors individually or in combination, then trying to integrate and weigh the evidence found to make a decision for that patient.

There is one article on the combination of partially edentulous and RA, based on a population of 6 patients.

Sato H, Fujii H, Takada H, Yamada N. The temporomandibular joint in rheumatoid arthritis–a comparative clinical and tomographic study pre- and post-prosthesis. J Oral Rehabil. 1990 Mar;17(2):165-72. http://www.ncbi.nlm.nih.gov/pubmed/2341957

That single article isn’t exactly on the topic we were searching, but it is as close to it as anything we can find. It recommends, instead of implants, pulling the teeth and placing a prosthesis. Normally, according to the pyramid of evidence, this would not be evidence that ranks very highly. In this situation, it is the BEST evidence available.

The point is that for some questions or in some situations, the best evidence may not be very good; it might not be a systematic review, or an RCT, or even a case-controlled study. Sometimes it is a case report, or clinical experience. Sometimes it is animal studies, which we know don't transfer over well to humans. A great deal of the research on dental implants comes from animal studies, so that might have happened with this question if the complicating condition had been diabetes or something other than RA. But a close match from an animal study, compared with no evidence in humans, means that the animal study is, for that question, the best evidence. For conditions that have a heavily immunological or microbiological aspect, it is not unusual for the emerging research to be based in labs, not yet tested in animals, much less people. Sometimes, in exceptional situations, especially if standard treatments have already failed, the best available evidence may even be personal reports. It is the clinician's responsibility to examine the patient, gather the patient's relevant history, review the evidence, select the best evidence, and integrate all of these in making a recommendation for a given patient. The point is not that the evidence is always excellent, but that it is, literally, the best available for that question and that patient.

Medlib’s Blog Carnival 2.1: Free Speech in Health Information, and More

WARNING: After this entry was originally posted, it came to my attention that I had not received all of the early entries for this round of the Carnival. The following post was edited to reflect these updates.


In the context of the looming deadline for comments on the FDA's development of social media guidelines, the Medlib's Blog Carnival theme this month was free speech in health information. Briefly, the FDA has a long history of establishing guidelines to prevent unethical publication of inaccurate or misleading health information by persons or corporate entities promoting the use or sale of drugs or medical devices. The flip side of this is to encourage informed decisionmaking based on high-quality, unbiased health information. There were few submissions this month, but those received were sound contributions looking at various aspects of this complicated issue.

Laika provided not one, but TWO excellent posts. The first one, “NOT ONE RCT on Swine Flu or H1N1?! – Outrageous!,” discusses the issue of popular news and hype as opinion influencers in comparison with actual research. Taking H1N1 as an example, she begins with a Twitter post and popular press, then discusses when it is appropriate to expect what kind of evidence in support of a question, simple tips for finding better quality evidence, as well as specific scientific and clinical contextual issues that beautifully illustrate not just issues of scientific research and methodology, access to information and information quality assessment, but also quite a bit of useful information about H1N1 itself! Laika provides a strong voice for clear reason and balanced information, but at the same time respects the importance of scientific dialog and communication in shaping the evolution of what we know about any given topic.

Laika’s MedLibLog: NOT ONE RCT on Swine Flu or H1N1?! – Outrageous!. http://laikaspoetnik.wordpress.com/2009/12/16/not-one-rct-on-swine-flu-or-h1n1-outrageous/

In her second post for this Carnival, Laika again zeroes in on the issue of dialog in science, and the broader issue of respect. This is true not just for dialog between scientists, as in the example she discusses, but even more so among the public and news media. The life lessons Laika draws from her tale of disrespect and influence among scientists are ones we should all keep in mind when observing disagreements about science. I wanted to cheer when I read her excellent, methodical review of the limits of evidence-based medicine, and of when one should or should not apply its findings to a given situation. While EBM is a very useful tool, I have also encountered worrisome instances in which a useful, low-risk, low-cost intervention is not used because there are not yet sufficient RCTs, or because it is being researched for a given use but hasn’t yet been approved for it by the FDA. When EBM becomes a barrier to good clinical care, we have a different problem. I particularly liked her example of a systematic review finding insufficient evidence to support the use of parachutes when jumping from a plane, and her selection of quotations from the comments. My favorite, succinct and clear, was this line from a clinician at my institution: “RCTs aren’t holy writ, they’re simply a tool for filtering out our natural human biases in judgment and causal attribution. Whether it’s necessary to use that tool depends upon the likelihood of such bias occurring.” Read, read, and read this post again.

Laika’s MedLibLog: #NotSoFunny – Ridiculing RCTs and EBM. http://laikaspoetnik.wordpress.com/2010/02/01/notsofunny-ridiculing-rcts-and-ebm/

Dr. Shock’s post about BioMedSearch focused on “free” as in free access to quality healthcare information. A related concept in his post was the barrier that traditional search methods pose to the discovery of quality health information, and whether it is time for a change. While you are visiting his blog, you might want to take a look at another recent post, “The Hidden and Informal Curriculum During Medical Education,” which talks about overt and covert concepts and communications in medical education. While the specific example was about narratives in a secured online space, the concepts are perhaps even more important when thinking about healthcare communications in unsecured social media spaces.

Dr. Shock, A Neurostimulating Blog: BioMedical Search on BioMedSearch: http://www.shockmd.com/2009/11/28/biomedical-search-on-biomedsearch/

In an oblique connection, Novoseek, the innovative biomedical web search engine covering Medline, grants, and online publications, offered a post on a new feature that allows searchers to limit results by publication type. While this doesn’t directly connect to free speech (rather the reverse), it does directly connect to the quality of health information and to control through peer review, both of which are implied contextual issues. Using a health-specific search tool automatically focuses results on a narrower, higher-quality subset of the information available on the web. Being able to limit by publication type lets the searcher slice the search even more finely, focusing on just the highest quality health information available; a small sketch of this kind of filtering follows the link below.

Novoseek: Tip #1 to improve searches in novoseek – Filter results by publication type. http://blog.novoseek.com/index.php/resources/tip-1-to-improve-searches-in-novoseek-filter-results-by-publication-type.html/
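
Novoseek’s own interface isn’t shown here, but the same slicing can be sketched against PubMed’s public E-utilities, which support a publication-type filter via the [pt] field tag. A minimal example, assuming network access and only the Python standard library:

```python
# Sketch of filtering a biomedical search by publication type, using
# NCBI's public E-utilities for PubMed (not Novoseek's own API).
import json
import urllib.parse
import urllib.request

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_pubmed(query, pub_type=None, retmax=5):
    """Search PubMed, optionally restricted to one publication type."""
    term = query
    if pub_type:
        term += f" AND {pub_type}[pt]"   # e.g. "Review", "Clinical Trial"
    params = urllib.parse.urlencode(
        {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    )
    with urllib.request.urlopen(f"{BASE}?{params}") as resp:
        return json.load(resp)["esearchresult"]["idlist"]

# Everything on a topic vs. only the review literature:
print(search_pubmed("reiki"))
print(search_pubmed("reiki", pub_type="Review"))
```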

PS. While you are taking a look at that blogpost, you might also want to look at an earlier post from Novoseek called “The importance of context in text disambiguation.” It is a kind of geeky, technical post, but the fundamental concept is central to how humans (as well as computers) identify quality when they see it.

I See the Punk, But Where’s the Science?: Science Blogging – Good, Bad, and Ugly

I’ve been working up a series of talks on Science 2.0 – what, why, payoffs, etcetera. In an earlier talk on science blogs (Staying Current with Science Blogs and Wikis) I had promoted the ScienceBlogs community (http://scienceblogs.com/) as a place for faculty to track science innovation and trends, as well as to consider for their own blogging. I did that because of the range of voices represented and the dynamism of the conversation. Right now, I am about to backpedal on that recommendation just a bit, to qualify it as I should have done in the beginning.
Earlier this week I stumbled across a blogpost at ScienceBlogs that raised red flags for me regarding the actual science in the post.
SciencePunk (Frank Swain): Reiki One-Liners: a daily dose of healing via Twitter: http://scienceblogs.com/sciencepunk/2009/02/reiki_one-liners_a_daily_dose.php

DISCLAIMER: I was initially attracted to the article because I saw it discussed on Twitter, and I must admit that the author of the Reiki One-Liners is one of the roughly 2000 people I follow there. I also follow a lot of hard-core scientists and informaticians, as well as medical and clinical people, social tech geeks, local folk, librarians, and friends. I appreciate and enjoy the diversity of the conversations I observe and participate in through Twitter.

One sentence that was a warning sign for me was this: “probably the world’s laziest form of quackery,” as was the closing sentence, “Given their aversion to actually seeing patients, Reiki practitioners clearly think diagnosis is just some kind of foolish Western misconception anyway.” Immediately I wanted to know what made him think this — where’s the evidence, and how does he support this statement? So I browsed on to the comments, many of which were insulting both to Reiki (overtly) and to the process of intellectual inquiry (covertly).

Here is a particularly inflammatory example:
“No, Pamir, Reiki isn’t “spiritual teaching”. It’s nonsense. OK, it’s more than nonsense, it’s fraud, it’s drivel, it’s lies, it’s trash. It’s the way of the idiot.
And a world without idiots feeding idiocy to other idiots would be a better place.
Help your fellow practitioners by suggesting they take up honest work instead of feeding them waffles.”

Well, before we get too insulting, let’s check the credentials of the person we’re bashing, OK? Pamir’s work is actually affiliated with some hospitals and medical organizations.

South Florida Hospital News: Reiki: The Art of Spiritual Healing: http://www.southfloridahospitalnews.com/specialfocus/default.asp?articleID=492

My response was to dig a bit (not deeply) into the scientific evidence and to explore the claims of the original post and the commenter. Through my work I was already aware of the National Center for Complementary and Alternative Medicine (NCCAM) at the US National Institutes of Health. Here is what they have to say about Reiki:

NCCAM: Reiki, an Introduction: http://nccam.nih.gov/health/reiki/

Digging further into the evidence via PubMed, I found little to support the blogpost’s assumptions. Basically, the evidence seemed to be saying that Reiki might work, we aren’t sure yet, but it doesn’t seem to hurt as long as you have the money to pay for it. I also had in mind a recent case study we’d been working with at the local medical school on the effects of intercessory prayer, which sounds a little similar to me. I am thinking that Reiki is a case of potential good with little risk of overt harm. I responded to the blogpost with a comment focused on the issues of science, evidence, and the interpretation of evidence. I’ll get to that later. It is what happened next that is interesting.

I received a notice from the system that my comment had been directed to the author for approval before being released to the web page. That is pretty common, and didn’t concern me. But that was two days ago, and four more comments have since been added to the page without mine being approved for release. I sent a Twitter message to the original author asking why my comment had not been approved. OK, Frank is new, still learning how Twitter works, and he doesn’t quite get it yet. He tried to reply to my tweet but sent it in a way that I wouldn’t see. (Frank, the @pfanderson has to be the first characters of the tweet to show up in my Replies box.) He asked me to email him, but on neither his blog nor his Twitter account was I able to find an email address: not in the profile, not on the “About” page, nor in a Google search. I will give him kudos for trying to reach me, but … did he check the pending comments file at the blog?

Frank is a prominent voice in the science blogging community, having been tapped to write for the Guardian in the UK, among other venues. Here is a brief bio and a video of his talk about the importance of informing our decisions with evidence:

Why is Science Important?: Frank Swain: evidence to base our decisions: http://whyscience.co.uk/2008/12/frank-swain-evidence-to-base-our-decisions.php

I want to believe that any author at ScienceBlogs would be scientific enough to support the discussion of evidence that is an essential part of scientific inquiry and progress. I spoke on this as an invited speaker at the Medical Library Association last May, coming from a strong personal conviction that social media has much to contribute to Science through supporting Science as Conversation.

So what does it say about a science blog when inflammatory and insulting comments are part of the comment thread, but a comment about the evidence on the topic is not released? For me, the most interesting and important part of the conversation is missing, and I must question the purpose of the original post if informed dialog is suppressed.

Basically, what it comes down to is a strategy that applies in science at every point, from hallway conversations to informal publications such as blogposts, all the way through to peer-reviewed publications and systematic reviews — “Quis custodiet ipsos custodes?” “Who watches the watchers?” (Juvenal, Satire 6.346–348). All of us, in whatever role (scientist, humanist, student, teacher, curious observer), must not assume the reliability of a source, but should take everything with a grain of salt. That’s it, that’s all I am saying: that we must keep questioning, asking where is the science, where is the evidence, what is the quality of the evidence, what are the trends in the evidence, where is it coming from, who is doing the research, how is it being funded, is there obvious bias … all those questions. We must keep asking.

So what was my comment that wasn’t posted? See below. Please note that at the time I wrote it I did not realize that Frank, not Sam, was the author of the post; that mistake is my own. I will send Frank a copy of this post, and hope for his response at the original post. Also, here is an example of the sort of content that triggered the original SciencePunk blogpost:

http://twitter.com/gassho/status/1174226019
Reiki One-Liners

————-
Sam, I might encourage you to examine some of the recent research evidence on Reiki:
http://tinyurl.com/au9wsn

Within the context of my experience as a consultant working with evidence-based healthcare for the past decade, I would mention that insufficient evidence does not always mean that a treatment doesn’t work. It means exactly what it says — that we don’t have enough research to make a conclusive decision. The purpose of systematic reviews that identify insufficient evidence is largely to identify flaws and gaps in the research to be addressed in future studies. I have seen a number of drug trials in similar circumstances whose findings completely turned around by the time of the five-year update to the review.

In the case of insufficient evidence, clinicians should look at the BEST AVAILABLE EVIDENCE to support decisions, as well as the balance of potential risk/harm from the treatment. Current clinical trials and systematic reviews of Reiki show distinct trends supporting its effectiveness, and show little or no harm. There just aren’t enough trials to have reached statistical significance. So, while results are currently inconclusive, they are encouraging of a positive effect.

Please note that the bulk of the research trials examine the effectiveness of the “healing touch” aspects of Reiki. In your post you seem unaware of this aspect of Reiki, which you could not have avoided knowing if you had read either the Reiki FAQ: http://www.reiki.org/faq/WhatIsReiki.html or even Wikipedia. Perhaps you should research and define your terms, as politely suggested by Pamir.
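
As a footnote to the statistical point in that comment, here is a toy calculation (invented numbers, not real Reiki data) showing how the same modest effect can be statistically inconclusive in a small trial yet clearly significant once enough participants are pooled:

```python
# Toy numbers only: a fixed modest effect fails to reach p < 0.05 with a
# small sample but easily does with a larger one, which is why
# "inconclusive" is not the same as "ineffective".
from scipy.stats import ttest_ind_from_stats

effect, sd = 0.5, 1.0   # hypothetical mean improvement vs. control

for n in (20, 200):     # participants per arm
    t, p = ttest_ind_from_stats(mean1=effect, std1=sd, nobs1=n,
                                mean2=0.0,    std2=sd, nobs2=n)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"n={n:4d} per arm: p = {p:.4f} ({verdict})")
```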