Bubble, Blur, Flip, Spin, Hoard, Hug. Part Five: Flip (5b: Publishing)


Original version published at: Life of an emerging technologies librarian in the health sciences: http://monthly.si.umich.edu/2013/01/17/life-of-an-emerging-technologies-librarian-in-the-health-sciences/ On this blog: Bubble, Blur, Flip, Spin, Hoard, Hug. Then, Now, Bubble, Blur, Flip (a).


Flip, Part Two, Publishing

[Image: MLGSCA09 Cerritos: Objects of our Attention]

I missed last week’s post in this series because I had some sort of virus. Back now, and ready to pick up with how publication models are flipping. Publishing is flipping in the sense that the current model of publishing is broken in so many ways that people have lost count. It just isn’t working. Flipping is also happening because communities are trying, in many different ways, to make things better. There are many layers to the changes going on, including economics, collaborative creation, shifts in online environments, transparency as marketing, pre-publication review, alternative archiving models, and so much more.

Coincidentally, even though this was next on my list, it turned out to be a great thing that I waited a week, because a conversation attached to the post “An Argument Against Science 2.0” has provided a ton of resources on shifts in peer review. I stumbled over this huge collection of general resources on publishing reform. Then today I heard my colleague Nadia Lalla speak about shifts in collection development, a presentation which highlighted some of the ways in which the traditional model is no longer working well.

Economics

Some of what Nadia talked about with our team included her extremely clever models of current publishing financing, including “The Evil Triumvirate” and the “I-have-a-dream pricing models.”

[Image: Second Life: Epiphany: Build a Cathedral, and the Angels and Devils will Confer]

In the “Evil Triumvirate” model were the Great Satan, the Spawn of Satan, and the Evil Empire, all of which represent unnamed publishing conglomerates who have exceptionally poor customer service, and who seem determined to milk the dying cow for every last drop while they wring her neck. (That is my imagery, not Nadia’s.) One example she described was a publisher who refused to send an invoice so that we could pay our current bill, then cut off access to the publications, and only then agreed to send an invoice … for almost three times the previously agreed amount. Oh, right.

By the way, just for the record, other librarians seem to feel pretty much the same way, even if they aren’t quite so poetic about it.

And you probably already know about the Elsevier boycott. That said, Elsevier does some good stuff, and some not so good stuff, and there are other large publishing houses doing the same kinds of things. It is never as simple as black and white.

[Image: Cow, by Markku Åkerfelt on Flickr.]

The “I-have-a-dream pricing models” described the many, many ways in which publishers will change funding models, with the end goal being that if you change the name of the cow, you can charge a lot more for the same cow, and maybe sell it twice! (Again, my imagery rephrasing my understanding of what Nadia said. Don’t blame her, unless you like it.) Examples included publishers who set one price for usage, then announce that you are no longer eligible for that pricing model and must switch to a tiered-use model, which means the price just went up 400%. Usage models shift from FTE counts for faculty or students, to departments, to how many beds are in your hospital (and are those licensed beds or staffed beds?), and so forth. Keep slicing it different ways.
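To make that kind of jump concrete, here is a minimal, purely hypothetical sketch: the tier boundaries, rates, and usage numbers below are invented for illustration and are not drawn from any real publisher’s price list.

```python
# Hypothetical illustration only: the tiers, rates, and usage numbers are
# invented; no real publisher contract is being described.

def flat_fte_price(fte: int, rate_per_fte: float = 2.0) -> float:
    """Old model: a flat per-FTE subscription rate."""
    return fte * rate_per_fte

def tiered_use_price(annual_downloads: int) -> float:
    """New model: the price jumps at arbitrary usage tiers."""
    tiers = [                 # (downloads up to this ceiling, price)
        (10_000, 8_000.0),
        (50_000, 24_000.0),
        (200_000, 48_000.0),
    ]
    for ceiling, price in tiers:
        if annual_downloads <= ceiling:
            return price
    return 96_000.0           # top tier

# Same institution, same content, different name on the cow:
old_invoice = flat_fte_price(6_000)        # 12,000 under the flat FTE model
new_invoice = tiered_use_price(55_000)     # 48,000 under the tiered-use model
print(new_invoice / old_invoice)           # 4.0: four times the old price
```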

Open Access

One of the frequently touted solutions is to simply commit to open access. Flip the economic model so that journals aren’t in the business of selling access to readers; instead, publication is paid for up front, and the end result is free to all. I talk about this a lot in my forthcoming book chapter. Meanwhile, the NIH Mandate has had a huge impact on my library, with Merle Rosenzweig (and others) advising faculty on best practices, policies, and the fine points of where to publish, how to submit articles, impact on the grant process, and more. This has become a big focus of the work we do, and of the communities we support on campus (hint, hint, Open Michigan?).


NIH Public Access Policy (NIHPAP) Lecture in Five Parts: http://www.youtube.com/playlist?list=PLVB8CZOF_DzFYgzT8tMS2wxW0C7WDuba6

It isn’t just the traditional publishers making a mess of the money side, either. It isn’t at all a black-and-white, us-versus-them dichotomy. While I tend to be a fan of open access (read Peter Suber) and transparency (read David Brin), there are abuses of the alternate systems as well. It is not uncommon for someone who has heard me talking about online publishing alternatives to contact me quietly, privately, on the side about a publisher that has approached them, wanting them to submit an article or a new book to an open access venue. The venue will look reputable and professional and say all the right things, but they’ve never heard of these people or this title before. The person contacting me always wants to know, “Are they legit? Is this too good to be true?” Good question. During tonight’s #medlibs Twitter chat, Molly K (@dial_m) brought up Beall’s List of predatory open access journals.

Beall’s List: Potential, possible, or probable predatory scholarly open-access publishers: http://scholarlyoa.com/publishers/

And those predators undermine the whole open access community by creating fear among authors, who become unwilling to risk the potential economic penalties of publishing in open access journals. Of course, those “penalties” aren’t really the kind of problem people imagine.

Suber, Peter. Once more: correcting the canard that OA always or usually costs authors money. https://plus.google.com/109377556796183035206/posts/QqMhLjodN1T

So, at the moment, the entire market for publishing scholarly, academic, and scientific research is, frankly, pretty depressing. Misinformation and misunderstanding about research publishing is rampant. There are good journals and good publishers, but it takes a lot of people working hard to figure out which are good and which aren’t, and why, and how; even the best journals, editors, and authors sometimes make mistakes. This has a bit of the feel of that famous image of a fiddler scratching out a tune on a shaky roof beam.

Peer Review Reform and Post-publication Review

Retraction Watch & Plagiarism

Aside from the actual act of formal publication in an acknowledged journal, there is also the challenge of quality, credibility, reproducibility, and overall trust in the published products. There is SO much going on examining:
– what’s wrong with what is actually published;
– why are retraction rates skyrocketing (read RetractionWatch);
– what about ethics in research publishing (see also COPE);
– the impacts of scientific misconduct on publishing and research funding;
– how bias against negative results damages science as a whole;
– publication bias overall;
– the whole question of conflicts of interest being accurately identified and reported, and how they impact on what is published;
– how pre-publication peer review is being challenged, and can turn into a “members only” club, or worse; and
– how all of these negatively impact public belief in the credibility of science. Meaning any science, all science, and all scientists. And then, naturally, that impacts on public policy and science funding.

I particularly like this quote as a validation for arguments in support of shifting from pre-publication review to post-publication review.

“If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe?”
Lehrer, Jonah. The Truth Wears Off: Is there something wrong with the scientific method? New Yorker Dec. 13, 2010. http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer

What it comes down to is how much trust you can put in the information you have. Replication tells us whether the same thing will happen each time. Retraction itself is dangerous because it removes the information that is questioned, making it impossible to learn from those mistakes in the future. Biases of all sorts make it impossible to bring forth new ideas that might or might not be solutions for existing problems. In combination, these make the line between science and pseudoscience increasingly thin, and break down paths to innovation and discovery.

Let’s just say, this is not shaping up to be a pretty picture with a rosy future.

Paikan Paolao brought to my attention (comment here) the extensive crowdsourced list of tools, resources, and communities for alternatives to traditional peer review.

List: Standalone peer review platforms: https://docs.google.com/document/d/1HD-BEaVeDdFjjCNFkb0j3pvwe7MrP3PtE-bWHkkdq7Q/edit#heading=h.uhoilqhqulp8

So many tools, from wonderful people who really care and are trying to find a way to fix these problems, but so far none of them has built up significant traction or proven to have a significant impact on the process. That doesn’t mean it won’t happen, but hopes are fading, and the idea of post-publication peer review as a solution for quality problems in research publishing is coming to be seen as only one part of a much larger and more complex problem.

If you aren’t already reading Curt Rice’s blog, I highly recommend it for his discussions of peer review and challenges to quality in current science publishing: quality control, plagiarism, “the politics of prestige,” peer review, bias in the editorial and review processes, manipulation by the science publishing infrastructure, and so much more. He does an excellent job of tracking and questioning the emerging themes in the conversations around these issues.

How researchers and scientists themselves assess the quality, value, accuracy, and utility of research and science discoveries is under attack from within. The very tools designed to assist with this are proving to create at least as many problems as they have ever solved. This has huge implications for politicians, who develop science policy and use science to shape other policies; for librarians, who depend on these reviews and quality tools to shape collections; and for the general public, who depend on both policy and library collections to inform their own personal and community decision-making processes.

Let’s Start Over

Recently, a related issue has been getting attention — beyond peer review of individual articles, how ranking of entire journals is also having negative impacts on the quality of science.

Björn Brembs, Marcus Munafò. Deep Impact: Unintended consequences of journal rank http://arxiv.org/abs/1301.3748

This is such a phenomenally important piece, I want to quote extensively from the abstract. Please note these three pieces.

“Much has been said about the increasing bureaucracy in science, stifling innovation, hampering the creativity of researchers and incentivizing misconduct, even outright fraud. Many anecdotes have been recounted, observations described and conclusions drawn about the negative impact of impact assessment on scientists and science.”

Translation: “We’ve got trouble, people, right here in River City.”

“These data confirm previous suspicions: using journal rank as an assessment tool is bad scientific practice. Moreover, the data lead us to argue that any journal rank (not only the currently-favored Impact Factor) would have this negative impact.”

Translation: Our tools for determining scholarly quality (IMHO, the entire publishing system) are busted.

“Therefore, we suggest that abandoning journals altogether, in favor of a library-based scholarly communication system, will ultimately be necessary. This new system will use modern information technology to vastly improve the filter, sort and discovery function of the current journal system.”

Translation: We need to try something completely different. But what?

If you’ve read this far, you’ll probably want to also read some of the other pieces talking about the implications of this report. There are several. Here are a few I read.

Deep impact: Our manuscript on the consequences of journal rank. http://bjoern.brembs.net/news.php?item.864.11/

Unexpected consequences of journal rank. Physics Today January 30, 2013. http://blogs.physicstoday.org/thedayside/2013/01/30/

Consequences of using the journal impact factor. BackReAction Feb. 05, 2013. http://backreaction.blogspot.com/2013/02/consequences-of-using-journal-impact.html

So with publishing not just flipping, but fragmenting like a crumbling sandcastle around us, what should we be doing differently? We know it needs to change, but in the struggle to fix the problems, we grasp at quick fixes and short-term “solutions” the way cancer patients cling to mysterious “cures.” There are two pieces I discovered recently, both of them, curiously, in the same journal, Frontiers in Computational Neuroscience.

The first one, by Jason Priem and Bradley Hemminger, envisions breaking apart each of the functions currently coalesced in the journal publishing model into independent systems. From where I stand now, with what I’ve read so far, this is probably the single most important article on journal publishing reform to read.

“For instance, a scholar might deposit an article in her institutional repository, have it copyedited and typeset by one company, indexed for search by several others, self-marketed over her own social networks, and peer reviewed by one or more stamping agencies that connect her paper to external reviewers. The DcJ brings publishing out of its current seventeenth-century paradigm, and creates a Web-like environment of loosely joined pieces—a marketplace of tools that, like the Web, evolves quickly in response to new technologies and users’ needs.”

Priem J, Hemminger BM. Decoupling the scholarly journal. Front. Comput. Neurosci., 05 April 2012 | doi: 10.3389/fncom.2012.00019 http://www.frontiersin.org/computational_neuroscience/10.3389/fncom.2012.00019/abstract
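Purely as a thought experiment, here is a small sketch of that “loosely joined pieces” idea; the service names, the repository URL, and the interfaces are all invented for illustration, and nothing here is something Priem and Hemminger actually specify.

```python
# Thought experiment only: these services and interfaces are invented to
# illustrate the decoupled-journal idea, not an API anyone has built.

from dataclasses import dataclass, field

@dataclass
class Article:
    title: str
    repository_url: str                    # deposited in an institutional repository
    services_applied: list = field(default_factory=list)
    review_stamps: list = field(default_factory=list)

def copyedit(article: Article) -> Article:
    article.services_applied.append("copyediting/typesetting vendor")
    return article

def index(article: Article, service: str) -> Article:
    article.services_applied.append(f"indexed by {service}")
    return article

def stamp_review(article: Article, agency: str, verdict: str) -> Article:
    article.review_stamps.append((agency, verdict))
    return article

# Each step could be handled by a different company or community; none of
# them "owns" the article the way a journal bundle does today.
paper = Article("My findings", "https://repository.example.edu/handle/1234")
paper = copyedit(paper)
paper = index(paper, "Search Service A")
paper = index(paper, "Search Service B")
paper = stamp_review(paper, "Stamping Agency X", "methods sound")
```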

The next piece, by a European team, cites Jason and Bradley’s article and pulls from an extensive and excellent bibliography the common elements emerging as consensus strategies for research publishing reform, with a focus on the review process. I strongly urge you to read the whole article, in which they unpack these ideas, but as a teaser I will give here their key consensus points, distilled from 18 articles.

1. The Evaluation Process is Totally Transparent
2. The Public Evaluative Information is Combined into Paper Priority Scores
3. Any Group or Individual can Define a Formula for Prioritizing Papers, Fostering a Plurality of Evaluative Perspectives (see the sketch after this list)
4. Should Evaluation Begin with a Closed, Pre-Publication Stage?
5. Should the Open Evaluation Begin with a Distinct Stage, in which the Paper is not yet Considered “Approved”?
6. The Evaluation Process Includes Written Reviews, Numerical Ratings, Usage Statistics, Social-Web Information, and Citations
7. The System Utilizes Signed (Along with Unsigned) Evaluations
8. Evaluators’ Identities are Authenticated
9. Reviews and Ratings are Meta-Evaluated
10. Participating Scientists are Evaluated in Terms of Scientific or Reviewing Performance in Order to Weight Paper Evaluations
11. The Open Evaluation Process is Perpetually Ongoing, such that Promising Papers are more Deeply Evaluated
12. Formal Statistical Inference is a Key Component of the Evaluation Process
13. The New System can Evolve from the Present One, Requiring No Sudden Revolutionary Change

Nikolaus Kriegeskorte, Alexander Walther, Diana Deca. An emerging consensus for open evaluation: 18 visions for the future of scientific publishing. Front. Comput. Neurosci., 15 November 2012 | doi: 10.3389/fncom.2012.00094. http://www.frontiersin.org/Computational_Neuroscience/10.3389/fncom.2012.00094/full
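To illustrate consensus points 2 and 3, here is a minimal sketch of one group’s hypothetical formula for folding open evaluative information into a paper priority score. The signals chosen and the weights are entirely made up; the paper itself prescribes no particular formula.

```python
import math

# Illustration of consensus points 2 and 3 only: these signals and weights
# are invented for the sketch; the paper proposes no specific formula.

def priority_score(ratings, citations, downloads, weights=(0.6, 0.3, 0.1)):
    """Combine public evaluative information into a single priority score.

    ratings   -- list of numerical ratings on a 0-10 scale
    citations -- citation count
    downloads -- a usage statistic
    weights   -- one group's choice; any other group can define its own
    """
    mean_rating = sum(ratings) / len(ratings) if ratings else 0.0
    w_rating, w_cite, w_use = weights
    return (w_rating * mean_rating
            + w_cite * math.log1p(citations)    # damp raw counts
            + w_use * math.log1p(downloads))

# Two evaluative perspectives scoring the same paper differently:
paper = {"ratings": [7, 8, 6], "citations": 40, "downloads": 1200}
methods_weighted = priority_score(**paper)                       # default weights
ratings_only = priority_score(**paper, weights=(1.0, 0.0, 0.0))
```

The point of the sketch is point 3: because the evaluative information is public, any group can run its own formula over the same data and surface a different set of papers.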

The short version? Scholarly journal publishing is a royal mess, and we don’t yet know the solution.

Next week, flipping healthcare, I hope.

(To be continued …)

UPDATE: Link to video removed (which was not intended to be public). Saturday, Feb 23, 0:42.

2 responses to “Bubble, Blur, Flip, Spin, Hoard, Hug. Part Five: Flip (5b: Publishing)”

  1. Just to clarify…
    My tweet about Elsevier & Scopus was not about publishers not responding to invoices, nor did I ever call them the devil. It was about their participation on social networking sites. The full transcript of the conversation can be found at
    http://bit.ly/12UsdeT

    • Indeed, Michelle, and the tweets quoted weren’t yours, they were from Milhealth, but the Twitter embed code drags along bits and pieces of the conversation. Twitter briefly had an option in generating embed code that allowed you to check a box for whether or not the embed code should include the context, but that option seems to have disappeared. This is a time when it would have been useful. Your comment was professional to the max, quite positive, focusing on the issues you stated.
