Emerging Issues with Plagiarism in Scholarship

Inside the Card Catalog

As long as we’ve had factory-style education, we’ve had cheaters. As long as we’ve had money, we’ve had cheaters. Before that, we had thieves. If there are resources worth having, there will be those who want or need them but lack the skills, resources, initiative, energy, or general wherewithal to get them through their own efforts, and who will find ways to get them some other way. That’s true in academia every bit as much as anywhere else. I remember the first time I actually graded assignments for a class (of graduate students!) and found, in a class of 100, four people who had obviously copied each other’s work. I am guessing there were probably more examples that I didn’t notice because of my inexperience in looking for this. It isn’t just plagiarism, you know. Another couple of students didn’t copy, but falsified their results, which was obvious to me because I really knew the content of the assignment.

What was the problem with their cheating? Well, first, they didn’t learn what they were supposed to learn. More importantly, this was a grad program for healthcare practitioners; all of the students had signed an ethics agreement and an honor code affirming that they were being responsible and ethical in their learning. When someone cheats as a student, you have to wonder whether they have the ethics and responsibility to be a healthcare provider. Another reason is that misappropriating someone else’s work ultimately creates undocumented gaps in the intellectual record, undermining the effectiveness and utility of science overall and destroying credibility in the scientific method. This was a big topic of discussion at the recent CI Days, where the science behind the core issue of reproducibility of research and its implications was presented in a keynote by Victoria Stodden.

That cheating is wrong is one of those things “everyone knows.” Now, what if it wasn’t? Or what if there was truly another side to the story? What if, instead of insisting that each student do the assignment independently and not copy (and having 100 assignments to grade that took me 2 months to finish), I had instead organized the assignment as a small group project? The students would probably have learned more, they would have self-policed, copying would have been virtually impossible, and I would have had fewer assignments to grade. Win-win.

Flash forward. A faculty member phoned me this week with a question, and I didn’t expect the question or have a clue what the answer was. The doc had just written a research article, was submitting it, and had been advised by the department chair to run it through a plagiarism checker before submission. The question was which plagiarism checker to use. I think I saw stars before my eyes, I felt so stunned by the idea.

I did find the answer (more on that later), but thought this was significant enough to provoke further thought and conversation. Why use the plagiarism checker BEFORE submitting? To prevent the researcher from ACCIDENTALLY copying without attribution. That’s a lovely thought, but you would think that people making a sincere effort to avoid this would be OK. Not so. Journal editors are now routinely checking submissions. The idea is that if the software says you cheated, you want to know it before you get your hands slapped. Now why would the software say you cheated if you didn’t? Lots of reasons. One of the big problems with many plagiarism checkers is that when you scan your paper, it is added to their database of papers. If it passes the first time it is scanned, it then flunks the second time. If that is true of the software you use, and if the journal uses the same software, scanning before submission could guarantee that you would never be published. Scary thought. Another thought: the preferred plagiarism checkers are expensive. How fiscally responsible is it to pay the software company twice for the same task? I’m really having issues with the way this picture is shaping up.
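The self-matching trap is easy to see in a toy model. Real checkers use far more sophisticated fingerprinting than this, and the sketch below is my own illustration, not any vendor’s algorithm: score a submission by the fraction of its word n-grams that also appear in a stored document. If the checker has already stored your own paper from a previous scan, your resubmission matches it perfectly.

```python
def ngrams(text, n=3):
    """Lowercase word n-grams, a common unit for text-overlap comparison."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity_score(submission, stored_doc, n=3):
    """Fraction of the submission's n-grams that also appear in stored_doc."""
    sub = ngrams(submission, n)
    src = ngrams(stored_doc, n)
    if not sub:
        return 0.0
    return len(sub & src) / len(sub)

# If the database already holds your own earlier scan, you "flunk" with a
# perfect match the second time around:
paper = "we report results from a survey of graduate students"
print(similarity_score(paper, paper))  # 1.0, i.e. 100% similarity
```

The point of the toy is only the asymmetry: the score says nothing about *who* wrote the matching text first, which is exactly why a paper can pass on the first scan and flunk against itself on the second.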

Retraction Watch & Plagiarism

I’ve been tracking Retraction Watch for a while. It’s a fascinating blog that gathers information on examples of research that has been retracted or is headed in that direction, and why: plagiarism, data falsification, ethical malfeasance, misrepresentation, misinterpretation, errors in data analysis, etc. Another fascinating blog is NCBI ROFL, which focuses on research published in scholarly journals to which the thoughtful reaction is either “duh!” or “hunh?” Both of these raise the same question brought forward by Dan Atkins at CI Days about the perception, reputation, and credibility of science in the eyes of the general public. The next obvious question is what happens to funding for science if the general public widely believes that science lacks credibility? I don’t really want to find out, do you?

Now, I had already been working on this blog post when I found an article published TODAY by the authors of Retraction Watch.


Science publishing: The paper is not sacred: http://www.nature.com/nature/journal/v480/n7378/full/480449a.html

I’ll say this three times if you need me to: GO READ THIS PAPER. We already know that existing publication models are failing: failing researchers, failing libraries, failing readers, failing the process and progress of science and research. There are people thinking deeply about what models would work as alternatives. Last week Paul Courant was suggesting a shift to a POST-publication review model. Victoria Stodden is urging folk to create and use open data repositories and to provide scholarly credit for authoring code. I’m a big fan of the LANL XXX Archive, as it was once known, now simply arXiv, which has been active and tested in the field of physics, where it has worked quite nicely indeed for over 20 years now. We need more people thinking about these alternative publication models, their impact on scholarship, and the risks that imperil scholarship if the current model persists as the status quo. This article is another important voice in that process.

Alright now, I’m going to let you all go off and think about this a bit, hoping we’ll come back for more conversations. In the meantime, I said I found the answer about what software to use to check your article for accidental plagiarism before you submit. I’ll share some of those links. You think about if you really want to do this, and if plagiarism checkers are really the answer.

How to check your scientific paper for plagiarism, by George Lundberg, MD: http://www.kevinmd.com/blog/2011/02/check-scientific-paper-plagiarism.html

CrossCheck: http://www.crossref.org/crosscheck/index.html

iThenticate: http://www.ithenticate.com/
Understanding the Similarity Score: http://www.ithenticate.com/plagiarism-prevention-blog/bid/63534/CrossCheck-Plagiarism-Screening-Understanding-the-Similarity-Score

Elsevier: Researcher tools for evaluating trustworthiness: CrossCheck Plagiarism Screening and CrossMark: http://libraryconnect.elsevier.com/lcn/0801/lcn080108.html

Elsevier: Editors: Plagiarism detection: http://www.elsevier.com/wps/find/editorsinfo.editors/plagdetect

CrossCheck Plagiarism Screening: http://www.slideshare.net/CrossRef/crosscheck-plagiarism-screening

