Tag Archives: higher education

Teach Feast: Engaged Learning Through Internships, Badges, e-Portfolios, & Storytelling


Teach Feast 2015: integrative tools for engagement at Michigan http://www.slideshare.net/umhealthscienceslibraries/teach-feast-2015-integrative-tools-for-engagement-at-michigan

I was invited as a last-minute fill-in to be part of a panel at the annual Teach Feast festival of learning technologies at the University of Michigan. It was a great learning experience for me, and I hope also for the participants. I learned more about how engaged learning can be a continuum, from relatively small or subtle changes to traditional instruction all the way to going whole hog: uprooting a student from their culture and context and positioning them in a new space for a different type of learning experience. Engagement can originate with either student or teacher (or both); it can be collaborative and/or competitive and/or creative. The one part that seemed foundational to all the strategies was reflection, turning your gaze both inward and outward.

My part was on digital storytelling. I’m a big fan of digital storytelling, in case you didn’t know. I like to test new online storytelling tools and see how they work to support different kinds of stories. My current fascination with comics grows out of my larger enthusiasm for storytelling. I argue that storytelling is part of every academic discipline. It’s obvious that the humanities dissolve if you remove stories. What is history without story? But when it comes to the sciences, people are more likely to have trouble seeing the story as part of the academic process. Of course, case studies in health care, sure, those are stories. And psychology and psychiatry, they don’t work without stories. Social work, yeah, of course. But physics? And engineering? Maybe if you aren’t in the field you have trouble seeing the story, but if you really think about it, every research paper, every structured abstract is a frame for a story. Every piece of science has a backstory, a motivation, a reason someone wanted to know THIS. Even mathematics. Even when camouflaged, the stories of science are implied. What changes isn’t the presence of the story, but how we tell it and the tools we use to carry it.

The advantages of digital storytelling are similar to those of printed stories: it is portable, inclusive, and persistent, and it reaches a broader audience. One of the great lessons from education is that communication is never one-size-fits-all. The more ways you present important content, the more people will be able to understand it and engage with it. The greater the variety of media you use to tell a story, the more people will hear it. I’m happy to help people find and tell stories, especially science stories. There are a lot of different types of tools and resources in the slides. If I can, I’ll dig into specific ones here some other time. If there is something in particular from the slides that you’d like me to talk about, say so in the comments, and I’ll make it a priority.

Infographic of the Week: Learning in the Digital Age—“I Was Pleasantly Surprised”

Infographics in research articles?
Jeffrey Bartholet. Student Poll: “I Was Pleasantly Surprised.” Special Report: Learning In The Digital Age. Scientific American (2013) 309:72-73.
http://www.nature.com/scientificamerican/journal/v309/n2/full/scientificamerican0813-72.html

I was indeed surprised when I stumbled on this research article, went to read it, noticed the image thumbnail, and thought, “Oh, my goodness, that looks like an infographic!” And it was! We’ve been talking about infographics a lot lately. Our library is discussing the roles we could play as librarians in supporting infographic development for our institution and faculty. There were multiple presentations about infographics at last month’s Medical Library Association Annual Meeting. Also in the past couple of months I’ve attended a few presentations on using infographics to promote research findings, for marketing, and for health literacy outreach. But I had not noticed that infographics had crept into the actual published and printed versions of scholarly research articles!

This one was about MOOCs, which is another interest. I’ve taken (read “lurked in”) several MOOCs, without ever completing one. I have learned useful skills relevant to my job from a MOOC, but when push came to shove between the MOOC and my real life, real life won. Or just feeling tired won. This summer is different. My son and I are taking a MOOC together, watching the videos together, discussing the assignments while we do them. I’m going to be really embarrassed if my son finishes and I don’t. I’ll be even MORE embarrassed if I bomb out and my son takes that as an excuse for him to quit. So I was very interested in this piece of research on how MOOCs are used in science education.

“One in five science students surveyed by Nature and Scientific American has participated in a MOOC—and most would do so again”

It’s worth reading the whole short article. Here are just a couple of small snippets highlighting key points.

PRO:
Stefan Kühn: “I started the course because of personal interest … and was pleasantly surprised when I realized I was using it for my write-ups as well.”

CON:
Kathleen Nicoll: “Although some classes try to mimic research experiences in a virtual lab, that cannot substitute ‘for smelling formaldehyde or seeing something almost explode in your face and having to react to that.'”

PRO:
Kathleen Nicoll: “One of the huge upsides is that MOOCs can reach everyone [with a computer and Internet]—people who are differently abled, people behind bars in prison.”

CON:
Jeffrey Bartholet: “Because failure is cost-free in a MOOC, the basic human tendency toward procrastination and sloth are stronger than in traditional classes.”

PRO:
Shannon Bohle: “I like to share with my friends that I finished the course and hear everyone say, ‘Oh, you’re so brilliant. Kudos to you!'”

It also didn’t hurt my interest to hear which specific courses these students and faculty found useful. I might actually want to take the one recommended by Kühn, Think Again. The infographic itself also contained some surprises. I didn’t realize that any universities were requiring MOOC participation for their residential students! Or maybe I’m misinterpreting that question? It made sense that people find superior career value in taking classes face-to-face; it’s hard to make a connection in a MOOC that could turn into a person willing to write a letter of reference for you. But it was surprising how nearly equal the perception of learning value was! Here’s the infographic – what surprises you?

MOOCs: I Was Pleasantly Surprised
Image source: Scientific American

Tenure and Citation Influence Tracking Tools – Yea or Nay?

I was just asked a question by a faculty member about using Web of Science citation tracking to prepare for tenure review. While I would never, at this point in time, advise anyone not to look, or not to make sure they have these figures in hand, the situation has gotten more complicated in recent years. For that reason, I wanted to share a lightly edited portion of my response to this faculty member, as it may be of interest to others.


What we did last time was to search Web of Science for your articles and citations to them, and you helped identify which of the articles were yours rather than those of someone else with a similar name. You also helped identify citations that were variations on the correct citation but still referred to your article.

Here are a couple of guides for how to do the sorts of things we tried back in the day.

UMich: Citation Analysis Guide (2013): http://guides.lib.umich.edu/content.php?pid=98218

Tufts: Tools for Tenure-Track Faculty (2011): http://researchguides.library.tufts.edu/content.php?pid=158890&sid=1344528

Since the mid-2000s, more tools have appeared for checking this type of information, along with new formulas that more accurately calculate an author’s influence and impact. Some of the most important to know about are the h-index, in particular, and altmetrics, in general; a small sketch of how the h-index works follows the links below.

Marnett, Alan. H-Index: What It Is and How to Find Yours. (2010) http://www.benchfly.com/blog/h-index-what-it-is-and-how-to-find-yours/

Altmetrics, a manifesto: http://altmetrics.org/manifesto/
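
Since the h-index is the one number faculty are most likely to be asked about, here is a minimal sketch of the calculation itself, just to take the mystery out of it: an author has an h-index of h when h of their papers have each been cited at least h times. The function name and citation counts below are my own invented example, not taken from any of the guides above.

def h_index(citations):
    """Return the h-index: the largest h such that at least
    h papers have been cited at least h times each."""
    # Rank the papers from most cited to least cited.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break  # every later paper has fewer citations than its rank
    return h

# Hypothetical example: six papers cited 25, 8, 5, 3, 3, and 1 times.
print(h_index([25, 8, 5, 3, 3, 1]))  # prints 3

In the example, three papers have at least three citations each, but there are not four papers with four citations apiece, so the h-index is 3. Databases like Web of Science report this for you, but it helps to know what the number actually measures.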

FYI, there is also an emerging conversation in science to the effect that the “Citations + Impact Factor” approach to tenure review is dysfunctional and causing increasing problems in the practice, quality, and credibility of science. Here are a few interesting pieces I’ve read on this recently, in chronological order.

2000

McGarty, C. (2000) The citation impact factor in social psychology: A bad statistic that encourages bad science? Current Research in Social Psychology, 5 (1). pp. 1-16. http://www.uiowa.edu/~grpproc/crisp/crisp.5.1.htm

SNIP:

“In conclusion, it is worth asking how social psychology found itself to be using a poor measure for assessing a matter that is so important to so many of its practitioners. Haslam and McGarty (1998, in press) have argued that scientific practices can be understood as a process of uncertainty management. In psychology uncertainty is customarily dealt with by measuring statistical uncertainty and reducing methodological [uncertainty]. Various other forms of uncertainty are frequently banished from formal consideration in the pages of journals and textbooks. Thus, uncertainty that arises from controversial questions involving political and societal matters which might be embarrassing for the field are frequently swept aside. The impact factor is attractive because its seemingly objective nature and the independent status of the statistic’s author (the Institute for Scientific Information) prevents many doubts from ever being formed (thereby banishing uncertainty). The two year impact factor clearly favors journals which publish work by authors who cite their own forthcoming work and who are geographically situated to make their work readily available in preprint form. The measure punishes journals which publish the work of authors who do not have membership of these invisible colleges and is virtually incapable of detecting genuine impact. It is not just a bad measure, it is an invitation to do bad science.”
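
For anyone who has not run into the definition, the “two year impact factor” McGarty is criticizing is the standard calculation: the citations a journal’s recent items receive in one year, divided by the number of citable items the journal published in the previous two years. In LaTeX notation (the symbols are mine, not McGarty’s), where C_Y(X) is the number of citations received in year Y by items the journal published in year X, and N_X is the number of citable items published in year X:

\[
\mathrm{JIF}_Y = \frac{C_Y(Y-1) + C_Y(Y-2)}{N_{Y-1} + N_{Y-2}}
\]

A 2000 impact factor of 3, say, would mean the journal’s 1998 and 1999 items were cited an average of three times each during 2000, which is exactly why self-citation of forthcoming work and early preprint circulation can move the number so easily.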

2009

Goldacre, Ben. Funding and findings: the impact factor. The Guardian, Friday 13 February 2009. http://www.guardian.co.uk/commentisfree/2009/feb/14/bad-science-medical-research

SNIP:

“But Tom Jefferson and colleagues looked, for the first time, at where studies are published. Academics measure the eminence of a journal by its “impact factor”: an indicator of how commonly, on average, research papers in that journal go on to be referred to by other research papers. The average journal impact factor for the 92 government-funded studies was 3.74; for the 52 studies wholly or partly funded by industry, the average impact factor was 8.78. Studies funded by the pharmaceutical industry are massively more likely to get into the bigger, more respected journals.
That’s interesting, because there is no explanation for it. There was no difference in methodological rigour, or quality, between the government-funded research and the industry-funded research. There was no difference in the size of the samples used in the studies. And there’s no difference in where people submit their articles: everybody wants to get into a big famous journal, and everybody tries their arm at it.”

2010

Werner, Yehudah L. The Aspiration to be Good is Bad: The ‘Impact Factor’ Hurts both Science and Society. International Journal of Science in Society 2010 1(1):99-106. http://ijy.cgpublisher.com/product/pub.187/prod.14

ABSTRACT:

“The fruitful aspiration of researchers to be classified as ‘good’ has been mounting, driven by the quantification of research quality and especially by the impact factor (IF). This paper briefly reviews examples of the many known or hypothetical fringe evils. Many universities now evaluate academics by the IF of the journals in which they publish. Because journals, fighting for their IFs, now select papers for brevity and for forecast of being quoted, this mal-affects science in several ways: (1) Much information remains unpublished. (2) Some projects are published in splinters. (3) Scientists avoid unpopular subjects. (4) Innovations are suppressed. (5) Small research fields are being deserted. (6) Active authors recruit inactive coauthors whose name could land the paper with a higher-IF journal, which generates assorted complications. (7) Journals striving to elevate their IF adorn their advisory boards with dignitaries who do not endeavor to help the journal. The IF also shortchanges society more directly, through the ‘quality’-driven choice of research subjects: (1) Academics concentrate on ideas and theories and avoid publishing facts of potential service to society. Thus biologists discuss how species arise, rather than describe new species to enable their conservation. (2) Professors, fighting for their resumés, regard academic neophytes as paper-manufacturing manpower and hinder their developing intellectual independence. Finally, some potential partial remedies are proposed.”

Neylon, Cameron. Warning: Misusing the journal impact factor can damage your science! 6 September 2010. http://cameronneylon.net/blog/warning-misusing-the-journal-impact-factor-can-damage-your-science/

SNIP:

“It seems bizarre that we are still having this discussion. Thomson-Reuters say that the JIF shouldn’t be used for judging individual researchers; Eugene Garfield, the man who invented the JIF, has consistently said it should never be used to judge individual researchers. Even a cursory look at the basic statistics should tell any half-competent scientist with an ounce of quantitative analysis in their bones that the Impact Factor of journals in which a given researcher publishes tells you nothing whatsoever about the quality of their work.”

2011

Grant, Richard P. Bye bye, Impact Factor… Faculty of 1000 24 October 2011. http://blog.f1000.com/2011/10/24/bye-bye-impact-factor/

SNIP:

The UK Government hands out money to its higher education funding bodies, which distribute that money according to the results of the Research Excellence Framework (REF), which will be completed in 2014. Traditionally, the predecessor of the REF (the Research Assessment Exercise) measured ‘impact’ of research by counting numbers of publications in high impact factor journals. Mr Willetts seems to be saying that the Journal Impact Factor will not play a role in the REF:
“Individual universities may have a different perspective on the journals you should have published in when it comes to promotion and recruitment, but the REF process makes no such judgements.”

2012

Vanclay, Jerome K. Impact Factor: outdated artefact or stepping-stone to journal certification? 15 Jan 2012. http://arxiv.org/abs/1201.3076

SNIP:

“However, there are increasing concerns that the impact factor is being used inappropriately and in ways not originally envisaged (Garfield 1996, Adler et al 2008). These concerns are becoming a crescendo, as the number of papers has increased exponentially (figure 1), reflecting the contradiction that editors celebrate any increase in their index, whilst more thoughtful analyses lament the inadequacies of the impact factor and its failure to fully utilize the potential of modern computing and bibliometric sciences. Although fit-for-purpose in the mid 20th century, the impact factor has outlived its usefulness. Has it become, like phrenology, a pseudo-science from a former time?”

Lozano, George A.; Lariviere, Vincent; Gingras, Yves. The weakening relationship between the Impact Factor and papers’ citations in the digital age. 19 May 2012. http://arxiv.org/abs/1205.4328

SNIP:

“Third, and even more troubling, is the 3-step approach of using the IF to infer journal quality, extend it to the papers therein, and then use it to evaluate researchers. Our data shows that the high IF journals are losing their stronghold as the sole repositories of high quality papers, so there is no legitimate basis for extending the IF of a journal to its papers, and much less to individual researchers. This is congruent with the finding that over the past decade in economics, the proportion of papers in the top journals produced by people from the top departments has been decreasing (Ellison, 2011). Moreover, given that researchers can be evaluated using a variety of other criteria and bibliometric indicators (e.g., Averch, 1989; Leydesdorff & Bornmann, 2011; Lozano, 2010; Lundberg, 2007; Põder, 2010), evaluating researchers by simply looking at the IFs of the journals in which they publish is both naive and uninformative.”

Lozano, George. The demise of the Impact Factor: The strength of the relationship between citation rates and IF is down to levels last seen 40 years ago. Impact of Social Sciences June 8, 2012.

SNIP:

“If the pattern continues, the usefulness of the IF will continue to decline, which will have profound implications for science and science publishing. For instance, in their effort to attract high-quality papers, journals might have to shift their attention away from their IFs and instead focus on other issues, such as increasing online availability, decreasing publication costs while improving post-acceptance production assistance, and ensuring a fast, fair and professional review process.”

Curry, Stephen. Sick of Impact Factors. Reciprocal Space August 13, 2012. http://occamstypewriter.org/scurry/2012/08/13/sick-of-impact-factors/

SNIP:

“But the real problem started when impact factors began to be applied to papers and to people, a development that Garfield never anticipated. I can’t trace the precise origin of the growth but it has become a cancer that can no longer be ignored. The malady seems to particularly afflict researchers in science, technology and medicine who, astonishingly for a group that prizes its intelligence, have acquired a dependency on a valuation system that is grounded in falsity. We spend our lives fretting about how high an impact factor we can attach to our published research because it has become such an important determinant in the award of the grants and promotions needed to advance a career. We submit to time-wasting and demoralising rounds of manuscript rejection, retarding the progress of science in the chase for a false measure of prestige.”