Continued from Part One
Let me walk through the wishlist from the UM Future of Visualization Report with the eyes of a librarian. Well, a University of Michigan emerging technologies librarian. Points 1-10 will be discussed in this post, with 11-20 in Part Three.
This is reference. The UM Libraries already provide support for visualization tools through various locations: GIS is supported by the Clark Maps Library, and data visualization through the Spatial and Numeric Data (SAND) unit. My colleague Marci Brandenberg gives regular workshops on genomic data visualization tools. I’d be willing to wager that there are a whole lot of other librarians on campus providing support for other kinds of visualization tools.
This is curation and collection building. It could also involve knowledge creation, especially with partnerships to identify and fill knowledge gaps.
OK, I’m not an expert in this, and I haven’t read every word of the whole report, so I might be misunderstanding this part. What I think they are asking for are tools to bridge the gap between data and understanding. I’m thinking back to the days when I was a librarian providing support to Healthweb, a large multi-institution web development project. A big part of my job was doing the coding behind the web pages, designing the user interface, creating web-ready graphics, and supporting new librarians in building the same skills at their institutions. I didn’t have my name on specific topic pages, but was providing behind-the-scenes support. What I’m hearing when I read this is that there is a need for a similar sort of role with respect to mooshing raw data into the actual images or video that the researchers will interpret. Does every research lab need its own expert to massage their data into visual form? Perhaps it would be more cost-effective to have a data-massager position that is shared, or that provides training and support to staff in the labs? And shouldn’t that position be in SAND?
Knowledge Navigation Center. ‘Nuff said. They already teach workshops on Flash. HTML5 is one of my soapboxes, but I have a lot to learn about it. WebGL is going to be really important for the 3D printing I’ve been ranting about recently. If the library doesn’t already provide support for these, I’m sure it can, or could partner with other units on campus to do so.
CARMA. As an aside, the library I worked in at Northwestern also had multimedia and video support services located in the library. This is a natural fit.
This is being done by ARC, the entity formerly known as ORCI. ARC (Advanced Research Computing) is not in the library system, but has strong roots connecting it to both the UM School of Information and the UM Libraries. A collaboration or partnership is not at all unlikely.
I don’t think they mean lending devices here. I’m assuming what they mean here is people to provide support for data visualization tools on mobile devices; someone(s) who downloads and tests out new tools, and then teaches others the best practices and inside tips; coding geeks who can build new tools or custom tools if what is needed doesn’t exist; troubleshooters who can figure out mobile access challenges for dataviz tools built for desktops; people who are connected and aware of the wide range of resources in this arena. This is not a one person job, which is why I am so glad that there are already such phenomenal resources around campus.
Mobile Users Group: http://www.instructionblog.com/mobileusers/ #ummobile
Mobile Developer Community: http://mobileapps.umich.edu/devtoolkit/dev-user-community
U-M Mobile Developer Toolkit: http://mobileapps.umich.edu/devtoolkit
Mobile Apps Center: http://mobileapps.umich.edu/
Mobile Apps Dashboard: https://wiki.umms.med.umich.edu/display/ADMAPP/Mobile+Apps+Dashboard
The Med School is very active in this area, with Laurie Kirchmeier as the person who knows all.
For campus, Cassandra Carson takes point.
And then there are the hackathons.
I think we’re good with mobile; it is just a matter of bringing it all together, or being able to direct people appropriately. The Library has released a number of mobile apps of its own (my baby is the Plain Language Medical Dictionary), and its staff are team players within these communities.
See above. Librarians are already teaching workshops all over campus on visualization tools and techniques. Why not collect several of these into a DataViz 101 introduction, partner librarians as co-instructors in other courses around campus, build a library guide collecting support information for these courses, and so forth?
The Duderstadt Digital Media Commons is where most high-tech equipment like this has tended to collect. The Duderstadt (a.k.a. “The Dude”) has floated administratively back and forth from the library over the years, while remaining physically located in or co-located with library space. Personally, as much as I love the Dude, I’ve been arguing for years that we need equivalent access more equitably distributed across campus, with similar resources on Central Campus, the Medical Campus, the Athletics Campus, etc. All of these locations tend to have something along these lines, but independently developed and highly variable. I’d love to see the libraries take on this sort of responsibility. I’m envisioning a series of locations with a core set of high-end equipment for supporting these needs, and a small group of expert consultants who rotate from place to place on a regular schedule, creating training materials, consulting, and doing training workshops.
See #9. These are relatively large-scale devices that you interact with through touch, rather than through a keyboard, mouse, or other external device. Even more interesting, they allow you to use both hands, as well as gestures. A new small-scale device of this sort, the Leap Motion, has just become available to the public. The Leap is a controller that connects to your existing computer, giving you an alternate, gesture-based way to interface with it. I just heard that mine is on the way, hopefully shipping in the next few weeks. I’m rather excited, as this will give me a way to become comfortable with the interface before I am faced with the large table-sized variety. Just to clarify, gesture-based computing is different from multi-touch interfaces: with multi-touch, you do need to touch the surface, whereas with gesture-based computing you can wave your hands around and have things happen. Both are part of a movement toward interacting with our devices in a way that is more like how we interact with the world around us.
Where you most often see a large multi-touch surface is in the flat games for little kids on the floors of movie theaters and malls. The digital balls (or fish, or some other digital object) bounce around, and the kids bounce on the mat to redirect them. For visualization, the power is that you can interact with the computer OS, windows, data, images, and displays much more intuitively and quickly, because you can use both hands, or multiple fingers. There is a lot you can do, and people using tablet computers are already using multi-touch whenever what the device does changes depending on how many fingers are in the gesture. The standard “pinch” is a multi-touch command.
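For the curious, the pinch described above boils down to very simple math. Here is a minimal sketch in JavaScript (the function names are my own, not from any real toolkit): track the distance between two touch points, and report how that distance has changed since the gesture began.

```javascript
// Euclidean distance between two touch points, each an object like {x, y}.
function touchDistance(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Scale factor implied by a pinch: a value greater than 1 means the
// fingers spread apart (zoom in); less than 1 means they pinched
// together (zoom out).
function pinchScale(startA, startB, nowA, nowB) {
  return touchDistance(nowA, nowB) / touchDistance(startA, startB);
}
```

In a real browser page, you would compute this inside a `touchmove` event handler whenever two fingers are down, then apply the resulting scale to whatever image or dataset is on screen.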
My usual argument for having new tech in the libraries is to give people a chance to build the skills without having to pay for the devices. Isn’t that how libraries started with books? Not everyone could afford them, but we all needed to learn from them, so the library bought them and shared. Et voilà.
Continued in Part Three, with motion capture, 3D printing/scanning, Arduino boards, and more.