Continued from Part Two
Continuing to walk through the Future of Visualization report with points 11-20.
Again, motion capture (i.e., MoCap) is already being done at the Dude. It is a big and complicated enough process that it doesn't yet make sense to duplicate it in stations all across campus. That said, new techniques are emerging for small-scale motion capture, and those could be replicated in libraries or research labs. What leaps first to mind are the tools that use the Kinect as a motion capture device.
I don’t know if there are small-scale mocap systems on the Athletic Campus and the Medical Campus, but there are certainly strong arguments for how they could benefit teaching, learning, training, and clinical care. If you want to see how this works, it is already being done in town at All Hands Active.
I’ve talked about 3d printing a fair amount in this blog, and you’re probably getting sick of it. I envision libraries providing access and support by offering:
– access to the printers;
– training on how to use them;
– “libraries” of open source or free patterns (e.g., Thingiverse);
– software and training for 3d modeling at varying levels of need, ability, and expertise.
3d printers are available in many places around town (most notably Maker Works and All Hands Active), but there are a lot of barriers to access: financial, skill-building, and administrative hurdles. The most accessible printers for the public are probably at All Hands Active, but even there you have to wait for a class to be taught, pay to take the class, pay for a membership ($50/month) …
On campus, there are 3d printers for mediated use at the 3D Lab, the Design Labs, and the Fab Lab, in addition to those in research labs, which are limited to lab staff. None of these are open access or staffed for drop-in support. If there are training guides for the public, would someone please point me to them?
Ideally, I’d like a makerspace supporting these activities and placed in the district library, but if that isn’t going to happen, then next best would be to get the maker movement thoroughly embedded in the campus libraries.
Speaking of 3d printing and GLAM (galleries, libraries, archives, and museums), check out the Art Institute of Chicago’s channel on Thingiverse.
3d scanners allow you to scan an object, and then print the same shape, perhaps at other scales (larger or smaller). 3d scanning and printing really need to be services that are offered together. It makes sense for museums to be scanning objects in their collections, but why not partner with libraries on managing and supporting access to the collections of files?
Working in healthcare, I could go on at length about the possibilities for 3d scanning and printing, from 3d printed jaws to bionic printed ears to replacement skulls to just printing bones in general! While printing bones is relatively simple these days, printing soft tissues is still a little tricky, although printing cartilage is already happening. Printing tissues and cartilage is usually called bioprinting rather than 3d printing, and it is being explored for printing new organs, with current tests focusing on the liver.
There are other, less serious uses that are still valuable as ways to build comfort with the skills and technology required. Tokyo’s FabCafe offers 3d scanning that lets people scan their likeness and then ‘print’ gummy candies shaped like themselves as gifts for a sweetheart. Or you could make chocolates of your face. I have a selfish reason to want access to 3d scanners and printers: my favorite camera has a broken battery case door, the company went out of business, and the parts aren’t available for love or money. I’d love to scan the door and print a replacement. If printing candy gets folks comfortable with the idea of 3d printing and the skills needed, I’d be thrilled to have them print candy in my library (if it were my choice).
3d scanners allow you to scan, or “sense,” the boundaries, shape, and depth of an object. Electronic sensors let you sense a variety of other measurable criteria, from light levels to temperature to sound vibrations. Most smartphones have at least a gyroscope, an accelerometer, and GPS as examples of sensors. Back up at the Dude (I keep mentioning them, don’t I?), Design Lab 1 has a variety of sensors for use in the facility as well as some for loan. They also support Electronic Lunch (blog), a group that meets throughout the school year to build skills working with these types of tools. This past spring, they were using custom-designed circuit boards (conceptually related to Arduino) with various lights and controllers to create wirelessly connected interactive lights for the Festifools parade in April. I had great fun learning to position capacitors and resistors as part of the circuit board assembly.
My favorite part was actually participating in the parade, and trying to grab photos that would show the interaction of the lights. Video does it better, though.
See #9, but another great place to put these might be the fabulous study and performance spaces in the Shapiro Undergraduate Library by Bert’s Café. These are useful for 3d projections of data visualizations, but also for virtual or immersive reality.
There has been quite a bit of exploration of haptics (tactile feedback from automated systems) through the School of Dentistry. Much of this has been done in collaboration with the 3D Lab at the Duderstadt. What the report calls “natural user interface equipment” seems to me to fall into the realm of haptics and augmented reality. The “natural user interface” concept actually refers to being able to move your body the way you normally do as part of interacting with the data visualization. If you’ve seen any of the recent Iron Man movies, when Tony Stark waves his hands to push the data visualization around, zooming in on some parts and erasing others, THAT is what they are talking about. Don’t we all want to be Tony Stark some days? A lot of the development is driven by immersive reality games and training systems, with much of it coming from the military. The Video Game Library on North Campus gives the library a toehold in this space, and of course, this is already being explored in the UM 3D Lab at the Duderstadt. Again, gaming or “fun” applications are a great way to get people building comfort and facility with the technology. Having this equipment in a library means students can learn the skills even if they can’t afford to buy it themselves, positioning themselves ahead of the curve, ready to go.
Augmented reality (AR) means pretty much exactly what it says. You are traveling around in the real world, but there is extra information added to, or augmenting, what you perceive, allowing you to interact with the world in richer or simply different ways. Again, gaming is where you see most of this, and shopping is a close second, but there is more. Here’s an example where the University of Wisconsin has a tour built into the campus space.
Here’s an app (Krikle, now gone) that tried to allow people to tag locations with tips that would pop up as you moved through the space. I had tried to tag the Reference Desk sign that said, “Ask us anything!”, but as you can see, the app never could quite get it right.
For years, I’ve been wanting to get the library involved in AR, imagining an app that allows you to walk through campus and discover:
– the history of the space you are in (by decade);
– who are the top cited researchers in the building;
– what are the main research interests in the building;
– what are the most used databases in the building;
– what are the currently funded grants in the space;
– most recent publications by faculty with offices in the location;
– and, of course, how to make donations to support the efforts there.
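The wish list above implies a simple data model: each campus location carries a set of information layers keyed by topic (history, researchers, databases, grants, publications). Here is a minimal, entirely hypothetical sketch in Python of how such an app might organize that data. All of the names (CampusLocation, LocationLayer) and the placeholder values are my own invention, not from any real app:

```python
from dataclasses import dataclass, field

@dataclass
class LocationLayer:
    """One topic's worth of information attached to a campus location."""
    topic: str                 # e.g., "history", "grants", "publications"
    entries: list = field(default_factory=list)

@dataclass
class CampusLocation:
    """A building or space the AR app can recognize."""
    name: str
    latitude: float            # placeholder coordinates below
    longitude: float
    layers: dict = field(default_factory=dict)  # topic -> LocationLayer

    def add_entry(self, topic, entry):
        layer = self.layers.setdefault(topic, LocationLayer(topic))
        layer.entries.append(entry)

    def lookup(self, topic):
        """What the app would overlay when you point your phone here."""
        layer = self.layers.get(topic)
        return layer.entries if layer else []

# Usage: tag a building with a (placeholder) bit of history, then query it.
library = CampusLocation("Example Library", 42.0, -83.0)
library.add_entry("history", "Placeholder: a note about this building's past.")
print(library.lookup("history"))
print(library.lookup("grants"))   # no layer yet, so an empty list
```

The point of the sketch is that each of the bulleted discovery ideas becomes just another layer, so the app grows by adding data rather than features.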
The most famous “head-mounted display” at the moment is Google Glass. I’ve usually heard these referred to as head-up displays, also known as HUDs. Virtual reality, augmented reality, immersive reality, and large stereoscopic displays all allow you to walk around in a digitally enhanced space, potentially shared with other people. A HUD gives you a more personal experience: similar kinds of immersiveness, but just for you, and not visible to others in the same space.
Back to the Duderstadt! And onward to Wolverine Island!
The Duderstadt, which has been affiliated with the library system since its inception, is home to the 3D Lab, the CAVE (cave automatic virtual environment), and the M.I.D.E.N. (Michigan Immersive Digital Experience Nexus). Basically, CAVE is a generic name for immersive 3D environments used in many places, while MIDEN is exclusive to U-M; they are essentially the same thing, although MIDEN is new and improved over what the CAVE offered previously. These are immersive virtual reality spaces, in the sense that you can walk into a space that projects a virtual reality and seem to interact with it through the head-mounted display. What you see isn’t what’s really there, but that is kind of the point. Some people compare it to a primitive Star Trek Holodeck.
Here’s one example from the CAVE.
Here’s an example of MIDEN.
Second Life is one example of an online virtual world space that allows you to have an immersive experience, although not as fully immersive as the CAVE/MIDEN. The University of Michigan Medical School has had space (or ‘real estate’) in Second Life for several years, with close partnership and support from the library.
44 people said they want shadow puppets in support of visualization. I pondered this, and did some digging, because, frankly, I did not have a clue what they meant. Here’s what I found.
“Therefore, we want to represent the points so that the distances between them change as little as possible. In general, this is called projection, the term coming from the idea that we will do the same thing to the data as you do when you make shadow puppets: We project a high dimensional object (such as your three-dimensional hands) onto a lower dimensional object (such as the two-dimensional wall).”
Shape of Data: Visualization and Projection: http://shapeofdata.wordpress.com/2013/04/16/visualization-and-projection/
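The projection in that quote is easy to demonstrate. Here is a minimal sketch in Python with NumPy (my own illustration, not from the Shape of Data post): it casts a handful of 3-D points onto the flat 2-D “wall” spanned by their two directions of greatest variance, which is exactly the shadow-puppet idea of flattening a high-dimensional object while changing the distances between points as little as possible.

```python
import numpy as np

# A small 3-D "hand": four points in three dimensions.
points = np.array([
    [1.0, 2.0, 0.5],
    [2.0, 1.0, 1.5],
    [3.0, 3.0, 0.0],
    [4.0, 2.5, 1.0],
])

# Center the data, then find the directions of greatest variance
# (the principal components) via the singular value decomposition.
centered = points - points.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)

# Project the 3-D points onto the 2-D "wall" spanned by the top two
# components -- the shadow of the data.
shadow = centered @ vt[:2].T
print(shadow.shape)  # four points, now in two dimensions
```

This particular choice of wall is principal component analysis; any other plane would also give a shadow, just one that distorts the distances more.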
Another option is that it might be the Kinect shadow puppets.
“Puppet Parade” Uses Kinect To Create High-Tech Shadow Puppets http://www.fastcodesign.com/1665864/puppet-parade-uses-kinect-to-create-high-tech-shadow-puppets#1
Even More Kinect Hacks: Shadow Puppets, 3D Mapping Robots: http://www.tested.com/tech/gaming/1368-even-more-kinect-hacks-shadow-puppets-3d-mapping-robots/
I’m not sure, so I’m hoping someone else knows and will explain this to me.
Continued in Part Four, bringing it all together.