I just finished my slides for my lightning talk on Monday for the Open Access Week events here.
The idea, and I’m kind of nervous about this, is to introduce a model I’ve been working on as part of a book chapter I’m writing (and which is past deadline). Of course, as soon as I sent the slides and posted them, I immediately saw things I ought to change. Sigh. Brief background follows. Also, this is a substantially different part of the story about the model than either what will be in the talk or in the book chapter.
For many years, ever since David Brin came to speak at UM on tour for his book The Transparent Society, I’ve been observing closely and engaging (less often) in the dialog on transparency and privacy in the open science movement and the e-patient movement. I must confess, this is my favorite of David’s books. All those wonderful science fiction books he’s written (and which I’ve mostly read), and me, I go and fall in love with his one non-fiction book.
I didn’t just like the book and lurk. I wanted to quote the book in the introduction to my book as a formative and supporting work. I tried very hard to find a quote in the book that distilled the essence of the book into a nice soundbite. I was unable to do so. I emailed David and enlisted his help in this. (He’s really a very nice man.) I finally found something that worked, but unfortunately, NOT in the book!
“What has worked — the foundation of our liberties — has always been openness and candor. Especially the ability to force the mighty out in the open where we can hold them accountable. All three of the greatest human inventions — science, democracy and free markets — depend on open information flows.”
Brin, David. “The Value — and Empowerment — of Common Citizens in an Age of Danger.” The Futurist. 2001. http://www.futurist.com/articles-archive/society-and-culture/value-and-empowerment/
That wasn’t enough for me, either. I’ve been completely fan-girl over David ever since, “stalking” him (politely and with his consent, only in public forums) via the many social media and online spaces he inhabits. (He has been kind enough to reciprocate in a small way by saying I’m an interesting person, and that he remembers who I am.) I read his blog, I follow him on Facebook and Twitter and Google Plus and Pinterest and ScoopIt and Youtube and … well, you get the idea.
Watching and listening to David’s informal thoughts allowed me to watch the evolution of his thinking in his self-defined role as “Mr. Transparency.” From time to time, he would post a link and launch into a brief diatribe about what made it wrong in some fashion. I was especially intrigued by those where he said (paraphrasing), “You might think Mr. Transparency would approve of this, BUT … ”
At the same time, I regularly participate in the Healthcare Social Media (#HCSM) Sunday evening Twitter chats, where a frequent topic of conversation is transparency and privacy and their implications in healthcare and health information for the public. I’ve also been engaging less often with the “Science 2.0” / “Open Science” community, where similar dialogs are taking place, both in the informal conversation spaces as well as in the published literature.
Patterns emerged regarding the issues and dynamics. Patients want the choice to be open with their own information, but they don’t like researchers taking what they’ve said or done, and then analysing it without their knowledge or consent. Researchers love scraping social media streams for data, but they don’t trust social media enough to go there and talk themselves. People like some of the results of limited transparency in Google and Facebook, but they don’t always trust what those companies do with the data. And, something people say over and over in ALL these spaces: private or transparent, open or closed, they want it to be both their choice and a two-way street. And they want to be able to change their minds.
I’ve been growing increasingly concerned with the conversations about transparency and privacy in the science and health environments that treat the topics as a linear dynamic, or worse, as polar opposites, as if you can only have one or the other, not both. One of the risks of a linear / polar expression of the concerns is that it lends itself strongly to the sort of cognitive bias in which people select only the information that suits their position, and become literally incapable of perceiving any value in the “opposite” side. You see this all the time in our elections in the United States. I marvel at the wisdom of our founders in designing government as a three-legged stool, and wish we had three strong political parties, instead of two. Anytime you have only two choices on the table, it becomes too easy to say they are black and white, good and evil, or other binary dynamics.
The psychological experts on cognitive bias have long known the solution to this. It is to introduce gray areas into the black and white dialog. That doesn’t mean it’s easy. You might do it by asking certain types of questions. You might get folks to role-play the other position. When I was on debate team, we would be required to argue both sides of a position, intelligently, with supporting research, and with passion and commitment. As a self-proclaimed “militant moderate,” I attempt to do this in many areas. To listen to different points of view, to ask people about their evidence, to explore that evidence, to look at the strengths and weaknesses of the arguments and position (of EACH position!). Trust me, anyone who thinks being a moderate is the lazy man’s way out of thinking doesn’t know a real moderate. Being a moderate is much harder work (IMHO) than choosing a side.
So, the PROTEI model is designed to introduce some subtlety (but not too much) into the dialog about transparency and privacy. Too little subtlety and you get the polarization; too much subtlety and you get paralysis, an inability to incorporate the various factors into decision-making and action. This is my initial draft. I’ll unpack it a bit in the eight-minute lightning talk on Monday, and much more in the book chapter, which (assuming I finish it!) is due to come out next spring.
PROTEI is a purposeful acronym, of course. It is the plural of Proteus, the shape-changing god of the sea, the wise “Old Man of the Sea” who could see the future, but who didn’t want to talk about it. Always changing, always reacting and responsive, never staying still, never staying the same. We can’t just preach to the choir about our chosen “side”. To make an effective use of transparency and privacy in our future, we need to make a concerted effort to understand the dynamics of choice and action, when one is most effective, when the other is, and acknowledge that both are important.
Actually, personally, I’d love to introduce a couple other contextual aspects, but I’m not sure how to fit them into the model. Both factors (transparency and privacy) have proponents claiming they are individually essential to both the creation of trust and the support of security. I believe they are both right. So … how does that work? LOL! Time to open this up for comments and other people’s ideas. My brain is getting tired.