Opting-in to a relationship

My series of posts related to Facebook and The Washington Post has become very interesting today. Luke provided some insightful feedback on WaPo’s use of an iframe served up to provide a socially-connected experience, and in doing so he raised an interesting point. He said:

The opt-in question is interesting. Since no information is being transferred, it’s not clear that there’s anything to opt into. I think the social plugins work the same as myriad other plugins and ad networks around the internet, with the exception that it’s more obvious to the user what’s happening. If users needed to click a button in order to see personalized stories, then the vast majority wouldn’t get to experience the value that’s created.

Follow-up on Facebook and The Washington Post

I’ve been getting a lot of comments on my post about Facebook and The Washington Post, so I want to write a brief follow-up. I had Luke Shepard of Facebook present at the Gartner Catalyst conference last week, and through a bit of serendipity he found Tuesdaynight and my recent post. He kindly provided this clarification on what was going on:

The Washington Post still has no idea what your Facebook account is – the blue box is an iframe onto facebook.com, and it’s served entirely by Facebook. No information is transferred to the Wapo, and none of the rest of your activity on Wapo is linked back to Facebook, unless you explicitly choose to (by clicking the “Like” plugin, for example).
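Luke’s point comes down to how browsers scope cookies: the iframe’s request goes to facebook.com, so only Facebook’s cookies ride along, and WaPo’s servers never see them. Here’s a toy sketch of that cookie scoping (simplified domain matching; the cookie names and values are hypothetical, and real browsers also check path, Secure flags, and expiry):

```python
# Toy model of browser cookie scoping. Cookies are only attached to
# requests bound for their own domain (or a subdomain of it).

def cookies_for(request_host, jar):
    """Return only the cookies a browser would attach to a request
    for request_host: those whose domain matches or is a parent domain."""
    return {
        name: value
        for (domain, name), value in jar.items()
        if request_host == domain or request_host.endswith("." + domain)
    }

# The user's browser holds a Facebook session cookie and a WaPo cookie.
jar = {
    ("facebook.com", "session"): "alice-fb-session",
    ("washingtonpost.com", "visitor"): "anon-1234",
}

# The blue box's iframe request goes to facebook.com, so Facebook sees
# its own session cookie and can personalize the box...
print(cookies_for("www.facebook.com", jar))        # {'session': 'alice-fb-session'}

# ...while requests to washingtonpost.com carry no Facebook credentials.
print(cookies_for("www.washingtonpost.com", jar))  # {'visitor': 'anon-1234'}
```

The same-origin policy handles the other direction: the surrounding WaPo page can’t read the contents of a cross-origin iframe, so the personalization stays inside Facebook’s box.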

Facebook & Washington Post behavior I cannot explain

I was looking at some local news on The Washington Post’s website when I happened to notice, in the right gutter among the miscellaneous ads my brain usually filters out of my awareness, a blue box. In the blue box was a list of things my Facebook friends had recently “liked” on WaPo.

And this took me by surprise.

I opened a different browser and headed to Facebook. First, I checked my Application Settings to see if a Washington Post application had slipped into my profile. I have had this happen before - Gizmodo and some other sites appeared in my authorized application list without getting my authorization. See this article for more. There was no Washington Post application. Next up, I checked my Privacy Settings to verify once more that I had disabled Instant Personalization. And yes, it was still disabled.

Maturity and Metrics: A few thoughts from the IAPP’s Privacy Summit 2010

With a case of the volcano blues, I found myself at the International Association of Privacy Professionals’ Privacy Summit 2010. As I sat in sessions and caught up with customers at this, the largest gathering of its kind, I noticed an undercurrent to the overall conversation. This undercurrent sounded, in some sense, very similar to conversations I have with my identity management customers regarding maturity and metrics. Privacy has moved beyond the compliance officer and is receiving better representation in business operations. Examples of this include an increased presence of privacy practices in

Facebook privacy revisited: Privacy Mirror version 2

Facebook’s recent changes to its privacy system have been garnering a lot of attention, and not a lot of it is good. Both the EFF and Kaliya Hamlin (via ReadWriteWeb) have written up their takes on the matter and, all in all, I think they are decent assessments.

With all the supposed changes in Facebook’s privacy system, I decided to revisit my work with Privacy Mirror (you can catch the backstory: here and then here). Having retested PM with both friends and strangers, here’s what I’ve learned: Plus ça change, plus c’est la même chose.

Why does seeing your social activities again feel so uncomfortable?

Continuing Burton Group’s work on social networking and social media, I’ve been having various forms of this conversation over the last few weeks. First, I was at TechAmerica talking about social networks, privacy, and data breaches. Although the audio isn’t great, you can get the gist from this video. Then I was talking to the guys from InfoChimps ahead of their debut of some huge Twitter datasets. (The potential for data they have is pretty breathtaking.) Meanwhile, I am prepping a more formalized version of this talk for an upcoming OWASP event. With all this activity I thought I’d share a part of it.

On the whole, people have no problem using social networking tools. Whether for personal or for work reasons, more and more people are using a variety of tools to share and connect. In this regard, we can think of social tools as engines for disclosure. Although people are relatively comfortable making disclosures such as “had a great meal in Ottawa” or “have to burn the midnight oil to get this blog post done,” they feel uncomfortable when these disclosures appear in other places. The feeling is akin to reaching into your computer bag and finding a long-lost banana: a little foreign, a little gross, and a little strange. People often want to keep their social structures separate and, to use a highly technical word, feel oogy when they discover that something they have disclosed (an activity, a group they joined, a relationship they formed, a trip they have taken, etc.) is known by other people in other networks. There are three axes to this problem:

  • Audience
  • Content
  • Time

Oogy factor #1 - Audience: People often underestimate the size of the audience to whom they are disclosing information. What they think they are sharing with their team at work is, in fact, shared with the whole enterprise. Furthermore, there are cases where the true size of the audience is not known because of linkages between different social networking sites and the social graphs defined therein.

Oogy factor #2 - Content: Some disclosures are not obviously under people’s control. It’s obvious when I update my status in Yammer. It isn’t so obvious when I join a group and that fact appears in my work activity stream. This is unsettling: information is being disclosed about me, and yet I didn’t actively disclose it. (I fell prey to this one… ask me sometime - funny story.)

Oogy factor #3 - Time: Closely tied to Content, people don’t necessarily control when things are disclosed about them. Where social tools report on activity, it isn’t entirely obvious how a person controls such disclosures and when they happen.

People build mental models of the believed behavior of social tools along these three axes. If any one axis is shifted and the tool behaves in a manner contrary to those mental models, people feel uncomfortable. Although people are just establishing a comfort level with social tools from a consumer perspective, the enterprise is just taking its first teetering steps with them. There is definitely enterprise-grade ooginess ahead as enterprises grapple with the data breach and privacy implications of these tools. To that end, social tools have to provide meaningful ways for people, in the consumer setting, to adjust tool behavior to meet their own mental models, and for enterprises to accommodate wider regulatory and data protection concerns.

I’m going to be giving a longer version of this as a presentation to an OWASP and Tivoli users group meeting in December. If you are in the Hartford area, join us. You can register here. (Cross-posted from Burton Group’s Identity Blog.)

But it’s such a lovely panopticon, I'd hate to have to return it

Anyone else not surprised by the recent findings from this internal report from the London police force? The net of it is that closed-circuit television (CCTV) cameras do little to solve crimes. It seems the success rate is one solved crime per 1,000 cameras. Just a few million more cameras and we’ve got the crime thing licked, eh? Questions that I’d like to see answered are:

  • How many crimes were not committed because of the presence of a CCTV camera?
  • How many crimes were committed in a different location because of the presence of a CCTV camera?

The first question is impossible to answer. The second can be answered, and a UC Berkeley study of the efficacy of the city of San Francisco’s CCTV cameras has been released. You can read about the results here and here. The San Francisco study shows that the cameras move crime from areas near cameras to areas away from them - no big surprise there. As I have mentioned previously on Tuesdaynight, trading the feeling of safety (without an actual increase in safety) for an invasive, always-on, 3rd-party-accessible video monitoring presence is a choice that leads to a far more paranoid society: one less willing to engage in social behavior and less like the kind of society in which we want to participate.

The challenge in fixing Facebook’s underlying privacy problems

A few Facebook hacks came across my desk this week. The first set are so-called “rogue” applications, which do the tediously predictable grab of user information followed by the equally tediously predictable spam-a-palooza. Calling such applications “rogue” is misleading: they didn’t start out okay and turn evil somewhere along the way. These apps were built to cause trouble - they are malware. Facebook has a healthy set of malware apps, and the number is growing every day. You can easily spot affected Facebook users by their status messages - “Sorry for the email - my Facebook got a virus.”

Looking beyond the Privacy Mirror

Over the last two weeks, I have been using my homegrown Facebook application, Privacy Mirror, as a means of experimenting with Facebook’s privacy settings. Although Facebook provides a nice interface for viewing your profile through your friends’ eyes, it does not do the same for applications. I built Privacy Mirror in the hopes of learning what 3rd party application developers can see of my profile by way of my friends’ use of applications. I have yet to speak with representatives of Facebook to confirm my findings, but I am confident in them.

Imagine that Alice and Bob are friends on Facebook. Alice decides to add a new application, called App X, to her profile. (For clarity’s sake, by “add” I mean that she authorizes the application to see her profile. Examples of Facebook applications include Polls, Friend Wheel, Movies, etc.) At this point, App X can see information in Alice’s profile. App X can also see that Alice is friends with Bob; in fact, App X can see information in Bob’s profile.

Bob can limit how much information about him is available to applications that his friends add to their profiles through his Application Privacy settings. In this case, let’s imagine that Bob has only allowed 3rd party applications to see his profile picture and profile status.

After a while, Alice tells Bob about App X. He thinks it sounds cool and adds it to his profile. At this point, if App X looks at Bob’s profile via Alice’s profile, it will see not only his profile picture and status but also his education history, hometown info, activities, and movies. That is significantly more than what he authorized in his Application Privacy settings.

What is going on here? It appears that if Alice and Bob have both authorized the same application, that application no longer respects either user’s Application Privacy settings. Instead, it respects the Profile Privacy settings of each person.
In essence, App X acts (from a privacy-settings point of view) as if it were a friend of Alice and Bob and not a third-party application.

Putting on my privacy commissioner hat for a moment, I’d want to analyze this situation from a consent and disclosure perspective. When Bob confirms his friendship with Alice he is, in a sense, opting in to a relationship with her. This opt-in indicates that he is willing to disclose certain information to Alice. Bob can control what information is disclosed to Alice through his Profile Privacy settings, and this allows him to mitigate privacy concerns he has in terms of his relationship with Alice.

What Bob isn’t consenting to (and is not opting in to) is a relationship with Alice’s applications. Bob is completely unaware of which applications Alice currently has or will have in the future. This is an asymmetry of relationship. It is entirely possible that Alice and Bob will have applications in common, and once they do, the amount of profile information disclosed (by both of them) to an application can radically change - and change without notice to either Alice or Bob.

Furthermore, it is unclear which Facebook privacy settings Bob needs to manipulate to control what Alice’s applications can learn about him. This lack of clarity is harmful. It shouldn’t take a few hundred lines of PHP, three debuggers, and an engineering degree to figure out how privacy controls work. This lack of clarity robs Facebook users of the opportunity to make meaningful and informed choices about their privacy.

This experiment started after I read the Canadian Privacy Commissioner’s report of findings on privacy complaints brought against Facebook. That report raised significant concerns about third-party applications and their access to profile information.
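The apparent rule can be captured in a small model. To be clear, this is my reconstruction of the behavior Privacy Mirror surfaced, not Facebook’s actual code; the class names and field sets are illustrative:

```python
# Toy model of the apparent rule: an app reached through a friend sees the
# subject's Application Privacy fields, unless the subject has ALSO
# authorized that app - in which case the broader Profile Privacy applies.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    profile_privacy: set       # fields friends may see
    application_privacy: set   # fields friends' apps may see

@dataclass
class App:
    name: str
    users: set = field(default_factory=set)  # names of users who added it

def fields_visible_to_app(app, subject):
    """What `app`, reached via one of subject's friends, can see of subject."""
    if subject.name in app.users:
        # Subject has authorized the app too: it is treated like a friend,
        # so the (usually broader) Profile Privacy settings apply.
        return subject.profile_privacy
    # Otherwise the stricter Application Privacy settings apply.
    return subject.application_privacy

bob = User(
    "Bob",
    profile_privacy={"picture", "status", "education",
                     "hometown", "activities", "movies"},
    application_privacy={"picture", "status"},
)
app_x = App("App X", users={"Alice"})

print(sorted(fields_visible_to_app(app_x, bob)))  # just picture and status

app_x.users.add("Bob")                            # Bob adds App X too
print(sorted(fields_visible_to_app(app_x, bob)))  # the full profile-privacy set
```

The unsettling part is the state change in the last two lines: nothing about Bob’s settings changed, yet the act of adding the app silently widened what it could see of him.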
As of the beginning of Catalyst (today!), Facebook has about 15 days remaining to respond to the Canadian Privacy Commissioner’s office. I hope that this issue of third-party applications and privacy controls is meaningfully addressed in Facebook’s response. (Cross-posted with Burton Group’s Identity Blog.)

Further findings from the Privacy Mirror experiment

I find that I rely on my debugging skills in almost every aspect of my life: cooking, writing, martial arts, photography… And it helps when you’ve got friends who are good debuggers as well. In this case, my friends lent a hand helping me figure out what I was seeing in my Privacy Mirror. The following is a snapshot of the Application Privacy settings I have set in Facebook:

[Screenshot: Facebook Application Privacy Settings]

Given these settings, I would expect that the Facebook APIs would report the following to a 3rd party application developer: