On Capitals and Eating: A short trip report from Ottawa

There are great cities that happen to be national capitals. London and Paris are such places: great food, great culture, great sights - a good time is had by all. Then there are national capitals that want to be great cities. Washington and Ottawa fall into this category. Neither has the vibe, density, or scene that London and Paris have, but they are trying. (And this is where my mother-in-law would add, “bless their hearts.”) I happened to be in Ottawa a few weeks back and had some kick-ass meals.

First up, Murray Street - a charcuterie and wine bar. They bring much respect to meats - all of them. Anywhere that has an offal of the day as well as a whole pig head on the menu gets my vote any day of the week. It is a small place with a great feel. Highly recommended.

Next up - The Whalesbone Oyster House. Go. There. Now. Imagine a tiny restaurant embedded in an old bike shop. Forget open kitchens - the hot stations are actually in the seating area, and the night we were there the a/c wasn’t working, forcing the staff into tank tops and shorts. Whalesbone is, as the name implies, an oyster and fish joint, and it takes its ingredients seriously. If the amazing fish, oysters, and drinks don’t do it for you, then try this: when was the last time you went to a bar or restaurant where the music was provided by records? Two huge stacks of records sit behind the bar, from which Ray Charles, Abba, and Sam & Dave were pulled the night we were there. The staff have been friends since high school, and you can feel their love for the place in everything they do. Again - go there now!

Ottawa may be a somewhat sleepy capital, but there are definitely pockets of serious yum and fun to be had - I’ll be waiting until spring to head back for oysters and offal.

Why does seeing your social activities again feel so uncomfortable?

Continuing Burton Group’s work on social networking and social media, I’ve been having various forms of this conversation over the last few weeks. First, I was at TechAmerica talking about social networks, privacy, and data breaches. Although the audio isn’t great, you can get the gist from this video. Then I was talking to the guys from InfoChimps ahead of their debut of some huge Twitter datasets. (The potential of the data they have is pretty breathtaking.) Meanwhile, I am prepping a more formalized version of this talk for an upcoming OWASP event. With all this activity, I thought I’d share a part of it.

On the whole, people have no problem using social networking tools. Whether for personal or work reasons, more and more people are using a variety of tools to share and connect. In this regard, we can think of social tools as engines for disclosure. Although people are relatively comfortable making disclosures such as “had a great meal in Ottawa” or “have to burn the midnight oil to get this blog post done,” people feel uncomfortable when these disclosures appear in other places. The feeling is akin to reaching into your computer bag and finding a long-lost banana: a little foreign, a little gross, and a little strange. People often want to keep their social structures separate and, to use a highly technical word, people feel oogy when they discover that something they have disclosed (an activity, a group they joined, a relationship they formed, a trip they took, etc.) is known by other people in other networks.

There are three axes to this problem:

  • Audience
  • Content
  • Time

Oogy factor #1 - Audience - People often underestimate the size of the audience to whom they are disclosing information. What they think they are sharing with their team at work is in fact shared with the enterprise.
Furthermore, there are cases where the true size of the audience is not known because of linkages between different social networking sites and the social graphs defined therein.

Oogy factor #2 - Content - Some disclosures are not obviously under people’s control. It’s obvious when I update my status in Yammer. It isn’t so obvious when I join a group and that fact appears in my work activity stream. This is unsettling: information is being disclosed about me, and yet I didn’t actively disclose it. (I fell prey to this one… ask me sometime - funny story.)

Oogy factor #3 - Time - Closely tied to Content, people don’t necessarily have control over when things are disclosed about them. Where social tools are reporting on activity, it isn’t entirely obvious how a person controls such disclosures and when they happen.

People build mental models of the believed behavior of social tools along these three axes. If any one axis is shifted and the tool behaves in a manner contrary to those mental models, people feel uncomfortable. Although people are just establishing a comfort level with social tools from a consumer perspective, the enterprise is taking its first teetering steps with them. There is definitely enterprise-grade ooginess ahead as enterprises grapple with the data breach and privacy implications of these tools. To that end, social tools have to provide meaningful ways for people, in the consumer setting, to adjust tool behavior to match their own mental models, and for enterprises to accommodate wider regulatory and data protection concerns.

I’m going to be giving a longer version of this as a presentation to an OWASP and Tivoli users group meeting in December. If you are in the Hartford area, join us. You can register here. (Cross-posted from Burton Group’s Identity Blog.)

Hopes and concerns for identity

A friend in the industry recently asked me for my thoughts on OpenID, InfoCards, and the US federal government’s work to consume non-government-issued credentials. Letting the question rattle around in my head for a while, here’s what I’ve got so far.

My hope is that the overall ICAM initiative is successful - not because I have been eagerly waiting to interact with the federal government using some form of authenticated credential - but because we (citizens, enterprises, and government) are at a pivotal moment in the history of the web. With the US government working with both the OpenID and InfoCard Foundations, there exists an opportunity to change how individuals interact with large organizations, both public and private. For the first time, individuals would be able to (even be encouraged to) interact with a large organization (such as the US federal government) using an identity asserted not by the large organization but by the individual. In this case, the State is no longer the sole provider of identity. This breaks the monopoly that the State has had on credentials and is indicative of the future to come. But there is a long road to walk before getting there.

There are numerous concerns with these plans. Among them are notable security concerns, especially with OpenID, to which the identity community is not blind. These are not my primary concerns. My primary concern is with the establishment of standard user behavior that could prolong existing problems. Today, after decades of enterprise training and a decade of consumer training, people naturally expect to see two text boxes on web sites. One is for their username, and the one with the little stars is for their password. This behavior is ingrained. Changing it is no small feat - just ask the OpenID and InfoCard groups. But it is a change that must occur to normalize people using something stronger than usernames and passwords to authenticate themselves.
My concern is that the behavior being established as a norm - the use of either an identity selector or some other user interface - will become the username/password of the next generation. This isn’t a hypothetical problem; the writing is already on the wall. Currently, OpenID will only be accepted for low-value transactions with the government, known as Level of Assurance 1 (LOA1). Activities like filing tax returns require far greater assurance that the person is who they claim to be and thus require a Level of Assurance 3 identifier. And there is the problem. The way people use an LOA3 credential may be very different from how they use an LOA1 credential. If we, as an industry, normalize user behavior that meets LOA1 needs but not LOA3, we are training in behavior that will have to be untrained in the near future.

What the government and its partners are on the path to doing is effecting real cultural change. This kind of change doesn’t happen often and is hard to do - and especially hard to undo. I definitely want a future in which I can assert my own identity without validation from the State, but I am very willing to wait for that future to ensure that the behavior the industry normalizes is one that will work for generations to come. (Cross-posted from Burton Group’s Identity blog.)

2 blogs with promise

Two friends of mine have finally decided to get blogging. Yes, I know that blogging seems passé to some of you out there, but it still has its purpose. First up - Tuesdaynight’s very own Josh Nanberg has launched his eponymous blog. Josh is one of the few people I know who can

  • break down political messaging techniques into something I can understand
  • cook a four-course meal in a one-course kitchen
  • reference deeply obscure music lyrics

all at the same time. Next up - my friend and mentor, Rob Ciampa, has decided to divert his seemingly boundless energies into a bit of blogging. Besides having an encyclopedic knowledge of French wine, a photographic memory for menus, and a typical Boston potty-mouth, Rob is one of the best corporate marketers and channel managers I have ever met. Admittedly, neither blog has much content yet, but I know these guys, and I know what’s to come. You’ll want to know it too.

But it's such a lovely panopticon, I'd hate to have to return it

Anyone else not surprised by the recent findings from this internal report from the London police force? The net of it is that closed-circuit television (CCTV) cameras do little to solve crimes. It seems that the success rate is 1,000 cameras per solved crime. Just a few million more cameras and we’ve got the crime thing licked, eh? Questions that I’d like to see answered:

  • How many crimes were not committed because of the presence of a CCTV camera?
  • How many crimes were committed in a different location because of the presence of a CCTV camera?

The first question is impossible to answer. The second can be answered, and a UC Berkeley study of the city of San Francisco’s CCTV camera efficacy has been released. You can read about the results here and here. The San Francisco study shows the cameras move crime from areas near cameras to areas away from cameras - no big surprise there. As I have mentioned previously on Tuesdaynight, trading the feeling of safety (without an actual increase in safety) for an invasive, always-on, 3rd-party-accessible video monitoring presence is a choice that leads to a far more paranoid society - one less willing to engage in social behavior and less like the kinds of societies in which we want to participate.

The challenge in fixing Facebook’s underlying privacy problems

A few Facebook hacks came across my desk this week. The first set are so-called “rogue” applications, which do the tediously predictable grab of user information followed by the equally tediously predictable spam-a-palooza. Calling such applications “rogue” is misleading. They didn’t start out okay and turn evil somewhere along the way. These apps were built to cause trouble - they are malware. Facebook has a healthy set of malware apps, and the number is growing every day. You can easily spot affected Facebook users by their status messages - “Sorry for the email - my Facebook got a virus.”

Looking beyond the Privacy Mirror

Over the last two weeks, I have been using my homegrown Facebook application, Privacy Mirror, as a means of experimenting with Facebook’s privacy settings. Although Facebook provides a nice interface to view your profile through your friends’ eyes, it does not do the same for applications. I built Privacy Mirror in the hope of learning what 3rd party application developers can see of my profile by way of my friends’ use of applications. I have yet to speak with representatives of Facebook to confirm my findings, but I am confident in the following.

Imagine that Alice and Bob are friends in Facebook. Alice decides to add a new application, called App X, to her profile. (For clarity’s sake, by “add” I mean that she authorizes the application to see her profile. Examples of Facebook applications include Polls, Friend Wheel, Movies, etc.) At this point, App X can see information in Alice’s profile. App X can also see that Alice is friends with Bob; in fact, App X can see information in Bob’s profile. Bob can limit how much information about him is available to applications that his friends add to their profiles through the Application Privacy settings. In this case, let’s imagine that Bob has only allowed 3rd party applications to see his profile picture and profile status.

After a while, Alice tells Bob about App X. He thinks it sounds cool and adds it to his profile. At this point, if App X looks at Bob’s profile via Alice’s profile, it will see not only his profile picture and status but also his education history, hometown info, activities, and movies. That is significantly more than what he authorized in his Application Privacy settings. What is going on here? It appears that if Alice and Bob have both authorized the same application, that application no longer respects either user’s Application Privacy settings. Instead, it respects the Profile Privacy settings of each person.
In essence, App X acts (from a privacy settings point of view) as if it were a friend of Alice and Bob and not a third-party application.

Putting on my privacy commissioner hat for a moment, I’d want to analyze this situation from a consent and disclosure perspective. When Bob confirms his friendship with Alice he is, in a sense, opting in to a relationship with her. This opt-in indicates that he is willing to disclose certain information to Alice. Bob can control what information is disclosed to Alice through his Profile Privacy settings, and this allows him to mitigate privacy concerns he has about his relationship with Alice. What Bob isn’t consenting to (and is not opting in to) is a relationship with Alice’s applications. Bob is completely unaware of which applications Alice currently has or will have in the future. This is an asymmetry of relationship. It is entirely possible that Alice and Bob will have applications in common, and once they do, the amount of profile information disclosed (by both of them) to an application can radically change - and change without notice to either of them.

Furthermore, it is unclear which Facebook privacy settings Bob needs to manipulate to control what Alice’s applications can learn about him. This lack of clarity is harmful. It shouldn’t take a few hundred lines of PHP, three debuggers, and an engineering degree to figure out how privacy controls work. It robs Facebook users of the opportunity to make meaningful and informed choices about their privacy.

This experiment started after I read the Canadian Privacy Commissioner’s report of findings on privacy complaints brought against Facebook. That report raised significant concerns about third-party applications and their access to profile information.
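The observed behavior can be sketched as a small simulation. To be clear, this is not Facebook's API - the class and function names below are hypothetical, and the rule it encodes is simply my working theory from the Privacy Mirror experiments: an app sees a friend's Application Privacy fields, unless that friend has also authorized the app, in which case the (usually larger) Profile Privacy set applies.

```python
# Hypothetical model of the behavior observed with Privacy Mirror.
# Not the Facebook API - just a simulation of the apparent rule.

class User:
    def __init__(self, name, profile_privacy, app_privacy):
        self.name = name
        self.profile_privacy = set(profile_privacy)  # fields friends may see
        self.app_privacy = set(app_privacy)          # fields friends' apps may see
        self.authorized_apps = set()                 # apps this user has added

def fields_visible_to_app(app, viewer, friend):
    """Fields of `friend` that `app` can read via `viewer`'s authorization.

    Observed rule: if the friend has ALSO authorized the app, the app is
    treated like a friend and Profile Privacy applies; otherwise the more
    restrictive Application Privacy applies.
    """
    if app in friend.authorized_apps:
        return friend.profile_privacy
    return friend.app_privacy

alice = User("Alice",
             profile_privacy={"picture", "status", "education", "hometown"},
             app_privacy={"picture"})
bob = User("Bob",
           profile_privacy={"picture", "status", "education", "hometown",
                            "activities", "movies"},
           app_privacy={"picture", "status"})

alice.authorized_apps.add("App X")

# Before Bob adds App X: only his Application Privacy fields are exposed.
print(sorted(fields_visible_to_app("App X", alice, bob)))
# -> ['picture', 'status']

# After Bob adds App X himself, far more is exposed - with no notice.
bob.authorized_apps.add("App X")
print(sorted(fields_visible_to_app("App X", alice, bob)))
# -> ['activities', 'education', 'hometown', 'movies', 'picture', 'status']
```

The point of the sketch is the silent jump between the two print statements: nothing in Bob's settings changed, yet the fields App X can read through Alice's authorization ballooned the moment he added the app.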
As of the beginning of Catalyst (today!), Facebook has about 15 days remaining to respond to the Canadian Privacy Commissioner’s office. I hope that this issue of third-party applications and privacy controls is meaningfully addressed in Facebook’s response. (Cross-posted with Burton Group’s Identity Blog.)

Further findings from the Privacy Mirror experiment

I find that I rely on my debugging skills in almost every aspect of my life: cooking, writing, martial arts, photography… And it helps when you’ve got friends who are good debuggers as well. In this case, my friends lent a hand helping me figure out what I was seeing in my Privacy Mirror. The following is a snapshot of the Application Privacy settings I have set in Facebook:

[Screenshot: Facebook Application Privacy Settings]

Given these settings, I would expect the Facebook APIs to report the following to a 3rd party application developer:

Privacy Mirror: A privacy experiment in Facebook

As I previously blogged, I read Canada’s Assistant Privacy Commissioner Elizabeth Denham’s findings on Facebook, and it got me thinking about 3rd party applications. I wondered what 3rd party app developers could see in my profile. In my estimation, the easiest way to find out was to become a 3rd party application developer myself.

Enter Privacy Mirror. I built a basic Facebook application called Privacy Mirror. Its goal was to see, as a 3rd party developer, just what information I could glean from my profile via Facebook’s APIs. At first, I used two Facebook API calls:

Laplace’s Demon, Santa Claus and TSA’s Secure Flight

No doubt you frequent fliers out there have received emails from your airline of choice about TSA’s Secure Flight. As you make air travel reservations in the future, your airline will communicate with the TSA to get, essentially, a fly/no-fly decision from the Secure Flight system. As the TSA explains in the “How it works” section of its website dedicated to Secure Flight:

Secure Flight matches the name, date of birth and gender information for each passenger against government watch lists to: