SOUPS was new for me. I have been working in usable security for only about a year, so I was hoping for flashes of inspiration, insights from the people who have spent their careers on the topic. I was not disappointed.
The Symposium On Usable Privacy and Security (SOUPS) is a small conference; this year there were about 200 attendees. Microsoft hosted the event in July. Many in attendance were academics, graduate students, or researchers working in corporations, with a mostly HCI and computer science bent, I felt. A few corporate practitioners of security and compliance were sprinkled through the crowd, though.
Highlights: Beyond authentication
As you might expect, the presentations reflected the mix of attendees. For me, this wasn't ideal, as I'd just completed a lit review that demonstrated pretty clearly that most of the academic research about usable security out there was not applicable to real situations that normal people face every day. So I was delighted on the morning of the last day to hear reports from researchers who had gone out in the world to look at some interesting problems that real people actually face.
Dinei Florencio and Cormac Herley from Microsoft reviewed security policies from web sites and theorized about what motivated them. Rick Wash, of Michigan State University, reported on a pet project to understand people's mental models of security in home use of technology. Khai Truong, reporting for his colleagues David Dearman (both of University of Toronto) and Justin Ho (of Google), showed us the ways people name and secure (or don't secure) home wireless networks, and why the risks arise. Matthew Kay presented work that he and his partner Michael Terry from University of Waterloo had done on the information design of online license agreements.
All were solid work: thorough research, conducted (mostly) with people outside universities, each looking at a question whose answer could make things better for users.
Useful lessons from the talks
Password requirements are the opposite of what you'd expect.
Password policies for e-commerce sites seem to be about making access as easy as possible, whereas university and government sites impose strong (and difficult) policies because they can. The least attacked sites have the most restrictive policies.
Home computer users defend against mythical threats out of ignorance. Botnets take advantage of these "folk models."
You can install all the firewalls and anti-virus software available and still not fight threats effectively. The software is difficult to use, and keeping it up to date takes constant vigilance. Most people Rick talked to identified two threats: viruses and hackers. Though he neatly presents eight insightful folk models of threat scenarios, it comes down to these beliefs: viruses are more or less automatic, probably released by hackers; hackers are malicious people actively working to break into computers.
People rely on the default settings, assuming that the way the manufacturer set them is good enough.
Anyone who has installed a home network on a Windows platform knows that setting up wireless access is frustrating and difficult. So although strong security is built into wireless routers, offering access controls and levels of encryption, people don't know what those are. The usability of the installation and configuration software strongly affects the strength of the security applied. When the team tested a configuration wizard they'd designed to help users know what to do, they found that people made better security choices.
Information design helps people see how content is relevant to them.
When Matthew and Michael incorporated a distinct visual hierarchy along with relevant graphics and illustrations, people were much more likely to spend time reading, and later to remember what the content said, than they were with text that lacked these features. I have some issues with the way the experiment was conducted, and with the lack of grounding in information design theory and practice, but the outcomes are promising and I hope this team will go deeper on this topic.
We're asking too much of users
Taken together, my conclusion is that people delegate security decisions - to ISPs, to user interfaces, to institutions - for two reasons: First, in some situations they have little choice. Password policies, for example, are forced on users by policy makers in the institutions. Second, users feel they have little choice because the choices are mysterious and difficult to understand. Although one of the tenets of good user interface design is to leave the user in control, it feels like we're surfacing too much to users, leaving them with decisions they can't make because they aren't knowledgeable, asking them questions they can't know the answers to. I hope that next year at SOUPS I'll see some work that integrates security more and burdens the user less.
Unfortunately, these excellent projects were more the exception than the rule at SOUPS. My major disappointment was how many of the projects used undergraduate students as their sole source of data. I get using students for pilot studies. Why not? They're practically free and willing (and in some schools and majors, required to take part in research). But it takes only a bit more work and a tiny bit more expense to find people outside the undergraduate population. Then, of course, we'd have to be doing research on security problems and solutions that are practical in the real world.