Wednesday, July 28, 2010

RANT: What makes you think your content is this important?

vark.com registration dialog

Dear Vark.com,

I want to ask one question of the Vark community. What makes Vark think that asking one question is worth filling out this invasive registration page? Why are birthday and gender required? (What are you doing with that data, and how are you protecting it?)

Never mind. I'll just ask Twitter my question.

Thanks anyway,
Dana

Monday, July 26, 2010

SOUPS: We're asking too much of users and not enough of researchers

SOUPS was new to me. I have been working in usable security for only about a year, so I was hoping for flashes of inspiration: insights from people who have spent their careers on the topic. I was not disappointed.

The Symposium On Usable Privacy and Security (SOUPS) is a little conference; this year there were about 200 attendees. Microsoft hosted the event in July. Many in attendance were academics, graduate students, or researchers who work in corporations. The crowd had a mostly HCI and computer science bent, I felt, though a few corporate practitioners of security and compliance were sprinkled in.

Highlights: Beyond authentication
As you might expect, the presentations reflected the mix of attendees. For me, this wasn't ideal, as I'd just completed a lit review that demonstrated pretty clearly that most of the academic research on usable security was not applicable to the real situations normal people face every day. So I was delighted on the morning of the last day to hear reports from researchers who had gone out into the world to look at interesting problems that real people actually face.

Dinei Florencio and Cormac Herley from Microsoft reviewed security policies from web sites and theorized about what motivated them. Rick Wash, of Michigan State University, reported on a pet project to understand people's mental models of security in home use of technology. Khai Truong, reporting for his colleagues David Dearman (both of the University of Toronto) and Justin Ho (of Google), showed us the ways people name and secure (or don't) home wireless networks and why there are risks. Matthew Kay presented work that he and his partner Michael Terry from the University of Waterloo had done on the information design of online license agreements.

All were good work: solid research, thoroughly done, (mostly) with people outside universities, looking at questions whose answers could make things better for users.

Useful lessons from the talks
Password requirements are the opposite of what you'd expect.
Password policies for e-commerce sites seem to be about making access as easy as possible, whereas university and government sites make their policies strong (and difficult) because they can. The least attacked sites have the most restrictive policies.
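
To make "restrictive" concrete, here's a hypothetical pair of validators in Python. The specific rules are my own invention for contrast; they're not taken from the paper.

```python
import re

def ecommerce_policy(pw: str) -> bool:
    # Low bar: the site is optimizing for completed checkouts, not security.
    return len(pw) >= 6

def university_policy(pw: str) -> bool:
    # High bar: length, mixed case, a digit, and punctuation all required,
    # because no sale is lost when a user struggles at the prompt.
    return (len(pw) >= 10
            and bool(re.search(r"[a-z]", pw))
            and bool(re.search(r"[A-Z]", pw))
            and bool(re.search(r"\d", pw))
            and bool(re.search(r"[^\w\s]", pw)))

print(ecommerce_policy("kitten"))          # True: six characters is enough
print(university_policy("kitten"))         # False: fails nearly every rule
print(university_policy("K1tten&Mitten"))  # True
```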

Home computer users defend against myths of threats out of ignorance. Botnets take advantage of these "folk models."
You can install all the firewalls and anti-virus software available and still not fight threats effectively. The software is difficult to use, and keeping it up to date takes constant vigilance. Most people Rick talked to identified two threats: viruses and hackers. Though he neatly presented eight insightful folk models of threat scenarios, they come down to these user beliefs: viruses are more or less automatic, probably released by hackers; hackers are malicious people actively working to break into computers.

People rely on the default settings, assuming that the way the manufacturer set them is good enough.
Anyone who has installed a home network on a Windows platform knows that setting up wireless access is frustrating and difficult. So, although strong security is built in to wireless routers -- access controls and several levels of encryption -- people don't know what those options mean. The usability of the installation and configuration software strongly affects the strength of the security applied. When the team tested a configuration wizard they'd designed to help users know what to do, they found that people made better security choices.
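
As a sketch of the wizard idea (the setting names and function here are my own invention, not the team's code), the key move is proposing a strong default instead of asking the user to choose:

```python
import secrets
import string

def recommend_wireless_config(owner_name: str) -> dict:
    """Propose secure defaults so the user only has to click 'accept'."""
    alphabet = string.ascii_letters + string.digits
    # Four blocks of four characters are easier to read aloud and retype
    # than one long random string.
    passphrase = "-".join(
        "".join(secrets.choice(alphabet) for _ in range(4)) for _ in range(4)
    )
    return {
        "ssid": f"{owner_name}-home",  # a name the user recognizes as hers
        "encryption": "WPA2",          # strongest widely supported option
        "passphrase": passphrase,      # generated, not left blank or default
    }

print(recommend_wireless_config("dana"))
```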

Design of information helps people see how it is relevant to them.
When Matthew and Michael incorporated a distinct visual hierarchy along with relevant graphics and illustrations, people were much more likely to spend time reading, and to remember later what the content said, than they were with text that did not incorporate these features. I have some issues with the way the experiment was conducted and with the lack of grounding in information design theory and practice, but the outcomes are promising and I hope this team will go deeper on this topic.


We're asking too much of users
Taken together, these talks lead me to conclude that people delegate security decisions - to ISPs, to user interfaces, to institutions - for two reasons. First, in some situations they have little choice: password policies, for example, are forced on users by the policy makers in institutions. Second, users feel they have little choice because the choices are mysterious and difficult to understand. Although one of the tenets of good user interface design is to leave the user in control, it feels like we're surfacing too much to users, leaving them with decisions they can't make because they aren't knowledgeable, asking them questions they can't know the answers to. I hope that next year at SOUPS I'll see some work that integrates security more and burdens the user less.


Unfortunately, these excellent projects were more the exception than the rule at SOUPS. My major disappointment was how many of the projects used undergraduate students as their sole source of data. I get using students for pilot studies. Why not? They're practically free and willing (and in some schools and majors, required to take part in research). But it takes only a bit more work and a tiny bit more expense to find people outside the undergraduate population. But then we'd have to be doing research on security problems and solutions that are practical in the real world.

Thursday, July 8, 2010

Throw this phish back: Separating the userid and the password

Many financial services are implementing stronger security measures these days, from chip-and-PIN to security questions. Any credit card underwritten by FIA Card Services, if it hasn't already, will be adopting a new set of measures to prevent phishing attacks.

I encountered the first steps of this on my Schwab VISA the other day: splitting the password from the userid screen. As a security principle, meting out the authentication steps is a spectacular idea for preventing automated attacks. And because the user is supposed to have personalized the information on the authentication page where she enters her password, the scheme is also supposed to eliminate phishing attacks. (Phishing, for those who have been living under a rock for the last 15 years, is the practice of sending emails that look real but are not, asking for personal information or passwords. When the diligent but unsuspecting user clicks the link in the email, it leads to a web site that also looks real but isn't. The user enters her userid, password, and possibly other personal information, and has now surrendered it all to the phishing scammer.)


Because you don't pay attention to who you get email from, we're giving you more work 

The ideal interaction works like this:

Login page: User enters the userid, clicks Submit.

Authentication page: User sees an image and a pass phrase she has entered into a profile previously and then enters a strong password, clicks Submit, and goes to the site.

The added steps should help customers recognize that they're in the right place when they've clicked a link from an email, without adding more than a couple of seconds compared with the combined userid/password page.
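
Here is a minimal sketch of that two-step flow in Python. The user record, field names, and functions are all my own invention for illustration, not the card company's implementation; a real system would hash the password and serve each step as a separate request.

```python
from typing import Optional

# Hypothetical user store. In a real system the password would be salted
# and hashed, and the records would live in a database.
USERS = {
    "jdoe": {
        "site_key_image": "red_bicycle.png",   # image chosen at enrollment
        "pass_phrase": "coffee before dawn",   # phrase chosen at enrollment
        "password": "correct-horse-battery",   # plaintext only for this sketch
    }
}

def login_step_one(userid: str) -> Optional[dict]:
    """Login page: accept only the userid, return the personalization.

    Showing the user her own image and phrase *before* she types a
    password is what lets her confirm she's on the real site.
    """
    user = USERS.get(userid)
    if user is None:
        return None  # real sites show a decoy so userids can't be probed
    return {"image": user["site_key_image"], "phrase": user["pass_phrase"]}

def login_step_two(userid: str, password: str) -> bool:
    """Authentication page: verify the password only after step one."""
    user = USERS.get(userid)
    return user is not None and user["password"] == password

# The ideal interaction, end to end:
personalization = login_step_one("jdoe")
assert personalization == {"image": "red_bicycle.png",
                           "phrase": "coffee before dawn"}
assert login_step_two("jdoe", "correct-horse-battery")
```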

In reality, the interaction works like this:

One day, the user goes to her credit card's web site to schedule a payment for her bill. When she arrives, she finds that the site has changed from having the userid and password on the same page to separating them -- without notice.

On the login page, the user realizes that her personal single sign-on will no longer work.

Now the strong password generated and stored by her password-managing software must be exposed, because she has not memorized it. She logs in to the password-managing software and opens the record for the web site, which unmasks the password. Then she can copy and paste the strong password into the new authentication page.
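
A toy model makes the breakage plain. This is my own illustration of how form-fill matching can fail when fields move to a second page, not how her particular software works:

```python
# Illustrative structures only; real password managers match saved
# logins to pages in more sophisticated ways.
saved_login = {"url": "bank.example.com/login",
               "fields": {"userid", "password"}}

def can_autofill(page_url: str, page_fields: set) -> bool:
    """Fill only when every saved field appears on the current page."""
    return (saved_login["url"] == page_url
            and saved_login["fields"] <= page_fields)

print(can_autofill("bank.example.com/login", {"userid", "password"}))  # True: old combined page
print(can_autofill("bank.example.com/login", {"userid"}))              # False: new split page
print(can_autofill("bank.example.com/auth", {"password"}))             # False: unknown second page
```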

By the way, the answer to her security question is her mother's maiden name, which is easily findable by others. There were five images offered, one of which was a flower, easily guessable by someone who knew the user was a middle-aged American female.


As a security measure, this stinks. As a user experience, it is abysmal.

The credit card company (and the other financial services companies that use this authentication schema) has traded one set of security issues for another. That is, because the financial services company gets blamed when there's a phishing attack, it puts more security steps on the user. It is unclear whether the credit card company has weighed the direct cost of the burden its added steps impose. Most certainly it has not looked at the user's context to understand how the measures fit into users' goals.


Why the security is still lacking
Splitting the userid and password onto different pages makes using a strong password more difficult. In the case of personal single sign-on, there is now an opportunity for shoulder surfing where there wasn't one before.

The images available as site keys are very easy to guess if you know anything else about the user.

Answers to security questions are easily findable if you know anything else about the user.
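
Both weaknesses come down to arithmetic. Here's a back-of-the-envelope comparison in Python; the five images come from the scenario above, and the ten-character password is my assumption:

```python
import math

# Guessing 1 of 5 site-key images vs. guessing a random 10-character
# password drawn from 62 letters and digits.
site_key_bits = math.log2(5)        # ~2.3 bits: one blind guess in five wins
password_bits = 10 * math.log2(62)  # ~59.5 bits
print(f"site key: {site_key_bits:.1f} bits")
print(f"password: {password_bits:.1f} bits")
```
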
A user experience of -3
Imagine that this user has something like 15 to 25 userids and passwords that she uses as frequently as daily or weekly. For today's scenario, the user's goal is to check a balance, pay a bill, or maintain the account in some other way. Authentication enables the task. It is not a task in itself. No one goes to a web site with the goal of logging in.

This so-called enabling task is a burden. And the very things that make it a burden allegedly make each site more secure. The added steps took much more time to set up and then use than promised. They also had (we assume) the unintended consequence of causing the user to expose her strong password.

Include in the scenario that the change in security measures came as a surprise. It's a question of degree: a userid/password combination on the same page is something many people are used to; it was a small bump on the way to accomplishing a task. Splitting the userid from the password was disturbing, distressing, and disruptive. Adding other measures to the authentication page was burdensome and annoying, and failed to demonstrate how implementing them helps the customer.

Making security decisions is complex. Making them in the context of users' work is nearly unheard of. These authentication measures are clunky and burdensome because they've been bolted on to the site rather than built in to the experience. There must be a better way -- a more effective, less burdensome way -- to prevent phishing and secure users' data.


Saturday, July 3, 2010

Don't make me stop this, rdio.

What is going on when a company asks you to surrender your credit card information to verify an account but says they're not going to charge the card?

It was a trial. It was the extension of a 3-day trial to a 10-day trial. A *trial.* There was nothing that said "We need your credit card to set up an account" when you register. (Though I wouldn't have done that, either.)

Are they asking for credit card information to ensure that the account registrant is real rather than a bot? What about credit card information ensures that? Nothing.

What is going on when a company asks a new user to surrender credit card information but says they're not going to charge the card is this: A blatant attempt to secure you as a paying customer, not secure your account or personal information. It's overkill, it's rude, and it is very uncool.

  • They're trying to manipulate the relationship so you'll automatically become a paying customer when you've exhausted your trial.
  • They're treating you like you're a customer already, and just giving you 10 free days. So, even if you click the button for a free trial, both of the buttons on the page mean the same thing.

That's not "verifying" the account. That's establishing an account against the will of the customer. That's sleazy.

Don't worry, we won't charge your card? Then why do you want it? There must be some other way to accomplish your goals, rdio, without asking me for payment information you don't need yet.

A better user experience would be to
  • Not ask for credit card as a verification for the trial, but instead to do something fun, authentic, and clever, like check in with me each day of the trial to see how much I love the service.
  • Help me tell other people how much I love the service.
  • Ask me questions about my usage so far.
Because I never used my 3-day trial, I don't know how good the service is. They could have learned a lot by asking me why that was, and why I wanted to extend the trial when I did.

When I said I wanted to extend my trial, they assumed that I liked the service and wanted to keep going -- and that they could capture me as a customer at the first opportunity. But what looked like an opportunity to them was not a seducible moment for me. I'm going back to Pandora.