Wednesday, December 15, 2010

There's the ripple: From Gawker comments to personally identifying information

Okay, so Gawker was hacked. One might ask, "Why?" But I think the more interesting question is, "So what?"

On December 12, hackers posted a list of usernames and passwords from a batch of over a million accounts on Gawker Media web sites. (Gawker Media's properties include Gizmodo and Lifehacker.) According to the Wall Street Journal, the passwords were encrypted, but the hackers decoded 188,279 of them and published those. WSJ.com published a list of the 50 most used among the decoded passwords.

WSJ.com complains that the most used passwords are extremely weak. But let's keep in mind that there are about 800,000 passwords the hackers didn't publish, and the reason might be that they're too difficult to decode, or at least that they would take more time to decrypt. The top 5 in the decoded dataset were

123456
password
12345678
lifehack
qwerty

The top choice, 123456, came in at over 3,000 uses within the dataset of 188,279. Taken together, the top 5 cover about 7,000 passwords out of the decoded, published dataset, or about 1 in 27 passwords, more or less.
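
Why would only the weak passwords get decoded and published? Offline cracking is essentially a dictionary attack: hash each common guess and look for matches among the leaked hashes. Passwords on a wordlist fall in microseconds; the other 800,000 may simply never have matched a guess. Here's a minimal sketch of the idea in Python, assuming unsalted MD5 hashes purely for illustration (the real Gawker hashes used a different, older scheme):

    import hashlib

    # Hypothetical leak: these are genuine MD5 digests of the guesses
    # below, but the hash scheme itself is assumed for illustration.
    leaked_hashes = {
        "e10adc3949ba59abbe56e057f20f883e",  # md5("123456")
        "5f4dcc3b5aa765d61d8327deb882cf99",  # md5("password")
    }

    wordlist = ["123456", "password", "12345678", "lifehack", "qwerty"]

    # Hash every guess and check for a match in the leak. Anything on
    # the wordlist cracks instantly; anything not on it survives.
    for guess in wordlist:
        digest = hashlib.md5(guess.encode()).hexdigest()
        if digest in leaked_hashes:
            print("cracked:", digest, "->", guess)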


So people have weak passwords on Gawker sites. So what?
Does it matter here? These are accounts set up so people can leave comments on online articles, thus preventing most spammers from taking over the comments. It's not like it's a big deal. The accounts are a hall pass for access.

WSJ.com goes on to examine whether there are differences in password usage by email provider (I assume they're going by the domain in the email addresses used as usernames for Gawker accounts). I think that WSJ.com is missing the point. There are a few problems with the practice of requiring accounts for comments to prevent spam. First, it puts the burden of keeping a clean site onto the users, rather than implementing stronger protections on the server side. Second, requiring a password to leave a comment may stifle some would-be brilliant insights because people don't want to register on the site, which is not a great way to encourage engagement. Third, people use the same passwords on many sites. I don't blame users for this when sites like Gawker's require accounts for basic things that wouldn't normally risk users' privacy and identity.

Let's talk a bit more about the first and third problems. They have some things in common.


Account management is for the convenience of site owners, not the protection of the users
Gawker Media and other media web sites have forced commenters to create accounts on their sites to prevent spammers from taking over the commenting space. Not having spammers makes it much easier to moderate the comments (if you're going to at all). So, the sites have traded the convenience of their readers for their own. That is, rather than employ someone or some technology to deal with the spam (or a combination), they implement an account management system, thus putting the burden on their readers to prevent comment spam.

Account management systems are often implemented to make it easier for IT and Security to do their jobs. While dealing with password maintenance issues has a cost, the cost is higher for users than for the organization. For the organization that is looking just at saving IT money, it's a win. For the organization that wants to create a loyal audience, registering on a site and maintaining a password create obstacles to participation.


By putting the burden on users to stop comment spam, media companies actually make their users' data less secure
As we know, people use the same usernames and passwords in as many places as they can. On average, people have between 15 and 25 username-password combinations they use every day. People who work with complex systems often have many, many more. So, when users make tradeoffs between respecting security policy and getting to their goal, they make reasonable choices, usually in favor of their own efficiency. Thus they use the same username and password in multiple places, for both very risky, highly personal situations such as online banking and low-security, low-risk scenarios like leaving comments on Gizmodo.

A requirement like Gawker's has possibly inadvertently compromised the personal security of more than a million of its readers. When a hacker knows one username and password for you, along with anything else about you, it is fairly easy to break into all kinds of accounts you access online.


The security experience is the worst part of using nearly every site
IT has owned login and registration for so long that designers and users alike have been trained to put up with whatever security engineers say needs doing. We rarely question the purpose of a security policy, what it is in response to, what the tradeoffs are, how it fits into the larger security plan of an organization, and what we want the security experience to be for users. Most of the implementations are made without any user research or usability data at all.

As is the case with many security decisions in organizations, each issue is treated in isolation. Who would have thought that comment spam would interact with a) the security of the servers and b) the security of users' personally identifying information?




See also:
from Jeff Atwood
http://www.codinghorror.com/blog/2010/12/the-dirty-truth-about-web-passwords.html  

from Richi Jennings at ComputerWorld
http://blogs.computerworld.com/17527/why_not_use_same_password_everywhere_gawker_shows_us?ta

the announcement from Gawker
http://gawker.com/5712615/commenting-accounts-compromised-++-change-your-passwords 


ADDED 15 December 2010 at 3:30pm EST, from Karen Bachmann, in an email to me: 


Good points, Dana. I recently worked with a client who has a strong financial need to know that they are reaching the right audience, highly specialized professionals looking for detailed technical information. Visitors are currently required to set up an account and have to log in with username and password each time they visit. The information, though, is not restricted. Anyone can create an account.

When I interviewed members of their intended audience, having to log in to get to non-sensitive information was a huge but familiar barrier to entry. Most people were resigned to this with all websites of this type in their field, but none were happy about it. During the interviews, I actually asked people to interact with the site and regularly saw several problems. 1) Those who had created accounts forgot completely that they had, because of the time between visits. 2) Visitors who knew they had accounts forgot their credentials, a problem they indicated was common in their interactions with other sites of this type. 3) The most Web-savvy stated that they had a standard "throwaway" set of credentials that they would always use on a site like this. When asked about their likely use of the site, most said that they would usually just go elsewhere for the information when the credentials got in their way.

Since the basic use wasn't really about guarding access to the information, I recommended to my client that they simply request an email address as a short-term solution and omit full credentials for access. If they still required an account (to collect more details about the user), account management would require a login. In the longer term, however, technology could take most of the burden off users. The company has a huge database of contacts that could be used to cross-reference the emails entered. Handling this on their servers would actually provide them with even more data about types of users than the account information they collected.

The managers at the client indicated they really just assumed that the only way they could gather their information was with an account and credentials model. They had not really considered their real needs against the user perception and goals. They intend to make the change I recommended as part of their redesign, which is still pending.

Wednesday, August 18, 2010

Truly secure security questions

Organizations from financial services companies to e-commerce web sites have implemented "security questions" in the log on process. The idea is that, in addition to a username and password, answering these questions correctly helps authenticate you to a system.

The idea is good: Provide an answer to a question that only you could know.

But many of the questions are weak, either because they're answerable from publicly available information (mother's maiden name; what if that is her current name?), or because there's a special format for entering them (case-sensitivity is often a problem).
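
The format problem, at least, is fixable on the server side: normalize the answer before storing and comparing it, so that "Fluffy " and "fluffy" count as the same answer. Here is a minimal sketch; the normalization rules are my assumption, not any particular vendor's practice:

    import hashlib, hmac, os, unicodedata

    def normalize(answer):
        # Fold case, trim, and collapse whitespace so format quirks
        # don't lock the legitimate user out.
        folded = unicodedata.normalize("NFKC", answer).casefold()
        return " ".join(folded.split())

    def hash_answer(answer, salt):
        # Store only a salted, slow hash of the normalized answer,
        # never the answer itself.
        return hashlib.pbkdf2_hmac("sha256", normalize(answer).encode(),
                                   salt, 100_000)

    salt = os.urandom(16)
    stored = hash_answer("Fluffy", salt)
    # "  fluffy " still matches, despite case and spacing differences.
    assert hmac.compare_digest(stored, hash_answer("  fluffy ", salt))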

In addition, security questions seem to be bundled by particular vendors, so a user might get the same questions from organization to organization. This could be an advantage for the user, but also for the cracker, acting like a kind of single sign-on. For opinion-based or favorites questions, there's a memorability problem: How did I answer this question last time? Did I answer it the same way on all the sites where I've chosen it? The answers to questions about "favorites" change over time. What's your favorite color? This is a question that, if answered incorrectly, can have dire consequences.

Which leads to a classic workaround: Choose the most outlandish question in the list and answer it with a passphrase. Have to answer multiple security questions? Answer them all with the same passphrase. This subverts the purpose of the questions, but makes it easier for the user as she crosses the hurdles to making an investment, making a purchase, or getting lab results from her health care provider.
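
If you use that workaround, you can at least do it safely: treat each answer as one more random password and store it in your password manager next to the site's real password. A sketch of generating such throwaway answers (the word list here is just for illustration):

    import secrets

    WORDS = ["maple", "harbor", "quartz", "violet",
             "ember", "tundra", "sable", "orchid"]

    def throwaway_answer(n_words=4):
        # A random passphrase to use as the "answer"; record it in a
        # password manager, since it is unguessable by design.
        return " ".join(secrets.choice(WORDS) for _ in range(n_words))

    # Q: "What is your favorite color?"  A: e.g., "ember quartz maple sable"
    print(throwaway_answer())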

And so, I offer some of the most ridiculous *real* security questions, followed by some that friends of mine brainstormed during a rant about this so-called security mechanism.

Garry Scoville writes regularly about security questions and related topics at http://goodsecurityquestions.com. He's an authority on what makes a less weak question (asserting all the time that there are no good security questions). His list of examples is excellent.


Real, ridiculous security questions
Among the real security questions used in real systems are some of these gems, which I've borrowed from goodsecurityquestions.com:

What is the name of the High School you graduated from? (What if you didn't graduate?)
What is your pet's name? (What if you don't have pets?)
How many bones have you broken? (In my own body or someone else's?)
On which wrist do you wear your watch? (The third one)
What is the color of your eyes? (Seriously? It says that on my driver's license)
What is your favorite teacher's nickname? (Mine for her? Or hers for her?)
What is the name of your hometown? (You think I might have moved once in my life?)
What is the color of your father’s eyes? (He has eyes?)
What is the color of your mother’s eyes? (The ones in the front of her head or the back?)
What is your favorite color? (Blue! No - green! Ahhhhh!)
What was your hair color as a child? (Either black or white because that's what color the photos are.)
What is your work address? (I work at home. Hmmm.)
What is the street name your work or office is located on? (Why don't I just tell the hacker what room the PC is in?)
What is your address, phone number? (And, by the way, the list of passwords is stored in the top right drawer.)



Questions I wish they'd ask

What was your first boyfriend's favorite car brand?
What color was your first grade teacher's house?
How long did your first pet live?
When will global warming end?
Why did your girlfriend say that about your mother?
Why am I soft in the middle?
How can you live in the city?
How dare you?
What is the point of these questions?



What's your favorite security question? 

Wednesday, July 28, 2010

RANT: What makes you think your content is this important?

[Screenshot: vark.com registration dialog]

Dear Vark.com,

I want to ask one question of the Vark community. What makes Vark think that doing that is worth filling out this invasive registration page? Why are birthday and gender required? (What are you doing with that data and how are you protecting it?)

Never mind. I'll just ask Twitter my question.

Thanks anyway,
Dana

Monday, July 26, 2010

SOUPS: We're asking too much of users and not enough of researchers

SOUPS was new for me. I have been working in usable security for only about a year, so I was hoping for flashes of inspiration, insights from the people who have spent their careers on the topic. I was not disappointed.

The Symposium On Usable Privacy and Security (SOUPS) is a little conference; this year there were about 200 attendees. Microsoft hosted the event in July. Many in attendance were academics, graduate students, or researchers who work in corporations. The crowd had a mostly HCI and computer science bent, I felt, though there were a few corporate practitioners of security and compliance sprinkled in.

Highlights: Beyond authentication
As you might expect, the presentations reflected the mix of attendees. For me, this wasn't ideal, as I'd just completed a lit review that demonstrated pretty clearly that most of the academic research about usable security out there was not applicable to real situations that normal people face every day. So I was delighted on the morning of the last day to hear reports from researchers who had gone out in the world to look at some interesting problems that real people actually face.

Dinei Florencio and Cormac Herley from Microsoft reviewed security policies from web sites and theorized about what motivated them. Rick Wash, of Michigan State University, reported on a pet project to understand people's mental models of security in home use of technology. Khai Truong, reporting for his colleagues David Dearman (both of the University of Toronto) and Justin Ho (of Google), showed us the ways people name and secure (or don't) home wireless networks and why there are risks. Matthew Kay presented work that he and his partner Michael Terry of the University of Waterloo had done on the information design of online license agreements.

All of it was good work: solid research, thoroughly done, (mostly) with people outside universities, looking at questions whose answers could make things better for users.

Useful lessons from the talks
Password requirements are the opposite of what you'd expect.
Password policies for e-commerce sites seem to be about making access as easy as possible, whereas university and government sites make their policies strong (and difficult) because they can. The least attacked sites have the most restrictive policies.

Home computer users defend against myths of threats out of ignorance. Botnets take advantage of these "folk models."
You can install all the firewalls and anti-virus software available and still not fight threats effectively. The software is difficult to use, and keeping it up to date takes constant vigilance. Most people Rick talked to identified two threats: viruses and hackers. Though he neatly presents eight insightful folk models of threat scenarios, it comes down to these beliefs among users: viruses are more or less automatic, probably released by hackers; hackers are malicious people who are actively working to break in to computers.

People rely on the default settings, assuming that the way the manufacturer set them is good enough.
Anyone who has installed a home network on a Windows platform knows that setting up wireless access is frustrating and difficult. So, although strong security is built in to wireless routers, offering access controls and levels of encryption, people don't know what those are. The usability of the installation and configuration software strongly affects the strength of the security applied. When the team tested a configuration wizard they'd designed to help users know what to do, they found that people made better security choices.

Design of information helps people see how it is relevant to them.
When Matthew and Michael incorporated a distinct visual hierarchy along with relevant graphics and illustrations, people were much more likely to spend time reading, and to remember later what the content said, than they were with text that did not incorporate these features. I have some issues with the way the experiment was conducted, and with the lack of background in information design theory and practice, but the outcomes are promising and I hope this team will go deeper on this topic.


We're asking too much of users
Taken together, my conclusion is that people delegate security decisions - to ISPs, to user interfaces, to institutions - for two reasons: First, in some situations they have little choice. Password policies, for example, are forced on users by policy makers in the institutions. Second, users feel they have little choice because the choices are mysterious and difficult to understand. Although one of the tenets of good user interface design is to leave the user in control, it feels like we're surfacing too much to users, leaving them with decisions they can't make because they aren't knowledgeable, asking them questions they can't know the answers to. I hope that next year at SOUPS I'll see some work that integrates security more and burdens the user less.


Unfortunately, these excellent projects were more the exception than the rule at SOUPS. My major disappointment was how many of the projects used undergraduate students as their sole source of data. I get using students for pilot studies. Why not? They're practically free and willing (and in some schools and majors, required to take part in research). But it takes only a bit more work and a tiny bit more expense to find people outside the undergraduate population. But then we'd have to be doing research on security problems and solutions that are practical in the real world.

Thursday, July 8, 2010

Throw this phish back: Separating the userid and the password

Many financial services companies are implementing stronger security measures these days, from chip-and-PIN to security questions. Any credit card that is underwritten by FIA Card Services, if it hasn't already, will be undergoing a new set of measures to prevent phishing attacks.

I encountered the first steps of this on my Schwab VISA the other day: splitting the password from the userid screen. As a security principle, meting out the authentication steps is a spectacular idea to prevent automated attacks. And, because the user is supposed to have personalized the information on the authentication page where she enters her password, it should also eliminate phishing attacks. (Phishing, for those who have been living under a rock for the last 15 years, is the practice of sending emails that look real but are not, and which ask for personal information or passwords. When the diligent but unsuspecting user clicks the link in the email, it leads to a web site that also looks real but isn't. The user enters userid, password, and possibly other personal information, and has now surrendered it all to the phishing scammer.)


Because you don't pay attention to who you get email from, we're giving you more work 

The ideal interaction works like this:

Login page: User enters the userid, clicks Submit.

Authentication page: User sees an image and a pass phrase she has entered into a profile previously and then enters a strong password, clicks Submit, and goes to the site.

The added steps should help customers recognize that they're in the right place when they've clicked a link from an email, without taking more than a couple of seconds extra when compared with the userid/password combined page.
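
To make the flow concrete, here's a minimal sketch of the two-step scheme described above; the function names and the in-memory profile store are hypothetical stand-ins for a real bank's backend (which would store a password hash, never plaintext):

    profiles = {
        "jane": {"image": "red_bicycle.png",
                 "phrase": "tuesday pancakes",
                 "password": "correct horse battery staple"},
    }

    def login_page(userid):
        # Step 1: the user submits only her userid. The server returns
        # her personalization so she can confirm the site is genuine.
        p = profiles.get(userid, {})
        # Return a decoy for unknown userids so attackers can't
        # enumerate accounts.
        return p.get("image", "decoy.png"), p.get("phrase", "")

    def authentication_page(userid, password):
        # Step 2: having recognized her image and phrase, the user
        # enters her strong password.
        p = profiles.get(userid)
        return p is not None and password == p["password"]

    image, phrase = login_page("jane")   # shown on the second page
    assert authentication_page("jane", "correct horse battery staple")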

In reality, the interaction works like this:

One day, the user goes to her credit card's web site to schedule a payment for her bill. When she arrives, she finds that the site has changed from having the userid and password on the same page to separating them -- without notice.

On the login page, the user realizes that her personal single sign-on will no longer work.

Now the strong password generated and stored by her password-managing software must be exposed, because she has not memorized it. She logs in to the password-managing software and opens the record for the web site, which unmasks the password. Then she can copy and paste the strong password into the new authentication page.

By the way, the answer to her security question is her mother's maiden name, which is easily findable by others. There were 5 images offered, one of which was a flower, which would be easily guessable by someone who knew that the user was a middle-aged American female.


As a security measure, this stinks. As a user experience, it is abysmal.

The credit card company (and other financial services companies that use this authentication schema) has traded off one set of security issues for another. That is, because the financial services company gets blamed when there's a phishing attack, they put more security steps on the user. It is unclear whether the credit card company has weighed the direct cost of the burden of their added steps. Most certainly they have not looked at the user's context to understand how their measures fit into users' goals.


Why the security is still lacking
Splitting the userid and password onto different pages makes using a strong password more difficult. In the case of using personal single sign-on, we now have an opportunity for shoulder surfing, when we didn't before.

The images available as site keys are very easy to guess if you know anything else about the user.

Answers to security questions are easily findable if you know anything else about the user.

A user experience of -3
Imagine that this user has something like 15 to 25 userids and passwords that she uses as frequently as daily or weekly. For today's scenario, the user's goal is to check a balance, pay a bill, or maintain the account in some other way. Authentication enables the task. It is not a task in itself. No one goes to a web site with the goal of logging in.

This so-called enabling task is a burden. And the very things that make it a burden allegedly make each site more secure. The added steps took much more time to implement, and take more time to use, than the promise suggested. The added steps also had (we assume) the unintended consequence of causing the user to expose her strong password.

Include in the scenario that the change in security measures was a surprise. It's a question of degree: A userid/password combination on the same page is something many people are used to; it was a small bump on the way to accomplishing a task. Splitting the userid from the password was disturbing, distressing, and disruptive. Adding other measures to the authentication page was burdensome, annoying, and failed to demonstrate how implementing them helps the customer.

Making security decisions is complex. Making them in the context of users' work is nearly unheard of. These authentication measures are clunky and burdensome because they've been bolted on to the site rather than built in to the experience. There must be a better way -- a more effective, less burdensome way -- to prevent phishing and secure users' data.


Saturday, July 3, 2010

Don't make me stop this, rdio.

What is going on when a company asks you to surrender your credit card information to verify an account but says they're not going to charge the card?

[Screenshot: rdio's trial sign-up dialog asking for credit card information]

It was a trial. It was the extension of a 3-day trial to a 10-day trial. A *trial.* Nothing was said about "We need your credit card to set up an account" when I registered. (Though I wouldn't have done that, either.)

Are they asking for credit card information to ensure that the account registrant is real rather than a bot? What in credit card information ensures that? Nothing.

What is going on when a company asks a new user to surrender credit card information but says they're not going to charge the card is this: A blatant attempt to secure you as a paying customer, not secure your account or personal information. It's overkill, it's rude, and it is very uncool.

  • They're trying to manipulate the relationship so you'll automatically become a paying customer when you've exhausted your trial.
  • They're treating you like you're a customer already, and just giving you 10 free days. So, even if you click the button for a free trial, both of the buttons on the page mean the same thing:

[Screenshot: the two buttons on rdio's trial page]

That's not "verifying" the account. That's establishing an account against the will of the customer. That's sleazy.

Don't worry, we won't charge your card? Then why do you want it? There must be some other way to accomplish your goals, rdio, without asking me for payment information you don't need yet.

A better user experience would be to
  • Not ask for a credit card as verification for the trial, but instead do something fun, authentic, and clever, like checking in with me each day of the trial to see how much I love the service.
  • Help me tell other people how much I love the service.
  • Ask me questions about my usage so far.
Because I never used my 3-day trial, I don't know how the service is. They could have learned a lot by asking me about why that was and why I wanted to extend the trial when I did.

When I said I wanted to extend my trial, they assumed that I liked the service and wanted to keep going -- and that they could capture me as a customer at the first opportunity. But the opportunity for them was not the seducible moment or method for me. I'm going back to Pandora.

Monday, June 28, 2010

Authentication stinks.

I love researching and designing user experiences.

There are so many ways to make them not only not suck, but also to make them good, happy, even wonderful. Designers influence nearly every aspect of the user experience these days. UX people have a seat at nearly every table in the organization, helping to make great experiences for customers and users.

There's one table left: Security.

We don't have a seat next to the CSO because we have neglected that part of the experience, and because (usually) the CSO is a paranoid who is really scary, so we think we can't influence that part of the experience.

This is a call for action: Let's make friends with the security people. Let's teach them to look at the tradeoffs between security and usability. Let's help them understand that authentication is part of the customer experience that is so important, it could be killing the business.

Think of all the times you log into something each day, each time you identify yourself to something or someone. What's that like? Why are you putting up with it? Why are you letting your customers go through that?



In this blog, I'm going to catalog every encounter with authentication that I can get my hands on and discuss the design implications of what the imposer of the authentication is creating and possibly missing.