Thursday, September 22, 2011
Random factoids I've encountered in authentication user research so far
I've been doing user research about the experience of security and authentication for about a year now. Through a combination of interviews, surveys, and diary studies, I'm trying to put together scenarios of what all that authentication is like for people: what its hassle factors are and what burden it imposes. Are there productivity costs? Are there trust costs? Are the tradeoffs worth it?
Here are some factoids that are going into the models and scenarios:
The average person has between 7 and 25 accounts that they log into every day. These are log-ins for computing devices, networks, software, and web sites.
About half of the accounts people log into every day are necessary for their jobs.
People actually authenticate themselves -- prove who they are to someone or some system -- many more times in the typical day than they realize.
People report authenticating about 15 times in a typical work day on average. This is probably grossly under-reported.
People have rational coping mechanisms:
- They try to use the same passwords in as many places as they can.
- At work, they find the account with the strictest password policy, create a password schema to meet that, and use the same password in all the other places it will work.
- Everyone has a personal password schema or algorithm that they think is unbreakable. They're proud that the organization's IT security people have not discovered their schema and tightened the password requirements to defeat it.
- People will choose to do some tasks on one device over another because the authentication on one is easier (or embedded or automated somehow).
- People choose the strength of the password based on their perceptions of the importance of the account they're registering for. For example, many people say they use weak passwords for social networking sites and stronger passwords for medical and financial web sites.*
Nearly everyone records their passwords somewhere: paper, email, or password locker software.
The stronger the requirements for a password, the more likely the person will write it down.
The stronger the requirements for a password, the more likely it will have to be reset after being changed for expiration.
The less frequently the password is used, the more likely it'll have to be reset at next use.
The hassle of remote access to work systems keeps some people from doing work that they would otherwise do outside of normal working hours.
At work, most authentication happens in the morning, and then in the early afternoon.
* This is probably a bad idea, as there is much more personally identifying information in a social networking profile and the security of the back-end systems is generally less stringent on social network sites than for medical and financial services web sites.
Monday, March 21, 2011
Oh, Etsy. How could you?
During the last holiday season, I called Lands' End. I hardly ever call; I'm a huge fan of their online experience. I wanted to send a special order to my mother, putting two matching things in the same gift box. Landsend.com isn't really set up to do that, but the site said I could do it by phone, so I called.
The sales rep was friendly and efficient, and very helpful. She pulled up my order-in-progress and put everything in the box that I wanted to be in my mother's gift. When she asked me if there was anything else she could help me with, I blithely said, "You could check my mother's account and tell me what she's sending me for Christmas." The sales rep giggled, teased me a little bit by telling me that she could see my mother's account, but told me I would have to wait until UPS delivered it to find out. She protected the relationship between the seller and the buyer. She also protected the relationship between two buyers – me and my mother -- at least for that episode. If I wanted to find out what I was getting for Christmas, I'd either have to wheedle it out of my mother or wait.
Respect and research. That's all I ask.
Facebook had a go at Beacon, a service that broadcast to all your friends the purchases you'd made outside of Facebook, without permission. A class-action lawsuit followed, and Facebook eventually settled and took down the "service" in 2009.
Facebook has a history of screwing with the privacy of its users. Beacon was a prime example. The main problem here is the lack of permission. And that's the case for Etsy's new People Search, too.
The designers of Etsy decided it was a good idea to make everyone on Etsy searchable by name, including buyers. So, if you have ever bought anything on Etsy, you can now be found there by anyone else either by your real name or your username. Your whole profile is viewable, including your purchase history. Not only that, it'll all show up in Google search results.
The idea is that buyers would form social "circles" on the site to share information about their purchases.
Uninformed by research, guided by gut
These are the kinds of things that happen when an organization puts business goals before customer goals. It's also the kind of thing that can happen when an executive wakes up one day and says, "We want to be one of the cool kids. And right now, to be one of the cool kids, you have to have social media. How do we do that?"
What made Etsy think it needed a social layer on its beautiful, engaging site? It's the kind of thing that happens when teams decide to strap social on rather than looking at the conversation they're already having with customers and that customers are already having with one another.
I'm sure a lot of thought went into this decision of Etsy's. I fear it was a vacuum-sealed decision. Here's my imagined scenario, a scenario I've seen played out in other, similar decisions at other, similar organizations: Management, who forgets that their site is not the center of the universe for anyone outside that room, went to the product manager and asked for some of the social awesomesauce that is out there to turn up the buzz a notch. The product manager brainstormed with the team. The best idea they could come up with was to get customers talking online with one another about the fun, beautiful, interesting stuff they'd bought on Etsy. (Never mind that we already have Twitter for this.)
Where is the effort to make use of the conversation that Etsy is already having with its customers, or that buyers and sellers are already having with one another? They probably can't make use of these conversations because they haven't observed them. Where's the research to support this design decision?
It's hard for me to believe that if Etsy had conducted user research, or even informal but realistic usability testing of the idea, they would not have quickly seen the privacy violation. They could have avoided the damage control they now have to deal with because of the breach of trust with buyers who already love the experience of shopping there.
How Etsy could have avoided the problem and discovered a possibly great idea for engaging buyers even more
1. Analyze the risks of a social media strategy to users' privacy, security, and trust. Where was the business case for allowing search of users? How does having social "circles" support the business model, exactly? How would the social media strategy be supported on the back end? More than all that, let's look at others who have gone before us: Beacon on Facebook and Boden USA come to mind. What happened there? What could the Etsy team learn from those mistakes? Oh, and, why duplicate Facebook in any way?
2. Proof the concept with real people who shop on Etsy. This is pure conjecture based on my experiences with other organizations: Etsy may have thought that to up their game and get people more engaged in the site, they needed to get buyers talking with one another and not just to sellers. Charming idea. But how do you find out if people find that useful?
Focus groups? If there were focus groups, I'm just going to guess here that participants liked the idea, but there was no exploration of the implications of this profile information being public rather than private. Not ideal.
What else could they have done? Invited friends and family. This approach is still perhaps not optimal, because friendly participants might not have exposed the privacy problems. They are, after all, friends and family, so there's automatic trust and there are wanted connections there already. How about rolling it out to a very small number of key buyers -- 3 or 5 -- and watching what happens for a week or a month as they connect to their people, or until something bad and unintended happens?
3. Conduct usability testing with real people in real contexts to learn the ripples to real relationships. Let's say they did usability testing. Did they bring in real buyers to use a working prototype with their own data? Did it occur to anyone that now my ex can Google me (like he does) and find out that I bought my sister a Star Wars crochet pattern, or my current paramour a handmade can coozie? Or what about the fact that my clients could see all the personal things on my Etsy wish list?
A usability test with a limited "circle" on a closed sandbox (like a walled-off development or testing server) for a couple of weeks might have given them some clues about what might work and what might not.
Etsy, I love you, but I have to go now
Not only will Etsy have to clean up its own site by making the social features opt-in, but they'll also have to figure out a way to recover buyers' privacy. How does a web organization reclaim data that is now not in its control? If they could invent a big Web eraser to drag behind them as they invite buyers back to the site, they might have a chance.
Monday, January 17, 2011
Against the braindeadness of password policies: Andrew A. Gill's "Password Manifesto"
I was struck by the rational thoughtfulness of Andrew A. Gill's write up about password rules. Read the original post. I learned about it through a Twitter post from @mediajunkie.
With Andrew's permission, I'm excerpting here.
The Password Manifesto
by Andrew A. Gill...
For both of the people on the internet who do not know, Gawker's user databases have been compromised and the passwords stolen. I tried to log in to my account over there with any of the throwaway passwords I use when I don't care if I get compromised, and that did not succeed, so I'm pretty sure that I never changed it in the first place.
All of this makes me (much like this guy) wonder why I needed a password for that in the first place. I'm not interested in the Gawker community; I just wanted to let someone know that the Flight of the Bumblebee is not Fantasie Impromptu. Ideally, I should be able to sign up for an account, comment, and then disable the account, unregistering any password for it, requiring e-mail authentication if I ever want to comment again.
... The process of password generation for multiple accounts doesn't need to be difficult, but it invariably is because of the brain-dead password limitations that web sites give you.
Or, more often, don't give you. I have several passwords that are less secure than they could be because I'm not sure how long they can be, or in some circumstances because they have to be kept in sync with passwords that have to be shorter or can't use anything other than all caps or other idiocy.
And then these have to be kept alongside passwords that have to have letters and numbers and specials and have to have a number in the second position and can't have a capital on the last letter.
There is no reason why any password cannot have the 62 characters of [:alnum:], and you'll even have the rest of the printable ASCII characters left over to escape anything that Bobby Tables might throw at you. I should demand that all websites use Unicode for passwords. There are many different open source solutions for handling form input (with or without i18n) out there, not counting the proprietary ones. But let's settle for the `normal' characters just to get to some sort of common ground now.
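The Bobby Tables point is worth making concrete: when input is handled properly, no password character needs escaping at all. A minimal Python sketch, using the standard library's sqlite3 and a hypothetical users table (the plaintext storage here is only to demonstrate input handling, not a recommendation):

```python
import sqlite3

# Hypothetical users table for demonstration only -- a real system
# should store hashes, never the password itself.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")

# Parameterized queries pass values out-of-band from the SQL text,
# so quotes, semicolons, and keywords in a password need no escaping.
tricky = "Robert'); DROP TABLE users;--"
conn.execute("INSERT INTO users VALUES (?, ?)", ("bobby", tricky))
row = conn.execute(
    "SELECT password FROM users WHERE name = ?", ("bobby",)
).fetchone()
assert row[0] == tricky  # stored and retrieved verbatim
```

The same reasoning extends to any characters a user might type, Unicode included: the database layer never interprets them as code.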
But it's time to get to the manifesto. Yes, I know that most of you reading this already know what I'm going to say. Still, we need to make sure that this information is clear and easily understood by everyone so that we can come up with a standard set of procedures to implement.
(1) If nothing else, show the users what options they have for selecting passwords. What is the minimum? What is the maximum? Which characters are allowed and which are disallowed? Are there going to be password hints? Is the password stored on the remote system, the local system, both, or neither (with a system like shadow passwords)? If the remote system is keeping a copy of the password, will that password ever be sent to me in plaintext? Is that code going to be used as a password hint to be given by the local system or as a token given by the remote system to prevent phishing?
Without this information, users will default to using worse passwords than they can. If one password is compromised on your system, it might allow an attacker access to another part of the system where another flaw would leave your entire system compromised. It's bad enough that many systems don't allow good passwords, but we don't need to encourage people to use less secure passwords than they want to.
(2) Allow good passwords. The only excuse for disallowing any Unicode character in a password is that you're running a legacy system that disallows it and you cannot upgrade the system. There is no excuse for disallowing any of the 95 ASCII printable characters (32-126), and if your system disallows some of the 62 characters of [A-Za-z0-9], there's something seriously wrong with your input handling, which wouldn't even be able to handle LadiesMan217. And if you allow those for your usernames but not for the passwords, you've got your priorities backwards.
I suppose I shouldn't have to say this, but I have more than one password or PIN that is required to be less than ten digits long and all numbers. It may sound like I'm being sarcastic when I talk about Unicode on legacy equipment, but I get it. Many people still have legacy equipment and can't change for whatever reason, but unless you're transmitting passwords via Teletype, your system can handle it.
(2a) Allow long passwords. My WPA wireless password can be up to 63 characters long, and that's not the most sophisticated security out there. If every wireless hardware manufacturer has figured out how to implement that length, there is no reason why every software password system can't allow one that long. If the passwords are stored as hashes, increasing the size of the password will not increase any storage requirements on the remote system, since verification is done by checking the result of a cryptographic transformation on the password, and the results are a set length.
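The fixed-storage argument is easy to verify: whatever goes in, a given hash algorithm's output is always the same size. A quick sketch with Python's hashlib:

```python
import hashlib

short_digest = hashlib.sha256(b"pin1234").hexdigest()
long_digest = hashlib.sha256(
    b"a 63-character wireless passphrase or any other very long secret"
).hexdigest()

# SHA-256 always yields 32 bytes (64 hex characters), so a long
# password costs the server no more storage than a short one.
assert len(short_digest) == len(long_digest) == 64
```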
This naturally will get into a discussion of passphrases later. Be aware that simply using passphrases doesn't make you more secure--we tend to use a few common words for passphrases, with the result that one word in a passphrase may be worth only two or three printable characters in a password.
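That caveat can be put in numbers. Assuming, purely for illustration, that each word is drawn from a 20,000-word vocabulary, a word carries about 14.3 bits of entropy versus about 6.6 bits for one random printable-ASCII character:

```python
import math

VOCAB_SIZE = 20_000       # assumed everyday vocabulary size
PRINTABLE_ASCII = 95      # printable ASCII characters (32-126)

bits_per_word = math.log2(VOCAB_SIZE)           # ~14.3 bits
bits_per_char = math.log2(PRINTABLE_ASCII)      # ~6.6 bits

# Each common word is worth only a couple of random characters...
chars_per_word = bits_per_word / bits_per_char  # ~2.2

# ...so a four-word passphrase is only roughly as strong as a
# nine-character random password.
four_word_bits = 4 * bits_per_word              # ~57 bits
```

With a smaller, more realistic personal vocabulary, the per-word value drops further, which is exactly the manifesto's point.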
(3) Allow bad passwords. Good passwords are a good idea, but so are bad passwords. That doesn't mean allow four number passwords, but it does mean getting rid of the smarmy messages about how ihatethisstupidpasswordchecker is not a strong password since it is composed of words found in a dictionary.
People use passwords for different things, and sometimes they might want to use a bad password for one of a few reasons. I understand that you are concerned about keeping secure your whole database of which username wants to marry which character from Naruto. This is obviously vitally important and thus justifies the randomly generated FIPS-181 password that changes once every two weeks and cannot be set by the user—in case they have some burning desire to add another spouse to their fictional online polyamorous relationship. But it is at least conceivable that lebowskifan87 might not be too concerned that someone finds out that she likes Coen Brothers movies.
(4) Use some real thinking to determine when to require good or bad passwords. Does this password give you access to your money? Or other people's money? Perhaps confidential client information? You'll probably want to use a good password for that. But you won't force users to use good passwords if they don't care about your security. They'll use technically good passwords like Ab(d3 or write them down in an unsecured location. How often do you need to enter a particular password? If you have to enter it every time you come back to your PC or send a message, you'll probably want a short password, while a longer password may be useful for things that you only have to do once a day or so.
I have several passwords that I either no longer remember or had to write down in an unencrypted file because the ones I wanted to use were too insecure for the webmasters at some blog that forces people to register in a lame attempt to convince its readers that they're part of a `community.' I also have several passwords that I have to use for work to handle client data and do things like log onto my work PC, which I lock every time I get up from it. I choose high-entropy, difficult-to-remember passwords, and let muscle memory take me the rest of the way. A couple of days after changing a password, I have no trouble entering the whole thing, even with my typical fat-fingeredness. If I got locked out after a few bad attempts, I'd quickly find myself out of a job trying to enter a 63-character passphrase blind.
(5) Never default to send a user's password via plaintext e-mail without a prior request to do so. Plaintext e-mail, as has often been said, should be treated like a postcard. Expect that any plaintext message can be intercepted and read, because, well, it can be intercepted and read. For whatever reason, some users may want to use plaintext e-mail to have their passwords sent to them from time to time, either as a reminder or as part of a forgotten password recovery procedure. We aren't going to see that go away anytime soon, but many users are not going to want their passwords routinely transmitted insecurely, and we should honor their wishes. At the very least, the user should have to opt in to this type of behavior.
I have had to change passwords on multiple occasions when they were sent to my e-mail address in plaintext, usually as helpful weekly password reminders. This is just downright silly.
Well, that's pretty much it, but here are some advanced tips that I don't expect can be used in every case. I recommend using them in every case, even when there are legitimate objections, because they are part of best practices, and I suspect that the objections will be overcome in a few years, at which point these will be part of standard operating procedure.
(A) All passwords should be transmitted through encrypted communications. This means using SSL for your servers. SSL and certificates aren't cheap, so I understand why you might not want to do this, but there are other ways to encrypt a password if this is not available to you. If you need to send a password to the user's e-mail, send it using the user's public key encryption, if available. If you can't afford to do any encryption at all, consider if you really need users to give you secure passwords if you're not going to keep them secure.
(B) Passwords should be stored as hashes of the password, not the password itself. By doing this, the password itself is not stored, and if the password database is cracked, the passwords themselves are not revealed. The cracker would have to generate a hash collision to get a password. Doing this also means that you won't be able to alert the user that the third digit of her four-digit PIN is the same as in the previous password she used for this account. Good riddance, in my opinion.
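A minimal sketch of hash-based storage using Python's standard library (PBKDF2 with a per-user salt; the function names are illustrative, not from the post):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest); only these are stored, never the password."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode("utf-8"), salt, 200_000
    )
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive the digest from the attempt and compare in constant time."""
    return hmac.compare_digest(hash_password(password, salt)[1], digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("Tr0ub4dor&3", salt, digest)
```

The random salt also means two users with the same password get different digests, so a cracked database can't be attacked with one precomputed table.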
(C) Users should be able to temporarily or permanently disable accounts, removing all passwords from the remote server's database. Whether that's because the user rarely comments, or because the user is paranoid, or because the user gave up the internet for Lent or because the user has been convicted of wire fraud and can't use a computer for the duration of her sentence is irrelevant.
(D) A very long password, like a sentence or some other multi-word phrase, may indeed be better than a shorter password, but it might make sense to include it as a sort of two-factor identification. Enter your passphrase at the start of a session, and that passphrase enables you to use the shorter password to authenticate when the program notices a five-minute idle and logs you out. The shorter password is useless to any attacker who is not physically at your computer while you are logged in with your passphrase.
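One way that two-tier scheme could be sketched (the class, method names, and timeout value are assumptions, not part of the post): the passphrase opens a session, and after an idle logout only the short password, worthless without the open session, is needed to resume.

```python
import time

IDLE_LIMIT = 300  # seconds; assumed five-minute idle timeout

class Session:
    """Sketch: a long passphrase opens the session; a short password
    resumes it after an idle logout."""

    def __init__(self, passphrase_ok, short_password):
        self.active = passphrase_ok          # passphrase verified elsewhere
        self.short_password = short_password
        self.last_seen = time.monotonic()

    def resume(self, attempt=""):
        if not self.active:
            return False                     # no session: short password is useless
        idle = time.monotonic() - self.last_seen
        if idle > IDLE_LIMIT and attempt != self.short_password:
            return False                     # idle logout: short password required
        self.last_seen = time.monotonic()
        return True
```

An attacker who steals only the short password gains nothing, because `resume` refuses it unless a passphrase-opened session already exists on that machine.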