December 17, 2005
I recently ran one of our regular usability sessions for our hospital web site. It's always a fascinating experience watching real users fumble around navigating our site, and the web team always learns a lot from it.
This time was no exception and so I thought I'd post a few thoughts and takeaways from it.
The first takeaway: test with real people. By 'real people' I mean users who are representative of your web site's audience. For the section of the site we were testing, these were parents - we are a children's hospital, after all. If you try to make decisions about your site without physically watching how people actually use it, you are going to make assumptions, and potentially mistakes.
If you haven't user-tested your site recently then you really should make this your number one priority. Yes, it can be time-consuming to set up and execute, and depending on your circumstances difficult to arrange, but the value of what you learn more than makes up for this. I don't think I can overstress this point.
Even if you're analyzing your web site's stats, getting user feedback from surveys, and reviewing your site search logs (you are doing all this, right?), if you're not physically watching people use your site you are missing a vital component in understanding how usable it actually is.
I like to test with external users who do not have any sort of working relationship with the hospital so that they aren't tainted by any 'insider' knowledge of our site and its content.
However, it takes a lot more to set up a user test with external participants than it does with, say, some folks from accounting - who never come into contact with our web site during their normal work but are very likely to be parents and thus be appropriate participants.
So, if you're having trouble recruiting external participants for your usability studies, use internal folks instead. As long as you find out how familiar they are with your site and ensure that they are (fairly) representative of your site's audience, then there's no reason not to make use of this readily available pool of users.
In addition, I've found that internal users are inherently interested in the concept of user testing (the novelty factor, I guess) and are only too willing to give up a half hour or so of their time to help you.
Another takeaway: keep the follow-up lightweight. By 'follow-up' I mean the writing up and analysis of what you saw. I have very little free time at work, so the last thing I want to do is write a long report after each usability test. Consequently, I prefer to focus quickly on what we learned and what we can do to make the site better.
Of course, sometimes full write-ups are the way to go (for example, if you're running studies over a period of time), but there's certainly no harm in simply getting the testing team together after the users have gone to review what you observed and identify the key takeaways and action items.
After a usability test I find there are always obvious fixes to be made (e.g. no one noticed that link, so let's make it more obvious; the label for this page is confusing, so let's change it). These are the sorts of things you can just go ahead and do, no further discussion needed.
Other times, you may conclude that a certain section of your site doesn't work too well, but you need to do more analysis in order to determine what to do about it. This is where you will need to write up your results so that you can use them for reference as you investigate further.
That's enough for the moment. I'll write up some thoughts about what I've learned from usability testing in another post.