seen + learned

Using semantic differentials to evaluate visual design preferences

Posted: Monday, June 9, 2014 | Posted by Debby Levinson

We recently completed a redesign for LoveToKnow, a site that provides advice on everything from beauty tips and pet health to travel recommendations and party-planning. LoveToKnow wanted a clean, clear design that welcomed users while also feeling believable and authoritative. The new designs had to reveal the site’s wealth of content without being overwhelming, encouraging people to spend more time browsing relevant articles and, ultimately, to turn to LoveToKnow as a trusted source for help with all kinds of topics.

LoveToKnow wanted to test designs before finalizing them for launch. A/B testing was our first choice, but wasn’t an option for technical reasons. How could we confirm whether users preferred the new look and feel?

Visual design preferences are by definition subjective. However, tools like semantic differentials and Likert scales give people the terms they need to describe subjective opinions by choosing from a range of possible answers between two opposing statements. With this data, researchers can quantify what otherwise seems unquantifiable.
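
To make that concrete, here is a small illustrative sketch of how responses on a single scale can be turned into numbers. The seven-point coding and the sample ratings below are hypothetical, not data from this study:

    # Hypothetical coding of one semantic differential scale.
    # We assume a 7-point scale here; the post doesn't say how many
    # points LoveToKnow's scales actually used.
    from statistics import mean

    # 1 = "not believable" ... 7 = "credible" (made-up sample ratings)
    credibility_ratings = [6, 5, 7, 4, 6, 5, 6]

    print(f"Mean credibility: {mean(credibility_ratings):.1f} out of 7")
    # Averages near 7 suggest the design reads as credible; near 1, not believable.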

Sample semantic differential scales

Designing the test
LoveToKnow had specific questions they hoped testing would answer. We knew our audience: primarily middle- to upper-income women with some college education and moderate internet experience. We broke the tests into three groups: one that set a baseline by evaluating the current designs before the new ones, and two that saw only the new designs, in different orders in case viewing order affected preference.
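
As a rough sketch of how that kind of three-group split can be set up, here is one balanced assignment approach; the participant IDs and group labels are hypothetical, not the actual study roster:

    # Illustrative round-robin assignment into the three test groups.
    # Group labels and participant IDs are made up for this sketch.
    from itertools import cycle

    participants = ["P01", "P02", "P03", "P04", "P05", "P06"]

    groups = {
        "baseline: current designs, then new": [],
        "new designs only, order A-B": [],
        "new designs only, order B-A": [],
    }

    # Round-robin keeps group sizes balanced; the two "new designs only"
    # groups see the designs in opposite orders to control for order effects.
    for participant, group in zip(participants, cycle(groups)):
        groups[group].append(participant)

    for group, members in groups.items():
        print(group, members)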

We began by setting up a scenario so that test participants would all approach the designs with the same mindset: imagine you have a puppy, and you’re wondering how much bigger it will get. You type “is my dog full-grown?” into a search engine and click through to this page. Beginning with a scenario familiar to the dog owners we recruited made the article page designs immediately relevant, and a more realistic scenario is more likely to yield honest results.

Once users had the right mindset, we asked open-ended questions to gather general impressions of the page and website: What did people think the page offered? What did they think the site offered? What would they click on?

We also provided three semantic differential scales:

  • credible / not believable
  • inviting / unappealing
  • helpful / waste of time

Finally, we provided the same scales for three home page design options to help us choose a winner.

Building the test
Designing the questions, as it happened, was the easy part. The much trickier part was creating a test that would help us get answers through remote testing. We’ve had good experiences with usertesting.com before and believed we could make it work here, too; it was just a matter of linking our questions and designs to guarantee that users would proceed in a straight line instead of accidentally meandering off into the woods.

We settled on a split-screen approach: the questions would be coded in SurveyMonkey and show up in a left-hand frame. In a larger, right-hand frame, we’d display the page designs so that people would have them to refer to as they answered the questions. We also set things up so that clicking anywhere on a design would take people to the next one to review.
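
For readers curious how a harness like that might be wired together, here is one way a split-screen page could be assembled as static HTML. The survey URL and image filenames are placeholders, and this sketch is only an approximation, not the setup we actually built:

    # Rough sketch: generate a static split-screen test page with the survey
    # in a narrow left iframe and the designs in a larger right iframe.
    # Clicking anywhere on a design advances to the next one.
    # The survey URL and image filenames are placeholders.
    from pathlib import Path

    SURVEY_URL = "https://www.surveymonkey.com/r/EXAMPLE"  # placeholder
    designs = ["article-old.png", "article-new.png", "home-option-2.png"]

    # One small wrapper page per design; a click loads the next design.
    for i, image in enumerate(designs):
        has_next = i + 1 < len(designs)
        on_click = f"location.href='design{i + 1}.html'" if has_next else ""
        Path(f"design{i}.html").write_text(
            f"<body style='margin:0' onclick=\"{on_click}\">"
            f"<img src='{image}' style='width:100%'></body>"
        )

    # The main page: questions on the left, designs on the right.
    Path("test.html").write_text(
        "<div style='display:flex;height:100vh'>"
        f"<iframe src='{SURVEY_URL}' style='width:30%;border:0'></iframe>"
        "<iframe src='design0.html' style='width:70%;border:0'></iframe>"
        "</div>"
    )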

Over and over through our dry runs, though, we found that people got off-track immediately. They couldn’t help but click on what looked like a real web page to them, and no amount of written instructions stopped that basic behavior. (A real-life reminder of the usability truism that people often ignore written instructions!)

What did eventually curb the clicks was presenting test participants with a thumbnail of the appropriate page design in the survey, and asking them to confirm they were looking at the right design before answering questions. The colorful image broke up the survey’s wall of text and grabbed enough attention for people to stop, read, and understand.

The split-screen test: questions on the left, design on the right

After that, it was just a matter of watching test videos and analyzing our SurveyMonkey data, which the site helpfully provides as charts and free text.
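
SurveyMonkey can also export the raw responses, so the scale ratings can be summarized outside the site as well. Here is a generic sketch of that kind of summary; the CSV filename, column names, and 1-to-7 coding are assumptions for illustration, not the actual export format:

    # Generic sketch of summarizing scale ratings by design from a CSV export.
    # The filename, column names, and 1-7 coding are illustrative assumptions;
    # a real SurveyMonkey export needs its own column mapping.
    import csv
    from collections import defaultdict
    from statistics import mean

    scores = defaultdict(list)  # (design, scale) -> numeric ratings

    with open("responses.csv", newline="") as f:
        for row in csv.DictReader(f):
            for scale in ("credible", "inviting", "helpful"):
                scores[(row["design"], scale)].append(int(row[scale]))

    for (design, scale), ratings in sorted(scores.items()):
        print(f"{design:12} {scale:10} mean={mean(ratings):.2f} n={len(ratings)}")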

The final results
Some of what we discovered:

  • The new article page design was slightly more likely to encourage exploring the site. Participants found both the old and new pages credible; the design itself had little to no effect on perceived credibility, and there was scant difference in appeal.
  • However, the new design unquestionably helped people understand that LoveToKnow provided information far beyond just dog content, an important improvement.
  • Results for all three home pages were largely positive and equal, but our second design had an edge over the others, particularly in terms of credibility.
