Code for America

Lead UX Writer, Freelance

TL;DR

After joining as the Lead UX Writer during a tree test of our site's new information architecture (IA) labeling terminology, I helped analyze whether users could easily navigate and understand the site. Many terms, like "Research Papers" and "Data Amateur," didn't match what our users expected. In fact, over half of the participants were confused by at least five of the labels. Based on these insights, I worked with the design team to replace the confusing terms with ones that better matched our users' language.

Situation

I joined as the Lead UX Writer at the end of a tree test of the site's newly established information architecture (IA), both to provide insights and to understand the research team's process. The test aimed to evaluate whether users could complete basic tasks within the IA and to identify any pain points or organizational issues. We recruited 39 volunteer participants via Slack channels; all were regular users of the site and represented a diverse range of demographics.

Task

I aimed to determine whether users could perform basic tasks in the IA. This involved analyzing participant responses to the tasks and evaluating the suitability of the content labels and the overall structure of the IA.

Actions

  1. I participated in the research team's synthesis session to understand their process and contribute insights from a UX writing perspective. 
  2. I guided the analysis by posing five critical questions:
    1. "Are we using terms from our users' lexicons?" 'Research papers' seemed like more of an academic label, which may have differed from what participants thought when looking for real-life examples.
    2. "Are we using inclusive terms?" For example, one of the prompts asked participants which option best described someone unfamiliar with open data. A 'data amateur' was the only logical option, but the term 'amateur' carried a negative connotation.
    3. "Were the tasks and questions fair?" Some of the tasks may have been leading, double-barreled, or phrased so that participants could not make representative decisions.
    4. "What can the design team tell us?" They had just wrapped up a hi-fi wireframe prototype and could have information from which we could benefit.
    5. "How about we just ask the participants which words work best?" The team had not conducted a card sort, so the opportunity for participants to choose the terms themselves could clarify the fuzzier concepts.
  3. I reviewed the destination chart from the tree test and noted that 55% of the navigation labels confused participants; for example, many mistook "Project Overview" for "Request System Background."

Spreadsheet of the original site map.

Result

The analysis revealed that several terms used in the IA did not align with user expectations or lexicons: "Research Papers" was too academic for our audience, "Data Amateur" was perceived negatively, and some tasks steered participants astray because they were leading or double-barreled. In short, the tree test highlighted significant confusion with the existing labels and underscored the need for a user-centric approach to relabeling.

Screenshot of the data analysis and report for the tree test.

Outcomes

In response to these findings, we collaborated closely with the design team, who had just wrapped up a hi-fi wireframe prototype. Their input was crucial for understanding the context and potential implications of the IA changes. We then decided to run a survey inviting participants to contribute to the labeling process, so that the IA's terms would be user-generated and more intuitive.