Developing a new site structure for the DTA website

Our DTA website started out as an interim site and minimum viable product. But over the past year, our agency has grown and our needs and work have changed.

Version 1.0 of the new information architecture with the following menu items: Contact Us, Jobs, News and blogs, Help and advice, Our projects and About us.

Through our discovery work, we also learnt that 54% of visitors to our website were having problems finding what they wanted — so it was time for a revamp, especially to our menu and site structure, which had outgrown its original design.

We conducted extensive user research and experimented with ways to make the findings come to life.

Doing some research

To improve the website, we first needed more information about people's needs.

As the content strategist, I was responsible for the information architecture. In particular I needed the user research to give me a clear understanding of users’ needs, goals and tasks.

Our team conducted research with over 300 people. This included 20 interviews, 11 card sorting sessions, 5 tree tests on the site map, 2 surveys and usability testing.

IA testing using TreeJack

A great thing we did on this project was run multiple tree tests on the site structure as it developed. A tree test helps you understand if people can find their way through the site structure. We used the online tool TreeJack to run these tests.

This approach allowed us to track how we were performing over time. We ran 5 tests over the course of the project. We tested quickly and often.
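Tracking performance over time comes down to comparing each version's overall task success rate. Here is a minimal sketch of that comparison; the tallies are invented placeholders, not the actual test data:

```python
# Overall task success rate per version of the site structure.
# All numbers below are illustrative placeholders, not the real results.
results = {
    "v0.1": {"successes": 48, "attempts": 50},
    "v0.2": {"successes": 45, "attempts": 50},
    "v0.3": {"successes": 70, "attempts": 100},
    "v0.4": {"successes": 82, "attempts": 100},
    "v0.5": {"successes": 90, "attempts": 100},
}

def success_rate(tally):
    """Percentage of tree-test tasks completed successfully."""
    return 100 * tally["successes"] / tally["attempts"]

for version, tally in results.items():
    print(f"{version}: {success_rate(tally):.0f}%")
```

Plotting these percentages side by side gives the kind of column graph shown below.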

A column graph showing test results from 5 versions of the website structure. The results range from 70% to 96%.

Caption: We ran tests on 5 versions of the site structure. The early results are high because we were testing level 1 categories with internal users.

We ran the first test very early in the project, on version 0.1 of the new information architecture (IA). This version featured level 1 categories that we thought might be simpler and more effective.

We only ran the test internally because we wanted quick feedback. We also didn’t want to engage external users until we were more certain of the direction. We iterated based on those results to produce version 0.2. We tested internally again but with a different group of participants.

These early insights were fabulous. They gave us ideas about what resonated internally. They also gave me, as someone new to the organisation, a sense of where the strategic priorities were and how the organisation viewed itself.

The next three tree tests we ran were all with external participants. In test 3 we introduced level 2 categories, and by test 5 we were testing the whole site structure. At each stage, we modified and refined the IA.

We used a fairly consistent set of tasks that reflected user and business priorities. Over the 5 tests, most of our changes to the site structure were successful. But we did have our share of failures. This forced us to rethink decisions and look closely at the data.

A diagram showing user paths through the website.

Caption: A diagram of user paths. This makes analysis much easier.

From the tree test we got a visual representation of user paths. It shows where people go and when they zigzag or backtrack through the site structure, which usually indicates where they are having difficulty deciding on a path.
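One way to spot that backtracking automatically is to check whether a recorded path ever revisits a node. This is a sketch with made-up menu labels, not TreeJack's own output format:

```python
def backtracks(path):
    """Return True if the user revisited any node in the tree,
    i.e. went back up before settling on a final destination."""
    seen = set()
    for node in path:
        if node in seen:
            return True
        seen.add(node)
    return False

# A direct path: straight down the tree to the answer.
direct = ["Home", "Help and advice", "Guides"]
# An indirect path: into 'Our projects', back to 'Home', then correct.
indirect = ["Home", "Our projects", "Home", "Help and advice", "Guides"]

print(backtracks(direct))    # False
print(backtracks(indirect))  # True
```

Counting how many participants backtrack on a given task is a quick proxy for where the structure is confusing.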

What we found through testing was that the site structure was more successful when there were fewer entry points. The DTA's website has to describe a broad mix of services, projects and functions, and some of the material is quite complex. Things didn't always fall neatly into groups. In the end, we found that providing fewer high-level choices in the level 1 navigation led to stronger task success and clearer pathways for users.

Visualising research and findings

As we conducted interviews and tests, we picked up a large range of insights, quotes and comments from participants. Normally I'd document this raw data in spreadsheets and mindmaps. Those tools are useful to me, but they weren't useful for communicating within our team, not least because the team was split between Canberra and Sydney.

So we tried something different. Another DTA project team showed us their approach to visualising research and analysis. It looked very effective, so we decided to give it a go.

Virtual post-it notes with comments from user research interviews with external audiences.

Caption: We used virtual sticky notes to present the raw data from user research.

This approach involved using a cloud-based tool that provides virtual sticky notes and infinite wall space to arrange them on. The tool — RealTimeBoard — allowed our team members to view the research in multiple cities and make comments and edits.

Initially we used the tool to collate and present the raw data. This involved:

  • creating a note for each user comment or insight
  • adding a simple participant identifier (like P1, P2, D1, D2) so we could track back the comments to a participant and research session but keep it anonymous on the board
  • grouping the notes by research question.
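The steps above map naturally onto a simple data model. Here is a sketch; the participant codes and comments are invented examples, not quotes from the actual research:

```python
from collections import defaultdict

# Each virtual sticky note holds the raw comment, an anonymous
# participant identifier, and the research question it answers.
# All values below are illustrative.
notes = [
    {"participant": "P1", "question": "Why do you visit the site?",
     "comment": "I come to read about current projects."},
    {"participant": "P2", "question": "Why do you visit the site?",
     "comment": "Mostly to check job openings."},
    {"participant": "D1", "question": "What is hard to find?",
     "comment": "Contact details for the right team."},
]

# Group the notes by research question, mirroring the board layout.
board = defaultdict(list)
for note in notes:
    board[note["question"]].append(note)

for question, group in board.items():
    print(f"{question} ({len(group)} notes)")
```

Keeping the identifier on every note is what lets you trace a comment back to its research session without naming the participant on the board.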

The result was that all our raw data was neatly set out. Team members could browse or search the data, and make comments.

Then we analysed the data, looking for common themes and insights. We did this by dragging the relevant notes from multiple participants into various groups. This process is called affinity mapping.

Putting it on the wall

Once we’d done this, at the top of each group we summarised the key outcomes or findings for that group of notes.

At one stage, our team was all together in the same office. So we printed the findings out, stuck them on the wall and I presented them back to the team. It worked very well because team members could browse the summaries, then scan the source participant insights that supported them.

Virtual post-it notes and symbols grouped around a diagram of the current website to map out what users think of it.

Caption: We used affinity mapping to organise the raw data. We then summarised the findings in a visual manner and put it on a wall for everyone to see.

This was a dynamic way for the team to get across a large amount of detail. It was easy to see the breadth of the data and the types of comments, and people could judge for themselves whether a summary or finding was reasonable. This made the research transparent, which is vital.

We have carried on doing this throughout the project. So we now have a visual story map of the project covering all the key points such as who the users are, what outcomes they are seeking, what their key tasks are and so on.

This forms a narrative around the project and the supporting research. It's a powerful way of communicating within the team and with others outside it. It's much more engaging than a slide deck or document because readers can check for themselves that the findings are supported by the evidence. In that way it is more participatory and helps the team collectively own the research and the findings.

Turning the research into features

Our user research is the backbone of our website redevelopment project. We are able to make progress because we have an ongoing cycle of research, design and testing. When we have difficulties or setbacks, we go back to the research to validate our assumptions and look for insights.

Visualising our research and findings helps us own the findings as a team. It also allows us to effectively communicate with other stakeholders.

Keep updated

We’ll be sharing more insights from our website work on our blog. You can sign up to our mailing list to stay updated on our progress.
