11. Measure performance

Measure the performance of your service and understand what outcomes it is delivering. Report results to your stakeholders openly and regularly to encourage continuous improvement.

Why it's in the Standard

Every service should aim for continuous improvement. Metrics are an important starting point for discussions about a service’s strengths and weaknesses. By measuring performance, you can help answer questions that people care about:

  • what is and isn’t working?
  • what value or benefit does this service provide to users?
  • where can a service be improved? (time, cost, ease of use)
  • have we achieved what we set out to achieve?

There are a few common problems to avoid in performance measurement for government services:

  • the achievement of targets or outputs becomes more important than meeting user needs
  • reliance on what is easy to count, rather than what should be counted
  • failure to adjust to social, demographic, and economic factors that impact performance data
  • measuring after something has changed, without getting a baseline to compare to first.

What is a performance indicator?

A performance indicator is a signal used to monitor a service or system. These signals can include:

  • inputs/outputs – such as money, headcount, physical resources, human resources, time
  • activities – such as calls responded to, units delivered, reports completed
  • outcomes – the result a person experienced because they used a service.

Input and activity data provide only part of the picture of how a service is performing; user research can help you learn why it is performing that way.

By using a combination of qualitative and quantitative indicators, you will be better placed to understand what is happening for your service and why.

Key performance indicators

Identifying and capturing the right indicators can ensure all your decisions for new or existing services are evidence-driven. The following key performance indicators are a strong start to monitoring your service:

  • outcome – what outcome happened for a person because they interacted with the service?
  • value add – what are the user benefits of this service? What is the return on investment?
  • completion rate and time – how easy or fast is it for a person to move through your service? How many people drop out and do not complete, and why?
  • cost per transaction – how cost efficient is your service for the user and for government?
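To make these indicators concrete, here is a minimal sketch of how a team might calculate completion rate, average completion time and cost per transaction from raw session records. All figures, field names and the operating cost are illustrative, not from the Standard:

```python
from statistics import mean

# Hypothetical session records: (completed, minutes_taken)
sessions = [
    (True, 12.0), (True, 9.5), (False, 4.0),
    (True, 15.0), (False, 2.5), (True, 11.0),
]

completed = [s for s in sessions if s[0]]

# Completion rate: share of people who made it all the way through
completion_rate = len(completed) / len(sessions)

# Average time for people who did complete
avg_completion_time = mean(t for done, t in sessions if done)

# Cost per transaction: total channel cost divided by completed transactions
total_operating_cost = 12_000.0  # illustrative figure only
cost_per_transaction = total_operating_cost / len(completed)

print(f"Completion rate: {completion_rate:.0%}")
print(f"Average completion time: {avg_completion_time:.1f} min")
print(f"Cost per transaction: ${cost_per_transaction:.2f}")
```

In practice the session records would come from your analytics or case-management system, and the drop-outs (the `False` records) are the cases to follow up with user research.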

There are other metrics your service can use to understand how it is performing, such as:

  • error rates
  • uptime – including cloud service performance
  • response/load times – use global standards to compare your results (current best practice is 2 seconds or less)
  • content metrics – such as readability level, scroll depth, and search bar use
  • audience – is your service reaching the expected demographics? Why or why not?
  • repeatability – is the service provided in a consistent and dependable manner?
  • digital take-up – this shows how your digital service is being used compared to alternate channels. It can be helpful to understand which channels are more popular or accessible, and why. Remember that not all services and not all people are able to use a digital option
  • user satisfaction – where and how you ask for this feedback matters. Consider face-to-face and anonymous avenues for feedback to support people in providing honest responses.
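As an illustration of the response-time and uptime metrics above, the sketch below checks a set of hypothetical response times against the 2-second benchmark and calculates monthly uptime. All numbers are made up for the example:

```python
# Hypothetical page response times (in seconds) from a monitoring tool
response_times = [0.8, 1.2, 3.1, 0.9, 1.7, 2.4, 1.1, 0.6]

BENCHMARK_SECONDS = 2.0  # the "2 seconds or less" best practice noted above

# Share of requests that meet the benchmark
within_benchmark = sum(1 for t in response_times if t <= BENCHMARK_SECONDS)
share = within_benchmark / len(response_times)

# Uptime over a 30-day month: available minutes / total minutes
minutes_in_month = 30 * 24 * 60
downtime_minutes = 86  # illustrative outage total
uptime = 1 - downtime_minutes / minutes_in_month

print(f"{share:.0%} of requests within {BENCHMARK_SECONDS:.0f}s")
print(f"Monthly uptime: {uptime:.2%}")
```

Tracking the share of requests within the benchmark (rather than only the average) makes slow outliers visible, which averages can hide.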

If your service is compliance-based, ways to measure user satisfaction could include:

  • communication and expectations – people know what to do and how long it will take up front, and what they are told is what happens
  • fairness – people may not be happy with the result, but feel they were treated fairly and with respect throughout the service
  • behavioural changes – for example, people demonstrate changes in actions or decisions
  • self-identified results – people provide feedback on how a service made a difference in their life or helped them meet a need.

Be wary of commercial methods of measuring user satisfaction such as Net Promoter Score. These types of measurement do not always translate well in the public arena because they assume people using a service can take their business elsewhere or are interacting with the service by choice.

Share your results

No service can be monitored in a vacuum. Share results with your team to identify areas of improvement. This keeps the team human-focused and makes it easier to explore and prioritise enhancements.

Share results with your stakeholders to tell the story of how the service is currently working, and to raise any issues that may impact service delivery as early as possible. Think beyond reports: consider showcases, blogs, short videos and inter-agency meet-ups. Find interactive ways to communicate your service’s results more broadly.

Share key metrics with the public and the people who access your service. This builds trust and can encourage people to give you feedback, because it demonstrates that you listen to feedback and use it to make the service better for them and others.

Always consider the audience you share your performance results with, and ensure you are meeting the Australian Privacy Principles and applicable privacy laws.

How you can meet the Standard

In Discovery

During the Discovery stage, you’ll have started early measurement activity by:

  • collecting a baseline of what is measured now and why
  • exploring what data is already available, where it’s kept and how you might access it
  • outlining key questions on what you want to know, and how you might get the answer.

In Alpha

In Alpha you will need to consider how you will measure your service in more detail. By the end of Alpha you should have:

  • explored what data is already available, where it’s kept and how you might access it
  • combined this existing data with your own insights from research
  • collected a baseline of what is measured now and why
  • collected baseline data for service operation in all its channels
  • started creating a performance framework outlining your goals and what metrics your team will use to demonstrate whether you meet them.

In this performance framework you should:

  • be able to explain your assumptions
  • be transparent about any statistics
  • be able to explain the rationale for the performance indicators you’ve chosen.
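One way to start such a framework is as a simple structure that records each goal’s metric, baseline, target, rationale and assumptions, so all three points above are captured in one place. This is a sketch only; the goals, figures and the `meets_target` helper are hypothetical, not part of the Standard:

```python
# A minimal performance framework sketch: each goal maps to a metric,
# a baseline, a target, and the rationale and assumptions behind it.
performance_framework = {
    "people can complete the service unassisted": {
        "metric": "completion rate",
        "baseline": 0.58,   # illustrative baseline from Alpha research
        "target": 0.75,
        "rationale": "drop-off points were identified in user research",
        "assumptions": "sessions under 30 minutes count as one attempt",
    },
    "the digital channel reduces cost": {
        "metric": "cost per transaction",
        "baseline": 18.40,  # illustrative dollars per transaction
        "target": 12.00,
        "rationale": "digital take-up should lower per-transaction cost",
        "assumptions": "staff costs apportioned evenly across channels",
    },
}

def meets_target(entry, observed):
    """Return True if an observed value meets the goal's target.
    Direction depends on the metric: costs should fall, rates should rise."""
    if "cost" in entry["metric"]:
        return observed <= entry["target"]
    return observed >= entry["target"]
```

Writing the assumptions and rationale down next to each indicator makes them easy to revisit and explain when the framework is reviewed at the end of Alpha.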

In Beta

By the end of Beta you will be able to show:

  • which metrics and measurements you will use to monitor your performance
  • the baseline measures and the benchmarks for success
  • which tools you used for analysis and web analytics in Beta (and in Alpha, if appropriate)
  • what you have learned from qualitative and quantitative data (for example, key evidence).

During public Beta you will have tested your methods for data collection and validated that the data is accurate.

As you go Live you should be able to show past performance data and the improvements made to the service based on those findings.

Your data should show:

  • how and where the service is delivering value
  • the outcomes the service is delivering
  • completion rate has improved (for an existing service being redesigned) or been maintained (for a new service moving from Beta to Live)
  • cost per transaction is decreasing in line with service plans
  • usage rate is increasing in line with service plans.

Further reading