Caption: James Broadbent, Performance Dashboard Business Analyst and Engagement Lead, discussing the dashboard.
We are the first service to report against the cost per transaction key performance indicator (KPI), and we’ve decided to report on a host of other metrics to really be transparent about how we are achieving our goals (you can see them on our dashboard). In our own journey applying the Digital Service Standard we have learned a number of lessons to share with other agencies.
Reporting against the mandatory KPIs is obviously very important, but just as important are the other metrics that relate to the specific objectives of the service. How can you measure success, or identify areas for improvement, if you’re not reporting against metrics that relate to what you set out to achieve?
Defining what a transaction is allows the team to calculate things like cost per transaction and completion rate. If this is done early in the process the team can build the mechanisms it needs to capture data and can pinpoint parts of the service that could benefit from further iteration and improvement.
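Once a transaction is defined, a metric like completion rate falls out of simple counts. As a minimal sketch (the counts below are hypothetical, purely for illustration):

```python
def completion_rate(started, completed):
    """Share of users who finish a transaction they started."""
    return completed / started

# Hypothetical counts for illustration only
print(f"{completion_rate(1200, 960):.0%}")  # → 80%
```

The hard part is not the arithmetic but agreeing early on what counts as "started" and "completed", so the capture mechanisms can be built in from the beginning.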
Engagement is so important. Not only is it needed in order to get approval for data to be published, but the feedback of a broad range of stakeholders is a valuable resource that should be tapped into. This input also provides valuable insight into the type of information that senior executives, or service managers, will be interested in when using the dashboard, which encourages the use of the dashboard as a tool for driving continuous improvement.
What can be tricky
We knew from our engagement with agencies that already report to the Performance Dashboard that it’s not always straightforward trying to report against the KPIs. Given that no other services currently report against the cost per transaction KPI, we have been working with a broad range of stakeholders to develop a calculation methodology. We’ve developed, and now applied, a straightforward cost-per-transaction calculation and a suite of other metrics that can be used to demonstrate efficiencies gained. We have calculated our cost per transaction by considering the cost to the DTA of making the Performance Dashboard available and keeping it up to date.
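At its core, the calculation divides the cost of running the service over a period by the transactions completed in that period. A minimal sketch, with hypothetical figures (the actual DTA cost inputs and transaction counts are not reproduced here):

```python
def cost_per_transaction(total_cost, transactions):
    """Total cost of making the service available and keeping it
    up to date over a period, divided by transactions completed
    in that same period."""
    return total_cost / transactions

# Hypothetical figures for illustration only
print(round(cost_per_transaction(12000.0, 900), 2))  # → 13.33
```

The methodological work is in deciding which costs go into the numerator, which is why broad stakeholder engagement was needed before the formula itself could be applied.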
Reporting dashboard user satisfaction is also problematic. We received only one response for both December and January. As we have an average of 900 users each month, to have any statistical usefulness we need at least 17 responses to our survey. That would give us a 90% confidence level in the feedback received, allowing for a 20% margin of error. Until we reach this level of feedback we will report ‘no data’ for our user satisfaction metric. The other problem is that users are leaving feedback for specific dashboards, as opposed to the entire Performance Dashboard as a platform. To improve this we are thinking more broadly about how we can make it easier for users to tell us how they rate the whole platform, as well as the specific service dashboards.
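The threshold of 17 responses comes from the standard sample-size formula with a finite population correction. A sketch of the calculation, assuming the most conservative proportion (p = 0.5) and a z-score of about 1.645 for 90% confidence:

```python
import math

def required_sample_size(population, z=1.645, margin=0.20, p=0.5):
    """Minimum survey responses needed for a given confidence level
    (via z-score) and margin of error, applying a finite population
    correction. p=0.5 gives the most conservative (largest) estimate."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# ~900 monthly users, 90% confidence, 20% margin of error
print(required_sample_size(900))  # → 17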
What the benefits are
To us the benefits of reporting against the KPIs and more broadly against our own objectives have been obvious.
Firstly, we are being open and transparent, which gets to the heart of what the Performance Dashboard is trying to achieve. Secondly, we have been able to demonstrate that agencies can report against cost per transaction. Thirdly, the data provides us with clear opportunities for improvement. These learnings, and more, help us iterate and improve, iterate and improve, iterate and…
Importantly, we have a platform where we can share this journey. You can follow our progress by going to www.dashboard.gov.au.