17 November 2025
We’ve released the pilot implementation report for the Australian Government artificial intelligence (AI) assurance framework, which presents findings that will inform updates to the AI impact assessment tool.
The pilot, held in late 2024, tested a draft impact assessment tool across 21 agencies. It was designed to help public servants working on AI projects identify, assess and manage AI impacts and risks. The draft tool – known as the ‘AI assurance framework’ at the time of the pilot – guides users through a step-by-step assessment of their AI use case. We published this pilot draft of the framework and supporting guidance in October 2024, during the pilot period.
To complete the assessment, users need to consult relevant experts and document the steps they are taking to ensure their AI use case aligns with Australia’s AI Ethics Principles.
This impact assessment process helps agencies design and deploy AI use cases that are safe, transparent and accountable. The assessment tool reaffirms the Australian Government’s commitment to using AI safely and responsibly, in line with community expectations and in a way that maintains public confidence.
The pilot found the impact assessment tool provides a strong foundation for assessing AI use case impacts and risks and supporting responsible adoption. Pilot participants reported that the process improved their ability to identify risks that were not addressed by existing governance arrangements, including risks relating to fairness, transparency and explainability.
“Understanding the impacts and risks of AI, and how to manage them, equips agencies with confidence to innovate with AI, while ensuring systems operate safely, fairly and as intended,” said Lucy Poole, Deputy Chief Executive Officer, Strategy, Planning and Performance.
“The pilot demonstrates the value of clear guidance and consistent standards to support responsible use of AI across government.”
The DTA will incorporate the pilot findings into an updated AI impact assessment tool, expected to be released later this year. The enhanced tool aims to give agencies clearer guidance, improved usability and greater flexibility to align with their internal operational requirements. Renaming the tool will clarify its scope and purpose, emphasising that it complements and strengthens existing risk management and assurance processes rather than replacing them.
These updates, alongside the suite of existing DTA policies and guidance, will continue to help agencies adopt AI responsibly and deliver better outcomes for the community.
To read the full report, visit https://www.digital.gov.au/policy/ai/ai-assurance-framework-pilot-report