AI transparency statement
Last updated: 5 Dec 2025
The Policy for the responsible use of AI in government sets mandatory requirements for departments and agencies relating to accountable officials and transparency statements. This page details the DTA’s implementation of these policy requirements.
The DTA has two accountable officials under the policy: the Chief Technology Officer (CTO), Andrew Morrison, and the Chief Operating Officer (COO), Tom Gilmartin.
The CTO has primary responsibility for the following areas of the AI policy:
The COO has primary responsibility for the following areas of the AI policy:
The following areas have been identified as joint responsibilities of both accountable officials:
The DTA is adopting AI as part of the Australian Government’s commitment to digital innovation. For more information, see the section on adopting emerging technologies in the Data and Digital Government Strategy.
The DTA is committed to demonstrating, encouraging and supporting the safe and responsible adoption of AI within the Australian Public Service, and in digital and ICT investments, systems and digital services.
As part of this commitment, we have implemented AI fundamentals training for all staff, regardless of their role.
At this time, we are not using AI in any way that members of the public may directly interact with, or be significantly impacted by, without a human intermediary or intervention.
The DTA is using AI in the domains of Corporate and Enabling, Service Delivery and Workplace Productivity.
From 1 January 2024 to 30 June 2024, the DTA both coordinated and participated in the Australian Government’s trials of a generative AI service, Microsoft 365 Copilot. The DTA continues to make Copilot available to staff.
As a prerequisite to using Copilot, DTA staff are required to complete internal training on the use of generative AI. We also have a policy on the use of AI tools by staff, which staff must acknowledge they are familiar with before accessing generative AI tools online.
This policy encourages and assists staff to:
The DTA participated in the Pilot Australian Government AI assurance framework.
Through our participation in this pilot, we will be introducing a structured AI-assisted evaluation model to assist with the Digital Marketplace Panel 2 (DMP2) evaluation process in February 2026. We are also exploring the potential for AI to be used by our staff and by our ICT systems in accordance with Home Affairs advice.
Within the DTA, each ICT system has an identified system owner who is accountable for the system, and each AI use case has an identified executive sponsor.
All AI use cases are recorded in an internal register to track their progress and status. For new and emerging potential uses of AI, it is the responsibility of the system owner to apply the Pilot Australian Government AI assurance framework, and to identify an appropriate executive sponsor.
The ICT system owner and the AI use case executive sponsor are together responsible for:
ICT system owners and AI use case executive sponsors are accountable to the Executive Board.
For more information about the purpose and operation of the Executive Board, see the DTA’s annual report.
The DTA currently uses AI in the following system use cases, as defined by the Standard for Transparency Statements. Details are recorded in the internal DTA AI use case register.
Assisting decision making and administrative action through assessing and making recommendations on submitted applications to a human decision maker:
Workplace productivity through the use of Microsoft 365 Copilot:
This transparency statement was last updated on 1 December 2025. It will be updated as our approach to AI changes, and at least every twelve months.
| Update publication date | Update comment |
|---|---|
| 1 November 2024 | |
| 4 December 2024 | |
| 1 December 2025 | |
For further information or enquiries about the DTA’s adoption of artificial intelligence, contact us directly at info@dta.gov.au.