13 October 2025
The DTA publishes new guidance for Australian Government colleagues on safe and responsible use of public generative AI tools.
Generative artificial intelligence (AI) tools are now widely available and increasingly integrated into public tools and services used in both work and personal lives.
This includes search engines, browsers, productivity applications and hardware. These services create new opportunities for staff to more efficiently engage in research, document and data analysis, and development of first drafts.
This growing prevalence also raises new information security risks if public servants use generative AI unknowingly or improperly, highlighting the need for clear, consistent guidance for Australian Government staff and agencies on using public generative AI.
Our newest guidance on Using public generative AI tools safely and responsibly in the Australian Government builds on previous interim guidance on government staff use of public generative AI tools. Developed for a non-technical audience, and informed by input from across government, it simplifies key concepts into 3 clear overarching principles:
- Protect privacy and safeguard government information.
- Use judgement and critically assess generative AI outputs.
- Be able to explain, justify and take ownership of your advice and decisions.
It also includes examples of appropriate and inappropriate use to demonstrate the application of these principles in practice.
Public generative AI tools are services that are available to the general public. They can be accessed through various channels, including web browsers, standalone applications across a range of devices, or as features embedded in other digital services.
They differ from non-public enterprise AI solutions that can be configured to handle security classified and sensitive information. A growing number of Australian Government agencies are adopting enterprise generative AI tools, which offer stronger data controls and align with the Australian Government’s security requirements.
Some agencies already allow their staff to access some web-based public generative AI tools. This guidance is designed to encourage more agencies to provide staff access to these public tools. It remains up to agencies to decide whether to enable access to public generative AI on their systems, and staff must first and foremost follow their agency’s internal policies.
“Generative AI is already changing how people work all across Australia,” outlines Lucy Poole, Deputy CEO at the DTA. “This guidance supports staff and agencies to consider the benefits and risks, enabling them to build the skills and confidence required to adopt public generative AI tools safely whilst protecting the information Australians trust us with.”
Why this matters
As generative AI continues to integrate into everyday life, demand for access to these tools and services is growing within government. It is crucial that AI literacy becomes a core capability for Australian Government staff – not just a technical skill.
Public servants need practical experience with generative AI to better understand its benefits, limitations and responsible use. Simply restricting access does not remove demand or interest. Instead, it leaves the workforce unprepared for a technology that is already reshaping how we work and engage with the public.
“We don’t want to be in a situation where staff, from any agency, are using these tools without proper advice,” stresses Ms Poole. “Ensuring staff have clear guidance on what information they can share with these services, and how, is critical to minimise risks and maximise the opportunities that AI presents to the public service. Generative AI is here to stay. This guidance gives our workforce the confidence to use public generative AI tools in their roles while keeping security and public trust at the centre of everything we do.”
Behind the guidance
The guidance replaces previous interim guidance on using public generative AI tools, providing clear direction for Australian Government staff and agencies.
For Australian Government staff
The guidance on Using public generative AI tools safely and responsibly in the Australian Government outlines how staff remain accountable for their advice and decisions when using generative AI. Human judgement must always guide how the tools are applied.
The guidance outlines key principles for responsible use of public generative AI tools, including:
- Don’t put security classified information – OFFICIAL: Sensitive or above – into these tools.
- Never enter personal information.
- Always follow your agency’s ICT security policies when using generative AI tools.
- Check outputs for fairness, accuracy and bias – noting generative AI can produce convincing but inaccurate content and reproduce biases from its training data.
For agencies
The guidance on Managing access to public generative AI tools encourages agencies to take a balanced, risk-based approach that enables safe use while protecting government information.
The guidance offers the following recommendations:
- Provide training and safeguards that build workforce capability.
- Monitor use, require human oversight and record AI-supported decisions.
- Prioritise enterprise-grade AI solutions for sensitive or classified material.
Supporting material
The Department of Home Affairs has released a Protective Security Policy Framework (PSPF) Policy Advisory on OFFICIAL Information Use with Generative Artificial Intelligence.
The Policy Advisory provides certainty to Australian Government entities that OFFICIAL information can be used with generative AI technologies. It also establishes central guidance covering foreign ownership, control, and influence for 18 Australian and foreign companies under the Hosting Certification Framework, to streamline approval processes within organisations.
As generative AI capabilities are increasingly embedded across digital infrastructure, applying the principles in the Policy Advisory, in accordance with the PSPF, gives entities confidence in approving the use of generative AI for OFFICIAL information while ensuring safe and responsible practices.
Looking ahead
Building AI literacy is critical to prepare the Australian Government workforce for the future. Agencies should invest in structured training, develop competency frameworks, and create opportunities for staff to gain practical experience.
“The community expects government to use technology in a way that is safe, transparent and responsible. By embedding safeguards and building capability, we can make the most of generative AI and deliver better services for Australians,” explains Ms Poole.
View the full guidance for Australian Government use of public generative AI
Australian Government agencies and staff are strongly encouraged to view the full guidance for Australian Government use of public generative AI.
The Digital Transformation Agency is the Australian Government's adviser for the development, delivery, and monitoring of whole-of-government strategies, policies, and standards for digital and ICT investments, including ICT procurement.
For media enquiries email us at media@dta.gov.au
For other enquiries email us at info@dta.gov.au