Speech: Accelerating AI adoption in the public sector
Lucy Poole, Deputy CEO, Strategy, Planning and Performance Division, delivered the following keynote at Tech in Gov 2025.
NOTE: This speech was delivered on Tuesday 12 August 2025. Check against delivery.
Starting the Journey
It’s a pleasure to be here and to be part of a forum that brings together such a diverse and forward-thinking community.
Today I’d like to talk about the incredible momentum we’re seeing across the public sector when it comes to artificial intelligence.
We’ve moved past the question of whether AI has a role in government. We’ve shifted gears and are now focused on how it can deliver real, tangible benefits for the Australian community through better services, smarter infrastructure, and more responsive institutions.
Today I’ll cover the importance of technology in government, current technological initiatives, the challenges we face, and our vision for the future.
Gaining Momentum: The Importance of Technology in Government
During a recent panel I facilitated, Dr Liming Zhu from CSIRO’s Data61 shared a metaphor that stayed with me: “Brakes don’t slow us down—they help us go faster.”
It’s a powerful reminder that the safeguards we build—our standards, governance, and assurance tools—aren’t barriers. They’re enablers. They give us the confidence to accelerate, knowing we’re doing so safely and with purpose.
Building on that idea, it's clear that the frameworks and standards we establish are not just technical necessities, but the very engines that allow us to move forward boldly.
That’s why the release of the AI Technical Standards last month was such a milestone. These standards address key technical aspects such as data interoperability, algorithmic transparency, and model validation, which are crucial for building robust and trustworthy AI systems.
Developed through deep collaboration across government, they reflect not just technical rigour but a shared commitment to transparency, accountability, and safety.
What’s striking to me is how our sector is engaging with AI—not just with enthusiasm, but with care. We’re seeing a culture that’s excited, yes, but also deeply aware of its responsibilities.
That’s leadership. That’s public service at its best.
These standards help us move from pilots to platforms—from isolated experiments to whole-of-government capability. And they do so by embedding trust at the core of our systems.
In government, trust is everything. One misstep can stall progress and erode the social licence we rely on. And once it’s lost, it’s a long road back.
That’s why governance matters. It’s not about slowing down—it’s about knowing that we can. These standards give us the guardrails, the map, and the brakes we need to navigate complexity with confidence.
We’ve seen how thoughtful, well-designed standards can support that journey. The question now is: what does that look like in practice?
Navigating the Path: Current Technological Initiatives
Let’s turn to some real-world examples that bring these principles to life. We got a glimpse of that at the AI in Government Showcase at the end of July.
It wasn’t just a celebration—it was a signal. A signal that AI is no longer a future ambition. It’s here, and it’s already reshaping how we serve the public.
Take the Tiwi Island Rangers. They’re using drones and AI to track ghost nets and marine debris along some of the most remote coastlines in the country. The AI models are used for image recognition and classification, enabling precise identification of marine debris. These models are trained on local knowledge—annotated by the rangers themselves.
This is an example of community-led innovation, grounded in Country, that has significantly improved marine conservation efforts.
Or take the National Library of Australia. With 58,000 hours of oral history recordings, AI now transcribes one hour of audio in just 90 seconds. That means decades of voices—stories, lived experiences—are now searchable and accessible to researchers, educators, and the public.
And in Kakadu today, the Office of the Supervising Scientist is using AI to monitor the rehabilitation of the Ranger uranium mine. From drone-based tree profiling to fish detection and DNA taxonomy, they’re tracking ecological recovery across 800 hectares—faster, safer, and more precisely than ever before.
That’s a powerful example of how far we’ve come. Because if we look back, Kakadu was also the site of one of Australia’s earliest AI innovations.
In 1986, Parks Australia and CSIRO developed the Kakadu Fire Management Expert System—a tool that helped rangers navigate hundreds of fire-planning rules and ecological factors. It was a pioneering moment, and it reminds us that our journey with AI has deep roots.
These aren’t isolated examples. They’re part of a broader shift.
And with over 700 attendees at the showcase, it’s clear: we’re not just building capability—we’re building community.
And I want to be clear—this isn’t a race. It’s not about who gets there first. It’s about how we get there, together.
Agencies and partners are leaning in, asking the right questions, and collaborating across boundaries. We’re learning from peers in other countries, sharing what works, and adapting it to our context. Because while our challenges are uniquely Australian, the opportunity—and responsibility—of AI is shared.
These examples show what’s possible when innovation is supported by collaboration, community, and confidence. But even as we move forward with purpose and energy, we must also acknowledge the barriers that still stand in the way.
Overcoming Roadblocks: Challenges and Opportunities
While we celebrate our advances, we must also confront the reality of risk with eyes wide open.
Legacy systems hinder AI adoption: outdated infrastructure, poor data management, and compatibility issues make integration challenging. Modernising these systems is costly and time-consuming, and maintaining them diverts resources from AI initiatives, further slowing adoption.
The recent DTA AI Accountable Officials Survey, which covered 80 agencies, revealed that although most are actively engaging with AI—trialling tools, building staff capability, and embedding AI into operations—there remain agencies that have yet to begin.
In the rapidly changing landscape of technology, standing still is not a neutral position; it exposes organisations and the communities they serve to significant risk.
The risks are not hypothetical. One weak link in our chain can have serious consequences—compromising not only the effectiveness of our systems, but also the trust of the public.
Smaller agencies, in particular, have reported gaps in skills, resources, and funding, as well as concerns around data privacy and legal complexity. If these challenges are not addressed head-on, we risk undermining the progress we’ve made and leaving segments of our community behind.
It is imperative that we continue to invest in robust governance frameworks, reliable assurance tools, and, most importantly, in each other. In today’s AI-enabled environment, our expectations of leadership are evolving.
Leaders are no longer just decision-makers; they are tone-setters. Their confidence, posture, and commitment to adopting AI responsibly shape how their teams engage with innovation.
When leaders demonstrate a clear understanding of the risks and opportunities, and actively support safe experimentation, they empower their staff to explore, learn, and build with purpose.
This shared confidence is what enables experimentation. It’s what allows teams to test new ideas, iterate quickly, and learn from failure.
It gives our leaders the clarity to invest. And it gives our citizens the trust to engage with services that are increasingly digital, data-driven, and AI-enabled.
Steering Towards the Future: Vision for Technology in Government
Looking ahead, this moment matters—not just for technologists, but for engineers, designers, policy leads, and delivery teams. Building AI capability isn’t just about the tech—it’s about the systems we build, the safeguards we embed, the leaders who set the course, and the people who make it work. With shared vigilance and a resolve to address obstacles honestly, we can ensure our optimism is well-founded and sustainable.
So, let’s keep building the brakes.
Let’s keep refining the scaffolding.
Let’s keep showcasing what’s possible.
Because this isn’t just about doing the same things faster.
As Mo Gawdat reminds us, ‘this isn’t just a story about technology. It’s about us — human nature, ethics, and how we choose to handle this powerful tool’.
For the DTA, that means knowing when to slow down, when to steer, and when to stop.
And with those foundations in place, we’re not just accelerating; we’re moving forward together. With purpose.
Thank you.
The Digital Transformation Agency is the Australian Government's adviser for the development, delivery, and monitoring of whole-of-government strategies, policies, and standards for digital and ICT investments, including ICT procurement.