22 April 2026
Lucy Poole, Deputy CEO of the Strategy, Planning and Performance Division, delivered the following keynote at the 12th Annual Data and Digital Governance Summit.
NOTE: This speech was delivered on Wednesday 22 April 2026. Check against delivery.
Good morning.
It’s a pleasure to be here at the 12th annual Data and Digital Governance Summit, and to join a group that’s working — in very practical ways — to strengthen how the Australian Public Service uses data, digital capability and emerging technologies.
I want to begin by thanking Public Spectrum and the wonderful events team for convening us again.
One of the strengths of this summit is that it gives us room for a different kind of conversation — not just optimism and headlines, but the real-world craft of public administration, and the complexity of the systems we’re all trying to steward.
Over the last 12 months, the conversation about AI in government has really accelerated.
New tools are arriving fast. And things that felt theoretical not long ago are now being tested, piloted — and in some places, built into day-to-day work.
Across the APS, AI is starting to show up inside real workflows — drafting, summarising, finding information, triage — rather than sitting off to the side as a separate experiment.
Last month I was in London for Innovation 2026, co-hosted by the UK Government, the Cabinet Office and the UK Civil Service.
Public servants from many countries were wrestling with the same questions many of us face here.
The experience has stayed with me. Not because other governments have solved these challenges — they haven’t — but because there is increasing clarity about what the next phase of AI adoption looks like.
And what I took away was this: we’re moving into a phase shaped not just by experimentation, but by a deeper re-examination of the relationship between governments and citizens.
One UK colleague put it to me this way: the relationship is emotional as much as it’s transactional. Because when people deal with government, it’s often personal — and the quality of that experience can shape lives, and it can shape trust.
I’ll be honest — that framing really made me pause. I’m still thinking it through.
But I think many of us recognise it. Government interactions have a feel to them. And over time, unresolved complexity, repeated failure, or a poor experience adds up — it can chip away at confidence and, ultimately, legitimacy.
So that’s the lens I want to use today.
I’ll touch on 3 priorities that I think will matter for the APS through 2026 and beyond.
The first is imagination.
At first, that might sound a bit whimsical.
But it came up again and again in the UK — from public servants, academics and industry — that we need to step outside our default settings and be willing to challenge the frameworks we take for granted.
Not just adopting new tools, but being willing to think differently — to be creative, and sometimes to be deliberately bold.
And what struck me is that this wasn’t about talent, or individual capability.
I didn’t leave with the impression that public servants elsewhere were any more creative or innovative than we are.
If anything, the instinct is the same everywhere: improve services, cut admin burden, and free people up to focus on higher-value work. We’ve got that in Australia, too.
So I came home with a simple question: why do we so often feel less able to be bold and imaginative about AI, even when the capability — and the intent — is clearly there?
In the UK conversations I sat in on, imagination wasn’t left to chance. It came from a deliberate effort to “rewire the state”.
What they meant was pretty practical: not just bringing in new tech, but rethinking the role of government, who holds authority for decisions, and how we make change possible.
This is an aspiration that resonates.
It’s an idea you hear from people such as Sir Geoff Mulgan too — that we’re starting to lose our “social imagination”: our ability to think seriously about different futures, and about the kind of system-level change they would actually take.
Rebuilding that imaginative capacity is not about abandoning discipline or accountability.
It’s about creating the conditions in which new ways of working can be conceived, tested and — where they prove their value — scaled, in service of better outcomes for the people government exists to serve.
And if I’m honest, a lot of the hesitation we see isn’t about capability — it’s about where to start.
And when we’re not sure, we fall back on what feels safe: a linear tech rollout — a pilot, a tool, an implementation — squeezed into the structures we already have.
That approach can deliver incremental improvement, but it rarely creates the space required for genuinely new ways of working to emerge.
A different approach is to change the underlying conditions that govern how change happens in the first place: how value is defined, how progress is measured, how authority is exercised, and where responsibility for experimentation sits.
This is not about moving faster or pushing risk boundaries unnecessarily. It is about being more deliberate, and about actively seeking insight and partnership across government, industry and academia, as we are doing today.
That shift matters because it changes how we use AI. Instead of just speeding up today’s processes, AI becomes a way to ask better questions — about service design, decision-making, system boundaries and user experience — and to make visible where the current setup is holding outcomes back.
In other words, imagination does not emerge by chance.
It is enabled by design.
And just to be clear: this isn’t about lowering the bar on accountability. If anything, it lifts it — because we’re pairing ambition with clearer intent, stronger measurement, and clear ownership.
And it is precisely why the APS AI Plan, the DTA’s responsible‑use frameworks and the guardrails we have put in place matter. They exist to give us confidence to explore, not permission to stand still.
At present, we are often asking AI to help us do what we have always done, only faster. In a pressured environment, that matters.
But the larger opportunity lies in using AI to help us do things we could not previously do at all: to surface patterns across fragmented systems, support better judgement in complex policy contexts, and reframe problems rather than simply processing them more efficiently.
And that is not primarily a technical challenge.
Across the APS right now, we’re seeing very different speeds of progress.
Some agencies are already putting AI into operational environments — turning experiments into something that actually sticks.
Others are moving more carefully — often because they’re dealing with decades of accumulated legacy in systems, procurement, data environments and delivery models.
And to be fair, legacy doesn’t mean people haven’t tried. It’s what happens when big institutions respond, year after year, to new policies, new funding settings and new technology.
But it does shape what is possible — particularly as AI moves from the margins of work into core operational systems.
A study from Kearney in the US late last year puts some numbers on this. In heavily regulated sectors — including government — organisations can spend 60 to 80 per cent of their IT budgets just keeping legacy systems running.
In their federal system, that adds up to more than 80 billion US dollars a year — spent maintaining ageing servers, networks and data centres that were never built for today’s scale, security and compliance needs.
In Australia, we are facing a similar challenge. While the absolute cost is unlikely to mirror the US experience, available estimates are still largely based on assumptions rather than comprehensive, system‑wide data.
However, this is not simply a cost issue.
The same research shows that legacy infrastructure is a significant source of systemic and cyber risk, with vulnerabilities accounting for a substantial proportion of data breaches.
And when a core system fails in a regulated environment, the impacts stack up fast — for safety, for compliance, and for service delivery. That can slow down AI adoption, and it can also make day-to-day improvement harder than it needs to be.
That said, modernisation does not always mean wholesale replacement.
Modernising legacy isn’t one neat, linear pathway. In practice, agencies sit on a spectrum of approaches.
Within this spectrum, AI can be used to stabilise and bring greater visibility to existing processes, while also enabling deeper re‑engineering of digital cores and, in some cases, the reimagining of business capability altogether.
For example, using generative AI to support routine customer interactions with limited human involvement, or deploying digital twins to model the operational and risk implications of turning off legacy systems before irreversible decisions are made.
For many organisations — particularly in highly regulated environments — the most immediate value lies in using AI to make complex, federated systems more intelligible: improving visibility, predicting failures, optimising patching and workload allocation, and supporting better-informed decisions about where intervention is genuinely required.
More ambitious re‑engineering becomes possible only once that foundation of understanding and control is in place.
This matters for the APS because agencies are not slowing down for lack of intent, but because they sit at different points across this spectrum, with different service obligations, risk tolerances and legacy constraints.
As AI capability becomes more distributed and increasingly agency‑led, the coordination challenge changes. The task is no longer simply about collective momentum. It is about coherence — whether progress occurring in different parts of the system can be understood, connected and sustained over time.
At this stage, the more useful question is no longer ‘how fast can we adopt?’ It is ‘how well can we integrate?’
Integration isn’t just a technology task. It also means getting governance, data foundations, operating models and accountability right.
If we scale AI without a clear understanding of how our systems work — and who is responsible for what — we risk baking in fragility, especially where legacy complexity makes cause and effect hard to see.
For example, think about compliance case management. An AI model might help triage referrals, suggest risk ratings, or recommend next steps based on past cases.
But if the case data sits across multiple systems, definitions aren’t consistent, and ownership is unclear (who signs off the model, who changes the risk rules, who audits outcomes), then 2 similar cases can be treated differently — and when a decision is challenged, it becomes difficult to trace what drove it and who is accountable.
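To make that concrete, here is a minimal, purely illustrative sketch in Python of the kind of decision record that would make AI-assisted triage traceable. Every name and field here is hypothetical; it is not drawn from any APS system or standard.

```python
# A purely illustrative sketch of a decision record for AI-assisted triage.
# All names and fields are hypothetical, not drawn from any real system.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class TriageDecisionRecord:
    case_id: str
    model_version: str           # which model produced the suggestion
    risk_rules_version: str      # which version of the risk rules applied
    data_sources: list[str]      # systems the case data was drawn from
    suggested_rating: str        # the model's recommendation
    final_rating: str            # what was actually decided
    decided_by: str              # the accountable human officer
    rationale: str               # why the officer accepted or overrode
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# If two similar cases diverge, the records show whether the model, the
# rules version, the source data or the human judgement differed.
record = TriageDecisionRecord(
    case_id="C-1042",
    model_version="triage-model-0.3",
    risk_rules_version="rules-2026-02",
    data_sources=["case-system-a", "case-system-b"],
    suggested_rating="medium",
    final_rating="high",
    decided_by="officer-7",
    rationale="Prior compliance history was not visible to the model.",
)
```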
Alignment matters because it is what allows pace without sacrificing resilience. It is how we move from isolated progress to genuine, system‑level learning and capability development.
For a long time now, governments across the world have been striving to design services around people’s lives, rather than requiring individuals to navigate organisational structures and agency boundaries.
When services are organised around life events, complexity is reduced, effort decreases, and confidence in government services is strengthened.
In the UK, we’re already seeing this reflected in practice through the development of companion AI agents, designed to support citizens with complex or high needs that cut across multiple government agencies.
Estonia is another great example. Their life-event approach is built on a simple idea: if you’re entitled to something, you shouldn’t have to hunt for it and apply. Services are proactively offered as circumstances change. Their systems aren’t just ‘digital by default’ — they aim to be ‘coherent by design’.
In Australia, the DTA’s Digital Experience Policy — in place since January 2025 — and the Digital Inclusion Standard give us a strong foundation. They set clear expectations for new and replacement public- and staff-facing digital services, and they extend those expectations to existing public-facing services as well.
The opportunity now is to embed these practices by redesigning services so they make sense end-to-end, feel consistent across channels, and respond to people as their circumstances change.
AI has a role to play in this — but only if we are clear about what it can and cannot do.
Used carefully, AI can help reduce effort, connect fragmented systems, personalise assistance, and take on more of the cognitive and administrative load that people and business currently carry.
Used uncritically, it can do the opposite — reinforcing poor design, masking underlying complexity, and giving the appearance of accessibility without delivering the experience itself.
For example, AI auto‑generated alt text can be a helpful starting point because it shows how a system interprets an image, but it doesn’t explain why the image is there.
We see this in everyday reporting.
Imagine a photo in a program update showing an APS officer at a counter with a member of the public. An AI-generated description might say, “two people talking at a desk”. That’s accurate — but it misses the purpose. A more meaningful description would explain that staff are helping someone understand their entitlements and complete an application, or resolve an issue that has been holding up a payment.
It’s that human context — the purpose and meaning — that makes the image useful.
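As a purely illustrative sketch, the difference can be as simple as treating the AI caption as a draft rather than a final answer. The helper and workflow below are assumptions for illustration, not any published standard.

```python
# Illustrative only: a hypothetical helper that treats an AI caption as a
# draft, not a final alt text. Not part of any standard or real toolchain.
from typing import Optional

def alt_text(ai_caption: str, human_purpose: Optional[str] = None) -> str:
    """Prefer the human-written purpose; otherwise return the AI draft,
    flagged so it is reviewed before publication."""
    if human_purpose:
        return human_purpose
    return f"[DRAFT, needs review] {ai_caption}"

# The AI draft is accurate but purposeless; the human version carries meaning.
print(alt_text("Two people talking at a desk"))
print(alt_text(
    "Two people talking at a desk",
    "An APS officer helps a member of the public complete an application "
    "for a payment that had been held up.",
))
```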
This is why something I read recently really stuck with me.
UNESCO’s Ideas Lab published a piece about the risk of treating AI as a substitute for inclusive design — and for genuine human engagement.
The author, Alice Bennett, makes a simple but important point: automated accessibility tools — AI-generated alt text, synthetic “user feedback”, automated compliance checks — can create a false sense that we’ve dealt with inclusion, without actually improving the lived experience of people who rely on accessible design.
The central point, and the one that stayed with me, is that accessibility cannot be automated into existence.
Automated tools can support analysis by highlighting issues, but it is the expertise, experience and judgement of public servants that ultimately ensure decisions are sound and effective.
They cannot always determine whether content is meaningful, whether instructions are genuinely understandable, or whether a service works in practice for people using assistive technologies in real-world conditions.
Some take a different view. Recent reporting in the Harvard Business Review, for example, describes the use of AI‑generated ‘synthetic personas and digital twins’ as stand‑ins for real consumers in market research.
This is lower stakes than accessibility, but the lesson still applies: just because a technology solution exists doesn’t mean it’s the right one — especially when we’re asking AI to predict, model or stand in for human judgement, instinct and behaviour.
So the question becomes: how do we use AI to support accessibility and inclusion, without treating it as a shortcut or a substitute for good design?
If we’re serious about accessibility, the best approach is still the simplest one: design services that are usable, understandable and inclusive from day one.
It’s also just cheaper — and easier — than trying to bolt on accessibility later with automated fixes.
The pressure to find efficiencies is real, and tools that promise fast, low‑effort accessibility improvements are understandably attractive in busy organisations.
The risk, however, is that accessibility becomes something we check for after the fact, rather than something we design for from the outset.
And because people experience government across channels — online, over the phone, and at the counter — gaps in accessibility don’t just create friction; they erode trust and, over time, the legitimacy of the service itself.
For government, this goes straight to trust. If a service technically meets the standard but is still confusing, incomplete or misleading for people who rely on accessible design, then we haven’t delivered what we set out to deliver.
And the presence of captions, alt text, or even AI-generated assistance doesn’t guarantee equal access — especially if those elements are generic, inaccurate, or missing the context people actually need.
Success, therefore, is not about faster services for their own sake, but about simpler, clearer and more accessible services — especially for those who have historically found government hardest to navigate.
AI can help — but it can’t replace the basics: user-centred design, listening to people who use services in different ways, and building accessibility into everyday work instead of bolting it on at the end.
As AI capability matures, we are seeing a shift — particularly in large enterprise environments — from systems that assist humans to systems that are able to act on their behalf within defined boundaries.
This is often described as agentic AI: systems that can initiate tasks, coordinate workflows and exercise a degree of delegated authority.
It is increasingly clear that agentic AI is on the horizon.
The real question is when. Big technology shifts rarely move in a straight line, and it’s hard to predict exactly when pilots become business as usual.
IBM research, based on a global survey of executives, suggested that many expected employees to be routinely interacting with AI assistants by the end of 2025. While this level of uptake has not yet fully materialised, it also hasn’t stalled. Deloitte also found that about a quarter of organisations have already piloted agentic systems. And most expect that number to climb quickly over the next few years — potentially doubling by 2027.
Whether or not every one of those predictions lands exactly as forecast, they tell us clearly that the appetite for more agentic forms of AI has been building for some time.
And that appetite isn’t going away.
At the DTA, we’re beginning to see early signals of this shift, including evidence of AI agents interacting directly with our websites, policies, and online content.
These signals point to a broader change underway — one that will increasingly shape how information is accessed, how web and policy content is created, and ultimately how that content is interpreted and used by AI agents, not just people.
We are already planning updates, including an Agentic Addendum to our AI technical standard, to respond to this change.
As AI evolves from a tool that generates suggestions to an agent capable of initiating actions, the trust equation begins to shift in more complex and less predictable ways.
One response we’re starting to see is the idea of “AI control towers” — a central function that keeps an eye on how AI systems and agents are behaving across an organisation, and helps maintain visibility and accountability as autonomy increases.
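To illustrate the shape of the idea, here is a minimal, hypothetical sketch of what a control-tower-style registry might keep visible. It is an assumption for illustration, not a description of any existing product or standard.

```python
# A minimal, hypothetical sketch of an "AI control tower" registry:
# which agents exist, what they may do, and on whose authority.
from dataclasses import dataclass

@dataclass
class AgentDelegation:
    agent_id: str
    acts_for: str               # the accountable owner (person or team)
    allowed_actions: set[str]   # the boundary of delegated authority

class ControlTower:
    def __init__(self) -> None:
        self._register: dict[str, AgentDelegation] = {}

    def register(self, delegation: AgentDelegation) -> None:
        self._register[delegation.agent_id] = delegation

    def is_authorised(self, agent_id: str, action: str) -> bool:
        delegation = self._register.get(agent_id)
        return delegation is not None and action in delegation.allowed_actions

    def suspend(self, agent_id: str) -> None:
        # The ability to intervene and disengage, made concrete.
        self._register.pop(agent_id, None)

tower = ControlTower()
tower.register(AgentDelegation("summary-agent-1", "comms-team",
                               {"draft_summary", "fetch_content"}))
assert tower.is_authorised("summary-agent-1", "draft_summary")
assert not tower.is_authorised("summary-agent-1", "send_email")
```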
Here, we will all be challenged by the question of delegation — who or what is authorised to act, on whose behalf, under what conditions, and with what visibility and accountability.
We have already seen in the private sector how sensitive this shift can be.
What we know is that where people are placed into AI-mediated interactions without clear explanation, choice or recourse, trust erodes quickly — not because the technology fails, but because agency is diminished.
People experience decisions as being done to them, rather than with them.
By contrast, the banking and financial services sector is beginning to provide more positive signals. For example, Mastercard has delivered Australia’s first authenticated agentic transactions on its network as part of its global Agent Pay program.
These developments demonstrate how agent‑based transactions can be introduced in a controlled way, creating environments in which both customers and their agents can engage with confidence.
In one of the most trust‑sensitive domains, this shows how clear accountability, transparency, and safeguards can enable new models to be adopted responsibly and at scale.
As AI becomes more embedded in citizen‑facing interactions — through automated triage, AI‑assisted decision‑making, or adaptive digital services — we need to be clear about what role these systems are playing and where responsibility ultimately sits.
This isn’t just about consent in the narrow sense. It’s about whether people feel informed and respected — and whether they can understand a decision and challenge it when they need to.
Our existing frameworks rightly emphasise transparency and accountability.
As systems become more capable, and as agentic patterns become more common in our society, those principles need to be made real in practice through clear delegation boundaries, accountability, the ability to intervene, and the ability to disengage.
I came back from London more energised than I expected — not because others have cracked problems we haven’t, but because it made clear what’s possible when we give ourselves room to imagine, take integration seriously, and keep people and their trust at the centre.
That is the conversation we need to keep having in Australia in 2026 and beyond.
Not just about whether our systems are modern or our policies are compliant, but whether we are genuinely building a public service that safeguards trust, integrates well, and works better for everyone.
AI does not change the core principles of public administration. It raises the bar on how deliberately we apply them.
Thank you for listening. I look forward to your questions.