Why AI won’t work without the platform maturity we should have had years ago.

For years, Platform Engineering was incorrectly treated as a deferrable cost – a structural investment bypassed in favour of immediate feature delivery. We relied on manual coordination and the invisible effort of engineers to navigate unstandardised environments, essentially masking the true cost of organisational friction.

As we move into the era of AI, that deferral has reached its limit. The paved roads, service catalogs, and automated guardrails that were once ignored are no longer just about reducing developer toil; they are the fundamental requirements for making AI work at all.

In this session, we’ll explore how the technical debt of yesterday has become the AI blocker of today. We will dive into why an LLM is only as good as the predictability of your underlying environments, why AI agents can’t navigate an undocumented mess, and how true platform maturity provides the deterministic foundation that non-deterministic GenAI demands.

What you’ll learn:
– The Context Gap: Why AI agents fail in organisations without a mature Digital Platform to navigate.
– Safety & Governance: How Paved Roads prevent AI from turning fast deployments into fast disasters.
– From DevEx to AIX: Shifting your platform strategy to support both human engineers and their machine collaborators.

We should have built these platforms to support our people. Now, we must build them to make AI work.

Shipping a new feature is easy. Knowing if it actually improved anything? That’s the hard part.

Many teams ship features based on intuition rather than evidence.

This makes it impossible to understand user behaviour or build confidence in product decisions.

Experimentation is the only reliable way for engineering teams to uncover the true impact of the features they ship.

In this talk, we will walk through the experimentation lifecycle from an engineering perspective – designing an experiment, translating it into code, launching and collecting data, analysing results and learning – showing how teams can embed this process into their software delivery practice. We will also cover common challenges with experimentation and how to scale it within organisations.
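To make the "translating it into code" step concrete, here is a minimal sketch of the two engineering primitives most experiments rest on: deterministic variant assignment and per-variant conversion rates. The function names (`assign_variant`, `conversion_rate`) are illustrative assumptions, not the specific tooling covered in the talk.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user so the same user always
    sees the same variant, with no assignment state to store."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def conversion_rate(events):
    """events: iterable of (variant, converted) pairs collected
    while the experiment runs; returns rate per variant."""
    totals, conversions = {}, {}
    for variant, converted in events:
        totals[variant] = totals.get(variant, 0) + 1
        conversions[variant] = conversions.get(variant, 0) + int(converted)
    return {v: conversions[v] / totals[v] for v in totals}
```

Deterministic hashing keeps assignment stable across sessions and services; the analysis step then compares the per-variant rates (ideally with a significance test) before declaring a winner.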

Everyone has an opinion on AI. Far fewer people have actually shipped it inside a real organisation, with real customers, real legacy systems, and real risk on the table.

This panel brings together three leaders doing that work. Andrew Cresp is CIO at NGM Group, where every decision sits on top of customer money, regulation and trust. Josh Doolan leads APAC for Endava and founded Mudbath before its acquisition — he sees AI adoption playing out across dozens of enterprises, not just one. Katherine Squire has 25 years leading product and engineering across Macquarie, ASX, Nasdaq and Culture Amp — with deep experience on both the vendor and client side of the software business.

We’ll talk about what enterprise AI actually looks like once the demos are over. Where to start when you can’t change everything at once. How to bring a whole organisation along when half the room is excited and half is anxious. What’s genuinely shifting for engineers, product people and tech leaders. And how to handle security, privacy and data without becoming the team that just says no.

As AI tools integrate into software development, the real value doesn’t come from the tools themselves; it comes from how they are used and embedded within existing workflows. This session draws on Octopus Deploy’s AI Pulse Report, which examines AI adoption patterns, current capabilities, and where these tools are being used across development workflows.

The research reveals a critical misalignment: AI’s current capabilities don’t match where developers actually need help – compliance, security, onboarding, deployment, and release management – and the most common frustrations reflect that gap.

Automation through Continuous Delivery practices provides the foundation for AI to deliver compounding value, bridging the gap between individual productivity gains and organisational impact, and creating the environment where AI’s actual strengths can be applied to solve problems at scale.

A case study on the business value of engaging and empowering staff to build their digital skills.

How I saved $391,000 in 15 minutes. This case study covers:
– A support model that combines technical support and governance with change management and training activities.
– Tailoring learning delivery to busy people: nano-learning video content, 15-minute micro-learning digital skills sessions, 30-minute deeper dives, and ‘ask me anything’ sessions.
– Techniques for building communities (in Microsoft Teams) that keep staff informed and engaged, creating a strong peer-support network of users.
– Establishing trust and connection, and teaching users where and how to self-help.

In 2008, Larry Ellison (Oracle CEO) said, as reported by CNET[1]: “The computer industry is the only industry that is more fashion-driven than women’s fashion. Maybe I’m an idiot, but I have no idea what anyone is talking about. What is it? It’s complete gibberish. It’s insane. When is this idiocy going to stop?”. He was, of course, talking about Cloud Computing.

Cloud Computing wasn’t the first or last (over?)hyped technology, and Larry was poking fun at the “Amazing new XYZ product, now with Cloud!” marketing. We have seen similar cycles before: eCommerce in the late 1990s (online shopping over a 56 kbps modem on Windows 98?), quantum computers (the IT industry’s version of Schrödinger’s cat), distributed ledgers that were predicted to transform the $2 trillion online economy in the mid-2010s[2], the metaverse (what exactly was that, anyway?), digital twins (a ‘must have’ for CEOs[3]), and most recently (dare I say it?) AI.

This session will explore early warning signs and provide some practical advice on how to foster a constructive discussion within your organisation to avoid the worst of software misapplications:

  1. Is the new software appropriate for the required solution? Does it come with sufficient support for privacy, data protection and management, integration, governance, and so on?
  2. Does the organisation have the necessary technology capability and operational maturity?
  3. How well does it integrate with the rest of the software already in use?

[1] CNET (Sept 2008): “Oracle’s Ellison nails cloud computing”

[2] Accenture (2016): “Editing the uneditable blockchain – Why distributed ledger technology must adapt to an imperfect world”

[3] AFR (Feb 2026): “3 things these bosses plan to do differently this year”

Most organisations say automation will “save time.” But few stop to ask the question every employee is quietly wondering: what should we do with the time?

In this session, Dan Godden introduces Thea, a customer service rep navigating the growing wave of AI tools appearing across her workplace. Along the way he draws an unexpected comparison with toilet training toddlers to explain why technology rollouts so often fail to change behaviour.

Through Thea’s story, Dan explores the human side of AI adoption and the emerging role of Humans-in-the-Loop (or even Humans-at-the-helm), helping teams think differently about judgement, responsibility and the uniquely human value that remains when more work becomes automated.

We spend our careers hardening backends against external threats, but are we inadvertently building “Internal Exploits” right into our interfaces? When design choices weaponise the developing psychology of minors to drive metrics, we aren’t just frustrating users – we are violating their inherent dignity.

Drawing on the concept of the “technocratic paradigm” from Laudato Si’, this session moves beyond surface-level design ethics into the structural reality of how we build software. We will explore how to stop treating humans purely as extractable data points and start engineering interfaces that protect the most vulnerable.

It’s time to patch the “Innocence Vulnerability”. Join this session to learn how to architect a future where the human in the loop is empowered, not entrapped.

UX design often focuses on usability. But how do we measure the mental cost of an interface?

In this talk, Dr Ben Shelton explores how Cognitive Load Theory can be used as a practical tool to assess interface effectiveness and quantify the hidden impact of digital noise.

If we want calm, human-centred technology, we need to understand what it demands of the mind.