I flipped my desk last week. Literally. Sent the monitor flying, stomped on my laptop in a blind rage.

Or at least, I did in my head. Then I heard my name, tuned back into the daily call, and carried on fixing a broken build, rotating secrets, and untangling merge conflicts I didn’t cause. Just to clear the way for the work that actually mattered.

And that’s the point: everything is too hard. Not meaningful-hard. Not worth-it-hard. Just absurdly, unnecessarily, needlessly hard.

We’ve normalised friction. We’ve glamorised struggle. We celebrate Git mastery as a badge of honour, telling ourselves the problem is our competence, not the fact that we’ve standardised a tool where one wrong move can undo a whole week’s work. We twist our models to fit relational schemas and write SQL incantations that make us feel superior, even when they’re clearly not the right tools for the job.

We tell ourselves this is maturity, when really it’s resignation. We build ever higher barriers around “the right way”, then congratulate ourselves for surviving them, instead of making it harder to do things wrong.

This talk doesn’t offer a framework, or a fix, or five steps to simplification. It offers recognition. It’s a rallying cry for everyone who’s ever stared at some pointless obstacle and thought: Why am I even doing this?

Because you’re not the problem.
Everything is too hard.
And maybe the first step to fixing that is finally talking about it.

Why AI won’t work without the platform maturity we should have had years ago.

For years, Platform Engineering was incorrectly treated as a deferrable cost – a structural investment bypassed in favour of immediate feature delivery. We relied on manual coordination and the invisible effort of engineers to navigate unstandardised environments, essentially masking the true cost of organisational friction.

As we move into the era of AI, that deferral has reached its limit. The paved roads, service catalogs, and automated guardrails that were once ignored are no longer just about reducing developer toil; they are the fundamental requirements for making AI work at all.

In this session, we’ll explore how the technical debt of yesterday has become the AI blocker of today. We will dive into why an LLM is only as good as the predictability of your underlying environments, why AI agents can’t navigate an undocumented mess, and how true platform maturity provides the deterministic foundation that non-deterministic GenAI demands.

What you’ll learn:
– The Context Gap: Why AI agents fail in organisations without a mature Digital Platform to navigate.
– Safety & Governance: How Paved Roads prevent AI from turning fast deployments into fast disasters.
– From DevEx to AIX: Shifting your platform strategy to support both human engineers and their machine collaborators.

We should have built these platforms to support our people. Now, we must build them to make AI work.

This is a talk about technical communication in large-scale, complex systems where operational safety is paramount. In such environments, technical phrases, formal procedures, recency biases, and individual habits all influence how we communicate. But in high-risk environments, when miscommunications or misunderstandings occur, the consequences can be catastrophic.

Today we’re going to learn how to fly a plane. Or, specifically, we’re going to learn how large international jets taxi and take off from a runway.

While thousands of planes take off and land safely every day, occasionally there are close calls where disaster is averted by sheer luck or coincidence. In those circumstances, a safety investigation helps us learn from the mistakes and avoid catastrophic outcomes. This talk draws on one such safety report, and we'll discuss the lessons software engineers can take from the aviation industry about communicating in complex operating environments. When safety is paramount, is just one human in the loop really enough?

Shipping a new feature is easy. Knowing if it actually improved anything? That’s the hard part.

Many teams ship features based on intuition rather than evidence.

This makes it impossible to understand user behaviour or build confidence in product decisions.

Experimentation is the only reliable way for engineering teams to uncover the true impact of the features they ship.

In this talk, we will walk through the experimentation lifecycle from an engineering perspective – designing an experiment, translating it into code, launching it and collecting data, and analysing the results – showing how teams can embed this process into their software delivery practice. We will also cover common experimentation challenges and how to scale the practice within organisations.
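The "translating it into code" and "analysing results" steps of that lifecycle can be sketched in a few lines. This is a minimal illustration, not material from the talk – the function names and sample numbers are hypothetical. It shows deterministic hash-based bucketing (the same user always lands in the same variant) and a simple two-proportion z-test, using only the Python standard library:

```python
import hashlib
from math import sqrt, erf

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministic bucketing: hash experiment + user so the same
    user always sees the same variant for a given experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % len(variants)
    return variants[bucket]

def two_proportion_z_test(conv_a: int, n_a: int,
                          conv_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided z-test for a difference in conversion rates.
    Returns (z statistic, p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via the error function; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: 10.0% vs 13.0% conversion over 1,000 users each.
z, p = two_proportion_z_test(100, 1000, 130, 1000)
```

In practice teams lean on a feature-flag or experimentation platform rather than hand-rolled stats, but the shape of the work – stable assignment, honest measurement, a pre-agreed test – is the same.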


As AI tools integrate into software development, the real value doesn’t come from the tools themselves; it comes from how they are used and implemented within existing workflows. This session draws on Octopus Deploy’s AI Pulse Report, which examines AI adoption patterns, current capabilities, and where these tools are being used across development workflows.

The research reveals a critical misalignment: AI’s current capabilities don’t align with what developers actually need help with, including compliance, security, onboarding, deployment, and release management, with common frustrations reflecting that gap.

Automation through Continuous Delivery practices provides the foundation for AI to deliver compounding value, bridging the gap between individual productivity gains and organizational impact, and creating the environment where AI’s actual strengths can be applied to solve problems at scale.

Most people in tech are stretched. Career. Health. Family. Life. And at some point it stops feeling like hustle and starts feeling like failure.

Right now there’s another layer. The landscape is shifting fast. Technology professionals, software engineers especially, are asking real questions about where their careers are headed. What it actually means to build a career in technology when the ground keeps moving.

This session is for anyone sitting with that uncertainty.

It draws on real experience. Career pivots, building a business under pressure, backing yourself before you felt ready. The pattern that keeps showing up: growth doesn’t come from waiting until things are clear. It comes from choosing hard things while they’re still unclear.

It also reframes balance. Not as something you maintain, but as something that shifts. Seasons. Some seasons you build hard. Some you recover. Neither is wrong.

No motivational fluff. No predictions about AI. Just a more honest way to think about your career in tech and how to move through uncertainty with clarity instead of guilt.

Frustrated by your business partners missing tech opportunities?

Baffled by the decisions that mean high tech risk continues?

This all comes down to the quality of decision making, which is a critical part of being a senior tech leader.

This session will break down the biases to watch out for and give usable structures and methods to drive the best possible decision making and help tech leaders stay sane.