I flipped my desk last week. Literally. Sent the monitor flying, stomped on my laptop in a blind rage.
Or at least, I did in my head. Then I heard my name, tuned back into the daily call, and carried on fixing a broken build, rotating secrets, and untangling merge conflicts I didn’t cause. Just to clear the way for the work that actually mattered.
And that’s the point: everything is too hard. Not meaningful-hard. Not worth-it-hard. Just absurdly, unnecessarily, needlessly hard.
We’ve normalised friction. We’ve glamorised struggle. We celebrate Git mastery as a badge of honour, telling ourselves the problem is our competence, not the fact that we’ve standardised a tool where one wrong move can undo a whole week’s work. We twist our models to fit relational schemas and write SQL incantations that make us feel superior, even when they’re clearly not the right tools for the job.
We tell ourselves this is maturity, when really it’s resignation. We build ever higher barriers around “the right way”, then congratulate ourselves for surviving them, instead of making it harder to do things wrong.
This talk doesn’t offer a framework, or a fix, or five steps to simplification. It offers recognition. It’s a rallying cry for everyone who’s ever stared at some pointless obstacle and thought: Why am I even doing this?
Because you’re not the problem.
Everything is too hard.
And maybe the first step to fixing that is finally talking about it.
Join us for a powerful lunch as Women in Technology Hunter (WITH) officially launches.
Hear from the women who started it, why it matters, and how a simple idea from last year’s SlashNEW turned into a growing, grassroots network.
What began as one informal lunch has become a supportive community where women working in, with, or looking to step into technology show up for each other, share experiences, open doors, and have a few laughs along the way.
We also welcome HunterWiSE to the panel. HunterWiSE is an initiative that strives to increase the number of girls entering the STEM pipeline, while fostering a supportive professional network of women in STEM fields.
I’ve watched incredibly smart engineers and technology leaders lose the room, not because they were wrong, but because they over-explained, went too deep too fast, or tried to prove they were right instead of making it easy for people to say yes.
It’s a pattern I’ve seen play out for years. The gap between having the best answer and actually getting it heard is real, and nobody really teaches you how to close it.
In this session, I’ll talk about what gets in the way and what to do differently so your ideas actually move things forward.
This is a talk about technical communication in large-scale, complex systems where operational safety is paramount. In such environments, technical phrases, formal procedures, recency biases and individual habits all shape how we communicate. And in high-risk environments, when miscommunications or misunderstandings occur, the consequences can be catastrophic.
Today we’re going to learn how to fly a plane. Or, specifically, we’re going to learn how large international jets taxi and take off from a runway.
While thousands of planes take off and land safely every day, occasionally there are close calls where disaster is averted only by sheer luck or coincidence. In those circumstances, a safety investigation follows to help us learn from the mistakes and avoid catastrophic outcomes. This talk draws on one such safety report, and we’ll discuss the lessons software engineers can take from the aviation industry about communicating in complex operating environments. When safety is paramount, is just one human in the loop really enough?
Most people in tech are stretched. Career. Health. Family. Life. And at some point it stops feeling like hustle and starts feeling like failure.
Right now there’s another layer. The landscape is shifting fast. Technology professionals, software engineers especially, are asking real questions about where their careers are headed. What it actually means to build a career in technology when the ground keeps moving.
This session is for anyone sitting with that uncertainty.
It draws on real experience. Career pivots, building a business under pressure, backing yourself before you felt ready. The pattern that keeps showing up: growth doesn’t come from waiting until things are clear. It comes from choosing hard things while they’re still unclear.
It also reframes balance. Not as something you maintain, but as something that shifts. Seasons. Some seasons you build hard. Some you recover. Neither is wrong.
No motivational fluff. No predictions about AI. Just a more honest way to think about your career in tech and how to move through uncertainty with clarity instead of guilt.
Frustrated by your business partners missing tech opportunities?
Baffled by the decisions that allow high technical risk to continue?
This all comes down to quality decision making, which is a critical part of being a senior tech leader.
This session will break down the biases to watch out for, and offer practical structures and methods to drive the best possible decisions while helping tech leaders stay sane.
Most Security Operations Centers (SOCs) are drowning in noise, yet adding more analysts is rarely the sustainable answer. Drawing on his experience as a Software Engineer turned Security Operations Manager, Daniel Clements shares the blueprint used at nib Group to move beyond traditional, manual monitoring.
This session explores the practical journey of building and implementing AI triage agents and SOAR workflows to automate the “heavy lifting” of investigations and stakeholder communications.
Attendees will learn how to shift their team’s focus from manual ticket-pushing to high-value security engineering. Daniel brings a pragmatic perspective that bridges software engineering and security operations, demonstrating how to deliver security outcomes that satisfy both technical requirements and executive expectations for efficiency.
Most engineering work starts the same way. Someone needs something built, you get it done, everyone moves on. Until the next request arrives. And the next. Before long you’re writing the same Terraform, configuring the same pipeline, answering the same questions you answered six months ago on a completely different project.
Spoiler: 80% of what you’re rebuilding is identical. 15% is a config value. You’re only ever writing the 5% that’s genuinely new. You just haven’t built the scaffolding to prove it yet. This talk walks through a phased approach to turning first-pass deliveries into something your whole team can actually reuse: kanban templates that give you a running start, modular IaC where one file controls everything, pipelines with deployment gates and automated testing baked in, and decision records that capture the why so the next person doesn’t have to rediscover it.
But we’ll also get into the stuff that actually stops smart people from doing this. The “every project is different” myth. The quiet fear that documenting your work makes you replaceable (it doesn’t; it gets you promoted). And the discipline it takes to carve out 5% of the timeline for harvest work when the next deadline is already breathing down your neck. These aren’t technical problems.
If you’ve ever shipped something and immediately thought “I should really template this,” this is the session that shows you how to actually follow through.
Does it feel like you’re constantly juggling All The Things but never quite nailing anything? Are you experiencing the special drain of using AI – the Dracula effect (as coined by Steve Yegge)? It’s not just you: almost 40% of us met the criteria for workplace burnout in 2025, and we know burnout among creatives is much higher. AI is an accelerant and if you or your team tilts towards burnout, you may very well be accelerating on that trajectory. The good news is that there are also opportunities for flipping the pattern and cultivating healthy, sustainable performance.
This fun, interactive session will show you how.
As AI becomes embedded in cyber operations, incident response plans must evolve — but blindly “adding AI” to response workflows can create new risks.
This session explores how to design AI-augmented incident response capability that improves speed and decision-making without sacrificing human judgment, governance, and accountability.