Talks

Conference talks, meetup presentations, and other speaking engagements

Short Summary

Privacy-breaking pattern analysis isn't new - but AI has just made it accessible to everyone. Tools that were once complex, specialized, and limited to well-resourced actors are now available to anyone with access to large language models. Your carefully crafted privacy protections, designed to withstand traditional analysis, are about to face a wave of AI-powered pattern recognition that puts sophisticated privacy-breaking capabilities into everyone's hands.

Through live demonstrations, you'll watch as simple AI tools reconstruct user identities from "anonymous" chat data, rebuild social networks from encrypted messages, and expose organizational structures from metadata we thought was safe. What once required deep expertise in statistical analysis can now be done with a few prompts to an LLM.

We'll explore how traditional privacy approaches fail against these democratized threats, and examine modern defenses like differential privacy and federated learning. You'll leave understanding both the scale of this new challenge and practical steps to protect your systems. Whether you're building communication tools, handling sensitive data, or protecting user privacy, this talk will show you why yesterday's privacy tools won't survive in a world where sophisticated pattern analysis is available to all.

Key Takeaways:

  • How traditional privacy and anonymization tools fail against basic AI analysis
  • Understanding the new ways AI can reconstruct identities and relationships
  • Practical architectures and techniques for building AI-resistant privacy systems

LLMs power everything from chatbots to autonomous agents, but their non-deterministic nature exposes you to spoofing, privilege escalation, and compliance pitfalls. In this session, we'll draw on the social engineering experiments I undertook while building conversational AI systems, and we'll see how attackers could bypass security guardrails. We'll explore:

  • Real-world injection attacks and the vulnerabilities that make them possible
  • Emerging identity patterns, from W3C Verifiable Credentials to on-chain verification
  • Methods to protect against prompt manipulation and the often-overlooked elements in audit logs
  • A roadmap to LLM-aware identity ecosystems, including policy-as-code enforcement and federated governance models

You'll discover practical approaches to securing LLM workflows today while preparing for tomorrow's decentralised identity architectures. Through demos and case studies, you'll leave with actionable patterns for building trust into AI systems, and insight into where the ecosystem is heading.

Elevator Pitch

JSON Web Tokens are great! Or are they? They're signed, and self-contained payloads of data, but what could go wrong? Come and find out. Live demos of hackery included.

Description

JWTs are secure; they're signed; they're the best thing since sliced bread! So you've adopted them into your applications, and feel much safer. The chances that things will go wrong are slim. Right?

This talk will introduce the ways in which JWT implementations can go wrong, together with live demos, and take you on a journey to understand how to make sure you can trust these handy payloads in your applications and APIs.

Description

Even the most carefully crafted system prompts can “go rogue,” reverting to generic assistant mode or leaking hidden instructions—undermining security, consistency, and user trust. Drawing on hard-earned lessons from building a goal-oriented AI group chat platform, this session delivers:

  • Multiple prompt-leakage and prompt-reversion examples showcasing real-world LLM failures
  • Live demos of evaluation workflows, detecting and analysing rogue or unexpected responses in real time
  • Practical security patterns for prompt engineering to mitigate leakage and fallback risks
  • Techniques for adding nondeterministic evaluation tests into your deployment pipeline

This no-fluff, demo-driven talk equips engineers and security practitioners with battle-tested patterns to keep LLM-powered applications on-brand and secure. You’ll leave with open-source repos, threat-model templates, and actionable takeaways to implement immediately.