Tuesday, 12th of May

Time:
9:00 am – 5:00 pm

Venue:
RAEng, Prince Philip House

Cost: Free

VeTSS Annual Conference 2026

The VeTSS Annual Conference is a key event for professionals from both academia and industry who are passionate about program analysis, testing, and verification. This open and inclusive gathering offers a unique opportunity to connect, share ideas, and stay at the forefront of cutting-edge research and innovation.

During the event, we will also announce the VeTSS Doctoral Dissertation Awards, recognising outstanding PhD research conducted in the UK. In addition, the recipients of the VeTSS Research Award 2025 will present highlights from their innovative and exciting research.

Registration is free but numbers are limited. To register your interest to attend, please complete this short form. We will then contact you as soon as we are able to confirm your place at the event.

This year’s event will be held at the Royal Academy of Engineering, London on Tuesday, 12th of May. You can find the address and full directions to the venue via this link.

The meeting will start at 9:00. Registration, with tea and coffee, opens at 8:30. On arrival, please make your way up to the ERA Foundation room on the first floor, where you will be able to collect your badge.

Please find the draft programme below. The detailed schedule will be updated periodically.

We want to make every VeTSS event a respectful and friendly environment for all, and we ask that everyone attending the Annual Conference adheres to the VeTSS code of conduct.

Draft Programme

8:30 – 9:00 Registration, with Tea and Coffee
Session 1
9:00 – 9:10 Welcome, VeTSS Directors
9:10 – 10:10 Keynote talk: Towards Resilient Systems in an Increasingly Complex World, Kathleen Fisher (CEO, ARIA)
10:10 – 10:30 VeTSS Doctoral Dissertation Awards
10:30 – 11:00 Morning break
Session 2
11:00 – 11:30 Talk: Nitro Isolation Engine: formally verifying a production hypervisor, Dominic Mulligan (AWS)
11:30 – 12:30 Talks from VeTSS Research Awards 2025 winners
12:30 – 13:30 Lunch
Session 3
13:30 – 14:00 Talk: Petros Wallden (University of Edinburgh)
14:00 – 14:30 Talk: David Chisnall (CHERI)
14:30 – 15:00 Talk: Using causality for debugging and explainability, Hana Chockler (King’s College London)
15:00 – 15:30 Afternoon break
Session 4
15:30 – 16:00 Talk: Repository Automation with GitHub Agentic Workflows, Don Syme (Microsoft / GitHub)
16:00 – 16:30 Talk: Types for more than memory safety in OxCaml, Stephen Dolan (Jane Street)
16:30 – 17:00 Talk: Formal Verification of EMV Payments, Tom Chothia (University of Birmingham)

Details of talks

Keynote: Towards Resilient Systems in an Increasingly Complex World

Kathleen Fisher (CEO, ARIA)

Abstract:

As global systems grow more interconnected, our digital and biological infrastructure faces unprecedented fragility. This talk explores the urgent shift from reactive security to built-in resilience that addresses systemic vulnerabilities.

Drawing on lessons from the history of formal methods at DARPA, we will examine the evolution of software security and its critical role in our current landscape. The session will also introduce ARIA (the UK’s Advanced Research + Invention Agency) and our mission to tackle these challenges through ambitious, high-risk programmes – spanning areas from climate resilience to next-generation trust infrastructure.

Nitro Isolation Engine: formally verifying a production hypervisor

Dominic Mulligan (AWS)

Abstract:

Cloud computing relies on hypervisors to enforce isolation between co-tenanted virtual machines. Hypervisors are therefore critical security infrastructure, and assurance of their correctness is paramount. Traditional engineering techniques—code review, testing, fuzzing—provide strong assurance but cannot exhaustively verify that isolation holds across all possible execution paths. Formal verification extends and complements these approaches by establishing mathematical guarantees about system behaviour.

This talk presents our experience applying interactive theorem proving to verify a production hypervisor component: the Nitro Isolation Engine. This is a trusted, minimalist computing base written in Rust, enforcing isolation between virtual machines on AWS Graviton5 EC2 instances. Because the component was designed for verification from inception, we were able to specify its intended behaviour and verify correctness in the Isabelle/HOL interactive theorem prover, producing approximately 260,000 lines of machine-checked models and proofs.

Our verification establishes three key classes of property:

Functional correctness: The system behaves as specified for all operations including virtual machine creation, memory mapping, and abort handling. Our total verification approach additionally establishes memory-safety, termination, and absence of runtime errors.

Confidentiality: A noninterference-style property demonstrates that guest virtual machine state remains hidden from an expansively defined observer monitoring system actions, formalised as indistinguishability preservation up to permitted declassification flows.

Integrity: Guest virtual machine private state is unaffected by operations on distinct virtual machines.
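The confidentiality and integrity properties above can be made concrete with a toy model. The following Python sketch is purely illustrative: the names, state, and operations are invented here for exposition and bear no relation to the actual Rust implementation or Isabelle/HOL development.

```python
# Toy hypervisor model (illustrative only, NOT the Nitro Isolation Engine):
# each VM has private memory; an observer sees only the public event log.
from dataclasses import dataclass, field

@dataclass
class ToyHypervisor:
    mem: dict = field(default_factory=dict)   # vm_id -> private guest state
    log: list = field(default_factory=list)   # observer-visible events

    def create_vm(self, vm, secret):
        self.mem[vm] = secret
        self.log.append(("create", vm))       # the secret is NOT logged

    def destroy_vm(self, vm):
        self.mem.pop(vm, None)
        self.log.append(("destroy", vm))

def observable(h):
    return list(h.log)

# Confidentiality (noninterference style): two runs that differ only in a
# guest's private state are indistinguishable to the observer.
h1, h2 = ToyHypervisor(), ToyHypervisor()
h1.create_vm("A", secret=42)
h2.create_vm("A", secret=7)
assert observable(h1) == observable(h2)

# Integrity: operations on a distinct VM leave VM A's private state untouched.
before = h1.mem["A"]
h1.create_vm("B", secret=99)
h1.destroy_vm("B")
assert h1.mem["A"] == before
```

In the real proof these properties are stated over all execution traces, not checked on two sample runs; the sketch only conveys the shape of the statements.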

The talk will discuss the verification approach, key proof techniques, and challenges in applying formal methods to production systems.

Talk:

Petros Wallden (University of Edinburgh)

Talk: CHERI

David Chisnall (CHERI)

Using causality for debugging and explainability

Hana Chockler (King’s College London)

Abstract:

In this talk I will look at the applications of actual causality to debugging models and explainability. Specifically, I will talk about actual causality as introduced by Halpern and Pearl, and its quantitative extensions. This theory turns out to be extremely useful in various areas of computer science due to a good match between the results it produces and our intuition. It turns out to be particularly useful for explaining the outputs of large AI systems. I will argue that explainability can be viewed as a debugging technique and illustrate this approach with a number of examples. I will discuss the differences between the traditional view of explainability as a human-oriented technique and the type of explainability we are proposing, which is essentially a window inside the (otherwise black-box) system.
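The Halpern–Pearl definition mentioned above can be sketched in code. The following is a minimal brute-force checker for the modified (2015) form of the definition over boolean structural equations; it is an illustrative sketch, not the speaker's tooling, and the forest-fire model names are the standard textbook example.

```python
from itertools import combinations, product

def solve(equations, context, intervention=None):
    """Evaluate structural equations (given in topological order) under an
    intervention that forces some endogenous variables to fixed values."""
    intervention = intervention or {}
    vals = dict(context)
    for var, fn in equations.items():
        vals[var] = intervention[var] if var in intervention else fn(vals)
    return vals

def is_actual_cause(equations, context, candidate, phi):
    """Modified Halpern-Pearl definition, checked by brute force:
    AC1: the candidate assignment and phi hold in the actual world;
    AC2: some alternative setting of the candidate variables, with a witness
         set W frozen at its actual values, makes phi false;
    AC3: no strict subset of the candidate already suffices (minimality)."""
    actual = solve(equations, context)
    if any(actual[v] != x for v, x in candidate.items()) or not phi(actual):
        return False                                    # AC1 fails
    others = [v for v in equations if v not in candidate]
    names = list(candidate)
    found = False
    for r in range(len(others) + 1):
        for W in combinations(others, r):
            hold = {w: actual[w] for w in W}
            for alt in product([0, 1], repeat=len(names)):
                setting = dict(zip(names, alt))
                if setting == candidate:
                    continue
                if not phi(solve(equations, context, {**setting, **hold})):
                    found = True                        # AC2 witnessed
    if not found:
        return False
    for r in range(1, len(candidate)):                  # AC3: minimality
        for sub in combinations(candidate.items(), r):
            if is_actual_cause(equations, context, dict(sub), phi):
                return False
    return True

# Classic forest-fire example: lightning (L) and a dropped match (MD).
ctx = {"U_L": 1, "U_MD": 1}
fire = lambda v: v["FF"] == 1
conj = {"L": lambda v: v["U_L"], "MD": lambda v: v["U_MD"],
        "FF": lambda v: v["L"] and v["MD"]}            # fire needs both
disj = {"L": lambda v: v["U_L"], "MD": lambda v: v["U_MD"],
        "FF": lambda v: v["L"] or v["MD"]}             # either one suffices

assert is_actual_cause(conj, ctx, {"L": 1}, fire)           # L alone is a cause
assert not is_actual_cause(disj, ctx, {"L": 1}, fire)       # overdetermination
assert is_actual_cause(disj, ctx, {"L": 1, "MD": 1}, fire)  # only jointly
```

The disjunctive case shows why the definition is subtle: when either event alone would have started the fire, neither is a cause by itself under the modified definition, but the pair is.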

Bio: Hana Chockler is a Professor of AI at the Department of Informatics, King’s College London. Prior to joining KCL in 2013, she worked at IBM Research in the formal verification department. She received her PhD in computer science from the Hebrew University of Jerusalem, with a dissertation focussing on coverage metrics for formal verification. Professor Chockler’s current research is mostly in the area of explainable AI, where she applies the principles of actual causality to explain the outputs of black-box AI models.

Repository Automation with GitHub Agentic Workflows

Don Syme (Microsoft / GitHub)

Abstract:

This talk introduces GitHub Agentic Workflows, now in technical preview, and explores how coding agents can be integrated into GitHub Actions to automate repository tasks. The talk will be based on the material at https://github.blog/ai-and-ml/automate-repository-tasks-with-github-agentic-workflows/

Types for more than memory safety in OxCaml

Stephen Dolan (Jane Street)

Abstract:

Sound type systems provide a lightweight way to verify various program safety properties, in a way that is natural and convenient for programmers by being part of the language itself, instead of a separate verification tool with its own syntax and semantics. While traditionally used to establish memory safety, modern type systems can verify more detailed program properties. In this talk, I’ll describe some ongoing work at Jane Street building OxCaml, a series of extensions to the OCaml programming language which includes support for verifying data race freedom, uniqueness, purity and concurrency properties.

Formal Verification of EMV Payments

Tom Chothia (University of Birmingham)

Abstract:

The EMV contactless bank card payment system has many independent parties: payment providers, terminal companies, smart-phone companies, banks and regulators. Many of these stakeholders have unilaterally added undocumented features to the core protocol specifications, leading to a highly complex system. Our research has shown that formal verification is an effective method to understand, explain and test the security of EMV. This has led to the discovery of 12 different attacks against EMV cards; highlights of our work include taking $10,000 from the locked iPhone of YouTube influencer MKBHD and enabling an offline Square reader to accept transactions from fake cards for £20,000. In addition to finding vulnerabilities, we have designed a security protocol for smart cards and used formal verification to show that it stops most attacks against EMV. Our new protocol is currently being added to the ISO 14443 standard for all smart cards.

This is joint work with many people, including Ioana Boureanu, Anna Clee, Andreea-Ina Radu and George Pavlides.