Tuesday, 12th of May

Time:
9:00 am – 5:00 pm

Venue:
RAEng, Prince Philip House

Cost: Free

VeTSS Annual Conference 2026

The VeTSS Annual Conference is a key event for professionals from both academia and industry who are passionate about program analysis, testing, and verification. This open and inclusive gathering offers a unique opportunity to connect, share ideas, and stay at the forefront of cutting-edge research and innovation.

During the event, we will also announce the VeTSS Doctoral Dissertation Awards, recognising outstanding PhD research conducted in the UK. In addition, the recipients of the VeTSS Research Award 2025 will present highlights from their innovative and exciting research.

This year’s event will be held at the Royal Academy of Engineering, London on Tuesday, 12th of May. You can find the address and full directions to the venue via this link.

The meeting will start at 9:00. Registration, with tea and coffee, is from 8:30. On arrival, please make your way up to the ERA Foundation room on the first floor, where you will be able to collect your badge.

Please find the draft programme below. The detailed schedule will be updated periodically.

We want to make every VeTSS event a respectful and friendly environment for all, and we ask that everyone attending the Annual Conference adheres to the VeTSS code of conduct.

Programme

8:30 – 9:00 Registration, with tea and coffee
Session 1
9:00 – 9:10 Welcome, VeTSS Directors
9:10 – 10:10 Keynote talk: Towards Resilient Systems in an Increasingly Complex World, Kathleen Fisher (CEO, ARIA)
10:10 – 10:30 VeTSS Doctoral Dissertation Awards
10:30 – 11:00 Morning break
Session 2
11:00 – 11:30 Talk: Nitro Isolation Engine: formally verifying a production hypervisor, Dominic Mulligan (AWS)
11:30 – 12:30 Talks from VeTSS Research Awards 2025 winners
12:30 – 13:30 Lunch
Session 3
13:30 – 14:00 Talk: Evaluating and Verifying Quantum Computations, Petros Wallden (University of Edinburgh)
14:00 – 14:30 Talk: How CHERIoT’s least-privilege architecture enables partial verification, David Chisnall (CHERI)
14:30 – 15:00 Talk: Using causality for debugging and explainability, Hana Chockler (King’s College London)
15:00 – 15:30 Afternoon break
Session 4
15:30 – 16:00 Talk: Repository Automation with GitHub Agentic Workflows, Don Syme (Microsoft / GitHub)
16:00 – 16:30 Talk: Types for more than memory safety in OxCaml, Stephen Dolan (Jane Street)
16:30 – 17:00 Talk: Formal Verification of EMV Payments, Tom Chothia (University of Birmingham)

Details of talks

Keynote: Towards Resilient Systems in an Increasingly Complex World

Kathleen Fisher (CEO, ARIA)

Abstract:

As global systems grow more interconnected, our digital infrastructure faces unprecedented fragility. This talk explores the urgent shift from reactive security to built-in resilience that addresses systemic vulnerabilities.

Drawing on lessons from the history of formal methods at DARPA, we will examine the evolution of software security and its critical role in our current landscape. The session will also introduce ARIA (the UK’s Advanced Research + Invention Agency) and our mission to tackle these challenges through ambitious, high-risk programmes – spanning areas from climate resilience to next-generation trust infrastructure.

Nitro Isolation Engine: formally verifying a production hypervisor

Dominic Mulligan (AWS)

Abstract:

Cloud computing relies on hypervisors to enforce isolation between co-tenanted virtual machines. Hypervisors are therefore critical security infrastructure, and assurance of their correctness is paramount. Traditional engineering techniques—code review, testing, fuzzing—provide strong assurance but cannot exhaustively verify that isolation holds across all possible execution paths. Formal verification extends and complements these approaches by establishing mathematical guarantees about system behaviour.

This talk presents our experience applying interactive theorem proving to verify a production hypervisor component: the Nitro Isolation Engine. This is a trusted, minimalist computing base written in Rust, enforcing isolation between virtual machines on AWS Graviton5 EC2 instances. The component was designed for verification from its inception: we have specified its intended behaviour and verified its correctness in the Isabelle/HOL interactive theorem prover, producing approximately 260,000 lines of machine-checked models and proofs.

Our verification establishes three key classes of property:

Functional correctness: The system behaves as specified for all operations, including virtual machine creation, memory mapping, and abort handling. Our total verification approach additionally establishes memory safety, termination, and absence of runtime errors.

Confidentiality: A noninterference-style property demonstrates that guest virtual machine state remains hidden from an expansively defined observer that monitors system actions, formalised as indistinguishability preservation up to permitted declassification flows.

Integrity: Guest virtual machine private state is unaffected by operations on distinct virtual machines.
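To give a concrete flavour of the shape of these properties, here is a purely illustrative Python sketch (the class, method names, and logging scheme are hypothetical, and this is not the Nitro Isolation Engine's interface or its Isabelle/HOL formalisation): a toy hypervisor whose observer sees only operation kinds, so that traces differing only in a guest's private data are indistinguishable, and operations on one VM leave another's private state unchanged.

```python
# Toy model of a hypervisor managing per-VM private state.
# Illustrative only: names and structure are hypothetical, not the
# Nitro Isolation Engine's actual design.

class ToyHypervisor:
    def __init__(self):
        self.vms = {}          # vm_id -> private guest state
        self.public_log = []   # what an observer may see: operation kinds only

    def create_vm(self, vm_id):
        self.vms[vm_id] = {}
        self.public_log.append(("create", vm_id))

    def write(self, vm_id, key, value):
        self.vms[vm_id][key] = value
        self.public_log.append(("write", vm_id))  # the value is NOT logged

    def observe(self):
        # The observer learns which operations ran, but no private values.
        return list(self.public_log)

def run(trace):
    hv = ToyHypervisor()
    for op in trace:
        if op[0] == "create":
            hv.create_vm(op[1])
        else:
            hv.write(op[1], op[2], op[3])
    return hv

# Confidentiality (noninterference style): two traces that differ only in
# a guest's privately written values are indistinguishable to the observer.
t1 = [("create", "vm0"), ("write", "vm0", "k", "secret-A")]
t2 = [("create", "vm0"), ("write", "vm0", "k", "secret-B")]
assert run(t1).observe() == run(t2).observe()

# Integrity: operations on vm1 leave vm0's private state untouched.
t3 = [("create", "vm0"), ("write", "vm0", "k", 1),
      ("create", "vm1"), ("write", "vm1", "k", 99)]
assert run(t3).vms["vm0"] == {"k": 1}
```

In the real development such properties are proved once over all possible traces in Isabelle/HOL, rather than tested on example runs as above.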

The talk will discuss the verification approach, key proof techniques, and challenges in applying formal methods to production systems.

Evaluating and Verifying Quantum Computations

Petros Wallden (University of Edinburgh)

Abstract:

Quantum computing promises to offer transformative computational advantages, leading to significant advances in all sectors of the economy and society. This new computational paradigm requires new methods to evaluate the performance of the hardware and software and, importantly, to determine when and how true advantage can be achieved. In this talk I will (a) introduce the quantum computational paradigm, (b) review the main milestones that have been achieved recently and give a (hype-less) description of the state of the art, and (c) describe methods and progress in evaluating quantum computing, spanning from benchmarking devices, to algorithmic benchmarks, and more abstract verification of quantum programs.

How CHERIoT’s least-privilege architecture enables partial verification

David Chisnall (CHERI)

Abstract:

CHERIoT was designed to showcase what assuming memory safety from assembly code upwards made possible for system design. The closest equivalent to a kernel (i.e. the only part of the system that runs with sufficient privileges to be able to violate core security guarantees) is around 350 instructions. All other OS services are implemented as small components that run with minimal access.

This combination of small components, strong isolation, and hardware-enforced guarantees, provides both a rich set of axioms for verification to build on and significant system-level benefits from verification of properties in individual components. This talk will cover the system design properties that enable this as well as some completed and ongoing verification work on parts of the system.

Bio:

David Chisnall’s background spans computer architecture, compilers, operating systems, and security. He has been an LLVM contributor since 2018 and has written two language runtimes that have each been deployed on over a hundred million devices. He was elected twice to the FreeBSD Core Team and is the author of The Definitive Guide to the Xen Hypervisor. He joined the CHERI project at the University of Cambridge in 2012 to lead the compilers and languages strand of the research. He moved to Microsoft in 2018, where he worked on several internal silicon projects and led Microsoft’s engagement with the Digital Security by Design programme. As part of this, he led the team that created the CHERIoT platform, a hardware-software co-design effort to create a secure platform for connected embedded devices with minimal hardware overhead. In 2023 he co-founded SCI Semiconductor to commercialise CHERIoT, where he is responsible for system architecture and software development. SCI demonstrated the first CHERIoT silicon in 2026.

Using causality for debugging and explainability

Hana Chockler (King’s College London)

Abstract:

In this talk I will look at the applications of actual causality to debugging models and explainability. Specifically, I will talk about actual causality as introduced by Halpern and Pearl, and its quantitative extensions. This theory has proved extremely useful in various areas of computer science due to the good match between the results it produces and our intuition, and it is particularly useful for explaining the outputs of large AI systems. I will argue that explainability can be viewed as a debugging technique and illustrate this approach with a number of examples. I will discuss the differences between the traditional view of explainability as a human-oriented technique and the type of explainability we are proposing, which is essentially a window inside the (otherwise black-box) system.
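As a flavour of the underlying theory, the classic forest-fire example of actual causality can be checked in a few lines of Python. This is an illustrative sketch only: the function names are hypothetical, and the full Halpern–Pearl definition includes further minimality conditions. The point it demonstrates is that lightning is not a naive but-for cause of the fire when a lit match is also present, yet it is an actual cause under the contingency that the match is unlit.

```python
# Toy Halpern-Pearl-style "actual cause" check: but-for causation under
# contingencies, for two binary variables. Illustrative sketch only.

def fire(lightning, match):
    # Structural equation: fire breaks out if either source is present.
    return int(lightning or match)

def is_actual_cause(candidate, world):
    """Is `candidate` ('lightning' or 'match') an actual cause of the
    outcome in `world`? We look for a contingency (a setting of the
    other variable) under which keeping the candidate's actual value
    preserves the outcome, while flipping it changes the outcome."""
    other = "match" if candidate == "lightning" else "lightning"
    actual = fire(world["lightning"], world["match"])
    for contingency in (0, 1):
        kept = {candidate: world[candidate], other: contingency}
        flipped = {candidate: 1 - world[candidate], other: contingency}
        if fire(**kept) == actual and fire(**flipped) != actual:
            return True
    return False

world = {"lightning": 1, "match": 1}
# Naive but-for reasoning fails: flipping lightning alone leaves fire = 1.
assert fire(0, 1) == fire(1, 1) == 1
# Under the contingency match = 0, lightning is an actual cause of the fire
# (and symmetrically for the match).
assert is_actual_cause("lightning", world)
assert is_actual_cause("match", world)
```

The same schema generalises to larger structural models, where the search over contingencies is what makes computing actual causes hard in general.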

Bio:

Professor Hana Chockler is a Professor of AI at the Department of Informatics, King’s College London. Prior to joining KCL in 2013, she worked at IBM Research in the formal verification department. She received her PhD in computer science from the Hebrew University of Jerusalem, with her PhD dissertation focussing on coverage metrics for formal verification. Professor Chockler’s current research is mostly in the area of explainable AI, where she applies the principles of actual causality to explain the outputs of black-box AI models.

Repository Automation with GitHub Agentic Workflows

Don Syme (Microsoft / GitHub)

Abstract:

This talk introduces GitHub Agentic Workflows, now in technical preview, and explores how coding agents can be integrated into GitHub Actions to automate repository tasks. The talk will be based on the material at https://github.blog/ai-and-ml/automate-repository-tasks-with-github-agentic-workflows/

Types for more than memory safety in OxCaml

Stephen Dolan (Jane Street)

Abstract:

Sound type systems provide a lightweight way to verify various program safety properties, in a way that is natural and convenient for programmers by being part of the language itself, instead of a separate verification tool with its own syntax and semantics. While traditionally used to establish memory safety, modern type systems can verify more detailed program properties. In this talk, I’ll describe some ongoing work at Jane Street building OxCaml, a series of extensions to the OCaml programming language which includes support for verifying data race freedom, uniqueness, purity and concurrency properties.

Formal Verification of EMV Payments

Tom Chothia (University of Birmingham)

Abstract:

The EMV contactless bank card payment system has many independent parties: payment providers, terminal companies, smartphone companies, banks and regulators. Many of these stakeholders have unilaterally added undocumented features to the core protocol specifications, leading to a highly complex system. Our research has shown that formal verification is an effective method to understand, explain and test the security of EMV. This has led to the discovery of 12 different attacks against EMV cards; highlights of our work include taking $10,000 from the locked iPhone of YouTube influencer MKBHD and enabling an offline Square reader to accept transactions from fake cards for £20,000. In addition to finding vulnerabilities, we have designed a security protocol for smart cards and used formal verification to show that it stops most attacks against EMV. Our new protocol is currently being added to the ISO 14443 standard for all smart cards.

This is joint work with many people, including Ioana Boureanu, Anna Clee, Andreea-Ina Radu and George Pavlides.