VSS25: How can we avoid conflicts of interest?

Monday 11th August, Afternoon Session

BACKGROUND:

Ethical security policies ensure confidentiality properties that respect conflicts of interest. They are inspired by Chinese Walls, originally introduced to avoid insider trading. Even today, insider trading remains a major issue, regularly leading to criminal proceedings and multi-million-dollar fines. Conflicts of interest can also arise in computer systems such as Cloud computing and other decentralised architectures, where multiple organisations share infrastructure and resources. A famous paper by Brewer and Nash proposed a formal security policy model inspired by ethical Chinese Wall policies; in such a policy model, any security policy conforming to the model is guaranteed to satisfy the model's security properties. The formal semantics of the Brewer-Nash model were, however, underspecified, leaving scope for multiple interpretations of which accesses are permitted and what the intended security goals are.

In this talk we scrutinise famous papers on ethical policy models and develop two formal models. The first directly clarifies the model intended by Brewer and Nash. The second is a more abstract generalisation, inspired by Sandhu's work on Lattice-Based Access Control. An interesting feature of these models is that write access can be revoked when a subject (the entity accessing data) reads too much information. I'll argue that this problem is too important to be left underspecified, and that our methodology can bring more confidence and flexibility to policies while respecting conflicts of interest.
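To make the revocation behaviour concrete, here is a minimal Python sketch of the classic Brewer-Nash read and write rules (ignoring sanitised information). The names and data structures are illustrative assumptions, not the formal semantics developed in the talk: a subject may read an object unless it conflicts with a dataset it has already read, and may write only while everything it has read lies in one company dataset, so reading from a second dataset silently revokes write access.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Obj:
    dataset: str    # company dataset, e.g. "Bank-A"
    coi: str        # conflict-of-interest class, e.g. "Banks"

@dataclass
class Subject:
    # datasets this subject has read from, recorded as (dataset, coi) pairs
    history: set = field(default_factory=set)

def can_read(s: Subject, o: Obj) -> bool:
    # Simple security rule: reading is blocked only if the subject has
    # already read from a *different* dataset in the same COI class.
    return all(not (coi == o.coi and ds != o.dataset) for ds, coi in s.history)

def can_write(s: Subject, o: Obj) -> bool:
    # *-property: writing is permitted only while every dataset the subject
    # has read from is the one being written to; reading anywhere else
    # revokes write access, since the write could otherwise carry
    # information across a Chinese Wall.
    return can_read(s, o) and all(ds == o.dataset for ds, _ in s.history)

def read(s: Subject, o: Obj) -> bool:
    if not can_read(s, o):
        return False
    s.history.add((o.dataset, o.coi))
    return True

# Demo: write access is revoked once the subject reads a second dataset.
alice = Subject()
bank_a = Obj("Bank-A", "Banks")
oil_x = Obj("Oil-X", "Oil companies")

read(alice, bank_a)
print(can_write(alice, bank_a))   # True: Alice has only read Bank-A
read(alice, oil_x)                # allowed: a different COI class
print(can_write(alice, bank_a))   # False: write access has been revoked
```

Note that nothing is "taken away" explicitly: the permitted writes simply shrink as the subject's read history grows, which is exactly the underspecified behaviour the talk argues deserves a precise formal treatment.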


Dr. Ross Horne is a senior lecturer in the Department of Computer & Information Sciences at the University of Strathclyde, Glasgow, where he directs the MSc in cyber security. His primary research focuses on the security and privacy of communication protocols and systems relevant to our digital society, e.g., e-passports, contactless payments and digital wallets. His research develops formal threat models and logical methods to identify and mitigate risks such as attackers manipulating networks to impersonate users or infer sensitive behaviour.