Automated Testing for Web Browsers

Web browsers are now a central part of daily life in much of the world, so the reliability of browsers is important and their security is critical. This has led to a great deal of interest in how to intensively test web browsers via fuzzing – testing using randomly-generated inputs. Fuzzing, applied at large scale by companies such as Google, has been effective in finding low-level defects in many web browsers, including a large number of security-critical defects.

However, this sort of low-level fuzzing does nothing to determine whether a web browser is operating correctly at a semantic level – i.e., whether it is rendering pages in an appropriate manner. Nor does it do anything to protect end users against higher-level security threats, such as the circumvention of browser enforcement mechanisms, that do not manifest as low-level defects.

The main difficulty associated with semantic fuzzing for browsers is the lack of a test oracle – a means for determining whether a browser has rendered a page, or enforced a security policy, correctly. In this project, we will conduct initial investigations into novel oracles to enable semantic fuzzing of browsers. Drawing on successes in compiler testing, we will investigate strategies including differential testing across browsers and metamorphic testing across families of equivalent pages, and we plan to evaluate these across the latest browser versions from the major vendors.
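To give a flavour of the metamorphic approach, the sketch below shows how an oracle can be built without knowing what a page *should* look like: generate variants of a page via transformations that must not affect rendering (here, inserting HTML comments), and check that the browser renders every variant identically. The `render` function is a toy stand-in introduced purely so the example runs; in a real experiment it would drive an actual browser (e.g. capturing a screenshot or layout tree), and in the differential-testing variant one would compare `render` outputs across different browsers instead of across page variants.

```python
import random
import re

def render(html: str) -> list:
    """Toy stand-in for a browser's renderer: strips comments, then records
    the nesting depth of each opening tag as a crude 'layout'."""
    stripped = re.sub(r"<!--.*?-->", "", html, flags=re.S)
    depth, layout = 0, []
    for closing, name in re.findall(r"<(/?)(\w+)", stripped):
        if closing:
            depth -= 1
        else:
            layout.append((name, depth))
            depth += 1
    return layout

def equivalent_variants(page: str, n: int = 5) -> list:
    """Metamorphic transformation: inserting an HTML comment at a tag
    boundary must not change how the page renders."""
    boundaries = [m.end() for m in re.finditer(r">", page)]
    return [
        page[:pos] + "<!-- noise -->" + page[pos:]
        for pos in (random.choice(boundaries) for _ in range(n))
    ]

def metamorphic_check(page: str) -> bool:
    """Oracle: all render-neutral variants must render identically
    to the original page."""
    reference = render(page)
    return all(render(v) == reference for v in equivalent_variants(page))

page = "<html><body><div><p>hello</p></div></body></html>"
print(metamorphic_check(page))  # comments are render-neutral, so True
```

A disagreement between the original and a variant (or, in the differential setting, between two browsers) flags a potential semantic bug without requiring a ground-truth rendering.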

Researchers: Benjamin Livshits and Alastair Donaldson, Imperial College London