How many years it takes to publish a new idea

While the original idea can be traced back to a chapter in my PhD thesis, the actual work on large-scale e-voting started in 2009 when I was still working in Cambridge. My colleague Matthew Kreeger and I began to critically examine the basic theory underpinning 20 years of research on End-to-End (E2E) verifiable e-voting, and attempted to design a new category of E2E systems that does not rely on any trustworthy tallying authorities. We called the new category “self-enforcing e-voting”.

We first released a technical report on the IACR ePrint archive (2010), and then tried to publish the work at a conference.

http://eprint.iacr.org/2010/452

Until its recent acceptance by USENIX JETS (Vol. 2, No. 3, 2014), the paper had been repeatedly rejected by top conferences in the field. The final version of the paper is in the open-access domain (below). The technical protocol in the paper remains unchanged from its 2010 IACR report.

https://www.usenix.org/jets/issues/0203/hao

This has been an interesting personal experience: from receiving consistently harsh reviews and repeated rejections at top conferences, to getting surprisingly positive feedback from the ERC panel and a €1.5m starting grant to support my further work, and finally to the recent acceptance of the paper.

Getting rejections is always a frustrating experience, but in the end, I feel I learned more from the rejections than from the acceptance. Today’s top security conferences have developed an extremely rigorous reviewing process, which is good. But perhaps the process could be adjusted slightly to give a little more tolerance to “new” ideas, even if they are controversial or have all sorts of shortcomings at the beginning.

Acknowledgement: The co-authors of the paper are Matthew Kreeger, Brian Randell, Dylan Clarke, Siamak Shahandashti and Peter Lee. We especially thank the several dozen anonymous reviewers – whether they liked or disliked the paper – for their feedback and for helping us improve the paper.

Radio interview with Ehsan Toreini about private browsing

Ehsan was interviewed on a radio show last Thursday (17 July, 2014) by the Australian radio station LifeFM about the security issues of private browsing in modern browsers. The interview was related to a recently published paper which Ehsan co-authored: “On the privacy of private browsing – A forensic approach” (2014, ScienceDirect).

The interview recording can be found here.

Quantitative Workflow Resiliency

At ESORICS 2014, John Mace will present some joint work with Aad van Moorsel and myself on the problem of quantitative workflow resiliency.

Intuitively, a workflow is a partially ordered set of tasks, such that if a task t comes before a task t’ in the order, then t must be executed before t’. Each task is executed by a user, and different tasks can be executed by different users. For instance, consider the reviewing process for a journal: the paper is assigned to some reviewers by an editor, then each reviewer submits a review for the paper, and the editor makes a decision. Clearly, the decision can only be made after every review has been submitted, and each review can only be submitted after the assignment has been done. However, the different reviews can be submitted in any order.
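
To make the structure concrete, here is a minimal sketch of this reviewing workflow as a partially ordered set of tasks. It is purely illustrative (the task names and the precedence relation are my own assumptions, not code from the paper).

```python
# Illustrative sketch: a workflow as a set of tasks plus a precedence relation.
tasks = {"assign", "review1", "review2", "decide"}

# (t1, t2) means task t1 must be executed before task t2.
precedes = {
    ("assign", "review1"),
    ("assign", "review2"),
    ("review1", "decide"),
    ("review2", "decide"),
    # No constraint between review1 and review2:
    # the two reviews may be submitted in either order.
}

def is_valid_order(order):
    """Check that a full execution order respects the precedence relation."""
    position = {t: i for i, t in enumerate(order)}
    return set(order) == tasks and all(position[a] < position[b] for a, b in precedes)

print(is_valid_order(["assign", "review2", "review1", "decide"]))  # True
print(is_valid_order(["review1", "assign", "review2", "decide"]))  # False
```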

A security policy for a workflow consists of constraints on which users can execute which tasks. For instance, some tasks can only be executed by some users (e.g., only an editor can assign a reviewer), some tasks are bound by a separation of duty (e.g., someone submitting a paper cannot review that paper), while other tasks are bound by a binding of duty (e.g., the editor making the final decision must be the same as the editor in charge of the initial reviewer assignment). Assigning each task to a user while respecting the security policy is known as the Workflow Satisfiability Problem (WSP), and we refer to “On the Parameterized Complexity and Kernelization of the Workflow Satisfiability Problem”, by Crampton, Gutin and Yeo, from Royal Holloway, for a recent paper on the WSP (Jason Crampton is giving a colloquium at Newcastle University on 22/07 about this line of work).
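
As a concrete (and deliberately naive) illustration of the WSP on the reviewing example, the sketch below simply enumerates all assignments and returns one that satisfies the policy. The user names, authorisation lists and constraints are hypothetical, and this is not the algorithm studied by Crampton, Gutin and Yeo.

```python
# Brute-force WSP sketch: try every assignment of users to tasks and return one
# that satisfies authorisation, separation-of-duty and binding-of-duty constraints.
from itertools import product

tasks = ["assign", "review1", "review2", "decide"]
users = ["alice", "bob", "carol"]

authorised = {                                   # which users may execute which task
    "assign":  {"alice"},                        # only the editor can assign reviewers
    "review1": {"bob", "carol"},
    "review2": {"bob", "carol"},
    "decide":  {"alice"},
}
separation_of_duty = [("review1", "review2")]    # must be executed by different users
binding_of_duty    = [("assign", "decide")]      # must be executed by the same user

def satisfies(assignment):
    return (all(assignment[t] in authorised[t] for t in tasks)
            and all(assignment[a] != assignment[b] for a, b in separation_of_duty)
            and all(assignment[a] == assignment[b] for a, b in binding_of_duty))

def solve_wsp(available=users):
    """Return one policy-compliant task-to-user assignment, or None if none exists."""
    for choice in product(available, repeat=len(tasks)):
        assignment = dict(zip(tasks, choice))
        if satisfies(assignment):
            return assignment
    return None

print(solve_wsp())  # e.g. {'assign': 'alice', 'review1': 'bob', 'review2': 'carol', 'decide': 'alice'}
```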

To some extent, the WSP is rather static: the workflow is analysed together with its policy, and either a correct assignment is found, in which case the workflow can be executed, or there is no possible assignment. However, in practice, the availability of the different users can vary between the start of the workflow and its end. For instance, a reviewer can become unavailable after being assigned a review, or the editor in charge of the paper at the beginning could develop a conflict of interest with the authors during the process. Wang and Li introduced the notion of workflow resiliency: roughly speaking, a workflow is N-resilient if the WSP can still be solved even when N users become unavailable. Resiliency can be static, when users can only become unavailable before the start of the workflow; decremental, when users can become unavailable at any point; or dynamic, when users can become unavailable at any point and then potentially become available again.
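
Reusing the toy WSP sketch above, static resiliency can be illustrated by removing every possible set of N users before the workflow starts and re-solving the WSP. This is only a naive check on the small example, not Wang and Li’s formulation or an efficient algorithm.

```python
# Naive static N-resiliency check, reusing solve_wsp and users from the WSP
# sketch above: the workflow is statically N-resilient if the WSP remains
# satisfiable whichever N users become unavailable before the workflow starts.
from itertools import combinations

def statically_resilient(n):
    for absent in combinations(users, n):
        remaining = [u for u in users if u not in absent]
        if solve_wsp(remaining) is None:
            return False
    return True

print(statically_resilient(0))  # True: the WSP itself is satisfiable
print(statically_resilient(1))  # False: losing alice leaves no one to assign or decide
```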

Resiliency is a binary notion: a workflow is N-resilient if no subset of N unavailable users can block it, and not N-resilient otherwise. In the ESORICS paper, we suggest that in practice, user availability is not a strictly deterministic property, but a probabilistic one. Indeed, if we know in advance that a reviewer is not available at a particular time, then we can simply mark that reviewer as unqualified for the corresponding task, thus reducing resiliency to solving the WSP. Instead, we might only know that a reviewer has some probability of becoming unavailable at some point (e.g., sickness, over-scheduling, travelling, etc.). In other words, a workflow has some probability of terminating its execution, depending on whether some users become unavailable or not.

Now, consider two workflows that both satisfy the WSP (i.e., assuming full user availability, it is possible to find a user assignment ensuring that each task is executed), such that, over 100 executions, the first one fails to terminate once, while the second fails to terminate 99 times, under similar user availability. Although neither is resilient, we believe these two workflows should be measured differently, since the first one could actually be acceptable in many situations, while the second one could effectively be unusable. Of course, one could argue that, if possible, a workflow terminating in all cases is ideal, but in practice, if we had to choose between the two previous workflows, the first one is likely to be better.

Hence, we define the notion of Quantitative Workflow Resiliency, which measures the likelihood that a workflow terminates given a probabilistic user availability model, considering that users can be reassigned at runtime as long as the policy remains satisfied. More precisely, we encode the problem of selecting a user for a task as a Markov Decision Process, where each state consists of a context (containing the previous task assignments and the user availability) and a task, and each action corresponds to selecting a user for the task of that state. The transition function governs which task is to be executed next and how the context evolves, in particular concerning user availability. The termination of the workflow is associated with a reward of 1, while any other transition is associated with a reward of 0; the value of the initial state then corresponds to the probability that the workflow terminates. We also show that by associating each transition with a reward of 1 and termination with a reward of 0, the value of the initial state corresponds to the expected number of tasks executed by the workflow.
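
As a rough illustration of the idea (a heavily simplified sketch, not the encoding or experiments from the paper), the snippet below computes the termination probability of the toy workflow defined earlier by backward induction: at each task a random subset of users is available (independently, with assumed probabilities), and the scheduler picks the policy-compliant available user that maximises the chance of finishing the remaining tasks.

```python
# Simplified finite-horizon MDP sketch; it reuses tasks, authorised,
# separation_of_duty and binding_of_duty from the WSP sketch above.
# Chance node: which users are available for the current task.
# Decision: which available, policy-compliant user to pick.
# Reward 1 on termination and 0 otherwise, so the value of the initial
# state is the probability that the workflow terminates.
from itertools import chain, combinations

avail = {"alice": 0.9, "bob": 0.7, "carol": 0.8}   # assumed availability probabilities

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def allowed(task, user, assignment):
    """Does giving `task` to `user` respect the policy, given past assignments?"""
    if user not in authorised[task]:
        return False
    for a, b in separation_of_duty:
        other = b if task == a else a if task == b else None
        if other is not None and assignment.get(other) == user:
            return False
    for a, b in binding_of_duty:
        other = b if task == a else a if task == b else None
        if other is not None and other in assignment and assignment[other] != user:
            return False
    return True

def value(i, assignment):
    """Probability of successfully executing tasks[i:] when acting optimally."""
    if i == len(tasks):
        return 1.0                                  # workflow terminated: reward 1
    total = 0.0
    for present in subsets(avail):                  # chance: which users turn up
        p = 1.0
        for u in avail:
            p *= avail[u] if u in present else 1 - avail[u]
        candidates = [u for u in present if allowed(tasks[i], u, assignment)]
        best = max((value(i + 1, {**assignment, tasks[i]: u}) for u in candidates),
                   default=0.0)                     # nobody suitable: workflow blocks
        total += p * best
    return total

print(round(value(0, {}), 4))                       # estimated probability of termination
```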

We believe these two metrics are useful for evaluating workflow resiliency, and provide more information than a binary notion of resiliency. In particular, we give in the paper an illustrative example where the assignment optimising termination is different from the one optimising the number of tasks executed, and both are different from the one optimising the computation time required at runtime to calculate the assignment. In other words, finding an optimal user assignment might require solving a trade-off between termination, task execution and computation time.

On the privacy of private browsing – A forensic approach

Most readers should already be familiar with the concept of Private Browsing (also known as Incognito mode in Google Chrome and InPrivate Browsing in IE). This post is about our newly published paper in the Journal of Information Security and Applications (Vol. 19, No. 1, 2014), which investigates the privacy issues of Private Browsing in four mainstream browsers: IE, Firefox, Chrome and Safari.

Our work is motivated by an apparent lack of research on the subject, despite the fact that millions of users rely on private browsing to protect their privacy on a daily basis. In USENIX Security’10, Aggarwal et al. studied the security of the then newly introduced private browsing feature in modern browsers and discovered several security issues. Their experiments were mainly focused on Firefox (in particular, Firefox 3.6). A year later, Said et al. extended the earlier work by analyzing computer memory, and found traces of private browsing data in memory and cache after the browser was closed. More recently, in ESORICS’13, Lerner et al. presented a software tool that allows automatic verification of browser extensions’ compliance with private mode; the tool was mainly tested on Firefox extensions. Apart from these papers, the subject of private browsing seems to have received little attention from the security community.

In our paper, we take a systematic approach to investigating the privacy of private browsing across the four mainstream browsers (IE, Firefox, Chrome and Safari) and from various angles: not just in memory, but also in the local database and in web traffic. Our work constitutes an independent evaluation of the private browsing feature provided by mainstream web browsers, and presents the latest understanding of the security of private browsing as of February 2014.

Our threat model is defined in terms of the attacker’s goals and capabilities. We divide the attacks into two categories: local and remote attacks. In a local attack, the attacker has physical access to the user’s computer and has full control over it after the target user has exited the private browsing session (i.e., “after-the-fact” forensics). A remote attack assumes that the attacker interacts with the user over HTTP(S) and wants to find out whether the user is currently in private browsing mode; typically, this happens when the target user is visiting a web site controlled by the attacker. We have assessed different attacks in each category. A summary of all the attacks is presented in the following table. Those marked with * contain new results discovered by our study, while the others correspond to attacks that were previously known but have been validated again by our study. Full details about the attacks can be found in our paper. All the source code for the extensions and timing attacks is freely available here. We welcome any comments.

Attack                                        Information leakage
Domain name system                            Browsing history
Memory inspection                             Browsing history, passwords, cookies
File timestamp                                When private mode was last used
Index.dat* (IE only)                          When private mode was last used
SQLite database crash* (not IE)               Minor to serious, depending on the browser
SQLite added bookmark* (not IE)               Minor to serious, depending on the browser
Extension*                                    Browsing history
Cross-mode interference* (one browser only)   User activities in private mode
Hyperlink attack                              Whether the user is in private mode
Timing attack*                                Whether the user is in private mode

Acknowledgements: This paper is based on an MSc dissertation titled “Is private browsing private?” by Kiavash Satvat. The authors of the paper are Kiavash Satvat, Matthew Forshaw, Feng Hao and Ehsan Toreini.