A video demonstration of DRE-ip

We have made available a video demonstration of the DRE-ip voting system on YouTube. The video was made by Ehsan Toreini.

DRE-ip (Direct Recording Electronic with Integrity and Privacy) is an end-to-end verifiable e-voting system without tallying authorities, designed by Siamak Shahandashti and me in 2016. The DRE-ip paper was presented at ESORICS’16 and is freely available at: https://eprint.iacr.org/2016/670.pdf.

Smart Counter-Collusion Contracts for Verifiable Cloud Computing (2/2)

Previously, we showed the Prisoner’s contract and how it would force the two clouds to behave honestly by creating a Prisoner’s Dilemma. However, this only works if the two clouds cannot make credible commitments. The fundamental problem in the Prisoner’s Dilemma and in game 1 (in the previous post) is that neither player can believe that the other, a selfish utility maximizer, will follow the collusion strategy.

Now if someone does not believe you, you cannot convince them just by talking. What convinces a rational player is showing that lying will make you worse off. If lying is not in your interest, you will not lie. And if you want someone to do what you expect, you have to show that doing so is in their best interest.

That is the idea behind the Colluder’s contract, in which both clouds show their loyalty to the collusion (i.e. sending the agreed wrong result r) by promising, in effect: “I will suffer a loss if I cheat, and any damage my cheating causes you will be compensated.” The one who initiates the collusion can also give a slice of his own profit to the other as an additional incentive. The contract is again based on deposits (a minimal settlement sketch follows the list below):

  • Each cloud pays a large enough deposit into the Colluder’s contract;
  • Anyone who does not follow the collusion strategy loses its own deposit, which is transferred to the other cloud;
  • The ringleader commits to giving a bribe if the other follows the collusion strategy.
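
To make the incentive concrete, here is a minimal sketch of the settlement rule described above. Everything here is hypothetical and much simplified (amounts, function names, the split of roles); the actual contract is written in Solidity and also handles escrow, timing and its interaction with the Prisoner’s contract.

```python
# A minimal, illustrative settlement rule for the Colluder's contract.
# All names and amounts are hypothetical; the real contract (in Solidity)
# also handles escrow, timing and the interaction with the Prisoner's contract.

def settle_colluders_contract(deposit: int, bribe: int,
                              ringleader_followed: bool,
                              other_followed: bool) -> dict:
    """Net transfer for each cloud, following the rules listed above."""
    ringleader, other = 0, 0

    # Rule 2: deviating from the collusion strategy forfeits your deposit
    # to the other cloud.
    if not ringleader_followed:
        ringleader -= deposit
        other += deposit
    if not other_followed:
        other -= deposit
        ringleader += deposit

    # Rule 3: the ringleader pays the promised bribe if the other cloud
    # follows the collusion strategy.
    if other_followed:
        ringleader -= bribe
        other += bribe

    return {"ringleader": ringleader, "other": other}

# Example: the other cloud betrays the collusion (i.e. stays honest), so it
# forfeits its deposit: exactly the credible threat that makes following
# the collusion strategy rational once the contract is signed.
print(settle_colluders_contract(deposit=100, bribe=10,
                                ringleader_followed=True,
                                other_followed=False))
# {'ringleader': 100, 'other': -100}
```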

This colluder’s contract, when in place, will change the game into:

As you can see, now the equilibrium (bold path) for the two clouds is to collude and both follow the collusion strategy.

This is bad. After trying to prevent collusion using smart contracts, we found that smart contracts can actually be used to enable collusion. And if the client tries to counter that with another contract, the clouds can use yet another contract to counter back. This is an endless loop.

What can we do then, if we cannot counter back directly? In the end, we came up with the idea of using a smart contract to incentivize secret betrayal and reporting. This leads to the Traitor’s contract. In this contract, the first cloud that reports collusion will not be punished by the Prisoner’s contract and will get an additional reward if the collusion attempt does exist (so there is a motivation to report). However, anyone who tries to report a non-existent collusion case will have to bear the consequences and suffer a loss (so there is a motivation not to abuse the system).

The consequence of reporting is that the client can call the trusted party and find out who cheated. Once the trusted party is called, there is no point in countering back with another contract, because the payoff of each cloud now depends only on whether it cheats, not on the other’s behavior. So we break the loop. More importantly, the Traitor’s contract creates distrust between the clouds, because “agree to collude, then betray” is the best response if one cloud tries to initiate collusion. Once both clouds understand that, neither will want to initiate collusion, because they know they will be betrayed and end up with a worse payoff. So both will behave honestly in the first place.
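
The reporting incentive can be captured in a few lines. The sketch below is only illustrative, with hypothetical names and amounts; the actual conditions, and the bounds on deposits and rewards that make reporting rational, are derived in the paper.

```python
# Simplified payoff for the reporting cloud under the Traitor's contract.
# Hypothetical names and amounts; the paper derives the real bounds.

def reporter_payoff(report_deposit: int, reward: int,
                    collusion_exists: bool) -> int:
    if collusion_exists:
        # The reporter escapes the Prisoner's-contract punishment and is
        # rewarded, so betraying a genuine collusion attempt always pays.
        return reward
    # Reporting a non-existent collusion forfeits the reporter's deposit,
    # so nobody is tempted to abuse the mechanism.
    return -report_deposit
```

With such a rule in place, “agree to collude, then report” beats “follow the collusion”, which is exactly what destroys the trust that collusion needs.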

The contract again works by manipulating deposits paid upfront by the client and the reporting cloud. Details can be found in the paper. Here I just show the full game tree:

Implementation of the contracts in Solidity is available here. We actually tested the contracts on the official Ethereum network. There are challenges in implementing the contracts, one being the transparency of a public blockchain: everything you put on the blockchain is visible to anyone. To make it worse, a blockchain is append-only, which means there is no way to delete the data later if you change your mind.

To preserve data privacy, we used some lightweight cryptography, including Pedersen commitments and non-interactive zero-knowledge proofs (NIZKs). A Pedersen commitment allows us to put a “commitment” to a value on the blockchain rather than the value itself. The commitment is hiding, in that it leaks no information about the committed value, and binding, in that you cannot find a different value and convince other people that this new value was the one committed. One problem caused by the hiding property is that the miners cannot see the values inside the commitments and thus cannot compare them to determine whether the values are equal (which is needed to execute the contracts). Fortunately, we can use NIZKs, which are cryptographic proofs that can be checked publicly with the commitments as inputs. There are existing NIZK protocols for proving equality/inequality of committed values, which we can simply use.
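
To give a flavour of these building blocks, below is a toy Python sketch of a Pedersen commitment together with a Schnorr-style NIZK that two commitments hide the same value. The group parameters are tiny and hard-coded purely for illustration (and in practice the discrete log of h with respect to g must be unknown); it is not the code we deployed.

```python
# Toy Pedersen commitment plus a Schnorr-style NIZK that two commitments
# hide the same value. Tiny parameters for illustration only.
import hashlib
import secrets

p, q = 23, 11   # subgroup of prime order q inside Z_p* (toy parameters)
g, h = 4, 9     # two subgroup generators; in practice log_g(h) must be unknown

def commit(v: int, r: int) -> int:
    """Pedersen commitment C = g^v * h^r mod p."""
    return (pow(g, v, p) * pow(h, r, p)) % p

def prove_equal(r1: int, r2: int, c1: int, c2: int):
    """NIZK that c1 and c2 commit to the same value: c1/c2 = h^(r1-r2)."""
    d = (r1 - r2) % q
    y = (c1 * pow(c2, -1, p)) % p          # equals h^d iff the values are equal
    w = secrets.randbelow(q)
    a = pow(h, w, p)
    c = int(hashlib.sha256(f"{y}{a}".encode()).hexdigest(), 16) % q
    z = (w + c * d) % q
    return a, z

def verify_equal(c1: int, c2: int, proof) -> bool:
    a, z = proof
    y = (c1 * pow(c2, -1, p)) % p
    c = int(hashlib.sha256(f"{y}{a}".encode()).hexdigest(), 16) % q
    return pow(h, z, p) == (a * pow(y, c, p)) % p

r1, r2 = secrets.randbelow(q), secrets.randbelow(q)
c1, c2 = commit(7, r1), commit(7, r2)       # same value, different randomness
print(verify_equal(c1, c2, prove_equal(r1, r2, c1, c2)))   # True
```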

The cost of using the smart contracts comes from the transaction fees paid to the miners for storing and executing the contracts. In our experiments conducted on the official Ethereum network, the transaction fees were small: depending on the transaction, they ranged from $0.01 to $0.40. This was done in May 2017, when the price of Ether was about $90. Today the Ether price is about $360, so transaction fees would be higher. Luckily, the most expensive operations are the cryptographic ones, and the recent Ethereum hard fork has made elliptic-curve operations (which we use) cheaper than before. So the increase in transaction fees should not be as steep as the increase in the Ether price.

The End.

Smart Counter-Collusion Contracts for Verifiable Cloud Computing (1/2)

(Previous post)

The idea of our counter-collusion contracts is to make collusion a bad choice that leads to loss, so that the clouds will avoid it like the plague. Collusion has been studied for many years by economists, and they have made several key observations:

  • Collusion is profit driven. Note that collusion is often illegal; without additional profit, no one would have the motivation to collude.
  • Colluding parties have their own interests, and often those who collude are also competitors. This is very true in our case (cloud computing).
  • Cheating is a big problem in collusion. Often a party that deviates from the collusion can get an even higher profit, and so is motivated to do so.

Collusion is delicate and requires trust among the colluding parties. If we can take away the trust, the clouds cannot collude.

Everything I say below is based on certain assumptions. The most important ones include:

  • The clouds are rational, which means two things: they try to maximize their payoffs and they understand all consequences of the games.
  • There exists a trusted third party that can be called upon to re-compute and find out who was wrong, if the two clouds return different results. What is interesting about this trusted party is that our analysis shows that, if the two clouds are rational, it will never need to be involved at all.
  • The task to be outsourced must be deterministic, or reducible to a deterministic one (e.g. by including a seed as input and using a pseudorandom number generator for random choices; see the small example after this list).
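
As a small illustration of the last assumption (with made-up task logic): once the client fixes the seed, every “random” choice in the task is reproducible, so the two clouds’ results can be compared directly.

```python
# A randomised task made deterministic: all random choices are drawn from a
# PRNG seeded by an input chosen by the client, so two independent clouds
# running the same code on the same inputs must return identical results.
import random

def randomised_task(data, seed):
    rng = random.Random(seed)       # every "random" choice depends only on the seed
    sample = rng.sample(data, k=3)  # e.g. a random sub-sample used by the computation
    return sorted(sample)

inputs = list(range(100))
assert randomised_task(inputs, seed=42) == randomised_task(inputs, seed=42)
```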

There are other, less important assumptions; check them in the paper.

Prisoner’s Contract is where we started. The contract is designed for outsourcing and needs to be signed by a client and two clouds. Informally, the contract requires each cloud to pay a deposit before it can take the job. The deposit, of course, needs to be large enough (we derive a lower bound for it in our paper). Then the clouds get the task to compute, and each returns a result before the deadline. An honest cloud will be paid a “salary” for the computation, a cheating cloud (if caught) will be punished by losing its deposit, and if one cheats while the other is honest, the honest cloud will get an additional reward (from the cheater’s deposit). In cases where the client cannot decide who is honest, the trusted party will be called to resolve the dispute. The cost of dispute resolution is always borne by the cheating cloud(s), from the deposit(s). This means that the client’s cost is bounded and will never be more than the two salaries, even in the unlikely case that the trusted party has to be involved.
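
To make the payment rules concrete, here is a much-simplified sketch with hypothetical names and amounts; the paper derives the actual lower bound on the deposit and covers the remaining cases (both clouds cheating, timeouts, the trusted party’s fee, and so on).

```python
# Much-simplified settlement of the Prisoner's contract for the two clouds.
# Hypothetical names and amounts; see the paper for the full set of cases.

def settle_prisoners_contract(cheated: dict, deposit: int,
                              salary: int, reward: int) -> dict:
    """cheated maps each cloud (e.g. 'C1', 'C2') to whether it was caught cheating."""
    # Honest clouds are paid the salary; caught cheaters lose their deposit.
    payoff = {c: (salary if not bad else -deposit) for c, bad in cheated.items()}
    cheaters = [c for c, bad in cheated.items() if bad]
    if len(cheaters) == 1:
        # One cheats, one is honest: the honest cloud gets an extra reward,
        # paid out of the cheater's forfeited deposit (as is the trusted
        # party's fee, so the client's cost never exceeds two salaries).
        honest = next(c for c, bad in cheated.items() if not bad)
        payoff[honest] += reward
    return payoff

print(settle_prisoners_contract({"C1": False, "C2": True},
                                deposit=100, salary=10, reward=20))
# {'C1': 30, 'C2': -100}
```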

What is the consequence of the contract? The highest payoff each cloud can get comes from the case where it is honest and the other cheats. What does that mean? Let us play the roles of the clouds, you and me:

  • Me: let’s collude and cheat together!

What would you do?

  • A: decline to collude
  • B: collude with me and send the agreed wrong result
  • C: agree to collude but later remain honest
  • D: decline and try to cheat

If your choice is A, you are a good person. And your honest behaviour will force me to behave honestly because I will be punished if I cheat and you do not cooperate.

If your choice is B, you are too naive. Of course collusion, if it succeeds, will lead to a higher profit (than being honest) for you. But have you ever considered the possibility that I am lying? I can take advantage of your trust, and later send the correct result to get a higher profit.

If your choice is C, you are a smart badass, with a sense of “business acumen”. This is actually a choice that is no worse than A and could lead to the best payoff for you in the game (if I am naive or mad).

If your choice is D (I hope not), you are dangerous because this choice makes no sense (it is the worst thing you can ever do), and whoever chooses it must be out of their mind.

Anyway, you should get the idea and can understand the game presented below:

C1 and C2 are the two clouds; the labels on the edges are the actions they can take: f(x) means sending the correct result, r means sending the agreed wrong result, and other means any other action. Below the leaf nodes are the payoffs: u1 is the payoff of C1, u2 is the payoff of C2. No need to pay attention to the payoffs now; you can find how they are derived in the paper. The bold edges show the best choices of the players (C1 and C2). A bold path from the root to a leaf node is an equilibrium, a stable state in which neither party wants to change its strategy if the other does not.

In this game there is only one equilibrium, and in it the choice of each party is strictly better than its other choices (a dominant strategy). In the equilibrium, both clouds play honestly because:

  • If no one asks me to collude, being honest leads to the best payoff.
  • If the other cloud asks me to collude, then I should agree to collude but later remain honest, in order to get the highest payoff.
  • Either way, behaving honestly is always the best choice.

If you are familiar with game theory, you will have noticed that this is essentially a Prisoner’s Dilemma. For both clouds the reasoning is the same, and both will stay honest. If both clouds stay honest, everyone is happy and dispute resolution is not needed.
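
To see the dilemma structure concretely, here is a quick check with made-up payoff numbers (the real payoffs and the condition on the deposit are derived in the paper): honesty strictly dominates cheating for each cloud, even though mutual collusion would pay both clouds more than mutual honesty.

```python
# Hypothetical payoffs for one cloud, given (my action, other's action).
# Chosen only to illustrate the Prisoner's-Dilemma shape of the game.
salary, reward, collusion_gain, deposit = 10, 60, 50, 100

payoff = {
    ("honest", "honest"): salary,
    ("honest", "cheat"):  salary + reward,          # other is caught, I get the reward
    ("cheat",  "honest"): -deposit,                 # I am caught and lose my deposit
    ("cheat",  "cheat"):  salary + collusion_gain,  # collusion succeeds, both gain
}

# "honest" is strictly better than "cheat" whatever the other cloud does...
assert all(payoff[("honest", o)] > payoff[("cheat", o)] for o in ("honest", "cheat"))
# ...even though mutual collusion would beat mutual honesty, which is why a
# credible Colluder's contract (next post) is so dangerous.
assert payoff[("cheat", "cheat")] > payoff[("honest", "honest")]
```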

So far so good, and it seems that we have solved the problem. But unfortunately, no. In the next post, you will see how the clouds can use another contract to change the game completely and make collusion the best choice, and how we solve this problem.

Smart Counter-Collusion Contracts for Verifiable Cloud Computing (Prologue)

People say smart contracts are the next big thing in the blockchain space. In the simplest terms, a smart contract is a piece of program stored and executed in the blockchain. The fancy things about a smart contract are that its execution is (or will be) always correct (if you believe in the consensus protocol that maintains the blockchain), it is self-enforcing (executed and enforced by peers), it is trustless (no central authority) and it is cheap to use. It sounds so good, but what can smart contracts do? Of course, we want something more than ICOs. And this is what I will write about.

A Short Summary in case you are impatient: we use smart contracts to implement mechanisms designed based on game theory, to enable cost-effective verifiable cloud computing. The paper (co-authored with Yilei Wang, Amjad Aldweesh, Patrick McCorry and Aad van Moorsel) was presented earlier this month at CCS 2017, and here are the full paper and slides.

The Need for Verifiable Cloud Computing comes from distrust. Everyone using cloud computing probably knows that “the cloud” is just a bunch of computers belonging to someone else. Then when I outsource something to the cloud, how can I be sure it is done properly in the cloud? In current practice, I cannot. You can imagine how annoying this is when the outsourced computation is important to me. It is not that the clouds are necessarily malicious; it is simply a consequence of uncertainty: I do not know what exactly happens in the clouds, and I have no control over that either. So the best I can do, as a matter of due diligence, is not to trust the clouds and to verify all results returned by them. But how? Verification can be as expensive as recomputing the task, and I might not have the resources to do that (if I had, I could avoid using the cloud in the first place and compute it myself).

The Current State of verifiable computing is more or less divided into two streams: some verify by using cryptography, some by using replication. In the cryptography-based approach, the cloud must generate a proof that the computation was done correctly. Cryptography ensures that, unless our beloved cryptographic assumptions are wrong, the cloud cannot generate a valid proof if the computation is wrong. By checking the proof, I can be assured of the correctness of the computation. In the replication-based approach, I give the same task to several clouds, later collect the results from them, and cross-check them. If the results from all replicas match, I can assert with high confidence that the computation was done correctly. Of course, the more replicas I use, the more reliable my assertion will be. More replicas can also help me find the correct result, should something be wrong in some of the replicas.
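
To make the replication-based approach concrete, here is a minimal sketch of the client-side logic; the cloud interfaces are hypothetical placeholders.

```python
# Replication-based verification in a nutshell: send the same deterministic
# task to several clouds and accept the result only if every replica agrees.
from concurrent.futures import ThreadPoolExecutor

def outsource_with_replication(task, inputs, clouds):
    """`clouds` is a list of callables standing in for real cloud APIs."""
    with ThreadPoolExecutor(max_workers=len(clouds)) as pool:
        results = list(pool.map(lambda cloud: cloud(task, inputs), clouds))
    if all(r == results[0] for r in results):
        return results[0]               # all replicas agree: accept the result
    raise RuntimeError("replicas disagree: the result cannot be trusted")
```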

What is Missing in all existing verifiable computing techniques is a sense of economy. Surely they are technically sound, but they come at an unaffordable price. The problem is that the cloud is not free: you pay for what you compute. Generating a cryptographic proof is much more expensive than you would think. Currently, the overhead is 3–6 orders of magnitude more than the computation being verified. Simple primary school math:

  • The costs of my computation: £500 per month
  • The costs of getting the proofs: £500 * 1000 = half a million per month
  • What I get: bankruptcy and out of business

For the replication-based approach, since I have to pay each of the replicas, the cost is blown up by a factor equal to the number of replicas. Of course, it soon becomes unaffordable as the factor grows.

One reason, perhaps the most important, that people want to use cloud computing is cost saving. When there is no cost advantage over on-premises IT infrastructure, over which you have control and don’t need to worry much about correctness, many would not be that keen on the cloud.

The Question then is: can we have cost-effective verifiable cloud computing after all? Well, for the cryptography-based approach, I am a bit pessimistic. The gap is just too big. Unless there is a big breakthrough, we won’t be able to use it in practice in the near future. For the replication-based approach, there might be some hope, if the number of replicas we pay for is small. How small can that number be? The minimum is 2. In fact, that might work. The premise is that using cloud computing is cheaper than running your own trusted on-premises IT infrastructure. “Two cloud replicas” means doubling the cost, and cost-wise this may not differ much from, or may even be lower than, using your trusted IT infrastructure. Given the other good qualities cloud computing possesses, people would then have the motivation to use the cloud.

This is straightforward, so why has no one come up with something? Let us forget all the engineering difficulties such as synchronization, replication and latency, and focus on the idea. It has a fatal weakness: collusion. In the replication-based approach, verification is done by comparing the results. What if the two clouds collude and give you the same wrong result? You know nothing and you can verify nothing. Can the clouds collude? Of course they can. Remember, it is not about whether they will collude; it is about whether you believe they will collude. If you don’t trust the clouds, then collusion is a threat to you. In the face of collusion, verification based on 2 replicas is insecure.

How to Prevent Collusion is then our objective. The technical details will follow. A spoiler from the abstract of the paper: a client “uses smart contracts to stimulate tension, betrayal and distrust between the clouds, so that rational clouds will not collude and cheat”.

Beware What Lurks Within Your Browser: The Threat of Malicious Extensions

Extensions have become a staple of the modern browser, with some extensions such as AdBlock Plus receiving over half a million weekly downloads. Browsers place emphasis on their extension model being resistant to attacks, both from the perspective of malware being uploaded to their web store and from external website-based attacks. One could then assume that a user’s safety is preserved as long as they download extensions from the browser’s official extension repository.

However, our study shows that this is not the case. We show that Chrome, Firefox and Firefox for Android are highly susceptible to their extensions being used for malicious purposes. We enumerate the range of capabilities each extension model possesses and discuss the impact this has on a user’s privacy and browsing integrity. We found that Firefox and Firefox for Android users in particular should be more wary of malicious extensions than Chrome users, with some attacks affecting even the user’s OS file system.

In conjunction with our findings, we designed a simple botnet to control a vast network of malicious extensions and tested its feasibility by uploading a malicious extension to both Chrome’s and Firefox’s web stores (for ethical reasons, both extensions had the botnet functionality remotely disabled so that no reviewer could come to harm while using the extension). We found that neither Firefox’s nor Chrome’s web store checks are sufficient to find malicious extensions, as both of our extensions were approved.

Our paper has been accepted for publication to the IEEE S&P Magazine, and a pre-print version is currently available at: https://arxiv.org/pdf/1709.09577.pdf

J-PAKE published as an international standard

After attending ISO/IEC SC 27 WG2 for 4 years, I’m happy to say that J-PAKE is finally published in ISO/IEC 11770-4 (2017) as an international standard. In the meantime, J-PAKE has also been published in RFC 8236 by the IETF (together with an accompanying RFC 8235 on the Schnorr non-interactive zero-knowledge proof). This is a milestone for J-PAKE. From the first presentation at the Security Protocols Workshop ’08 in Cambridge to publication as an international standard in 2017, J-PAKE has come a long way. The critical insight in the design of J-PAKE was an understanding of the importance of zero-knowledge proofs (ZKPs), but this insight was not shared by other researchers in the field at the time. One main reason is that the use of ZKPs was considered incompatible with the then universally adopted formal models in the PAKE field. However, in an independent study by Abdalla, Benhamouda and MacKenzie, published at IEEE S&P 2015, the formal model for PAKE protocols was modified to make it compatible with ZKPs, and the modified model was used to prove J-PAKE secure. The provable results are the same as in the original J-PAKE paper, but they are now established in a formal model, thus finally bridging the gap between theory and practice.

Today, J-PAKE is already used by many millions of users in commercial products, e.g., Palemoon Sync, Google Nest, ARM mbed OS, OpenSSL, Mozilla NSS and the Bouncy Castle API. In particular, J-PAKE has been adopted by the Thread Group as a standard key exchange mechanism for the IoT commissioning process, i.e., adding new IoT devices to an existing network. The protocol has already been embedded into IoT products. The following video demonstrates how J-PAKE is used to securely enrol a new IoT device into a Thread network during the commissioning process (more details about Thread can be found at NXP, Thread Group, ARM, Silicon Labs and Google Nest’s OpenThread). It is expected that in the near future J-PAKE will be used by many billions of Thread-compliant IoT devices for the initial bootstrapping of trust.

First campus trial of the DRE-ip voting system

Today, we ran the first campus trial of a new e-voting system called DRE-ip. The DRE-ip system was initially published at ESORICS’16 (paper here), and since then we have been busy developing a prototype. In our current implementation, the front end of the prototype consists of a touch-screen tablet (Google Pixel C), linked via Bluetooth to a thermal printer (EPSON TM-P80). The back end is a web server hosted on the campus of Newcastle University.

The e-voting trial was conducted in front of the Students’ Union from 11 am to 2 pm. We managed to get nearly 60 people to try our prototype and fill in a questionnaire. All users provided us with useful and constructive feedback (which will take us a while to analyze in full detail). The general reception of our prototype has been very positive. The prototype worked robustly during the 3-4 hour trial. Apart from the occasional slight delay in printing a receipt from the thermal printer, the system worked reliably without any problem. This is the first time that we have put our theoretical design of an e-voting system to a practical test, and we are glad that it worked up to our expectations on the first trial.


During the trial, we asked users to choose a candidate from the following choices: Theresa May, Jeremy Corbyn, Nicola Sturgeon, Tim Farron, None of the above. The tallying results were a bit surprising: Jeremy Corbyn won the most votes! However, the voting question we used in our trial was meant to be a lighthearted choice. Our main aim was to test the reliability and usability of the prototype and to identify areas for improvement. Many users understood that.

Today’s trial was greatly helped by the nice weather, which is not that usual in Newcastle. Everyone from the project team tried their best. It was great teamwork, and it was great fun. When we finished the trial, it was already past 2:00 pm. A relaxed lunch with beer and celebratory drinks in our favorite Red Mezze restaurant was well deserved (and I foresee no problem in justifying it to the ERC project sponsor).


We plan to analyze and publish today’s trial results in the near future. Stay tuned.

 

We ranked 3rd in the Economist Cyber Security Competition 2016

In the announcement of the winners of the 2016 Economist Cyber Security Challenge, our team “Security upon Tyne” from the School of Computing Science, Newcastle University, won 3rd place in this international competition. Universities were invited to participate in this competition based on their track record in cyber security research (particularly in bitcoin and voting). In the end, 19 universities from the UK and USA accepted the challenge.

Newcastle University was the only UK university in the final top three, coming after New York University and the University of Maryland.

In this challenge, each team was tasked “to design a blockchain-compliant system for digital voting” addressing the following aspects of an election: ensuring privacy and the ability to check the votes, protecting voting under duress, prohibiting publication of interim results, supporting undecided voters, and addressing any potential dispute in the aftermath of voting.

Each team had two weeks, from September 15 to September 29, 2016, to prepare a 3000-word report describing their work, and then a further week to produce two videos for the challenge: one describing their proposal in 3 to 5 minutes, and an elevator pitch of no more than 2 minutes. The participants were also asked to provide a proof-of-concept implementation of their solution to demonstrate its feasibility. It was an intense challenge to do all this within the short 2-3 weeks. The full list of participants, as well as the detailed descriptions of their proposed solutions, is available here.

In our report, we presented a proof-of-concept implementation of the Open Vote Network e-voting protocol as a self-enforcing voting algorithm over the Ethereum blockchain. Ethereum is a decentralized peer-to-peer blockchain that ensures the correct execution of code in the form of smart contracts. In our proposal, the blockchain is used not only as a bulletin board for publishing encrypted votes, but also as a trusted platform that verifies all cryptographic data before they are published. Ethereum makes it possible to implement the self-tallying algorithm of the Open Vote Network protocol as a smart contract, so that the correct execution of the algorithm is enforced by Ethereum’s consensus mechanism. Our full report can be accessed here, and our team videos are here.

Our solution is designed for small-scale e-voting over the internet. To support large-scale elections, we suggested two further solutions, using the DRE-i and DRE-ip protocols for centralized remote voting and centralized polling-station voting respectively. Overall, our three suggested systems could fulfill all the challenge criteria. However, due to the space limit of the report, we focused on small-scale voting over the internet and only briefly covered large-scale elections for both onsite and internet voting scenarios. We note that the two top winning teams primarily focused on large-scale elections for onsite voting. An overview of our proposed algorithms is shown below:

How our proposed algorithms fulfilled the challenge criteria in the Economist Cyber Security Competition

Our team consisted of three PhD students, Maryam Mehrnezhad, Ehsan Toreini and Patrick McCorry, in the Secure and Resilient Systems Group at Newcastle University, United Kingdom. In the announcement of the winners, Kaspersky, the sponsor of this Economist Cyber Security Challenge, commented on the Newcastle solution: “Newcastle University’s (proposal) is the best solution in which remote voting is permitted.”

DRE-ip: A Verifiable E-Voting Scheme without Tallying Authorities

Next week at ESORICS 2016, I will be presenting our paper (co-authored with Feng Hao) on a new e-voting system that we call DRE-ip. The system is designed for e-voting in supervised environments, that is, in polling stations, but alternative implementations of the underlying idea can also be used for remote e-voting (sometimes called i-voting, for internet voting).

DRE-ip is an end-to-end verifiable e-voting system that guarantees vote secrecy. These properties are similar to those provided by state-of-the-art verifiable systems like VoteBox, STAR-Vote and vVote, designed to be used in elections in the US and Australia. Crucially, however, DRE-ip achieves these properties without requiring trusted tallying authorities, that is, entities holding the decryption keys to the encrypted ballots.

In almost all systems with tallying authorities, the votes are encrypted to provide vote secrecy. The encrypted ballots are posted on a publicly accessible bulletin board to enable vote verification. In some systems, the votes are shuffled (using mix-nets) before the tallying authorities decrypt them individually. In other systems, they are aggregated (using homomorphic encryption) before decryption, and the tallying authorities only decrypt the tally. These two techniques are used to protect vote secrecy from the tallying authorities. However, there is nothing to prevent the tallying authorities from getting together and decrypting the ballots on the bulletin board, and even worse, there is no way to detect whether this happens. So at the end of the day, we are trusting the tallying authorities for vote secrecy.

DRE-ip works based on a simple observation: if a message is encrypted using randomness r, the ciphertext can be decrypted using either the secret key or the randomness r. Now, imagine a situation where multiple messages are encrypted and, say, we are interested in finding the sum of these messages. One way would be to decrypt the ciphertexts individually and then find the sum. Another way, if we use homomorphic encryption, would be to aggregate the ciphertexts first and then decrypt the encrypted sum. These two ways are what other systems do. But our observation tells us that there is a third way: whoever encrypts the messages can keep an aggregation of all the randomness used in the encryptions and release it at some point, which enables decrypting the sum of the messages. DRE-ip is built on top of this observation.
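
Here is a toy sketch of that third way, using “lifted” (exponential) ElGamal over a tiny hard-coded group purely for illustration: releasing only the aggregated randomness R lets anyone recover the sum of the plaintexts, without the secret key and without decrypting any individual ciphertext. (DRE-ip itself is specified differently and over proper group parameters; this only demonstrates the underlying observation.)

```python
# Toy demonstration: aggregated randomness R is enough to decrypt the SUM of
# the encrypted votes, while individual ballots stay encrypted.
import secrets

p, q, g = 23, 11, 4                 # tiny group of prime order q, for illustration only
x = secrets.randbelow(q)            # secret key (only used to derive y; never needed below)
y = pow(g, x, p)                    # public key

def encrypt(v, r):
    """Lifted ElGamal: (g^r, y^r * g^v)."""
    return (pow(g, r, p), (pow(y, r, p) * pow(g, v, p)) % p)

votes = [1, 0, 1, 1, 0]             # e.g. 1 = yes, 0 = no
rands = [secrets.randbelow(q) for _ in votes]
ballots = [encrypt(v, r) for v, r in zip(votes, rands)]

R = sum(rands) % q                  # the aggregated randomness that gets released
A = 1
for _, c2 in ballots:
    A = (A * c2) % p                # homomorphic product = y^R * g^(sum of votes)
gV = (A * pow(pow(y, R, p), -1, p)) % p

# The tally is small, so it can be recovered by exhaustive search over g^t.
tally = next(t for t in range(len(votes) + 1) if pow(g, t, p) == gV)
print(tally)                        # 3, i.e. sum(votes), recovered without the secret key
```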

In DRE-ip, the direct-recording electronic (DRE) voting machine that captures and encrypts the votes also keeps an aggregation of the randomness used in the encryptions, and at the end of the election it releases this value to the bulletin board along with the announced tally. This enables the public to verify the tally. No secret keys are involved in verifying the tallying integrity, and hence no tallying authorities are required. In fact, the system is set up in such a way that no one knows the secret key of the encryption scheme. This means that no one is able to decrypt individual ballots. The election tally is the only information that can be verified given the encrypted ballots, and this computation is public.

Having the idea is perhaps the easy part; the main work is to design the system carefully so that it provides full end-to-end verifiability while, at the same time, one can argue rigorously that it guarantees ballot secrecy. In the paper we give proofs of why using encryption in such a way is secure.

DRE-ip achieves levels of security comparable to those of state-of-the-art systems, but crucially with one less group of trusted authorities. To appreciate the significance of this, it is sufficient to quote Ross Anderson’s definition of a trusted third party:

A Trusted Third Party is a third party that can break your security policy.

A week in Darmstadt – Our Attendance in Security and Privacy Week (SPW’16)

Being at a world-class conference is always exciting. Now imagine that Security and Privacy Week is a conjunction of several well-known conferences packed into an intensive week of parallel sessions: this triples the excitement! This year, the event was held in Darmstadt, Germany, and lasted a whole week, from July 18 to July 22. The major conferences involved were as follows:

  • WiSec, from July 18 to July 20
  • PETS, from July 19 to July 22
  • IFIPTM, from July 18 to July 22

Alongside these, the week included a few parallel workshops such as EuroSec, ECRYPT, PLLS, SPMED, Infer, The Dark Side of Digitization, CrossFyre and HotPETs. The main reason for our attendance was to present our EuroSec’16 paper on users’ perception of sensors in modern smartphones. Fortunately, our presentation and paper were well received, and parts of our results were used in the PETS keynote presentation by Angela Sasse.

I attended several talks at the conference, but some of the keynotes stand out in my notes.

The PETS keynote, “Goodbye Passwords, Hello Biometrics – Do We Understand the Privacy Implications?”, was presented by Angela Sasse of UCL and discussed the new challenges raised by biometric authentication on mobile phones and other everyday devices around us.

Another keynote, for WiSec, was “The Ultimate Frontier for Privacy and Security: Medicine” by Jean-Pierre Hubaux from EPFL. He explained the vital importance of genetic sequence data and the inadequate attention from security researchers to protecting it. He argued that large-scale sequencing is still a challenge for attackers, but the lack of sufficient protection in the infrastructure could make attacks easy as soon as attackers have enough computing power. An interesting piece of information from his talk was the “US Wall of Shame” website: American medical institutions whose breaches affect more than a certain number of patients must announce the breach publicly on this website to get some reduction in their penalties. He also pointed out the key differences between medical researchers and security researchers.

The keynote talks of The Dark Side of Digitization workshop were important in bringing governmental viewpoints on the new challenges of security. Susan Landau (Worcester Polytechnic Institute) presented “Crypto Wars: The Apple iPhone and the FBI”, giving a brief history of previous wars between governments and big companies over access to data. She specifically emphasized that the recent Apple vs FBI court case over access to the mobile data of the San Bernardino attacker could potentially open the door to new legislation for such access in the future. She also highlighted the importance of current cyber-wars and discussed the strategies that could be involved in such a digital conflict.

The last keynote that I attended was “Networks of ‘Things’ – Demystifying IoT” by Jeff Voas (National Institute of Standards and Technology). He discussed the lack of standard documentation on the principles of the IoT and mentioned the attempts at NIST to fill this gap, announcing that the official draft of the NIST documentation is now available to download here. He also argued that the correct term for IoT is “Network of Things” because it fits the nature of the concept better. I somewhat agree with him in principle, but I still think “Internet of Things” is cooler!

Furthermore, numerous amazing ideas and pieces of research were presented during the week; some were particularly novel. Miro Enev et al. from the University of Washington proposed the idea of “Automobile Driver Fingerprinting”. They recorded the sensors embedded in a modern automobile while different drivers drove in various circumstances, and extracted a fingerprint of each driving style from the sensor records. Their research showed that brake usage is the most distinguishing feature of driving style. Their proposal has already gained attention in well-known blogs such as here, here or here. In another piece of research, Vijay Sivaraman et al. from the University of New South Wales proposed a novel way to attack smart home sensors by leveraging the lack of authentication in the Universal Plug and Play (UPnP) protocol. They managed to intrude into a smart home and control the devices based on this vulnerability. The above-mentioned talks are only a few highlights of what happened at SPW’16. A list of all the talks, including the reviewers’ opinions, is available here. Our team members discussed the talks they attended on our wiki page, here. Furthermore, the live reports of the talks by Ross Anderson can be found here. More relevant tweets can be found under #PETS16 and #SPW2016 on Twitter.

Apart from the exciting research presented at the conference, I have to confess it was a very interesting week for me in many respects. Most important of all, I got to meet intelligent people from industry, government and, of course, academia. I can say the number of attendees from industry and government surprised me. The organizers provided various social events, including a delightful evening in a Bavarian beer garden and a dinner gathering at Frankenstein Castle, both near Darmstadt. The only noteworthy drawback of the whole event was the mobile app the organizers developed: users needed to be connected to the internet to see the conference schedule, which is not ideal since, as travelers, we were bound to roaming services that might not always be available. Excluding this, it was an excellent experience and a great chance to meet talented people from all around the world!