Smart Counter-Collusion Contracts for Verifiable Cloud Computing (2/2)

Previously, we showed the Prisoner’s contract and how it forces the two clouds to behave honestly by creating a Prisoner’s Dilemma. However, this only works if the two clouds cannot make credible commitments. The fundamental problem in the Prisoner’s Dilemma, and in game 1 (in the previous post), is that neither player can believe that the other, a selfish utility maximizer, will follow the collusion strategy.

Now if someone does not believe you, you cannot convince them just by talking. What convinces a rational player is showing that lying would make you worse off: if lying is not in your interest, you will not lie. Likewise, if you want someone to do what you expect, you have to show that doing so is in his/her best interest.

That is the idea behind the Colluder’s contract, in which both clouds show their loyalty to collusion (i.e. sending the agreed wrong result r) by promising, in effect: “I will suffer a loss if I cheat, and any damage my cheating causes you will be compensated.” The one who initiates the collusion can also give a slice of his own profit to the other as an additional incentive. The contract is again based on deposits (a payoff sketch follows the list below):

  • Each cloud pays a large enough deposit into the Colluder’s contract;
  • Anyone who does not follow the collusion strategy loses its deposit, which is transferred to the other cloud;
  • The ringleader commits to paying a bribe if the other follows the collusion strategy.
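To see how the contract flips the incentives, here is a minimal payoff sketch in Python. The numbers are made up purely for illustration (the paper derives the actual bounds on deposits and bribes), and the function shows only the transfers the Colluder’s contract adds on top of the Prisoner’s-contract payoffs:

```python
# Toy payoff adjustments made by the Colluder's contract (illustrative
# numbers only; the paper derives the real bounds on deposit and bribe).

DEPOSIT = 100   # deposit each cloud locks into the Colluder's contract
BRIBE = 10      # extra incentive the ringleader pays a loyal partner

def colluders_contract(ringleader_loyal: bool, other_loyal: bool):
    """Payoff adjustments (ringleader, other) enforced by the contract."""
    u1 = u2 = 0
    if not ringleader_loyal:        # ringleader deviates: forfeits deposit
        u1 -= DEPOSIT
        u2 += DEPOSIT
    if not other_loyal:             # other deviates: forfeits deposit
        u2 -= DEPOSIT
        u1 += DEPOSIT
    if ringleader_loyal and other_loyal:
        u1 -= BRIBE                 # ringleader pays the promised bribe
        u2 += BRIBE
    return u1, u2

for a in (True, False):
    for b in (True, False):
        print(f"ringleader loyal={a}, other loyal={b}: {colluders_contract(a, b)}")
```

With a deposit large enough to outweigh any gain from betraying the collusion, following the collusion strategy becomes each cloud’s best response, which is exactly what the game tree below shows.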

This Colluder’s contract, when in place, changes the game into:

As you can see, the equilibrium (the bold path) is now for the two clouds to collude, with both following the collusion strategy.

This is bad. Having tried to prevent collusion using smart contracts, we found that smart contracts can also be used to enable collusion. And if the client tries to counter that with another contract, the clouds can create yet another contract to counter back. This could go on in an endless loop.

What can we do then, if we cannot counter back directly? In the end, we came up with the idea of using a smart contract to incentivize secret betrayal and reporting. This leads to the Traitor’s contract. In this contract, the first cloud to report collusion will not be punished by the Prisoner’s contract, and will get an additional reward if the collusion attempt does exist (so there is a motivation to report). However, a cloud that reports a non-existent collusion case has to bear the consequences and suffer a loss (so there is a motivation not to abuse the system).

The consequence of reporting is that the client can call the trusted party and find out who cheated. Once the trusted party is called, there is no point in countering back with yet another contract, because each cloud’s payoff now depends only on whether it cheats, not on the other’s behaviour. So we break the loop. More importantly, the Traitor’s contract creates distrust between the clouds, because “agree to collude, then betray” is the best response if one cloud tries to initiate collusion. Once both clouds understand that, neither will want to initiate collusion, because each knows it will be betrayed and end up with a worse payoff. So both will behave honestly in the first place.
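Again, a minimal sketch of the reporting incentives, with made-up numbers (the paper derives the actual amounts and the full game analysis):

```python
# Toy reporting incentives in the Traitor's contract (illustrative numbers).

REWARD = 50        # reward for the first cloud that reports a real collusion
FALSE_COST = 30    # loss for reporting a collusion attempt that never happened
PUNISHMENT = 100   # Prisoner's-contract punishment a silent colluder would face
                   # once the trusted party identifies the cheaters

def gain_from_reporting(collusion_attempted: bool) -> int:
    """A cloud's payoff change for reporting, relative to staying silent."""
    if collusion_attempted:
        # reporting waives the Prisoner's-contract punishment and adds a reward
        return PUNISHMENT + REWARD
    return -FALSE_COST   # abusing the mechanism is a strict loss

print(gain_from_reporting(True))    # 150: betraying a real collusion pays
print(gain_from_reporting(False))   # -30: fabricating one does not
```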

The contract again works by manipulating deposits paid upfront by the client and the reporting cloud. Details can be found in the paper. Here I just show the full game tree:

Implementation of the contracts in Solidity is available here. We actually tested the contracts on the official Ethereum network. There are challenges in implementing the contracts, one being the transparency of a public blockchain: everything you put on the blockchain is visible to anyone. To make it worse, a blockchain is append-only, so there is no way to delete the data later if you change your mind.

To preserve data privacy, we used some light cryptography, including Pedersen commitments and non-interactive zero-knowledge proofs (NIZKs). A Pedersen commitment allows us to put a “commitment” (ciphertext) of a value on the blockchain rather than the value itself. The commitment is hiding, in that it leaks no information about the committed value, and binding, in that you cannot find a different value and convince other people that this new value was committed in the commitment. One problem caused by the hiding property is that the miners cannot see the values inside the commitments, and thus cannot compare them to determine whether the values are equal (which is needed to execute the contracts). Fortunately, we can use NIZKs, which are cryptographic proofs that can be checked publicly with commitments as inputs. There are already NIZK protocols for proving equality/inequality of committed values, which we can simply use.
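To give a flavour of the commitments, below is a toy Pedersen commitment in Python. The parameters are deliberately tiny and insecure, purely to illustrate the hiding and binding properties; the real contracts use proper cryptographic parameters:

```python
# Toy Pedersen commitment (tiny, insecure parameters, for illustration only).
import secrets

p, q = 23, 11   # p = 2q + 1, both prime; we work in the order-q subgroup mod p
g, h = 4, 9     # two generators of that subgroup; no one may know log_g(h)

def commit(m: int) -> tuple[int, int]:
    """Commit to m: C = g^m * h^r mod p, with fresh randomness r."""
    r = secrets.randbelow(q)
    return pow(g, m, p) * pow(h, r, p) % p, r

def verify_opening(c: int, m: int, r: int) -> bool:
    """Anyone can check an opening (m, r) against commitment c."""
    return c == pow(g, m, p) * pow(h, r, p) % p

c, r = commit(7)
print(verify_opening(c, 7, r))   # True
print(verify_opening(c, 8, r))   # False: binding to the committed value
```

The fresh randomness r is what gives the hiding property: without r, the same value always produces the same commitment and could be recognized on the public blockchain.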

The cost of using the smart contracts comes from the transaction fees paid to the miners for storing and executing the contracts. In our experiments, conducted on the official Ethereum network, the transaction fees were small, ranging from $0.01 to $0.40 depending on the transaction. This was done in May 2017, when the price of Ether was about $90. Today the Ether price is about $360, so transaction fees would be higher. Luckily, the most expensive operations are the cryptographic ones, and the recent Ethereum hard fork has made the elliptic-curve cryptography we use cheaper than before, so the increase in transaction fees should not be as steep as the increase in the Ether price.

The End.

Smart Counter-Collusion Contracts for Verifiable Cloud Computing (1/2)

(Previous post)

The idea of our counter-collusion contracts is to make collusion a bad choice, one that leads to loss, so that the clouds will avoid it like the plague. Collusion has been studied for many years by economists, who have made several key observations:

  • Collusion is profit driven. Collusion is often illegal; without additional profit, no one would have the motivation to collude.
  • Colluding parties have their own interests, and often those who collude are also competitors. This is very true in our case (cloud computing).
  • Cheating is a big problem in collusion. Often the party who deviates from the collusion can get an even higher profit, and so is motivated to cheat.

Collusion is delicate and requires trust among the colluding parties. If we can take away the trust, the clouds cannot collude.

Everything I say below is based on certain assumptions. The most important ones include:

  • The clouds are rational, which means two things: they try to maximize their payoffs, and they understand all consequences of the games.
  • There exists a trusted third party that can be called upon to re-compute and find who was wrong, if the two clouds return different results. What is interesting about this trusted party is that the analysis shows that, if the two clouds are rational, it will never actually need to be involved.
  • The task to be outsourced must be deterministic, or reducible to a deterministic one (e.g. by including a seed as input and using a pseudorandom number generator for random choices; see the sketch after this list).
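As an illustration of the third assumption, here is a minimal Python sketch: the client fixes the seed and includes it in the task, so any randomised choice becomes reproducible and the two clouds’ outputs can be compared directly:

```python
# Making a randomised task deterministic by fixing the PRNG seed (sketch).
import random

def randomised_task(data: list[int], seed: int) -> int:
    rng = random.Random(seed)        # seeded by the client, not by the cloud
    sample = rng.sample(data, k=3)   # a "random" choice, now reproducible
    return sum(sample)

# Two independent runs (e.g. on two clouds) now return comparable results.
print(randomised_task(list(range(100)), seed=42) ==
      randomised_task(list(range(100)), seed=42))   # True
```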

There are other, less important assumptions; check them in the paper.

Prisoner’s Contract is where we started. The contract is designed for outsourcing and needs to be signed by a client and two clouds. Informally, the contract requires each cloud to pay a deposit before it can take the job. The deposit, of course, needs to be large enough (we derive a lower bound in the paper). The clouds then get the task to compute, and each returns a result before the deadline. An honest cloud will be paid a “salary” for the computation; a cheating cloud (if caught) will be punished by losing its deposit; and if one cheats while the other is honest, the honest cloud will get an additional reward (from the cheater’s deposit). In cases where the client cannot decide who is honest, the trusted party will be called to resolve the dispute. The cost of dispute resolution is always borne by the cheating cloud(s), out of the deposit(s). This means the client’s cost is bounded and will never exceed the two salaries, even in the unlikely case that the trusted party has to be involved.
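The payout rules can be sketched in a few lines of Python. The amounts are symbolic placeholders, not the bounds derived in the paper, and dispute-resolution costs (paid out of forfeited deposits) are omitted for brevity:

```python
# Toy payout rules of the Prisoner's contract (symbolic amounts only).

DEPOSIT, SALARY, REWARD = 100, 10, 40

def settle(c1_honest: bool, c2_honest: bool) -> tuple[int, int]:
    """Payoffs (u1, u2) of the two clouds once the contract settles."""
    def payoff(me_honest: bool, other_honest: bool) -> int:
        if me_honest:
            # salary, plus a reward from the cheater's deposit if the other cheated
            return SALARY + (REWARD if not other_honest else 0)
        return -DEPOSIT   # a caught cheater forfeits its deposit
    return payoff(c1_honest, c2_honest), payoff(c2_honest, c1_honest)

for a in (True, False):
    for b in (True, False):
        print(f"C1 honest={a}, C2 honest={b}: payoffs {settle(a, b)}")
```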

What is the consequence of the contract? The highest payoff each cloud can get comes from the case where it is honest and the other cheats. What does that mean? Let us play the roles of the clouds, you and me:

  • Me: let’s collude and cheat together!

What would you do?

  • A: decline to collude
  • B: collude with me and send the agreed wrong result
  • C: agree to collude but later remain honest
  • D: decline and try to cheat

If your choice is A, you are a good person, and your honest behaviour will force me to behave honestly too, because I will be punished if I cheat while you do not cooperate.

If your choice is B, you are too naive. Of course collusion, if it succeeds, will lead to a higher profit (than being honest) for you. But have you considered the possibility that I am lying? I can take advantage of your trust and later send the correct result to get a higher profit.

If your choice is C, you are a smart badass, with a sense of “business acumen”. This is actually a choice that is no worse than A, and it could lead to the best payoff for you in the game (if I am naive or mad).

If your choice is D (I hope not), you are dangerous, because this choice makes no sense (it is the worst thing you could ever do); whoever chooses it must be out of his mind.

Anyway, you should get the idea and can understand the game presented below:

C1 and C2 are the two clouds; the labels on the edges are the actions they can take: f(x) means sending the correct result, r means sending the agreed wrong result, and other means any other action. Below the leaf nodes are the payoffs: u1 is the payoff of C1 and u2 is the payoff of C2. No need to pay attention to the payoffs now; you can find how they are derived in the paper. The bold edges show the best choices of the players (C1 and C2). A bold path from the root to a leaf node is an equilibrium: a stable state in which neither party wants to change its strategy if the other does not.

In the game, there is only one equilibrium, and in it each party’s choice is strictly better than its other choices (a dominant strategy). In the equilibrium, both clouds play honestly because:

  • If no one asks me to collude, being honest leads to the best payoff.
  • If the other cloud asks me to collude, I should agree to collude but later remain honest, in order to get the highest payoff.
  • Either way, behaving honestly is always the best choice.

If you are familiar with game theory, you will have noticed that this is somewhat like a Prisoner’s Dilemma. For both clouds the reasoning is the same, so both will stay honest. If both clouds stay honest, everyone is happy and dispute resolution is never needed.
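Using the settle() sketch from above, a brute-force best-response check confirms this on the simplified honest-vs-cheat matrix (the full game in the paper also covers the “agree then betray” moves):

```python
# Brute-force check that honesty is a dominant strategy in the toy matrix.
def honest_is_best_response(other_honest: bool) -> bool:
    u_honest = settle(True, other_honest)[0]   # C1's payoff if it stays honest
    u_cheat = settle(False, other_honest)[0]   # C1's payoff if it cheats
    return u_honest > u_cheat

print(honest_is_best_response(True))    # True
print(honest_is_best_response(False))   # True: honesty dominates either way
```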

So far so good, and it seems that we have solved the problem. Unfortunately, no. In the next post, you will see how the clouds can use another contract to change the game completely and make collusion the best choice, and how we solve this problem.

Smart Counter-Collusion Contracts for Verifiable Cloud Computing (Prologue)

People say smart contracts are the next big thing in the blockchain space. In the simplest terms, a smart contract is a piece of program stored and executed on the blockchain. The fancy things about a smart contract are that its execution is (or will be) always correct (if you believe in the consensus protocol that maintains the blockchain), it is self-enforcing (executed and enforced by the peers), it is trustless (no central authority) and it is cheap to use. It sounds so good, but what can smart contracts actually do? Of course, we want something more than ICOs. And this is what I will write about.

A Short Summary in case you are impatient: we use smart contracts to implement mechanisms designed using game theory, to enable cost-effective verifiable cloud computing. The paper (co-authored with Yilei Wang, Amjad Aldweesh, Patrick McCorry, Aad van Moorsel) was presented earlier this month at CCS 2017, and here are the full paper and slides.

The Need for Verifiable Cloud Computing comes from distrust. Everyone using cloud computing probably knows that “the cloud” is just a bunch of computers belonging to someone else. So when I outsource something to the cloud, how can I be sure it is done properly there? In current practice, I cannot. You can imagine how annoying this is when the outsourced computation is important to me. It is not that the clouds are necessarily malicious; it is simply a consequence of uncertainty: I do not know what exactly happens in the clouds, and I have no control over it either. So the best I can do, as a matter of due diligence, is not to trust the clouds and to verify all results they return. But how? Verification can be as expensive as recomputing the task, and I might not have the resources to do that (and if I did, I could avoid using the cloud in the first place and compute it myself).

The Current State of verifiable computing is more or less divided into two streams: some verify using cryptography, some using replication. In the cryptography-based approach, the cloud must generate a proof that the computation was done correctly. Cryptography ensures that, unless our beloved cryptographic assumptions are wrong, the cloud cannot generate a valid proof if the computation is wrong. By checking the proof, I can be assured of the correctness of the computation. In the replication-based approach, I give the same task to several clouds, later collect their results, and cross-check them. If the results from all replicas match, I can assert with high confidence that the computation was done correctly. Of course, the more replicas I use, the more reliable my assertion. More replicas can also help me find the correct result, should something be wrong in some of the replicas.

What is Missing in all existing verifiable computing techniques is a sense of economy. Surely they are technically sound, but at an unaffordable price. The problem is that the cloud is not free: you pay for what you compute, and generating a cryptographic proof is far more expensive than you might think. Currently, the overhead is 3 to 6 orders of magnitude more than the computation being verified. Simple primary school math:

  • The cost of my computation: £500 per month
  • The cost of getting the proofs: £500 × 1000 = half a million pounds per month
  • What I get: bankruptcy and out of business

For the replication-based approach, since I have to pay each of the replicas, the cost is blown up by a factor equal to the number of replicas. Of course, it soon becomes unaffordable as that factor grows.

One reason, perhaps the most important, that people want to use cloud computing is cost saving. When there is no cost advantage over an on-premises IT infrastructure, over which you have control and whose correctness you don’t need to worry much about, many would not be that keen on the cloud.

The Question then is: can we have cost-effective verifiable cloud computing after all? Well, for the cryptography-based approach, I am a bit pessimistic. The gap is just too big; unless there is a big breakthrough, we won’t be able to use it in practice in the near future. For the replication-based approach, there might be some hope, if the number of replicas we pay for is small. How small can that number be? The minimum is 2, and in fact that might work. The idea is that using cloud computing is cheaper than using your own trusted on-premises IT infrastructure. “Two cloud replicas” means doubling the cost, and cost-wise this may not differ much from, or may even be lower than, using your trusted IT infrastructure. Given the other good qualities cloud computing possesses, people would still have the motivation to use the cloud.

This is straightforward, so why hasn’t anyone come up with something like this? Let us forget all engineering difficulties such as synchronization, replication and latency, and focus on the idea. It has a fatal weakness: collusion. In the replication-based approach, verification is done by comparing the results. What if the two clouds collude and give you the same wrong result? You know nothing and you can verify nothing. Can the clouds collude? Of course they can. Remember, it is not about whether they will actually collude; it is about whether you believe they will. You don’t trust the clouds, so collusion is a threat to you. In the face of collusion, verification based on 2 replicas is insecure.

How to Prevent Collusion is then our objective. The technical details will follow. A spoiler from the abstract of the paper: a client “uses smart contracts to stimulate tension, betrayal and distrust between the clouds, so that rational clouds will not collude and cheat”.

Real-world Electronic Voting: Design, Analysis and Deployment

We are pleased to announce the completion of a new book, “Real-world Electronic Voting: Design, Analysis and Deployment”, due to be published by CRC Press. It is still in press, but you can pre-order it from Amazon (the book will be made freely available in the open-access domain two years after its publication).

This book is co-edited by Peter Ryan and myself. It aims to capture all major developments in electronic voting since 2003 in a real-world setting. It covers three broad categories: e-voting protocols, attacks reported on e-voting, and new developments on the use of e-voting.

Table of contents [PDF]

Foreword (Josh Benaloh) [PDF]

Preface (Feng Hao and Peter Ryan) [PDF]

Part I: Setting the scene

  • Chapter 1: Software Independence Revisited (Ronald L. Rivest and Madars Virza)
  • Chapter 2: Guidelines for Trialling E-voting in National Elections (Ben Goldsmith)

Part II: Real-world e-voting in national elections

  • Chapter 3: Overview of Current State of E-voting World-wide (Carlos Vegas and Jordi Barrat)
  • Chapter 4: Electoral Systems Used around the World (Siamak F. Shahandashti)
  • Chapter 5: E-voting in Norway (Kristian Gjøsteen)
  • Chapter 6: E-voting in Estonia (Dylan Clarke and Tarvi Martens)
  • Chapter 7: Practical Attacks on Real-world E-voting (J. Alex Halderman)

Part III: E2E verifiable protocols and real-world applications

  • Chapter 8: An Overview of End-to-End Verifiable Voting Systems (Syed Taha Ali and Judy Murray)
  • Chapter 9: Theoretical Attacks on E2E Voting Systems (Peter Hyun-Jeen Lee and Siamak F. Shahandashti)
  • Chapter 10: The Scantegrity Voting System and its Use in the Takoma Park Elections (Richard T. Carback, David Chaum, Jeremy Clark, Aleksander Essex, Travis Mayberry, Stefan Popoveniuc, Ronald L. Rivest, Emily Shen, Alan T. Sherman, Poorvi L. Vora, John Wittrock, and Filip Zagórski)
  • Chapter 11: Internet voting with Helios (Olivier Pereira)
  • Chapter 12: Prêt à Voter – the Evolution of the Species (Peter Y A Ryan, Steve Schneider, and Vanessa Teague)
  • Chapter 13: DRE-i and Self-Enforcing E-Voting (Feng Hao)
  • Chapter 14: STAR-Vote: A Secure, Transparent, Auditable, and Reliable Voting System (Susan Bell, Josh Benaloh, Michael D. Byrne, Dana DeBeauvoir, Bryce Eakin, Gail Fisher, Philip Kortum, Neal McBurnett, Julian Montoya, Michelle Parker, Olivier Pereira, Philip B. Stark, Dan S. Wallach, and Michael Winn)

On the Trust of Trusted Computing in the Post-Snowden Age

At the 8th CSF Workshop on Analysis of Security API (13 July 2015), I presented a talk entitled “On the Trust of Trusted Computing in the Post-Snowden Age”. The workshop produces no formal proceedings, but you can find the abstract here. My slides are available here.

In the field of Trusted Computing (TC), people often take “trust” for granted. When secrets and software are encapsulated within a tamper-resistant device, users are simply educated to trust the device. The level of trust is sometimes boosted by the acquisition of a certificate (based on Common Criteria or FIPS).

However, such “trust” can easily be misused to break security. In the talk, I used the TPM as an example. Suppose a TPM is used to implement secure data encryption/decryption. A standard-compliant implementation is trivially subject to the following attack, which I call the “trap-door attack”. The TPM first compresses the data before encryption, so that it can use the saved space to insert a trap-door block into the ciphertext. The trap-door block contains the decryption key wrapped under the attacker’s key. The existence of such a trap-door is totally undetectable so long as the encryption algorithms are semantically secure (and they should be).
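To make the attack concrete, here is a toy Python model. All names and parameters are hypothetical, the XOR “cipher” merely stands in for a real semantically secure scheme, and the sketch assumes the plaintext compresses well enough to leave room for the trap-door block:

```python
# Toy model of the trap-door attack: compress, then use the saved space to
# smuggle the data key (wrapped under the attacker's key) into the ciphertext.
import os
import zlib

def xor(data: bytes, key: bytes) -> bytes:
    # stand-in for a real semantically secure cipher
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def trapdoor_encrypt(plaintext: bytes, data_key: bytes, attacker_key: bytes) -> bytes:
    compressed = zlib.compress(plaintext, level=9)
    body = xor(compressed, data_key)        # the "real" encryption
    trapdoor = xor(data_key, attacker_key)  # data key wrapped for the attacker
    pad = os.urandom(len(plaintext) - len(body) - len(trapdoor))
    # output has the same length as an honest ciphertext, and the trap-door
    # block is indistinguishable from random ciphertext bytes
    return body + pad + trapdoor

pt = b"highly redundant secret data " * 20
ct = trapdoor_encrypt(pt, os.urandom(16), attacker_key=b"attacker-key-16B")
print(len(ct) == len(pt))   # True: nothing suspicious from the outside
# The attacker recovers the data key from the last 16 bytes:
# data_key = xor(ct[-16:], b"attacker-key-16B")
```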

To the best of my knowledge, no one has mentioned this kind of attack in the literature. But if I were the NSA, this is the sort of attack I would consider first. It is much more cost-effective than investing in quantum computers or parallel search machines. With the backing of the state, the NSA could coerce (or bribe) the manufacturer to implant this in the TPM. No one would be able to find out, since the software is encapsulated within the hardware and protected by the tamper resistance. In return, the NSA would have the exclusive power to read all encrypted data at mass scale with trivial decryption effort.

Is this attack realistic? I would argue yes. In fact, according to the Snowden revelations, the NSA has already done a similar trick by implanting an insecure random number generator in RSA’s products (the NSA reportedly paid RSA US$10m). What I have described is a different trick, and there may well be many more similar ones.

This attack highlights the importance of taking into account the threat of a state-funded adversary in the design of a Trusted Computing system in the post-Snowden age. The essence of my presentation is a proposal to change the (universally held) assumption of “trust” in Trusted Computing to “trust-but-verify”. I gave a concrete solution in my talk to show that this proposal is not just a theoretical concept but is practically feasible. As highlighted in the talk, my proposal constitutes only a small step in a long journey, but I believe it is an important step in the right direction.

Topics about the NSA and mass surveillance are always heavy and depressing. So while in Verona (where the CSF workshop was held), I took the opportunity to tour the city. It was a pleasant walk, with views of well-preserved ancient buildings, sunny weather (yes, a luxury for someone coming from the UK) and the familiar sound of cicadas (which I used to hear every summer during my childhood in China).

The Verona Arena is the attraction that draws most of the tourists. The conference organizers highly recommended attending one of the operas, but I was eventually deterred by the thought of having to sit for four hours listening to a language I couldn’t understand. So I decided to wander freely. As I entered the second floor of a shop selling hand-made sewing items, my attention was suddenly drawn by someone who passionately shouted while pointing out of the window: “Look, that’s the balcony where Juliet and Romeo used to date!” I was infected by the excitement and quickly took a photo. In the photo below, you can see the Juliet statue next to the balcony. (Of course, a logical mind will question how this dating was possible, given that the two are fictional characters, but why should anyone care? This was Verona, a city of legends.)


Password Authenticated Key Exchange in a Group

Today, we release a new paper entitled “The Fairy-Ring Dance: Password Authenticated Key Exchange in a Group”. This is joint work with Xun Yi (RMIT University), Liqun Chen (HP Labs) and Siamak Shahandashti (Newcastle University). The initial work started when Xun visited us at the School of Computing Science, Newcastle University, in February 2014; this paper is one of the collaborative research outcomes stemming from that visit.

The subject of two-party Password Authenticated Key Exchange (PAKE) has been well studied for nearly three decades; however, the topic of multi-party Group PAKE (GPAKE) has so far received little attention. This is partly because a Group PAKE is significantly more complex to design than a two-party PAKE: there are more interactions between participants, hence more potential attack vectors for an adversary to exploit.

We decided to investigate this subject because we believe Group PAKE protocols will become increasingly important in the next 10 years, especially in the era of the Internet of Things (IoT). A Group PAKE protocol can help set up a group of out-of-the-box IoT devices that have no pre-installed secrets or certificates; one just needs to enter a common (low-entropy) passcode into each of the devices. The protocol can then take over to ensure secure group communication among the IoT devices, even though all data is transmitted over an insecure Internet.

One major technical challenge here is to make the protocol as round-efficient as possible. With Moore’s law, computational efficiency improves rapidly over time, but round efficiency stays more or less the same. Intuitively, when a group of entities engages in multiple rounds of interaction over a distributed network, the bottleneck in the overall latency will likely be determined by the slowest responder in each round. Hence, our strategy is to trade off computation for optimal round efficiency, aiming to minimize the number of rounds as much as possible.

The paper (a free copy is available at IACR ePrint) gives more technical details about how the above strategy is realized. I’ll present the paper at the ASIACCS Workshop on IoT Privacy, Trust, and Security (IoTPTS) in April 2015. It will be interesting to hear feedback from academic and industrial researchers working on IoT.

Before reading the paper, I would suggest the reader first watch the following “Fairy Ring Dance” on YouTube, since the structural design of our solution shares some similarity with that dance.

Fairy Ring Dance (YouTube)


Verifiable Classroom Voting

At today’s Teaching and Learning 2014 Conference, I presented a talk on “Enforcing Teaching and Learning with Electronic Voting” (Slides). The talk summarizes our last two years’ work on developing a smartphone-based Verifiable Classroom Voting (VCV) system and our experience of trialling it in real classroom teaching (with positive student feedback, which can be viewed here and here, near the bottom of the questions).

The VCV system is built on a cryptographic protocol called DRE-i, which ensures the integrity of the tallying results without involving any trusted tallying authorities. Voters are able to independently verify that their votes have actually been captured by the system and correctly tallied, without compromising voter privacy. To the best of our knowledge, the developed system is the first of its kind in the world – none of the commercially available classroom voting systems permits public verifiability as ours does.

Encouraged by the positive student feedback, our ultimate aim is to make the system freely available to teachers in any university or school worldwide, to help enhance teaching and learning in the classroom. At the moment, we are still at a trial stage. Limited by the available resources, the system is currently only available to those who have valid Newcastle University campus accounts.

If you are a member of the teaching staff in the university, and would like to trial the system in your class, follow the brief instructions below.

Before the class:

  • Log on https://evoting.ncl.ac.uk as a coordinator (using your campus ID)
  • Create an election under the “Create Election” tab
  • Take note of the election ID generated by the system at the end of the election creation. (An example of the election ID is: 4535)

Voting in the class:

  • Inform the students of the election ID and the optional passcode
  • Ask students to visit https://evoting.ncl.ac.uk and vote with the provided election ID and passcode (if any)
  • Alternatively, students may vote using the mobile phone apps (free iPhone app available here and Android app here)

Displaying the results:

  • Log on https://evoting.ncl.ac.uk using your campus ID
  • Go to the “Manage Elections” tab. Next to the election ID, choose “End Election” from the “Action” drop-down menu
  • Go to the home page, enter election ID and then choose “View results”

PowerPoint Plug-In: Normally, “Displaying the results” requires the use of a web browser, which means you need to swap presentation modes between PowerPoint and the browser. For a smoother presentation, you can install a PowerPoint plug-in (freely available at SourceForge) which embeds an IE browser in a PowerPoint slide, so you can stay in the PowerPoint slideshow throughout the presentation.

Special-interest group: If you would like to join us to further the trials of classroom voting for pedagogical purposes, please get in touch. We may set up a mailing list for the special-interest group if there is sufficient interest.

Acknowledgement: This research work is kindly supported by Newcastle University UTLSEC Innovation Fund (2012) and ERC Starting Grant (2013-2018). Dylan Clarke developed the back-end server, initial web interface and iPhone app. Carlton Shepherd developed the initial Android app and helped improve the web interface. Ehsan Toreini developed the PowerPoint plugin and helped improve the web interface.