Smart Counter-Collusion Contracts for Verifiable Cloud Computing (2/2)

Previously, we showed the Prisoner’s contract and how it forces the two clouds to behave honestly by creating a Prisoner’s Dilemma. However, this only works if the two clouds cannot make credible commitments. The fundamental problem in the Prisoner’s Dilemma and in game 1 (in the previous post) is that neither player can believe that the other, a selfish utility maximizer, will follow the collusion strategy.

Now if someone does not believe you, you cannot convince them just by talking. What convinces a rational player is showing that lying will make you worse off: if lying is not in your interest, you will not lie. Likewise, if you want someone to do what you expect, you have to show that doing so is in their best interest.

That is the idea behind the Colluder’s contract, in which both clouds demonstrate their loyalty to collusion (i.e. sending the agreed wrong result r) by promising, in effect: “I will suffer a loss if I cheat, and any damage my cheating causes you will be compensated.” The one who initiates the collusion can also give a slice of his own profit to the other as an additional incentive. The contract is again based on deposits:

  • Each cloud pays a large enough deposit into the Colluder’s contract;
  • anyone who does not follow the collusion strategy loses its deposit, which is transferred to the other cloud;
  • the ringleader commits to paying a bribe if the other cloud follows the collusion strategy.
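The deposit-and-bribe rule above can be sketched as simple payout logic. This is a minimal sketch: the amounts and the function name are hypothetical placeholders, and the paper derives the actual bounds on the deposit and the bribe.

```python
# Sketch of the Colluder's contract payout rule (hypothetical amounts;
# the paper derives the real constraints on deposit and bribe sizes).
def colluders_contract_payout(follows, deposit=100, bribe=10):
    """follows maps "ringleader"/"other" to whether that cloud sent the
    agreed wrong result r.  Returns each party's transfer from the contract."""
    payout = {"ringleader": 0, "other": 0}
    for cloud, other in (("ringleader", "other"), ("other", "ringleader")):
        if follows[cloud]:
            payout[cloud] += deposit       # loyal: deposit is returned
        else:
            payout[other] += deposit       # deviator's deposit goes to the other cloud
    if follows["other"]:                   # ringleader pays the promised bribe
        payout["ringleader"] -= bribe
        payout["other"] += bribe
    return payout

# Deviating forfeits your deposit to the other cloud:
assert colluders_contract_payout({"ringleader": True, "other": False}) == \
       {"ringleader": 200, "other": 0}
```

Under this rule, following the collusion strategy is each cloud’s best response once the contract is signed, which is exactly what makes the Colluder’s contract dangerous.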

This Colluder’s contract, when in place, changes the game into the following:

As you can see, the equilibrium (bold path) for the two clouds is now to collude, with both following the collusion strategy.

This is bad: after trying to prevent collusion with smart contracts, we found that smart contracts can also be used to enable collusion. And if the client tries to counter that with another contract, the clouds can respond with yet another contract of their own. This is an endless loop.

What can we do, then, if we cannot counter directly? In the end, we came up with the idea of using a smart contract to incentivize secret betrayal and reporting. This leads to the Traitor’s contract. In this contract, the first cloud to report collusion is not punished by the Prisoner’s contract and receives an additional reward if the collusion attempt does exist (so there is a motivation to report). However, a cloud that reports a non-existent collusion case has to bear the consequences and suffer a loss (so there is a motivation not to abuse the system).

The consequence of reporting is that the client calls the trusted party, who finds out who cheated. Once the trusted party is called, there is no point in countering with yet another contract, because each cloud’s payoff now depends only on whether it cheats, not on the other’s behaviour. So we break the loop. More importantly, the Traitor’s contract creates distrust between the clouds, because “agree to collude, then betray” is the best response if one cloud tries to initiate collusion. Once both clouds understand that, neither will want to initiate collusion, knowing it will be betrayed and end in a worse payoff. So both behave honestly in the first place.

The contract again works by manipulating deposits, paid upfront by the client and the reporting cloud. Details can be found in the paper. Here I just show the full game tree:

An implementation of the contracts in Solidity is available here. We tested the contracts on the official Ethereum network. There are challenges in implementing the contracts, one being the transparency of a public blockchain: everything you put on the blockchain is visible to everyone. To make it worse, a blockchain is append-only, so there is no way to delete the data later if you change your mind.

To preserve data privacy, we used some lightweight cryptography, including Pedersen commitments and non-interactive zero-knowledge proofs (NIZKs). A Pedersen commitment allows us to put a “commitment” (ciphertext) of a value on the blockchain rather than the value itself. The commitment is hiding, in that it leaks no information about the committed value, and binding, in that you cannot find a different value and convince other people that this new value was the one committed. One problem caused by the hiding property is that the miners cannot see the committed values, and thus cannot compare them to determine whether they are equal (which is needed to execute the contracts). Fortunately, we can use NIZKs, cryptographic proofs that can be checked publicly with commitments as inputs. There are existing NIZK protocols for proving equality/inequality of committed values, which we can simply use.
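As an illustration, a Pedersen commitment over a toy prime-order group might look like the following. This is only a sketch: the parameters are far too small to be secure, our real implementation works over an elliptic-curve group, and in practice nobody may know log_g(h).

```python
# Toy Pedersen commitment over a small prime-order group (illustrative
# parameters only, NOT secure sizes).
import random

p, q = 2039, 1019        # p = 2q + 1; the squares mod p form a group of prime order q
g, h = 4, 9              # two generators of that subgroup; log_g(h) must stay unknown

def commit(value, r=None):
    """Commit to `value` with randomness r: C = g^value * h^r mod p."""
    if r is None:
        r = random.randrange(q)
    return pow(g, value, p) * pow(h, r, p) % p, r

c, r = commit(42)
# Opening with the committed value and its randomness verifies (binding)...
assert c == pow(g, 42, p) * pow(h, r, p) % p
# ...while without r the commitment reveals nothing about 42 (hiding).
```

The miners only ever see `c`; equality of two committed values is then shown with a NIZK over the commitments rather than by revealing the values.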

The cost of using the smart contracts comes from the transaction fees paid to the miners for storing and executing the contracts. In our experiments, conducted on the official Ethereum network, the transaction fees were small, ranging from $0.01 to $0.40 depending on the transaction. This was done in May 2017, when the price of Ether was about $90. Today the Ether price is about $360, so transaction fees would be higher. Luckily, the most expensive operations are the cryptographic ones, and the recent Ethereum hard fork has made elliptic-curve cryptography (which we use) cheaper than before, so the increase in transaction fees should not be as steep as the increase in the Ether price.

The End.

Smart Counter-Collusion Contracts for Verifiable Cloud Computing (1/2)


The idea of our counter-collusion contracts is to make collusion a bad choice that leads to loss, so that the clouds will avoid it like the plague. Collusion has been studied for many years by economists, who have made several key observations:

  • Collusion is profit driven. It is often illegal; without additional profit, no one would have the motivation to collude.
  • Colluding parties have their own interests, and often those who collude are also competitors. This is very true in our case (cloud computing).
  • Cheating is a big problem in collusion. Often the party who deviates from the collusion can get an even higher profit, and so is motivated to do so.

Collusion is delicate and requires trust among the colluding parties. If we can take away the trust, the clouds cannot collude.

Everything I say below is based on certain assumptions. The most important ones include:

  • The clouds are rational, which means two things: they try to maximize their payoffs, and they understand all consequences of the games.
  • There exists a trusted third party that can be called upon to re-compute and find out who was wrong if the two clouds return different results. What is interesting about this trusted party is that our analysis shows that if the two clouds are rational, it will never need to be involved at all.
  • The task to be outsourced must be deterministic, or reducible to a deterministic one (e.g. by including a seed as input and using a pseudorandom number generator for random choices).

There are other, less important assumptions; check them in the paper.

Prisoner’s Contract is where we started. The contract is designed for outsourcing and needs to be signed by a client and two clouds. Informally, the contract requires each cloud to pay a deposit before it can take the job. The deposit, of course, needs to be large enough (we derive a lower bound in the paper). The clouds then get the task to compute, and each returns a result before the deadline. An honest cloud is paid a “salary” for the computation, a cheating cloud (if caught) is punished by losing its deposit, and if one cheats while the other is honest, the honest cloud gets an additional reward (out of the cheater’s deposit). In cases where the client cannot decide who is honest, the trusted party is called in to resolve the dispute. The cost of dispute resolution is always borne by the cheating cloud(s), out of the deposit(s). This means the client’s cost is bounded: it will never be more than the two salaries, even in the unlikely case that the trusted party has to be involved.
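The settlement rule just described can be sketched as follows. The salary, deposit and reward amounts here are hypothetical placeholders; the paper derives the real constraints between them.

```python
# Sketch of the Prisoner's contract settlement rule (hypothetical amounts;
# the paper derives the required lower bound on the deposit).
def prisoners_contract_payout(c1_honest, c2_honest, salary=10, deposit=100, reward=20):
    """Returns (u1, u2): each cloud's net transfer after the job."""
    def pay(honest, other_honest):
        if honest:
            u = salary                    # honest work is paid a salary
            if not other_honest:
                u += reward               # bonus carved out of the cheater's deposit
            return u
        return -deposit                   # a caught cheater forfeits its deposit
    return pay(c1_honest, c2_honest), pay(c2_honest, c1_honest)

assert prisoners_contract_payout(True, True) == (10, 10)
assert prisoners_contract_payout(True, False) == (30, -100)  # honest cloud rewarded
```

Note how the payoffs already hint at the dilemma: each cloud’s best outcome is to stay honest while the other cheats.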

What is the consequence of the contract? The highest payoff each cloud can get comes from the case where it is honest and the other cheats. What does that mean? Let us play the roles of the clouds, you and me:

  • Me: let’s collude and cheat together!

What would you do?

  • A: decline to collude
  • B: collude with me and send the agreed wrong result
  • C: agree to collude but later remain honest
  • D: decline and try to cheat

If your choice is A, you are a good person. And your honest behaviour will force me to behave honestly because I will be punished if I cheat and you do not cooperate.

If your choice is B, you are too naive. Of course collusion, if it succeeds, will lead to a higher profit (than being honest) for you. But have you considered the possibility that I am lying? I can take advantage of your trust and later send the correct result to get a higher profit.

If your choice is C, you are a smart badass, with a sense of “business acumen”. This is actually a choice that is no worse than A and could lead to the best payoff for you in the game (if I am naive or mad).

If your choice is D (I hope not), you are dangerous, because this choice makes no sense (it is the worst thing you can ever do), and whoever chooses it must be out of their mind.

Anyway, you should get the idea and can understand the game presented below:

C1 and C2 are the two clouds; the labels on the edges are the actions they can take: f(x) means sending the correct result, r means sending the agreed wrong result, and other means any other action. Below the leaf nodes are the payoffs: u1 is the payoff of C1, u2 is the payoff of C2. No need to pay attention to the payoffs now; you can find how they are derived in the paper. The bold edges show the best choices of the players (C1 and C2). A bold path from the root to a leaf node is an equilibrium: a stable state in which neither party wants to change its strategy unless the other does.

In the game there is only one equilibrium, and in it the choice of each party is strictly better than its other choices (a dominant strategy). In the equilibrium, both clouds play honestly, because:

  • If no one asks me to collude, being honest leads to the best payoff.
  • If the other cloud asks me to collude, then I should agree to collude but later remain honest, in order to get the highest payoff.
  • Anyway, behaving honestly is always the best choice.

If you are familiar with game theory, you will have noticed that this is essentially a Prisoner’s Dilemma. For both clouds the reasoning is the same, and both will stay honest. If both clouds stay honest, everyone is happy and dispute resolution is not needed.
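The dominant-strategy claim can be checked mechanically on a toy normal-form version of the game. The payoff numbers below are hypothetical but consistent with the contract’s rules (salary 10, deposit 100, reward 20, plus a small collusion gain when both cheat); the actual payoff expressions are in the paper.

```python
# Toy normal-form check that honesty is a dominant strategy under
# Prisoner's-contract-style payoffs (hypothetical numbers).
payoff = {  # (c1_action, c2_action) -> (u1, u2)
    ("honest", "honest"): (10, 10),
    ("honest", "cheat"):  (30, -100),   # honest cloud gets salary + reward
    ("cheat",  "honest"): (-100, 30),
    ("cheat",  "cheat"):  (-85, -85),   # lost deposit minus a small collusion gain
}

def dominant(player):
    """True if 'honest' strictly beats 'cheat' whatever the opponent does."""
    i = 0 if player == "c1" else 1
    for opp in ("honest", "cheat"):
        def key(action):
            return (action, opp) if i == 0 else (opp, action)
        if payoff[key("honest")][i] <= payoff[key("cheat")][i]:
            return False
    return True

assert dominant("c1") and dominant("c2")
```

With these payoffs, (honest, honest) is the unique equilibrium, matching the bold path in the game tree.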

So far so good, and it seems we have solved the problem. Unfortunately, no. In the next post, you will see how the clouds can use another contract to change the game completely and make collusion the best choice, and how we solve this problem.

Smart Counter-Collusion Contracts for Verifiable Cloud Computing (Prologue)

People say smart contracts are the next big thing in the blockchain space. In the simplest terms, a smart contract is a piece of program stored and executed on the blockchain. The fancy things about a smart contract are that its execution is (or will be) always correct (if you believe in the consensus protocol that maintains the blockchain), it is self-enforcing (executed and enforced by peers), it is trustless (no central authority), and it is cheap to use. It sounds great, but what can smart contracts actually do? Of course, we want something more than ICOs. And this is what I will write about.

A Short Summary, in case you are impatient: we use smart contracts to implement mechanisms designed based on game theory, to enable cost-effective verifiable cloud computing. The paper (co-authored with Yilei Wang, Amjad Aldweesh, Patrick McCorry and Aad van Moorsel) was presented earlier this month at CCS 2017, and here are the full paper and slides.

The Need for Verifiable Cloud Computing comes from distrust. Everyone using cloud computing probably knows that “the cloud” is just a bunch of computers belonging to someone else. So when I outsource something to the cloud, how can I be sure it is done properly? In current practice, I cannot. You can imagine how worrying this is when the outsourced computation matters to me. The clouds are not necessarily malicious; this is simply a consequence of uncertainty: I do not know what exactly happens in the clouds, and I have no control over it either. So the best I can do, as a matter of due diligence, is not to trust the clouds and to verify all results they return. But how? Verification can be as expensive as recomputing the task, and I might not have the resources to do that (if I did, I could avoid the cloud in the first place by computing the task myself).

The Current State of verifiable computing is more or less divided into two streams: verification using cryptography, and verification using replication. In the cryptography-based approach, the cloud must generate a proof that the computation was done correctly. Cryptography ensures that, unless our beloved cryptographic assumptions are wrong, the cloud cannot generate a valid proof for an incorrect computation. By checking the proof, I can be assured of the correctness of the computation. In the replication-based approach, I give the same task to several clouds, collect their results, and cross-check them. If the results from all replicas match, I can assert with high confidence that the computation was done correctly. Of course, the more replicas I use, the more reliable my assertion. More replicas can also help me find the correct result, should something go wrong in some replicas.

What is Missing in all existing verifiable computing techniques is a sense of economy. They are technically sound, but come at an unaffordable price. The problem is that the cloud is not free: you pay for what you compute, and generating a cryptographic proof is much more expensive than you might think. Currently, the overhead is 3 to 6 orders of magnitude more than the computation being verified. Simple primary-school math:

  • The costs of my computation: £500 per month
  • The costs of getting the proofs: £500 * 1000 = half a million per month
  • What I get: bankruptcy and out of business

For the replication-based approach, since I have to pay each of the replicas, the cost is multiplied by a factor equal to the number of replicas. It soon becomes unaffordable as this factor grows.

One reason, perhaps the most important, that people use cloud computing is cost saving. When there is no cost advantage over on-premises IT infrastructure, over which you have control and need not worry much about correctness, many would not be keen on the cloud.

The Question then is: can we have cost-effective verifiable cloud computing after all? For the cryptography-based approach, I am a bit pessimistic: the gap is just too big, and unless there is a major breakthrough, we will not be able to use it in practice in the near future. For the replication-based approach, there might be some hope, if the number of replicas we pay for is small. How small can that number be? The minimum is 2, and in fact, that might work. The premise is that using cloud computing is cheaper than using your own trusted on-premises IT infrastructure. “Two cloud replicas” means doubling the cost, which may not differ much from, or may even be lower than, the cost of your trusted infrastructure. Given the other good qualities cloud computing possesses, people would have a motivation to use the cloud.

This is straightforward, so why has no one come up with something? Let us forget all engineering difficulties such as synchronization, replication and latency, and focus on the idea. It has a fatal weakness: collusion. In the replication-based approach, verification is done by comparing the results. What if the two clouds collude and give you the same wrong result? You know nothing and you can verify nothing. Can the clouds collude? Of course they can. Remember, it is not about whether they will collude; it is about whether you believe they will. You do not trust the clouds, so collusion is a threat to you. In the face of collusion, verification based on 2 replicas is insecure.

How to Prevent Collusion is then our objective. The technical details will follow. A spoiler from the abstract of the paper: a client “uses smart contracts to stimulate tension, betrayal and distrust between the clouds, so that rational clouds will not collude and cheat”.

Beware What Lurks Within Your Browser: The Threat of Malicious Extensions

Extensions have been a staple of the modern browser, with some, such as AdBlock Plus, receiving over half a million weekly downloads. Browsers emphasise that their extension models are resistant to attacks, both from malware uploaded to their web stores and from external website-based attacks. One could then assume that a user’s safety is preserved as long as the extension is downloaded from the browser’s official extension repository.

However, our study shows that this is not the case. We show that Chrome, Firefox and Firefox for Android are highly susceptible to their extensions being used for a malicious purpose. We enumerate the range of capabilities each extension model possesses and discuss the impact this has on a user’s privacy and browsing integrity. We found that Firefox and Firefox for Android users in particular should be more wary of malicious extensions compared to Chrome users, with some attacks affecting even the user’s OS file system.

In conjunction with our findings, we designed a simple botnet to control a vast network of malicious extensions and tested its feasibility by uploading a malicious extension to both Chrome’s and Firefox’s web stores (for ethical reasons, both extensions had the botnet remotely disabled, so no reviewers could come to harm by using them). We found that neither Firefox’s nor Chrome’s web store checks are sufficient to find malicious extensions, as both of our extensions were approved.

Our paper has been accepted for publication in IEEE S&P Magazine, and a pre-print version is currently available at:

DRE-ip: A Verifiable E-Voting Scheme without Tallying Authorities

Next week at ESORICS 2016, I will be presenting our paper (co-authored with Feng Hao) on a new e-voting system which we call DRE-ip. The system is designed for e-voting in supervised environments, that is, in polling stations, but alternative implementations of the underlying idea can also be used for remote e-voting (sometimes called i-voting, for internet voting).

DRE-ip is an end-to-end verifiable e-voting system that guarantees vote secrecy. These properties are similar to those provided by state-of-the-art verifiable systems like VoteBox, STAR-Vote, and vVote, designed to be used in elections in the US and Australia. However, crucially, DRE-ip achieves these properties without requiring trusted tallying authorities: entities holding the decryption keys to encrypted ballots.

In almost all systems with tallying authorities, the votes are encrypted to provide vote secrecy. These encrypted ballots are posted on a publicly accessible bulletin board to enable vote verification. In some systems, the votes are shuffled (using mix-nets) before the tallying authorities decrypt them individually. In other systems, they are aggregated (using homomorphic encryption) before decryption, and the tallying authorities only decrypt the tally. These two techniques protect vote secrecy from the tallying authorities. However, there is nothing to prevent the tallying authorities from getting together and decrypting the ballots on the bulletin board, and worse, there is no way to detect whether this happens. So at the end of the day, we are trusting the tallying authorities for vote secrecy.

DRE-ip is based on a simple observation: if a message is encrypted using randomness r, the ciphertext can be decrypted using either the secret key or the randomness r. Now, imagine a situation where multiple messages are encrypted and, say, we are interested in finding the sum of these messages. One way would be to decrypt the ciphertexts individually and then compute the sum. Another way, with homomorphic encryption, would be to aggregate the ciphertexts first and then decrypt the encrypted sum. These two ways are what other systems do. But our observation tells us there is a third way: whoever encrypts the messages can keep an aggregation of all the randomness used in the encryptions and release it at some point, which enables decrypting the sum of the messages. DRE-ip is built on top of this observation.
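As a sketch of this “third way”, here is a toy exponential-ElGamal-style example: the product of all the ciphertexts, together with the released aggregate randomness R, reveals the tally and nothing else. The parameters are illustrative, not secure sizes, and DRE-ip’s actual scheme differs in its details; this only shows the aggregate-randomness idea.

```python
# Toy sketch of tallying via aggregated randomness (illustrative parameters,
# NOT secure sizes; DRE-ip's real scheme differs in details).
import random

p, q = 2039, 1019        # p = 2q + 1; the squares mod p form a group of prime order q
g, h = 4, 9              # two generators of the order-q subgroup (toy values)

def encrypt(v, r):
    """Exponential-ElGamal-style ballot for vote v in {0,1}: (g^r, h^r * g^v)."""
    return (pow(g, r, p), pow(h, r, p) * pow(g, v, p) % p)

def tally(ballots, R, max_votes):
    """Given the released aggregate randomness R, anyone can recover the tally."""
    c2_prod = 1
    for _, c2 in ballots:
        c2_prod = c2_prod * c2 % p
    gT = c2_prod * pow(h, -R, p) % p    # cancel h^R, leaving g^T
    for T in range(max_votes + 1):       # the tally is small: brute-force the exponent
        if pow(g, T, p) == gT:
            return T
    raise ValueError("no tally found")

votes = [1, 0, 1, 1, 0]
rs = [random.randrange(q) for _ in votes]
ballots = [encrypt(v, r) for v, r in zip(votes, rs)]
R = sum(rs) % q                          # aggregate randomness released by the machine
assert tally(ballots, R, len(votes)) == sum(votes)
```

No secret key appears anywhere in the tallying step: R decrypts only the sum, while individual ballots stay encrypted.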

In DRE-ip, the direct-recording electronic (DRE) voting machine that captures and encrypts the votes also keeps an aggregation of the randomness used in the encryptions, and at the end of the election releases this value to the bulletin board along with announcing the tally. This enables the public to verify the tally. No secret keys are involved in verifying the tallying integrity, and hence no tallying authorities are required. In fact, the system is set up so that no one knows the secret key of the encryption scheme, which means no one is able to decrypt individual ballots. The election tally is the only information that can be verified given the encrypted ballots, and this computation is public.

Having the idea is perhaps the easy part; the main work is to carefully design a system that provides full end-to-end verifiability while allowing one to argue rigorously that it guarantees ballot secrecy. In the paper we give proofs of why using encryption in such a way is secure.

DRE-ip achieves levels of security comparable to those of state-of-the-art systems, but crucially with one less group of trusted authorities. To argue the significance of this, it would be sufficient to just quote Ross Anderson‘s definition of a trusted third party: 

A Trusted Third Party is a third party that can break your security policy.

Towards Bitcoin Payment Networks

The Newcastle University Bitcoin group was invited by the 21st Australasian Conference on Information Security and Privacy (ACISP 2016) to write a paper about Bitcoin security.

Instead, in collaboration with Malte Möser from the University of Münster, we decided to summarise an emerging field in Bitcoin. We call this field `Bitcoin Payment Networks’, and we hope that our paper will inspire other researchers to think about how to construct payment channels, how bitcoins can be routed across two or more channels using an overlay payment network, and what regulation (if any) is required if anyone can participate in routing payments.

Now, I am sure you are thinking:

I thought Bitcoin already supported payments?

What is a payment channel?

And why do we need an overlay payment network for that matter?

Yes, Bitcoin is a peer to peer electronic cash system, and yes, Bitcoin already has a peer to peer network.

Unfortunately, Bitcoin as deployed today does not scale. The currency can support approximately 2-3 transactions per second, primarily due to an artificial cap of 1 MB per block. Simply lifting the cap can alleviate the scalability problems in the short term, but recent research presented at the 3rd Bitcoin Workshop demonstrates that the underlying peer-to-peer network can only handle up to 27 tps. Besides, lifting the cap and re-parameterizing Bitcoin is arguably just kicking the can down the road.

So how do we solve this problem? Research has focused in two directions:

  1. New types of Blockchain protocols such as GHOST and Bitcoin-NG,
  2. Facilitating Bitcoin transactions `off-chain’ and only using the Blockchain if an adjudicator is required.

In our paper, we focus on the second approach, which requires a payment channel that facilitates bi-directional payments. These channels can be established when one or both parties deposit bitcoins into a multi-signature address controlled by both parties. Sending a payment then requires the authorisation of both parties, and essentially re-allocates the channel’s bitcoins between them. For example, if the channel holds 1 BTC, then 0.5 BTC can be allocated to Alice and 0.5 BTC to Bob. If Alice sends 0.1 BTC to Bob, the channel is updated to allocate 0.4 BTC to Alice and 0.6 BTC to Bob. To close the channel, both parties can cooperate and settle with a single transaction, or, when cooperation is not possible, either party can raise a dispute on the Blockchain (requiring two or more transactions) to settle the final balance.
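The balance bookkeeping in the example above can be sketched as follows. This is a toy model: a real channel has both parties co-sign each new state (the `seq` counter stands in for that), and amounts are in satoshis to avoid rounding.

```python
# Toy model of bi-directional channel state updates (off-chain bookkeeping only;
# a real channel requires both parties to co-sign every new state).
class Channel:
    def __init__(self, alice_sat, bob_sat):
        self.balances = {"alice": alice_sat, "bob": bob_sat}
        self.seq = 0                      # each update supersedes the previous state

    def pay(self, sender, receiver, amount):
        if self.balances[sender] < amount:
            raise ValueError("insufficient channel balance")
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.seq += 1                     # stand-in for a new co-signed state

    def settle(self):
        return dict(self.balances)        # final allocation for the closing transaction

ch = Channel(50_000_000, 50_000_000)      # a 1 BTC channel, 0.5 BTC to each party
ch.pay("alice", "bob", 10_000_000)        # Alice sends 0.1 BTC off-chain
assert ch.settle() == {"alice": 40_000_000, "bob": 60_000_000}
```

Thousands of such `pay` updates can happen with no on-chain footprint; only the final `settle` allocation needs to reach the Blockchain.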

Why are these channels useful?

  • The channel can be set up in such a way that the depositors are guaranteed to have their bitcoins returned if no payments occur,
  • The channel supports bidirectional activity and thousands of transactions can happen without the need to inform the Bitcoin network,
  • The channel prevents double-spending as each payment requires the authorization of the counterparty.

The prominent schemes proposed in the community are Duplex Micropayment Channels by Christian Decker and Roger Wattenhofer, and Lightning Channels by Joseph Poon and Thaddeus Dryja. We provide a comparison of both schemes to better understand their suitability for payment networks. Overall, we found that both schemes are primarily suited for different network topologies. Duplex is suited for a regulated hub-and-spoke model with highly reliable hubs, whereas Lightning is suited for a mesh network in which anyone can become a hub.

Now that we have a channel that can accept deposits from both parties and send bidirectional payments – what exactly is a payment network? A network consists of a set of channels, and allows two parties to find a route of channels that connects them. Then, both parties can send bi-directional payments across the network.

In the simplest case, Alice and Bob each share a channel with Caroline (A->C, C->B), and Alice can send bitcoins to Bob using both of Caroline’s channels.

However, in the more interesting case, we can have more than one intermediary and still route payments:

  • Alice to Caroline, A->C
  • Caroline to Dave, C->D
  • Dave to Eugene, D->E
  • Eugene to Bob, E->B

Alice can send bitcoins to Bob via Caroline, Dave and Eugene. Most importantly, all routed bitcoins are sent without trusting any intermediary, and this relies upon the Hashed Time-Locked Contract (HTLC) technique proposed by Joseph Poon and Thaddeus Dryja for the Lightning Network.
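The essence of an HTLC can be sketched as a small state machine: the funds are claimable by the receiver with the hash preimage before a timeout, and refundable to the sender afterwards. This is a toy model, not the actual Bitcoin script; the class and field names are illustrative.

```python
# Toy HTLC: claimable with the hash preimage before the timeout,
# refundable to the sender afterwards (not the actual Bitcoin script).
import hashlib

class HTLC:
    def __init__(self, sender, receiver, amount, hashlock, timeout):
        self.sender, self.receiver, self.amount = sender, receiver, amount
        self.hashlock, self.timeout = hashlock, timeout
        self.claimed_by = None

    def claim(self, preimage, now):
        """Receiver claims by revealing the secret before the timeout."""
        if self.claimed_by is None and now < self.timeout \
                and hashlib.sha256(preimage).digest() == self.hashlock:
            self.claimed_by = self.receiver
        return self.claimed_by

    def refund(self, now):
        """After the timeout, the sender reclaims unspent funds."""
        if self.claimed_by is None and now >= self.timeout:
            self.claimed_by = self.sender
        return self.claimed_by

secret = b"routing-secret"
lock = hashlib.sha256(secret).digest()
htlc = HTLC("alice", "bob", 10_000, lock, timeout=100)
assert htlc.refund(now=50) is None               # too early to refund
assert htlc.claim(secret, now=50) == "bob"       # correct preimage, before timeout
```

Chaining HTLCs with the same hashlock along a route is what lets Alice pay Bob through Caroline, Dave and Eugene without trusting any of them: each hop is paid only once the secret is revealed.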

In our paper, we detail how HTLC works for both Duplex Micropayment Channels and the Lightning Network, and we highlight the potential limitations HTLC imposes on the channels. For example, in Duplex Micropayment Channels, the routed bitcoins are potentially locked into the transfer for the channel’s entire lifetime, while in Lightning, the time allocated for the transfer determines how frequently each party must monitor the Blockchain for the remaining lifetime of their channel, a potential burden for low-resourced participants.

One of the most important remaining challenges for payment networks is assessing the feasibility of different underlying topologies for the network. For instance, the community’s vision is that a mesh network will exist in which anyone can route bitcoins.

However, to achieve a mesh network, we need to decide how users can find payment routes on the network. Is there a gossip network for hubs to advertise their services? Do hubs reveal their channels (and leak their financial privacy) using Bitcoin’s blockchain? Are routing fees fixed or dynamic per route? Finally, are hubs on the mesh network considered money transmitters and need to be regulated? In the final section of our paper, we provide a brief discussion on some of these challenges.

We hope our blood, sweat and tears (yes blood, I somehow managed to cut myself while stapling a copy of the paper) will help both researchers and the community understand how cryptocurrencies can scale using payment networks. Furthermore, we hope to highlight that payment networks as a scaling solution can also potentially maintain the open-membership and self-enforcing properties that we have grown to love about Bitcoin.

Just in case, our paper can be found here. Also, just below you will find a selfie of the first two authors, Patrick and Malte, alongside Rasheed, taken while attending Financial Cryptography 2016 this year.

Malte, Patrick and Rasheed


Refund Attacks on Bitcoin’s Payment Protocol

We got our paper “Refund Attacks on Bitcoin’s Payment Protocol” accepted at the 20th Financial Cryptography & Data Security Conference in Bridgetown, Barbados. The question is… what is the paper about and why do we think it is important for the Bitcoin community?

BIP70: Payment Protocol is a community-accepted standard that governs how customers and merchants interact during the payment process. It is currently in use by Coinbase and BitPay, the two largest payment processors in the Bitcoin community, who collectively provide the Payment Protocol for more than 100,000 merchants world-wide. The premise behind the protocol is to improve the user experience: customers no longer handle (or see) Bitcoin addresses during the payment process. Most importantly, the protocol should prevent man-in-the-middle attacks, as customers can authenticate messages from the merchant when a payment is requested.


Figure 1: A Bitcoin Core wallet displaying a Payment Request from (Source: BitPay)

To briefly describe the Payment Protocol:

  • The merchant sends a Payment Request message that contains their Bitcoin address, the number of bitcoins requested and a memo describing the purpose of the payment. This message is signed using their X.509 certificate’s private key.
  • The customer’s wallet verifies the authenticity of the merchant’s Payment Request message and displays on-screen the payment details to the customer (as seen in Figure 1).
  • If the customer authorises the payment, the wallet performs two actions:
    1. Authorises a payment transaction and broadcasts it to the Bitcoin network,
    2. Responds with a Payment message that contains a copy of the payment transaction (Bitcoin transaction that sends bitcoins to the merchant), the customer’s refund address and the number of bitcoins that should be refunded in the event of a dispute.
  • Finally, the merchant replies with a Payment Acknowledgement message that repeats the customer’s Payment message and informs the wallet to display a confirmatory message, “Thank you for your payment!”.
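The message flow above can be sketched as follows. This is a toy model: the field names are illustrative, and an HMAC merely stands in for the X.509 signature for demonstration purposes (a real wallet verifies an actual signature with the certificate’s public key).

```python
# Toy model of the BIP70 message flow.  An HMAC stands in for the merchant's
# X.509 signature purely for illustration; field names are simplified.
import hashlib
import hmac
from dataclasses import dataclass

MERCHANT_KEY = b"merchant-cert-key"     # placeholder for the X.509 private key

@dataclass
class PaymentRequest:
    address: str        # merchant's Bitcoin address
    amount_btc: float   # number of bitcoins requested
    memo: str           # purpose of the payment
    def sign(self):
        msg = f"{self.address}|{self.amount_btc}|{self.memo}".encode()
        return hmac.new(MERCHANT_KEY, msg, hashlib.sha256).hexdigest()

@dataclass
class Payment:
    transaction: str      # the signed Bitcoin transaction paying the merchant
    refund_address: str   # where refunds should go; note: NOT authenticated in BIP70

request = PaymentRequest("1MerchantAddr", 0.05, "2 widgets")
signature = request.sign()
# Wallet verifies before displaying the request (here: recompute and compare).
assert hmac.compare_digest(signature, request.sign())

payment = Payment(transaction="rawtx...", refund_address="1CustomerRefundAddr")
ack = "ack:" + payment.transaction      # merchant's Payment Acknowledgement echoes it
```

Note the asymmetry that the attacks below exploit: the PaymentRequest is signed, but nothing in the Payment message ties the refund address to the customer who paid.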

A full description of the Payment Protocol can be found in our paper and in the BIP.

It should be noted that the protocol provides two pieces of evidence in case of a dispute:

  1. The customer has publicly verifiable evidence that they were requested to make a payment by presenting the Payment Request message signed by the merchant.
  2. The customer has publicly verifiable evidence that they fulfilled the request, by presenting the payment transaction stored in Bitcoin’s Blockchain.

What we propose in the paper is that a third piece of evidence should be provided.

The merchant should have publicly verifiable evidence that he sent the refunded bitcoins to a Bitcoin address endorsed by the same pseudonymous customer who authorised the payment. 

Why is this endorsement important? In conventional online commerce, the merchant refunds the money back to the same account that authorised the payment. However, in Bitcoin (and the Payment Protocol), refunds are sent to a different Bitcoin address. This refund address has no connection to the Bitcoin address(es) that authorised the payment. Fundamentally, the merchant needs to be confident they are actually sending the bitcoins back to the customer.

Furthermore, there is no community-accepted refund protocol in use today. Payment Processors (and merchants) have had to implement their own policies for dealing with refunds in Bitcoin. Unfortunately, sending refunds in Bitcoin is not as trivial as it first appears, and these observations led us to identify two new attacks:

  • The Silkroad Trader attack relies on an authentication vulnerability in the Payment Protocol as customers can send bitcoins to an illicit trader via an honest merchant, and then plausibly deny their involvement.
  • The Marketplace Trader attack relies on the current refund policies of Coinbase and BitPay who both accept the refund address over e-mail. This allows a rogue trader to use the reputation of a trusted merchant to entice customers to fall victim to a phishing-style attack.

Full details of the attacks can be found in the paper (and are written in such a way that we hope even people without any prior knowledge about Bitcoin can easily understand them).

We performed experiments on real-world merchants to validate the feasibility of our proposed attacks and privately disclosed our results to Coinbase, BitPay, Bitt and others (all our experiments were approved by our university's ethics committee). These Payment Processors have taken precautionary measures to prevent the Marketplace Trader attack (as it relies on their refund policies). However, solving the Silkroad Trader attack requires the Payment Protocol to endorse the refund addresses sent at the time of payment.

A concrete solution is outlined in the paper and we are in the process of implementing it for both Bitcoin Core and Bitcoinj. We hope to soon release the code to the Bitcoin community alongside a new BIP to outline the technical details. In essence, the solution aims to associate each transaction input with a refund address, as the keys that authorised the transaction are also required to sign the refund address. We settled on this solution to ensure the customer has full flexibility over which refund address is chosen (i.e. no additional information needs to be stored to re-generate the refund address).
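As a rough illustration of this idea (not the actual BIP or its wire format), the sketch below has each transaction-input key also sign the chosen refund address, giving the merchant checkable evidence of endorsement. An HMAC stands in for the ECDSA signatures used in practice, so here the verifier shares the keys; a real deployment would verify against the inputs' public keys.

```python
# Hypothetical sketch: every key that authorised a transaction input also
# endorses the refund address. HMAC is a stand-in for ECDSA.
import hashlib
import hmac


def sign(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()


def endorse_refund(input_keys, refund_address: str):
    """Customer: produce one endorsement per transaction input."""
    return [sign(k, refund_address.encode()) for k in input_keys]


def merchant_checks(input_keys, refund_address: str, endorsements) -> bool:
    """Merchant: verify that every input key endorsed the refund address."""
    return all(
        hmac.compare_digest(sign(k, refund_address.encode()), e)
        for k, e in zip(input_keys, endorsements)
    )


keys = [b"input-key-1", b"input-key-2"]  # keys that authorised the payment
proofs = endorse_refund(keys, "1RefundAddr")
assert merchant_checks(keys, "1RefundAddr", proofs)
```

Because the endorsements are bound to the same keys that moved the coins, a customer can no longer plausibly deny choosing the refund address, which is the crux of the Silkroad Trader attack.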

We recommend reading the paper to understand the attacks, experiments and solution. Please do leave us a comment if you found the post interesting or want to know more information. I can also be privately contacted at patrick.mccorry at

On the Trust of Trusted Computing in the Post-Snowden Age

At the 8th CSF Workshop on Analysis of Security API (13 July 2015), I presented a talk entitled “On the Trust of Trusted Computing in the Post-Snowden Age”. The workshop produces no formal proceedings, but you can find the abstract here. My slides are available here.

In the field of Trusted Computing (TC), people often take “trust” for granted. When secrets and software are encapsulated within a tamper-resistant device, users are always educated to trust the device. The level of trust is sometimes boosted by the acquisition of a certificate (based on Common Criteria or FIPS).

However, such “trust” can easily be misused to break security. In the talk, I used the TPM as an example. Suppose a TPM is used to implement secure data encryption/decryption. A standard-compliant implementation will be trivially subject to the following attack, which I call the “trap-door attack”. The TPM first compresses the data before encryption, so that it can use the saved space to insert a trap-door block into the ciphertext. The trap-door block contains the decryption key wrapped by the attacker’s key. The existence of such a trap-door is totally undetectable so long as the encryption algorithms are semantically secure (and they should be).
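The attack is easy to demonstrate in code. The hedged sketch below uses zlib for compression and a SHA-256 counter-mode keystream as a stand-in semantically secure cipher; the layout (trap-door block at the end, padded to the honest ciphertext length) is my own illustrative choice, not taken from any real TPM.

```python
# Sketch of the trap-door attack: compress, encrypt, and hide the wrapped
# data key in the space saved by compression. Keys are 32 bytes.
import hashlib
import zlib


def keystream(key: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]


def xor(data: bytes, key: bytes) -> bytes:
    # Stand-in stream cipher; XOR with the same key inverts it.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))


def honest_encrypt(data_key: bytes, plaintext: bytes) -> bytes:
    return xor(plaintext, data_key)


def trapdoor_encrypt(data_key: bytes, attacker_key: bytes, plaintext: bytes) -> bytes:
    body = zlib.compress(plaintext)
    assert len(body) + 36 <= len(plaintext), "plaintext not compressible enough"
    blob = len(body).to_bytes(4, "big") + body
    ct_body = xor(blob, data_key)
    wrapped = xor(data_key, attacker_key)  # trap-door: key wrapped for attacker
    pad = keystream(data_key + b"pad", len(plaintext) - len(ct_body) - 32)
    return ct_body + pad + wrapped  # same length as honest_encrypt's output


def attacker_recover(attacker_key: bytes, ciphertext: bytes) -> bytes:
    data_key = xor(ciphertext[-32:], attacker_key)  # unwrap trap-door block
    blob = xor(ciphertext[:-32], data_key)
    n = int.from_bytes(blob[:4], "big")
    return zlib.decompress(blob[4:4 + n])
```

From the outside, the tampered ciphertext is the same length as an honest one and looks uniformly random, yet anyone holding the attacker's key can recover the plaintext without ever touching the device.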

To the best of my knowledge, no one has mentioned this kind of attack in the literature. But if I were the NSA, this is the sort of attack I would consider first. It is much more cost-effective than investing in quantum computers or parallel search machines. With the backing of the state, the NSA could coerce (or bribe) the manufacturer to implant this in the TPM. No one would be able to find out, since the software is encapsulated within the hardware and protected by its tamper resistance. In return, the NSA would have the exclusive power to read all encrypted data at mass scale with trivial effort.

Is this attack realistic? I would argue yes. In fact, according to the Snowden revelations, the NSA has already pulled a similar trick by implanting an insecure random number generator in RSA's products (the NSA reportedly paid RSA US$10m). What I have described is a different trick, and there may well be many more like it.

This attack highlights the importance of taking into account the threat of a state-funded adversary in the design of a Trusted Computing system in the post-Snowden age. The essence of my presentation is a proposal to change the (universally held) assumption of “trust” in Trusted Computing to “trust-but-verify”. I gave a concrete solution in my talk to show that this proposal is not just a theoretical concept, but is practically feasible. As highlighted in my talk, my proposal constitutes only a small step in a long journey, but it is, I believe, an important step in the right direction.

Topics about NSA and mass surveillance are always heavy and depressing. So while in Verona (where the CSF workshop was held), I took the opportunity to tour around the city. It was a pleasant walk with views of well-preserved ancient buildings, the sunny weather (yes, a luxury for someone coming from the UK) and familiar sound of cicadas (which I used to hear every summer during my childhood in China).

The Verona Arena is the area that attracts the most tourists. The conference organizers highly recommended attending one of the operas, but I was eventually deterred by the thought of having to sit for four hours and listen to a language that I couldn’t understand. So I decided to wander freely. As I entered the second floor of a shop that sold hand-made sewing items, my attention was suddenly drawn by someone who passionately shouted while pointing a finger out of the window, “Look, that’s the balcony where Juliet and Romeo used to date!” Wow, I was infected by the excitement and quickly took a photo. In the photo below, you can see the Juliet Statue next to the balcony. (Of course a logical mind will question how this dating is possible given that the two lovers are fictional characters, but why should anyone care? It was in Verona, a city of legends.)


Perils of an Unregulated Global Virtual Currency

We (Dylan Clarke, Patrick McCorry and myself) recently presented a position paper at the 23rd Security Protocols Workshop (SPW) in Cambridge. Our paper, titled Bitcoin: Perils of an Unregulated Global P2P Currency, makes the case that the ideological and design choices that define Bitcoin’s strengths are also directly responsible for the Bitcoin-related crime that we encounter in the news so often today.

In a nutshell: Bitcoin’s anonymity and lack of regulation are key to freeing users from central banks but they also empower drug dealers and money launderers. Using virtual assets as money reduces dependence on banks as users can handle their own wealth, but this opens the door to hackers and malware. Mainstreaming an entire global financial infrastructure to trade virtual assets cuts banks out of the picture entirely, but also de-risks crime, exposes users to threats from all over the world and opens a Pandora’s box of ethical and legal dilemmas.

We do a quick survey of the landscape of Bitcoin-related crime and observe that crime is thriving with rapid growth and increasing sophistication. Dark markets are taken down often but they continue to grow in numbers and volume. Bitcoin also de-risks crime: drugs can be ordered almost as easily as pizza, and criminals no longer need to take the risks traditionally associated with setting up and protecting illicit financial flows. Bitcoin exchanges are a regular target for hackers and customers routinely end up losing their coins. Malware that steals bitcoins from victims’ computers is booming. The ransomware industry is also thriving. In the short space of three years, CryptoLocker and CryptoWall have claimed hundreds of thousands of victims and successfully made tens of millions of dollars. There’s now even a DIY ransomware kit out called Tox – customers download an executable, secretly infect someone’s computer, and then share the ransom with the makers of the kit.

Flipping Bitcoin’s positive strengths also gives us insight into future threats: governments and law enforcement are already sounding the alarm that Bitcoin’s anonymity and lack of regulation are ideally suited for tax evasion and money laundering. Non-currency exploits can piggyback on the Bitcoin network infrastructure. Researchers have already demonstrated how to deliver malware and operate botnets by crafting Bitcoin transactions embedded with malicious payloads.

There are no easy answers to this. If Bitcoin becomes ubiquitous, this will be the new normal. It is not possible to ‘tweak’ Bitcoin to make the negatives go away without affecting its key strengths. This is similar to the TOR dilemma – i.e. an anonymity network for activists living under repressive regimes will also empower hate speech and illegal pornography. This tradeoff, for Bitcoin, has yet to be explicitly acknowledged.

This theme – that we must recognize certain security threats do not have solutions in the technological domain – emerged periodically over the three days of the workshop in talks on disparate topics, including browser fingerprinting, TOR deployment and software design.

Apart from that, the weather in Cambridge was good. This was my first time at SPW, a workshop that was hugely inspirational during my own PhD, and I was very excited to participate. The food was spectacular. A big and surprising highlight: I’m a big fan of Lord Protector Oliver Cromwell, and during the course of the workshop I discovered not only that he studied in the college where our workshop was being conducted, Sidney Sussex College, but, even more astounding, that Oliver Cromwell’s head is buried in a room right next to where we were convening. (Cromwell died in 1658; after the British monarchy was restored in 1660, his body was disinterred, hanged and decapitated. The head passed into the hands of private collectors and was finally secretly buried in Sidney Sussex College in 1960.)

Plaque marking burial site of Oliver Cromwell's head in Sidney Sussex College, Cambridge

The technical report for our paper can be found here and all SPW talks are liveblogged here (courtesy of Ross Anderson).

Deleting secret data with public verifiability

In an upcoming paper (to be published in IEEE Transactions on Dependable and Secure Computing, 2015), we (with Dylan Clarke and Avelino Zorzo) investigate the secure data deletion problem. This problem concerns the guaranteed deletion of digital data using software means, and has been an active topic in recent years with quite a number of publications at major security conferences (Oakland, CCS, USENIX Security, etc.).

We observe that in all published solutions, the underlying data deletion system is (implicitly) assumed to be a trusted black box. When the user instructs the system to delete data, she receives one bit in return: success or failure. The user has to trust the outcome, but cannot easily verify it. Demanding that the deletion program be open source appears to be a solution, but it does not address the real problem, since the actual code used at run-time is opaque to the user. This is especially problematic when the software program is encapsulated within tamper-resistant hardware, making it impossible for users to access the internal code.

For those who work on electronic voting, the above problems should sound familiar. A black-box voting system works in exactly the same way. Take the standard Direct-Recording Electronic (DRE) machine as an example. The DRE system records voters’ choices through a touch-screen interface. At the end of the election day, the system announces the final tally, which voters have to trust but cannot easily verify. The source code is usually not publicly available as it may contain IPR. Even if it were available, there is no guarantee that the software actually running on the DRE system is the same as the source code released in the public domain.

It’s exactly these problems that prompted thirty years of research on “verifiable e-voting”. Today, the importance of enabling public verifiability for an e-voting system is widely accepted. Established solutions generally use cryptography to allow a voter to “trust-but-verify” a voting system rather than completely trust it. The essence of those solutions is succinctly summarized by Ron Rivest as “software independence”: namely, one should be able to verify the integrity of software by verifying the output of the software rather than its source code.

While the trust-but-verify principle has been universally adopted in the e-voting community, it has so far been almost entirely neglected in the field of secure data deletion. In this new paper, we initiate an investigation into applying the “trust-but-verify” principle to the secure deletion problem with a concrete solution, called Secure Storage and Erasure (SSE). The SSE protocol allows public verifiability of two important operations, namely encryption and deletion. More technical descriptions of the solution can be found in the paper (available on the IACR ePrint archive).
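To give a flavour of the trust-but-verify idea (this is an illustrative toy, not the paper's SSE protocol), the sketch below has a device ElGamal-encrypt a message under a committed public key and attach a Chaum-Pedersen proof that the ciphertext is exactly an encryption of that message. The user verifies the output, not the code, so there is no slack in the ciphertext for a trap-door. A tiny 11-bit group is used for readability; a real system needs a cryptographically sized group.

```python
# Toy verifiable encryption: ElGamal plus a Chaum-Pedersen proof that the
# ciphertext (c1, c2) really encrypts m under the committed public key y.
import hashlib
import secrets

p, q, g = 2039, 1019, 4  # p = 2q + 1; g generates the order-q subgroup


def H(*vals) -> int:
    # Fiat-Shamir challenge from a hash of the transcript.
    data = "|".join(map(str, vals)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q


x = secrets.randbelow(q - 1) + 1  # device's secret key (stays inside)
y = pow(g, x, p)                  # published commitment (public key)


def encrypt_with_proof(m: int):
    k = secrets.randbelow(q - 1) + 1
    c1, c2 = pow(g, k, p), (m * pow(y, k, p)) % p
    d = (c2 * pow(m, -1, p)) % p  # = y^k; same exponent as c1 = g^k
    # Chaum-Pedersen proof that log_g(c1) == log_y(d).
    w = secrets.randbelow(q - 1) + 1
    a1, a2 = pow(g, w, p), pow(y, w, p)
    e = H(a1, a2, c1, d)
    z = (w + e * k) % q
    return (c1, c2), (a1, a2, z)


def verify(m: int, ct, proof) -> bool:
    (c1, c2), (a1, a2, z) = ct, proof
    d = (c2 * pow(m, -1, p)) % p
    e = H(a1, a2, c1, d)
    return (pow(g, z, p) == (a1 * pow(c1, e, p)) % p
            and pow(y, z, p) == (a2 * pow(d, e, p)) % p)
```

This only illustrates the verifiable-encryption half of the picture; making deletion itself publicly verifiable is the harder part, and is what the SSE protocol in the paper addresses.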

It’s worth mentioning that we have implemented a fully functional prototype of the SSE on a resource-constrained Java Card. The source code is publicly available here. This implementation proved to be a non-trivial challenge as there was no precedent to follow. We were severely constrained by the (very limited) set of APIs available on a Java Card, and sometimes had to implement primitive functions (such as modular multiplication) from scratch in pure software (without any support from the hardware co-processor). Kudos to Avelino Zorzo, who did the bulk of the implementation work during his sabbatical with us at the School of Computing Science in 2013, and also to Dylan Clarke, who continued and completed the development work after Avelino left.