Cyber Security: a MOOC in progress

Members of the research group in Secure and Resilient Systems at Newcastle University are currently preparing a new MOOC (Massive Open Online Course) on the practicalities of Cyber Security. The three-week course Cyber Security: Safety at Home, Online, in Life will be running on the FutureLearn platform from 5th September 2016.

The course team preparing to film a discussion on how we handle risks in everyday life

Although it’s the first time our group has participated in MOOC development, it’s the 5th course that Newcastle University’s Learning and Teaching Development Service (LTDS) will have delivered, so we feel we’re in safe hands. Our aim is to introduce course participants to current topics in cyber security research and show how they relate to everyday life: privacy of data, safety of financial transactions, and security implications of smart devices, to take three examples.

For us as researchers and lecturers in security and resilience, it’s an interesting and sometimes challenging process to think about how best to present material in this medium. We’re moving from research papers and presentations, lectures and coursework assignments to short articles, discussion topics, quizzes and video. We hope it will be of interest to anyone with some background knowledge in cyber security and an interest in finding out current practice and research directions in this area.

We hope you can join us on 5th September! You can register for the course at https://www.futurelearn.com/courses/cyber-security.

Towards Bitcoin Payment Networks

The Newcastle University Bitcoin group was invited by the 21st Australasian Conference on Information Security and Privacy (ACISP 2016) to write a paper about Bitcoin security.

Instead, in collaboration with Malte Möser from the University of Münster, we decided to summarise an emerging field in Bitcoin. We call this field ‘Bitcoin Payment Networks’ and we hope that our paper will inspire other researchers to think about how to construct payment channels, how bitcoins can be routed across two or more channels using an overlay payment network, and what regulation (if any) is required when anyone can participate in routing payments.

Now, I am sure you are thinking:

I thought Bitcoin already supported payments?

What is a payment channel?

And why do we need an overlay payment network for that matter?

Yes, Bitcoin is a peer to peer electronic cash system, and yes, Bitcoin already has a peer to peer network.

Unfortunately, Bitcoin as deployed today does not scale. The currency can support approximately 2-3 transactions per second, primarily due to an artificial cap of 1 MB per block. Simply lifting the cap can alleviate the scalability problems in the short term, but recent research presented at the 3rd Bitcoin Workshop demonstrates that the underlying peer to peer network can only handle up to 27 transactions per second. Moreover, lifting the cap and re-parameterizing Bitcoin is arguably just kicking the can down the road.
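
The headline throughput figure can be reproduced with back-of-the-envelope arithmetic. A minimal sketch in Python, assuming an average transaction size of 500 bytes (an illustrative assumption, not a figure from the paper):

```python
# Throughput implied by the 1 MB block cap, assuming ~500-byte transactions.
BLOCK_SIZE_BYTES = 1_000_000       # the artificial 1 MB cap
AVG_TX_SIZE_BYTES = 500            # assumed average transaction size
BLOCK_INTERVAL_SECONDS = 600       # one block roughly every 10 minutes

txs_per_block = BLOCK_SIZE_BYTES / AVG_TX_SIZE_BYTES     # 2000 transactions
tps = txs_per_block / BLOCK_INTERVAL_SECONDS             # ~3.3 tx/s

print(f"~{tps:.1f} transactions per second")
```

Varying the assumed transaction size moves the estimate, which is why quoted figures hover around 2-3 transactions per second.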

So how do we solve this problem? Research has focused in two directions:

  1. New types of Blockchain protocols such as GHOST and Bitcoin-NG,
  2. Facilitating Bitcoin transactions ‘off-chain’ and only using the Blockchain if an adjudicator is required.

In our paper, we focus on the second approach, which requires a payment channel that facilitates bi-directional payments. These channels can be established when one or both parties deposit bitcoins into a multi-signature address controlled by both parties. Sending a payment then requires the authorisation of both parties, and essentially re-allocates the channel’s bitcoins between them. For example, if the channel has 1 BTC, then 0.5 BTC can be allocated to Alice and 0.5 BTC to Bob. If Alice sends 0.1 BTC to Bob, the channel is updated to allocate 0.4 BTC to Alice and 0.6 BTC to Bob. To close the channel, both parties can cooperate and settle using a single transaction, or, when cooperation is not possible, either party can raise a dispute (requiring two or more transactions) on the Blockchain to settle the final balance.
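
The balance re-allocation described above can be sketched as a toy state machine. Amounts are in satoshis, as real implementations use integer units; the Bitcoin scripts and mutual signatures that actually enforce these rules are omitted:

```python
# A toy model of off-chain balance updates: deposits fix the channel's
# capacity, and each payment merely re-allocates it between the parties.
class PaymentChannel:
    def __init__(self, alice_deposit, bob_deposit):
        self.balances = {"Alice": alice_deposit, "Bob": bob_deposit}
        self.capacity = alice_deposit + bob_deposit

    def pay(self, sender, receiver, amount):
        # In a real channel both parties must authorise the new state;
        # here we only check that the sender can afford the payment.
        if amount <= 0 or self.balances[sender] < amount:
            raise ValueError("invalid payment")
        self.balances[sender] -= amount
        self.balances[receiver] += amount

channel = PaymentChannel(50_000_000, 50_000_000)   # 0.5 BTC each, 1 BTC total
channel.pay("Alice", "Bob", 10_000_000)            # Alice sends 0.1 BTC
print(channel.balances)   # {'Alice': 40000000, 'Bob': 60000000}
```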

Why are these channels useful?

  • The channel can be set up in such a way that the depositors are guaranteed to have their bitcoins returned if no payments occur,
  • The channel supports bidirectional activity and thousands of transactions can happen without the need to inform the Bitcoin network,
  • The channel prevents double-spending as each payment requires the authorisation of the counterparty.

The prominent schemes proposed in the community are Duplex Micropayment Channels by Christian Decker and Roger Wattenhofer, and Lightning Channels by Joseph Poon and Thaddeus Dryja. We provide a comparison of both schemes to better understand their suitability for payment networks. Overall, we found that both schemes are primarily suited for different network topologies. Duplex is suited for a regulated hub-and-spoke model with highly reliable hubs, whereas Lightning is suited for a mesh network in which anyone can become a hub.

Now that we have a channel that can accept deposits from both parties and send bidirectional payments – what exactly is a payment network? A network consists of a set of channels, and allows two parties to find a route of channels that connects them. Then, both parties can send bi-directional payments across the network.

In the simplest case, Alice and Bob each share a channel with Caroline (A->C, C->B), and Alice can send bitcoins to Bob using both of Caroline’s channels.

However, in the more interesting case, we can have more than one intermediary and still route payments:

  • Alice to Caroline (A->C)
  • Caroline to Dave (C->D)
  • Dave to Eugene (D->E)
  • Eugene to Bob (E->B)
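
Finding such a route can be sketched as a breadth-first search over the channel graph. This toy version ignores channel capacities and fees, and the party names are simply those from the example above:

```python
# Route discovery over a set of undirected payment channels.
from collections import deque

channels = {("Alice", "Caroline"), ("Caroline", "Dave"),
            ("Dave", "Eugene"), ("Eugene", "Bob")}

# Build an adjacency list: a channel can route payments in both directions.
graph = {}
for a, b in channels:
    graph.setdefault(a, []).append(b)
    graph.setdefault(b, []).append(a)

def find_route(source, destination):
    """Return the shortest hop sequence from source to destination, or None."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == destination:
            return path
        for neighbour in graph.get(path[-1], []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

print(find_route("Alice", "Bob"))
# ['Alice', 'Caroline', 'Dave', 'Eugene', 'Bob']
```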

Alice can send bitcoins to Bob via Caroline, Dave and Eugene. Most importantly, all routed bitcoins are sent without trusting any intermediary, and this relies upon the Hashed Time-Locked Contract (HTLC) technique proposed by Joseph Poon and Thaddeus Dryja for the Lightning Network.
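
The intuition behind HTLC can be sketched as follows. This toy simulation omits the timeouts and on-chain enforcement that make the real construction trustless:

```python
# Hash-locked payments along a route: every hop's payment can only be
# claimed by revealing a preimage of the same hash, so claiming incoming
# funds automatically hands the previous hop what it needs to claim too.
import hashlib, os

def sha256hex(x: bytes) -> str:
    return hashlib.sha256(x).hexdigest()

secret = os.urandom(32)        # generated by Bob, the final recipient
lock = sha256hex(secret)       # Alice learns only the hash, not the secret

route = ["Alice", "Caroline", "Dave", "Eugene", "Bob"]

# Each adjacent pair sets up a conditional payment:
# "0.1 BTC to you, if you show a preimage of `lock` before a timeout".
htlcs = [{"payer": a, "payee": b, "lock": lock, "claimed": False}
         for a, b in zip(route, route[1:])]

# Bob reveals the secret to claim from Eugene; each intermediary then
# reuses the same secret to claim from its own predecessor.
for htlc in reversed(htlcs):
    assert sha256hex(secret) == htlc["lock"]   # preimage check
    htlc["claimed"] = True

print(all(htlc["claimed"] for htlc in htlcs))   # True
```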

In our paper, we detail how HTLC works for both Duplex Micropayment Channels and the Lightning Network, and we highlight the potential limitations that HTLC imposes on the channels. For example, in Duplex Micropayment Channels, the routed bitcoins are potentially locked into the transfer for the channel’s entire lifetime, while in Lightning, the time allocated for the transfer determines how frequently each party must monitor the Blockchain for the remaining lifetime of their channel, which is a potential burden for low-resourced participants.

One of the most important remaining challenges for payment networks is assessing the feasibility of different underlying topologies for the network. For instance, the community’s vision is that a mesh network will exist in which anyone can route bitcoins.

However, to achieve a mesh network, we need to decide how users can find payment routes on the network. Is there a gossip network for hubs to advertise their services? Do hubs reveal their channels (and leak their financial privacy) using Bitcoin’s blockchain? Are routing fees fixed or dynamic per route? Finally, are hubs on the mesh network considered money transmitters and need to be regulated? In the final section of our paper, we provide a brief discussion on some of these challenges.

We hope our blood, sweat and tears (yes blood, I somehow managed to cut myself while stapling a copy of the paper) will help both researchers and the community understand how cryptocurrencies can scale using payment networks. Furthermore, we hope to highlight that payment networks as a scaling solution can also potentially maintain the open-membership and self-enforcing properties that we have grown to love about Bitcoin.

Our paper can be found here. Just below you will find a selfie of the first two authors, Patrick and Malte, alongside Rasheed from Bitt.com, taken while attending Financial Cryptography 2016.

Malte, Patrick and Rasheed


ISO/IEC SC 27, NSA, Shrimp and Crab – A Trip Report

The 52nd meeting of ISO/IEC SC 27 was held last week, 11-15 April, in the beautiful city of Tampa, Florida, USA. Many readers may not be familiar with ISO/IEC SC 27 and the security standards it develops, so in this post I’ll provide a brief overview of SC 27, its organizational structure, and the process by which a new technique becomes part of the ISO/IEC SC 27 standards. I’ll also give a short account of some of the discussions that occurred in Working Group 2, of which I am a member.

SC 27 is a technical subcommittee under ISO and IEC, with the specific mission to standardize security-related techniques. It is further divided into five working groups (WGs) with different working areas:

  • WG1: on information security management systems
  • WG2: on cryptography and security mechanisms
  • WG3: on security evaluation, testing and specification
  • WG4: on security controls and services
  • WG5: on identity management and privacy technologies

To standardize a new security technique, there are multiple stages to go through. A typical process is summarised as follows (also see ISO/IEC stage codes): Study Period (SP) -> Working Draft (WD) -> Committee Draft (CD) -> Draft International Standard (DIS) -> Final Draft International Standard (FDIS) -> Final publication. Except for FDIS, all stages are compulsory. There are two ISO/IEC SC 27 meetings every year. In the six months between meetings, national body experts are invited to comment on the working documents received at each of the above stages. Comments are then discussed in the subsequent meeting and, hopefully, resolved to everyone’s satisfaction. If the document is considered stable (e.g., the comments received are mainly editorial, with few and trivial technical comments), it can move on to the next stage, e.g., from the 1st WD to the 1st CD; otherwise, it remains at the same stage with a new version of the document, e.g., the 1st WD becomes the 2nd WD. The new document is then circulated among national body experts, with another cycle of discussing comments at the next meeting. Standardizing a new technique typically takes at least 3-4 years.

There are several criteria for deciding whether to include a new technique into the ISO/IEC standards. The first is the maturity of the technique. It’s generally expected that the proposed technique should have been published for a number of years (typically at least 4-5 years) in a stable state, and that it has received a sufficient level of cryptanalysis and no weakness has been found. The second is the industrial need — whether there is a practical demand for this technique to be standardized. Finally, it is considered desirable if the technique comes with security proofs. But “security proofs” can be a very tricky thing, as different people interpret what the “proof” should look like in different ways. Usually, the best security proof is still the proof of “time”, which is why the proposed technique should have been published for a number of years before it could be considered for standardization.

The ISO/IEC SC 27 standardization process may look dauntingly formal and lengthy, but once you get a grip of it, you will find it is actually easier than it looks. I started attending ISO/IEC SC 27 meetings as a member of the UK national body delegation at the Hong Kong meeting in April 2014, where I first presented J-PAKE to ISO/IEC SC 27 WG2 for inclusion into ISO/IEC 11770-4. The J-PAKE paper was first published at SPW’08 in April 2008, so it was six years old when I first presented it to ISO/IEC. We were open about the public analysis results of J-PAKE, and a full discussion track record was publicly viewable at the Cambridge Blog. My presentation was well received in the first meeting, with the agreement to start a study period and call for contributions from national bodies to comment on the possible inclusion of J-PAKE into ISO/IEC SC 27. Comments were discussed in the next meeting in Mexico City (Oct 2014), where it was unanimously agreed by national body delegates to start the 1st working draft for the inclusion of J-PAKE into ISO/IEC 11770-4, with me and another member of WG 2 appointed as editors. After two working drafts, the J-PAKE proposal was formally accepted by ISO/IEC SC 27 at the Jaipur meeting (Oct 2015). This was the 1st CD stage. At this Tampa meeting, all comments received on the 1st CD of ISO/IEC 11770-4 were discussed and resolved, and it was agreed that we would proceed to the DIS stage. The final ISO/IEC 11770-4 standard that includes J-PAKE is expected to be published in 2017, so the process will take approximately three years in total.

Attending ISO/IEC SC 27 has been a great personal experience. It’s different from the usual academic conferences in that the majority of attendees are from industry, and they are keen to solve real-world security problems. Not many academics attend SC 27 though. One main reason is funding: attending two overseas meetings a year is quite a financial commitment. Fortunately, in the UK, universities are starting to pay more attention to research impact (a new assessment category first introduced in the 2014 Research Excellence Framework). Research impact concerns the benefit to industry and society (i.e., how the research actually benefits society and changes the world, rather than how many citations it receives). I was fortunate and grateful that the CS faculty in my university decided to support my travel. Newcastle University CS did very well in REF 2014, ranking 1st in the UK for research impact. Hopefully it will continue to do well in the next REF. The development of an ISO/IEC standard for J-PAKE may help make a new impact case for REF 2020.

Tampa is a very beautiful city, and it was such a long journey to get there. But I would be failing my university’s sponsorship if I stopped here without sharing some of the other happenings in SC 27 Working Group 2.

In the Tampa meeting, one work item in WG 2 attracted significant attention and heated debate: the NSA proposal to include SIMON and SPECK, two lightweight block ciphers designed by the NSA, in the ISO/IEC 29192-2 standard.

Starting from the Mexico meeting in Oct 2014, the NSA delegate presented a proposal to include SIMON and SPECK in the ISO/IEC 29192-2 standard. The two ciphers are designed to be lightweight, and are considered particularly suitable for Internet-of-Things (IoT) applications (e.g., ciphers used in light bulbs, door locks, etc.). The proposal was discussed again in the subsequent Kuching meeting in April 2015, and a study period was initiated. The comments received during the study period were discussed in the subsequent Jaipur meeting in Oct 2015. There was substantial disagreement among the delegates on whether to include SIMON and SPECK. In the end, it was decided to hold a straw poll of national bodies (which rarely happens in WG 2). The outcome was a small majority (3 yes, 2 no, all other countries abstaining) in favour of starting the first working draft, while continuing the study period and calling for contributions to solicit more comments from national bodies. (However, the straw poll procedure at the Jaipur meeting was disputed as invalid six months later in the Tampa meeting, as I’ll explain below.)

Comments on the 1st WD of SIMON and SPECK were discussed at this Tampa meeting. Unsurprisingly, this again led to a long debate: the originally scheduled 45-minute session had to be stretched to nearly two hours. Most of the arguments were about technical aspects of the two ciphers. In summary, there were three main concerns.

First, the NSA proposal of SPECK includes a variant with a block size of only 48 bits. This size was considered too small by crypto experts in WG 2. Some experts worried that a powerful attacker might perform pre-computation to identify optimal search paths. The pre-computation (2^48 operations) is well beyond the capability of an ordinary attacker, but should be within the reach of a state-funded adversary. The small block size also makes key refreshing problematic: due to the birthday paradox, a single key should not be used to encrypt more than 2^24 blocks, which is a rather small number. As some experts pointed out, this bound is further reduced in the multi-user setting.
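
The 2^24 figure follows directly from the birthday bound: with an n-bit block cipher, ciphertext-block collisions become likely after roughly 2^(n/2) blocks under one key. A quick sanity check in Python:

```python
# Birthday bound for an n-bit block cipher: ~2^(n/2) blocks per key.
def birthday_bound_blocks(block_size_bits: int) -> int:
    return 2 ** (block_size_bits // 2)

blocks = birthday_bound_blocks(48)
print(blocks)                      # 16777216, i.e. 2^24

# With 6-byte (48-bit) blocks, that is under 100 MiB of data per key:
data_mib = blocks * (48 // 8) / 2**20
print(f"{data_mib:.0f} MiB")       # 96 MiB
```

For comparison, the same bound for a 128-bit block cipher is 2^64 blocks, which is why the 48-bit variant stood out.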

Second, the SIMON and SPECK ciphers were considered too young. The ciphers were first published on the IACR eprint archive in June 2013 (yes, technical reports on IACR eprint are considered an acceptable form of publication according to ISO/IEC). When the NSA first proposed them to ISO/IEC for standardization at the Mexico meeting (Oct 2014), the two ciphers were only 1 year and 4 months old. Both SIMON and SPECK are built on ARX (add-rotate-XOR), which is a relatively new design approach. As many experts acknowledged in the meeting, the public understanding of the security properties of ARX designs is limited.

Third, the public analysis of SIMON and SPECK was not considered sufficient. The supporting argument from the NSA in the Jaipur meeting (Oct 2015) was that the cryptanalysis results on SIMON and SPECK had “reached a plateau”. However, in the following six months there was further progress on the analysis, especially of SPECK (see the latest 2016 paper by Song et al.). Hence, the argument of reaching a plateau is no longer valid. Instead, in the Tampa meeting, the NSA argued that the cryptanalysis results had become “more uniformly distributed” — i.e., the safety margins for all SIMON and SPECK variants, as measured against the best known public analysis, are now roughly centred around 30%, whereas six months earlier the safety margins for some SPECK variants were as high as 46%.
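
“Safety margin” here means the fraction of a cipher’s rounds that the best public attack does not reach. A small helper makes the 30% vs. 46% comparison concrete; the round counts below are made-up illustrative numbers, not the actual SIMON or SPECK figures:

```python
# Safety margin = fraction of rounds beyond the best known attack.
def safety_margin(total_rounds: int, attacked_rounds: int) -> float:
    return (total_rounds - attacked_rounds) / total_rounds

# Hypothetical example: a 22-round variant attacked up to 15 rounds
# leaves a ~32% margin; attacked up to only 12 rounds, a ~45% margin.
print(f"{safety_margin(22, 15):.0%}")
print(f"{safety_margin(22, 12):.0%}")
```

An improved attack that reaches more rounds shrinks the margin, which is why new cryptanalysis within the six-month window mattered.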

Most of the arguments in the meeting were technical; however, it was inevitable that the trustworthiness of the origin of SIMON and SPECK would be called into question. There have been plenty of reports alleging NSA involvement in inserting backdoors into security products and compromising security standards. The infamous random number generator, Dual_EC_DRBG, was once added to ISO/IEC 18031:2005 at the proposal of NSA delegates, but later had to be removed when news about the potential backdoor in Dual_EC_DRBG broke. In this meeting, NSA delegates repeatedly reminded experts in WG 2 that they must judge a proposal on its technical merits, not on where it came from. This was met with scepticism and distrust by some.

Given the previous troubles with NSA proposals, some experts demanded that the designers of SIMON and SPECK provide security proofs, in particular proofs that no backdoor exists. This request was rebutted by the NSA delegate as technically impossible: one can demonstrate the existence of a backdoor (if one is found), but proving the absence of a backdoor in a block cipher design is impossible. No block cipher in the existing ISO/IEC standards has this kind of proof, the NSA delegate argued.

Once all parties had made their points and the arguments had become a circular repetition, it was clear that another straw poll was the only way out. This time, the straw poll was conducted among the experts in the room. On whether to support SIMON and SPECK proceeding to the next (CD) stage, the outcome was an overwhelming objection (8 yes, 16 no, the rest abstaining).

Near the end of the Tampa meeting, it also emerged that the straw poll on starting the 1st working draft for SIMON/SPECK at the previous Jaipur meeting was disputed as invalid. According to the ISO/IEC directives, the straw poll should have been conducted among the experts present in the meeting rather than among national bodies. The difference is that in the latter there can be only one vote per country, while in the former there can be many votes per country. In the Tampa meeting, it was suggested to redo the Jaipur straw poll among experts, but the motion was rejected by the NSA on the grounds that there were not enough experts in the room. The matter is being escalated to the upper management of ISO/IEC SC 27 for resolution. At the time of writing, the dispute remains unresolved.

OK, that’s enough of the trip report. After a long and busy day of meetings, how best to spend the rest of the day? A nice dinner with friends, and some beers, is well deserved.


Real-world Electronic Voting: Design, Analysis and Deployment

We are pleased to announce the completion of a new book, “Real-world Electronic Voting: Design, Analysis and Deployment”, due to be published by CRC Press. It’s still in press, but you can pre-order it from Amazon (the book will be made freely available in the open-access domain two years after its publication).

This book is co-edited by Peter Ryan and myself. It aims to capture all major developments in electronic voting since 2003 in a real-world setting. It covers three broad categories: e-voting protocols, attacks reported on e-voting, and new developments on the use of e-voting.

Table of contents [PDF]

Foreword (Josh Benaloh) [PDF]

Preface (Feng Hao and Peter Ryan) [PDF]

Part I: Setting the Scene

  • Chapter 1: Software Independence Revisited (Ronald L. Rivest and Madars Virza)
  • Chapter 2: Guidelines for Trialling E-voting in National Elections (Ben Goldsmith)

Part II: Real-world e-voting in national elections

  • Chapter 3: Overview of Current State of E-voting World-wide (Carlos Vegas and Jordi Barrat)
  • Chapter 4: Electoral Systems Used around the World (Siamak F. Shahandashti)
  • Chapter 5: E-voting in Norway (Kristian Gjøsteen)
  • Chapter 6: E-voting in Estonia (Dylan Clarke and Tarvi Martens)
  • Chapter 7: Practical Attacks on Real-world E-voting (J. Alex Halderman)

Part III: E2E verifiable protocols and real-world applications

  • Chapter 8: An Overview of End-to-End Verifiable Voting Systems (Syed Taha Ali and Judy Murray)
  • Chapter 9: Theoretical Attacks on E2E Voting Systems (Peter Hyun-Jeen Lee and Siamak F. Shahandashti)
  • Chapter 10: The Scantegrity Voting System and its Use in the Takoma Park Elections (Richard T. Carback, David Chaum, Jeremy Clark, Aleksander Essex, Travis Mayberry, Stefan Popoveniuc, Ronald L. Rivest, Emily Shen, Alan T.
    Sherman, Poorvi L. Vora, John Wittrock, and Filip Zagórski)
  • Chapter 11: Internet voting with Helios (Olivier Pereira)
  • Chapter 12: Prêt à Voter – the Evolution of the Species (Peter Y A Ryan, Steve Schneider, and Vanessa Teague)
  • Chapter 13: DRE-i and Self-Enforcing E-Voting (Feng Hao)
  • Chapter 14: STAR-Vote: A Secure, Transparent, Auditable, and Reliable Voting System (Susan Bell, Josh Benaloh, Michael D. Byrne, Dana DeBeauvoir, Bryce Eakin, Gail Fisher, Philip Kortum, Neal McBurnett, Julian Montoya, Michelle Parker, Olivier Pereira, Philip B. Stark, Dan S. Wallach, and Michael Winn)

TouchSignatures: Identification of user touch actions and PINs based on mobile sensor data via JavaScript

How much do you trust your browser when you are surfing the internet on a mobile phone using Safari, Chrome, Opera or Firefox? Perhaps you feel secure as long as you do not download suspicious files or enter your passwords on unknown websites. You may feel even more secure after closing the browser and locking the phone.

However, our recent research (published in the Journal of Information Security and Applications) shows that there is a significant deficiency in the current W3C specifications, which affects the security of all major mobile browsers including Safari, Chrome, Opera and Firefox. The current W3C specification allows embedded JavaScript code in a web page to access the motion and orientation sensors on a mobile phone without requiring any user permission. This makes it possible for a remote website to learn sensitive user information such as phone call timing, physical activities, touch actions on the screen, and even the PINs, by collecting and analysing the sensor data.

We studied the implementation of the W3C specification in all major mobile browsers. Our study confirms that embedded JavaScript code can compromise sensitive user information by listening to the side-channel data provided by the motion and orientation sensors without any user permission, through an inactive tab, an iframe, or a minimised browser (even when the screen of the mobile phone is locked). The affected browsers on iOS and Android are listed in our paper.

To show the feasibility of the attack, we present TouchSignatures, which implements an attack where malicious JavaScript code in an attack tab listens to such sensor data measurements. Based on these streams, and using advanced machine learning methods, TouchSignatures is able to distinguish the user’s touch actions (i.e., tap, scroll, hold, and zoom) and her PINs, allowing a remote website to learn client-side user activities. We demonstrate the practicality of this attack by collecting data from real users, reporting success rates of up to 70% for PIN digit identification on Android and 56% on iOS. For more details, we refer the reader to our paper.
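
The classification step can be illustrated with a much-simplified sketch: summarise each short sensor window by a few statistics and match it against per-action templates. The real system uses far richer features and trained classifiers; the numbers below are synthetic stand-ins, not real sensor traces:

```python
# Nearest-template classification of motion-sensor windows (toy version).
import statistics

def features(window):
    # Summarise one axis of a motion window: mean, spread and peak.
    return (statistics.mean(window),
            statistics.pstdev(window),
            max(abs(v) for v in window))

# Synthetic per-action templates: a tap is a short spike, a scroll is
# sustained motion. Real templates would be learned from training data.
templates = {
    "tap":    features([0.0, 0.9, 0.1, 0.0, 0.0]),
    "scroll": features([0.2, 0.4, 0.5, 0.4, 0.3]),
}

def classify(window):
    f = features(window)
    # Pick the template with the smallest squared feature distance.
    return min(templates,
               key=lambda label: sum((a - b) ** 2
                                     for a, b in zip(f, templates[label])))

print(classify([0.0, 0.8, 0.2, 0.0, 0.1]))   # a tap-like spike
```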

This problem has been largely neglected in the past because the sensor stream available to JavaScript is restricted to low rates (3-5 times lower than those available to apps). The common perception within the W3C community and the browser industry has been that such low rates should not pose a risk of information leakage. However, our work suggests this perception is incorrect: JavaScript’s unrestricted access to the sensor data imposes serious security risks even at a low rate.

We reported the results of this research to the W3C community and to mobile browser vendors including Mozilla, Opera, Chromium and Apple. We are grateful for their quick and constructive feedback, which is summarized below:

  • W3C: “This would be an issue to address for any future iterations on this [W3C] document”.
  • Mozilla: “Indeed, and it should be fixed consistently across all the browsers and also the spec [W3C specification] needs to be fixed.”
  • Chrome: “It [i.e. this research] sounds like a good reason to restrict it [i.e. sensor reading] from iframes”.
  • Opera: “Opera on iOS giving background tabs access to the events does seem like an unwanted bug”.
  • Safari: “We have reviewed your paper and are working on the mitigations listed in the paper.”

An earlier version of the paper was presented at AsiaCCS’15, and a journal version is published in JISA (Elsevier). Please feel free to leave a comment or contact me (m.mehrnezhad@ncl.ac.uk) if you have any questions about this research project.

Refund Attacks on Bitcoin’s Payment Protocol

We got our paper “Refund Attacks on Bitcoin’s Payment Protocol” accepted at the 20th Financial Cryptography & Data Security Conference in Bridgetown, Barbados. The question is… what is the paper about and why do we think it is important for the Bitcoin community?

BIP70: Payment Protocol is a community-accepted standard that governs how customers and merchants interact during the payment process. It is currently in use by Coinbase and BitPay, the two largest payment processors in the Bitcoin community, who collectively provide the Payment Protocol for more than 100,000 merchants world-wide. The premise behind the protocol is to improve the user experience: customers no longer handle (or see) Bitcoin addresses during the payment process. Most importantly, the protocol should prevent man-in-the-middle attacks, as customers can authenticate messages from the merchant when a payment is requested.

A Bitcoin Core wallet displaying a Payment Request from BitPay.com (Source: BitPay)

Figure 1: A Bitcoin Core wallet displaying a Payment Request from BitPay.com (Source: BitPay)

To briefly describe the Payment Protocol:

  • The merchant sends a Payment Request message that contains their Bitcoin address, the number of bitcoins requested and a memo describing the purpose of the payment. This message is signed using their X.509 certificate’s private key.
  • The customer’s wallet verifies the authenticity of the merchant’s Payment Request message and displays on-screen the payment details to the customer (as seen in Figure 1).
  • If the customer authorises the payment, the wallet performs two actions:
    1. Authorises a payment transaction and broadcasts it to the Bitcoin network,
    2. Responds with a Payment message that contains a copy of the payment transaction (Bitcoin transaction that sends bitcoins to the merchant), the customer’s refund address and the number of bitcoins that should be refunded in the event of a dispute.
  • Finally, the merchant replies with a Payment Acknowledgement message that repeats the customer’s Payment message and informs the wallet to display a confirmatory message, “Thank you for your payment!”.
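
The message flow above can be sketched in a few lines of Python. The X.509 signature is replaced by an HMAC under a demo key, and the field names follow the description rather than the exact BIP70 protobuf schema, so this is a sketch of the flow only:

```python
# A toy walk-through of the BIP70-style three-message exchange.
import hmac, hashlib, json

MERCHANT_KEY = b"demo-signing-key"   # stand-in for the merchant's X.509 private key

def sign(payload: dict) -> str:
    data = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(MERCHANT_KEY, data, hashlib.sha256).hexdigest()

# 1. Merchant -> customer: signed Payment Request.
payment_request = {"address": "merchant-btc-address",
                   "amount_btc": 0.05,
                   "memo": "Order #1234"}
signature = sign(payment_request)

# 2. Customer verifies the request before paying...
assert hmac.compare_digest(signature, sign(payment_request))

# ...broadcasts the payment transaction, then sends the Payment message
# containing a copy of it plus the refund details.
payment = {"transaction": "raw-tx-hex",
           "refund_address": "customer-refund-address",
           "refund_amount_btc": 0.05}

# 3. Merchant -> customer: Payment Acknowledgement echoing the Payment message.
payment_ack = {"payment": payment, "memo": "Thank you for your payment!"}
print(payment_ack["memo"])
```

Note that the refund address travels unsigned in the Payment message, which is exactly the gap the attacks below exploit.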

A full description of the Payment Protocol can be found in our paper and in the BIP.

It should be noted that the protocol provides two pieces of evidence in case of a dispute:

  1. The customer has publicly verifiable evidence that they were requested to make a payment by presenting the Payment Request message signed by the merchant.
  2. The customer has publicly verifiable evidence that they fulfilled the request by presenting the payment transaction that is stored in Bitcoin’s Blockchain.

What we propose in the paper is that a third piece of evidence should be provided.

The merchant should have publicly verifiable evidence that he sent the refunded bitcoins to a Bitcoin address endorsed by the same pseudonymous customer who authorised the payment. 

Why is this endorsement important? In conventional online commerce, the merchant refunds the money back to the same account that authorised the payment. However, in Bitcoin (and the Payment Protocol), refunds are sent to a different Bitcoin address. This refund address has no connection to the Bitcoin address(es) that authorised the payment. Fundamentally, the merchant needs to be confident they are actually sending the bitcoins back to the customer.

Furthermore, there is no community-accepted refund protocol in use today. The Payment Processors (and merchants) have had to implement their own policy to deal with refunds in Bitcoin. Unfortunately, sending refunds in Bitcoin is not as trivial as it first appears and these observations lead us to identify two new attacks:

  • The Silkroad Trader attack relies on an authentication vulnerability in the Payment Protocol as customers can send bitcoins to an illicit trader via an honest merchant, and then plausibly deny their involvement.
  • The Marketplace Trader attack relies on the current refund policies of Coinbase and BitPay who both accept the refund address over e-mail. This allows a rogue trader to use the reputation of a trusted merchant to entice customers to fall victim to a phishing-style attack.

Full details of the attacks can be found in the paper (and are written in such a way that we hope even people without any prior knowledge about Bitcoin can easily understand them).

We performed experiments on real-world merchants to validate the feasibility of our proposed attacks and privately disclosed our results to Coinbase, BitPay, Bitt and others (all our experiments were approved by our university’s ethics committee). These payment processors have taken precautionary measures to prevent the Marketplace Trader attack (as it relies on their refund policies). However, solving the Silkroad Trader attack requires the Payment Protocol to endorse the refund addresses sent at the time of payment.

A concrete solution is outlined in the paper, and we are in the process of implementing it for both Bitcoin Core and bitcoinj. We hope to release the code to the Bitcoin community soon, alongside a new BIP outlining the technical details. In essence, the solution associates each transaction input with a refund address, as the keys that authorised the transaction are also required to sign the refund address. We settled on this solution to ensure the customer has full flexibility over which refund address is chosen (i.e., no additional information needs to be stored to re-generate the refund address).
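
The gist of the fix can be sketched as follows. A real implementation would sign the refund address with the ECDSA keys of the transaction inputs; the HMAC below is only a self-contained stand-in, and all the key and address strings are hypothetical:

```python
# Toy endorsement of a refund address by the key that authorised the payment.
import hmac, hashlib

customer_payment_key = b"key-that-signed-the-payment-tx"   # toy secret

def endorse(refund_address: str) -> str:
    # Stand-in for an ECDSA signature over the refund address.
    return hmac.new(customer_payment_key,
                    refund_address.encode(), hashlib.sha256).hexdigest()

refund_address = "customer-refund-address"
endorsement = endorse(refund_address)

# The merchant stores (refund_address, endorsement). In a dispute, the
# refund destination is accepted only if the endorsement verifies:
assert hmac.compare_digest(endorsement, endorse(refund_address))

# A Silkroad-Trader-style substitution of a different address fails:
assert not hmac.compare_digest(endorsement, endorse("attacker-address"))
print("refund address endorsed by the paying key")
```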

We recommend reading the paper to understand the attacks, experiments and solution. Please do leave us a comment if you found the post interesting or would like to know more. I can also be contacted privately at patrick.mccorry at ncl.ac.uk.

On the Trust of Trusted Computing in the Post-Snowden Age

At the 8th CSF Workshop on Analysis of Security APIs (13 July, 2015), I presented a talk entitled “On the Trust of Trusted Computing in the Post-Snowden Age”. The workshop produces no formal proceedings, but you can find the abstract here. My slides are available here.

In the field of Trusted Computing (TC), people often take “trust” for granted. When secrets and software are encapsulated within a tamper-resistant device, users are simply educated to trust the device. This trust is sometimes boosted by the acquisition of a certificate (based on Common Criteria or FIPS).

However, such “trust” can easily be misused to break security. In the talk, I used the TPM as an example. Suppose a TPM is used to implement secure data encryption/decryption. A standard-compliant implementation is trivially subject to the following attack, which I call the “trap-door attack”: the TPM first compresses the data before encryption, then uses the saved space to insert a trap-door block into the ciphertext. The trap-door block contains the decryption key wrapped under the attacker's key. The existence of such a trap-door is completely undetectable so long as the encryption algorithm is semantically secure (and it should be).
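A minimal sketch of the trap-door, assuming a toy stream cipher (SHA-256 in counter mode) and 32-byte keys as stand-ins for whatever the device actually implements. The point is only the mechanism: compression frees exactly enough space to smuggle out the wrapped key, while the ciphertext keeps the expected length and looks uniformly random.

```python
import hashlib, os, zlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy stream cipher: SHA-256 in counter mode (illustration only)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def trapdoor_encrypt(data: bytes, key: bytes, nsa_key: bytes) -> bytes:
    """Outputs len(data) bytes that look like an ordinary encryption of data."""
    comp = zlib.compress(data, 9)
    saved = len(data) - len(comp)                # space freed by compression
    assert saved >= len(key), "plaintext not compressible enough"
    inner = comp + os.urandom(saved - len(key))  # pad with random filler
    wrapped = xor(key, keystream(nsa_key, len(key)))  # user key, wrapped for attacker
    return xor(inner, keystream(key, len(inner))) + wrapped

def tpm_decrypt(ct: bytes, key: bytes) -> bytes:
    d = zlib.decompressobj()                     # silently ignores the tail bytes
    return d.decompress(xor(ct, keystream(key, len(ct))))

def nsa_recover(ct: bytes, nsa_key: bytes, keylen: int = 32) -> bytes:
    key = xor(ct[-keylen:], keystream(nsa_key, keylen))  # unwrap the user key
    return tpm_decrypt(ct, key)

key, nsa_key = os.urandom(32), os.urandom(32)
data = b"Sensitive business records. " * 40
ct = trapdoor_encrypt(data, key, nsa_key)
assert len(ct) == len(data)              # no tell-tale length expansion
assert tpm_decrypt(ct, key) == data      # the user sees normal behaviour
assert nsa_recover(ct, nsa_key) == data  # the attacker needs only the ciphertext
```

Nothing in the ciphertext betrays the trap-door: the wrapped key is indistinguishable from the random-looking bulk, which is why black-box tamper resistance alone cannot rule the attack out.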

To the best of my knowledge, no one has mentioned this kind of attack in the literature. But if I were the NSA, this is the sort of attack I would consider first. It is much more cost-effective than investing in quantum computers or parallel search machines. With the backing of the state, the NSA could coerce (or bribe) the manufacturer into implanting this in the TPM. No one would be able to find out, since the software is encapsulated within the hardware and protected by the tamper resistance. In return, the NSA would gain the exclusive power to read all encrypted data at mass scale with trivial decryption effort.

Is this attack realistic? I would argue yes. In fact, according to the Snowden revelations, the NSA has already pulled a similar trick by having an insecure random number generator made the default in RSA's products (the NSA reportedly paid RSA US$10m). What I have described is a different trick, and there may well be many more.

This attack highlights the importance of accounting for a state-funded adversary in the design of Trusted Computing systems in the post-Snowden age. The essence of my presentation is a proposal to change the (universally held) assumption of “trust” in Trusted Computing to “trust-but-verify”. I gave a concrete solution in my talk to show that this proposal is not just a theoretical concept but is practically feasible. As highlighted in the talk, my proposal constitutes only a small step in a long journey, but, I believe, an important step in the right direction.

Topics about NSA and mass surveillance are always heavy and depressing. So while in Verona (where the CSF workshop was held), I took the opportunity to tour around the city. It was a pleasant walk with views of well-preserved ancient buildings, the sunny weather (yes, a luxury for someone coming from the UK) and familiar sound of cicadas (which I used to hear every summer during my childhood in China).

The Verona Arena is the site that attracts most of the tourists. The conference organizers highly recommended attending one of the operas, but I was eventually deterred by the thought of having to sit for four hours listening to a language I couldn't understand. So I decided to wander freely. As I entered the second floor of a shop selling hand-made sewing items, my attention was suddenly drawn by someone passionately shouting while pointing out of the window: “Look, that's the balcony where Juliet and Romeo used to date!” I was infected by the excitement and quickly took a photo. In the photo below, you can see the Juliet statue next to the balcony. (Of course, a logical mind will question how such dating is possible given that the two are fictional characters, but why should anyone care? It was Verona, a city of legends.)

The Juliet statue and balcony in Verona

J-PAKE built into Google Nest thermostats

The J-PAKE key exchange protocol, designed by Prof Peter Ryan and myself in 2008, has been built into the Nest thermostat products (Nest was bought by Google in 2014 for US$3.2 billion). A technical white paper that describes the implementation has recently gone public (13 July, 2015).

Besides the Google Nest, J-PAKE has been used in other commercial products. Since 2010, J-PAKE has been used by Mozilla Firefox to implement secure sync, deployed to over 400 million internet users. Recently, Mozilla Firefox started to deploy a different mechanism (less secure but more usable than J-PAKE) for sync. However, the Palemoon browser, a popular fork of Firefox, retains the original J-PAKE based mechanism to preserve full security in protecting sync data (which contains sensitive user passwords). At the ISO/IEC SC 27 meeting held in Mexico City in October 2014, the national bodies in Working Group 2 unanimously supported including J-PAKE in the ISO/IEC 11770-4 standard. The standardization of J-PAKE is currently in progress and is expected to finish within two years.

The original J-PAKE paper was initially rejected by the major conferences in the field, as the protocol design was based on a new method and did not follow the established approaches of the time. The paper was eventually accepted and published at a small workshop (Security Protocols Workshop ’08) held in Cambridge, UK in 2008. Seven years on, having stood the test of time, it is pleasing to see the J-PAKE technique and its underlying design ideas gradually being accepted by the academic community and industry.

Perils of an Unregulated Global Virtual Currency

We (Dylan Clarke, Patrick McCorry and myself) recently presented a position paper at the 23rd Security Protocols Workshop (SPW) in Cambridge. Our paper, titled Bitcoin: Perils of an Unregulated Global P2P Currency, makes the case that the ideological and design choices that define Bitcoin’s strengths are also directly responsible for the Bitcoin-related crime that we encounter in the news so often today.

In a nutshell: Bitcoin’s anonymity and lack of regulation are key to freeing users from central banks but they also empower drug dealers and money launderers. Using virtual assets as money reduces dependence on banks as users can handle their own wealth, but this opens the door to hackers and malware. Mainstreaming an entire global financial infrastructure to trade virtual assets cuts banks out of the picture entirely, but also de-risks crime, exposes users to threats from all over the world and opens a Pandora’s box of ethical and legal dilemmas.

We do a quick survey of the landscape of Bitcoin-related crime and observe that crime is thriving, with rapid growth and increasing sophistication. Dark markets are taken down often, but they continue to grow in number and volume. Bitcoin also de-risks crime: drugs can be ordered almost as easily as pizza, and criminals no longer need to take the risks traditionally associated with setting up and protecting illicit financial flows. Bitcoin exchanges are a regular target for hackers, and customers routinely end up losing their coins. Malware that steals bitcoins from victims' computers is booming. The ransomware industry is also thriving: in the short space of three years, CryptoLocker and CryptoWall have claimed hundreds of thousands of victims and made tens of millions of dollars. There is now even a DIY ransomware kit called Tox – customers download an executable, secretly infect someone's computer, and then share the ransom with the makers of the kit.

Flipping Bitcoin's strengths also gives us insight into anticipating future threats. Governments and law enforcement are already sounding the alarm that Bitcoin's anonymity and lack of regulation are ideally suited to tax evasion and money laundering. Non-currency exploits can piggyback on the Bitcoin network infrastructure: researchers have already demonstrated how to deliver malware and operate botnets by crafting Bitcoin transactions embedded with malicious payloads.

There are no easy answers to this. If Bitcoin becomes ubiquitous, this will be the new normal. It is not possible to ‘tweak’ Bitcoin to make the negatives go away without affecting its key strengths. This is similar to the TOR dilemma – i.e. an anonymity network for activists living under repressive regimes will also empower hate speech and illegal pornography. This tradeoff, for Bitcoin, has yet to be explicitly acknowledged.

This theme – that we must recognize certain security threats do not have solutions in the technological domain – emerged repeatedly over the three days of the workshop, in talks on disparate topics including browser fingerprinting, TOR deployment and software design.

Apart from that, it was good weather in Cambridge. This was my first time at SPW; this particular workshop was hugely inspirational during my own PhD, and I was very excited to participate in it. The food was spectacular. A big and surprising highlight: I am a big fan of the Lord Protector Oliver Cromwell, and during the course of the workshop I discovered not only that he studied at the college where our workshop was being held, Sidney Sussex College, but, even more astounding, that his head is buried in a room right next to where we were convening. (Cromwell died in 1658; after the British monarchy was restored in 1660, his body was disinterred, hanged and decapitated. The head passed through the hands of private collectors and was finally secretly buried in Sidney Sussex College in 1960.)

Plaque marking burial site of Oliver Cromwell's head in Sidney Sussex College, Cambridge

The technical report for our paper can be found here and all SPW talks are liveblogged here (courtesy of Ross Anderson).

Deleting secret data with public verifiability

In an upcoming paper (to appear in IEEE Transactions on Dependable and Secure Computing, 2015), we (with Dylan Clarke and Avelino Zorzo) investigate the secure data deletion problem: the guaranteed deletion of digital data by software means. This has been an active topic in recent years, with a number of publications at major security conferences (Oakland, CCS, USENIX Security, etc.).

We observe that in all published solutions, the underlying data deletion system is (implicitly) assumed to be a trusted black box. When the user instructs the system to delete data, she receives one bit in return: success or failure. The user has to trust the outcome but cannot easily verify it. Requiring the deletion program to be open source appears to be a solution, but it does not address the real problem, since the actual code running at run-time remains opaque to the user. This is especially problematic when the software is encapsulated within tamper-resistant hardware, making it impossible for users to inspect the internal code.

For those who work on electronic voting, the above problems should sound familiar. A black-box voting system works in exactly the same way. Take the standard Direct-Recording Electronic (DRE) voting machine as an example. The DRE records voters' choices through a touch-screen interface. At the end of election day, the system announces the final tally, which voters have to trust but cannot easily verify. The source code is usually not publicly available, as it may contain intellectual property. Even if it were available, there is no guarantee that the software actually running on the DRE is the same as the source code released in the public domain.

It is exactly these problems that prompted the thirty-year research effort on “verifiable e-voting”. Today, the importance of enabling public verifiability for an e-voting system is widely accepted. Established solutions generally use cryptography to allow a voter to “trust-but-verify” a voting system rather than “completely trust” it. The essence of those solutions is succinctly summarized by Ron Rivest as “software independence”: one should be able to verify the integrity of software by verifying its output rather than its source code.

While the trust-but-verify principle has been universally adopted in the e-voting community, it has so far been almost entirely neglected in the field of secure data deletion. In this new paper, we initiate an investigation into applying the “trust-but-verify” principle to the secure data deletion problem, with a concrete solution called Secure Storage and Erasure (SSE). The SSE protocol allows public verifiability of two important operations, namely encryption and deletion. More technical details can be found in the paper (available on the IACR ePrint archive).

It is worth mentioning that we have implemented a fully functional prototype of the SSE on a resource-constrained Java Card. The source code is publicly available here. This implementation proved a non-trivial challenge, as there was no precedent to follow. We were severely constrained by the very limited set of APIs available on a Java Card, and sometimes had to implement primitive functions (such as modular multiplication) from scratch in pure software, without any support from the hardware co-processor. Kudos to Avelino Zorzo, who did the bulk of the implementation work during his sabbatical with us at the School of Computing Science in 2013, and also to Dylan Clarke, who continued and completed the development work after Avelino left.
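To give a flavour of what “from scratch” means here: without a co-processor, modular multiplication is typically built from shifts and additions so that intermediate values never grow far beyond the modulus. Below is a Python rendering of that interleaved (shift-and-add) method, purely to illustrate the technique; it is not the Java Card source itself.

```python
def mod_mul(a: int, b: int, m: int) -> int:
    """Interleaved modular multiplication: scan b from its most
    significant bit down, doubling and reducing the accumulator at
    each step, and adding a (then reducing) whenever the bit is set."""
    a %= m
    r = 0
    for i in range(b.bit_length() - 1, -1, -1):
        r = (r << 1) % m        # double and reduce
        if (b >> i) & 1:
            r = (r + a) % m     # conditionally add a and reduce
    return r

assert mod_mul(123456789, 987654321, 2**61 - 1) == (123456789 * 987654321) % (2**61 - 1)
```

The accumulator stays below 2m throughout, which is exactly the property that makes the method fit the small fixed-width arithmetic of a card.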