About Feng Hao

Reader in Security Engineering, School of Computing Science, Newcastle University

A video demonstration of DRE-ip

We have made available a video demonstration of the DRE-ip voting system on YouTube. The video was made by Ehsan Toreini.

DRE-ip (Direct Recording Electronic with Integrity and Privacy) is an end-to-end verifiable e-voting system without tallying authorities, designed by Siamak Shahandashti and myself in 2016. The DRE-ip paper was presented at ESORICS’16 and is freely available at: https://eprint.iacr.org/2016/670.pdf.

J-PAKE published as an international standard

After attending ISO/IEC SC 27 WG2 for 4 years, I’m happy to say that J-PAKE is finally published in ISO/IEC 11770-4 (2017) as an international standard. In the meantime, J-PAKE has also been published in RFC 8236 by the IETF (together with an accompanying RFC 8235 on the Schnorr non-interactive zero-knowledge proof). This is a milestone for J-PAKE. From the first presentation at the Security Protocols Workshop ’08 in Cambridge to publication as an international standard in 2017, J-PAKE has come a long way. The critical insight in the design of J-PAKE was an understanding of the importance of zero-knowledge proofs (ZKPs), but this insight was not shared by other researchers in the field at the time. One main reason is that the use of ZKPs was considered incompatible with the then-universally-adopted formal models in the PAKE field. However, in an independent study by Abdalla, Benhamouda and MacKenzie, published at IEEE S&P 2015, the formal model for PAKE protocols was modified to make it compatible with ZKPs, and the modified model was applied to prove that J-PAKE is secure. The provable results are the same as in the original J-PAKE paper, but are constructed in a formal model, thus bridging the gap between theory and practice in the end.
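For readers unfamiliar with RFC 8235, the Schnorr non-interactive zero-knowledge proof at the heart of J-PAKE’s design is simple enough to sketch in a few lines. The Python below is an illustration of my own, using deliberately tiny toy parameters for readability; a real implementation must use the large groups specified in the RFC.

```python
import hashlib
import secrets

# Toy group: p = 2q + 1 is a safe prime and g = 4 generates the order-q
# subgroup.  Illustration only; real deployments use RFC 8235's groups.
p, q, g = 2039, 1019, 4

def challenge(*vals):
    """Fiat-Shamir: derive the challenge by hashing the transcript."""
    h = hashlib.sha256("|".join(map(str, vals)).encode()).digest()
    return int.from_bytes(h, "big") % q

def prove(x):
    """Prove knowledge of x such that X = g^x mod p, without revealing x."""
    X = pow(g, x, p)
    v = secrets.randbelow(q)        # fresh ephemeral secret
    V = pow(g, v, p)                # commitment
    c = challenge(g, V, X)          # non-interactive challenge
    r = (v - x * c) % q             # response
    return X, (V, r)

def verify(X, proof):
    """Check V == g^r * X^c mod p, i.e. g^(v-xc) * g^(xc) == g^v."""
    V, r = proof
    c = challenge(g, V, X)
    return V == (pow(g, r, p) * pow(X, c, p)) % p

X, proof = prove(secrets.randbelow(q))
assert verify(X, proof)
```

J-PAKE attaches such a proof to each ephemeral public key it sends, which is what rules out the active attacks that plague ZKP-free designs.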

Today, J-PAKE is already used by millions of users in commercial products, e.g., Palemoon Sync, Google Nest, ARM mbed OS, OpenSSL, Mozilla NSS, and the Bouncy Castle API. In particular, J-PAKE has been adopted by the Thread Group as a standard key exchange mechanism for the IoT commissioning process, i.e., adding new IoT devices to an existing network. The protocol has already been embedded into IoT products. The following video demonstrates how J-PAKE is used to securely enrol a new IoT device into the Thread network during the commissioning process (more details about Thread can be found at NXP, Thread Group, ARM, Silicon Labs and Google Nest’s OpenThread). It’s expected that in the near future, J-PAKE will be used by billions of Thread-compliant IoT devices for the initial bootstrapping of trust.


First campus trial of the DRE-ip voting system

Today, we ran the first campus trial of a new e-voting system called DRE-ip. The DRE-ip system was initially published at ESORICS’16 (paper here), and since then we have been busy developing a prototype. In our current implementation, the front end of the prototype consists of a touch-screen tablet (Google Pixel C), linked via Bluetooth to a thermal printer (EPSON TM-P80). The backend is a web server hosted on the campus of Newcastle University.

The e-voting trial was conducted in front of the Students’ Union from 11:00 am to 2:00 pm. We managed to get nearly 60 people to try our prototype and fill in a questionnaire. All users provided us with useful and constructive feedback (which will take us a while to analyze in full detail). The general reception of our prototype has been very positive. The prototype worked robustly during the 3-4 hour trial. Apart from the occasional slight delay in printing a receipt from the thermal printer, the system worked reliably without any problem. This is the first time that we have put our theoretical design of an e-voting system to a practical test, and we are glad that it met our expectations on the first trial.



During the trial, we asked users to choose a candidate from the following choices: Theresa May, Jeremy Corbyn, Nicola Sturgeon, Tim Farron, None of the above. The tallying results are a bit surprising: Jeremy Corbyn won the most votes! However, the voting question we used in our trial was meant to be a lighthearted choice. Our main aim was to test the reliability and usability of the prototype and to identify areas for improvement. Many users understood that.

Today’s trial was greatly helped by the nice weather, which is not that usual in Newcastle. Everyone from the project team tried their best. It was great teamwork, and it was great fun. When we finished the trial, it was already past 2:00 pm. A relaxed lunch with beer and celebration drinks in our favorite Red Mezze restaurant was well deserved (and I foresee no problem in justifying it to the ERC project sponsor).


We plan to analyze and publish today’s trial results in the near future. Stay tuned.


ISO/IEC SC 27, NSA, Shrimp and Crab – A Trip Report

The 52nd meeting of ISO/IEC SC 27 was held last week, 11-15 April, in the beautiful city of Tampa, Florida, USA. Many readers may not be familiar with ISO/IEC SC 27 and the security standards that it develops. So in this post I’ll provide a brief overview of SC 27, its organizational structure and the process by which a new technique becomes part of the ISO/IEC SC 27 standards. I’ll also give a short account of some of the discussions that occurred in Working Group 2, of which I am a member.

SC 27 is a technical subcommittee under ISO and IEC, with the specific mission to standardize security-related techniques. It is further divided into five working groups (WGs) with different working areas:

  • WG1:  on information security management systems
  • WG2: on cryptography and security mechanisms
  • WG3: on security evaluation, testing and specification
  • WG4: on security controls and services
  • WG5: on identity management and privacy technologies

To standardize a new security technique, there are multiple stages to go through. A typical process is summarised as follows (also see ISO/IEC stage codes): Study Period (SP) -> Working Draft (WD) -> Committee Draft (CD) -> Draft International Standard (DIS) -> Final Draft International Standard (FDIS) -> Final publication. Except for FDIS, all other stages are compulsory. There are two ISO/IEC SC 27 meetings every year. In the six months between the meetings, national body experts are invited to provide comments on working documents received at each of the above stages. Comments are then discussed in the subsequent meeting, and hopefully resolved to everyone’s satisfaction. If the document is considered stable (e.g., the comments received are mainly editorial changes, and technical comments are few and trivial), the document can move on to the next stage, e.g., from the 1st Working Draft to the 1st CD; otherwise, it remains at the same stage with a new version of the document, i.e., the 1st WD to the 2nd WD. The new document will be circulated among national body experts, with another cycle of discussing comments in the next meeting. Standardizing a new technique typically takes at least 3-4 years.

There are several criteria for deciding whether to include a new technique into the ISO/IEC standards. The first is the maturity of the technique. It’s generally expected that the proposed technique should have been published for a number of years (typically at least 4-5 years) in a stable state, and that it has received a sufficient level of cryptanalysis and no weakness has been found. The second is the industrial need — whether there is a practical demand for this technique to be standardized. Finally, it is considered desirable if the technique comes with security proofs. But “security proofs” can be a very tricky thing, as different people interpret what the “proof” should look like in different ways. Usually, the best security proof is still the proof of “time”, which is why the proposed technique should have been published for a number of years before it could be considered for standardization.

The ISO/IEC SC 27 standardization process may look dauntingly formal and lengthy, but once you get a grip on it, you will find it is actually easier than it looks. I started attending the ISO/IEC SC 27 meetings as a member of the UK national body delegation from April 2014 at the Hong Kong meeting, where I first presented J-PAKE to ISO/IEC SC 27 WG2 for inclusion in ISO/IEC 11770-4. The J-PAKE paper was first published at SPW’08 in April 2008, so it was 6 years old when I first presented it to ISO/IEC. We were open about the public analysis results of J-PAKE, and a full discussion track record was publicly viewable at the Cambridge Blog. My presentation was well received in the first meeting, with the agreement to start a study period and call for contributions from national bodies to comment on the possible inclusion of J-PAKE into ISO/IEC SC 27. Comments were discussed at the next meeting in Mexico City (Oct 2014), where it was unanimously agreed by the national body delegates to start the 1st working draft for the inclusion of J-PAKE into ISO/IEC 11770-4, with me and another member of WG 2 appointed as the editors. After two working drafts, the J-PAKE proposal was formally accepted by ISO/IEC SC 27 at the Jaipur meeting (Oct 2015). This was the 1st CD stage. At this meeting, all comments received on the 1st CD of ISO/IEC 11770-4 were discussed and resolved. It was then agreed in this Tampa meeting that we would proceed to the DIS stage. It is expected that the final ISO/IEC 11770-4 standard that includes J-PAKE will be published in 2017, so it will have taken approximately 3 years in total.

Attending ISO/IEC SC 27 has been a great personal experience. It’s different from the usual academic conferences in that the majority of attendees are from industry and they are keen to solve real-world security problems. Not many academics attend SC 27 though. One main reason is funding; attending two overseas meetings a year is quite a financial commitment. Fortunately, in the UK, all universities are starting to pay more attention to research impact (a new assessment category that was first introduced in the 2014 Research Excellence Framework). Research impact concerns the impact on industry and society (i.e., how the research actually benefits society and changes the world, rather than how many citations it attracts). I was fortunate and grateful that the CS faculty in my university decided to support my travels. Newcastle University CS did very well in REF 2014: it was ranked 1st in the UK for research impact. Hopefully it will continue to do well in the next REF. The development of an ISO/IEC standard for J-PAKE may help make a new impact case for REF 2020.

Tampa is a very beautiful city and it was such a long journey to get there. I would fail my university’s sponsorship if I stopped here without sharing my experience of the other happenings in SC 27 Working Group 2.

In the Tampa meeting, one work item in WG 2 attracted significant attention and heated debate: the NSA proposal to include SIMON and SPECK, two lightweight block ciphers designed by the NSA, in the ISO/IEC 29192-2 standard.

Starting from the Mexico meeting in Oct 2014, the NSA delegate presented a proposal to include SIMON and SPECK in the ISO/IEC 29192-2 standard. The two ciphers are designed to be lightweight, and are considered particularly suitable for Internet-of-Things (IoT) applications (e.g., ciphers used in light bulbs, door locks etc.). The proposal was discussed again at the subsequent Kuching meeting in April 2015, and a study period was initiated. The comments received during the study period were discussed at the subsequent Jaipur meeting in Oct 2015. There was substantial disagreement among the delegates on whether to include SIMON and SPECK. In the end, it was decided to hold a straw poll by national bodies (which rarely happens in WG 2). The outcome was a small majority (3 Yes, 2 No, all other countries abstained) in support of starting the first working draft, while continuing the study period and call for contributions to solicit more comments from national bodies. (However, the straw poll procedure at the Jaipur meeting was disputed as invalid six months later in the Tampa meeting, as I’ll explain later.)

Comments on the 1st WD of SIMON and SPECK were discussed in this Tampa meeting. Unsurprisingly, this again led to another long debate. The originally scheduled 45-minute session had to be stretched to nearly 2 hours. Most of the arguments were on technical aspects of the two ciphers. In summary, there were three main concerns.

First, the NSA proposal of SPECK includes a variant that has a block size of only 48 bits. This size was considered too small by crypto experts in WG 2. Some experts worried that a powerful attacker might perform pre-computation to identify optimal search paths. The pre-computation (2^48) is way beyond the capability of an ordinary attacker, but should be within the reach of a state-funded adversary. Also, the small block size makes key refreshing problematic. Due to the birthday paradox, a single key should not be used for encrypting more than 2^24 blocks, which is a rather small number. As some experts pointed out, this bound is reduced further in the multi-user setting.
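As a back-of-envelope illustration of the 2^24 figure (my own calculation, not part of the WG 2 record), the birthday approximation can be computed directly:

```python
import math

def collision_probability(n_blocks, block_bits):
    """Birthday approximation p ~ 1 - exp(-n^2 / 2^(b+1)) for the chance
    of at least one ciphertext-block collision after encrypting n_blocks
    under a single key with a block cipher of block_bits-bit blocks."""
    return 1 - math.exp(-n_blocks ** 2 / 2 ** (block_bits + 1))

# SPECK's smallest variant has 48-bit blocks.  After only 2^24 blocks
# (about 100 MB of data at 6 bytes per block), a collision is likely:
print(collision_probability(2 ** 24, 48))    # ~0.39
# A 128-bit block cipher at the same data volume is nowhere close:
print(collision_probability(2 ** 24, 128))
```

A ciphertext-block collision in common modes (e.g., CBC) leaks the XOR of two plaintext blocks, which is why the key must be refreshed long before 2^24 blocks in practice.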

Second, the SIMON and SPECK ciphers were considered too young. The ciphers were first published on IACR ePrint in June 2013 (yes, technical reports on IACR ePrint are considered an acceptable form of publication according to ISO/IEC). When the NSA first proposed them to ISO/IEC for standardization at the Mexico meeting (Oct 2014), the two ciphers were only 1 year and 4 months old. Both SIMON and SPECK are built on ARX, which is a relatively new design technique. The public understanding of the security properties of ARX is limited, as acknowledged by many experts in the meeting.

Third, the public analysis of SIMON and SPECK was not considered sufficient. The supporting argument from the NSA in the Jaipur meeting (Oct 2015) was that the cryptanalysis results on SIMON and SPECK had “reached a plateau”. However, within the following 6 months there was further progress on the analysis, especially on SPECK (see the latest paper in 2016 by Song et al.). Hence, the argument of having reached a plateau was no longer valid. Instead, in the Tampa meeting, the NSA argued that the cryptanalysis results had become “more uniformly distributed”: now the safety margins for all SIMON and SPECK variants, as measured against the best known public analysis, are roughly centered around 30%, whereas 6 months earlier the safety margins for some SPECK variants were as high as 46%.

Most of the arguments in the meeting were technical; however, it was inevitable that the trustworthiness of the origin of SIMON and SPECK was called into question. There have been plenty of reports alleging NSA involvement in inserting backdoors into security products and compromising security standards. The infamous random number generator algorithm, Dual_EC_DRBG, was once added to ISO/IEC 18031:2005 as proposed by NSA delegates, but later had to be removed when news of the potential backdoor in Dual_EC_DRBG broke. In this meeting, NSA delegates repeatedly reminded experts in WG 2 that they must judge a proposal on its technical merits, not on where it came from. This was met with scepticism and distrust by some people.

Given the previous troubles with NSA proposals, some experts demanded that the designers of SIMON and SPECK show security proofs, in particular proofs that no backdoor exists. This request was rebutted by the NSA delegate as technically impossible. One can point out the existence of a backdoor (if any), but proving the absence of one in a block cipher design is impossible. No block cipher in the existing ISO/IEC standards has this kind of proof, the NSA delegate argued.

Once all parties had made their points and the arguments began to go in circles, it was clear that another straw poll was the only way out. This time, the straw poll was conducted among the experts in the room. On whether to support SIMON and SPECK proceeding to the next (CD) stage, the outcome was an overwhelming objection (8 Yes, 16 No, the rest abstained).

Near the end of the Tampa meeting, it also emerged that the straw poll in the previous Jaipur meeting on starting the 1st working draft for SIMON/SPECK was disputed as invalid. According to the ISO/IEC directives, the straw poll should have been conducted among the experts present in the meeting rather than among the national bodies. The difference is that in the latter there can be only one vote per country, while in the former there can be many votes per country. In the Tampa meeting, it was suggested to redo the Jaipur straw poll among experts, but the motion was rejected by the NSA on the grounds that there were not enough experts in the room. This matter is being escalated to the upper management of ISO/IEC SC 27 for a resolution. At the time of writing this blog, the dispute remains unresolved.

OK. That’s enough for the trip report. After a long and busy day of meetings, how to spend the rest of the day? A nice dinner with friends, and some beers, was well deserved.


Real-world Electronic Voting: Design, Analysis and Deployment

We are pleased to announce the completion of a new book, “Real-world Electronic Voting: Design, Analysis and Deployment”, which is due to be published by CRC Press. It’s still in press, but you can pre-order it from Amazon (the book will be made freely available in the open-access domain two years after its publication).

This book is co-edited by Peter Ryan and myself. It aims to capture all major developments in electronic voting since 2003 in a real-world setting. It covers three broad categories: e-voting protocols, attacks reported on e-voting, and new developments on the use of e-voting.

Table of contents [PDF]

Foreword (Josh Benaloh) [PDF]

Preface (Feng Hao and Peter Ryan) [PDF]

Part I: Setting the scene

  • Chapter 1: Software Independence Revisited (Ronald L. Rivest and Madars Virza)
  • Chapter 2: Guidelines for Trialling E-voting in National Elections (Ben Goldsmith)

Part II: Real-world e-voting in national elections

  • Chapter 3: Overview of Current State of E-voting World-wide (Carlos Vegas and Jordi Barrat)
  • Chapter 4: Electoral Systems Used around the World (Siamak F. Shahandashti)
  • Chapter 5: E-voting in Norway (Kristian Gjøsteen)
  • Chapter 6: E-voting in Estonia (Dylan Clarke and Tarvi Martens)
  • Chapter 7: Practical Attacks on Real-world E-voting (J. Alex Halderman)

Part III: E2E verifiable protocols and real-world applications

  • Chapter 8: An Overview of End-to-End Verifiable Voting Systems (Syed Taha Ali and Judy Murray)
  • Chapter 9: Theoretical Attacks on E2E Voting Systems (Peter Hyun-Jeen Lee and Siamak F. Shahandashti)
  • Chapter 10: The Scantegrity Voting System and its Use in the Takoma Park Elections (Richard T. Carback, David Chaum, Jeremy Clark, Aleksander Essex, Travis Mayberry, Stefan Popoveniuc, Ronald L. Rivest, Emily Shen, Alan T. Sherman, Poorvi L. Vora, John Wittrock, and Filip Zagórski)
  • Chapter 11: Internet voting with Helios (Olivier Pereira)
  • Chapter 12: Prêt à Voter – the Evolution of the Species (Peter Y A Ryan, Steve Schneider, and Vanessa Teague)
  • Chapter 13: DRE-i and Self-Enforcing E-Voting (Feng Hao)
  • Chapter 14: STAR-Vote: A Secure, Transparent, Auditable, and Reliable Voting System (Susan Bell, Josh Benaloh, Michael D. Byrne, Dana DeBeauvoir, Bryce Eakin, Gail Fisher, Philip Kortum, Neal McBurnett, Julian Montoya, Michelle Parker, Olivier Pereira, Philip B. Stark, Dan S. Wallach, and Michael Winn)

On the Trust of Trusted Computing in the Post-Snowden Age

At the 8th CSF Workshop on Analysis of Security APIs (13 July, 2015), I presented a talk entitled “On the Trust of Trusted Computing in the Post-Snowden Age”. The workshop produces no formal proceedings, but you can find the abstract here. My slides are available here.

In the field of Trusted Computing (TC),  people often take “trust” for granted. When secrets and software are encapsulated within a tamper resistant device, users are always educated to trust the device. The level of trust is sometimes boosted by the acquisition of a certificate (based on common criteria or FIPS).

However, such “trust” can be easily misused to break security. In the talk, I used TPM as an example. Suppose TPM is used to implement secure data encryption/decryption. A standard-compliant implementation will be trivially subject to the following attack, which I call the “trap-door attack”. The TPM first compresses the data before encryption, so that it can use the saved space to insert a trap-door block in the ciphertext. The trap-door block contains the decryption key wrapped by the attacker’s key. The existence of such a trap-door is totally undetectable so long as the encryption algorithms are semantically secure (and they should be).
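As a thought experiment, the length arithmetic of the trap-door attack can be sketched in a few lines of Python. Everything below (the function names, the toy SHA-256 keystream, the “last 16 bytes” convention for the trap-door block) is my own hypothetical illustration, not the behavior of any real TPM:

```python
import hashlib
import os
import zlib

def keystream(key, n):
    """Toy SHA-256 counter-mode keystream (for illustration only)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def honest_encrypt(data, user_key):
    """What the user believes the device does."""
    return xor(data, keystream(user_key, len(data)))

def trapdoor_encrypt(data, user_key, attacker_key):
    """Compress first, then hide the user's key (wrapped under the
    attacker's key) in the saved space.  The output has exactly the same
    length as an honest encryption, and every part looks random."""
    comp = zlib.compress(data)
    body = len(comp).to_bytes(4, "big") + comp
    slack = len(data) - len(body) - 16
    assert slack >= 0, "plaintext not compressible enough"
    wrapped = xor(user_key, keystream(attacker_key, 16))   # trap-door block
    return xor(body, keystream(user_key, len(body))) + os.urandom(slack) + wrapped

def attacker_decrypt(ct, attacker_key):
    """The attacker unwraps the user key from the last 16 bytes,
    then decrypts and decompresses the rest."""
    user_key = xor(ct[-16:], keystream(attacker_key, 16))
    body = xor(ct, keystream(user_key, len(ct)))
    n = int.from_bytes(body[:4], "big")
    return zlib.decompress(body[4:4 + n])

user_key, attacker_key = os.urandom(16), os.urandom(16)
data = b"attack at dawn " * 100                 # compressible plaintext
ct = trapdoor_encrypt(data, user_key, attacker_key)
assert len(ct) == len(honest_encrypt(data, user_key))   # same size outside
assert attacker_decrypt(ct, attacker_key) == data       # attacker reads all
```

The point is the arithmetic: compression frees exactly enough room to smuggle the wrapped key out inside a ciphertext of the expected size, so there is nothing for an outside observer to detect.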

To the best of my knowledge, no one has mentioned this kind of attack in the literature. But if I were the NSA, this is the sort of attack that I would consider first. It is much more cost-effective than investing in quantum computers or parallel search machines. With the backing of the state, the NSA could coerce (or bribe) the manufacturer to implant this in the TPM. No one would be able to find out, since the software is encapsulated within the hardware and protected by tamper resistance. In return, the NSA would have the exclusive power to read all encrypted data at a mass scale with trivial decryption effort.

Is this attack realistic? I would argue yes. In fact, according to Snowden revelations, NSA has already done a similar trick by implanting an insecure random number generator in the RSA products (NSA reportedly paid RSA US$10m). What I have described is a different trick, and there may well be many more similar ones.

This attack highlights the importance of taking into account the threat of a state-funded adversary in the design of a Trusted Computing system in the post-Snowden age. The essence of my presentation is a proposal to change the (universally held) assumption of “trust” in Trusted Computing to “trust-but-verify”. I gave a concrete solution in my talk to show that this proposal is not just a theoretical concept, but is practically feasible. As highlighted in my talk, my proposal constitutes only a small step in a long journey – but it is an important step in the right direction I believe.

Topics about NSA and mass surveillance are always heavy and depressing. So while in Verona (where the CSF workshop was held), I took the opportunity to tour around the city. It was a pleasant walk with views of well-preserved ancient buildings, the sunny weather (yes, a luxury for someone coming from the UK) and familiar sound of cicadas (which I used to hear every summer during my childhood in China).

The Verona Arena is the area that attracts most of the tourists. The conference organizers highly recommended that we attend one of the operas, but I was eventually deterred by the thought of having to sit for 4 hours listening to a language that I couldn’t understand. So I decided to wander freely. As I entered the second floor of a shop selling hand-made sewing items, my attention was suddenly drawn by someone who passionately shouted while pointing fingers out of the window, “Look, that’s the balcony where Juliet and Romeo used to date!” Wow, I was infected by the excitement and quickly took a photo. In the photo below, you can see the Juliet statue next to the balcony. (Of course a logical mind will question how this dating is possible given that the two are just fictional characters, but why should anyone care? It was in Verona, a city of legends.)


J-PAKE built into Google Nest thermostats

The J-PAKE key exchange protocol, designed by Prof Peter Ryan and myself in 2008, has been built into the Nest thermostat products (Nest was bought by Google in 2014 for US$3.2 billion). A technical white paper that describes the implementation has recently gone public (13 July, 2015).

Besides Google Nest, J-PAKE has also been used in other commercial products. Since 2010, J-PAKE has been used by Mozilla Firefox to implement secure sync, deployed to over 400 million internet users. Recently, Mozilla Firefox has started to deploy a different mechanism (less secure but more usable than J-PAKE) for sync. However, the Palemoon browser, a popular fork of Firefox, retains the original J-PAKE based mechanism to preserve full security in protecting sync data (which contains sensitive user passwords). In the ISO/IEC SC 27 meeting held in Mexico City in October 2014, national bodies in Working Group 2 unanimously supported the inclusion of J-PAKE in the ISO/IEC 11770-4 standard. The standardization of J-PAKE is currently in progress and is expected to finish within another two years.

The original J-PAKE paper was initially rejected by the major conferences in the field, as the protocol design was based on a new method and didn’t follow any of the established mainstream approaches at the time. The paper was eventually accepted and published at a small workshop (Security Protocols Workshop ’08) held locally in Cambridge, UK in 2008. After seven years of standing the test of time, it is pleasing to see that the J-PAKE technique and its basic design ideas are gradually being accepted by the academic community and industry.

Deleting secret data with public verifiability

In an upcoming paper (to be published in IEEE Transactions on Dependable and Secure Computing, 2015), we (with Dylan Clarke and Avelino Zorzo) investigate the secure data deletion problem. This problem concerns the guaranteed deletion of digital data using software means, and has been an active topic in recent years with quite a number of publications at major security conferences (Oakland, CCS, USENIX Security etc.).

We observe that in all published solutions, the underlying data deletion system is (implicitly) assumed to be a trusted black box. When the user instructs the system to delete data, she receives one bit in return: success or failure. The user has to trust the outcome, but cannot easily verify it. Demanding that the deletion program be open-source appears to be a solution, but it does not address the real problem, since the actual code used at run-time is opaque to the user. This is especially problematic when the software program is encapsulated within tamper-resistant hardware, making it impossible for users to access the internal code.

For those who work on electronic voting, the above problems should sound familiar. A black-box voting system works in exactly the same way. Take the standard Direct Recording Electronic (DRE) machine as an example. The DRE system records voters’ choices through a touch-screen interface. At the end of the election day, the system announces the final tally, which voters have to trust but cannot easily verify. The source code is usually not publicly available as it may contain IPR. Even if it were available, there is no guarantee that the software actually running in the DRE system is the same as the source code released in the public domain.

It’s exactly the above problems that prompted the thirty-year research effort on “verifiable e-voting”. Today, the importance of enabling public verifiability for an e-voting system is widely accepted. Established solutions generally involve using cryptography to allow a voter to “trust-but-verify” a voting system rather than “completely trust” it. The essence of those solutions is succinctly summarized by Ron Rivest as “software independence”: namely, one should be able to verify the integrity of software by verifying its output rather than its source code.

While the trust-but-verify principle has been universally adopted in the e-voting community, it has so far been almost entirely neglected in the field of secure data deletion. In this new paper, we initiate an investigation of applying the “trust-but-verify” principle to the secure data deletion problem, with a concrete solution called Secure Storage and Erasure (SSE). The SSE protocol allows public verifiability of two important operations, namely encryption and deletion. More technical descriptions of the solution can be found in the paper (available on IACR ePrint).

It’s worth mentioning that we have implemented a fully functional prototype of the SSE on a resource-constrained Java Card. The source code is publicly available here. This implementation proved to be a non-trivial challenge as there was no precedent to follow. We were severely constrained by the (very limited) set of APIs available on a Java Card, and sometimes had to implement primitive functions (such as modular multiplication) from scratch in pure software (without any support from the hardware co-processor). Kudos to Avelino Zorzo, who did the bulk of the implementation work during his sabbatical with us at the School of Computing Science in 2013, and also to Dylan Clarke, who continued and completed the development work after Avelino left.
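To give a flavour of what “from scratch in pure software” means, here is a hypothetical Python sketch of the kind of interleaved shift-and-add routine involved; the actual Java Card code (linked above) operates limb-by-limb on byte arrays and is of course different:

```python
def mulmod(a, b, m):
    """Interleaved shift-and-add modular multiplication.  The bits of b
    are consumed MSB first and the accumulator is reduced after every
    step, so no intermediate value ever exceeds 2*m.  This is the shape
    of routine one writes on a card with no crypto co-processor."""
    a %= m
    r = 0
    for bit in bin(b)[2:]:
        r <<= 1                 # r = 2*r
        if r >= m:
            r -= m              # single conditional subtraction
        if bit == "1":
            r += a
            if r >= m:
                r -= m
    return r

def powmod(a, e, m):
    """Square-and-multiply exponentiation built on mulmod."""
    r = 1 % m
    for bit in bin(e)[2:]:
        r = mulmod(r, r, m)
        if bit == "1":
            r = mulmod(r, a, m)
    return r

# Sanity check against Python's built-in big-integer arithmetic:
assert mulmod(123456789, 987654321, 10**9 + 7) == (123456789 * 987654321) % (10**9 + 7)
assert powmod(3, 100, 101) == pow(3, 100, 101)
```

Keeping intermediates below 2*m is the crucial trick on a card: it bounds the working buffer to one extra bit over the modulus, whereas a naive multiply-then-reduce would need a double-length product.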

Password Authenticated Key Exchange in a Group

Today, we release a new paper entitled “The Fairy-Ring Dance: Password Authenticated Key Exchange in a Group”. This is joint work with Xun Yi (RMIT University), Liqun Chen (HP Labs) and Siamak Shahandashti (Newcastle University). The initial work started when Xun visited us at the School of Computing Science, Newcastle University in February 2014. This is one of the collaborative research outcomes stemming from that visit.

The subject of two-party Password Authenticated Key Exchange (PAKE) has been well studied for nearly three decades; however, the topic of multi-party group PAKE (GPAKE) has so far received little attention. This is partly because a group PAKE is significantly more complex to design than a two-party PAKE, due to more interactions between participants, which expose more potential attack vectors for an adversary to exploit.

We decided to investigate this subject because we believed group PAKE protocols would become increasingly important over the next 10 years, especially in the era of the Internet of Things (IoT). A group PAKE protocol can help set up a group of out-of-the-box IoT devices that have no pre-installed secrets or certificates; one just needs to enter a common (low-entropy) passcode into each of the devices. The protocol can then take over to ensure secure group communication among these IoT devices, even though all data is transmitted over an insecure Internet.

One major technical challenge here is to make the protocol as round-efficient as possible. With Moore’s law, computational efficiency improves rapidly over time, but round efficiency stays more or less the same. Intuitively, when a group of entities engages in multiple rounds of interaction over a distributed network, the bottleneck in the overall latency will likely be determined by the slowest responder in each round. Hence, our strategy is to trade computation for optimal round efficiency, with the aim of minimizing the number of rounds.

The paper (a free copy is available on IACR ePrint) gives more technical details about how the above strategy is realized. I’ll present the paper at the ASIACCS Workshop on IoT Privacy, Trust, and Security (IoTPTS) in April 2015. It will be interesting to hear feedback from academic and industrial researchers working on IoT.

Before reading the paper, I would suggest that the reader first watch the following “Fairy Ring Dance” on YouTube, since the structural design of our solution shares some similarity with that dance.

Fairy Ring Dance (YouTube)


J-PAKE adopted by ISO/IEC standard

J-PAKE is a password-based authenticated key exchange protocol, developed by Peter Ryan and myself in 2008. Over the past six years, the protocol has withstood all sorts of attacks and has started to see some real-world use in providing end-to-end secure communication for Internet users. The full records of discussions on J-PAKE can be found on the previous Light Blue Touchpaper blog.

About six months ago, at the ISO/IEC SC 27 meeting held in Hong Kong in April 2014, I gave a presentation on the rationale for including J-PAKE in the ISO/IEC 11770-4 standard. The presentation slides are available here. An accompanying document was officially circulated among the national bodies under ISO before the meeting. It was agreed in that meeting to start a six-month study period on the revision of ISO/IEC 11770-4 and to invite all national bodies to comment on my proposal.

This week, at its meeting held in Mexico City, October 20-24, 2014, ISO/IEC SC 27 Working Group 2 considered the contributions received during the study period. After some discussion, SC 27/WG 2 unanimously agreed that this standard should be revised to include J-PAKE.

In the same meeting, two security weaknesses of the existing SPEKE protocol in ISO/IEC 11770-4 were discussed based on the findings reported in our SSR’14 paper. (A copy of the paper is publicly available on IACR ePrint, and the paper is discussed in a previous blog post.) After some discussion, it was agreed that the SPEKE specification in ISO/IEC 11770-4 should be revised to address the attacks reported in our SSR’14 paper. The revision work on ISO/IEC 11770-4 started immediately, with myself as one of the editors. We expect to provide the first working draft for comment by 15 Dec, 2014.

On a more lightweight subject, while in Mexico, I tried to do as the Mexicans do: i.e., drink a glass of cactus juice (mixed with celery, parsley, pineapple and orange) at breakfast. It was such a horrible taste that I was unable to finish it the first time. However, the more I tried it, the more I liked it. Now I can’t have breakfast without it. The way our body treats a new taste reminds me of the way our mind treats a new idea. A “new” idea usually has a bitter taste to it, as it challenges our mind to accept something different. The natural reaction is to reject it and remain satisfied with where we are and what we already know. However, to appreciate the “sweetness” behind the initial “bitterness” of any new idea takes time and patience, and in fact, lots of patience. When I return to the UK, I am sure this will be the drink I miss most from Mexico. So, cheers one more time before my flight home tomorrow!

Cactus drink