Aad raises a good question in his blog post: why are we using reliability/dependability tools and techniques, such as the Praxis methodology and Z modelling, to identify flaws in the existing EMV payment system? These tools were intended for building reliable systems, yet we are using them to analyse the security and reliability of an existing system post-implementation.
Our CCS 2014 paper “Harvesting High Value Foreign Currency Transactions from EMV Contactless Credit Cards without the PIN” outlines the methodology we have developed for analysing the EMV payments protocol. The methodology uses (i) abstract modelling to identify potential anomalies in the EMV payments protocol; (ii) software emulation to exhaustively test the real-world implications of each anomaly on credit/debit cards; and (iii) practical demonstrations of any flaws identified, allowing us to better communicate our findings to the general public and the banks.
In short, the methodology we have developed is the scientific contribution that underpins our work; without it, it could be argued that we were simply lucky hackers who had stumbled across some interesting bugs in the EMV implementation.
So far the methodology has identified the Foreign Currency flaw (CCS 2014) and the Contactless PIN Verify flaw (FC 2013), both of which stem from the specification NOT detailing what should happen under a given set of circumstances. This brings us back to Aad’s question, “Why are we using reliability tools for our analysis?” Leo Freitas’s abstract model is very good at identifying conditional cases that the specification does not satisfy: it highlights a missing case (one that is simply not handled) rather than an incorrect case or an exception case, something we humans are quite bad at spotting.
The system designers and coders are usually focused on making the system work and on dealing with the expected exception cases; this is perfectly valid, as it is the most efficient route to delivering a working system. However, it may lead to the assumption that a particular set of exception values will never occur together, and therefore that combined exception case is never handled. This unhandled case shows up as an anomaly in Leo’s abstract model => which leads to testing in the EMV emulator => which leads to a practical demonstration of the flaw => which leads to a paper at CCS => and eventually ends up on the BBC News.
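To make the idea of an unhandled combined case concrete, here is a deliberately simplified toy sketch in Python. It is NOT real EMV terminal logic; the limit, currency codes, and function name are all hypothetical, invented purely to show how handling two conditions separately can leave their combination unchecked.

```python
# Toy illustration only (hypothetical logic, not the EMV specification):
# a contactless PIN check that handles the domestic currency case but
# silently falls through when the currency is foreign.

CONTACTLESS_LIMIT_GBP = 20  # hypothetical domestic contactless limit

def pin_required(amount, currency):
    """Return True if this contactless transaction should demand a PIN."""
    if currency == "GBP":
        return amount > CONTACTLESS_LIMIT_GBP
    # Missing case: the designer assumed high-value foreign-currency
    # contactless transactions would not occur, so no limit is applied.
    return False

# Domestic transactions behave as intended...
print(pin_required(10, "GBP"))      # small domestic tap: no PIN
print(pin_required(30, "GBP"))      # over-limit domestic tap: PIN demanded
# ...but a high-value foreign-currency tap hits the unhandled combination.
print(pin_required(999999, "USD"))  # no PIN, despite the huge amount
```

An abstract model of this function would flag the foreign-currency branch as a conditional case the "specification" never constrains, which is exactly the kind of anomaly the modelling step is designed to surface.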
This year I have found it a little frustrating that the sensational “hack” papers grab the headlines and are the easiest to get published at a good conference (we also wrote a “solution” paper and a “methodology” paper this year, which generated zero interest and were not published). However, what we learned from the CCS paper reviews was that the best papers combine all three elements: good science in the “methodology”, good results in the “hack”, and a responsible set of possible “solutions” in the conclusion.
Hopefully my random ramblings have been of some help.