It is axiomatic that if criminals have a means of communicating which law enforcement agencies cannot understand, detection and investigation are seriously impeded. The main focus of the debate around this topic has been the use of encryption by criminals: encryption so powerful that it is impractical to decipher communications protected by it. It is understandable that governments are expressing concern about their ability both to protect people from criminal and extremist behaviour and to bring those responsible to justice.
The simplest approach to the situation is to make the use of encryption illegal for anything other than a specific set of electronic interactions. This rests on the argument that only criminals would wish to use encryption. The corollary is that law-abiding citizens have no need (or desire) to secure their communications, and that the desire for privacy is not an acceptable end in itself. However, most now appear to accept that this logic is flawed. Data collected by mass surveillance, if retained, may be misused at some future point: governments change, and the data may subsequently be used for purposes other than those that justified its initial collection, quite apart from any 'mission creep'.
Especially in the wake of the information leaked by Edward Snowden, and the associated allegations of mass surveillance, there is greater concern among the wider population about privacy from government, as well as perhaps from the private sector: this is illustrated by the Eurobarometer data. It is argued that privacy is a fundamental human right, as stated in Article 12 of the UN Universal Declaration of Human Rights:
No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.
Many governments agree, but point out that this is a right to be protected from “arbitrary” interference with a person’s privacy. Rather than a desire for arbitrary surveillance, some governments wish there to be a means by which only targeted criminals’ communications can be deciphered, under appropriate oversight such as the need for the issuing of specific warrants. This sentiment was perhaps best summed up by a statement made by the UK Prime Minister, David Cameron:
"Do we want to allow a means of communication between two people which even in extremis with a signed warrant from the home secretary personally that we cannot read?
"My answer to that question is no, we must not. The first duty of any government is to keep our country and our people safe."
Whilst this is a sentiment with which many, if not most, would agree, the problem is in the detail of how this is implemented. The possible means of achieving such a situation have been discussed many times and were well rehearsed in the late 1990s when several countries were attempting to deal with encryption whilst introducing legislation to cover investigatory powers. The reasons these mechanisms were rejected then remain valid today. In summary they are:
This is a technology that governments can no longer control. Unlike weapons of mass destruction, no large infrastructure is needed to produce and distribute encryption technology. The technology is already widely and freely available, and trying to bring it under control now would be impractical. In any event, even if legislation were passed in all EU Member States to outlaw encryption, and the wider population abided by it, it would not stop criminals using the technology. It would have the unfortunate effect of making those who abide by any such law more vulnerable to the very criminals whom it is designed to handicap. This is exacerbated by the fact that, were the EU Member States to pass such a law, there is no guarantee that other countries would do the same. As organised crime is often committed across borders, this would add another dimension in which to frustrate the detection and prosecution of criminals, not least via arguments about the propriety of mutual legal assistance.
Trying to restrict the distribution of encryption software is impractical. Even if one could prevent software leaving the country in which it was produced, there is nothing to stop the ideas travelling and being re-implemented elsewhere. We saw exactly this happen with PGP when the US government attempted to control its distribution outside the US: it was simply reincarnated as PGP International.
It may be possible to enforce a blanket ban on encryption: if all Internet service providers deployed technology to detect and block encrypted traffic, it would be impractical to use encrypted communications into, out of, or within a country. However, differentiating between legitimate use of encryption and use by criminals would be a non-trivial task, and likely to be flawed unless all providers did exactly the same. Such a ban would also be relatively easy to circumvent by using virtual private networks, Tor, or some similar mechanism.
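One practical difficulty deserves emphasis: at the network level, well-encrypted traffic is statistically close to random, but so is legitimately compressed traffic, so any filter based on the randomness of a payload will either over-block or under-block. The minimal Python sketch below (the `shannon_entropy` helper and the sample payloads are illustrative assumptions, not taken from any real filtering product) makes the point:

```python
import math
import os
import zlib

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte of a payload (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = {}
    for b in data:
        counts[b] = counts.get(b, 0) + 1
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

text = b"the quick brown fox jumps over the lazy dog " * 200
compressed = zlib.compress(text)           # legitimate compressed traffic
random_like = os.urandom(len(compressed))  # stand-in for ciphertext

print(round(shannon_entropy(text), 2))         # low: readable English
print(round(shannon_entropy(compressed), 2))   # high, despite no encryption
print(round(shannon_entropy(random_like), 2))  # high: looks much the same
```

A filter keyed on high entropy would flag the innocent compressed download along with the ciphertext, which is precisely why a reliable blanket ban is so hard to engineer.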
It also does not address the problem of steganography. Criminals can, again using widely and freely available technology, disguise the very existence of their communications, as well as encrypting them. Likewise, the use of 'dead letter box' style email accounts and similar covert means of communication would go undetected.
In the modern world we are increasingly dependent upon the Internet yet it was never designed to be a secure network. Layering encryption on top of the Internet is currently the only practical means of ensuring confidentiality, integrity and authenticity of our Internet based interactions.
It was mooted early on in the debate that anyone using encryption should be obliged to file a copy of their encryption key with either a government agency or possibly a trusted third party. If an authorised agency then needed to decrypt communications the key could be retrieved. There are several significant problems with this approach:
Many have suggested that only encryption which law enforcement agencies can ‘crack’ should be allowed. It has been suggested that this might be through the use of, for example, a weakened algorithm, restricted key lengths or the inclusion of a back door. All of these have the same issues: if you weaken encryption for your enemies, you do so for your friends. It no longer takes vast computing facilities to break weakened encryption, nor would it take a determined group of criminals long to find a back door.
The security community has been imploring users to ensure they use the latest encryption and to extend their key lengths, precisely because the arms race between encryption and the computing power available to break it is continuous. Organised criminals have access to significant finances and some of the best technologists in the world, so it would be naïve to assume that governments would enjoy any form of advantage in breaking deliberately weakened encryption.
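The fragility of restricted key lengths is easy to demonstrate. The sketch below uses a deliberately toy cipher (a hypothetical XOR-with-hashed-keystream construction, not any real or mandated algorithm) with a key space capped at 16 bits; an exhaustive search recovers the message almost instantly, and the same arithmetic scales to any artificially capped key length given proportionally more computing power:

```python
import hashlib

def toy_encrypt(key: bytes, message: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA-256-derived keystream.
    Purely illustrative -- not a real cipher."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(message):
        stream += hashlib.sha256(stream).digest()
    return bytes(m ^ s for m, s in zip(message, stream))

# Imagine a mandated 16-bit key: only 65,536 possibilities.
secret_key = (41594).to_bytes(2, "big")
ciphertext = toy_encrypt(secret_key, b"meet at the usual place")

# Exhaustive search using a known-plaintext guess for the opening words.
for candidate in range(2 ** 16):
    key = candidate.to_bytes(2, "big")
    if toy_encrypt(key, ciphertext)[:8] == b"meet at ":
        recovered = toy_encrypt(key, ciphertext)
        break

print(recovered)  # the full message falls out in well under a second
```

Any group with modest resources can run such a search; a well-funded criminal organisation could do so at far larger key sizes than this toy example.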
This approach was typified by the US government's attempt to introduce the Clipper chip, which incorporated a government-accessible backdoor in the form of key escrow. Announced in 1993, it was effectively defunct by 1996, having been rendered impractical for all of the reasons discussed above.
The use of weakened encryption has a long-term impact as well. A vulnerability was recently discovered in Transport Layer Security (TLS) whereby attackers were able, in some implementations, to force an encrypted link to fall back to an older 'export' grade of encryption that is breakable by modern computers. Once such weakened encryption enters the wider environment it must, for reasons of compatibility (especially backward compatibility), always remain possible to request it: there will always be someone still using it, and because these interactions are often established between parties who have never communicated before, the initial negotiation falls to the lowest common denominator. Whilst such flaws are closed off when disclosed, the applications that use these protocols are very complex, and it is almost inevitable that further flaws will emerge from legacy weakened encryption. Deliberately reintroducing newly weakened encryption would only compound the issue.
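This 'lowest common denominator' dynamic can be sketched in a few lines. The suite names and strength scores below are illustrative placeholders rather than real TLS identifiers; the point is that while an honest negotiation picks the strongest common option, an attacker who can tamper with the unauthenticated offer can strip it down to the legacy export-grade suite for as long as servers retain that suite for compatibility:

```python
# Cipher-suite strengths (illustrative names and values, not real TLS IDs).
STRENGTH = {"EXPORT-RSA-512": 0, "RSA-1024": 1, "MODERN-AEAD": 2}

def negotiate(client_offer, server_supported):
    """Pick the strongest suite both sides support, as an honest
    negotiation would."""
    common = [s for s in client_offer if s in server_supported]
    if not common:
        raise ValueError("no common cipher suite")
    return max(common, key=STRENGTH.__getitem__)

# The server keeps the export suite "for backward compatibility".
server = {"MODERN-AEAD", "RSA-1024", "EXPORT-RSA-512"}

honest_client = ["MODERN-AEAD", "RSA-1024", "EXPORT-RSA-512"]
print(negotiate(honest_client, server))   # MODERN-AEAD

# A man-in-the-middle who can tamper with the unauthenticated offer
# strips everything but the export suite; the downgrade succeeds
# because the server still accepts it.
tampered_offer = ["EXPORT-RSA-512"]
print(negotiate(tampered_offer, server))  # EXPORT-RSA-512
```

Removing the weak suite from the server closes the hole, but only once every legacy peer has been abandoned, which is exactly why weakened encryption lingers for decades.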
This appears to be the only practical method of handling encryption where the keys are held by individual users. Rather as with refusing to take a breath test to determine whether a driver is over the drink-driving limit, it is possible to make it an offence to refuse to disclose an encryption key that would allow law enforcement agencies to examine encrypted data. This has the advantage of enabling a criminal to be prosecuted whether he reveals his encrypted data or refuses to do so.
Internationally, some courts have been asked to consider whether such a requirement is tantamount to self-incrimination. On the whole, however, it has been seen by the courts as justified as part of criminal investigations.
Unfortunately, this tends to be effective only when data remains on the suspect's or criminal's computer. If the keys are transient, especially if they are system generated, it can be practically impossible to recover them. This is compounded by the fact that the communication itself may be transient and not recorded, i.e. even if the key could be recovered, there is nothing to decrypt unless the communication has been captured through surveillance and recorded by the law enforcement agencies.
As mentioned above, this situation is complicated by the re-architecting of communications services such as WhatsApp, iMessage, Facebook and FaceTime, and the email services provided by Google and Yahoo, to enable end-to-end encryption.
If there were a practical place where encryption could be tackled, it would be through achieving agreement with these service providers to implement security architectures that did not enable end-to-end encryption; if the communications were encrypted from each participant to the service provider but potentially readable on the service providers’ systems, it would be possible for law enforcement agencies to present a suitable warrant to read the communications.
The issue that service providers have expressed is that their users are internationally based, and they would find it difficult to know which law enforcement agencies they should cooperate with. The companies providing these services are predominantly US-based, and their users have expressed concern that the US and its allies would be able to use such an architecture to conduct mass surveillance. Similarly, a US-based company might be placed in an invidious position if law enforcement agencies from unfriendly countries made such requests, perhaps for politically motivated surveillance.
It was this dilemma that resulted in the introduction of end-to-end encryption in the first place.
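The two architectures under discussion can be contrasted with a toy model. The cipher below is purely illustrative (XOR with a hash-derived keystream, with invented key names), but it captures the structural difference: hop-by-hop encryption leaves plaintext visible at the provider, where a warrant could be served, whereas end-to-end encryption leaves the provider relaying bytes it cannot read:

```python
import hashlib

def toy_encrypt(key: bytes, message: bytes) -> bytes:
    """Toy symmetric cipher (XOR with a hash-derived keystream);
    purely illustrative -- not a real cipher."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(message):
        stream += hashlib.sha256(stream).digest()
    return bytes(m ^ s for m, s in zip(message, stream))

toy_decrypt = toy_encrypt  # XOR cipher: the same operation both ways

msg = b"hello bob"

# Provider-readable architecture: each hop is encrypted, but the
# provider holds the hop keys and sees plaintext in the middle.
alice_provider_key = b"alice-link"   # invented key names
bob_provider_key = b"bob-link"
in_transit = toy_encrypt(alice_provider_key, msg)
at_provider = toy_decrypt(alice_provider_key, in_transit)  # readable here
delivered = toy_decrypt(bob_provider_key,
                        toy_encrypt(bob_provider_key, at_provider))

# End-to-end architecture: only Alice and Bob hold the key; the
# provider relays ciphertext it cannot decrypt.
e2e_key = b"alice-and-bob-only"
relayed = toy_encrypt(e2e_key, msg)   # opaque to the provider
received = toy_decrypt(e2e_key, relayed)

print(at_provider)  # plaintext at the provider: warrant-compatible
print(received)     # delivered intact, but opaque in transit
```

In the first model a warrant served on the provider yields `at_provider`; in the second there is nothing intelligible for the provider to hand over, which is the crux of the current debate.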
The debate currently underway is one that quite rightly is being held in public. Whilst most would agree with the sentiments of wanting their law enforcers to have access to criminals’ communications, the dilemma is the negative impact of the ways in which this would be achieved. However, one significant piece of data is missing from the debate: the scale of the problem. What is currently not in the public domain is the degree to which criminal detection and investigation is being hampered by the use of encryption by criminals.
It would seem that if a proper public debate is to be forthcoming, if legislators are to be trusted in what they wish to place into law, and if decisions on what inevitably will be compromises in security and privacy are to be evidence based, it is important that the problem is quantified in a way that earns the trust of most if not all members of the public. EC3 will be asking Member States if they will cooperate in providing the data to enable the nature of the problem (current and future potential) to be established.