Invading a user’s privacy was a bad idea when it was proposed in the 1990s via the ill-conceived Clipper chip, and it’s a bad idea now, no matter what name it’s given this time.
Back in the days of the Clinton administration, the government wanted to force electronics manufacturers to install a chip that would allow the government (with a court order, supposedly) to have unfettered access to any and all encrypted data on the device. It proposed to do this via a scheme called “key escrow,” wherein the cryptographic keys to the kingdom were stored by a presumably trusted escrow service and only turned over to the government upon lawful request in the form of a subpoena. After a long battle, the Clipper chip program was shot down, largely because the public saw it as too intrusive.
But since then, law enforcement officials, including the top ranks of the FBI, have never stopped moaning about the strength of modern cryptography. On current iOS hardware, for example, information is encrypted using the Advanced Encryption Standard (AES) with 256-bit keys. The 256-bit keys used to encrypt the file system and files are derived from, among other things, the user’s passcode, which is protected by the device’s secure enclave hardware and can be unlocked with the user’s fingerprint. In short, on a modern iOS device that is locked with a strong passcode, there is no back door other than access to the keys themselves.
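To make the key-derivation idea concrete, here is a hypothetical sketch using PBKDF2-HMAC-SHA256 to stretch a passcode into a 256-bit AES key. This is an illustration of the general technique, not Apple’s actual algorithm: on real iOS hardware the derivation also entangles a per-device UID held inside the secure enclave, which is why the keys cannot be reconstructed off-device. The passcode, salt, and iteration count below are invented for the example.

```python
import hashlib

def derive_key(passcode: str, salt: bytes, iterations: int = 100_000) -> bytes:
    """Stretch a passcode into a 256-bit (32-byte) key via PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt,
                               iterations, dklen=32)

key = derive_key("correct horse battery staple", salt=b"per-device-salt")
print(len(key) * 8)  # 256
```

The high iteration count is the point: it makes each passcode guess expensive, so brute-forcing a strong passcode becomes impractical even with the encrypted data in hand.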
Today, as the Justice Department and the FBI seek Apple’s cooperation in unlocking an iPhone used by one of the San Bernardino terrorists, the specter of the Clipper chip seems to be haunting us. (Note: In the San Bernardino case, the terrorist used an older iPhone and, as I understand it, a four-digit PIN, so that situation differs from the more modern, general case I describe above.) Meanwhile, the past two decades have seen immense changes in the tech landscape.
Modern end-user devices go well beyond desktops and laptops to include smartphones and tablets. All of these devices typically can be remotely managed. For example, in Apple’s iOS world, a mobile device management (MDM) system can oversee a fleet of iPhones and iPads.
MDMs do this by installing a security profile onto each device they manage. Think of a security profile as a list of security policies and settings (for example, the user must use a strong passcode that is at least N characters long). What makes this all work is a hierarchical chain of digital signatures. In essence, the policies are stored in a tamper-evident container, and the iOS device will not deviate from them without the consent of the device owner or, where that administrative privilege has been delegated, the MDM owner.
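The tamper-evidence property can be sketched in a few lines. Real MDM profiles are signed with X.509 certificate chains (CMS/PKCS#7); the sketch below stands in a simple HMAC for the signature, purely to show that any change to the sealed policies breaks verification. The signing key and policy names are invented for the example.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"mdm-signing-key-stand-in"  # hypothetical; real MDMs use certificates

def seal(policies: dict) -> dict:
    """Wrap policies in a tamper-evident container: payload plus authentication tag."""
    payload = json.dumps(policies, sort_keys=True)
    tag = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": tag}

def verify(profile: dict) -> bool:
    """Recompute the tag over the payload; any edit to the policies changes it."""
    expected = hmac.new(SIGNING_KEY, profile["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, profile["signature"])

profile = seal({"minPasscodeLength": 8, "requireAlphanumeric": True})
print(verify(profile))  # True: untampered profile verifies
profile["payload"] = profile["payload"].replace("8", "4")
print(verify(profile))  # False: weakening a policy invalidates the signature
```

The device-side check is the enforcement point: a profile whose signature does not verify is simply rejected, so neither the user nor an attacker can silently loosen the policies.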