All posts by Rufo Guerreschi

Why we won’t have ultra-private IoT without ultra-private ICT

(Originally published for Meet-IoT 2015)

A large segment of the booming Internet-of-Things market consists of solutions built around devices whose external sensors are within sensing reach of their users and/or of passersby. These include wearables, home automation solutions, smart city solutions, airborne connected objects, etc.
Such IoT devices are in almost all cases currently designed, fabricated and assembled according to socio-technical standards very similar to those of other end-user computing devices like phones and PCs, which place performance, features and cost considerations far ahead of security, privacy or resiliency.
In almost all of these use-case scenarios, a malfunction or breakdown will cause no or insignificant physical or economic harm to users or passersby. Safety and resiliency can therefore be discounted as minor requirements. Privacy breach, on the other hand, appears at first to be a strong concern for users.
Since Snowden, a deluge of revelations, attacks and discovered vulnerabilities has made it clear that businesses and citizens are hugely exposed to both massive and targeted (yet highly-scalable) remote attacks beyond the point of encryption, by criminal actors and state security agencies seeking access to industrial secrets and personal data.
In the case of smartphones and PCs, it can be expected that scalable targeted access is mostly available to high-level attackers and entities close to them. IoT solutions, in contrast, currently face fewer regulatory requirements, liabilities and secure technology standards, and are often offered by smaller, newer companies that have less to lose, overall, from the public discovery of critical security flaws in their products. It follows that IoT presents substantial additional assurance problems, which make it substantially more likely that such access is available even to mid- and low-level attackers.
However, any privacy concerns will soon have to face the fact that IoT users are surrounded at any given time by a smartphone, PC or connected TV which can very easily be listening to and sensing everything. Privacy is already so compromised that users don’t, won’t and probably shouldn’t care if one additional device listens in.
From these considerations, we can attempt a prediction for this IoT sub-market. In the near and mid term it may be characterised by three kinds of solutions: (1) a “no privacy” kind, which will completely ignore or merely pay “lip service” to privacy, remaining vulnerable even to scalable low- and mid-level attacks; (2) a smaller “privacy, but not from government” kind – similar to the approach of Blackphone in the smartphone market – offering reasonable expectations of privacy from everyone except highly-scalable massive or targeted high-level threats; (3) an even smaller “meaningful privacy” kind, for very privacy-sensitive use cases or individuals, where assurance can reasonably be expected against such highly-scalable high-level threats, but not against non-scalable, proximity-based surveillance techniques.
The creation of this last “meaningful privacy” kind of IoT solution will require radical changes to the socio-technical paradigms for the design, fabrication, assembly and provisioning of all the software, hardware and processes critically involved in their life-cycle. Such changes will need to be adopted by a critical mass of actors, which may initially be small but must span the entire computing life-cycle.
But such solutions may never provide meaningful utility to a user if, as we said, nearby ICT devices such as a smartphone, PC or connected TV can at any given time easily be listening to and sensing everything the user does. Almost all IoT solutions interface – for operation, configuration or update – with ICT components that, if they do not also provide “meaningful privacy”, can be turned into a critical point of failure of the IoT solution. This dependency also works the other way around: the market for “meaningful privacy” ICT devices may well depend on the availability of “meaningful privacy” IoT devices, or at the very least of IoT devices that can reliably be turned off by the user. In fact, it would be inconvenient enough to have to place your ordinary phone in a purse, or under a thick pillow, before making a call with your (ultra-)private device, but it would be unbearable to most to have to go out into the garden because their TV or fridge may be listening.
For “meaningful privacy” ICT devices to gain any wide consumer adoption, it is crucial, therefore, to press for national laws mandating the wide-market availability of Internet-connectible home and office devices with a certified physical switch-off for microphone, camera and power.
Given these interdependencies, and the huge costs of creating and sustaining a “meaningful privacy” computing platform supply-chain and ecosystem, it is worth considering whether the socio-technical standards and technology platforms for “meaningful privacy” IoT and ICT may be shared to a large extent. This may be possible if such an initial shared platform defines a relatively small form factor, low energy consumption and, most of all, a low cost of production at scale.

Cyber-libertarianism vs. Rousseau’s Social Contract in cyberspace

In this post, I argue that the cyber-libertarian belief that we can individually protect our rights in cyberspace is incorrect, as it is impossible for individuals to provide themselves meaningful assurance against undetectable backdooring during hardware fabrication and assembly – even if supported by informal digital communities of trust. As with offline freedom, world citizens need to build international social contracts for cyberspace by deliberately and carefully building organizations to which they will delegate part of their freedoms, in order to receive in return protection of both their online civil liberties and their physical safety.

In his 1762 “Social Contract” (pdf), Rousseau wrote:

“Find a form of association that will bring the whole common force to bear on defending and protecting each associate’s person and goods, doing this in such a way that each of them, while uniting himself with all, still obeys only himself and remains as free as before. There’s the basic problem that is solved by the social contract.”

Flash forward 250 years, and half our time is spent in cyberspace, where virtually all citizens have NO access to end-user devices or cloud services with meaningful assurance that their privacy, identity and security are not completely and continuously compromised at extremely low marginal cost.

In fact, adequate protection is not required by the state – as it is for nuclear weapons, airplanes or housing standards – nor is it offered by companies or traditional social organizations. Citizens are left alone to protect themselves.

In cyberspace, would citizens be better able to protect themselves alone or through adequate joint associations? Should we leave users alone to protect themselves, or is there a need for some form of cyberspace social contract? Would delegating part of one’s control over one’s computing to jointly-managed organizations produce more or less freedom overall?

Rousseau went on to say: “Each man in giving himself to everyone gives himself to no-one; and the right over himself that the others get is matched by the right that he gets over each of them. So he gains as much as he loses, and also gains extra force for the preservation of what he has.”

The current mainstream answer is that we can and should do it alone. Cyber-libertarianism has completely prevailed globally among activists and IT experts dedicated to freedom, democracy and human rights in and through cyberspace (arguably because of the anarcho-libertarian geek political culture of the US west coast, especially the north-west).

But meaningful protection is completely impossible for an individual to achieve – even one supported by informal digital communities of trust, shared values and joint action, more or less hidden in cyberspace.

In fact, achieving meaningful assurance of one’s computing device requires meaningful trustworthiness of the oversight processes for the fabrication and assembly of the critical hardware components that can completely compromise such devices. Therefore, even a pure P2P solution still needs those in-person fabrication and assembly processes. Until users can 3D-print a device in their basement, there will be a need for such geolocated, complex organizational processes, which the NSA and others can surreptitiously and completely compromise, or outlaw.

The necessity of such organizational oversight processes can be inferred from this 3-minute video excerpt, in which Bruce Schneier clearly explains why we must assume CPUs are untrustworthy, and why we may therefore need to develop intrinsically trustworthy organizational processes, similar to those that guarantee the integrity of election ballot boxes (as we at UVST apply to the CivicRoom and the CivicFab).

In fact, after the popularisation of high-grade software encryption in the 90s and the failure of the Clipper Chip, the NSA and similar agencies were at risk of losing their (legally sanctioned and constitutional) authority to intercept, search and seize. They therefore had the excuse (and reason) to break all endpoints at birth, surreptitiously or not, all the way down to assembly or to the CPU or SoC foundry. They succeeded wildly and, even more importantly, succeeded in letting most criminals and dissenters believe that some crypto software or devices were safe enough for sharing critical information. Recent tight-lipped Snowden revelations about NSA intrusion into Korean and Chinese IT companies show that things have not changed since 1969, when most governments of the world were using Swiss Crypto AG equipment believing it secure, while the NSA was undetectably spying on them.

Therefore, we must have some form of social contract to have any chance of gaining and retaining any freedom in cyberspace.

The great news is that those social contracts – and the related socio-technical systems – can be enacted among a relatively small number of individuals who share values, aims and trust, rather than a territory, and can be changed at will by the user, enabling much more decentralized and resilient forms of democratic association.

Since we must have such social contracts in place for each such community of trust to handle the oversight processes, we may as well extend them, in (redundant pairs of) democratic countries, to provide in-person procedures, controlled by effective citizen-jury-based processes, that allow constitutional – no more, no less – access for intercept, search and seizure, so as to discourage use by criminals and avoid giving the state a reason and excuse to outlaw the system or surreptitiously break it.
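To make this concrete, here is a minimal sketch, in Python, of how such citizen-jury-based access could work cryptographically. It is purely illustrative – the juror count, the quorum size and the idea of a single “access key” are my assumptions, not an actual UVST design – and uses standard Shamir secret sharing, so that no individual juror, and no colluding group below a constitutional quorum, can reconstruct the key needed for a court-mandated intercept.

    # Hypothetical sketch: a device "access key" split among 7 citizen jurors
    # via Shamir secret sharing. Any 5 jurors can reconstruct it; fewer learn
    # nothing. All parameters and names are illustrative assumptions.
    import random

    PRIME = 2**127 - 1  # prime field, large enough for a 16-byte key

    def eval_poly(coeffs, x):
        # Evaluate a polynomial (coefficients lowest-degree first) at x, mod PRIME.
        result = 0
        for c in reversed(coeffs):
            result = (result * x + c) % PRIME
        return result

    def split_secret(secret, n_shares, threshold):
        # Random polynomial of degree (threshold - 1) with the secret as constant term.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        return [(x, eval_poly(coeffs, x)) for x in range(1, n_shares + 1)]

    def recover_secret(points):
        # Lagrange interpolation at x = 0 recovers the constant term (the secret).
        secret = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    # A jury of 7; any constitutional quorum of 5 can authorize access.
    access_key = random.randrange(PRIME)
    juror_shares = split_secret(access_key, n_shares=7, threshold=5)
    assert recover_secret(random.sample(juror_shares, 5)) == access_key

The design choice this illustrates: no single juror, official or facility holds the key, so abuse requires colluding with a quorum of citizens – mirroring the ballot-box-style organizational safeguards discussed above.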

A sort of social contract for cyberspace was enacted in 1997 by the founders of the Debian GNU/Linux operating system, through the Debian Social Contract. It eventually became a huge adoption success, producing the world’s leading free software OS and originating many of the technical leaders behind the leading free software privacy tools. But ultimately it did not deliver trustworthy computing, even to its own developers, no matter how much convenience and user-friendliness was sacrificed.

In addition to poor technical architecture choices – such as the belief in its ability to make huge software stacks adequately secure with limited resources – what ultimately caused its failure was the fact that the contract was for the users but not by the users, i.e. the users were not substantially involved in its governance. For this reason, its priorities were those of geek developers, i.e. the freedom to hack around and share, through barely functioning code, as opposed to freedom from abuse of core civil rights – pursued through extreme engineering and auditing intensity relative to resources, extreme minimization, a trustworthy critical hardware life-cycle, and compartmentation, in order to deliver minimal functionality but meaningful assurance that your device is your instrument and not someone else’s.

The User Verified Social Telematics project proposes to extend both the organizational and the technical state of the art, to enable effective, autonomous cyberspace social contracts of the users, by the users and for the users.

UPDATED Nov 24th: Added an abstract.

We should consider if almost all free software ethical hackers, and their fan journos, over the last two decades have been very “useful idiots” for the NSA

We should consider if almost all free software ethical hackers, and their fan journos, over the last two decades have been very “useful idiots” for the NSA and similar agencies, by unwittingly conveying a hugely false sense of security about the technologies they have been providing.

That has had catastrophic consequences, allowing the NSA and similar agencies: (1) to spy on a ton of people sharing very valuable critical data via the Net, which they wouldn’t have shared if they knew better; (2) to cry about “going dark”; and (3) to push for laws to outlaw meaningful privacy.

Nov 27th 2014 UPDATE: I regret the choice of the term “useful idiots” which may be regarded as offensive, even though that is not its original meaning.

“Officials have expressed alarm for several years about the expansion of online communication services that — unlike traditional and cellular telephone communications — lack intercept capabilities because they are not required by law to build them in.”

says a US official in this Washington Post article.

“I do think that more and more they’ll see less and less,” said Albert Gidari Jr., a partner at the law firm Perkins Coie who represents tech firms, referring to the government’s quandary. “But it’s their own fault,” he added. “No one now believes they were ever going dark. It’s just that they had the lights off so you couldn’t see what they were collecting.”

The “anti-theft kill-switch” backdoor mandated by a new California law is coming nation-wide.

The nation-wide extension of such California and Minnesota laws fits well with the recurring proposals to give the FBI the ability to implant malware, when court-mandated, for lawful intercept or search & seizure.

The two laws attempt, ineffectively, to tackle the genuinely important problem of “going dark” while, of course, creating huge potential (certainty?!) for privacy abuse.
In fact, in order to stop criminals, the FBI would also have to be able to prevent non-compliant devices from being used on US soil or connecting in any way to the US.

Is there a way to prevent its abuse through state-regulated and/or citizen-controlled safeguards?

Tor exec dir: “I worry that by turning encryption into a panacea, law enforcement and intelligence agencies will just lobby for weak encryption, backdoor access, or flat out make it illegal.”

http://blog.lewman.is/personal-thoughts-on-being-targeted-by-the-nsa

It sounds as though the only solution may be to devise technologies and services that reconcile the ability to perform court-mandated intercept (search and seizure) with the provision of meaningful privacy, so that they would not be made illegal.
Perhaps as in the User Verified Social Telematics project?

The Internet should be regulated as a utility, like water and electricity

Go ahead, say it out loud. The internet is a utility.

There, you’ve just skipped past a quarter century of regulatory corruption and lawsuits that still rage to this day and arrived directly at the obvious conclusion. Internet access isn’t a luxury or a choice if you live and participate in the modern economy, it’s a requirement. Have you ever been in an office when the internet goes down? It’s like recess. My friend Paul Miller lived without the internet for a year and I’m still not entirely sure he’s recovered from the experience. The internet isn’t an adjunct to real life; it’s not another place. You don’t do things “on the internet,” you just do things. The network is interwoven into every moment of our lives, and we should treat it that way.

http://www.theverge.com/2014/2/25/5431382/the-internet-is-fucked

Court-mandated malware installation for “search and seizure” is coming in the US. Are safeguards possible?

From Wired:

It’s clear that the Justice Department wants to scale up its use of the drive-by download. It’s now asking the Judicial Conference of the United States to tweak the rules governing when and how federal judges issue search warrants. The revision would explicitly allow for warrants to “use remote access to search electronic storage media and to seize or copy electronically stored information” regardless of jurisdiction.

The revision, a conference committee concluded last May (.pdf), is the only way to confront the use of anonymization software like Tor, “because the target of the search has deliberately disguised the location of the media or information to be searched.”

Is it possible to prevent abuse by state and other actors? What safeguards should be requested?!

Can citizens self-provide proper safeguards without obstructing crime prosecution and prevention?!

We believe so at the User Verified Social Telematics project…

If US and EU intelligence parliamentarians are ALL spied upon, how can we trust the best civilian privacy solutions out there?!

If the computing devices of the US Senate Intelligence Committee can be undetectably spied upon for unknown amounts of time, and Snowden – as he swore – could read the emails of any member of the EU parliamentary committee on espionage, then is anyone safe in the civilian world?

How many people and actors can gain such access illegally?

How likely is it for abuse to be discovered by external reviews?

How can we ever estimate that? Can such actors also undetectably tamper with devices, planting false evidence?