
“Islamic” terrorism and western state-terrorism can only be reduced together

Today, there was an Islamic terrorist massacre in Paris.

Aside from madness, what could the Paris massacre terrorists, and those who support or strategize behind them, possibly have aimed to achieve?!

It can only be an increase of fear and hate among innocent civilians of two different religious faiths and cultures, which would lead to more war in Islamic states, and then to the coming to power of more fanatical, irrational regimes that claim to represent the true Islamic faith.

But more war in Islamic states, with “collateral” massacres and injustices towards millions of Muslim civilians, is unfortunately a goal that – out of Christian religious fanaticism and hate, political misjudgment or huge economic interests – has also been very actively promoted by some Western private actors (oil and defense contractors) and governmental actors.

We can’t fight one without the other.

Welcome to Linear City 2.0, a social and human urban redevelopment concept

For my master’s thesis in Public Policy and Regional Planning at Rutgers University in 2000, I defined in fine detail an ethical vision I had had in 1998 – the one that convinced me to pursue that master’s at that school: the technical, political and conceptual business plan for a LINEAR CITY (1.0), i.e. a large-scale intermodal urban corridor RE-development, heavily centered on public transport and light electric vehicles, to make cities social, human and ecologically sound. I even produced full 3D animations myself, with amazing detail:
www.linearcity.org

WELCOME TO LINEAR CITY 2.0

Fifteen years later – given all the advances in self-driving vehicles, and the fact that it will still take many years before they are authorized on the streets, and decades before they become the majority of cars – my Linear City concept could be amended by substituting all feeder systems to the main subway/train – which in version 1.0 are a mix of mixed-grade buses and automated guided buses (i.e. with a driver!) – with purely self-driving small buses, running on a mix of separate-grade and mixed-grade rights-of-way. In some cases, separate-grade may just be a preferential lane clearly marked on the asphalt, with sidewalk pedestrian warnings, but without physical separation.

Some comments on the Preamble of the Italian Internet “Bill of Rights”

In July 2015, the Italian Parliament approved, through a motion, an Italian Internet “Bill of Rights”. We greatly admire and support the motives of the drafters, many of whom are friends, but we believe it necessary to highlight some serious shortcomings in its approach, starting with its Preamble.

PREAMBLE

It has fostered the development of a more open and free society.

This is very arguable. A large majority of digital rights activists and IT security and privacy experts would disagree that, overall, it has.

The European Union is currently the world region with the greatest constitutional protection of personal data, which is explicitly enshrined in Article 8 of the EU Charter of Fundamental Rights.

This is correct, although Switzerland may be better in some regards. Nevertheless, even such standards have to date not been able to stop widespread illegal and/or unconstitutional bulk surveillance by EU states, at least until Snowden and Max Schrems came along. Furthermore, even if the US and EU states fully adhered to EU standards, this would significantly improve assurance against passive bulk surveillance, but it would do almost nothing against highly-scalable targeted endpoint surveillance (NSA FoxAcid, Turbine, Hacking Team, etc.) directed at tens or hundreds of thousands of high-value targets, such as activists, parliamentarians, reporters, etc.

Preserving these rights is crucial to ensuring the democratic functioning of institutions and avoiding the predominance of public and private powers that may lead to a society of surveillance, control and social selection.

“May” lead?! There is a ton of evidence from the last two years that, to a large extent, we have been living for many years in a “society of surveillance, control and social selection.”

Internet … it is a vital tool for promoting individual and collective participation in democratic processes as well as substantive equality

Since the Internet has turned out to be overwhelmingly a tool of undemocratic social control, it would be more correct to refer to its potential for “promoting individual and collective participation in democratic processes”, rather than to a current fact.

The principles underpinning this Declaration also take account of the function of the Internet as an economic space that enables innovation, fair competition and growth in a democratic context.

Framing this at the end of the preamble makes it appear that privacy and civil rights are obstacles to innovation, fair competition and growth, which is not the case, as the Global Privacy as Innovation Network has been clearly arguing for over two years.

A Declaration of Internet Rights is crucial to laying the constitutional foundation for supranational principles and rights.

First, some 80 Internet Bills of Rights have already been approved by various stakeholders, including national legislative bodies. Second, a “declaration of rights” can very well be mere window dressing if those rights are not defined clearly enough and meaningful democratic enforcement is not also enacted. There are really no steps towards proper “supranational principles and rights”, and related enforcement mechanisms, other than a number of nations bindingly agreeing to them, similarly to the process that led to the creation of the International Criminal Court.

Stephen Hawking on the great risks of the “default” scenarios for the future of AI

Stephen Hawking, the great physicist, sees into the future of humanity like no one else. He sees our greatest risks as related to the future of self-improving AI machines:

(1) Human extinction, if AI machines cannot be controlled at all. He said: “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all”.

(2) Huge wealth [and power] gaps, unless AI machine owners allow a fair distribution once these machines take on all human labor. He said: “If machines produce everything we need, the outcome will depend on how things are distributed.” Hawking continued: “Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.”

Is meaningful trustworthiness a requirement of Free Software “computing freedom”?

In this YouTube video excerpt (minutes 8:33-15:55) from Panel 2 of the Free and Safe in Cyberspace conference, which I organized two weeks ago, Richard Stallman and I debate IT trustworthiness and free software. The entire panel video is also available in WebM format here.

In that excerpt, Richard Stallman says that computing trustworthiness is a “practical advantage or convenience” and not a requirement for computing freedom. I opposed to that the view that a lack of meaningful trustworthiness inevitably turns the other four software freedoms into a disutility for their users, and for the people with whom they share code. I suggest that this realization should somehow be “codified” as a fifth freedom, or at least be very widely acknowledged within the free software movement.

How could the US government incentivize IT service providers to voluntarily and adequately comply with lawful access?!

More news on Obama’s search for a legislative or regulatory solution to lawful access to digital systems.

For some time now, the US government has increasingly often stated that there will not be mandatory technical requirements to enable remote state lawful access, but that it expects providers to somehow come up autonomously with solutions that would allow for lawful access when needed by investigating agencies.

But any company that decided to come up with technical and organizational processes to do so, even with extremely effective safeguards for both the citizen and the investigating agency, would appear to be, and possibly actually be, less secure than competing services or devices that do not provide such access.

This problem could be solved if the US government provided very solid and reliable incentives to those that do provide such access, and do it properly, i.e. those that comply with a minimum of citizen-accountable extreme safeguards protecting both the user and the agency. The US government could approve solidly enforceable policies prescribing much higher personal economic and penal consequences for officials of state agencies who are found searching for or implanting vulnerabilities – but ONLY in favor of high-assurance IT service providers that offer socio-technical systems to comply with government requests, as certified by an independent, international, technically-proficient and accountable certification body. IT service or device providers that do not would be excluded from such protections.

To kill two birds with one stone, such an international body could also certify IT services and devices that offer meaningfully high levels of trustworthiness, something that is direly missing today. One such certification body is being promoted by the Open Media Cluster (which I lead), under the name Trustless Computing Certification Initiative.

A Proposed Solution to Wikimedia’s funding problem …

… without introducing any undemocratic bias:

Introduce contextual ads made exclusively of product/service comparisons produced by democratically-controlled consumer organizations. In Italy, for example, there is the Altroconsumo organization, with hundreds of thousands of members, which regularly produces extensive comparative reports.

In practice: for each new report that comes out, the companies producing the products/services ranked in the top 30% are asked to sponsor its publication inside Wikimedia portals. Such a formula could be extended to Wikimedia video, generating huge funds, arguably without introducing any undemocratic bias. Proceeds are shared between Wikimedia and the consumer organization.
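As a purely illustrative sketch of this selection logic (the report data, the 50/50 split and the function names below are hypothetical, not part of any actual Wikimedia or Altroconsumo system), the mechanism could look roughly like this:

```python
# Hypothetical sketch: pick the top 30% of products in a consumer-org
# comparative report and generate sponsorship candidates from their makers.

def sponsorship_candidates(report, top_fraction=0.30):
    """Return (product, maker) pairs ranked in the top `top_fraction` of a report.

    `report` is a list of (product_name, maker, score) tuples from a
    democratically-controlled consumer organization.
    """
    ranked = sorted(report, key=lambda item: item[2], reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    return [(name, maker) for name, maker, _ in ranked[:cutoff]]


def split_proceeds(total, wikimedia_share=0.5):
    """Split sponsorship proceeds between Wikimedia and the consumer org
    (the 50/50 split is an assumption, not part of the original proposal)."""
    wikimedia = total * wikimedia_share
    return wikimedia, total - wikimedia


if __name__ == "__main__":
    # Hypothetical report data, for illustration only.
    report = [
        ("Washer A", "ACME", 92), ("Washer B", "Foo Co", 88),
        ("Washer C", "Bar Ltd", 75), ("Washer D", "Baz SpA", 60),
    ]
    print(sponsorship_candidates(report))   # top 30% -> [("Washer A", "ACME")]
    print(split_proceeds(10_000))           # (5000.0, 5000.0)
```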

(Originally written in 2011, and sent to Jimmy Wales, who found it interesting.)

“Unabomber with flowers”: might it be our best option to stave off an AI super-intelligence explosion?

There are many ways to try to prevent catastrophic AI developments by actively getting involved as a researcher, political activist or entrepreneur. In fact, I am trying to do my part as Executive Director of the Open Media Cluster.

But maybe the best thing we can do to help reduce the chances of the catastrophic risks of an artificial super-intelligence explosion (and other existential risks) is to become a “Unabomber with flowers“.

By that I mean: we could hide out in the woods, as the Unabomber did, and live in modern off-grid eco-villages somewhere. But, instead of sending bombs to those most irresponsibly advancing general artificial intelligence, we’d send them flowers, letters, fresh produce, and invitations for a free stay in the woods.

Here’s what the Unabomber wrote in his manifesto “Industrial Society and Its Future”, published by the New York Times in 1995:

173. If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.

My wife Vera and my dear friend Beniamino Minnella surely think so.

IT security research needs for artificial intelligence and machine super-intelligence

(originally appeared on Open Media Cluster website on July 7th 2015)

On Jan 23rd 2015, nearly the entire “who’s who” of artificial intelligence – the leading researchers, research centers, companies and IT entrepreneurs, in addition to what are possibly the world’s leading scientists and IT entrepreneurs – signed the Open Letter “Research Priorities for Robust and Beneficial Artificial Intelligence”, with an attached detailed paper (we’ll refer to both below as the “Open Letter”).

In this post, we’ll look at the Open Letter and at the ways in which its R&D priorities in the area of IT security may crucially need to be corrected and “enhanced” in future versions.

We’ll also look at the possibility that the short-term and long-term R&D needs of artificial intelligence (“AI”) and information technology (“IT”) – in terms of security for all critical scenarios – may become synergic elements of a common “short-to-long-term” vision, producing huge societal benefits and shared business opportunities. The dire short-term societal need and market demand for radically more trustworthy IT systems – for citizens’ privacy and security and for the protection of society’s critical assets – can very much align, in a grand strategic EU cyberspace vision for AI and IT, with the medium-term market demand and societal need for large-scale ecosystems capable of producing AI systems that are high-performing, low-cost and still provide adequately-extreme levels of security for critical AI scenarios.

But let’s start from the state of the debate on the future of AI, machine super-intelligence, and the role of IT security.

In recent years, rapid developments in AI-specific components and applications, theoretical research advances, high-profile acquisitions by important global IT giants, and heartfelt declarations on the dangers of future AI advances from leading global scientists and entrepreneurs have brought AI to the fore as both (A) a key to economic dominance in IT and other business sectors, and (B) the fastest-emerging existential risk for humanity, in its possible evolution into uncontrolled machine super-intelligence.

Google, in its largest EU acquisition that year, acquired the global AI leader DeepMind for 400M€; DeepMind had already received investment from Facebook’s primary initial investors, Peter Thiel and Elon Musk. Private investment in AI has been increasing by 62% a year, while the level of secret investment by agencies of powerful nations, such as the NSA, is not known – but is presumably very large and fast-increasing – in a possibly already-started winner-take-all race to machine super-intelligence among public and private actors.

Global AI experts estimate, on average, a 50% chance of achieving human-level general artificial intelligence by 2040 or 2050, while not excluding significant possibilities that it could be reached sooner. Such estimates may be strongly biased towards later dates because: (A) those that are by far the largest investors in AI – global IT giants and the US government – have an intrinsic interest in avoiding a major public opinion backlash and political reaction; (B) as has happened with the surveillance programs and technologies of the Five Eyes countries, it is plausible or even probable that huge advancements in AI capabilities and programs have already happened but have been successfully kept hidden for years or decades, even while involving large numbers of people.

A growing number of experts believe that progress beyond that point may become extremely rapid, in a sort of “intelligence explosion”, posing grave questions about humanity’s ability to control it at all (see Nick Bostrom’s TED presentation). Very clear and repeated statements by Stephen Hawking (the most famous scientist alive), Bill Gates, Elon Musk (the main global icon of enlightened tech entrepreneurship) and Steve Wozniak (co-founder of Apple) agree on the exceptionally grave risks posed by uncontrolled machine super-intelligence.

Elon Musk, shortly after having invested in DeepMind, even declared, in a deleted but never retracted comment:

“The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast – it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most. This is not a case of crying wolf about something I don’t understand.”

“I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognise the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen…”

The Open Letter is incredibly important and well thought out, and important for increasing the chance that the overall impact of AI in the coming decades – large in the medium term and huge in the long term, by all accounts – will be in accordance with humanity’s values and priorities. Nonetheless, the document comes with what we believe to be potentially gravely erroneous assumptions about the current state of the art and R&D directions in the IT security of high-assurance systems, which in turn could completely undermine its verification, validity and control.

In general, the Open Letter overestimates the levels of trustworthiness and measurability, and the at-scale costs, of existing and planned highest-assurance low-level computing systems and standards.

In more detail, here are line-by-line suggestions on the Short Term Research Priorities – section 2.3.3 Security, from page 5:

2.3.3   Security

Security research can help make AI more robust.

A very insufficiently-secure AI system may be highly “robust” in the sense of business continuity, risk management and resilience, and still be extremely weak in safety or reliability of control. This outcome may sometimes be aligned with the goals of the AI’s sponsor/owner – and those of other third parties, such as state security agencies, publicly or covertly involved – but be gravely misaligned with the chances of maintaining meaningful democratic and transparent control, i.e. transparent reliability about what the system is actually set out to do and who actually controls it.

Much more important than “robustness”, adequate security is the most crucial foundation for AI safety and actual control in the short and long terms, as well as a precondition for verification and validity. 

As AI systems are used in an increasing number of critical roles, they will take up an increasing proportion of cyber-attack surface area. It is also probable that AI and machine learning techniques will themselves be used in cyber-attacks.

There is a large amount of evidence that many AI techniques have long been, and are [1] currently being, used by the intelligence agencies of the most powerful states to attack – often in violation of national or international norms – end-users and IT systems, including IT systems using AI. As said above, the level of investment by public agencies of powerful nations such as the NSA is not known, but it is presumably very large and fast-increasing, in a possibly already-started race among public and private actors. The distribution of such funding will most likely follow the current ratio of tens of times more resources for offensive R&D than for defensive R&D.

Robustness against exploitation at the low-level is closely tied to verifiability and freedom from bugs. 

This is correct although partial, especially for critical and ultra-critical use cases, which will become more and more dominant.

It is better to talk about auditability, in order not to get confused with (formal) IT verification. It is crucial and unavoidable to have complete public auditability of all critical HW, SW and procedural components involved in an AI system’s life-cycle, from certification standards setting, to CPU design, to fabrication oversight. In fact, since 2005 the US Defense Science Board has highlighted how “Trust cannot be added to integrated circuits after fabrication”, as vulnerabilities introduced during fabrication can be impossible to verify afterwards. Bruce Schneier, Steve Blank and Adi Shamir, among others, have clearly said there is no reason to trust CPUs and SoCs (design and fabrication phases). No end-to-end IT system or standard exists today that provides such complete auditability of critical components.

“Freedom from bugs” is a very improper term, as it excludes voluntarily introduced vulnerabilities, or backdoors, and it should clearly differentiate between critical and non-critical bugs. Vulnerabilities may be accidental (bugs) or voluntary (backdoors), and it is often impossible to prove that a vulnerability was introduced voluntarily rather than accidentally. We should talk of “freedom from critical vulnerabilities”.
It is impossible, and most probably will remain so, to ensure perfectly against critical vulnerabilities, given the complexity of IT socio-technical systems, even if they were simplified by 10 or 100 times and audited at radically higher levels relative to their complexity.
Nonetheless, it remains extremely crucial that adequate research devise ways to achieve sufficiently-extreme levels of confidence about “freedom from critical vulnerabilities”, through new paradigms giving users trustworthy evidence that sufficient intensity and competency of engineering and auditing effort, relative to complexity, have been applied to all critical software and hardware components actually running on the device involved. No system or standard exists today to systematically and comparatively assess such target levels of assurance for a given end-to-end computing service, and its related life-cycle and supply chain.

As stated above, all AI systems in critical use cases – and even more crucially those advanced AI systems that will increasingly approach machine super-intelligence – will need to be robust in terms of security to such an extent that they are resistant to multiple extremely-skilled attackers willing to devote, cumulatively, even tens or hundreds of millions of euros to compromising at least one critical component of the supply chain or life-cycle, through legal and illegal subversion of all kinds, including economic pressure – while enjoying a high level of plausible deniability, low risk of attribution, and (for some state actors) minimal risk of legal consequences if caught.

In order to reduce this enormous pressure substantially, it may be extremely useful to research socio-technical paradigms by which sufficiently-extreme levels of AI system user-trustworthiness can be achieved while, at the same time, transparently enabling cyber-investigation and crime prevention under due legal process. Resolving this dichotomy would reduce the pressure from states to subvert secure high-assurance IT systems in general, and could – through mandatory or voluntary international lawful access standards – improve humanity’s ability to conduct cyber-investigations into the most advanced private and public AI R&D programs.

For example, the DARPA SAFE program aims to build an integrated hardware-software system with a flexible metadata rule engine, on which can be built memory safety, fault isolation, and other protocols that could improve security by preventing exploitable flaws [20]. Such programs cannot eliminate all security flaws (since verification is only as strong as the assumptions that underly the specification), but could significantly reduce vulnerabilities of the type exploited by the recent “Heartbleed bug” and “Bash Bug”.

There is a need to avoid the risk of relying for guidance on high-assurance low-level system standards/platform projects from defense agencies of powerful nations – such as the mentioned DARPA SAFE, NIST, the NSA Trusted Foundry Program, or the DARPA Trust in Integrated Circuits Program – when it is widely proven that their intelligence agencies (such as the NSA) have gone to huge lengths to surreptitiously corrupt technologies and standards, even those that are overwhelmingly used internally in relatively high-assurance scenarios.

Such systems could be preferentially deployed in safety-critical applications, where the cost of improved security is justified.

The cost of radically more trustworthy low-level systems for AI could become very comparable to that of the current corporate-grade security IT systems mostly used as standard in AI systems development. The cost differential could possibly be reduced to insignificance through production at scale and open innovation models that drive down royalty costs. For example, hardware parallelization of secure systems and lower unit costs could allow adequately secure systems to compete with, or even out-compete, generic systems in cost and performance. (The emerging non-profit User Verified Social Telematics consortium, for example, shows the possibility of creating sufficiently-secure general-purpose computing systems running at 1-300 MHz, with an end-user cost made up of the cost of production – a few tens of euros depending on quantity – and overall royalty costs of only 30% of the end-user cost.)
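As a back-of-the-envelope illustration of that royalty claim (the 40-euro production cost below is a placeholder for “a few tens of euros”, not UVST’s actual figure), the end-user price decomposes roughly as follows:

```python
# Hypothetical sketch: decompose an end-user device price into production
# cost and royalties, given the claim that royalties can be held to ~30%
# of the end-user cost.

def end_user_price(production_cost, royalty_share=0.30):
    """Price such that royalties are `royalty_share` of the end-user cost.

    price = production_cost + royalty_share * price
    =>  price = production_cost / (1 - royalty_share)
    """
    return production_cost / (1.0 - royalty_share)


if __name__ == "__main__":
    cost = 40.0                      # placeholder for "a few tens of euros"
    price = end_user_price(cost)     # ~57.1 euros end-user price
    royalties = 0.30 * price         # ~17.1 euros of royalties
    print(round(price, 2), round(royalties, 2))  # 57.14 17.14
```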

At a higher level, research into specific AI and machine learning techniques may become increasingly useful in security. These techniques could be applied to the detection of intrusions [46], analyzing malware [64], or detecting potential exploits in other programs through code analysis [11].

There is a lot of evidence showing that R&D investment in solutions that defend devices from the inside (i.e. that assume failure of intrusion prevention) could end up increasing the attack surface, if those systems’ life-cycles are not themselves subject to the same extreme security standards as the low-level systems on which they rely. Much like antivirus tools, password-storing applications and other security tools are often used as ways to get directly at a user’s or endpoint’s most crucial data. The recent NSA, Hacking Team and JPMorgan scandals show the ability of hackers to move inside extremely crucial systems without being detected, possibly for years. A DARPA high-assurance program highlights how about 30% of vulnerabilities in high-assurance systems are introduced by internally deployed security products.[2]

It is not implausible that cyber attack between states and private actors will be a risk factor for harm from near-future AI systems, motivating research on preventing harmful events.

Such likelihood is clearly higher than “not implausible”. It is not correct to say that it “will be a risk factor”, as it already is one, and at least one of the parties in such cyber attacks – powerful states – is already extensively using, and presumably aggressively advancing, AI tools.

As AI systems grow more complex and are networked together, they will have to intelligently manage their trust, motivating research on statistical-behavioral trust establishment [61] and computational reputation models [70].

Interoperability frameworks among AI systems, and between AI and IT systems, will need effective, independent ways to assess the security of the other system. As stated above, current comparative standards so lack comprehensiveness and depth that it is impossible to meaningfully compare the security of given systems.
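As one minimal, generic illustration of the “computational reputation models” the Open Letter cites above – and emphatically not a proposal for an actual security-assessment standard – a simple beta-reputation update between interoperating systems could look like this:

```python
# Minimal, generic sketch of a computational reputation model (a simple
# beta-reputation update), illustrating the kind of mechanism the Open Letter
# cites; the names and scoring rule are assumptions for illustration only.

class BetaReputation:
    """Track a peer system's reputation from observed good/bad interactions."""

    def __init__(self):
        self.positive = 0  # interactions judged secure/correct
        self.negative = 0  # interactions judged insecure/faulty

    def observe(self, outcome_ok: bool) -> None:
        if outcome_ok:
            self.positive += 1
        else:
            self.negative += 1

    def score(self) -> float:
        """Expected trustworthiness in [0, 1], with a uniform prior."""
        return (self.positive + 1) / (self.positive + self.negative + 2)


if __name__ == "__main__":
    rep = BetaReputation()
    for ok in [True, True, False, True]:
        rep.observe(ok)
    print(round(rep.score(), 3))  # 0.667 after 3 good and 1 bad interaction
```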

Ultimately, it may be argued that IT security is about the nature of the organizational processes involved and the intrinsic constraints and incentives acting on the individuals within those organizations. Therefore, the most critical security factor to be researched, for critical AI systems in the short and long term, is probably the technical proficiency and citizen-accountability of the organizational processes that will govern the setting of key AI security certification standards or systems, and of the socio-technical systems that will be deployed to ensure extremely effective and citizen-accountable oversight of all critical phases in the supply chain and operational life-cycle of the AI system.

The dire short-term societal need and market demand for radically more trustworthy IT systems – for citizens’ privacy and security and for the protection of society’s critical assets – can very much align, in a grand strategic EU cyberspace vision, with satisfying, in the medium and long term, both the huge societal need and the great economic opportunity of creating large-scale ecosystems able to produce AI systems that are high-performing, low-cost and still provide adequately-extreme levels of security for critical AI scenarios.

NOTES

[1] See the National Security Analysis Center or the capabilities offered by companies like Palantir

[2] https://youtu.be/3D6jxBDy8k8?t=4m20s

A definition of “constitutionally-meaningful levels of trustworthiness” in IT systems

A proposed definition of “constitutionally-meaningful levels of trustworthiness” in IT systems:

An IT system (or, more precisely, an end-to-end computing service or experience) will be said to have “constitutionally-meaningful levels of trustworthiness” when its confidentiality, authenticity, integrity and non-repudiation are sufficiently high to make its use – by ordinary, active and “medium-value target” citizens alike – rationally compatible with the full and effective Internet-connected exercise of their core civil rights, except for voting in governmental elections. In concrete terms, it defines an end-to-end computing experience that warrants extremely well-placed confidence that the costs and risks for an extremely-skilled attacker to remotely perform continuous or pervasive compromise substantially exceed the following: (1) for the compromise of a single user, the tens of thousands of euros, and the significant discoverability, associated with enacting the same level of abuse through on-site, proximity-based user surveillance or non-scalable remote endpoint techniques, such as those of NSA TAO; (2) for the compromise of the entire supply chain or life-cycle, the tens of millions of euros, and significant discoverability, reportedly typically sustained by advanced actors against high-value supply chains, through legal and illegal subversion of all kinds, including economic pressure.
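The cost thresholds in this definition can be restated as a simple predicate. The sketch below is only a restatement of the proposed definition, with the euro figures as illustrative orders of magnitude rather than calibrated values:

```python
# Sketch restating the proposed definition as a predicate: an end-to-end
# computing service is "constitutionally meaningful" only if the estimated
# cost of remote, continuous compromise exceeds both thresholds below and
# the attack carries a significant risk of being discovered.
# Threshold values are illustrative orders of magnitude, not fixed figures.

SINGLE_USER_THRESHOLD_EUR = 50_000        # "tens of thousands of euros"
SUPPLY_CHAIN_THRESHOLD_EUR = 50_000_000   # "tens of millions of euros"


def constitutionally_meaningful(single_user_attack_cost_eur: float,
                                supply_chain_attack_cost_eur: float,
                                high_discoverability: bool) -> bool:
    """True if remote compromise costs exceed both thresholds and the
    attack carries a significant risk of discovery for the attacker."""
    return (single_user_attack_cost_eur > SINGLE_USER_THRESHOLD_EUR
            and supply_chain_attack_cost_eur > SUPPLY_CHAIN_THRESHOLD_EUR
            and high_discoverability)


if __name__ == "__main__":
    # A typical consumer device: cheap scalable remote exploitation, low risk.
    print(constitutionally_meaningful(5_000, 2_000_000, False))   # False
    # A hypothetical high-assurance service meeting the definition.
    print(constitutionally_meaningful(80_000, 60_000_000, True))  # True
```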

Who sets the security standards for lawful access systems like Hacking Team’s?!

After what came out of the Hacking Team scandal, we should consider whether, for these technologies that are so crucial for society – and that many governments want extended as mandatory to other IP communications – we have a problem at their origin, i.e. with their international governance by NIST and ETSI, the non-binding bodies that set their standards (which are then mostly taken up by national governments). We already know that NIST has broken crucial crypto standards under pressure from the NSA; here is the formal governance of ETSI, whose process is in turn deeply shaped by industry players:

 

[Screenshot: ETSI formal governance structure, 10 July 2015]

Why the Hacking Team backdoor is old news from the late ’80s!

The just-revealed backdoor in Hacking Team’s RCS systems (for them, and presumably for their state friends) was the very reason for the existence of the first such systems, from the ’80s and early ’90s (!!): created by former NSA staff, then taken over by former (?) senior Mossad agents, and sold to tens of governments worldwide.

They were pushed around “presumably” with the key goal of giving Israeli intelligence full information on what other intelligence agencies were up to. The US made an illegal copy for itself and pushed that one around to other governments …

Here is the Wikipedia article with a long, detailed account of the story, and here are excerpts from a relatively authoritative book on the history of Mossad, “Gideon’s Spies”, which I finished reading last Christmas:
https://en.wikipedia.org/wiki/Inslaw
http://cryptome.info/promis-mossad.htm

In a recent post on Wired, called “Why We Need Free Digital Hardware Designs“, Richard Stallman compares the prospects and meaning of free digital hardware designs with those of free software:

You can’t build and run a circuit design or a chip design in your computer. Constructing a big circuit is a lot of painstaking work, and that’s once you have the circuit board. Fabricating a chip is not feasible for individuals today; only mass production can make them cheap enough. With today’s hardware technology, users can’t download and run John H Hacker’s modified version of a digital hardware design, as they could run John S Hacker’s modified version of a program. Thus, the four freedoms don’t give users today collective control over a hardware design as they give users collective control over a program. That’s where the reasoning showing that all software must be free fails to apply to today’s hardware technology.

Sure, but without meaningfully-trustworthy hardware – i.e. hardware whose critical components are verifiable and adequately verified, even during fabrication – free software gives the user much freedom to hack and very little civil freedom, as there is little assurance against scalable, undetectable, low-cost endpoint attacks.

In 1983 there was no free operating system, but it was clear that if we had one, we could immediately use it and get software freedom. All that was missing was the code for one.

In 2014, if we had a free design for a CPU chip suitable for a PC, mass-produced chips made from that design would not give us the same freedom in the hardware domain. If we’re going to buy a product mass produced in a factory, this dependence on the factory causes most of the same problems as a nonfree design. For free designs to give us hardware freedom, we need future fabrication technology.

We can envision a future in which our personal fabricators can make chips, and our robots can assemble and solder them together with transformers, switches, keys, displays, fans and so on. In that future we will all make our own computers (and fabricators and robots), and we will all be able to take advantage of modified designs made by those who know hardware. The arguments for rejecting nonfree software will then apply to nonfree hardware designs too.

That future is years away, at least.

That vision is great, but the timing is even worse. In fact, the economics of assuring such fabricators and robots – so that they themselves will not contain vulnerabilities that could compromise all devices produced with them – places such home fabrication at the very least one or two decades away.

Is there no alternative until then other than to just trust multiple hardware makers?!

In the meantime, there is no need to reject hardware with nonfree designs on principle.

*As used here, “digital hardware” includes hardware with some analog circuits and components in addition to digital ones.

We need free digital hardware designs

Although we need not reject digital hardware made from nonfree designs in today’s circumstances, we need to develop free designs and should use them when feasible. They provide advantages today, and in the future they may be the only way to use free software.

Free hardware designs offer practical advantages. Multiple companies can fabricate one, which reduces dependence on a single vendor. Groups can arrange to fabricate them in quantity. Having circuit diagrams or HDL code makes it possible to study the design to look for errors or malicious functionalities (it is known that the NSA has procured malicious weaknesses in some computing hardware).

It makes it possible to look only for some errors, as it is widely recognized that there are vulnerabilities that may be inserted during fabrication and cannot be ascertained after fabrication. “Trust cannot be added to integrated circuits after fabrication”, said the US Defense Science Board back in 2005.

Furthermore, free designs can serve as building blocks to design computers and other complex devices, whose specs will be published and which will have fewer parts that could be used against us.

Free hardware designs may become usable for some parts of our computers and networks, and for embedded systems, before we are able to make entire computers this way.

Free hardware designs may become essential even before we can fabricate the hardware personally, if they become the only way to avoid nonfree software. As common commercial hardware is increasingly designed to subjugate users, it becomes increasingly incompatible with free software, because of secret specifications and requirements for code to be signed by someone other than you. Cell phone modem chips and even some graphics accelerators already require firmware to be signed by the manufacturer. Any program in your computer, that someone else is allowed to change but you’re not, is an instrument of unjust power over you; hardware that imposes that requirement is malicious hardware. In the case of cell phone modem chips, all the models now available are malicious.

Some day, free-design digital hardware may be the only platform that permits running a free system at all. Let us aim to have the necessary free digital designs before then, and hope that we have the means to fabricate them cheaply enough for all users.

If you design hardware, please make your designs free. If you use hardware, please join in urging and pressuring companies to make hardware designs free.

What’s the use of ultra-privacy techs when mics are everywhere?

Since Snowden, all hopes of retaining a meaningful, albeit limited, personal privacy sphere have relied on the possibility of making devices resistant to advanced surveillance available to citizens, supplementary to ordinary commercial ones, and of ensuring that they won’t be made illegal.

Even if we succeeded, such devices may not serve their purpose or achieve wide adoption if the average citizen is constantly and increasingly surrounded by Net-connected devices with a mic (mobile, TV, PC, Internet of Things), which allow extremely low-cost and scalable continuous surveillance. Schneier just published a fantastic analysis of the issue.

In fact, it would be inconvenient enough to have to place your ordinary phone in a purse, or under a thick pillow, before making a call with your (ultra-)private device; but it would be unbearable to most to have to go out into the garden because their TV or fridge may be listening.

It is crucial, therefore, to press for national laws forbidding the sale of any Internet-connectible device without a certified physical switch-off for mic, camera and power.

If one doesn’t come soon, we may reach a point where we might be better off quitting on privacy altogether, and turning our efforts to assessing the technical and political feasibility of making total surveillance as symmetrical as possible with respect to the powerful, somewhat along the lines of the Transparent Society paradigm of David Brin.

It is a major change in the existential nature of human life, but a large and increasing number of people (such as me) are already living in such a world, with the constant awareness that any word I say near my mobile (i.e. always), or type into an electronic device, may very well be collected and archived, at extremely low cost, and be accessible to who knows how many.

It’s bearable.

What I can’t bear is that a small group of powerful or rich people, state-related and not, can increasingly enjoy ultra-privacy and/or huge access to the information of others. This creates a huge shift of unaccountable power towards them, with very dire consequences for the human race’s prospects of survival and of avoiding durable forms of inhumane global governance.

The limits of software-only crypto, the feasibility of meaningful privacy, and a Plan B

The latest article by Julian Assange in the New York Times contains very true and insightful analysis, such as:

It is not, as we are asked to believe, that privacy is inherently valuable. It is not. The real reason lies in the calculus of power: the destruction of privacy widens the existing power imbalance between the ruling factions and everyone else,”

and

At their core, companies like Google and Facebook are in the same business as the U.S. government’s National Security Agency. They collect a vast amount of information about people, store it, integrate it and use it to predict individual and group behaviour, …

It contains, however, what I believe to be very wrong and dangerous representations of the level of privacy assurance that an individual can expect by downloading the right software and buying a new cheap laptop. He says:

If there is a modern analogue to Orwell’s “simple” and “democratic weapon,” which “gives claws to the weak” it is cryptography, the basis for the mathematics behind Bitcoin and the best secure communications programs. It is cheap to produce: cryptographic software can be written on a home computer. It is even cheaper to spread: software can be copied in a way that physical objects cannot. But it is also insuperable — the mathematics at the heart of modern cryptography are sound, and can withstand the might of a superpower. The same technologies that allowed the Allies to encrypt their radio communications against Axis intercepts can now be downloaded over a dial-up Internet connection and deployed with a cheap laptop.

In fact, the best free software, or proprietary (but verifiable) software, crypto solutions have the following shortfalls that prevent them from providing meaningful assurance:

  1. They are currently way too complex, and insufficiently compartmented, relative to the auditing effort applied;
  2. They do not protect from vulnerabilities in critical parts of both the laptop and the USB keys used, introduced during design, fabrication or assembly. It is true that some low-cost, low-volume laptops running less common, low-volume and low-performance CPUs may be free from malicious backdoors, but that is very hard to verify, and the user experience is terrible.

Solving these two core problems requires extremely-resilient, user-accountable organizational processes around certain fabrication and assembly phases and around critical server-side components (if any), but also around the standardization, update and auditing processes themselves. In the recent post Cyber-libertarianism vs. Rousseau’s Social Contract in cyberspace, I argue further about the failed assumptions of Assange’s approach, which I call cyber-libertarianism, and about why solutions can only be non-territorial and group-based.

Such organizational processes, in turn, have a high degree of geolocalization, and therefore can’t be managed “in hiding”, and so could effectively be outlawed and/or compromised surreptitiously.

We have a plan to solve all of the above with the User Verified Social Telematics project.

What we propose may still not deliver meaningful privacy. We expect, however, that once it is realised, its assurance level will be estimable with sufficient precision.

If even UVST, or other similar attempts, fails, then one possibility we would be bound to test, experiment with and evaluate – before it is too late for freedom and democracy – would be to “flip privacy on power” through sousveillance, by designing a new form of democracy that sacrifices privacy in order to maintain freedom and democracy. We’d promote constitutional and legal changes in which (almost all) privacy protections would instead be replaced by mandatory and enforceable transparency for all, especially those in power.

On more in-depth analysis, such a possibility may not work at all. There are in fact many unanswered technical questions about the organizational, policy and technological provisions that would give us sufficient assurance that the powerful are NOT communicating privately (steganography, “code speak”, etc.) while the weak are all naked out there.

Ensuring transparency of the powerful, therefore, would probably require much of the same extremely-resilient, user-accountable organizational processes and technologies that are needed to try to achieve meaningful privacy …

Why we won’t have ultra-private IoT without ultra-private ICT

(Originally published for Meet-IoT 2015)

A large segment of the booming Internet-of-Things market is made of solutions comprising devices with external sensors that are within the sensing reach of their users and/or other passerby citizens. These include wearables, home automation solutions, smart city solutions, airborne connected objects, etc.
Such IoT devices are in almost all cases currently designed, fabricated and assembled according to socio-technical standards that are very similar to those of other end-user computing devices like phones and PCs, which place performance, features and cost considerations way ahead of security, privacy or resiliency.
In almost all of these use-case scenarios, a malfunction or breakdown will cause no, or insignificant, physical or economic harm to users or passersby. Therefore, these are and can be discounted as minor requirements. Privacy breaches, on the other hand, appear at first to be a strong concern for users.
After Snowden, with a deluge of revelations, attacks and discovered vulnerabilities, it has become clear that businesses and citizens are hugely exposed to massive as well as targeted, yet highly-scalable, remote attacks beyond the point of encryption, by criminal actors and state security agencies seeking access to industrial secrets and personal data.
In the case of smartphones and PCs, it can be expected that scalable targeted access is mostly available to high-level attackers and entities close to them. IoT solutions, by contrast, currently face fewer regulatory requirements, liabilities and secure technology standards, and are often offered by smaller, newer companies that have less to lose, overall, from the public discovery of critical security flaws in their products. It follows that IoT presents substantial additional assurance problems, which make it substantially more likely that such access is available even to mid- and low-level attackers.
However, any privacy concern will soon have to face the fact that IoT users are surrounded at any given time by a smartphone, PC or connected TV which can very easily be listening to and sensing everything. Privacy is already so compromised that users don’t, won’t and probably shouldn’t care if one additional device listens in.
From these considerations, we can attempt a prediction for this IoT sub-market. It may be characterised, in the near and mid future, by three kinds of solutions: (1) a “no privacy” kind, which will completely ignore or just pay lip service to privacy, and remain vulnerable even to scalable low- and mid-level attacks; (2) a smaller “privacy but not from government” kind – similar to the approach of Blackphone in the smartphone market – where you have reasonable expectations of privacy from everyone except highly-scalable, targeted, high-level threats; (3) an even smaller “meaningful privacy” kind, for very privacy-sensitive use cases or individuals, where assurance can reasonably be expected against such highly-scalable, targeted, high-level threats, but not against non-scalable, proximity-based surveillance techniques.
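As a minimal sketch, the three-tier prediction above can be restated as a simple classification rule (the boolean threat-resistance attributes are illustrative shorthand for the attacker classes discussed in the text, not a formal threat model):

```python
# Sketch of the three-tier IoT privacy taxonomy above as a classification rule.

def iot_privacy_tier(resists_low_mid_scalable: bool,
                     resists_high_level_scalable: bool) -> str:
    """Classify an IoT solution into one of the three predicted market tiers."""
    if resists_high_level_scalable:
        # Resists highly-scalable targeted high-level threats (though not
        # proximity-based, non-scalable surveillance).
        return "meaningful privacy"
    if resists_low_mid_scalable:
        # Resists scalable low/mid-level attacks, but not state-grade ones.
        return "privacy but not from government"
    return "no privacy"


if __name__ == "__main__":
    print(iot_privacy_tier(False, False))  # "no privacy"
    print(iot_privacy_tier(True, False))   # "privacy but not from government"
    print(iot_privacy_tier(True, True))    # "meaningful privacy"
```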
The creation of this last, “meaningful privacy” kind of IoT solution will require radical changes in the socio-technical paradigms for the design, fabrication, assembly and provisioning of all the software, hardware and processes critically involved in its life-cycle. Such changes will need to be adopted by a critical mass of actors, which may initially be small, but which must comprise the entire computing life-cycle.
But such solutions may never provide meaningful utility to a user if, as we said, at any given time ICT devices such as a smartphone, PC or connected TV can easily be listening to and sensing everything the user is doing. Almost all IoT solutions interface – for operation, configuration or update – with ICT components that can be turned into a critical point of failure of the IoT solution if they do not also provide “meaningful privacy”. The dependency also works the other way around: the market for “meaningful privacy” ICT devices may well depend on the availability of “meaningful privacy” IoT devices, or at the very least of IoT devices that can reliably be turned off by the user. In fact, it would be inconvenient enough to have to place your ordinary phone in a purse, or under a thick pillow, before making a call with your (ultra-)private device; but it would be unbearable to most to have to go out into the garden because their TV or fridge may be listening.
For “meaningful privacy” ICT devices to gain any wide consumer adoption, it is crucial, therefore, to press for national laws providing for the wide market availability of Internet-connectible home and office devices with a certified physical switch-off for mic, camera and power.
Given these interdependencies, and the huge costs of creating and sustaining a “meaningful privacy” computing platform supply-chain and ecosystem, it is worth considering whether the socio-technical standards and technology platforms for “meaningful privacy” IoT and ICT might be shared to a large extent. This may be possible if such an initial shared platform defines a relatively small form factor, low energy consumption and, above all, a low cost of production at scale.

Cyber-libertarianism vs. Rousseau’s Social Contract in cyberspace

In this post, I argue that the cyber-libertarian belief that we can individually protect our rights in cyberspace is incorrect, as it is impossible for individuals to provide themselves meaningful assurance against undetectable backdooring during hardware fabrication and assembly – even if supported by informal digital communities of trust. As for offline freedoms, world citizens need to build international social contracts for cyberspace, by deliberately and carefully building organizations to which they will delegate part of their freedoms, in return for protection of both their online civil liberties and their physical safety.

In his 1762 “Social Contract” (pdf), Rousseau wrote:

“‘Find a form of association that will bring the whole common force to bear on defending and protecting each associate’s person and goods, doing this in such a way that each of them, while uniting himself with all, still obeys only himself and remains as free as before.’ There’s the basic problem that is solved by the social contract.”

Fast-forward 250 years, and half of our time is spent in cyberspace, where virtually all citizens have NO access to end-user devices or cloud services with meaningful assurance that their privacy, identity and security are not completely and continuously compromised at extremely low marginal cost.

In fact, adequate protection is not required by the state – as it is for nuclear weapons, airplanes or housing standards – nor is it offered by companies or traditional social organizations. Citizens are left alone to protect themselves.

In cyberspace, would citizens be better able to protect themselves alone, or through adequate joint associations? Should we leave users alone to protect themselves, or is there a need for some form of cyberspace social contract? Would delegating part of one’s control over one’s computing to jointly-managed organizations produce more or less freedom overall?

Rousseau went on to say: “Each man in giving himself to everyone gives himself to no-one; and the right over himself that the others get is matched by the right that he gets over each of them. So he gains as much as he loses, and also gains extra force for the preservation of what he has“.

The current mainstream answer is that we can and should do it alone. Cyber-libertarianism has completely prevailed globally among the activists and IT experts dedicated to freedom, democracy and human rights in and through cyberspace (arguably because of the anarcho-libertarian geek political culture of the US west coast, especially the north-west).

But meaningful protection is completely impossible for an individual to achieve – even one supported by informal digital communities of trust, shared values and joint action, more or less hidden in cyberspace.

In fact, achieving meaningful assurance for one’s computing device requires meaningful trustworthiness of the oversight processes for the fabrication and assembly of the critical hardware components that can completely compromise such devices. Therefore, even a pure P2P solution still needs those in-person fabrication and assembly processes. Until users can 3D-print devices in their basements, there will be a need for such geolocated, complex organizational processes, which the NSA and others can surreptitiously and completely compromise, or outlaw.

The necessity of such oversight organizational processes can be deduced from this 3-minute video excerpt, in which Bruce Schneier clearly explains how we must assume CPUs are untrustworthy, and why we may therefore need to develop intrinsically trustworthy organizational processes, similar to those that guarantee the integrity of election ballot boxes. (This is what we at UVST apply with the CivicRoom and the CivicFab.)

In fact, after the popularisation of high-grade software encryption in the ’90s and the failure of the Clipper Chip, the NSA and similar agencies were at risk of losing their (legally sanctioned and constitutional) authority to intercept, search and seize. They therefore had the excuse (and reason) to break all endpoints at birth, surreptitiously or not, all the way down to assembly or to the CPU or SoC foundry. They succeeded wildly and, even more importantly, succeeded in letting most criminals and dissenters think that some crypto software or devices were safe enough for sharing critical information. Recent tight-lipped Snowden revelations about NSA intrusions into Korean and Chinese IT companies show that things have not changed since 1969, when most governments of the world were using Swiss Crypto AG equipment thinking it secure, while they were being undetectably spied upon by the NSA.

Therefore, we must have some form of social contracts to have any chance of gaining and retaining any freedom in cyberspace.

The great news is that those social contracts – and the related socio-technical systems – can be enacted among relatively small numbers of individuals who share values, aims and trust rather than a territory, and they can be changed at will by the user, enabling much more decentralized and resilient forms of democratic association.

Since we must have such social contracts in place for each such community of trust to handle the oversight processes, we may as well extend them, in (redundant pairs of) democratic countries, to provide in-person processes controlled by effective citizen-jury-based procedures that allow constitutional – no more, no less – access for intercept, search and seizure, in order to discourage use by criminals and avoid giving the state a reason and excuse to outlaw the system, or to break it surreptitiously.

A sort of social contract for cyberspace was enacted in 1997 by the founders of the Debian GNU/Linux operating system, through the Debian Social Contract. It eventually became a huge adoption success, as it produced the world’s leading free software OS and originated many of the technical leaders behind the leading free software privacy tools. But ultimately it did not deliver trustworthy computing, even to its core developers, no matter how much convenience and user-friendliness was sacrificed.

In addition to poor technical architecture choices – such as the belief in their ability to make huge software stacks adequately secure with limited resources – what ultimately caused its failure was the fact that the contract was for the users but not by the users, i.e. the users were not substantially involved in its governance. For this reason, its priorities were those of geek developers – the freedom to hack around and share, even through barely-functioning code – rather than freedom from abuse of core civil rights, to be achieved through extreme engineering and auditing intensity relative to resources, extreme minimization, a trustworthy critical hardware life-cycle, and compartmentation, in order to deliver minimal functionality but meaningful assurance that your device is your instrument and not someone else’s.

The User Verified Social Telematics project proposes to extend this model, both organizationally and technically, to enable effective, autonomous cyberspace social contracts of the users, by the users and for the users.

UPDATED Nov 24th: Added an abstract.

Court-mandated malware installation for “search and seizure” is coming in the US. Are safeguards possible?

From Wired:

It’s clear that the Justice Department wants to scale up its use of the drive-by download. It’s now asking the Judicial Conference of the United States to tweak the rules governing when and how federal judges issue search warrants. The revision would explicitly allow for warrants to “use remote access to search electronic storage media and to seize or copy electronically stored information” regardless of jurisdiction.

The revision, a conference committee concluded last May (.pdf), is the only way to confront the use of anonymization software like Tor, “because the target of the search has deliberately disguised the location of the media or information to be searched.”

Is it possible to prevent abuse by state and other actors? What safeguards should be requested?!

Can citizens self-provide proper safeguards without obstructing crime prosecution and prevention?!

We believe so at the User Verified Social Telematics project…