Computational Culture

a journal of software studies

Issue Six information

    Rashid al-Din Sinan

    Al-Hashshashin; Masyaf, Syria

    Publication Date
    28th November 2017


Poisoned Fruit: Booby-Trapped “Privacy Guides” as State-Sponsored Propaganda — A Case Study of Obfuscation

Review of Finn Brunton and Helen Nissenbaum, Obfuscation: A User’s Guide for Privacy and Protest
The MIT Press
136 pages


Our starting point is an attempt at explicating a seemingly innocuous, irrelevant, or even perhaps just trivially banal question: why does a book about obfuscation, which co-authors Finn Brunton and Helen Nissenbaum describe as being a mere tool 1, albeit one “particularly well suited to the category of people without access to other modes of recourse” 2 —people whom Brunton and Nissenbaum, here drawing upon James C. Scott’s study of Malaysian peasant resistance 3, repeatedly 4 insist are “weak” 5 —appear on the 2016 United States (US) Army Deputy Chief of Staff’s G-2 Reading List?6 Specifically, Obfuscation is listed in the ‘Professional Development’ portion, meant to “encourage the reader to: invest time on self-development in order to learn and grow as an Army Professional”7, with the overarching vision of the G-28, the Military Intelligence (MI) division of the US Army, itself being, in part, a team “that enables ground commanders to fight and win our nation’s wars across the operational spectrum”9. The tone of a given text included in the reading list of course need not be overtly favorable to US intelligence or military forces, particularly as the broad scope of the list is “to understand the operational environment that is and will emerge over the coming decades”10, and thus it stands to reason that there may be operational and intelligence gains to be procured from reading even explicitly adversarial, critical or even agnostic literature. Further, the book’s appearance on any given list not authored by them is certainly out of Brunton’s and Nissenbaum’s control and hence seemingly out of bounds for a review that aims to be one of the text itself and not of a peripheral reading list. 
Nonetheless, it will be argued here that in this particular instance the given text, Obfuscation, does in fact present a—at times itself obfuscated—forceful statist stance, which is further accompanied by the distribution of information flawed by omissions, tantamount, at best, to misinformation11. As such, Obfuscation may be limned as a reactionary and dangerous work of state-sponsored propaganda (as manifested through the potentiality of funding bias): masquerading as pro-privacy literature, it instead aims at garnering acquiescence to government monitoring of the populace via presenting mass surveillance as a palatable, normative narrative under the twin veneers of projecting the illusion of a state under which both surveillance and privacy are possible and of advocating for ineffectual, crippled privacy (in)assurance strategies12, perhaps to all the better achieve the authors’ seeming aim of cementing data collection as a legitimate state operation.

Although other reviews 13 of the text dutifully point out Brunton and Nissenbaum’s introductory working definition of obfuscation—“the deliberate addition of ambiguous, confusing, or misleading information to interfere with surveillance and data collection”14 [emphasis in original]—they all fail to present the critical corollary that appears two sentences after this definition: “[o]bfuscation also offers ways for government agencies to accomplish many of the goals of data collection while minimizing the potential misuses”15. This singular addendum is pivotal to unraveling Brunton and Nissenbaum’s project of normalizing and legitimizing not just government in general, but—as this review will aim to demonstrate—specifically the United States government (and its allies) as a surveillant actor whose interests are to be respected, preserved, and propagated. As an exploration of Brunton and Nissenbaum’s advocacy of this vehemently statist stance, let us turn to one of their case studies16.

Readers of Obfuscation may initially be perplexed as to why a book subtitled ‘A User’s Guide for Privacy and Protest’ [emphasis added] presents, as its first study of obfuscation, the example of military planes during the Second World War cloaking their presence on German radar systems amidst a cloud of ‘chaff’: vast quantities of strips of black paper with aluminum foil attached, released to render electronic intelligence operatives incapable of distinguishing which of the radar signals denote an actual plane and which are chaff-induced false positives17. The specific situational circumstances of this case study are not, however, immediately apparent. The case is ambiguously titled, ‘Chaff: defeating military radar’:

[d]uring the Second World War, a radar operator tracks an airplane over Hamburg, guiding searchlights and anti-aircraft guns […] Abruptly, dots that seem to represent airplanes begin to multiply, quickly swamping the display. The actual plane is in there somewhere, impossible to locate owing to the presence of ‘false echoes’. The plane has released chaff […]18

There are no territorial qualifiers for either subject or object, only a plane and a radar operator. Phrasing it so neutrally suggests that ‘the plane’ may be an Allied bomber or one operated by a private German recreational pilot who simply doesn’t wish to be erroneously shot down by their country’s anti-aircraft guns. The neutral phrasing is clinical, or technical, but its neutrality is a feint. Brunton and Nissenbaum would surely agree that “vague language produces many confusing gestures of possible activity and attribution”19, since that is precisely how they describe another case study. In other words, Brunton and Nissenbaum, in describing Case 1.1, are also apparently deploying Case 2.8: ‘Deliberately vague language: obfuscating agency’20. Although we will present a possible explanation for this case study’s mysteriously foregrounded presence later on in the review, just as Brunton and Nissenbaum likewise return to it in their text21, it will suffice at this point to leave a marker highlighting the strategy of “rendering who is doing what puzzling and unclear”22. Here such a strategy performs the pivotal function of pre-emptively introducing plausible deniability should, say, one at some point make the charge that Brunton and Nissenbaum are attempting to legitimize the murder of civilians (characteristic of Allied carpet-bombing raids) by portraying said killings as either being or serving a ‘positive purpose’23, as at this point the authors can simply counter that they were not here talking about Allied bombers and German military analysts, but simply about some unspecified radar enthusiast’s inability to observe the flight of an ambiguous, unaffiliated aircraft.

Though this example serves to illustrate that Brunton and Nissenbaum themselves clandestinely, or perhaps unconsciously, deploy the strategies of obfuscation in the book itself, this strategy of ambiguous deniability ultimately proves ineffective, as the authors do not carry out their obfuscation successfully throughout the entirety of the text. Later, there occurs a slippage wherein Brunton and Nissenbaum let the previous ambiguity fall by the wayside, openly letting drop that they were in fact referring to Allied bombers24. Hence, given the authors’ later specification of the particular kind of plane, we can return to the previously-quoted passage and amend it as follows, dispensing with the hitherto-present obfuscated ambiguity:

[d]uring the Second World War, a radar operator tracks an airplane over Hamburg, guiding searchlights and anti-aircraft guns […] Abruptly, dots that seem to represent airplanes begin to multiply, quickly swamping the display. The actual [Allied bomber] is in there somewhere, impossible to locate owing to the presence of ‘false echoes’. The [Allied bomber] has released chaff […]25

As the given case study now unambiguously refers to Allied bombers deploying chaff, a number of questions may be presented to ease the reader’s potential confusion at this choice of opening case study. The initial pragmatic inquiry regarding what usefulness knowledge of archaic anti-radar obfuscation methods has in the present day for the protesting user is of course here still entirely applicable, though Brunton and Nissenbaum are observed to defend their choice on two grounds: 1) that the case study of the Allied bombers deploying radar-foiling chaff “may be the purest, simplest example of the obfuscation approach”26 and thus may be applicable for its general ability in helping the reader “to distinguish, at the most basic level between approaches to obfuscation”27, if not for the specific use-case utility of literally deploying chaff to thwart some currently operational legacy radar system; and 2) that while “strong” forces may indeed use obfuscation, it nonetheless remains “more readily adopted by those stuck with a weak system”28. Regarding the first claim, it is not at all clear why the chaff case study is any purer or simpler a display of obfuscation than, say, Case 1.13: ‘Operation Vula: obfuscation in the struggle against Apartheid’29, Case 2.10: ‘Code obfuscation: baffling humans not machines’30, or any number of other cases which likewise may help “us to distinguish, at the most basic level, between approaches to obfuscation”31; indeed, as Brunton and Nissenbaum themselves admit32, the chaff example is neither an example of generating genuine but misleading signals, nor an example of shuffling genuine signals, these two being the aforementioned ‘approaches to obfuscation’. How exactly this example helps one to distinguish between two approaches that are not otherwise explicated other than to explicitly mention what the given case study is not, is not made apparent.
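Whatever its framing, the mechanic the chaff case illustrates is simple enough to render computationally: one genuine signal drowned amid decoys that the observer cannot distinguish from it. A minimal illustrative sketch follows (the function, field names, and values are hypothetical, and not drawn from Brunton and Nissenbaum’s text):

```python
import random

def release_chaff(real_echo, n_decoys=100):
    """Flood a radar display with false echoes indistinguishable from
    the genuine one (a toy model of the mechanic in Case 1.1)."""
    decoys = [{"bearing": random.uniform(0.0, 360.0),
               "range_km": random.uniform(1.0, 50.0)}
              for _ in range(n_decoys)]
    display = decoys + [real_echo]
    random.shuffle(display)  # the observer sees no ordering cue
    return display

echoes = release_chaff({"bearing": 42.0, "range_km": 12.5})
print(len(echoes))  # 101 echoes, only one of them genuine
```

With a hundred decoys, an operator choosing at random has roughly a one-in-a-hundred-and-one chance of locating the genuine echo; the obfuscation succeeds not by hiding the signal but by devaluing every signal on the display.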

Going further, Obfuscation presents at least thirty-one particular case studies of obfuscation in action, of which the radar chaff example is the first. Thirteen of the cases are presented as ‘core cases’, which are “organized into a narrative that introduces fundamental questions about obfuscation and describes important approaches to it”33, while an additional eighteen cases are presented to “illustrate the range and variety of obfuscation applications while also reinforcing underlying concepts”34. The rest of the book is devoted to discussing the necessity of obfuscation, its ethics, and finally its goals. The case study organization, however, appears to be both inconsistent and arbitrary. For instance, while Brunton and Nissenbaum claim that the secondary selections, grouped into the second chapter under the heading of ‘Other Examples’, are “shorter cases”35, numerous entries in this category are in fact longer than those in the primary ‘Core Cases’ category of the first chapter36. Thus, not only is Brunton and Nissenbaum’s defence of the placement of the chaff example as the opening case study unconvincing, the justification for the arrangement of the case studies throughout the text in general, as well as in the particular instance of the chaff case, is demonstrably inaccurate.

Turning our attention to the authors’ second defence of including an example of military obfuscation, namely the claim that though obfuscation may also be used by “strong” forces it is nonetheless particularly well suited for deployment by the “weak”, one wonders how, precisely, the knowledge of archaic aircraft radar detection systems may help said “weaker” forces. Which is to say that the text does not appear to explain exactly how the example of Allied bomber chaff release would in this instance constitute “a tool more readily adopted by those stuck with a weak system”37, as the case study certainly appears to be much more readily adopted by those with intricate knowledge of military radar and access to aircraft, which is to say predominantly various militarized “strong” forces. Moreover, what a strange choice of case study to open the book with, while further describing it as the “purest” example of obfuscation, if the authors later portray that same case study to be an ill-suited minority example that is not representative of the forces obfuscation is best suited for!

Thus having addressed both of Brunton and Nissenbaum’s defences of their choice of opening salvo, we may now turn to postulating potential alternate explanations for said selection. It is perhaps a mere coincidence that the composition of Obfuscation was funded in part by the Air Force Office of Scientific Research38 and that the first case study in the text presents a deployment of obfuscation carried out by Allied aerial military forces, one further favorably recounted throughout the text; but this congruity of financial sponsorship and case study choice nonetheless certainly bears explicit mentioning, so as to highlight the potential of funding bias39 impacting both the authors’ sample selection and the ensuing favorable tone of the discussions. ‘Let’s put this bluntly:’—as Brunton and Nissenbaum are prone to say40—was the choice of case studies and general tone found in Obfuscation influenced by grant selection? Not only is the answer unknown to the reader, but the very question itself, lest one make the connection between a fleeting reference in the Acknowledgements section and the choice of case study, is one that readers may not know to ask in the first place. Thus, much like the authors deploying the obfuscation technique of vague language so as to ambiguate the affiliation of the plane and radar operator, Brunton and Nissenbaum themselves create the very “problem”41 that they claim obfuscation is meant to counter: namely, an “asymmetrical power relationship”42, one in which the reader is faced with “unknown unknowns”43 about the veracity and possible grant-based bias of case study selection. The text here once again reveals itself to be an agent of obfuscation in the service of the state, with the authors on the “strong” end of the resultant informational asymmetry with regards to the text’s tone and funding.

It is further curious to note that, despite it being generally acknowledged44 that Axis forces had independently, albeit concurrently, developed the same radar-defeating obfuscation technology of chaff—termed Düppel in German—as had the Allies, this fact is entirely absent from Brunton and Nissenbaum’s examination of chaff45, which instead focuses solely on its Allied development. This omission is significant, because artificially limiting the discussion of chaff usage to Allied forces allows Brunton and Nissenbaum to present the Allied bomber deployment of chaff as an example of obfuscation being used for the aforementioned “positive purposes”46. That Axis military forces also developed and deployed the same obfuscation technology would complicate the claim. However, the assertion of the positive purpose of chaff may be cast into doubt by virtue of Allied usage alone, rendering Brunton and Nissenbaum’s attempt at friction-free simplification a failed effort, which is to say that chaff having a positive purpose could have been questioned not just by noting the multiple cases of its deployment by rival parties, but on the solitary grounds of Allied deployment itself.

Specifically, in Section 5.2 of the text, which is focused on discussing potential use cases of obfuscation, Brunton and Nissenbaum state that they “are no longer relying heavily on examples of obfuscation used by powerful groups for malign ends […] We want this section to focus on how obfuscation can be used for positive purposes”47, before proceeding to once again discuss the efficacy of chaff deployment48. Hence, given that the authors have stated that said section will focus on ‘positive’ obfuscation, it syllogistically follows that the authors consider chaff (to clarify, recall that the authors are presumably referring specifically to Allied bomber chaff, as Axis chaff does not exist in this constructed narrative) to be a positive application of obfuscation. The chaff-facilitated attacks on Hamburg (the city Brunton and Nissenbaum explicitly mention as a target in Case 1.1, albeit without reference to any single military strike) alone resulted in 42,600 civilian deaths49. How these deaths, enabled as they were by obfuscation, amount to positive purposes is left unexplained by Brunton and Nissenbaum.

Lest an objection is raised that these murders amounted to what has more recently been phrased as “collateral damage”50, as opposed to being the purpose behind the chaff-facilitated attack, and hence are not relevant to a discussion regarding examples of positive purposes of obfuscation, it may be countered that, as Ward Thomas points out, the Royal Air Force (RAF) Bomber Command advocated a campaign of so-called ‘morale bombing’, the aim being “breaking the will of the German people”51, with bombing targets to be chosen based on the “‘supplementary effects’”52 of the bombs. A further RAF directive also betrayed an explicit aim to kill German civilians, thus “all but abandoning the fiction that bombing raids were directed at military or industrial targets”53. Hence when Brunton and Nissenbaum state that they will focus on the utilization of obfuscation in the service of positive purposes, and then proceed to once again discuss Allied bomber deployment of radar chaff—the explicit purpose of which was to allow the bomber planes to more effectively render the German populace “homeless, spiritless, and dead”54—they are tacitly acknowledging that murder is a positive purpose that obfuscation may serve, albeit once again said veneration comes sans explanation of why this constitutes a positive force. Allied usage of obfuscation for successful bomb deployment with the explicit aim of killing German civilians, in other words, is taken by Brunton and Nissenbaum as a given positive, without warranting explication.

Such uncritically statist themes—which is to say the valorization of state forces, military or otherwise—run throughout Obfuscation55, and indeed appear to merely be a particularized manifestation of the book’s seeming larger project of normalizing persistent government interventionism via mass surveillance under the pretence of advocating individual privacy. To wit, one of the explicit use cases of obfuscation showcased by Brunton and Nissenbaum is that of preventing “individual exposure”56. Should one then ask what of preventing mass exposure, Brunton and Nissenbaum preemptively deny the legitimacy of such potential use of obfuscation by proclaiming that mass surveillance is in fact yet another ‘positive’, stating that “[c]ertain obfuscation approaches are well suited to achieving the positive social outcome of enabling individuals, companies, institutions, and governments to use aggregate data while keeping the data from being used to observe any particular person”57 [emphasis in original]. To once again put things bluntly, for Brunton and Nissenbaum, obfuscation is thus a viable weapon of the “weak”, so long as it doesn’t interfere with mass surveillance conducted by the “strong”. The authors, in other words, would have users deploying obfuscation know their place as weaklings who are at best allowed to contest surveillance of themselves, but not surveillance as such. Brunton and Nissenbaum further go on to normalize mass surveillance under the veneer of a certain governmental pragmatism, asking their readers to “consider a system designed to preserve socially valuable classes of data – derived from the census, for example, in order to allocate resources effectively or to govern efficiently”58 [emphasis added]. Such a system, the authors go on to claim, would provide “wider benefits”59—albeit without questioning for whom or for what.
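What the authors’ “positive social outcome” amounts to in computational terms is the release of population-level statistics while individual records are withheld. A toy sketch of such aggregate-only release follows, assuming a simple suppression threshold (the function, threshold, and field names are hypothetical illustrations, not any system Brunton and Nissenbaum specify):

```python
def aggregate_release(records, field, min_group=5):
    """Release per-group counts only for groups large enough that no
    individual is singled out (toy suppression; threshold hypothetical)."""
    counts = {}
    for record in records:
        key = record[field]
        counts[key] = counts.get(key, 0) + 1
    # Suppress groups small enough to identify an individual.
    return {k: v for k, v in counts.items() if v >= min_group}

# Twelve residents in district A, a lone resident in district B:
census = [{"district": "A"} for _ in range(12)] + [{"district": "B"}]
print(aggregate_release(census, "district"))  # {'A': 12}; 'B' is suppressed
```

Even in this toy form, the asymmetry at issue is visible: the aggregating party retains every individual record, and only the published view is restricted.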

The answer may be found by turning our attention once again to the potential of funding bias, as the aforementioned passages regarding obfuscation serving positive social outcomes bear a striking thematic resemblance to a certain National Science Foundation (NSF) funded project, of which Nissenbaum states that:

“[a] major technical theme of the project is privacy-preserving data mining, and, more generally, techniques for meeting the potentially conflicting goals of respecting individual rights and allowing law enforcement and other legitimate organizations to collect and mine massive data sets”60.

This grant also co-funded the production of Obfuscation61, wherein, once again, questions about the a priori assumption of law enforcement and military forces as legitimate actors are not explored: it is simply taken for granted that they are such and could not be otherwise. The underlying problem for Brunton and Nissenbaum hence does not appear to be surveillance qua mass data collection, but is instead limited to maintaining the narrow scope of individual privacy within the accrued data sets. The potential that, even if such a scenario were actionable, it could still lead to overarching oppression is once again unexplored. For instance, under the ethically-acceptable, privacy-conscious calculus of mass surveillance that Brunton and Nissenbaum support, one could accommodate the mass tracking of users’ Twitter feeds, so long as the individuals in those tweets are not identified; this would still allow state interests to detect that a protest is going to happen at a given location and deploy law enforcement forces there62. For Brunton and Nissenbaum, mass surveillance is rendered not merely unproblematic, but an explicit positive. That it could also result in mass, as opposed to targeted, oppression, or for that matter that mass surveillance, being an aggregate thereof, is itself not possible without a mass of individualized surveillances (irrespective of any individualized anonymization that may occur within a given aggregate set), is unfortunately not addressed in the text.

Yet not only does Obfuscation present a vehement defence of obfuscated state surveillance via ideological normalization, it further does so through the spreading of technical misinformation63 in the form of oversimplification and omission, thus effectively sabotaging and neutralizing users should they actually deploy the case studies described in the text. The fallout of said misinformation spans the spectrum from unanticipated mild annoyance and inconvenience to emotional distress, harassment, and incarceration; all serving to achieve the neutralization of a potentially dissident actor. As an example of the milder end of this spectrum, we can turn to Case 1.4: ‘TrackMeNot: blending genuine and artificial search queries’64. Co-created by Obfuscation co-author Nissenbaum, TrackMeNot is a web browser extension that generates and submits Google search queries that mask a user’s actual searches amidst a slew of automated ones, affording the user plausible deniability: any given search could have been auto-generated rather than typed or carried out by the user. The discussion of the case study, however, fails to mention the potential costs for users choosing to deploy this particular obfuscation tool, for instance the possibility that Google may start blocking all searches, including actual ones, at least pending the entry of a CAPTCHA. That this is a tangible possibility is only acknowledged towards the end of the extension’s Frequently Asked Questions (FAQ) page65.
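The mechanism TrackMeNot relies upon, as described in the case study, can be sketched in a few lines: a genuine query submitted amid automated decoys. The seed list, function, and ratio below are hypothetical illustrations; the actual extension’s behavior is considerably more elaborate:

```python
import random

# Hypothetical seed list; the real extension draws decoy terms dynamically.
DECOY_TERMS = ["weather radar", "banana bread recipe", "used bicycles",
               "train timetable", "chess openings"]

def submit_with_decoys(real_query, decoys_per_batch=4):
    """Interleave one genuine search among auto-generated decoys, so an
    observer of the query log cannot tell which was actually typed."""
    batch = random.sample(DECOY_TERMS, decoys_per_batch) + [real_query]
    random.shuffle(batch)
    return batch

batch = submit_with_decoys("tor exit relay legal risks")
print(len(batch))  # 5 queries, only one of them genuine
```

It is precisely such machine-regular batches, of course, that a CAPTCHA gate is designed to intercept, hence the cost the case study omits.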

Similar hidden operational costs may likewise arise with Case 2.4: ‘AdNauseam: clicking all the ads’66. AdNauseam, another browser extension that happens to be co-initiated by Nissenbaum, aims to obfuscate user metrics harvested by advertisers by clicking all the ads on any given webpage. As one user later discovered, however, if a user is logged into their own YouTube channel while the extension is running, YouTube may delete the channel for violating terms of use that prohibit the deployment of any sort of automated system; that user unwittingly lost their channel as an apparent consequence of running the extension67. Thus, if one simply installs either of the aforementioned extensions based on reading the case studies presented in Obfuscation, there looms the potential of being inconvenienced via any number of pitfalls and snares which are entirely absent from the text.

There are, however, far greater booby traps than the comparatively mild risks presented by CAPTCHAs or loss of accounts, traps that budding obfuscators may unwittingly succumb to if other case studies from the text are deployed without preemptive precautionary measures and thorough risk-analysis; these are neither explicated nor even hinted at. For instance, in their analysis of Case 1.11: ‘Tor relays: requests on behalf of others that conceal personal traffic’68, Brunton and Nissenbaum present the benefit of plausible deniability afforded to a user running their own Tor relay: “[i]f you are on Tor and not running a relay, they [potential rogue Tor relay operators] know that you wrote the message you gave to them. But if you are letting your computer operate as a relay, the message may be yours or may be just one among many that you are passing on for other people”69. What Brunton and Nissenbaum fail to mention, however, is that if one operates a particular kind of relay, known as an exit relay because it is the point at which traffic leaves the Tor network70, one may be subjected to police raids71—a resultant punji stick-induced wound which may further fester into actual conviction72. Note that these examples of government antagonism of Tor relay operators are specifically selected based on their dates of occurrence73: dates which precede the publication of Obfuscation (September 201574) by eight and nearly two years, respectively; examples with which one can therefore reasonably expect Brunton and Nissenbaum to have been adequately familiar, and about which, whether to meet a duty of care, to avoid charges of negligence, or simply out of general courtesy, they might have cautioned their readers prior to presenting the blanket suggestion of running Tor relays75.
The key point here is that by leaving out the potential pitfalls of the various case studies, Brunton and Nissenbaum endanger the reader, presenting what amounts to misinformation by omission. The project of the book would have been much better served had each case study analysis included a ‘potential risks’ section, so as to better enable one to make informed, situational cost-benefit analyses.


Ultimately, Obfuscation ends in a terminal collapse under the weight of its own contradiction. The book’s conclusion supports data collection by the state, stating that “[p]rivacy does not mean stopping the flow of data; it means channeling it wisely and justly to serve societal ends and values”76. This stands in stark contrast to its vehemently anti-data-gathering introduction, “properly deployed obfuscation can aid in the protection of privacy and in the defeat of data collection, observation, and analysis”77, thus seemingly affirming Brunton and Nissenbaum’s own cautionary mid-way note that “[p]rivacy is a complex and even contradictory concept, a word of such broad meanings that in some cases it can become misleading, or almost meaningless”78. And yet, this seeming recourse to ambivalence and contradiction also masks the underlying pernicious malice of normalizing state intervention as manifested throughout Obfuscation, whether in generalities such as exculpating mass surveillance, in particularities such as legitimizing mass civilian execution achieved via the assistance of obfuscation strategies by codifying it as a ‘positive purpose’, or in setting users up to fail via misinformative case studies. Obfuscation is thus doubly dangerous, both in its normative function as a piece of state-sponsored ideological propaganda presenting mass government surveillance as a legitimate and positive aim, and as a compendium of technical misinformation, with both operations masquerading under the subtitle of what the text is decidedly not: a user’s guide for privacy and protest.


Works Cited

Ashley, Robert P., Jr. “US Army Deputy Chief of Staff, G-2 Reading List 2016”. United States Army Intelligence, 2016.

black, jeff. “Don’t use on YouTube if signed-in”. Reviews for AdNauseam, November 2015.

Brodie, Bernard and Fawn M. Brodie. From Crossbow to H-Bomb: The Evolution of the Weapons and Tactics of Warfare (Revised and Enlarged Edition). Bloomington: Indiana University Press, 1973.

Brunton, Finn and Helen Nissenbaum. Obfuscation: A User’s Guide for Privacy and Protest. Cambridge: The MIT Press, 2015.

Cagle, Matt. “Facebook, Instagram, and Twitter Provided Data Access for a Surveillance Product Marketed to Target Activists of Color”. ACLU Northern California, 2016.

Chairman of the Joint Chiefs of Staff (CJCS). Department of Defense (DOD) Dictionary of Military and Associated Terms, 2016.

Cox, Joseph. “The People Who Risk Jail to Maintain the Tor Network”. Motherboard, April 2015.

Doyle, Tony. “Finn Brunton and Helen Nissenbaum: Obfuscation: a user’s guide for privacy and protest”. [Review of Finn Brunton and Helen Nissenbaum, Obfuscation: A User’s Guide for Privacy and Protest, 2015]. Ethics and Information Technology 18:3, September 2016: 237-239.

Electronic Frontier Foundation (EFF). “Legal FAQ for Tor Relay Operators”. Tor, 2004.

Finkel, Meir. On Flexibility: Recovery from Technological and Doctrinal Surprise on the Battlefield. Stanford University Press, 2011.

“Google sends me to a ‘sorry’ page and tells me that it suspects my machine of being infected with spyware or a virus, and even prompts me for a captcha when I try to search. Did the search engine detect TrackMeNot?”. TrackMeNot Help / Frequently Asked Questions.

Horning, Rob. “Hide and Seek: The Problem with Obfuscation”. [Review of Finn Brunton and Helen Nissenbaum, Obfuscation: A User’s Guide for Privacy and Protest, 2015]. Los Angeles Review of Books, November 2015.

Introna, Lucas D. and Helen Nissenbaum. Facial Recognition Technology: A Survey of Policy and Implementation Issues. The Center for Catastrophe Preparedness & Response, 2009.

Janßen, Alex “Yalla”. “Tor madness reloaded”. Blog of too many things, September 2007.

Jones, R. V. Most Secret War: British Scientific Intelligence 1939-1945. London: Penguin Books, 2009.

Kaste, Martin. “When A Dark Web Volunteer Gets Raided By The Police”. All Tech Considered – NPR, April 2016.

Krimsky, Sheldon. Science in the Private Interest: Has the Lure of Profits Corrupted Biomedical Research?. Lanham: Rowman & Littlefield Publishers, 2004.

Lee, Michael. “Austrian man raided for operating TOR exit node”. ZDNet, November 2012.

Lerner, K. Lee and Brenda Wilmoth Lerner (eds.). Encyclopedia of Espionage, Intelligence, and Security, Volume 2: F-Q. Detroit: Thomson Gale, 2004.

Lorber, Azriel. Ready for Battle: Technological Intelligence on the Battlefield. Lanham: Rowman & Littlefield, 2015.

mikeperry. “Tips for Running an Exit Node”. The Tor Blog, June 2010.

Neumann, Rico. [Review of Finn Brunton and Helen Nissenbaum, Obfuscation: A User’s Guide for Privacy and Protest, 2015]. New Media & Society, August 2016.

Nissenbaum, Helen. “Information Technology Research (ITR): Sensitive Information in a Wired World”. Award Abstract #0331542, 2008.

Nissenbaum, Helen. Curriculum Vitae. 2016.

“Obfuscation”. The MIT Press, 2016.

Pauli, Darren. “Austrian Tor exit relay operator guilty of ferrying child porn”. The Register, July 2014.

Scott, James C. Weapons of the Weak: Everyday Forms of Peasant Resistance. Yale University Press, 1987.

Suri, Harsh. Towards Methodologically Inclusive Research Syntheses: Expanding Possibilities. Abingdon: Routledge, 2014.

Thomas, Ward. The Ethics of Destruction: Norms and Force in International Relations. Ithaca: Cornell University Press, 2001.

“USAF Intelligence Targeting Guide”. Air Force Pamphlet 14-210. 1998.

“Vision & Mission”. United States Army Intelligence.


  1. Finn Brunton and Helen Nissenbaum, Obfuscation: A User’s Guide for Privacy and Protest (Cambridge: The MIT Press, 2015), pp. 2-3.
  2. Ibid., p. 3.
  3. James C. Scott, Weapons of the Weak: Everyday Forms of Peasant Resistance (Yale University Press, 1987), as referenced in Brunton and Nissenbaum, op. cit., pp. 55-56, 107; though Brunton and Nissenbaum are careful to point out that no “reasonable analogy” (ibid., p. 57) can ultimately be made between Scott’s peasants and Brunton and Nissenbaum’s obfuscators, with the two groups of actors having access to differing resources and encountering disparate “mechanisms of coercion and control” (ibid.).
  4. E.g., Brunton and Nissenbaum, op. cit., pp. 3, 9, 55, 57-58, 62, 79.
  5. The (“)weak(”) (Brunton and Nissenbaum appear to be inconsistent in their punctuation of the term, only sporadically deploying it in shudder quotes (e.g., ibid., pp. 9, 57), while at other times electing to forego them (e.g., ibid., pp. 3, 55, 58, 62, 79), albeit with no explanation as to any potential saliency of the distinction) being defined as “the situationally disadvantaged, those at the wrong end of asymmetrical power relationship” (ibid., p. 3), with weakness in turn further defined as the “obligat(ion) to accept choices we should probably refuse” (ibid., p. 55).
  6. Robert P. Ashley, Jr., “US Army Deputy Chief of Staff, G-2 Reading List 2016”, United States Army Intelligence, 2016, p. 14.
  7. Ibid., p. 1.
  8. Cf. the other G-system army staffing classifications: G-1 (Personnel), G-2X (Counterintelligence), G-3 (Operations), G-4 (Logistics), G-5 (Plans), G-6 (Command, Control, Communications, and Computer Systems), G-7 (Information Operations) (Chairman of the Joint Chiefs of Staff (CJCS), Department of Defense (DOD) Dictionary of Military and Associated Terms (2016)); see also: K. Lee Lerner and Brenda Wilmoth Lerner (eds.), Encyclopedia of Espionage, Intelligence, and Security, Volume 2: F-Q (Detroit: Thomson Gale, 2004), p. 47.
  9. “Vision & Mission”, United States Army Intelligence.
  10. Ashley, op. cit.
  11. Misinformation being predicated on the good faith assumption that the presented information is not intentionally flawed, only incidentally so (cf. spreading disinformation: the deliberate and knowing propagation of false information).
  12. Given such ineffectual, booby-trapped privacy strategies presented throughout the text, the book’s inclusion in the mentioned military reading list may thus indeed effectively amount to an inadvertent counter-countersubversion, wherein a text sponsored by military intelligence and functioning as a piece of sabotage to quell dissident acts of subversion, may have inadvertently ricocheted back into suggested military reading material.
  13. E.g., Tony Doyle, “Finn Brunton and Helen Nissenbaum: Obfuscation: a user’s guide for privacy and protest”. Review of Finn Brunton and Helen Nissenbaum, Obfuscation: A User’s Guide for Privacy and Protest, 2015. Ethics and Information Technology (18:3, September 2016), pp. 237-239; Rob Horning, “Hide and Seek: The Problem with Obfuscation”. Review of Finn Brunton and Helen Nissenbaum, Obfuscation: A User’s Guide for Privacy and Protest, 2015. Los Angeles Review of Books (November 2015); Rico Neumann. Review of Finn Brunton and Helen Nissenbaum, Obfuscation: A User’s Guide for Privacy and Protest, 2015. New Media & Society (August 2016).
  14. Brunton and Nissenbaum, op. cit., p. 1.
  15. Ibid.
  16. Lest an adversary eventually present the charge that a disproportionate amount of what is ostensibly aimed at being a comprehensive book review is overly fixated on a solitary case study discussed therein, the following counter-arguments are preemptively deployed: 1) Brunton and Nissenbaum themselves describe said solitary example as being of crystalline, singular importance (“(t)his may well be the purest, simplest example of the obfuscation approach” (ibid., p. 8)); 2) said example is repeatedly referenced throughout the entirety of the text (e.g., ibid., pp. 12, 26, 63, 64, 88, 92); 3) said example, despite being a single example, is nonetheless generally demonstrative and illustrative of Brunton and Nissenbaum’s larger project of rendering the readership acquiescent to a palatable mass surveillance state, which likewise permeates the entirety of the text and is thus of repeated and consistent relevance.
  17. Case 1.1: ‘Chaff: defeating military radar’ (ibid., pp. 8-9).
  18. Ibid., p. 8.
  19. Ibid., p. 30.
  20. Ibid., pp. 30-31.
  21. Ibid., p. 64.
  22. Ibid., p. 31.
  23. Ibid., p. 88.
  24. Ibid., p. 64.
  25. Ibid., p. 8.
  26. Ibid.
  27. Ibid.
  28. Ibid., p. 58.
  29. Ibid., pp. 21-24.
  30. Ibid., pp. 33-35.
  31. Ibid., p. 8.
  32. Ibid.
  33. Ibid., p. 2.
  34. Ibid.
  35. Ibid.
  36. E.g., Case 2.6: ‘Swapping loyalty cards to interfere with analysis of shopping patterns’ (ibid., pp. 28-29) — 482 words, cf. Case 1.3: ‘CacheCloak: location services without location tracking’ (ibid., pp. 12-13) — 368 words; Case 2.9: ‘Obfuscation of anonymous text: stopping stylometric analysis’ (ibid., pp. 31-33) — 897 words, cf. Case 1.1: ‘Chaff: defeating military radar’ (ibid., pp. 8-9) — 510 words; etc. Lest an emphasis on case study length appears to be a particularly picayune point, the reader is advised to recall that this is explicitly a factor presented as justification for the text’s categorization schema by Brunton and Nissenbaum themselves; this review is thus merely exploring its potential efficacy.
  37. Ibid., p. 58.
  38. Ibid., p. ix; specifically, co-author Nissenbaum appears to have been awarded $500,000, of a total grant of $4,453,881, from the “Air Force Office of Scientific Research (AFOSR), Program of the University Research Initiative (MURI), Collaborative Policies and Assured Information Sharing, ONR BAA 07-036” (Helen Nissenbaum, Curriculum Vitae, 2016).
  39. Funding bias being not just a situation in which “private funding can bias the outcomes of studies towards the interests of the sponsor” (Sheldon Krimsky, Science in the Private Interest: Has the Lure of Profits Corrupted Biomedical Research? (Lanham: Rowman & Littlefield Publishers, 2004), p. 146), but also potentially arising a priori, in the choice of work to be conducted in the first place: “(o)ften educational researchers have to seek external funding to make their research financially viable. A bias may be introduced at this stage if some types of primary research are more likely to be funded than others” (Harsh Suri, Towards Methodologically Inclusive Research Syntheses: Expanding Possibilities (Abingdon: Routledge, 2014), p. 90).
  40. Brunton and Nissenbaum, op. cit., p. 58.
  41. Ibid., p. 50.
  42. Ibid., pp. 48-50.
  43. Ibid., p. 49.
  44. See e.g., Bernard and Fawn M. Brodie, From Crossbow to H-Bomb: The Evolution of the Weapons and Tactics of Warfare (Revised and Enlarged Edition) (Bloomington: Indiana University Press, 1973), p. 211; R. V. Jones, Most Secret War: British Scientific Intelligence 1939-1945 (London: Penguin Books, 2009), p. 299; Azriel Lorber, Ready for Battle: Technological Intelligence on the Battlefield (Lanham: Rowman & Littlefield, 2015), p. 92.
  45. The absence is rendered all the more striking owing to the fact that Brunton and Nissenbaum’s reference for this case study (Meir Finkel, On Flexibility: Recovery from Technological and Doctrinal Surprise on the Battlefield (Stanford University Press, 2011)) even mentions that Germans had likewise experimented with chaff on the very same page (ibid., p. 125) that Brunton and Nissenbaum cite (Brunton and Nissenbaum, op. cit., p. 99).
  46. Brunton and Nissenbaum, op. cit., p. 88.
  47. Ibid., pp. 87-88.
  48. Ibid., p. 88.
  49. Ward Thomas, The Ethics of Destruction: Norms and Force in International Relations (Ithaca: Cornell University Press, 2001), p. 169.
  50. “USAF Intelligence Targeting Guide”, Air Force Pamphlet 14-210, 1998, p. 180.
  51. Thomas, op. cit., p. 131.
  52. Ibid., p. 132.
  53. Ibid.
  54. Ibid.
  55. The few sections of the text wherein the tone regarding government interference is explicitly cautionary (e.g., Brunton and Nissenbaum, op. cit., pp. 52, 74) are jarring precisely due to their incongruity with the overarching theme throughout the rest of the text.
  56. Ibid., pp. 89-90.
  57. Ibid., p. 89.
  58. Ibid., p. 94.
  59. Ibid.
  60. Helen Nissenbaum, “Information Technology Research (ITR): Sensitive Information in a Wired World”, Award Abstract #0331542, 2008. Nissenbaum is certainly no stranger to government-backed surveillance research, having also previously worked under the auspices of the Department of Homeland Security (alongside Lucas D. Introna) to produce a report on facial recognition technology which came to the conclusion that said technology “must function as part of a intelligence and security infrastructure” (Lucas D. Introna and Helen Nissenbaum, Facial Recognition Technology: A Survey of Policy and Implementation Issues (The Center for Catastrophe Preparedness & Response, 2009), p. 46) (despite presenting token, mild cautions of said technological deployment, the report nonetheless seeks to normalize facial recognition as part of state intelligence and security gathering).
  61. Brunton and Nissenbaum, op. cit., p. ix; specifically, with Nissenbaum being awarded $406,000, of a total grant of $12,500,000 (Nissenbaum, Curriculum Vitae, op. cit.).
  62. E.g., Matt Cagle, “Facebook, Instagram, and Twitter Provided Data Access for a Surveillance Product Marketed to Target Activists of Color”, ACLU Northern California, 2016.
  63. Recall the earlier distinction between mis- and disinformation; the benefit of the doubt is here being extended.
  64. Brunton and Nissenbaum, op. cit., pp. 13-14.
  65. “Google sends me to a ‘sorry’ page and tells me that it suspects my machine of being infected with spyware or a virus, and even prompts me for a captcha when I try to search. Did the search engine detect TrackMeNot?”, TrackMeNot Help / Frequently Asked Questions.
  66. Brunton and Nissenbaum, op. cit., pp. 26-27.
  67. jeff black, “Don’t use on YouTube if signed-in”, Reviews for AdNauseam, November 2015.
  68. Brunton and Nissenbaum, op. cit., pp. 19-20.
  69. Ibid., p. 20.
  70. Unlike, for instance, the Electronic Frontier Foundation’s “Legal FAQ for Tor Relay Operators” (Tor, 2004), Brunton and Nissenbaum do not make any distinction between bridge or middle relays and exit relays; a casual reader of Obfuscation may thus be unaware of the risks of checking the ‘run exit relay’ versus the ‘run non-exit relay’ option during relay configuration, and may unknowingly and significantly increase their chances of being subject to state neutralization in the forms of harassment as manifested via property raids, and possible conviction, should the more high-risk exit relay option be unwittingly selected.
  71. See, e.g., Alex “Yalla” Janßen, “Tor madness reloaded”, Blog of too many things, September 2007; Michael Lee, “Austrian man raided for operating TOR exit node”, ZDNet, November 2012.
  72. E.g., Darren Pauli, “Austrian Tor exit relay operator guilty of ferrying child porn”, The Register, July 2014.
  73. Though more recent examples are unfortunately all too prevalent as well (e.g., Joseph Cox, “The People Who Risk Jail to Maintain the Tor Network”, Motherboard, April 2015; Martin Kaste, “When A Dark Web Volunteer Gets Raided By The Police”, All Tech Considered – NPR, April 2016).
  74. “Obfuscation”, The MIT Press, 2016.
  75. Which is not to say that there are no risk mitigation strategies for running said relays (see, e.g., mikeperry, “Tips for Running an Exit Node”, The Tor Blog, June 2010), only that Brunton and Nissenbaum neither discuss nor mention any, let alone present the potential need for said protective strategies in the first place.
  76. Brunton and Nissenbaum, op. cit., p. 98.
  77. Ibid., p. 2.
  78. Ibid., p. 45.