Empty Internet

Article Information

  • Author(s): Olga Goriunova
  • Affiliation(s): London Metropolitan University
  • Publication Date: November 2011
  • Issue: 1
  • Citation: Olga Goriunova. “Empty Internet.” Computational Culture 1 (November 2011). http://computationalculture.net/empty-internet/.


Abstract

Review of Eli Pariser, The Filter Bubble: What the Internet Is Hiding from You, New York/London: Penguin, 2011, 304 pages, ISBN 978-0670920389.


The Filter Bubble pronounces, in populist terms, the agenda that software studies has been developing since the mid-1990s [1]: everything is governed, enframed and molded by software-mediated processes, while the systems and people creating and overseeing such processes have little ability or power to subject them to doubt, debate, analysis, reinterpretation or control by the public, philosophy, democratic institutions or a humanist system of coordinates – whatever each of these may still be able to offer.

The Filter Bubble’s author, Eli Pariser, is an online campaigner and now President of the campaigning NGO MoveOn.org. Pariser does not tackle software or computation in general, but focuses on one current aspect of software development: personalisation, his biggest cases being the systems deployed by Google, Facebook, Amazon and other, less visible companies dealing in the digital footprints of netizens.

Google’s PageRank made the world’s online content subject to a procedural equivalent of the exact sciences’ citation index principle, making it nearly impossible to obtain information from any source other than a large mainstream one (relying on a link, its position and its ‘weight’ as value-assigning ‘signals’). PageRank helped concentrate rather than disperse power on the Internet, limiting the possibility of making unexpected or dissenting voices heard. Taking the principle further, harnessing clicks (another form of value-assigning signal) and predicting the next ones became one of the biggest business preoccupations of the decade. When you browse while signed in to a Gmail account and visit a website that uses (Google’s) DoubleClick ad service, Google gathers your personal data history and serves you advertisements on the basis of it, while also delivering such advertisements on third-party sites. Whether you own a Gmail account or not, Google personalizes your search, also by tracking your IP address, obtaining information about your browser, and taking into account such snippets of data as the times at which you log in and out. The list is not exhaustive, and a full account of personalization in action is unobtainable (though there are empirically grounded reflections on its constitutive tension between coherency, randomness and violence in a recent research project by Feuz, Fuller and Stalder [2]). Pariser argues that the personalization of search results, which Google rather quietly announced and switched on in December 2009, is further aided by the personalization of the Web by Facebook (through the Like button and the EdgeRank algorithm). Facebook’s EdgeRank makes sure that the most prominently displayed posts from your friends are those calculated as having the most affinity to you, by being the most recent and of the type of content most liked (a leftist, Pariser recounts the story of his conservative friends disappearing from the top of his Facebook page). While, as The Filter Bubble claims, more and more people rely on recommendations and peers from their social networks to obtain news, advice or novel data (and if they search independently of those, they will be served personalized search results producing another bubble), the data already left behind predefines the range of things traceable and findable online.
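To make the mechanics concrete, here is a minimal sketch, in Python, of a feed ranker in the commonly reported EdgeRank formulation (affinity × interaction weight × time decay, summed over a post’s interactions). All function names, weights and decay constants below are invented for illustration; this is not Facebook’s actual code.

    import math
    import time

    # Hypothetical weights per interaction type: richer interactions count more.
    EDGE_WEIGHTS = {"comment": 4.0, "share": 3.0, "like": 1.0, "click": 0.5}

    def edge_score(affinity, edge_type, age_seconds, half_life=86400.0):
        """Score one interaction: the viewer's affinity to the post's author,
        weighted by the interaction type and decayed by the post's age."""
        decay = math.exp(-age_seconds * math.log(2) / half_life)
        return affinity * EDGE_WEIGHTS.get(edge_type, 0.0) * decay

    def rank_feed(posts, affinities, now=None):
        """Order candidate posts by the summed score of their interactions.

        posts: dicts like {"author": ..., "edges": [(edge_type, timestamp), ...]}
        affinities: author -> viewer affinity in [0, 1], itself derived from
        the viewer's past behaviour.
        """
        now = now or time.time()
        def post_score(post):
            a = affinities.get(post["author"], 0.0)
            return sum(edge_score(a, etype, now - ts)
                       for etype, ts in post["edges"])
        return sorted(posts, key=post_score, reverse=True)

Even in this toy form, the feedback loop Pariser describes is visible: posts from friends the viewer rarely interacts with carry near-zero affinity, score low, sink out of sight, attract still fewer interactions, and so disappear – exactly the fate of his conservative friends.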

The harnessing and circulation of personal data for commercial interests profoundly changes not only what we currently, and mostly unknowingly, experience online, but also what we can in principle experience in the future. Being online now means encountering information that one has supposedly already expressed a liking for, and isolated ‘filter bubbles’ populate networks, serving only highly individualized data products. Unawareness of being inside such a bubble, and an inability to leave it, affects learning, shapes the process of subjectification, severely limits what one can become, and threatens democratic processes (owning a Gmail account means losing some constitutional protections: the police need no judge’s warrant to search it). Pariser claims that when no extraordinariness is possible, personalization poses a threat to public life, producing a ‘public sphere sorted and manipulated by algorithms.’

The concentration of power over data-handling and its unaccountability are the two themes that Pariser manages to demonstrate very well. While some of the facts he bombards the reader with are relatively well known, such as the scale of the personal data industry, with companies such as BlueKai and Acxiom selling personal data footprints to ‘the highest bidder’, some information helps reveal the scope of such changes, many of which are still to come. Of this order is his discussion of the Kindle, which feeds Amazon data on, for instance, which pages in a book the reader highlighted or skipped. Not only will this inform future recommendations from Amazon itself; it does not require a Cassandra to foresee how it will influence publishing and writing. It is not that people will migrate to the Kindle and similar devices to read Dostoevsky or even Chekhov, but that new types of books altogether will be written, ones that only make sense to read on a Kindle and to own, maybe only intermittently, in a format designed to fit the information-handling behaviour of Kindle readers. Such personalization will change not only how and what we read, but what is written, and how.
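By way of illustration only, the following sketch shows how reading telemetry of the kind Pariser describes (pages read, highlighted or skipped) might be reduced to a per-book engagement signal for a recommender. Every name and weight here is an assumption, a toy model rather than Amazon’s actual pipeline.

    from dataclasses import dataclass, field

    @dataclass
    class ReadingLog:
        """Telemetry a device could report for one reader and one book."""
        total_pages: int
        pages_read: set = field(default_factory=set)
        pages_highlighted: set = field(default_factory=set)
        pages_skipped: set = field(default_factory=set)

    def engagement(log: ReadingLog) -> float:
        """Crude engagement score in [0, 1]: completion rate, boosted by
        highlighting and penalised by skipping (all weights invented)."""
        completion = len(log.pages_read) / log.total_pages
        highlights = len(log.pages_highlighted) / log.total_pages
        skips = len(log.pages_skipped) / log.total_pages
        return max(0.0, min(1.0, completion + 0.5 * highlights - 0.5 * skips))

Once such a score exists for every reader and every book, it is a short step to ranking future acquisitions by predicted engagement and, as the Cassandra scenario above suggests, to commissioning books optimized for it.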

The twentieth century, represented by Zamyatin and Orwell, was wary of the dangers of collectivization and the loss of individuality; the twenty-first century appears to inhabit the loss of the collective and of political imagination, and the reign of over-individuation mediated by technology. There are a few questions Pariser asks: how personalization works, what it works towards, and who is making it work. As with software studies, Pariser maintains that a programmer or company does not necessarily have a cultural or political awareness of the power it seizes or is engulfed by, nor a ready capacity to construct such an awareness, being precluded by factors ranging from educational handicaps to business interests. Neither programmers nor large corporations such as Google are able or willing to reflect on the power of code and submit it to public scrutiny.

But the scariest finding of The Filter Bubble is not that a cordon seems slowly to be drawn around each agent capable of engaging with data through digital devices, or that it is drawn by forces beyond public control, but that such a cordon is controlled by nobody in particular, and that there is probably no one who fully understands how it works. The Filter Bubble is at some point about what the author cannot find out and, somewhat as a result, there is a great lack of technical specificity. In this respect, the book presents a sharp contrast to Jonathan Zittrain’s The Future of the Internet: And How to Stop It. [3] Maybe we have already failed to stop it. The technical specification Pariser was unable to obtain might not even exist. Pariser suggests that there might be no one able to scrutinize, adjust, control and understand the code that runs the mechanics of personalization, all the way up to the Google engineers whose own algorithms have, even to them, long become a species of chaotic system. He suggests that they work on the code only by observing and reacting to how such a complex system responds to a proposed interaction or intervention, without any pretense of understanding why it responds as it does. (Feuz, Fuller and Stalder offer a different interpretation, in which commercial revenues rate high [2].)

Pariser links this to the strangest part of the book – his account of geeks as deeply asocial, traumatized weirdos who only manage to compose themselves through the promise of almighty control over the programmable world (this must be something personal). For Pariser, the reality is the inverse of this admittedly public fantasy: geeks, the hierophants of code, are as much subject to its unruliness as everyone else. This insight can potentially help formulate interesting ontological and epistemological questions for the software explorers to come, whose mode of work now includes infiltration, the collection and interpretation of evidence, and the testing of a closed object, rather than the outright conversion of programmers to philosophy.

To come back to the question of geekdom: while the author suggests that teenage geeks are an abnormality, unable to accept, and not versed in, social reality (or vice versa), one might claim that it is profoundly normal not to be able to accept society upon encountering it as a sixteen-year-old. It would indeed be tragic if such an embrace could happen. Here, Pariser misses a romantico-political opportunity: the way to engage with a world one does not relish or aspire to inhabit is to change it. For instance, Ullman’s account of ‘anarcho-capitalists’ offers a richer explanatory and mobilizing power, not to mention Kelty’s figure of ‘recursive publics’. [4]

However, to be fair, Pariser tries to entertain the politico-ideological allowances of the geeks he meets. Sarah Palin seems a cutie in relation to such figures and their ideas. The version of the future given by Peter Thiel (co-founder and ex-owner of PayPal), who is cited, amongst other things, as saying that the increase in beneficiaries of welfare and ‘the extension of franchise to women’ are ‘particularly tough for libertarians’, is not scary but uninterestingly retrograde and sci-fi at the same time. Programmers and designers are in a position of power, yet in all the accounts Pariser is able to obtain or create, they do not care, do not realize, or are just idiosyncratically crazy, though not in any way that opens things up. Thiel is described by Mark Zuckerberg as a mentor, sits on the Facebook board and, as Pariser shows, can be seen as one fine example of the people shaping technology to govern the present and future.

The main drawback of the book is its lack of ontological and epistemological grounding. Pariser oscillates between technocratic and humanist statements and resolves the tension between them by citing Kranzberg’s first law – ‘Technology is neither good nor bad; nor is it neutral’ – a kind of oxymoronic synthesis that is not taken any further. This inability to choose a conceptual standpoint and place the problem within a socio-politically and historically grounded conceptual ecology, which might then show the routes away and out, seriously undercuts the book’s potential to produce anything other than a bit of a scare. The unrealistic conclusions are a direct result of the poor state of the book’s conceptual spine.

Pariser suggests three types of solutions, to be upheld, correspondingly, by users, companies and governments. The users’ narrative links straight back to the Enlightenment, a politically failing belief in the unquestionable value of developing the self into a better, brighter and more interesting individual (being ‘interesting’ to each other, though, is also something that Vinciane Despret currently proposes as a means to construct an ecological balance [6]). The ‘software studies to the masses’ formula, and the appeal to stretch everybody towards the bright sun of intelligence and knowledge, are reminiscent of all that was best in the Soviet educational system that produced dozens of mathematics and physics geniuses: a project entirely absent from the agendas of the right-wing governments coming to power across the world. Pariser’s second set of solutions is an appeal to companies to open their code and subject its processes to public scrutiny, to disclose how the data is used, to cultivate ‘public space and citizenship’, and to provide filters that expose rather than hide. One wonders if Pariser, able to dream of Google open-sourcing the mechanics of its search and personalization engines, has not yet discovered Marx. The third set of actors, governments, could possibly force corporations to provide ways of opting out of the filter bubble, while engaging in the creation of legislation to protect personal information and control online tools. Whether this is doable in any foreseeable future remains to be seen. Maybe governments will get their hands on this issue once they sort out the world’s financial crisis and deal with neoliberalism, so a little bit of a wait may be involved.

Other drawbacks of the book include three chapters of nearly behaviorist, heavily psychology-driven explanation of how thinking, innovation and creativity take place and what happens if we stop coming into contact with anything other than narrowly defined versions of our existing habits and interests. Pariser relies, nearly exclusively, on behavioural economists and psychologists for his arguments against the filter bubble. The apparent desire to draw upon science-like explanations to prove the argument is to be commiserated with, like any envy, even ‘physics envy’, but it is unlikely to be pardonable: such a strategy in fact relies on the same logic the book inherently criticizes (the exact sciences being in a position of knowledge and of the power to control data-handling to a high degree of efficiency). However, Pariser’s use of Dewey and Lippmann is arguably good, and though his references are not always correct (his introduction to cybernetics is not the best example of the genre), their presence and application are commendable in a non-academic book (in this and other respects, The Filter Bubble outperforms Morozov’s oeuvre [5]).

Aside from the above, the value of The Filter Bubble for academics, especially junior ones, can be found in unexpected places. One is the structure: its first two chapters are perfect, and everyone who has gone through publishing has assimilated the truth that it is only those two that matter, both for the editor’s first impressions and for the initial reviews. Unlike many other (academic) books that never set themselves right after the third-chapter plunge, irreversibly damaged by the author’s overadaptation to the priorities and mechanisms of the publishing industry (an algorithm itself), The Filter Bubble becomes very good again towards its last third.

In fact, in parts it becomes so well done that it is able to infect the reader with fear, gloom and sadness. The apocalyptic side of the narrative is tempered by the writer’s call for research and public intervention; however, a few other scenarios might seem possible. First of all, at the moment, the Internet is not as integrated and does not yet quite work the way described in this book. Google’s personalization can seem quite random and commercially driven rather than capable of serving you your self. Google’s version of yourself and your self are two different entities. True, it has become impossible to find the kinds of things I used to be able to locate in the 1990s. But, for instance, having Russian set as an OS interface language makes Google serve details of printer shops in Moscow while I am sitting in London, searching for local ones in English. It has become more dysfunctional and absurd rather than enclosing. I will not even mention my trip to a countryside event 300 km from Moscow, during which an attempt to navigate using Google Maps and a GPS service quickly resulted in a complete failure to find a way out of the woods, meadows and fords.

Following from this, and secondly, there is a lingering hope that the various interlocking levels and bubbles of personalization may simply fail to work properly. This is probably a bad conclusion for a review, but it is the main finding of the soon-to-be-extinct humanities: collapse takes different forms and phases; it may present itself continuously, and before order is sufficiently imposed.

On a completely serious note, the book calls for a new politico-technical agency, awareness, analysis and narrative, and the more such books – similar and dissimilar to this one – are written, the quicker a variety of such analyses and agencies can be constructed and usefully deployed.

References
1. William Bowles, ‘The Macintosh Computer: Archetypal Capitalist Machine?’ 1987, online at: http://art.runme.org/1046901687-6008-0/RETROFUTURISM.htm/; Simon Pope and Matthew Fuller, ‘WARNING: This Computer Has Multiple Personality Disorder’, 1995, online at: http://web.archive.org/web/20010418182313/http://www.axia.demon.co.uk/CompMPD.html/; Matthew Fuller, ‘Visceral Façades: taking Matta-Clark’s crowbar to software’, 1997, http://web.archive.org/web/20010426051211/http://www.axia.demon.co.uk/visceral.html/, all accessed 21 Sept 2011.
2. Martin Feuz, Matthew Fuller, Felix Stalder, ‘Personal Web searching in the age of semantic capitalism: Diagnosing the mechanisms of personalisation’, First Monday, Volume 16, Number 2, 7 February 2011, online at: http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/3344/2766/, accessed 21 Sept 2011.
3. Jonathan Zittrain, The Future of the Internet: And How to Stop It, London: Penguin, 2009.
4. Ellen Ullman, Close To The Machine: Technophilia and Its Discontents, San Francisco: City Lights Books, 1997; Christopher Kelty, Two Bits: The Cultural Significance of Free Software and The Internet, Durham NC: Duke University Press, 2008.
5. Evgeny Morozov, The Net Delusion: How Not to Liberate The World, London: Allen Lane, 2011.
6. Vinciane Despret, ‘Experimenting with Politics and Happiness — through Sheep, Cows and Pigs’, talk given at the ‘Unruly Creatures: The Art and Politics of the Animal’ conference, Centre for Arts and Humanities Research, Natural History Museum, London, 2011.