From the First to the Zero Person Perspective: Neutering the Mediated Life of Affinity

Article Information

  • Author(s): Greg Elmer
  • Affiliation(s): Toronto Metropolitan University
  • Publication Date: July 2023
  • Issue: nine
  • Citation: Greg Elmer. “From the First to the Zero Person Perspective: Neutering the Mediated Life of Affinity.” Computational Culture nine (July 2023). http://computationalculture.net/from-the-first-to-the-zero-person/.


Abstract

This short commentary borrows from Roland Barthes’ theory of The Neutral to develop conceptual alternatives to profiling algorithms. Of particular concern is the hegemony of first person perspectives or techniques of personalization. The paper rejects recent efforts to conceptualize digital media properties as sites of unlimited and unfettered first person (user) choices, in favour of a zero person perspective. The paper speculates that the building blocks of a zero person perspective would start by decentering the platform panorama, untethered from the first person user. The zero person perspective is offered as a means of addressing digital harms produced by techniques of personalization, as amplified by first person interfaces and algorithms.


‘Another Analytics is Possible’ says the sticker on my laptop, one of the more intriguing items included in the swag bag of a scholarly conference in London. As a slogan it’s not the most radical, provocative or even comprehensible at first glance. It might be a critique of existing computational analytics – a concern with the values that guide automated decision-making systems. Given that it adorns my computer, it might also be a critique of personal computing, what I refer to in this short commentary as the first-person perspective, in the larger context of digital media properties, platforms, and interfaces. Somewhat akin to the user, the first person is the cornerstone of digital media logics, protocols, and algorithms — the networked digital subject if you will. In this commentary, though, I extend the notion of the first person well beyond the individual user. Rather, it is the interface between person and platform, and the algorithms that manage the relationship between person and media more generally, that I refer to here as the first-person perspective. And if I do this, it is because, more significantly, it enables me – following Barthes 1 in some respects – to posit a contrasting ‘neutral’ – or zero – person perspective capable of ‘baffling’ the prevailing paradigm of algorithmic life: a computational life of affinity (e.g. the filter bubble) on the one hand and the mediated politics of free will (the freedom to choose ‘different’ content) on the other. The building blocks of this critique rest on the significance of the first person as a perspective, as a system that curates, highlights and/or recommends particular media and news content, goods and services, and friends for the individual user.

Much has already been written about the harmful effects of a personalized, algorithmic life. 2 In this short commentary I similarly delve into the logics of algorithmic computing to question the technical rules that reproduce and reinforce the first-person perspective. In so doing, though, I move beyond calls for transparency and accountability on first-person platforms 3, in order to posit a depersonalized – a zero-person – perspective. Building upon linguistic, mathematical, and cultural theories, particularly Roland Barthes’ theory of The Neutral 4, the paper will speculate on how we might suspend, move beyond or ‘outside of’ our respective first-person perspectives, challenging the foundational logic that reproduces a life of affinity, a life that works against the circulation of different opinions, discomforting experiences, and ultimately solutions to social, political and economic injustices. How can such a first-person perspective be nullified or ‘neutered’, as Barthes suggests, or otherwise de-personalized? And what would a resulting zero-person perspective look like? How would it displace the life of affinity produced by first-person algorithms and interfaces, and the bifurcated paradigm that merely posits personal choice as an antidote to the seemingly infinite amount of content across media platforms and properties?

For digital humanities and critical information studies scholars, the first-person perspective will resonate with a number of likeminded concepts such as profiling, collaborative filtering, recommendation systems, affinity networks, clusters, bubbles, homophily, cybertypes, the quantified self, and so on. 5 Of course there are important distinctions to be made within this list of concepts, but overall they exhibit at least two remarkably common sets of logics. First, to varying degrees they recognize that contemporary media, and in particular digital media, seek to personalize experiences and content, while at the same time rationalizing their own business models and their relationships to friends, businesses, vendors, advertisers and others. The supply chain, the just-in-time system – all these tropes of post-Fordist business require not just management and coordination but probabilistic efficiency: getting the right message, experience, service or commodity to clusters of users who, based on specific sets of factors gleaned from data, are likely to appreciate it, consume it, pay for it, vote for it. This of course also means excluding some groups and communities from access to such goods and services. In short, the first-person perspective guides and clusters an individual’s entrée into this discriminatory, aggregated economy.
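To make that targeting logic concrete, the following is a minimal and entirely hypothetical sketch: each user’s data-derived profile is scored against an aggregate cluster of ‘likely appreciators’, and only those above a threshold receive the message. The features, profiles and threshold are invented for illustration and do not reproduce any actual platform’s system.

```python
# A minimal, hypothetical sketch of cluster-based targeting: score each user's
# profile (factors gleaned from data) against the aggregate 'likely appreciator'
# profile and deliver the message only to users above a threshold, which also
# means excluding everyone below it. All values here are invented.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    """Similarity between two feature vectors, from 0 (unrelated) to 1 (aligned)."""
    norm = (dot(a, a) ** 0.5) * (dot(b, b) ** 0.5)
    return dot(a, b) / norm if norm else 0.0

# Feature order: [buys sportswear, reads finance news, streams true crime]
cluster_centroid = [0.9, 0.1, 0.7]          # the aggregate profile of likely buyers
users = {
    "u1": [0.8, 0.0, 0.9],
    "u2": [0.1, 0.9, 0.0],
    "u3": [0.7, 0.2, 0.5],
}

targeted = [u for u, profile in users.items()
            if cosine(profile, cluster_centroid) > 0.8]
print(targeted)  # ['u1', 'u3'] receive the ad; 'u2' never sees it
```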

Second, such logics extend well beyond media, to many other social practices and sectors of the economy, notably healthcare, banking and insurance, housing, policing, politics, dating, etc. The first person is therefore rendered operational in systems that manage the patient, the investor, the incarcerated, and the voter. David Lyon captured this nicely when he referred to the surveillance society as constituted by ‘leaky’ databases 6, though the first-person perspective is not merely a datafied subject or apparatus that leaches across identities and data infrastructures. Rather, the first-person perspective is a system that searches for – and reproduces – commonalities, proximities and affinities. The first-person perspective is not a singular, isolated perspective on the world; rather, it is increasingly a decentered, relational and computational one. The first-person perspective is a system of habit that (re)produces a pervasive life of affinity, 7 not a closed life, but clearly one that privileges so-called friction-free, user-friendly experiences, rationalized temporalities, and the efficient circulation of business and capital.

The first-person perspective can be exceptionally convenient, cutting through the clutter of consumer capitalism and the information attention economy, yet at the same time it can also be exceptionally boring, eerily familiar and repetitive. Who hasn’t had unbought shoes or sweaters follow them across the recursive advertising spaces of the internet? Yet this digital hard sell is hardly the crux of the problem; it is rather just emblematic of a larger social problem produced by a life of affinity, a first-person life. The more serious problems produced by the first-person perspective are, in no particular order: cyber-redlining, the amplification of antisocial and abusive activities online and off, the viral spread of rumours, innuendo, and disinformation, and the reinforcement of ‘group think’ to the exclusion of not only different perspectives and opinions, but also expert knowledges.

To pause briefly, I don’t want to give the impression that the first-person perspective produces a hermetically sealed life, where nothing but previously consumed goods and ‘liked’ opinions flows gently and quietly into a feedback loop. Rather, the first-person perspective is more akin to product placement, where preferred commodities are strategically placed near the eye and hand of the shopper. Does this mean that shoppers never bend down in the supermarket aisle to pick up goods placed near the floor? No, but the architecture of embodied shopping works against it, particularly for individuals with physical ailments or disabilities.

A growing number of scholars, however, have begun to question the pervasiveness of first-person perspectives. Axel Bruns’ recent book Are Filter Bubbles Real? 8 offers arguably the most polemical of such positions. And while Bruns argues that such concepts are overstated, his contribution to the debate reveals a rather questionable reliance upon classical liberal orthodoxy (the power of individual, rational choice) as the only factor that mitigates a first-person life of affinity. For Bruns the mere existence of a multiplicity of media channels, opinions, and content seems to be evidence in itself of common access and exposure to different world views and perspectives online. In such a pluralistic media environment, Bruns goes so far as to suggest that concerns over filter bubbles and other homophilic algorithms are essentially ‘moral panics’. 9 Yet he never offers a competing view, an alternative governing logic or set of values that guides media audiences and users; there is simply individual choice. Are there no systematic constraints on finding countervailing opinions or counterintuitive services and experiences on such platforms?

Bruns’ ‘overstated’ thesis likewise forms the basis of Dubois and Blank’s 10 more focused discussion of the ‘moderating effect of political interest and diverse media’, at first glance an important question about the potential mitigating impact on an algorithmic life of affinity. The authors however frame the debate as either one of unfettered free will (choice) or an ideologically enclosed echo chamber: ‘there are two possible outcomes from a diverse media environment. Individuals may be exposed to information and perspectives which are also diverse or they may select varied media in a way that produces the echo chamber effect.’ 11 Given this framing it’s perhaps not surprising that the authors simply choose to ignore algorithms: ‘We are primarily concerned with the choices individuals make in their news and political information-seeking practices in this study rather than the impact of algorithmic filtering.’ 12 I would likely agree that if we altogether removed algorithms from the equation then other factors determining media choices would stand considerably taller. The problem, though, is that platforms are governed by algorithms. One might even argue that platforms are nothing but mediated interfaces of algorithms — a set of modulated affordances.

Vaccari and Valeriani 13 likewise reject the foundational filter bubble arguments of Sunstein 14 and Pariser 15, arguing that they are ‘at best exaggerated and at worst unfounded’. This forceful claim immediately follows the framing of their book as limited to ‘…politically relevant outcomes of citizens’ use of social media rather than on the technical affordances of those platforms…’ (p. 6). In short, Vaccari and Valeriani dismiss claims of homophilic effects on platform algorithms without themselves studying platform algorithms. Wither the hybrid media system! 16

Other digital media scholars have more subtly displaced the logics of platform algorithms by overstating their invisibility or opacity. Indeed Bruns’ polemic follows this path as well, claiming that the exact algorithmic logic of filter bubbles and other affinity machines ‘remains unknown’. 17 Velkova and Kaun likewise note that algorithms’ ‘…presence and governing capacities seem to have remained invisible and impossible to penetrate…’. 18 Framing algorithms as secretive and ‘black boxed’ 19 has arguably done little to break out of binary frameworks, as is evidenced in Dubois and Blank’s article.

To be sure, digital media companies keep many elements of their algorithms under wraps, from the public, governments and their business competitors alike. However, imprecise knowledge of decision-making systems should not distract us from developing structural critiques of algorithms. 20 Why not start by building upon the considerable amount that we do know about the common first-person logics that govern media properties and platforms? Take for instance Google: the founders of the company first published the underlying logic of their PageRank search and results ranking algorithm in a scientific journal in the same year as the founding of the company (Sergey Brin & Larry Page 1998). Did Brin and Page publicize the exact metrics and logic of the subsequent commercial search engine? Of course not. But in the years since, countless articles have reconfirmed common aspects of the algorithm in updated versions of Google search rebranded as ‘Caffeine’, ‘Panda’, ‘Penguin’, and ‘Hummingbird’. All such updates built upon the core logic of PageRank by increasingly integrating the behaviours and word choices of users. 21
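By way of illustration, the following is a minimal sketch of the published PageRank logic (pages linked to by other well-linked pages score higher), implemented as a simple power iteration over an invented four-page web. It reflects only the core idea of the 1998 paper, not the metrics of the commercial engine or any of its later updates.

```python
# A minimal sketch of the PageRank logic Brin and Page published in 1998:
# a page's score approximates the chance that a 'random surfer' lands on it,
# so pages linked to by other well-ranked pages rank higher. The four-page
# 'web' below is hypothetical.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the pages it links out to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}           # start with equal scores
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                               # dangling page: share evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outlinks:                    # pass rank along each outlink
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(pagerank(web))  # "C" scores highest: it attracts the most, and best-ranked, links
```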

Google’s PageRank is not the only algorithm to be revealed publicly, albeit not fully or exactly. There are many other examples of algorithmic technologies that are neither fully transparent nor entirely black boxed. Netscape, and soon thereafter most other web browsers, integrated ‘cookies’ into their software in an effort to speed up web browsing by establishing a constant ‘state’ between user and website. Cookies were among the first online techniques introduced to personalize online content for users, a key building block in the establishment of the first-person perspective. The technology would also allow e-tailers and other content providers to customize goods and services for repeat customers online. Such a persistent ‘state’, constructed by saving a unique website ID on a user’s personal computer, was included in Netscape’s software documentation, with added details and language revised for each new version. 22 In short, the web cookie (as first-person perspective) was introduced in the manual.
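A minimal sketch of that ‘state’ might look like the following: the server mints a unique visitor ID, asks the browser to store it as a cookie, and recognizes the same ID on every subsequent request. The names, profile store and header format are illustrative only, not Netscape’s (or any vendor’s) actual implementation.

```python
# A minimal sketch of cookie-based 'state': the server issues a unique ID once,
# the browser returns it with later requests, and the site can then recognize
# (and personalize for) the returning visitor. Everything here is illustrative.

import uuid

profiles = {}  # server-side memory keyed by cookie ID

def handle_request(cookie_id=None):
    """Return (response body, Set-Cookie header or None)."""
    if cookie_id is None or cookie_id not in profiles:
        cookie_id = uuid.uuid4().hex                      # mint a unique visitor ID
        profiles[cookie_id] = {"visits": 0}
        set_cookie = f"visitor_id={cookie_id}; Path=/"    # header the browser stores
    else:
        set_cookie = None                                 # browser already holds the ID
    profiles[cookie_id]["visits"] += 1
    body = f"Welcome back, visit #{profiles[cookie_id]['visits']}"
    return body, set_cookie

body, header = handle_request()                 # first visit: cookie is set
returning_id = header.split("=")[1].split(";")[0]
print(handle_request(returning_id)[0])          # second visit: recognized as the same user
```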

In an effort to attract advertisers and encourage the participation of software developers, Facebook has long publicized aspects of its governing algorithms that highlight the relationship between friend networks and the ranking and prominence of content shown in a user’s friend feed. 23 And it’s not hard to find technology industry commentary and analysis following every new revision of the Facebook algorithm. 24 The same was also true of Twitter’s widely publicized 2013 ‘who to follow’ algorithm, modeled on Google’s affinity-based PageRank algorithm. 25
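The general affinity logic described in such publications can be sketched, very schematically, as a friend-of-friend recommender: candidate accounts are scored by how many of a user’s existing followees already follow them. The follow graph below is invented, and real services layer many more signals and graph algorithms on top of this basic logic.

```python
# A minimal sketch of affinity-based recommendation: suggest to a user the
# accounts most followed by the accounts they already follow. The follow graph
# is invented for illustration; it is not Twitter's or Facebook's actual code.

from collections import Counter

follows = {
    "alice": {"bob", "carol"},
    "bob":   {"carol", "dave"},
    "carol": {"dave", "erin"},
    "dave":  {"erin"},
}

def who_to_follow(user, graph, top_n=3):
    """Score candidates by how many of the user's followees follow them."""
    already = graph.get(user, set()) | {user}
    scores = Counter()
    for followee in graph.get(user, set()):
        for candidate in graph.get(followee, set()):
            if candidate not in already:
                scores[candidate] += 1        # shared affinity: a friend-of-friend link
    return scores.most_common(top_n)

print(who_to_follow("alice", follows))  # [('dave', 2), ('erin', 1)]
```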

To summarize, there is enough information in the public domain to confirm that algorithms in one capacity or another guide users to preferred content, and that affinity with — and proximity to — other users, clusters of users, past user behaviours, and the popularity of content and services serve as the overarching logic; hence the prevalence of the first-person perspective, a personalized view of a mediated world. In short, there is more than ample information in public circulation that paints a consistent and persistent picture of the predictive, scoring, profiling, and recommendation systems that pervade media platforms and beyond. This does not suggest stasis, however, or that challenges may not emerge to this predominant logic. The zero person is just one potential imagined perspective, yet its power emanates from a displacement of the predominant algorithmic perspective we continue to see repeated throughout platform iterations. It may be time to ‘look’ beyond the iterative first-person platform and interface.

Neither/Nor: Neutering the First Person

If unfettered choice is not the answer to the first-person life of affinity, then what is? How can we move beyond bifurcated and individualized control/choice frameworks? What critiques persist beyond the transparency paradigm? How can we rethink the positionality of users, the perspective from which life is assembled and rendered visible or present? I’d like to suggest that one way forward is to radically shift positions and perspectives, from one to zero, that is to posit a zero-person perspective as a conceptual grounding to script new forms of media governmentality. Graham Harman (2009) not surprisingly invokes the zero-person in his larger object-oriented ontology, where he “coin(s) the adjective ‘zero-person’ to refer to the reality of any entity apart from its interactions with other entities of any kind.” 26 For Harman, then, the zero-person is an essence much like an inanimate object. As a consequence, the zero person is not a perspective for Harman; in fact he quickly dismisses the first and third person as mere ‘descriptions’ in his attempt to get outside of the body. In the next section of the commentary, I suggest otherwise. Through Barthes’ work The Neutral, I argue that efforts at mitigating a mediated life and politics of affinity can start with an existential move away from the first person toward a neutral or zero-person perspective.

Historians of mathematics, numbers and culture have long been obsessed with the enigma of the number zero. Jessie Szalay argues that zero was first commonly used by the Mayans around 350 CE as a “placeholder”, an as yet undetermined value. 27 Wallin nevertheless argues that this placeholder function sought to differentiate between large sums such as hundreds and thousands. 28 Zero is, however, not solely restricted to questions of numerical or financial value. Szalay also notes that by 458 CE in India, zero came to be equated with common words such as “void,” “sky” or “space”, words used in songs and chants. Finnish linguists have conversely discussed the zero perspective in both linguistic and subjective terms, as a missing person, a null person. But this does not simply imply an absence. Rather, as Laitinen notes, the Finnish zero person is also an ‘accusative form’, a provocation that seeks to understand subjectivity and world views beyond the first-person perspective, or at least not as an a priori starting point. 29

For others such as Wallin, this linguistic accusation refers to the role of the verb, the action or motion. Thus, breaking from the first-person perspective is not merely a recentering of the subject; rather, it represents a move beyond its personalized perspective to a broader set of relations, an apparatus which he posits more as a game, a problem or riddle to be solved. And here, Wallin reminds us that zero, as a corrective to the first person, formed the basis of algorithmic thinking: ‘In the ninth century, Mohammed ibn-Musa al-Khowarizmi was the first to work on equations that equaled zero, or algebra as it has come to be known. He also developed quick methods for multiplying and dividing numbers known as algorithms (a corruption of his name). Al-Khowarizmi called zero ‘sifr’, from which our cipher is derived.’ 30 The establishment of mathematical rules, and the invention of algorithms, was, in short, predicated upon the null, or zero — a concept that served to represent an enigma.

Barthes too posits the neutral or zero-person as a game of sorts. He first started to question the relationship between language, literary form and zero subjectivity in his earlier work Writing Degree Zero. 31 A more fully developed critique of the first person, however, is outlined, albeit in incomplete form, in his posthumously published lecture notes entitled The Neutral. The book, culled from twelve lectures delivered at the Collège de France in 1977–1978, frames the neutral as an effort at ‘outplaying’ or ‘outsmarting’ the first person, particularly as paradigmatically framed in opposition to the social. Barthes subsequently writes: ‘I define the Neutral as that which outplays [déjoué] the paradigm’ 32, a paradigm which, he reiterates, is a rather violent process of creating meaning from faux opposites. Instead Barthes argues for ‘a refusal of pure discourse of opposition’, embracing ‘neither one nor the other’. 33

Barthes opens his seminar on the Neutral with a stark, contrasting set of passages, the first of which mirrors Foucault’s essay on the scaffold. The introduction highlights the violence of the scaffold as a truth-seeking technology, an ‘inquisitional apparatus’. 34 The criminal must either plead to the crime and ask for penance or, if he fails to “retract,” die a violent, gruesome death. Contrasting what Barthes posits as the violence of truth telling is an existential move out of the subject, as told through passages from Tolstoy and Rousseau. In lieu of subjective experience and enunciative statements of culpability or regret told under the looming threat of punishment, Barthes instead takes us to a form of nonconsciousness, a fleeting plane of existence or ‘near consciousness’ that one may experience when waking up. Barthes returns to this moment throughout the lectures, describing it as a ‘white, neutral awakening’, a ‘suspended-time’, neither dreaming nor awake (p. 37), the ‘time of the not yet’. 35

Not surprisingly, to get to this place and time, this nonconsciousness beyond first personhood and subjectivity, The Neutral defaults largely to questions of language. Like Laitinen, Barthes posits the neutral or zero perspective as a rejection of affirmation, defined as a confirmation of the self. Rather, moving ‘toward negation, doubt’, ‘the neutral would be a language with no predication’; it would be ‘nonpredicable’. 36 By removing the predicate’s function in language, that of asserting and confirming the preeminence of the subject, Barthes begins to lay the groundwork of his critique of subjectivity, which he contends is ‘arrogant’ and ‘pulls toward oneself’. 37

Part of Barthes’ concern over the first person is its embrace of latent materialism; indeed many passages in his lectures drift into concerns over not only violence and meaning, but the commodifiable rationality of first-person subjectivity. Against this concern Barthes asserts the neutral as ‘unmarketable’, not ‘invisible’ but rather ‘unsustainable’. The passion of the Neutral, argues Barthes, is ‘not …a will-to-possess’; rather, it is a ‘suspension of narcissism’, a dissolving of ‘one’s own image’. 38

Moving into the latter weeks of his seminar, Barthes’ neutral moves from questions of language, violence and paradigms to more overt concerns with ‘perspectives’. The neutral thus produces a ‘bridgehead’ or ‘a projective space’ out of ‘a state of continuous flux’ 39 to ‘discover a region, a horizon, a direction’. 40 Barthes consequently offers the panorama as a concept, a perspective, that negates the arrogance of the first person, though he could be accused here too of a paradigmatic form of interior/exterior thinking. 41 A more sympathetic reading of this decidedly incomplete strain of thought would however point to Barthes’ seeming insistence on defining the halcyon functions of the panorama, which would couple viewing with a form of temporality or ‘rhythm’. 42 Barthes seemingly admits the failure to fully develop the panoramic characteristics of the Neutral, and thus perhaps a more convincing critique of algorithmic power as a particular perspective. Instead he concludes with a direct passage from the gospel of Luke in the New Testament that lacks context or analysis. The quotation, which could easily have been voiced by Barthes himself, reads:

‘Someone rightly brought to my attention that a third vision, different, and contrasted, could be added: that of perspective…’ 43

There remains a need for another perspective to neutralize the myth of either the perfect profiling machine or the sovereign platform user. We can begin to chip away at this predominant paradigm first by recognizing that while first-person algorithms prevail, they are imperfect profiling machines. We should not, however, interpret this assertion as reason to dismiss algorithmic power or to position it in opposition to the unconditional agency (freedom) of the user, particularly since published reports of their logics continually reaffirm the first-person perspective. That said, it should come as no surprise that personal choice is offered as an antidote to an intensely personalized computational form of media governance, for what other depersonalized perspectives do commercial media platforms afford? Nor should we be surprised that such algorithms make personal life easier by curating and suggesting a more plausible and manageable set of choices in our busy lives. Neutralizing such a powerful paradigm thus requires an altogether different perspective, beyond the pull of technological affordances.

But given the brevity of Barthes’ zero perspective, what does his Neutral ‘Neither-Norism’ leave us with? To begin, a ‘neutral’ perspective, as Barthes argues, should not be confused with disengagement or other forms of political neutrality; it cannot sidestep questions of social and political harm produced by the first-person perspective. A zero-person perspective should not cultivate an inhumane, unethical zero person. Rather, the move to the neutral or zero-person perspective should break free of the paradigm of affinity/choice and reject a narcissistic horizon that affords things for me or my network. Nor should the zero-person perspective, to reiterate Barthes, simply make visible the mediated plane, on, off and through platforms. Rather, Barthes’ neutral position challenges us to rethink the comfort and sustainability of a me-centric life, not to imagine ourselves as Other, but rather to neutralize the recursive power of the self.

Indeed Barthes’ neutral position suggests how we might embrace an unfamiliar and uncertain worldview through a form of play, a gaming of the preexisting paradigm so as to imagine other perspectives. Yet, to confuse or to ‘baffle’ the paradigm of the first person offers just one step toward the zero-person perspective, or, moreover, toward competing paradigms that could redress the worst narcissistic harms produced by the first-person perspective. To change perspectives we might simply ask ourselves where the harm is most acutely located, and better still from what perspective(s) it is rendered, particularly from the perspective of solutions. The first-person perspective is not designed to systematically address problems and harms beyond the reach of the individual and their clusters of ‘friends’. To render such harms zero would therefore suggest a provocation: what perspective best serves a solution? The zero in such a question thus simply serves to neutralize the endemic bias of not only networked computing and media, but also of the avenues for generating non-subjective knowledge and solutions. Harms would, as a consequence, become decoupled from the selfie-self – they would be generalized, and harder to dismiss as ‘things to do with me’.

To be sure, there are many efforts underway at depersonalizing technologies and interfaces, some of which highlight the harmful effects of first-person perspectives. While the selfie image might serve as perhaps one of the most iconic examples of first-person practices and representations on a first-person platform, such images have also served as building blocks for the zero-person perspective. For example, while Laís Pontes’ masking of online human faces, poses and expressions with QR codes creatively points to the vulgarities of first-person life 44, it also simultaneously reminds us that first-person perspectives are by design networked and clustered. Zach Blas’ aggregated mask images likewise depersonalize the first-person selfie image to tactically evade surveillant forms of identification. 45 In both examples we see artists expose and intervene in first-person politics by recognizing the power of the perspective or algorithm. Yet these examples are somehow incomplete, in much the same way that Chun’s call for a ‘responsible AI’ returns us to questions of ethics and use, and more broadly transparency and accountability. 46

The zero-person perspective, conversely, demands a paradigmatic change, a displacing of the first person as the driving force of informational logic in networked and platformed computing. A mediated politics of difference or indifference requires not just enhancing the user’s panorama of networked content and services; it needs to nullify the default ‘magnet’ of relevance, convenience and affinity while also recognizing the individualized realities of media consumption, and indeed the tremendous pressure that work, community, friends, and family place on our time, energies, and resources. To this end the zero-person perspective, as previously noted, cannot exclude the first person, but nor can it continue to displace harmful externalities reinforced by algorithms of affinity. Rather, it must embrace these externalities at its core, embrace the impact of individual and social decisions on other individuals and communities, in particular places and common times.

Conclusion: Neutered Interfaces

Much attention in this short essay has focused on the algorithmic component of first personhood, and on its ‘choice’ critics. But depersonalizing a panorama, emphasizing the zero perspective, would not mean displacing the interfaces we all personally use, only the logic that aggregates the same, the personally familiar. Zero-person interfaces could simply integrate the unfamiliar, the unusual and the rarely if ever read, consumed or browsed. But this is hardly what Barthes imagined, or what a zero-person perspective could envision and enact. What’s more, such a ‘difference machine’ would only seem to automate, through inverted personalization algorithms, the personal choice to ‘find something new or different’. And while such a change might at least move us away from the tyranny of the same — via the affinity machines that dominate social media platforms, media properties and online ‘e-tailers’ — we would remain tethered to the first person.
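To illustrate why such a ‘difference machine’ remains tethered to the first person, consider this minimal, hypothetical sketch: an inverted recommender that surfaces whatever a user has engaged with least. The catalogue and history below are invented, and the inversion still begins and ends with the individual’s own profile.

```python
# A minimal sketch of the 'difference machine' described (and then qualified)
# above: an inverted personalization algorithm that surfaces what a user has
# never, or rarely, engaged with. Even this inversion is computed from, and
# for, the same first-person profile. All items and histories are invented.

from collections import Counter

catalogue = {
    "local-news", "foreign-policy", "poetry", "econometrics",
    "gardening", "club-football", "labour-history", "ambient-music",
}

def difference_feed(history, items, top_n=4):
    """Rank items by how unfamiliar they are to this user's history."""
    seen = Counter(history)
    # Lowest familiarity first: unseen items (count 0) come before rarely seen ones.
    return sorted(items, key=lambda item: seen[item])[:top_n]

user_history = ["club-football", "club-football", "local-news", "ambient-music"]
print(difference_feed(user_history, catalogue))
# e.g. ['econometrics', 'foreign-policy', 'gardening', 'labour-history'],
# still selected against the same individual profile.
```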

A refashioned horizon or ‘bridgehead’ could be imagined as an impersonally curated interface, one that not only personalizes ‘difference’ but also integrates third-person perspectives from Others, non-human objects, rhythms and temporalities. This zero-perspective interface could also curate or aggregate harms, gaps in knowledge and solutions, not just the similar, the same, the recognizably personal. Let us consider this as an unfamiliar (im)personal interface at the outset, a projection that makes accusations, as linguists suggest, not predicates. An interface that can aggregate by accusation, and that does not personalize in the first instance the world of relations and actions.

Such a perspective might bring into focus an expansive and hopefully sustained view of environmental toxins and harms, or signs that warn of such possibilities. The zero perspective could thus baffle or, better still, grind platforms into the ground, to embrace the elemental and ecological. Yet we must recognize that such nullified human ‘de-platforming’ would likely meet with confusion and bewilderment. It would almost certainly baffle. Such a state of disarray would thus require a rich mode of speculation, nonhuman imaginaries, a suspension of egos and selves. The zero perspective might only move from the conceptual to the actionable through a sustained form of bafflement, or a series of, and deeper engagement with, such moments. The zero, cipher, or puzzle generated through such moments could hopefully provoke a depersonalized perspective, an acknowledgement of the limits of knowledge reproduced by the comfort and rhythm of aggregated personal affinities.

Acknowledgements: The author would like to acknowledge the helpful feedback provided by Ganaele Langlois, Robert Gehl, Fenwick McKelvey, Henry Warwick, Andrew Goffey and Matthew Fuller. Erica Biddle also provided early research support for this article.

Greg Elmer is Bell Media Research Chair and Professor of Communication & Culture and Professional Communication at Toronto Metropolitan University. Greg has published a number of books and produced a series of documentary films on questions of media financialization, digital surveillance, internet-based politics, and social protest. His next book, co-authored with Stephen Neville and entitled The Politics of Media Scarcity, is forthcoming in 2024 from Routledge.

References

Barassi, Veronica. Child Data Citizen: How Tech Companies are Profiling Us Before Birth. Cambridge: MIT Press, 2020.

Barthes, Roland. The Neutral. New York: Columbia University Press, 2005.

Barthes, Roland. Writing Degree Zero. New York: Hill & Wang, 1977.

Blas, Zach. “Collective masks”, https://zachblas.info/works/facial-weaponization-suite/, 2022.

Brin, S. & L. Page. “The anatomy of a large-scale hypertextual Web search engine.” Computer Networks and ISDN Systems 30, no. 1–7 (1998): 107–117.

Bruns, Axel. Are Filter Bubbles Real? New York: Wiley, 2019.

Bucher, Taina. If…Then: Algorithmic Power and Politics. Oxford: Oxford University Press, 2018.

Card, Dallas. “The ‘Black Box’ metaphor in Machine Learning.” Medium.com, 2017.

Chadwick, Andrew. The Hybrid Media System: Politics and Power. Oxford: Oxford University Press, 2013.

Chun, Wendy. Discriminating Data: Correlation, Neighbourhoods, and the New Politics of Recognition. Cambridge: MIT Press, 2021.

Chun, Wendy. Updating to Remain the Same. Cambridge: MIT Press, 2017.

Dubois, Elizabeth & Grant Blank. “The Echo Chamber is Overstated: The Moderating Effect of Political Interest and Diverse Media.” Information, Communication & Society 21, no. 5 (2018): 729-745.

Elmer, Greg. “Prospecting Facebook: The Limits of the Economy of Attention.” Media, Culture and Society 41, no. 3 (2018): 332-346.

Elmer, Greg. Profiling Machines: Mapping the Personal Information Economy. Cambridge: MIT Press, 2004.

Fisher, Eran. Algorithms and Subjectivity: The Subversion of Critical Knowledge. London: Routledge, 2022.

Foucault, Michel. The Spectacle of the Scaffold. London: Penguin UK, 2008.

Gupta, Pankaj, Ashish Goel, Jimmy Lin, Aneesh Sharma, Dong Wang & Reza Zadeh, “WTF: The Who to Follow Service at Twitter.” WWW ’13: Proceedings of the 22nd international conference on World Wide Web (2013), 505-514.

Harman, Graham. “Zero-person and the psyche.” In Mind that Abides: Panpsychism in the New Millennium, edited by D. Skrbina, 253-282. Amsterdam: Benjamins, 2009.

Laitinen, Lea. “Zero person in Finnish.” Amsterdam Journal in the Theory and History of Linguistic Science 4, (2006): 209-231.

Laps, Michael. “Every Google Algorithm Change (so far) Explained.” Yoghurt Digital, (2016). Available at: https://www.yoghurtdigital.com.au/insights/every-google-algorithm-change-explained.

Lupton, Deborah. The Quantified Self. London: Polity, 2016.

Lyon, David. Surveillance Society: Monitoring Everyday Life. Buckingham: Open University Press, 2001.

Nakamura, Lisa. Cybertypes: Race, Ethnicity and Identity on the Internet. London: Routledge, 2002.

Newberry, Christina. “Facebook Algorithm: How to Get Your Content Seen.” Hootsuite, (2023), https://blog.hootsuite.com/facebook-algorithm/.

Neyland, Daniel. The Everyday Life of an Algorithm. London: Palgrave, 2019.

Noble, Safiya. Algorithms of Oppression. New York: NYU Press, 2018.

Pariser, Eli. The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. New York: Penguin Books, 2012.

Pasquale, Frank. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge: Harvard University Press, 2016.

Pontes, Lais. “Amber Virtual Mask”, http://www.laispontes.com/virtual-mask/Amber-Virtual-Mask/, (2022).

Sunstein, Cass. Republic.com 2.0. Princeton, N.J.: Princeton University Press, 2009.

Szalay, Jessie. “Who Invented Zero.” Live Science, https://www.livescience.com/27853-who-invented-zero.html, 2017.

Vaccari, Cristian & Augusto Valeriani. Outside the Bubble: Social Media and Political Participation in Western Democracies. Oxford: Oxford University Press, 2021.

Velkova, Julia, and Anne Kaun. “Algorithmic Resistance: Media Practices and the Politics of Repair.” Information, Communication & Society 24, no. 4 (2019): 523-540.

Wallin, Nils-Bertil. “The History of Zero”. YaleGlobal Online, https://yaleglobal.yale.edu/history-zero, 2002.

Notes

  1. Roland Barthes, The Neutral (New York: Columbia University Press, 2005).
  2. Eran Fisher, Algorithms and Subjectivity: The Subversion of Critical Knowledge (London: Routledge, 2022), Taina Bucher, If…Then: Algorithmic Power and Politics (Oxford: Oxford University Press, 2018).
  3. Daniel Neyland, The Everyday Life of an Algorithm (London: Palgrave, 2019).
  4. Roland Barthes ibid.
  5. Wendy Chun, Updating to Remain the Same (Cambridge: MIT Press, 2017), Veronica Barassi, Child Data Citizen: How Tech Companies are Profiling Us Before Birth (Cambridge: MIT Press, 2020), Safiya Noble, Algorithms of Oppression (New York: NYU Press, 2018), Deborah Lupton, The Quantified Self (London: Polity, 2016), Eli Pariser, The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think (New York: Penguin Books, 2012), Greg Elmer, Profiling Machines: Mapping the Personal Information Economy (Cambridge: MIT Press, 2004), Lisa Nakamura, Cybertypes: Race, Ethnicity and Identity on the Internet (London: Routledge, 2002).
  6. David Lyon, Surveillance Society: Monitoring Everyday Life (Buckingham: Open University Press, 2001).
  7. Wendy Chun, Updating to Remain the Same (Cambridge: MIT Press, 2017).
  8. Axel Bruns, Are Filter Bubbles Real? (New York: Wiley, 2019).
  9. Axel Bruns, ibid, 8.
  10. Elizabeth Dubois and Grant Blank, “The Echo Chamber is Overstated: The Moderating Effect of Political Interest and Diverse Media,” Information, Communication & Society 21, no. 5 (2018): 729-745.
  11. Elizabeth Dubois and Grant Blank, ibid, 730.
  12. Elizabeth Dubois and Grant Blank, ibid, 730.
  13. Cristian Vaccari and Augusto Valeriani, Outside the Bubble: Social Media and Political Participation in Western Democracies (Oxford: Oxford University Press, 2021), 7.
  14. Cass Sunstein, Republic.com 2.0 (Princeton, N.J.: Princeton University Press, 2009).
  15. Eli Pariser, The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think (New York: Penguin Books, 2012).
  16. Andrew Chadwick, The Hybrid Media System: Politics and Power (Oxford: Oxford University Press, 2013), 4. Chadwick’s hybrid media thesis argues that politics is inherently intertwined with media systems. He writes: “The hybrid media system is built upon interactions among older and newer media logics – where logics are defined as technologies, genres, behaviours, and organizational forms – in the reflexively connected fields of media and politics.”
  17. Axel Bruns, ibid, 5.
  18. Julia Velkova & Anne Kaun, “Algorithmic Resistance: Media Practices and the Politics of Repair,” Information, Communication & Society 24, no. 4 (2019): 523-540.
  19. Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Cambridge: Harvard University Press, 2016).
  20. Taina Bucher, ibid; Dallas Card, “The ‘Black Box’ metaphor in Machine Learning,” Medium.com, (2017).
  21. Michael Laps, “Every Google Algorithm Change (so far) Explained,” Yoghurt Digital, (2016), https://www.yoghurtdigital.com.au/insights/every-google-algorithm-change-explained.
  22. Greg Elmer, ibid.
  23. Elmer, Greg, “Prospecting Facebook: The Limits of the Economy of Attention,” Media, Culture and Society 41, no. 3 (2018): 332-346.
  24. Christina Newberry, “2023 Facebook Algorithm: How to Get Your Content Seen,” Hootsuite, (February 2023), https://blog.hootsuite.com/facebook-algorithm/.
  25. Pankaj Gupta et al., “WTF: The Who to Follow Service at Twitter,” WWW ’13: Proceedings of the 22nd International Conference on World Wide Web (May 2013), 505-514.
  26. Graham Harman, “Zero-person and the psyche,” in Mind that Abides Panpsychism in the New Millennium, ed D. Skrbina (Amsterdam: Benjamins, 2009), 253-282, 261.
  27. Jessie Szalay, “Who Invented Zero,” Live Science (2017), https://www.livescience.com/27853-who-invented-zero.html.
  28. Nils-Bertil Wallin, “The History of Zero”, YaleGlobal Online (2002), https://yaleglobal.yale.edu/history-zero.
  29. Lea Laitinen, “Zero person in Finnish,” Amsterdam Journal in the Theory and History of Linguistic Science 4 (2006): 209-231, 212.
  30. Nils-Bertil Wallin, ibid, 2.
  31. Roland Barthes, Writing Degree Zero (New York: Hill & Wang, 1977).
  32. Roland Barthes, ibid, 6.
  33. Roland Barthes, ibid, 71.
  34. Michel Foucault, The Spectacle of the Scaffold (London: Penguin UK, 2008), 4.
  35. Roland Barthes, ibid, 50.
  36. Roland Barthes, ibid, 4-5.
  37. Roland Barthes, ibid, 162.
  38. Roland Barthes, ibid, 12-13.
  39. Roland Barthes, ibid, 10.
  40. Roland Barthes, ibid, 45.
  41. Roland Barthes, ibid, 163.
  42. Roland Barthes, ibid, 165.
  43. Roland Barthes, ibid, 166.
  44. Lais Pontes, “Amber Virtual Mask”, http://www.laispontes.com/virtual-mask/Amber-Virtual-Mask/.
  45. Zach Blas, “Collective masks”, https://zachblas.info/works/facial-weaponization-suite/.
  46. Wendy Chun, ibid, 253.