
Desde el Sur

Print version ISSN 2076-2674; on-line version ISSN 2415-0959

Desde el Sur vol. 15 no. 2, Lima, Apr./Jun. 2023. Epub: April 25, 2023

http://dx.doi.org/10.21142/des-1502-2023-0021 

Contemporary technopolitics

Predictive analytics and digital protentionality: On algorithmic prediction and anticipation


* Leuphana Universität Lüneburg. Lüneburg, Germany. alan.diaz.al@gmail.com.


ABSTRACT

What do we mean when we say that algorithms are capable of predicting what is going to happen and anticipating our actions? In the following article I will analyse the phenomena of algorithmic prediction and behavioural anticipation, explaining their convergence in contemporary techniques of predictive analytics. First, I will deal with some possible misconceptions about the predictive capacities of algorithms by delving into probability theory. Secondly, I will take Bernard Stiegler's post-phenomenological theory of algorithmic governmentality as a model to explain their capacity for behavioural anticipation. Lastly, I will present Mark Hansen's Whiteheadian reading of predictive analytics, in which he provides a way to understand the ontological basis of the power of these algorithmic systems and also highlights their epistemological limits. Besides the theoretical novelty of this account, in the conclusion I will argue that it also provides us with new tools to extend Stiegler's pharmacological project further, opening up the possibility of thinking about ways in which algorithmic prediction could be implemented towards positive outcomes.

Keywords: Probability theory; algorithms; prediction; forecasting

Introduction

Perhaps more often than I would like to admit, the Amazon recommendation feature shows me books that I end up buying, or at least adding to the shopping cart for them to languish indefinitely. Scrolling down a social media feed amounts to a similarly curated experience, by means of a different algorithm. Sponsored ads for events that I would actually be interested in attending appear frequently, as do the infuriating posts from a particular profile with which I regularly interact, masochistically looking to engage in entirely fruitless online discussions. The fact that these seemingly banal experiences imply the presence of algorithmic techniques designed to anticipate behaviour is pretty much common knowledge by now, even if their actual workings are entirely black-boxed to the average person. Furthermore, these techniques are present in many other fields beyond targeted advertisements and social media feeds, such as healthcare, finance, transportation, entertainment and, perhaps most worryingly, even politics. Moreover, these techniques are often fuelled by data gathered through an ever-widening array of services and devices and with ever-increasing degrees of precision.2 As Shoshana Zuboff points out, "nearly every product or service that begins with the word 'smart' or 'personalised', every internet-enabled device, every 'digital assistant', is simply a supply-chain interface for the unobstructed flow of behavioural data on its way to predicting our futures" (Naughton, 2019).

As we can see, the notions of prediction and anticipation are often used in an attempt to explain what goes on in situations such as those mentioned above. While the interchangeable use of these notions might suffice for the purposes of common parlance, a clearer distinction is due if we are to probe matters any further. We can argue, then, that the notions of prediction and anticipation point towards two different, albeit interrelated, operational levels: on the one hand, the acts of prediction that contemporary technologies perform through techniques such as data mining and machine learning; on the other, the way that these can influence or even transform the anticipative capacities of human subjects. Thus, we might say that, in the examples mentioned above, algorithmic systems deploy the tools of predictive analytics with the purpose of anticipating human behaviour; a statement which posits predictive techniques as the condition of possibility of technically mediated behavioural anticipation. Likewise, these two levels point towards two distinct domains of futurity which give rise to different questions. What are the claims that contemporary algorithmic predictive analytics make on the future? How does this influence the way the future is construed within the phenomenological domain of the subject and its consciousness of time?

In the following, I will analyse the domains of predictive analytics and behavioural anticipation with the purpose of elucidating how their interrelation works within our contemporary, algorithmically mediated situation. I will begin with a brief incursion into the notion of algorithmic prediction. After critiquing one of the common ways of understanding algorithmic prediction, in which it is unreflectively understood as a kind of machinic clairvoyance into the future, I will explicate its epistemological grounding in probability theory, as well as outline a particular ontology of probability which understands probabilities as indices of real propensities. Then I will turn to Bernard Stiegler's theory of algorithmic governmentality, in which he presents behavioural anticipation as one of the central traits of a data-driven societal control system which attempts to 'outstrip and overtake' the human subjects' own anticipative, or protentional, capacities. I will attempt to bring this theory of behavioural anticipation into contact with the preceding analysis of prediction and probability. Finally, I will turn to Mark B. N. Hansen's Whiteheadian reading of predictive analytics in order to question the actual possibility of a totalizing scenario of control such as the one that seems to emerge from time to time in Stiegler's musings. I will explain how Hansen's use of a Whiteheadian ontological framework renders the scenario of totalizing societal control through complete behavioural anticipation empirically impossible, and how it opens up the possibility that predictive analytics' grasp of "the future's inherence in the present" (2015, p. 120) might have positive effects on the phenomenological subject's own construal of futurity.

Predictive analytics and the ontology of probability

As Nicholas Rescher (1997, p. 11) points out, "the future is, for us, an object both of curiosity and of intense practical concern, and prediction is our only access to it". Attempting to predict the future has preoccupied humans since ancient times, assuming forms as varied as seers, astrology and Delphic oracles, down to weather forecasting and predictive computational modelling in the present. Since Newton, modern science has played a central part in making nature amenable to rational prediction by discovering the physical laws underlying it, prompting further attempts, throughout the modern era, to find similar laws underlying the messiness of history and human affairs for the same purposes. This future-oriented ethos gave rise to the domain of futurology which, according to Rescher (1997, p. 32), remained consistent during the nineteenth and twentieth centuries before suffering a decline in enthusiasm and trust during the 1980s for various reasons. Writing in the 1990s, Rescher could not have foreseen the remarkable and widespread renewal of this ethos brought about by the emergence of big data and the predictive capacities it purportedly affords through algorithmic processes of analysis.

Before delving into these algorithmic processes of prediction-known today as predictive analytics-we must first inquire into the nature of prediction itself. Here, we must distinguish between the notions of foreknowledge and foresight: while the former points towards an absolutely certain apprehension of a future outcome, the latter involves "reflectively mediated evidence and inference" (Rescher, 1997, p. 54). Foreknowledge is akin to the precognitive clairvoyance attributable to seers and oracles; foresight, on the other hand, consists in future-oriented assertions which are the result of inferential processes and which "always involve an inherently risky, error-liable epistemic leap from information regarding the past-&-present to claims regarding the yet unrealized future" (Rescher, 1997, p. 54). This epistemic leap happens when a future-oriented assertion is endorsed, its claims accepted as putatively correct until proven otherwise by future developments. Thus, unlike foreknowledge or clairvoyance, prediction qua foresight is rational, in the sense that its credibility is based on reasons, evidence and inferences which can, in principle, be scrutinised.

David Spiegelhalter (2019, chapter 6) defines predictive analytics as the "using of data to create algorithms for making predictions." For him, predictive analytics is fundamentally a means of practical problem-solving through the use of past data: the function of algorithmic prediction is "to tell us what is going to happen. For example, what the weather will be next week, what a stock price might do tomorrow, what products that customer might buy, or whether that child is going to run out in front of our self-driving car" (2019, chapter 6). Thus, algorithmic procedures are supposed to predict what is going to happen on the basis of an analysis of data from past events. Here it might be helpful to distinguish between analytics and prediction to better understand what is going on. Although he does not name it, the kind of prediction Spiegelhalter describes depends on what is today commonly known as data mining. Data mining can be defined as the "extracting or 'mining' [of] knowledge from large amounts of data" (Han and Kamber, 2006, p. 5).3 It is, in other words, an algorithmically automated process which analyses and correlates data in search of patterns from which statistical knowledge about past events can be inferred, knowledge which can then be used to predict the probabilities of something happening in the future. Thus, the notion of predictive analytics implies a coupling of the patterning and analysis (or 'mining') of data from the past with the prediction of future outcomes.
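To make this coupling concrete, the following minimal sketch (purely illustrative; the synthetic dataset, the hypothetical features and the choice of a logistic-regression model are my own assumptions, not anything described by Spiegelhalter or by Han and Kamber) fits a model to patterns in past customer records and then outputs, for a new customer, not what will happen but an estimated probability that it will:

```python
# Minimal, illustrative sketch of the 'mining the past -> predicting the future' coupling.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Past, already-settled events: each row is a customer, each column a hypothetical
# mined feature (e.g. visits last month, items in cart, days since last purchase);
# y records whether that customer went on to buy.
X_past = rng.normal(size=(500, 3))
y_past = (X_past @ np.array([1.2, -0.4, 0.7]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X_past, y_past)   # 'mining' the pattern (analytics)

x_new = rng.normal(size=(1, 3))                    # a present customer's traces
p_buy = model.predict_proba(x_new)[0, 1]           # a probability, not a foreseen fact
print(f"Estimated probability of purchase: {p_buy:.2f}")
```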

What do we mean when we claim that predictive analytics 'tell us what is going to happen'? At first glance, this simplified statement might seem to allude to a kind of algorithmic clairvoyance. Although Spiegelhalter certainly does not subscribe to this understanding of predictive analytics, I argue that this notion of algorithmic prediction-as-clairvoyance plays a significant role in the common misconceptions and confusions formed around these technologies. To be clear, this is not to claim that people explicitly hold the belief or endorse the claim that algorithms can actually 'see' into the future in the same way as the oracles of antiquity or the Precogs from Minority Report, but rather that the notion of 'prediction' itself carries cultural and semantic baggage that can often inadvertently seep into the way algorithmic prediction is depicted in popular culture and understood by the layperson. It is common to come across blog posts which claim that in the future algorithms will be able to predict when you will die (van Hooijdonk, 2022); accounts of teenagers claiming that the TikTok algorithm 'knew' their sexual orientation or gender identity before they themselves did (Joho, 2022), thus "treat[ing] TikTok's probabilistic functions as diagnostic, or even deterministic" (Cummins, 2022); and articles which, while recognising their probabilistic nature, still seem to portray the difference between algorithmic prediction and ancient divination as one of degree rather than kind (Véliz, 2021).

One possible explanation for these misunderstandings is the black-boxed character of these technologies, which might induce us to forget that the predictions they perform are endorsed, error-liable claims on the future based on statistical inferences made from past data. Another lies in the important fact that contemporary algorithmic prediction implies a very different use of statistics from the 'classical' probability calculus that emerged in the seventeenth century. While classical statistics operates at the level of populations to infer abstract averages, contemporary algorithmic prediction focuses on predicting specific events and individual phenomena in their fine-grained variability. In classical statistics, individuals represent imperfect approximations of the average (no one has 1.7 children), whereas machine learning is more personalised. As Elena Esposito explains, in contemporary algorithmic prediction "society is calculated without categorising individuals, but by considering the specificity of everyone. Calculations start from people's activities and do not try to infer features applicable to larger phenomena" (2022, p. 96).
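A toy contrast may make Esposito's point tangible (the numbers and profiles below are invented for illustration): classical statistics summarises a population with an abstract average that no individual instantiates, whereas individualized prediction assigns each profile its own estimate, computed from its own recorded traces:

```python
# Classical, population-level register: an abstract average no one instantiates.
children_per_family = [0, 1, 2, 3, 1, 2, 4, 0, 2, 2]
print(sum(children_per_family) / len(children_per_family))   # 1.7 children

# Individualized register: hypothetical click logs (1 = engaged with an ad, 0 = ignored it).
traces = {"profile_a": [1, 1, 0, 1, 1], "profile_b": [0, 0, 1, 0, 0]}
for profile, clicks in traces.items():
    print(profile, sum(clicks) / len(clicks))   # a per-profile rate, read as a propensity
```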

Having said this, it is worth explaining in a bit more detail why the idea of algorithmic clairvoyance is an entirely inadequate way to understand how algorithmic prediction actually functions. Most importantly, it implies an unacknowledged subscription to a deterministic view of the universe: a view in which the future is causally predetermined by what has happened, with no room for contingency. This is a view which derives from the classical mechanistic picture of Newtonian physics developed by mathematicians such as Jacques Bernoulli and Pierre-Simon Laplace; one in which "all states of the future [are] in principle calculable in complete detail via natural laws on the basis of a sufficiently complete characterization of the past-&-present" (Rescher, 1997, p. 72). As Laplace himself wrote in 1779:

An intelligence which, for a given instant, knew all the forces by which nature is animated, and the respective situation of the beings which made it up, if furthermore it was vast enough to submit these data to analysis, would then embrace in the same formula the movements of the greatest bodies of the universe and those of the lightest atom: nothing would be uncertain for it, and the future, as the past, would be present to its eyes. (cited in Rescher, 1997, p. 72)

If we abide by the present state of scientific knowledge-and particularly twentieth century particle physics-the deterministic view of the Laplacean world is "somewhere between implausible and false" (Rescher, 1997, p. 73). However, abandoning determinism does not imply denying predictability as well. In an indeterminate or chancy world, rational predictions can be made on the basis of statistical information capable of providing the probabilistic evidence necessary to endorse future-oriented claims.4

The kind of knowledge of the future that predictive analytics presents us with is fundamentally a probabilistic one, and buying into the notion of algorithmic clairvoyance would amount to an eschewal of this fact. Predictive analytics do not literally 'tell us what is going to happen', but rather the probability that a certain outcome will take place in the future; a probability which guides the endorsement of a specific predictive assertion as being purportedly correct. As Rescher (1997, p. 43) argues: "A probability distribution of possibilities cannot tell us what will happen, it only tells us the comparative likelihoods of what might happen. To use probabilistic information for forecasting requires a decision, a step from probability distribution across the range of alternatives to the selection of a unique alternative."
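Rescher's point can be rendered almost mechanically: the predictive system outputs a distribution over alternatives, and a further decision rule (the distribution and the 0.5 threshold below are invented for illustration) is what turns that distribution into an endorsed, falsifiable claim about the future:

```python
# A probability distribution over alternatives (numbers invented for illustration).
probs = {"buys": 0.62, "adds to cart only": 0.28, "ignores the ad": 0.10}

# The distribution alone only says what *might* happen; endorsing a unique
# alternative is the extra, risk-laden step that makes it a prediction.
best = max(probs, key=probs.get)
endorsed_claim = best if probs[best] >= 0.5 else "no prediction endorsed"
print(endorsed_claim)   # -> 'buys': a claim that future events may still falsify
```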

To this day there is no consensus regarding the nature or ontology of probability, and several definitions of it exist within the field of probability theory. The classical definition of probability comes to us from Laplace himself, who in 1812 claimed that "the probability for an event is the ratio of the number of cases favourable to it, to the number of all cases possible when nothing leads us to expect that any one of these cases should occur more than any other, which renders them, for us, equally possible" (cited in Jaynes, 2003, p. 43). This definition of probability implies an equipossibility of aleatory future outcomes: every future outcome in the possibility space is as likely to happen as any other.5 This understanding of probability was modelled on dice throws and games of chance, and it falls short when applied to more complex aspects of reality. Another way in which probability has been defined is as propensity. Karl Popper (1995, p. 12) explains this as "the theory that there exist weighted possibilities which are more than mere possibilities, but tendencies or propensities to become real [...] inherent in all possibilities in various degrees." In this case, the future is depicted as still undetermined, but nonetheless prefigured to a certain extent by actual propensities or tendencies that point towards its differently weighted outcomes. It is precisely this kind of probabilistic knowledge of the future which, Hansen argues (2015, p. 120), underpins contemporary predictive analytics: "Whatever explanatory and causal value predictive analytics of large datasets have is, I suggest, ultimately rooted in this ontological transformation whereby probabilities are understood to be expressions of the actual propensity of things." According to him, the power of contemporary predictive analytics, which operate on the ever-burgeoning expanses of big data, lies in their capacity "to reveal partial propensities stretching forward from the present world to (differently weighted) possible future worlds" (Hansen, 2015, p. 110). Siding with Hansen, I argue that an understanding of algorithmic prediction as a process which attempts to identify probabilities-propensities can serve as an antidote to the erroneous notion of algorithmic clairvoyance and its unacknowledged influence in popular culture.
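Written compactly (a schematic rendering on my part, not a formula taken from the authors cited), the two conceptions differ in where the weights come from: Laplace assumes equally possible cases, whereas the propensity view lets unequal weights express real tendencies of the present situation towards its possible outcomes:

```latex
% Classical (Laplacean) definition, under the assumption of equipossible cases:
P(A) \;=\; \frac{\#\,\text{cases favourable to } A}{\#\,\text{possible cases}},
\qquad \text{e.g. } P(\text{rolling a six}) = \tfrac{1}{6}.

% Propensity reading: the weights p_i need not be equal; each indexes a real
% tendency of the present situation towards one of its possible outcomes:
P(A) \;=\; \sum_{i \,\in\, A} p_i, \qquad p_i \ge 0, \quad \sum_i p_i = 1.
```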

The technical capacity to discover present propensities has important social and political consequences, particularly if we acknowledge that many of the fields where predictive analytics are currently being used are of a governmental, financial or commercial nature. A way to understand these consequences, and to gradually transition towards the topic of behavioural anticipation, is through Thomas Berns and Antoinette Rouvroy's (2013) important theorisation of algorithmic governmentality and Bernard Stiegler's further post-phenomenological development of this concept.

Automatized digital protentionality

Reiterating some of the points made thus far, Rouvroy and Berns describe algorithmic governmentality as being constituted by three moments (an analytical distinction of a process in which they actually intermingle). Firstly, there is the automated harvesting of massive amounts of data from diverse sources (sensors, browsers, apps, cookies, etc.), which constitutes what we know as big data. Secondly, the algorithmic processing (or data mining) of these huge swathes of data extracts often unforeseen patterns and correlations that are applied as statistical tools for various uses such as marketing, governance and surveillance.6 Lastly, the third stage "consists in using this probabilistic statistical knowledge to anticipate individual behaviours and associate them with profiles defined on the basis of correlations discovered through datamining" (Rouvroy and Berns, 2013, p. 8). According to them, the radical novelty of this form of governmentality is its essential indifference towards concrete individuals. In contrast to previous forms of governmentality (which, if we follow Foucault, combine a statistical-populational and a subjective-disciplinary register), algorithmic governmentality acts on a rarefied and 'environmental' domain constituted by statistical correlations and fragmentary digital traces: "the normative action deriving from these statistical processes will always be closer to action on and therefore by the environment than to action on the individual themselves" (Rouvroy and Berns, 2013, p. 17). In other words, the concept of algorithmic governmentality names the power of predictive analytics to discover and functionalise real propensities for the purpose of developing increasingly complex mechanisms of behavioural anticipation. Rouvroy and Berns describe it as a form of governance which is capable of operating immanently on the realm of possibilities:

This form of governance essentially relates to what could become, to propensities rather than actions taken [...] algorithmic governance not only perceives possibility in the present moment, producing an "augmented reality", an actuality with a "memory of the future", it also gives substance to the dream of systematised serendipity. From this point of view, our reality has become the realm of possibility; our norms wish to anticipate possibility correctly and immanently, and the best way of doing that of course is to present us with a realm of possibility that corresponds to us and into which subjects then just need to slip (p. 170).

The scenario that Rouvroy and Berns present seems as bleak as, or even bleaker than, a dystopian scenario of algorithmic clairvoyance. The idea of a systematised serendipity alludes to a situation in which human behaviour would be so perfectly anticipated that the seemingly chance occurrences of everyday life would actually be the result of algorithmic procedures always running one step ahead of us, thanks to their capacity to identify propensities and act on them. Although we will later question the possibility of such a totalizing algorithmic serendipity by analysing Hansen's use of the Whiteheadian concepts of 'real potentiality' and 'total situation', we must now turn to the domain of behavioural anticipation to understand the effects that predictive analytics-based algorithmic governance, or governmentality, has on the sphere of the human subject and its experience of time.

As we mentioned above, algorithmic governmentality possesses a fundamental temporal dimension: its capacity to set into motion techniques of predictive analytics for the purpose of anticipating human behaviour. How does it achieve this anticipation? How can it affect the temporality of human consciousness and its construal of futurity? The answer that Stiegler gives-and which will require some unpacking-could be parsed as follows: algorithmic governmentality overtakes human temporalization by outstripping the subject's anticipatory faculties through the automated production of digital protentions.

The concept of protention is an essential part of Stiegler's philosophy of technics, taking us back to Husserl's meditations on the phenomenology of time consciousness. Here, Husserl discovered that the experience of time is structured through retentions (or memories) and protentions (or anticipations), every present moment being a synthesis of past and future elements. In the case of retentions, three kinds can be distinguished; Husserl (1991) identified the first two, while the third is Stiegler's addition. Primary retention is the instant just passed-the "comet's tail" (1991, p. 32)-that the present must retain in order to constitute itself as present, albeit as a 'widened' present or a 'large now' (Stiegler, 1998, p. 246). Secondary retention is what is usually understood as remembrance or recall, that is, the imaginative reactualization of past occurrences. Lastly, tertiary retention is the objective memory deposited in material artefacts. Protentionality, on the other hand, is understood as the capacity of consciousness to project a horizon of futurity and anticipation; a capacity that itself depends on the existence of retentionality as its condition of possibility. This schema can be developed by carefully analysing the experience of listening to a phonographic recording several times, where the anticipation of the musical notes to come depends both on the notes that have just passed and on those that can be recalled from previous acts of listening.

A good deal of Stiegler's philosophy of technics is based on a reappraisal of the role of tertiary retention-qua memory externalized on technical supports-within the overall functioning of temporalization. He argues against the Husserlian refusal to abandon the strictly immanent domain of phenomenological consciousness in order to consider the constitutive role that externalized memory plays with regard to primary and secondary memory and, concomitantly, for protention. For Stiegler, the fallible and finite nature of 'living' human memory-what he calls retentional finitude-means that it largely depends on 'dead' external technical supports which serve as its constitutive supplement; it depends on the existence of a not-lived past, sedimented in material artifacts and recording devices.7 Furthermore, Stiegler's quasi-transcendental analysis of the technical constitution of temporality is intermingled with an empirical-historical8 analysis of the evolution of the different (mnemo)technical milieus throughout human history, fundamentally arguing that the material specificities of these technical supports are instrumental in conditioning the way the human experience of time is constructed.9

To discern how algorithmic governmentality can be said to affect temporalization, we must first understand how it can be situated within the wider historical narrative of technical milieus with mnesic functions. 'Grammatization' is the term Stiegler uses to describe the technical procedure of discretization, wherein something temporal-thoughts, experiences, events-is spatialized, objectified in material supports in the form of tertiary memory. For Stiegler (2010, p. 34), "grammatization is the history of the exteriorization of memory in all its forms: nervous and cerebral memory, corporeal and muscular memory, biogenetic memory. When technologically exteriorized, memory can become the object of sociopolitical and biopolitical controls." The digital data format is the latest chapter in this history of grammatization, and algorithmic governmentality is the specific way in which the deluge of societally produced tertiary memory made possible by this format has been functionalized for the purposes of behavioural control. The 'automatic society of hyper-control' in which algorithmic governmentality takes place is "founded on the industrial, systemic and systematic exploitation of digital tertiary retentions. All aspects of behaviour thereby come to generate traces, and all traces become objects of calculations" (Stiegler, 2016, p. 28). The diffusion of digital technologies, from handheld devices to ubiquitous computation, has greatly expanded the range of aspects of life which can be grammatized in the form of data; a new kind of tertiary memory which, in our times of hypercentralized data infrastructures, is held within the databases of a handful of capitalist platforms and service providers, mostly inaccessible to the people who, often inadvertently, produce it.

Earlier, it was mentioned that tertiary retention is the condition of possibility for both of the other kinds of retention, which in turn serve as the conditions for protention (or anticipation). In other words, our horizon of futurity-our capacity to project the future and anticipate what is coming-is made possible by the past we have retained. For Stiegler, the main feature of algorithmic governmentality lies in its capacity to technically produce prefabricated anticipations-or tertiary protentions-using digital tertiary retentions as its raw material: "in the case of digital and reticulated tertiary retention [...] the retentional selections through which experience occurs as the production of primary retentions and protentions are outstripped and overtaken by prefabricated tertiary retentions and protentions that are 'made to measure' through user profiling and auto-completion technologies" (Stiegler, 2016, p. 140). This is where the domain of predictive analytics presented above ties in with the domain of behavioural anticipation as a means of explaining some of the examples with which we began: the books that Amazon suggests to me, or the posts that are prompted to appear in my Facebook news feed, are tertiary protentions constructed on predictive analytics' capacity to pinpoint certain tendencies of my behaviour based on past data I have produced. Furthermore, once extrapolated to a larger scale, this overtaking of the protentional capacities of subjects seems to point towards harrowing prospects:

The power over individual and collective protentions acquired through the production of automatic, dividual protentions destroys any collective outstripping or overtaking by psychic protentions that could come from psychic and collective secondary retentions. And this also amounts to a mutation of the relation to the possible, to the possible itself, so that it is de-realized in advance, that is, emptied of its potential bifurcations. (Stiegler, 2016, p. 112)

Stiegler's automatic society of hyper-control is one in which the tertiary protentions produced automatically through predictive analytics inhibit the individual and collective capacity to project future horizons beyond those that attempt to enclose the domain of the possible within the limits of the controllable and the profitable; and one in which the subjective experience of serendipity is the facade for a system of generalized auto-completion.10

In order to question the possibility of this totalizing scenario of behavioural control, we must inquire into the relationship between the automatic production of tertiary protentions and the understanding of prediction as the discovery of propensities. In line with what has been said above, we should remember that when we say that 'algorithms tell us what is going to happen' what we really mean is that they tell us the probability of something happening, and that this probability is grounded on tendencies or propensities of the present. A predictive act of targeted marketing (for example) is to be understood as the automatic production of a tertiary protention constructed on an identified propensity of the consumer to act in a certain way on the basis of her previously recorded behaviour. The targeted ad I see does not predict the items or services I will buy next-which would amount to algorithmic clairvoyance-but those I am prone to buy on the basis of my preferences and shopping habits. Although this might seem glaringly obvious, it is worth mentioning in order to keep in mind how mere behavioural anticipation often goes hand in hand with another, operational or effective, dimension of tertiary protentions. In cases such as the one just mentioned, tertiary protentions ride on the back of discovered probabilities-propensities in the hope of finding the most functional paths through which to performatively produce consumer needs, inscribe desire within prespecified pathways, and guide action in what aims to be a closed-loop fashion. Stiegler recognizes this, couching it in terms of a short-circuiting of deliberation: "these technologies calculate correlations, then, in order to automatically anticipate individual and collective behaviour, which they also provoke and 'auto-realize' by short-circuiting and bypassing any deliberation" (Stiegler, 2016, p. 231).

When it comes to predicting human behaviour, predictive analytics' capacity to discover propensities is limited to the input that can be gathered from those domains of human life which are currently grammatizable in the form of data. Although the range of grammatizable domains continues to expand, a reluctance to affirm the possibility of its totalization is perhaps well founded. After all, can we possibly fathom a scenario where every aspect of human life-from neuronal activity to externally observable behaviour and libidinal dynamics-becomes quantifiable and translatable into data? Could the dystopian scenario of a fully automatic society of hyper-control be realized this way?

Real potentiality and prediction in the wild

First of all, once again we must resist the temptation of conceiving this dystopian society of control along the lines of what we have called algorithmic clairvoyance. To reiterate: this sort of predictive foreknowledge would imply a determinist view of the world in which chance plays no part and in which we could dispense with probabilistic considerations once we had enough knowledge of the causal mechanisms of the world to precisely predict its future unfolding. Endorsing the existence of stochastic (or chance-involving) processes-be it at the level of quantum physics or at the level of human behaviour-implies accepting that predictive foreknowledge runs into ontological limits. In addition to this, there are also epistemological limits, which pertain to the cognitive infeasibility of foreknowledge, "either because we cannot secure the needed data, or because it is impossible for us to discover the operative laws, or even possibly because the requisite inferences and/or calculations involve complexities that outrun the reach of our capabilities" (Rescher, 1997, p. 134).

Another way we can address the limitations of predictive analytics is by turning once again to the work of Hansen, who draws on the ontology proposed by Alfred N. Whitehead in order to identify certain epistemological limits that render the scenario of a totalizing society of hyper-control unfeasible. Hansen's work (2015b) is itself noteworthy for its incisive rereading of Whitehead's philosophy, one which departs in many key respects from several of the more popular readings that tend to frame it primarily via Deleuze.11 These disputes notwithstanding, what is more important for us is Hansen's bold claim that positions Whitehead as "the preeminent philosopher of twenty-first-century media", due to what he identifies as the "probabilistic underpinnings of 'real potentiality'" (2015, p. 120). The concept of real potentiality is central to Hansen's reading of Whitehead's speculative metaphysics, designating "the potentiality of the settled universe that informs the genesis of every new actuality along with the incessant renewal of the 'societies' that make up the world's materiality [...] as such it instigates a feeling of the future in the present: an experience of the future exercising its power in anticipation of its own actuality" (2015, p. 120). In Whitehead's (1979, p. 66) own words: "The reality of the future [...] is the reality of what is potential, in its character of a real component of what is actual." Thus, a clear link is drawn between the idea of the settled universe's causal efficacy impinging on the future-or, vice versa, the future being 'felt' in the present-and the notion of propensities or tendencies as that in the present which points towards the future.

According to Hansen, the probabilistic underpinnings of real potentiality are tied to Whitehead's audacious speculative12 postulate, which includes the totality of the universe-the 'total situation' of the world-at any given time as causally informing every moment of its becoming and every new concrescence of actual entities: "every item of the universe, including all the other actual entities, is a constituent in the constitution of any one actual entity" (2015, p. 148). For Whitehead, it is the entirety of the present state of the universe-the potentiality of every single datum that composes it-which impinges on the future in ways we cannot fully fathom; it is the real potentiality of the total situation which determines the future-oriented propensities found in the present. According to Hansen, this complex network of potentiality can only be represented probabilistically:

Because this power remains that of potentiality-and indeed of an incredibly complex network of potentiality, a network inclusive of the potentiality of every datum comprising the universe's current state-it can only be fixed or arrested probabilistically [...] The force of the future-the future force of every single datum informing the universe at a given moment-is felt in the present in a way that can only be represented probabilistically and where such representation designates neither a purely abstract likelihood nor a statistical likelihood relative to a provisionally closed dataset, but a properly ontological likelihood: a propensity, which is to say, a likelihood that is, paradoxically, real (Hansen, 2015, p. 121).

This is why Hansen affirms that Whitehead's notion of real potentiality provides us with a way of understanding the ontological basis of the power of contemporary predictive analytics. These techniques of prediction piggyback on the more general power of the causal efficacy of the present universe as it impinges on the future; they are capable of 'feeling' the future anticipating itself in the present in the form of propensities. As Hansen (2015, p. 133) points out: "The access that large-scale data mining and predictive analytics gives to this propensity is [...] a partial glimpse into the present operation of real forces that will produce-that are already producing-the future to come."

It is paramount to emphasise the partial nature of this technically mediated glimpse into the realm of propensities. The real propensities discovered by predictive algorithmic techniques remain tied to the unavoidable partiality of the datasets upon which they are constructed. If we look at the problem through a Whiteheadian lens, we must recognize the empirical unfeasibility of a totalizing dataset of the entire universe as it impinges on every moment of its becoming and on every actual entity; we must recognize that every predictive system depends on a provisional delimitation of a part of an always immensely larger field of environmental data that exceeds it. As Hansen (2015, p. 128) argues: "Whitehead's account in effect foregrounds the impossibility for any empirical analytic system-no matter how computationally sophisticated and how much data it can process-to grapple with the entirety of real potentiality, or anything close to it." It is precisely this inescapable partiality of prediction that drives Hansen to coin the term 'prediction in the wild' to refer to the activity of contemporary predictive analytics: a discovery of propensities-probabilities which are always open-ended due to the empirical unknowability of the total situation of the actual state of the universe.

Viewed from another angle, this excess or surplus of data can be seen as having a positive side. Hansen points out that the reliability of prediction is purchased at the cost of inclusiveness. The reliability of a predictive system comes from its closing off, by constituting provisionally closed datasets, of this surrounding excess of data which threatens to complicate it. This provisional closure is what underpins the concrete networks of predictive algorithmic power implemented by corporate and governmental agents for the purposes of societal control and consumer behavioural anticipation; in short, it is what underpins the operational capacities of algorithmic governmentality. The imperative of algorithmic governmentality lies in "the reduction of general potentiality-the potentiality stemming from the sum of attained actualities constituting the settled universe (what Whitehead calls 'real potentiality')-to a fully instrumentalized deployment of potentiality in a narrow coupling with specific functional ends" (Hansen, 2015b, p. 70). When viewed through a Whiteheadian lens, even if we take into consideration the immense scale of the big data upon which these predictive techniques depend-it is estimated that more data is currently produced every two days than in all of human history prior to 2003 (Kitchin, 2014, p. 4)-a totalization is not fathomable, and an excess beyond narrow functionalization is ineliminable. Hence,

there will always be a surplus of data that remain available for the future in the mode of potentiality. In this sense, Whitehead's speculative account serves as a critical check on the totalizing impulses of today's data industries, a guarantee of sorts that the future, insofar as it can be felt in the present, can never be fully known in advance (Hansen, 2015, p. 128).

Conclusion: pharmacology of predictive analytics

In the previous pages we have attempted to outline our contemporary technopolitical conjuncture through the concept of algorithmic governmentality-which can be understood as the convergence of predictive analytics and behavioural anticipation-and through the concept of real potentiality-which establishes the ontological and epistemological limits of prediction. The widespread recognition of the pressing issues that arise from this conjuncture has fostered a fair amount of theoretical and practical work aimed at devising practical strategies of critical engagement and resistance. Strategies such as counter-surveillance (or sousveillance), obfuscation, refusal, reverse engineering, culture jamming and commoning have been some of the ways in which artists and activists have attempted to resist data capture, surveillance and compulsory connectivity (O'Dwyer, 2019).13 Although these certainly provide us with valuable tools and with much needed insight into the currently black-boxed algorithmic systems we live in, they more often than not assume-understandably-a reactive or defensive position towards these systems.

One of Stiegler's main contributions has been his ongoing attempt to understand the relationship between the human and its technical milieu through the concept of the pharmakon, a Greek term that names both a poison and its remedy. Following Derrida's analysis of Plato's Phaedrus, Stiegler's use of the term points to the conflictive relationship between technics-qua tertiary retention-and the human. Beyond the banal, albeit correct, observation that technology can have 'good' or 'bad' applications, the notion of pharmacology is meant to foreground the ambivalent role that the technical exteriorization of memory in its different forms can have in the constitution of various aspects of the subject, from its libidinal dynamics to its conscious experience of time. The paradigmatic example is that of writing: at the same time that it weakens anamnesic (or 'internal') memory by removing the need for practices of memorization, it also greatly expands memory's scope and its intergenerational transmissibility.

Hansen doubts that the contemporary technological landscape can be thought pharmacologically without first introducing some important modifications to this concept. According to him, Stiegler's pharmacology depends on a view of technics which circumscribes it to its role as surrogate memory and privileges the relation that conscious human experience can maintain with it. Such is the basis of the 'pharmacological pact' that characterized media from writing to cinema but which, he argues, has now been broken: whereas previously technical dispossession and recompense were two sides of the same coin and directly concerned human senses and faculties, new media technologies present a split between their experiential and operational registers which sidelines or marginalises human conscious experience, in great part due to a disjunction between temporal regimes. Contemporary data technologies operate at microtemporalities well below the threshold of human conscious experience, modes of awareness and cognitive processing. This is a temporal imbalance that is exploited by algorithmic governmentality, as we have seen above.

However, from a pharmacological perspective the response to algorithmic governmentality's use of predictive analytics need not be limited to opposition or resistance. Certainly, the first task to embark on is the struggle for alternative ways of managing data beyond its current dominion by capitalist platforms and data industries. In other words, before we can think about a pharmacological recompense of predictive analytics, we would first have to transform the hypercentralized nature of the databases and infrastructures that underpin it. Nonetheless, we can presently go a step further and inquire into the ways in which human experience and sensibility themselves could take advantage of, and be somehow enriched despite (or perhaps because of), the operational splits and temporal disjunctions that underlie predictive analytics. As we have seen, predictive analytics presents us with the possibility of a technically mediated insight into the realm of real potentiality, of 'feeling' the future anticipating itself in the present in the form of propensities or tendencies that often implicate us. Moreover, it presents us with the possibility of using these propensities to produce tertiary protentions which, as we have seen, circumvent the usual anticipative faculties of human subjects and transform the way in which they construct their horizon of futurity. How can a pharmacological inversion of the currently poisonous effects of predictive analytics and digital protentionality come about? How could the overtaking of human protentionality through automatized tertiary protentions be reconceived and applied beyond social media curation and targeted advertising? How could it transform our experience of time in more interesting, or even useful, ways?

BIBLIOGRAPHIC REFERENCES

Bradley, A. (2011). Originary technicity: The theory of technology from Marx to Derrida. Palgrave Macmillan.

Cummins, E. (2022). The creepy TikTok algorithm doesn't know you. Wired. https://www.wired.com/story/tiktok-algorithm-mental-health-psychology/

Debaise, D. (2017). Speculative empiricism: Revisiting Whitehead. Edinburgh University Press.

Esposito, E. (2022). Artificial communication: How algorithms produce social intelligence. Strong Ideas series. The MIT Press.

Hacking, I. (1975). The emergence of probability: A philosophical study of early ideas about probability, induction and statistical inference. Cambridge University Press.

Han, J. and Kamber, M. (2006). Data mining: Concepts and techniques (2nd ed.). Morgan Kaufmann.

Hansen, M. B. N. (2015). Our predictive condition; or, prediction in the wild. In R. Grusin (Ed.), The nonhuman turn (pp. 101-138). University of Minnesota Press.

Hansen, M. B. N. (2015b). Feed-forward: On the future of twenty-first-century media. The University of Chicago Press.

Husserl, E. (1991). On the phenomenology of the consciousness of internal time (1893-1917). Kluwer Academic Publishers.

Jaynes, E. T. (2003). Probability theory: The logic of science. Cambridge University Press.

Joho, J. (2022). TikTok's algorithms knew I was bi before I did. I'm not the only one. Mashable. https://mashable.com/article/bisexuality-queertiktok

Kitchin, R. (2014). The real-time city? Big data and smart urbanism. GeoJournal, 79, pp. 1-14.

Koopman, C. (2019). How we became our data: A genealogy of the informational person. The University of Chicago Press.

Naughton, J. (2019). "The goal is to automate us": Welcome to the age of surveillance capitalism. The Guardian.

O'Dwyer, R. (2019). Surveying surveillance capitalism. Neural, 63, pp. 36-40.

Popper, K. (1995). A world of propensities. Thoemmes Press.

Rescher, N. (1997). Predicting the future: An introduction to the theory of forecasting. State University of New York Press.

Rouvroy, A. and Berns, T. (2013). Algorithmic governmentality and prospects of emancipation: Disparateness as a precondition for individuation through relationships? Réseaux, 177, pp. 163-196.

Shvartzshnaider, Y. and Josephson, C. (2019). Every move you make, I'll be watching you: Privacy implications of the Apple U1 chip and ultra-wideband. Freedom to Tinker. https://freedom-to-tinker.com/2019/12/21/every-move-you-make-ill-be-watching-you-privacy-implications-of-theapple-u1-chip-and-ultra-wideband/

Spiegelhalter, D. (2019). The art of statistics: Learning from data. Pelican.

Stiegler, B. (1998). Technics and time, 1: The fault of Epimetheus. Stanford University Press.

Stiegler, B. (2009). Technics and time, 2: Disorientation. Stanford University Press.

Stiegler, B. (2010). For a new critique of political economy. Polity.

Stiegler, B. (2016). Automatic society, volume 1: The future of work. Polity.

van Hooijdonk, R. (2022). Four ways in which algorithms can predict your future. But should they? Richard van Hooijdonk. https://blog.richardvanhooijdonk.com/en/four-ways-in-which-algorithms-can-predict-your-future-but-should-they/

Véliz, C. (2021). If AI is predicting your future, are you still free? Wired. https://www.wired.com/story/algorithmic-prophecies-undermine-freewill/

Whitehead, A. N. (1979). Process and reality: An essay in cosmology. Free Press.

Funding source: Self-funded.

Cite as: Díaz Alva, A. (2023). Predictive analytics and digital protentionality: On algorithmic prediction and anticipation. Desde el Sur, 15(2), e0021.

1Doctoral student at Leuphana University in Lüneburg (Germany). With a previous background in architecture, he now focuses on research in the fields of critical theory, philosophy of technology and Marxist theory. He holds master's degrees from 17, Instituto de Estudios Críticos (Mexico) and Goldsmiths College, London.

2Most recently, for example, Apple introduced the U1 ultra-wideband chip into its new iPhone 11, which permits precise indoor location tracking, with accuracies between 0.5 and 10 cm, something previously unheard of when it comes to mobile phones (Shvartzshnaider and Josephson, 2019).

3According to Jiawei Han and Micheline Kamber, the term data mining is prone to misunderstanding, since the purpose of this process is not the extraction or 'mining' of massive amounts of data per se, but rather the production of correlations and patterns from which statistical indexes can be derived. They point out that "the term is actually a misnomer [...] data mining should have been more appropriately named 'knowledge mining from data', which is unfortunately somewhat long" (Han and Kamber, 2006, p. 5).

4Ian Hacking (1990) has argued that the erosion of the deterministic view of the universe in the nineteenth century is historically linked to specific endeavours (mostly directed by the state) that attempted to make society more controllable by discovering the statistical laws underlying its dynamics. Although we cannot delve deeper into this topic, perhaps a genealogy of algorithmic governmentality could be attempted by pursuing this thread.

5In the case of Laplace, the notion of equipossibility does not contradict his view of a deterministic universe. For him, probability fractions are just a result of our lack of sufficient knowledge about causal chains. (Hacking, 1975, p. 32)

6The emergence of a 'correlational paradigm' as a consequence of the predominance of machine learning has sparked much discussion and controversy. It implies a radical transformation of causal reasoning and of the attempts (scientific and otherwise) to connect events to underlying causes. In traditional scientific logic, one needed to build theories and models that guided causal explanations. In machine learning, causal explanations are eschewed in favour of predictive adequacy. One does not search for causal relationships to prove a hypothesis (since there is as yet none), but rather for correlations and associations, "for patterns whose detection discloses underlying structures and should make it possible to formulate effective predictions" (Esposito, 2022, p. 90).

7This move away from the strictly immanent domain of time-consciousness to include an inherited past was in fact one of the main points that Heidegger argued, against his teacher, in Being and Time. "Though a student of Husserl-who defined transcendental philosophy as the analysis of lived experience in the conscious, living present-Heidegger breaks with phenomenology precisely on this point: in the existential analytic of Being and Time, the past that Dasein has not experienced, which it inherits, is an existential characteristic of its originary temporality (essential to its existence)." (Stiegler, 2009, p. 4)

8Arthur Bradley points out that this reconfiguration of the opposition between empirical and transcendental is precisely Stiegler's most original (and also most polemical) contribution, as well as one of the central points of contention with Derrida. (Bradley, 2011, pp. 126-127)

9"Technical specificities, as the medium or ground for the recording of the past, condition the modalities according to which Dasein has access to its past, for each age." (Stiegler, 2009, p. 4)

10Rouvroy and Berns (2013) emphasise the fragmentary nature of the data gathered from individuals, and the way that the individual is never addressed directly-as it is in Foucault's paradigmatic account of disciplinary power-but rather indirectly, through correlative, and protentional, 'digital doubles'. This is similar to Colin Koopman's (2019) recent genealogy of contemporary information politics, in which, according to him, the 'informational persons' on which power acts are not indexed to human morphology (as is the case with biometrics), but are rather defined by the data sets accumulated from the digital traces of everyday conduct.

11Hansen's rereading of Whitehead is articulated around what he terms the 'claim for inversion' (CFI), which "contends that we should invert the orthodox understanding of creativity provided by Whitehead and ratified by virtually all of his commentators: rather than looking to concrescences as the sole source of creativity, we must view them as vehicles for the ongoing production and expansion of worldly sensibility, as instruments for the expression of a creative power that necessarily involves the entirety of the superjective force of the world" (2015b, p. 13).

12Whitehead uses the term 'speculative' to name his philosophical method, which refers to his attempt to give an account of how the universe must be in order for experience to be what it is. For an account that focuses on the role of speculation in Whitehead's philosophy, see the recent work of Didier Debaise (2017).

13For an ongoing list of projects and essays dealing with the problem of "Resisting Smartness", see: https://www.are.na/shannon-mattern/resisting-smartness.

Received: December 31, 2022; Accepted: April 03, 2023

Author contribution:

Alan Díaz Alva was the sole author.

Potential conflicts of interest:

None.

This is an open-access article distributed under the terms of the Creative Commons Attribution License.