The following is an essay which I wrote in 2020, right at the beginning of the COVID-19 Pandemic. It is a bit outdated and the tone is sooooo pretentious (I was that kid back then) but I think the ideas were worth putting into the world and if anyone with a background in the Phenomenology of Technology would like to leave any irate responses, pls email or DM me because I’d love to chat :)
The question of technology has perhaps never been more present. By the very construction of the term, however, that has never not been true. As Dasein continues to reach beyond herself, attempting to be more than her mere existence, technology lends the answer—or rather, is the answer. As described by de Beauvoir, technology is that which apotheosizes. By making the human the creator of something to her own design, she becomes more than merely an animal left to discover and rediscover—she becomes likened to God in her ability to create. She has bridged the gap from ontological to ontic by morphing that which could be into that which is. She leaves behind the Farmer who is left to the wills of nature and learns to challenge that which is immanent.
The face of technology is always changing as its creator does: from the simple tools we study in preliminary physics classes—the lever, the pulley, and so on—to Homo Faber, who creates tools which themselves create tools, to the tools we use to extract energy from the earth and fuel our livelihoods and our industries, to the technology of today. What, exactly, is the technology of today, though? Before answering that, we must first understand what we consider to be technology. Above, we described technology as something which extends the Being of Dasein. This cannot, however, be its description. Such a category is far too vague and would necessarily include experiences like childbirth and death—which we should not count as technology. The examples presented (lever, pulley, energy extraction, etc.) offer insight into a description of technology which is simultaneously too narrow and too opaque. Of course, we could add things like language—a communication technology—or bureaucracy—an organizational technology—to our list (as most authors of works regarding technology do) and muddy it up even further. This makes it seem that perhaps technology is simply something which allows us to do. Such a description makes no sense, though, for then would our arms—which allow us to do so much—be a technology? Our minds? No. Clearly, technology cannot be a physical part of us. Nor would we imagine rage—which often allows us to do the worst—to be a technology. In that way, technology also cannot be a metaphysical part of us. No, technology is something entirely separate. As we can begin to see, its nature seems to become more elusive as we examine it further. Often, when a subject acts as such, philosophers say that it is in "our great blind spot." By this, however, they mean that which is so proximal to us that we cannot see it. Often, the phrase is used to describe why we have no true ontology of Selfhood.
This leads us to something of a paradox, though. How can technology seem to occupy the same place as Self when we have already claimed that it can not be a part of us?
This is the difficulty with technology. It remains a necessary part of our existence, close to the very definition of what it means to be human—to have a Self—but simultaneously, it remains entirely separate from us. This holds true for the lever and pulley as well as it does for the windmill and the deep-sea oil rig, the printing press, the smartphones we carry in our pockets, and the websites we "visit". There is, of course, a natural distinction between the two couples: smartphone and website, and windmill and oil rig. The latter pair is meant to extract and convert for purposes of energy, and for this reason I will refer to them and the like as "energy technologies." The purpose of the former is a bit more ambiguous. Of course, the smartphone and the website can be traced back to the telephone and the printed word, respectively. One might imagine that this makes them communication technologies. However, to consider the smartphone and the web page—especially one that could also be considered a social media site, such as Facebook or Twitter—simply communication would be a grand reduction. The smartphone in my hand is not equivalent to a rotary dial in any sense besides the fact that both can make calls. Even that similarity is compromised in that, while it is unequivocally the primary function of the rotary phone to make calls, hardly the same can be said about an iPhone, whose utility has become increasingly all-encompassing. Concerning the web page: while the main body of text is able to offer the user much the same as a pamphlet, it is completely separate from a pamphlet in that, while a pamphlet looks the same to all who have received it, most websites differ depending on who is looking and via what access point. Of course, for any social media site, the "feed" will be customized to the user, but even for most static websites, the specific collection of ad content which appears is unique to the user logged in.
This personalization, which strips away some of the common ground between experiences of the same site, makes it more than simply a pamphlet. If communication technology is meant to connect people, this specifically works to disjoint them. Indeed, it is time that we discuss the most fundamental difference between communication technologies and this new form. Communication technologies only make another available, and specifically in a binary form: the author, or the person on the other side of the phone, is the only one engaging with the user (barring any considerations of wire-tapping, etc.). These new technologies also make another available, but there is a latent, active component of the user as well. Besides making another available, they also make the user present. The term here should be taken in the most literal way: the identity of the user is, quite literally, necessarily being shown for the purpose of extraction by a third party. Here, of course, is the other main difference. This interaction between author, or correspondent, and the user is not binary, but rather ternary. The third member is what we could call a "mechanical observer." Often, this is some part of the program embedded in a website, or the operating system in our smartphone, which collects that obscure object we often call "data" on our interactions with the technology being used. It is an entity dependent upon, yet entirely separate from, us. It is not itself technology, yet it is at the heart of it.
The word "data" has evolved from the Latin verb dare, which translates to "to give." Our systems of exchange, labor, and science, and the ways in which we understand ourselves and each other, have all taken hold of this concept. This notion of being "given" remains a crucial part of how we understand the power and ontology of data. To say that it is given is to understand it as being inherent to the world as it is. It is as if data is something fixed in nature for us to extract. This is not a claim to make lightly, so we must consider some examples. The paradigmatic example seems to be one which many have likely never heard of, and exists in the paper Automated Inference on Criminality using Face Images, written by Xiaolin Wu and Xi Zhang of Shanghai Jiao Tong University. This paper uses four supervised machine learning techniques to see if criminality can be predicted using only face images which have been collected from ID photographs. One of the tools used, the convolutional neural network, is often described as a "black box" in that it is simply fed data and asked to produce a result, with neither user nor coder having insight into how the result came about. In an abstracted way, the machine is simply using statistics and linear algebra to update parameters and optimize a given metric that is unique to the method being used. Little insight beyond this is given by the machine, as we are given no clues as to what drives the changes in parameters. As a whole, we can look at machine learning (statistical inference techniques) as something which searches for patterns which we cannot see. In this particular case, the researchers gave the algorithms a set of ID photos and sought to find out how accurate their machines were. It was found that the convolutional neural network performed with almost 90% accuracy in predicting whether or not the person in the photo was a criminal.
What characteristics of that face made the machine "aware" of such a thing is unknown, and can never be known. Nevertheless, this type of technology is built on the idea that there may be something encoded, by nature—a word which we will simply allow to be present without questioning for the time being—left for us to find within the structure of these faces which determines the inclination of an individual to lead a life of crime.
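To make this "black box" loop concrete, here is a toy sketch in Python (synthetic data and a simple gradient-descent classifier, not the paper's actual CNN or dataset) showing how a model can report high accuracy on a hidden pattern while its learned parameters offer no interpretable "why":

```python
import math
import random

random.seed(0)

def make_data(n=200, d=5):
    """Synthetic stand-in for labeled photos: feature vectors with a hidden pattern."""
    true_w = [random.gauss(0, 1) for _ in range(d)]  # the pattern "nature" encodes
    data = []
    for _ in range(n):
        x = [random.gauss(0, 1) for _ in range(d)]
        y = 1 if sum(wi * xi for wi, xi in zip(true_w, x)) > 0 else 0
        data.append((x, y))
    return data

def sigmoid(z):
    # clamp to avoid math.exp overflow for extreme z
    if z < -60:
        return 0.0
    if z > 60:
        return 1.0
    return 1.0 / (1.0 + math.exp(-z))

def train(data, d=5, lr=0.5, epochs=200):
    """Repeatedly update parameters to optimize a metric (log-loss), as in the essay."""
    w = [0.0] * d
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            for i in range(d):
                w[i] += lr * (y - p) * x[i]
    return w

data = make_data()
w = train(data)
acc = sum(
    (sigmoid(sum(wi * xi for wi, xi in zip(w, x))) > 0.5) == (y == 1)
    for x, y in data
) / len(data)
print(f"accuracy: {acc:.2f}")  # high accuracy on the hidden pattern...
print(w)                       # ...but the raw weights explain nothing to us
```

The point of the sketch is the asymmetry the essay describes: the training loop is nothing but "statistics and linear algebra updating parameters," yet the accuracy figure arrives without any account of what cues it.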
Alternatively, we can imagine the recent developments in DNA testing. Companies like 23andMe allow clients to send in samples of DNA, which the company analyzes in order to inform the client of their lineages based on geography. Once again, this technology is founded on an idea similar to that of the bloodline: the belief that one can be described, at least in part, by the lineage which exists—by nature—in their blood. Indeed, such technology has also been used in the criminal system, with similar companies having admitted to collaborating with local and federal law enforcement to identify and charge individuals whose DNA has been found at crime scenes. Much like the neural network, this too has not been 100% correct—causing much imaginable trouble for those wrongly convicted. One would imagine that, of course, these wrongfully convicted people would have pleaded their innocence, but evidently to little avail. How could a human being—full of faults and the ability to so easily lie—be considered more correct than these answers given to us by nature? In a similar vein, how could one disagree with Elizabeth Warren's claim of Cherokee heritage, when it is so clearly stated in her DNA test? Indeed, all of the examples presented above have received heavy criticism from the public and from peers. The question is—why?
In the examples presented, we see that the results handed over by these technologies take priority over the word of people, under the assumption that the answers given to us by this technology are more correct, having been given by nature and merely extracted by man. The idea of extracting and making available what is given to us by nature is particularly interesting because it falls directly in line with Heidegger's ideas on what technology is, in his essay The Question Concerning Technology. In discussing modern technologies of his time—particularly those which extract energy—he suggests that technology is that which "reveals" what is in nature by "challenging" it in the form of extraction and expediting—or "bringing-forth." This bringing-forth turns what is in nature into tools: once things are brought forth, they are "set aside" and made ready-to-hand—which is the ontological characteristic of a tool. He says also,
The revealing that rules throughout modern technology has the character of a setting-upon, in the sense of a challenging-forth. That challenging happens in that the energy concealed in nature is unlocked, what is unlocked is transformed, what is transformed is stored up, what is stored up is, in turn, distributed, and what is distributed is switched about ever anew. Unlocking, transforming, storing, distributing, and switching about are ways of revealing. But the revealing never simply comes to an end. Neither does it run off into the indeterminate. The revealing reveals to itself its own manifoldly interlocking paths, through regulating their course. This regulating itself is, for its part, everywhere secured. Regulating and securing even become the chief characteristics of the challenging revealing.
This would suggest that there exists a deep similitude between the technologies of Heidegger's time and the technologies we see in our world today, in that their ontologies are one and the same. However, there exists one fundamental difference. While the technologies which Heidegger discusses extract energy from the water and the earth, these technologies extract no such thing. Thus far, we have considered them only to interpret data and extract patterns that are given by nature. What is the worth in a pattern? In a vacuum, there clearly is none. But to human beings, patterns are everything. Patterns are the ways in which we make sense of the world and the way in which we build our realities, our Weltanschauung—in the literal sense of each individual's view of the world.
To make sense of the significance of this and gain an understanding of what exactly it means to "extract patterns," we look to Foucault's idea of the episteme. To Foucault, an episteme is the relation between the hermeneutics—the "totality of learning and skills which enable one to make the signs speak and to discover their meaning"—and the semiology—"the totality of the learning and skills that enable one to distinguish the location of the signs, to define what constitutes them as signs, and to know how and by what laws they are linked." Foucault relates epistemes to time period and location. In The Order of Things, he lays forth the epistemes of what he calls "Classical thought" and "modernity," both being located in the Western world (i.e. Western Europe), with "Classical" referring to the 16th century and prior, and "modern" referring to the 17th century until the time of publication. He believes that at the beginning of the 17th century, there was a radical shift in the structures of knowledge, thus forming the episteme of modernity. What Foucault means by signs is rooted in, and forms the basis of, the study of semiotics. For our purposes, I will describe a sign as 'that which communicates another to the interpreter.' One can imagine examples, such as the relationship between a cloudy day and melancholy, between the sun and warmth, or between the blue bird logo and the social media website Twitter. Indeed, if these relationships hold some meaning to the reader, then that is very revealing of her Weltanschauung. For example, to those stranded in the desert for a decade, the clouds may be a welcome sign of water and thus a secondary sign of life, while the sun is a sign of harshness and difficulty; indeed, the bird logo would merely be a poor depiction of a bird. Foucauldian thought suggests that it is our ability to make the connection between Twitter and the blue bird depicted—an example of our hermeneutics—in combination with our ability to discern that the bird is, in fact, a symbol, as well as our ability to recognize that the bird appears on the website and on all things produced by the company Twitter and is linked to it by means of being a "brand logo"—an example of our semiology—which constitutes our episteme.
In his book Technopoly, Neil Postman more or less concludes with a chapter entitled "The Great Symbol Drain." He begins this portentously named chapter with the thought that one day we might see an advertisement for California Chardonnay in which a depiction of Jesus, meant to allude to the phenomenon of transubstantiation, would be used. In doing so, he is suggesting that it is not unimaginable that, in the current era, a financially motivated corporation would work to turn its cheap wine into a sign which signifies the Bible. He mentions that such a thing has already happened, with Uncle Sam being used to sell Hebrew National hot dogs. The remainder of the chapter, for which the entirety of the book has worked to lay the theoretical foundation, describes how technology has distorted our hermeneutics in such a way that all signs become trivialized. He considers this a result of the current technopolistic world's inherent characteristic of "information glut." Thus far, I've worked very hard to avoid the word "information"—a word so ambiguous, yet so ubiquitous in the current age. For now, we will not try to define it fully, but consider a description of it lent to us by the epistemologist Fred Dretske and the great mathematician Claude Shannon. To these two, information is tied directly to "Shannon entropy"—a metric that quantifies the uncertainty, and hence the information, in a sign-signified relationship. For example, imagine the communication between a catcher and a pitcher in baseball. If we see that every time the catcher puts up one finger, the pitcher throws a fastball, and every time the catcher puts up two fingers, the pitcher throws a curveball, then we can reasonably say that one finger is the sign which signifies 'fastball' and two fingers is the sign which signifies 'curveball'. Here, we have come to fully understand the hermeneutics of this minimally entropic, maximally informative sign-signified relation.
If, however, we saw that the pitcher throws a fastball half of the time that the catcher puts up one finger, and a curveball the other half (and vice versa for two fingers), then we have no information about the system and it is maximally entropic. If the pitcher throws a fastball three out of four times that the catcher puts up one finger, then we have enough information to say that one finger is likely to result in a fastball. In this case, we have more information than in the previous case, but less than in the fully certain case. So, in this model, information is understood as being related to the probability that the signified (the pitch thrown) agrees with the sign (the number of fingers the catcher puts up). While this is in no way a complete description of information, it is enough for us to understand Postman's argument. He argues that the flood of sign-symbol relationships that has come with technology has reduced the information in each sign. Certainly, if a depiction of Jesus acts as a sign of the scripture of the Bible some of the time, and a sign of cheap California wine the rest of the time, then information is being lost, as we are not able to say—with a certainty which decreases proportionally to the number of things a depiction of Jesus becomes linked to—what exactly Jesus symbolizes. When Postman speaks of the symbol drain, he is describing a hermeneutics of society which is becoming more opaque, ending in a final state in which it is impossible to know with certainty what is a sign for what.
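The pitcher-catcher arithmetic above can be checked directly. Here is a minimal sketch of Shannon entropy in Python (the probabilities are the essay's illustrative ones, not data from anywhere):

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2(p)), measured in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Probability that one finger is followed by a fastball (curveball otherwise):
certain = entropy_bits([1.0, 0.0])    # sign always kept: 0 bits of entropy
likely  = entropy_bits([0.75, 0.25])  # sign usually kept: ~0.811 bits
coin    = entropy_bits([0.5, 0.5])    # sign tells us nothing: 1 bit (maximum)

print(certain, likely, coin)
# lower entropy means the catcher's sign carries more information
assert certain < likely < coin
```

The ordering is exactly the essay's: the fully certain sign is minimally entropic, the three-out-of-four sign sits in between, and the fifty-fifty sign is maximally entropic and carries no information at all.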
This concern avoids entirely the question of how we know anything is a sign at all—the semiology of the society. In the baseball example, I told the reader that the number of fingers put up by the catcher is a sign. What if that, too, were not certain? What if the sign were actually a shake of the glove, or a wiggle of the foot, or even a fan in the seats standing up and taking her cap off? Walter Benn Michaels, in The Shape of the Signifier, speaks extensively on what we consider to be signs, where we think them to be located, how they relate to the signified, and the consequences of all of this. He begins with the poems of Emily Dickinson, asking what, exactly, constitutes a poem. Dickinson, in particular, is known for her idiosyncratic style, which played with the shape of the text on the page and markings left in the margins. He treats the debate about what should be considered when reproducing her poems as paradigmatic of the difference between reading and experiencing. On one hand, there are those who believe that the poem is the words alone, meant to be read and interpreted by the audience in a way similar to postmodernist art. The meaning of the piece is embedded in it by the author, and it is a construction of the author with some intent. Any marks which are not relevant to that construction are not relevant to the meaning of the piece. One example he cites is the white painting—a painting which was intentionally created to be read as "blankness." The markings on the painting—the uneven brushstrokes, the globs of paint—hold no significance, in that they are looked over when the painting is being "read" as blank. On the other hand are those who believe the blank spaces and obscure markings on the page must also be included in any reproduction of the poem. This, Michaels argues, relates to the de Manian materialist conception, which places primacy upon those material markings which cannot be understood over those which can.
It places an emphasis on the "material vision" of humans, a form of semiotic reduction which is meant to be the way that one might imagine the world was seen by the most primal form of human being, "not yet severed from any purpose or use." This would be to say that no discourse has yet "touched" the object seen. We might imagine this as looking at something for which we have no expertise—perhaps the markings on a scroll of Chinese text. In this, the possibilities for meaning, use, and purpose are endless, bounded only by the imagination and the materiality of the page upon which they lie. To Michaels, those who prefer to maintain the blank spaces and unintelligible markings in reproductions of Dickinson's poems suggest that the essence, or identity, of the poem resides in these markings as much as it does in the words themselves, because each part of the poem—including the paper upon which it was written—holds equal possibility for meaning. It is these people, Michaels suggests, who aim to "experience" the poem in the same way that a minimalist piece of art by Rothko or Mondrian is experienced. There is no intended meaning, and all marks on the piece are equally meaningful because all have the capacity to lend meaning to some observer. But if everything has significance, he argues, then nothing does. Michaels suggests that modern literary theory has fallen into this latter form of thinking. He believes that literary critics attempt to extract meaning from within the text itself, removing entirely the intent of the author, thus rendering it almost a material object. It is my position in this essay that this form of semiology holds true not only in literary criticism but, along with the implications of Postman's hermeneutic of symbolic reduction, forms the episteme of the current era, as is evidenced by the technological innovations of the last decade.
Having made this claim, let us lay forth exactly what the two parts of this episteme are. Its hermeneutics, as described by Postman, are that of a world which is composed entirely of signs. In this way, the term "symbolic reduction" is a bit misleading. In comparison to the episteme which Foucault ascribes to the modern era, and using Shannon and Dretske's understanding of information, this hermeneutic indeed reduces the amount of information in each sign. However, it does so by introducing new sign-signifier relationships, creating an increasingly interconnected web of semiotic relations. The term "introducing" is used because it is not fair to consider signs as being "assigned" or "created": Jesus is not being assigned to the California wine so much as the inherent connection between Jesus and wine, as created by transubstantiation, is being revealed and made ready-to-hand by the advertising agency, so that it may be repurposed for its own use. Here we see the first instance of technology, as described by Heidegger, being used. The sign-signifier relation between Jesus and wine is brought forth, or extracted, by the advertising and thus made ready-to-hand, i.e. turned into a tool to be used. So, when Postman speaks of the connection between technology and symbolic reduction, he speaks of current technologies extracting sign-signifier relations and making them ready-to-hand. This function, as it is carried out, results in a Weltanschauung in which all that can be observed forms a web of interconnected signs. In the study of networks, an interesting phenomenon occurs when such links, which connect two things that were thought to be very distant, are revealed: the entirety of the network becomes a bit tighter. The phenomenon known as the "small-world effect" begins to take charge. Once a link is revealed where there was once thought to be none, the distance between the neighbors of the first and the neighbors of the latter reduces from infinity to a finite amount.
Once the Jesus-Wine relation is made ready-to-hand and used and subverted to reveal a connection between Jesus and California Chardonnay, then those things "close" to both also reveal a path of connection. For example, we could go forward and draw a path between Mother Mary and the Napa Valley. Of course, once the company has aired its commercial of Jesus sipping California Chardonnay, it is no longer a distant jump to present a Nativity scene in the middle of a California vineyard in its next commercial. This advertisement, in turn, makes available its own adjacent signs to form yet another sign-signified relation, and the process continues on and on until there is little to no semiotic distance between any two things which exist in our world. Thus, this hermeneutic also has the characteristic of closeness. The semiology, in turn, has the characteristic of a de Manian form of material reduction, prioritizing the "nature"-given material vision, or "experience" in Michaels's sense, over the societally constructed "reading". This is why signs are not assigned to anything; that would be to "name" the world, in the way it was done by Enlightenment thinkers. Instead, the focus is shifted to finding the names—or rather, finding the signs—through the use of technology. We can refer back to the criminality prediction paper for a clear example of this episteme. The methodology, in particular, is a very clear indication of the semiology. By relying on a convolutional neural network, the researchers have already made the unspoken decision that a material vision is to be at work. Indeed, the neural network which parses images is something of the paradigm of material vision. It understands nothing, and therefore it rejects no possibility at the outset. As the number of epochs increases, it begins to weigh the likelihoods and to see different patterns in the image, using nothing but what is in front of it.
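The tightening described here can be illustrated with a toy sign network. The node names below are simply the essay's examples, and the code is an illustration of the small-world intuition rather than a model of any real data: revealing a single link collapses an infinite semiotic distance to a finite one.

```python
from collections import deque

# A toy sign network. Before the advertisement, the "sacred" cluster and the
# "wine" cluster are disconnected: infinite semiotic distance between them.
edges = {
    ("Jesus", "Bible"), ("Jesus", "Mother Mary"),
    ("Chardonnay", "California"), ("California", "Napa Valley"),
}

def distance(edges, a, b):
    """Breadth-first search: hops between two signs, or None if unreachable."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, set()).add(v)
        graph.setdefault(v, set()).add(u)
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        if node == b:
            return d
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None  # no path: "infinite" distance

print(distance(edges, "Mother Mary", "Napa Valley"))  # None: no path yet

# The commercial reveals exactly one new sign-signifier link...
edges.add(("Jesus", "Chardonnay"))

# ...and the neighbors of each cluster are suddenly a finite distance apart.
print(distance(edges, "Mother Mary", "Napa Valley"))  # 4
```

One revealed edge is enough: Mother Mary now reaches the Napa Valley through Jesus, the Chardonnay, and California, which is the "path of connection" the paragraph describes.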
De Man himself described material vision thus: "the eye, left to itself, entirely ignores understanding." What is a neural network if it does not aim to be the most skillful form of exactly this? We do not look to neural networks to tell us what we already can. They are not meant to imitate us, but rather to be better than us, faster than us. The neural network—often also called "artificial intelligence"—is, by its construction, meant to resemble the un-socialized human being, but with eyes so keen that they are sensitive to parameters of near infinite precision. And while the methodology is revealing of the semiology, the subject matter itself is revealing of the hermeneutic. In taking on the endeavor to search for signs of criminality within a photograph of a human face, one is attempting to bridge the gap between "nature" and society. It is, without a doubt, an attempt to reveal, with the aid of technology, a link in our reality. Again, the word "reveal" is used because the authors make no claim to create or assign any sort of sign-signifier relation, but rather suggest that, through the use of the neural network—a tool which has no power to assign—and the "data" given to them by nature (human faces), they are merely searching for some relation which may already exist in nature. Of course, this paper received a great deal of negativity because there already exists a relation between ID photo and criminality when either is being "read"—i.e. they are already mutually relevant factors in understanding the meaning of the other—and in the U.S., this is skin color. But while this paper may have received such criticism, consider all those that have not. Every single day, dozens of papers arrive on arXiv making some similar claim, but successfully manage to do so for things that are not already considered when being read. Consider every study that links obesity, or cancer, or heart disease to something new.
All of these work together to draw closer the sign-signifier relations in the world while simultaneously reducing the amount of information in each sign. All of these work to "extract" patterns from nature-given "data."
There exists here an uncanny resemblance to the episteme which Foucault described as belonging to the era of Classical thought. This episteme, summed up, had a hermeneutics of "resemblance" and a semiology of "signature." That is to say, symbols were decidedly signs of that which they resembled. The term "resemblance" here is analyzed by Foucault into distinct forms: convenientia, aemulatio, analogy, sympathy, and antipathy. Each of these takes its own form and holds relations with the others, but the details of such are not relevant to this essay. The only important consideration for us is that, in this hermeneutic, the sympathy-antipathy pair gives rise to all forms of resemblance. The signs of these sympathetic and antipathetic relations were sought out in "nature". This is what the term "signature" refers to. These sign-signified relations were meant to be understood by looking as keenly as possible at the world in order to find these signs. Nothing could be overlooked, because it was believed that these signs were embedded within "nature" and within the world by the Almighty himself. This necessitated an experiencing of the world, as all things in the world had their sympathies and antipathies, and therefore all things were signs, and signified by all others. In an episteme of this form, "everything would be manifest and immediately knowable if the hermeneutics of resemblance and the semiology of signatures coincided without the slightest parallax." Nature, in this episteme, is what one must continuously challenge and question, as it is what exists in the misalignment of the hermeneutics and the semiology. This challenging of nature, experiencing of the world, and understanding of the world as infinitely interconnected signs indeed seems nearly identical to what we have described as the episteme revealed by an analysis of modern technology.
In The Order of Things, Foucault says, "In the sixteenth century, one asked oneself how it was possible to know that a sign did in fact designate what it signified; from the seventeenth century, one began to ask how a sign could be linked to what it signified." This distinction, although subtle, is fundamental. The latter is a question of creating connections, while the former is one of revealing connections. In the study of networks, this implies a fundamentally different geometry, but we will leave that for further study for the moment. The importance of the difference between revealing and creating, for us, is the primacy of the subject. Creation places the subject and her Self as the reader who interprets the signs and symbols of the world as she sees them. The world is, to her, already decided to hold meaning or identity—as Dickinson's poems do, and as the white painting does. Her task is merely to create the necessary and plausible links which justify this interpretation. Revealing, however, is an endless task in which the subject is placed secondary to nature. In this, the subject becomes an observer whose Self is not her own, but rather a mere reflection of what is outside of it, connected by resemblance. This is made obvious by the primacy of DNA, face structure, race—all things to be observed—in the identification of a person. There is no room here for character, as character is something to be read. Character cannot be extracted and analyzed as a sign, for it is not something which can be seen by the material vision of a neural network. Skin tone, as a color code, can.
How can we make the claim, however, that technology is pushing us toward the Middle Ages? We engage in neither alchemy nor divination. In fact, belief in any form of the Almighty—which held together this archaic episteme—has been shown to be rapidly declining in the Western world. In the sixteenth century, people thought that walnuts were good for the brain because they looked like a brain, and so God must have put them on the earth as a cure for intracranial diseases. Now, I can Google it and find out that it is because walnuts have a high concentration of DHA. From research, we know that DHA is connected to cell growth and neuronal signaling and that, with some reasonable probability, a lack of it is linked to cognitive decline and psychiatric disorders. This type of research, however, relies on its own Almighty being—the rules of Probability. Consider the ontology of a Probability: it is not something which exists ontically, as it cannot be seen or interacted with without the Self which understands it. Indeed, Probability exists in the realm of belief. When we say that a coin is fifty percent likely to land heads, we assert something inherent to the coin as it relates to the natural world. When we say that the DHA within a walnut is most likely correlated to brain development, we are making a claim about something within the walnut which cannot be seen, as it relates to ourselves. When we considered the notion of information, it was shown that it is composed of nothing but Probabilities, arranged in such a way (a negative sum of logarithms, to be precise) that they reveal something about the most fundamental connection between the sign and the signifier. In the language of Foucault, the Probability is the signature. It is that which signifies some innate order which is to be understood.
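The parenthetical above—"a negative sum of logarithms"—is, on one plausible reading, Shannon's measure of information. A minimal sketch of that reading (the function name and example distributions here are my own, purely illustrative):

```python
import math

def shannon_entropy(probs):
    """Information as a negative sum of logarithms:
    H = -sum(p * log2(p)) over each outcome's probability p."""
    return 0.0 - sum(p * math.log2(p) for p in probs if p > 0)

# A coin fifty percent likely to land heads carries one full bit of
# uncertainty; a coin certain to land heads carries none at all.
print(shannon_entropy([0.5, 0.5]))  # 1.0
print(shannon_entropy([1.0, 0.0]))  # 0.0
```

The second case anticipates the point below: where a probability collapses into certainty, the measure of what remains to be revealed is exactly zero.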
Much like in the Classical episteme, if the Postman's hermeneutics—the hermeneutics of Probability—lined up exactly with the semiology of experience, all things would be knowable. Said otherwise, if probability acted exactly as we predicted it to—if every probability, such as the likelihood that a lack of DHA causes cognitive decline, or the probability that someone with an upturned nose or dark skin will commit a crime, were one hundred percent certain—then all would be knowable. Such a notion is so obvious to us that it is almost self-explanatory. Nature, in the modern episteme, is that within which Probability exists. Nature is that which misaligns the signs from the state and makes fuzzy the information. If we were to know all Probabilities, then we would know the exact shape and exact construction of the world which we are experiencing.
Technology, by challenging nature, is challenging Probability. Heidegger, in his essay, asks, "Who accomplishes the challenging setting-upon through which what we call the real is revealed as standing-reserve? Obviously, man." Technology is extracting probabilities and information, to then be set aside, ready-to-hand for advertising companies, actuaries, criminologists, geneticists, and the like. By extracting and using Probability, these people act as those in the Middle Ages who claimed to have heard the word of God. What argument can be made against this? If enough correlation is shown between skin color and crime, then how could we not understand skin color as a sign of criminality once we superimpose a semiology of "experiencing" on top of it? In this episteme, it is not possible that the society we live in is creating such relations, for we are not authors. We are only observers, prevented by the existence of the Self from being capable of material vision, and thus left to rely on machines which extract probabilities to tell us of the links in nature. Thus, technology is just that: it is what extracts and translates to us that which we cannot observe ourselves. But it is not the technology which decides what to translate—only we can do that. We must ask it: what is the relation between walnuts and the human mind? What is the connection between skin color and crime, or between nose shape and crime? What part of my DNA is similar to that of people born in India? As it has always been, technology continues to extend our Being. But perhaps at the cost of abandoning the Self entirely.