It began with a list. When addressing "the birth of a new medium," Janet Murray responded with a list: digital environments, she argued, have four essential properties. They are procedural, participatory, spatial, and encyclopedic.
When tasked with defining "new media" a few years later, Lev Manovich answered in a similar way. "We may begin...by listing," he claimed, before issuing a stream of empirical references: "the Internet, Web sites, computer multimedia, computer games, CD-ROMs and DVD, virtual reality." Yet Manovich's primary litany was but prelude to another, the second list more important for him, a series of five "principles" or "general tendencies" for new media: numerical representation, modularity, automation, variability, and transcoding. And the fourth principle (variability) was itself so internally variable that it required its own sub-list enumerating no fewer than seven "particular cases of the variability principle."
I hope to write a book on Alain Badiou someday. In the meantime, I have a new article just published on Badiou's digital philosophy. The title of the essay is "Mathification" and it aims to wrangle the central issue in Badiou, indeed the cause of some controversy: Badiou's relation to mathematics. Email me for the PDF if you're caught behind the paywall.
The essay is part of a special issue on "Economies of Existence" edited by Emily Apter and Martin Crowley, and containing texts by Arjun Appadurai, Gabriel Rockhill, and Peter Szendy, among others.
Also of potential interest: "21 Paragraphs on Badiou," a series of guesses and prognostications written in anticipation of Badiou's (then still forthcoming) Being and Event 3: The Immanence of Truths.
Please join me on March 2 for an event to celebrate the work of Catherine Malabou, who is a visitor this term at NYU. As I understand it, Malabou will begin the proceedings, followed by a series of responses from Emily Apter, Emma Bianchi, Peter Szendy, and me.
I wrote before about the anti-computer. Let me continue some of those themes, using instead an adjacent label, the uncomputer.
In an initial sense, the uncomputer comes out of whatever is subordinated or excluded as a result of the standard model of the digital. The excluded term might be the flesh, or it might be affect. It might be intuition, or aesthetic experience. The excluded term might evoke a certain poetry, mysticism, or romanticism. Or it might simply be life, mundane and unexceptional. The uncomputer means all of these things, and more. The gist is that there exists a mode of being in which discrete symbols do not take hold, or at least do not hold sway. And in the absence of such rational symbols, modern digital computation becomes difficult or impossible. Sometimes this is called the realm of "life" or "experience." Sometimes it is called the "analog" realm--indeed analog computers are some of the oldest computers.
What the Digital Humanities feature is, rather, the digital processing and representation of data. The concept of the digital itself is just as little explored as the manual aspects of programming, which always include the question of how data is classified. "Digital," strangely enough, has come to mean "objective" as against "analog," "hermeneutic," or "interpretative." At the Herrenhausen Conference in December 2013, the end of theory- and hypothesis-based research was proclaimed, and Lev Manovich advised: "Do not start with research questions! Look at the data instead." This, I think, is a dangerous misconception. The success of CERN in discovering the Higgs boson can teach us something quite different. In a 2011 article in DIE ZEIT, the spokesperson of the detector team explained that of the 40 million data points delivered by the LHC, only 200 "interesting results" were kept for evaluation. That makes 0.005 per thousand! And when is a result interesting? When it fits a previously formulated model or hypothesis. On the strength of a hypothesis, physicists safely ignore 99.9995 per cent -- rounded, 100 per cent -- of their data. Data does not organize itself.
Lev Manovich's book The Language of New Media, published almost twenty years ago, did much to propel the incipient field of digital studies. His turn toward big data in recent years is more problematic. I would agree with Harlizius-Klück that Manovich's "theory < data" is a "dangerous misconception." It's also just wrong: data are always the result of theory; there are no data that are not already the result of a hypothesis, which is to say a kind of active mental speculation.
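As an aside, the proportions in the quoted passage are easy to verify. The 200 and 40 million figures are Harlizius-Klück's; the rest is simple division, sketched here for anyone who wants to check the arithmetic:

```python
# A back-of-the-envelope check of the figures quoted above:
# 200 "interesting results" kept out of 40 million data points.
kept = 200
delivered = 40_000_000

per_mille_kept = kept * 1000 / delivered        # fraction kept, per thousand
percent_ignored = (1 - kept / delivered) * 100  # fraction discarded, per cent

print(per_mille_kept)              # 0.005 per thousand
print(round(percent_ignored, 4))   # 99.9995 per cent
```

Which confirms the point: 0.005 per thousand kept, 99.9995 per cent (rounded, 100 per cent) discarded on the strength of a hypothesis.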
I should have said this before, but the word itself is a monstrosity. Whosoever would attach a Greek prefix to a Latin root should be driven out of the city, egads. But let's overlook this superficial fact, at least for the time being. We saw before that metadata is an engineering problem. But metadata is a problem for other reasons too; metadata is a problem for thinking.
The notion that metadata might be a problem for society emerged onto the world stage with the Snowden revelations, although people had been worried about such issues for some time. The conversation then was about the collection of so-called metadata -- telephone call records, who called whom, and so on -- and the lawful or unlawful ends to which such data might be put by state and commercial actors.
Here I'm not so much interested in whether metadata is a problem for society, but rather how metadata relates to thought and whether metadata might be a problem for thinking.
In thinking about the problem of metadata, I was reminded of an old discussion -- addressed already in Protocol but worth repeating -- easily summed up by the expression "metadata = data."
I was recently interviewed by Denisse Vega de Santiago and George Jepson for Volume magazine on labor, play, capitalism, and other topics. Oh and I get to say some catty things about architecture and use profanity! Read the whole interview here.
I said before that no one has yet patented a Meaning Machine. While that's true in the abstract, I want to talk about the two most common ways to hack around the problem. The first is labor; the second is scale.
Meaning is the "hard problem" of computation, at least today. How do we know that data means something as opposed to something else? What's the difference between noise and signal? Is artificial intelligence (AI) able to discern meaning? And, perhaps more esoterically if not also pedantically, is meaning an analog technology or a digital technology? (For the final question, I take meaning formally as an analog technology, in that meaning entails a kind of Gestalt synthesis of complex arrangements of terms; yet practically speaking meaning is always the result of an interaction between the analog and the digital, and thus cannot be reduced to one or the other.)
Today there are two basic solutions to the "hard problem," the problem of meaning. The first solution is to outsource the problem to humans, effectively to make humans shoulder the burden of the Meaning Machine. To the extent that significance is measurable, it's because a human put it there and marked it as measurable. In other words, if you find meaning, it's the result of human labor.