Kriegspiel -- An Update

I already posted an update on Carnivore, the network traffic visualization tool first launched in 2001 and currently usable as a library inside Processing. Carnivore is up and running -- so happy coding!


Kriegspiel is a different beast entirely. It's a digital reinterpretation of Guy Debord's 1977 tabletop game, and I was happy with the original beta. But the code gradually became obsolete and the game is offline at the moment. In retrospect I can see some problems in how it was conceived and designed. The application was just too bloated. It relied on a 3D graphics engine that was overpowered and unnecessary for what was needed. Online play relied on a large networking library (itself now discontinued), which was unwieldy. Trying to keep it all up and running on three platforms, plus hosting a server, was hard to pull off. Truth be told, Java was probably not the right choice from the beginning. There have been some notable games written in Java. But the vast majority of game coding is done in languages like C++ that compile to native instructions.

Carnivore -- An Update

tl;dr: Carnivore is up and running -- so happy coding!

I've been getting a lot of email recently about Carnivore and Kriegspiel, the two software projects that I've spent the most time on over the years. As anyone who's released software knows, it's unclear how long code should be maintained. A few years? Five? Ten? I've gone back and forth about whether to maintain these projects. The Kriegspiel beta went offline a few years ago and is due for a major overhaul. Carnivore was first launched over fifteen years ago -- that's some old code! -- and has changed dramatically as networks themselves have changed. So here's the latest update on Carnivore. I'll post a separate update for Kriegspiel.


The AT&T Long Lines building

[Image: 33 Thomas Street]

Laura Poitras has a new short film called "Project X," which screened at the IFC theater in New York a few months ago before being released online. (No, not that Project X.) Using phrases and texts gleaned from documents released by Edward Snowden, the film deals with the affective experiences of working every day as an employee of a government spy agency: don't wear clothing that could reveal your identity, drive in an unmarked car, etc. Dark and dour, the Poitras short feels like a contemporary riff on the old conspiracy film, where architecture and infrastructure add to a sense of pervasive dread.

While it narrates the lives of spies, the real star of the film is not a person but a building, the AT&T Long Lines building at 33 Thomas Street in lower Manhattan. I wrote about this building previously, under the heading of "black box architecture." But after a recent report in the Intercept, followed by a piece in the New York Times, we now have a more concrete picture of this looming monolith.

As many had long suspected and as the Snowden revelations now confirm, the AT&T Long Lines building operates as one of the main NSA listening posts in the region. The spy agency apparently identified this key chokepoint for communications going in and out of New York -- hardlines going overseas but also satellite dishes on the roof -- and installed itself in the building, taking over a few floors from its landlord, AT&T. As the Intercept reports, "the Manhattan skyscraper appears to be a core location used for a controversial NSA surveillance program that has targeted the communications of the United Nations, the International Monetary Fund, the World Bank, and at least 38 countries, including close U.S. allies such as Germany, Japan, and France."

These documents and others show that commercial entities like AT&T have long colluded with government agencies on data collection when it comes to questions of national security. Data is value. And the NSA has proven itself adept at gaining access to telephonic and digital information. The same is true for the titans of industry, who for a decade or more have focused all their attention on data collection at all costs. Take Google or Amazon Web Services -- they're in the same business as the NSA. The main distinction is that the NSA can tap into data reserves forbidden to its Silicon Valley counterparts.

“We, and I personally, believe very strongly that more information is better, even if it’s wrong,” said Eric Schmidt, executive chairman of Google’s parent company Alphabet, Inc. And we should take him at his word. For companies like AT&T and Google, more is better. It's also wrong.


A Lossy Manifesto

I recently co-authored an article with Jason LaRiviere titled "Compression in Philosophy," published in a special issue of the journal boundary 2 devoted to the work of Bernard Stiegler. (Email me if you can't access the article due to paywall.) The piece ends with the following "Lossy Manifesto."

By way of conclusion let us return to Sterne and the debates surrounding compressive media. Such debates usually entail a number of claims: (1) media abstract, reduce, and encode a complex and heterogeneous world, and (2) once encoded, media files may be compressed and expanded using lossless algorithms that preserve the integrity of data, or alternately (3) media files may be compressed using lossy algorithms that necessarily delete data. Given the above discussion we are in a better position to amplify and evaluate these various positions.
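The distinction between claims (2) and (3) can be made concrete in a few lines of Python (a minimal illustration of my own, not from the article): zlib round-trips data exactly, while a crude quantizer, standing in for lossy encoding, deletes information for good.

```python
import zlib

data = b"a complex and heterogeneous world " * 100

# Lossless: compress and expand; every byte survives the round trip.
packed = zlib.compress(data)
assert zlib.decompress(packed) == data

# Lossy: round each byte down to a multiple of 16, then keep the result.
# The original values cannot be recovered -- data has been deleted.
def quantize(stream, step=16):
    return bytes((b // step) * step for b in stream)

lossy = quantize(data)
assert lossy != data              # information was discarded
assert quantize(lossy) == lossy   # re-encoding deletes nothing further
```

The two assertions at the end state the asymmetry plainly: the lossy pass is irreversible, yet idempotent once the data has been thinned.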

The Last Instance

“Our uchromia: to learn to think from the point of view of Black as what determines color in the last instance rather than what limits it.” Our uchromia, our non-chromia or non-color — what does Laruelle mean by this?


As Laruelle would say, color always has a position. Color always has a stance. The color palette or the color spectrum provides a complex field of difference and alternation. The primary colors reside in their determining positions, while other colors complement each other as contrasts. Hence the color posture: purple complementing yellow, red complementing green, the primary colors’ posture vis-à-vis the palette, and ultimately the posture of color itself governing the continuum of light and dark, as colors take turns emerging into a luminous and supersaturated visibility, or receding into a sunless gloom.

My Assignments

It's good to have a path to follow, some structure to help organize one's explorations. So a few years ago I gave myself some assignments. I've tried to stick to them as best I can, and they guide most of my current work.

My first assignment concerns politics -- avoid a representational politics. I interpret this assignment in a broad sense. It includes not only simple representational models but also models that pertain to metaphysics. In other words, avoid representationalism in both politics and the structural domains that intersect it.

My second assignment concerns economics -- avoid a post-fordist economics. Again I interpret this broadly. It includes things like financialization and informatization, affect and sensation (as sites for proletarianization), reversibility and exchange, creative play, rhizomatics and distribution (a.k.a. “network fetishism”), the industrialization of difference, and so on.

Fulfilling the first assignment is easier than fulfilling the second one. Anti-representationalism is not rare in theory. I naturally gravitate to thinkers who push in that direction. Yet avoiding a post-fordist economics is much harder, since the principles of post-fordism are so deeply integrated into contemporary life, so much so that both the forces of order and their putative enemies often evoke such principles. It's almost impossible to identify a contemporary mode of thought that doesn't, in some way, endorse distribution, or horizontality, or affect, or play, or the new, or some other aspect of the contemporary mode of production.

This is why I'm interested in the tradition of radical immanence (e.g., Michel Henry and François Laruelle -- and perhaps certain readings of Deleuze). Radical immanence fulfills both assignments. It doesn't require a representational politics, if you do it right. And it doesn't propagate a post-fordist economics.

But you say: how doctrinaire! You're just obsessed with purity! Perhaps. But thought is a technology like any other and it's useful to build the right kinds of tech -- the right assignments -- that keep you moving in a good direction.

To What Question Is The Image an Answer?


The following short text appears in a new volume titled For Machine Use Only, edited by Mohammad Salemy and published by &&& / The New Centre.

Computers are very good at putting things to work. So it's no surprise that images have been activated by the computer, the erstwhile passivity of all renderings now made active and useful. Still, what's interesting is not so much that an image can furnish useful answers, but that an image can be a question, and a good question at that. Shall we not wonder what sort of question is asked by the image? Or the reverse, to what question is the image an answer?

The Nonhuman: Apophatic or Cataphatic?

"Join us!" --the trees in Evil Dead

A major theme to emerge from my current seminar on the nonhuman is the distinction between apophatic nonhumanism and cataphatic nonhumanism. Since these terms don't seem to come up that often in contemporary discussions I figured it would be useful to lay it out here.

The nonhuman is an especially active topic today, as it overlaps with so many important fields of inquiry, from climate change and animal studies, to media archaeology and the turn in media studies toward infrastructure. Of course objects and things have long been at the center of conversations in critical theory, from Marx's inspection of the commodity form, to psychoanalytic theories of the object. I'll also point out the interesting work being done at the Sensory Ethnography Lab, including the astounding 2012 film Leviathan, which may be the best exploration of nonhuman perception that I've ever seen. Speculative realism too has its interest in the nonhuman, that being the crux of the critique of correlationism, or at least as I interpret it. And certainly the largest discourse on the nonhuman comes from theories of the subject, broadly conceived, including examinations of the not-quite-human (the proletarian, the child), the sub-human (the colonized, the leper, the schizophrenic), the post-human (cyborgs, queerness, Afro-futurism, Prometheanism), and the generic person (the common, the whatever).


Through all this, one truth emerges: the question of the nonhuman is an exceptionally difficult one. The question often bumps up against the very limits of philosophical method. What does it mean to be? What does it mean to know? Often such questions are prefigured precisely in human terms, making the question of the nonhuman practically incompatible with intellectual inquiry as we understand it. And at the same time the category “human” is often predefined, either overtly or covertly, in ways that bar admittance to certain kinds of subjects, putting the very integrity of the term “nonhuman” in doubt. At best we're wildly speculative in our conclusions about nonhuman entities like animals, plants, machines, or physical matter. At worst we sadistically ascribe our own special qualities to them, through a kind of boundless colonial expansion.

In short, to query after the nonhuman is to confront the symbolic apparatus--the language itself--that defines the human, and keeps the rest silent. “I have not tried to write the history of that language,” wrote Foucault in his first book, “but rather the archaeology of that silence.”


In the wake of the Trump election, there has been a lot of hand-wringing and self-flagellation in tech communities about the so-called “filter bubble” created by social media. Was Trump elected by Facebook? Is this “our” Twitter revolution -- only in the wrong direction? I wrote about this previously, invoking a strange coinage, versity, as an inversion and mutation of diversity:

I think there is work to be done on collaborative filtering in the context of ideology and identity. Surely this is a type of group interpellation. The technology of collaborative filtering, also called suggestive filtering and included in the growing field of intelligent agents, allows one to predict characteristics (particularly our so-called desires) based on survey data. Identity in this context is formulated on certain hegemonic (negotiated, but never actively negotiated) patterns. In this massive algorithmic collaboration the user is always suggested to be like someone else, who, in order for this to work, is already like the user. As Matt Silvia of Firefly describes: "a user's ratings are compared to a database full of other member's ratings. A search is done for the users that rated selections the same way as this user, and then the filter will use the other ratings of this group to build a profile of that person's tastes." This type of suggestive identification, requiring a critical mass of identity data, crosses vast distances of information to versify (to make similar) objects.
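The mechanism Silvia describes -- find the users who rated things the same way, then borrow their other ratings -- can be sketched in a few lines of Python. The ratings data below is invented for illustration; a real system like Firefly's would of course operate over far larger profiles and a subtler similarity metric.

```python
# User-based collaborative filtering, in miniature. Each profile maps
# item -> rating; we predict an unseen item for one user by averaging
# the ratings of the users who most closely mirror them.

ratings = {  # hypothetical data for illustration
    "ann":  {"jazz": 5, "punk": 1, "ambient": 4},
    "bob":  {"jazz": 5, "punk": 2, "ambient": 5, "noise": 4},
    "carl": {"jazz": 1, "punk": 5, "ambient": 2, "noise": 1},
}

def similarity(a, b):
    """Count of shared items the two users rated identically (a crude metric)."""
    shared = set(a) & set(b)
    return sum(1 for item in shared if a[item] == b[item])

def predict(user, item):
    """Weight neighbors' ratings of `item` by their similarity to `user`."""
    me = ratings[user]
    scored = [
        (similarity(me, profile), profile[item])
        for name, profile in ratings.items()
        if name != user and item in profile
    ]
    weight = sum(sim for sim, _ in scored)
    if not weight:
        return None
    return sum(sim * r for sim, r in scored) / weight

print(predict("ann", "noise"))  # → 4.0: ann resembles bob, so she inherits his taste
```

The point of the sketch is the interpellation described above: the prediction only exists because ann is "already like" bob, and the algorithm's entire output is a redistribution of that resemblance.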

Firefly was one of the very first companies to deploy collaborative filtering technologies. They were bought by Microsoft (and more or less shelved as far as I can remember) -- and the notion of “web 2.0” wouldn't become a viable category until a few years later. But even in these early days it was clear that algorithms for filtering large databases of users were fundamentally oriented around logics of grouping, clustering, similarity, identity, unity-through-diversity... or, we might say, “versity.” They're called clustering algorithms, after all.