A prediction... we're entering a new phase of normal science. I expect this to last a decade or more.
To understand "normal science" we return to Thomas Kuhn's influential 1962 book The Structure of Scientific Revolutions. In that text--devoted to the history of science although we might extrapolate outward to other domains as well--Kuhn differentiated between historical periods of relative equilibrium, punctuated periodically by "paradigm shifts," during which entire schools and traditions are overturned and reinvented. During the periods of "normal" equilibrium, scientists are mostly intent on probing the local, internal inconsistencies in their theories, toward the ultimate goal of grounding their work in a series of more consistent claims. Yet, at certain points, there emerge observations (or theories) that can't be subsumed within the existing paradigm of knowledge. These new claims are first judged to be inexplicable, perhaps even impossible, yet slowly gain adherents resulting in the realization of a "scientific revolution" within knowledge. It might grow gradually, then seem to break all of a sudden. Other thinkers have proffered similar lenses through which to view intellectual history and the mechanics of how history changes. The French scene around Gaston Bachelard and Louis Althusser was keen to talk about "epistemological breaks." More recently Alain Badiou has theorized conditions of consistency or stasis, which he terms "natural" or "normal," periodically punctuated by transformative "events."
In any case, my prediction is that, today, we are entering--or more likely have already entered--a new phase of normal science. There's anecdotal evidence I could marshal. Recent events in theory have largely consisted of returns and rediscoveries: the return of Marx (mirabile dictu!) with help from folks like Michael Heinrich, or the rediscovery of Hegel (not so mirabile). Literary studies started returning to formalism a few years ago. Political theory has recently reverted to classical models of power and government that are decades if not centuries old. And other fields have started to look inward in an attempt to coast along for a bit longer. I'd venture that the only fields these days with any real vim are Black studies and queer theory.
But let's skip that thread for a moment. Stronger evidence for the new phase of normal science comes from tech, specifically from AI. There has been much excitement, and no small amount of hand-wringing, over the new sorts of production tools rolled out in recent years. The latest iteration began with image tools like DALL-E and Stable Diffusion, followed more recently by language models such as ChatGPT. Some tech evangelists are trumpeting a new AI revolution; could this be one of those paradigm shifts that Kuhn talked about? I suspect it's the reverse, in fact. Today's AI doesn't represent a new paradigm so much as a general condition for normal science.
To explain this prediction, let's project outward and assume categorical saturation. Let's assume that every undergraduate essay is written by ChatGPT, that every programmer uses Copilot to auto-generate code, that every designer uses Stable Diffusion for storyboarding and art direction. What would we find if we analyzed such a scenario, the full saturation of AI across all categories?
The first thing we can observe about the categorical saturation of AI is that it is fundamentally centripetal. Functionally it is the equivalent of positive feedback followed by normalization: the signal compounds itself, while always being re-registered to remain within a certain gamut. Today's AI is a bit like Alvin Lucier's "I Am Sitting in a Room"--or that time when Cory Arcangel fed "The Number of the Beast" through a compression algorithm 666 times--where the signal encounters itself over and over again, only to end the cycle in a mush of feedback. Lucier's piece was analog, Arcangel's digital, yet both reveal the resonant frequency of the medium, the "room hum" in Lucier's case, while retaining less and less of the original signal. I won't deny the aesthetic pleasures of entropy; these things can sound good, they can look good, no question. Yet today's AI is similarly entropic. Or to be precise, it's entropic because it's extractive: value is taken out of the system, while less and less is replenished.
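To make that mechanism concrete, here is a minimal sketch of positive feedback followed by normalization, assuming the feedback channel can be modeled as a fixed linear filter (the filter, the sizes, and the iteration count are all invented for illustration). Iterate the loop and the output converges on the filter's dominant mode--the "room hum"--while the particulars of the original signal wash out.

```python
import numpy as np

# A toy model of "positive feedback followed by normalization."
# The room (or the model) is a fixed symmetric filter A; the signal
# is fed through it again and again, re-normalized each pass to stay
# within a fixed gamut. Mathematically this is just power iteration:
# whatever was particular about the original signal decays, and only
# the filter's dominant mode -- the "room hum" -- survives.

rng = np.random.default_rng(0)
B = rng.normal(size=(64, 64))
A = (B + B.T) / 2                        # the room: fixed, symmetric
signal = rng.normal(size=64)             # the original signal: rich, particular

for _ in range(500):                     # Lucier's re-recordings, iterated
    signal = A @ signal                  # feedback: the signal meets itself
    signal = signal / np.linalg.norm(signal)  # normalization: back into gamut

# `signal` now points (up to sign) along A's dominant eigenvector,
# i.e., the resonant frequency of the room; the original detail is gone.
eigvals, eigvecs = np.linalg.eigh(A)
hum = eigvecs[:, np.argmax(np.abs(eigvals))]
print("overlap with room hum:", abs(signal @ hum))   # ~1.0
```

The point of the sketch is that feedback plus normalization is convergent by construction: the loop doesn't need the original signal to keep running; it only needs the room.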
The second observation is what we might call fractal failure. When AI fails, it fails at all levels of scale. And, in fact, its successes, if we can call them that, are not real or true in any proper sense, but rather are probabilistic scatter charts of truth conditions. I'm not even sure what it would mean to call an AI signal "true." So labelling it "false" or "a failure" doesn't quite make sense either. David Golumbia nicely captured this conundrum in a recent post on the inherent nihilism of AI. Lord knows I love nihilism as much as the next person, although the new wave of corporate nihilism is something else altogether. In this context, Golumbia uses nihilism to refer to meaninglessness, a sense of having given up on the very premises and promises of the human itself. As he puts it, "the point of these [AI] projects is to produce nihilism and despair about what humans do and can do." He's right.
So the failure is fractal, which leads directly to a third observation: the failure of AI is also absolute. I said above that AI doesn't know when it's wrong, but more than that: when it is wrong, there's nothing in it that could register the mistake. There's nothing in the meta structure of this paradigm of knowledge that can even confirm or deny the truth of a statement. It's a probabilistic distribution. There's nothing you can even "tell" the AI to amend or correct this problem, apart from a human editor actively munging the input data and retraining the model. If Stable Diffusion draws a beautiful portrait of a person with eight fingers on each hand, you can't "tell" the neural net that hands typically have five fingers. Today's AI is, in this sense, hyper-ideological--the word makes no sense in this framework, but there you are--in that it focuses intently on the manifest materials, while disavowing the latent conditions that make these materials possible. (This is one reason why empiricism is a terrible way to think about politics; empiricism structurally can't grasp the political... and it fails at the political not by accident but by design.)
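To illustrate the point in miniature, here's a toy sketch of generation as sampling from a learned distribution. The vocabulary, the probability table, and the next_token helper are all invented for illustration, not any real system's API. Notice that truth never appears as a variable anywhere in the program; the only lever is upstream, in the data the probabilities were fit to.

```python
import numpy as np

# A toy sketch of generation as sampling from a learned distribution.
# (Invented example -- not any real model's API.) Nowhere below is
# there a field or flag marking a statement true or false.

rng = np.random.default_rng(1)
vocab = ["five", "eight", "fingers"]

# Hypothetical next-token probabilities, as if estimated from a corpus.
# If the corpus skews toward "eight", the model says "eight"; there is
# no truth predicate in the table, only probability mass.
probs = {
    ("hands", "have"): [0.4, 0.6, 0.0],   # "five" vs. "eight"
    ("have", "five"):  [0.0, 0.0, 1.0],
    ("have", "eight"): [0.0, 0.0, 1.0],
}

def next_token(context):
    """Sample the next token purely by probability, nothing else."""
    return rng.choice(vocab, p=probs[context])

w = next_token(("hands", "have"))
print("hands have", w, next_token(("have", w)))

# The only "correction" available is upstream: re-weight the corpus
# and re-fit `probs` -- i.e., retrain -- not assert a fact to the model.
```

If the corpus over-represents eight-fingered hands, so will the output; the only fix is to munge the data and re-fit, exactly as described above.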
By the end of full saturation, we'll be left with the room hum and little else. Is that enough? Will today's technology--based fundamentally on empirical and aesthetic axioms--furnish enough aesthetic pleasures to make up for its many other losses? If the room hum can't nourish the soul, or at least not nourish it sufficiently, this hum, this resonant frequency, will at the very least provide a precise picture of the infrastructure itself. Just as the resonant frequency of a room will tell us a lot about the acoustical shape of the room, AI's hum will enunciate, like Kant in 1781, the conditions of possibility for any form of knowledge whatsoever, and hence will perversely outline its own overcoming in precise detail, albeit a detail it is itself unable to recognize.
And of course blindness to the conditions is a textbook definition of normal science. So, when the industrial infrastructure is based on the normalization of empirical inputs, when it allies itself with extraction rather than production, and when it abandons the very mechanism for judging the truth or meaning of any given claim--for all of these reasons I predict we're at the start of a new phase of normal science.