A recent article in the Chronicle of Higher Education by Phil Christman reflects a view of AI that I frequently encounter when working with humanities faculty at the University of Florida. Christman makes an admirable defense of the traditional mission of the humanities as a search for truth. He rightly points out that technology’s bloviators, as he calls them, continue to overhype the impact of a technology that has so far failed to “blow him away.” Having lived through one AI winter, I can appreciate this perspective. In the late 1980s and early 1990s, the technology gurus were hard at work, painting a picture of a world run mainly by expert systems, the AI technology of the day. It never happened. Instead, large corporations discovered that expert systems cost too much and delivered too little. The recent American tech sell-off in response to the latest model release from DeepSeek, a small Chinese company that created an LLM as good as GPT-4o at a fraction of the cost, highlights the perils of tech prognostication. The Americans may not have this one in the bag after all.
Christman concludes his article with a rousing commitment to “labor militancy,” since the folks from Deloitte, McKinsey, or Boston Consulting Group will never respond to a well-argued defense of the humanities. All they can see is money. All they can do is ruin our academic paradise. This is where Christman and I part ways, though his line of reasoning is familiar to me. For Christman, AI is just the latest in a long line of threats to which the humanities must react. In other words, the best way to deal with AI is to assume a reactive stance, and, when necessary, call in the local union, if you have one. In my opinion, this is a strategic mistake of the first order. I first understood why while reading Carlo D’Este’s excellent biography of Patton. As D’Este notes, one of Patton’s strategic principles was to keep the enemy constantly reacting to you, never the other way around. The principle was beautifully illustrated in Normandy, where Patton concentrated his forces, punched through the hedgerows, and then moved like lightning. As the Third Army raced across northern France, all the Germans could do was react and fall back.
A reactive stance toward AI puts us in the position of the Germans facing Patton. The momentum is with the other side – the technology pundits, the AI futurists, and so on. Hunkering down and attempting to create fortified, unassailable positions is the last thing one should do in such a situation. What the humanities need right now is a strategy of attack, one where we act and leave others to react to our vision. But what might that strategy look like? What possible touchpoints exist between AI and the humanities? Obviously, ethical AI is one place where the humanities can lead, and many are already doing so. The problem is that so many voices are already in this space. In a crowded field, it’s much harder to stand out, to make a creative contribution that others view as substantive. Or, as Thomas Kuhn, author of The Structure of Scientific Revolutions, might put it, to make a contribution that leads to a paradigm shift. Paradigm-level contributions change entire disciplines. Consider the effect of Einstein’s theory of relativity on physics: the field was never the same after he published his two groundbreaking relativity articles in 1905. Breakthroughs of this kind are similar to the one Patton accomplished in France. Once freed from the constraints of an existing reference frame, movement happens quickly as new thinkers redefine the scholarly landscape. That is precisely the situation we face right now with AI. Is there a place besides ethics where the humanities can achieve a hedgerow breakthrough with AI? I believe there is.
The fundamental idea I’m working on right now is this: data and information lie at the heart of EVERY civilization, without exception. No data. No civilization. So why not introduce data, in all its various formats, into our classical education curriculum? The same holds for AI. No data. No AI. In both cases, data is foundational, whether it supports an AI model or a civilization. What this means is that humanists and computational scientists both recognize the value of the written document. In computer science, they say, “Garbage in, garbage out.” If the documents in your dataset are junk, don’t be surprised when your model spews nonsense. The humanities’ interest in and focus on archives therefore align perfectly with AI. In fact, knowledge of how humans have historically managed and made sense of data is a prerequisite for success in an AI-saturated world. The data management problems of the 16th-century Spanish empire, for instance, are worth studying because similar problems exist today. And the same could be said of any of the great human civilizations. The added benefit of civilizational data study is that it can serve as a wellspring of new and innovative ideas. In other words, the data innovations of previous centuries may hold valuable insights for us today.
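To make the “garbage in, garbage out” point concrete, here is a minimal sketch in Python of the kind of quality filtering that precedes any serious dataset. The documents, threshold, and scoring heuristic are all hypothetical, invented purely for illustration:

```python
# Illustrative sketch only: a naive quality filter for a document corpus.
# All names, thresholds, and the scoring heuristic are hypothetical,
# not taken from any real pipeline.

def quality_score(text: str) -> float:
    """Crude heuristic: penalize empty, very short, or mostly non-alphabetic text."""
    if not text.strip():
        return 0.0
    # Fraction of characters that are letters or whitespace.
    alpha_ratio = sum(c.isalpha() or c.isspace() for c in text) / len(text)
    # Reward documents with at least ~100 words of substance, capped at 1.0.
    length_bonus = min(len(text.split()) / 100, 1.0)
    return alpha_ratio * length_bonus

def build_corpus(documents: list[str], threshold: float = 0.5) -> list[str]:
    """Keep only documents above the quality threshold: garbage in, garbage out."""
    return [doc for doc in documents if quality_score(doc) >= threshold]

if __name__ == "__main__":
    raw = [
        "A carefully transcribed Spanish shipping manifest from Seville. " * 30,
        "@@##$$ corrupted OCR output 0101 @@##",
        "",
    ]
    corpus = build_corpus(raw)
    print(f"Kept {len(corpus)} of {len(raw)} documents")  # Kept 1 of 3 documents
```

Real pipelines use far more sophisticated filters, but the principle is the same: the model is only as good as the documents behind it, which is exactly the kind of curatorial problem archivists have wrestled with for centuries.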
It appears that we may have found an opening in the hedgerow, a place where the humanities can break through and steer AI in a new direction. That place is called civilizational data and information systems. Unlike the saturated field of AI ethics, this space has almost no competitors. It’s wide open, just as northern France was once Patton found his opening. Are the humanities ready to roll?