While I was talking with a faculty colleague about the limitations of AI, he asked a rather profound question: “How can we model institutions in AI?” He noted that historians typically spend much of their time analyzing written documents to reconstruct a historical event or period. But can two-dimensional pages capture the actual operation of an institution as it existed at a point in time? When we say institution, what we mean is a department or organization created by a state to maintain its control over the social environment. The various cabinet-level departments in the United States, for instance, are institutions. They interact with each other as components of a much larger machine called the government.
All civilizations are built this way. The machinery of state is what keeps things running, ensures social stability, and sets overall direction and tone. My colleague continued, “Consider, for example, Hitler’s Third Reich. I can read hundreds of documents produced during that period. What kept Germany running, however, was a complex system of institutions. How did that system work? And to what extent did it reflect what was written in those documents?”
The problem, as my friend points out, is that a one-dimensional string of letters grouped into words fails to capture the actual operations of a high-dimensional institution or system. This is an unfortunate byproduct of what’s called dimensionality reduction. In this case, what’s lost is tacit institutional knowledge and the practical know-how of the people who make these systems work.
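As an aside, here is a small, purely illustrative Python sketch of what dimensionality reduction costs in the literal, mathematical sense. The toy data and the numpy-based setup are my own assumptions for illustration, not anything drawn from the documents discussed here; the point is simply that compressing a high-dimensional object into a one-dimensional summary discards measurable information.

```python
# Illustrative sketch: project 3-D data onto its single most informative axis
# and report how much of the original variation survives the reduction.
import numpy as np

rng = np.random.default_rng(0)
# Toy "institution": 500 observations with three dimensions of varying detail.
points = rng.normal(size=(500, 3)) * np.array([5.0, 2.0, 0.5])

# The best 1-D summary of the data is its leading principal axis (via SVD).
centered = points - points.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
explained = singular_values[0] ** 2 / np.sum(singular_values ** 2)

print(f"Variance kept by a 1-D summary: {explained:.0%}")
print(f"Variance lost in the reduction: {1 - explained:.0%}")
```

Even in this best case, a sizable share of the original structure simply disappears, which is the essence of the worry about flattening institutions into text.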
Tacit knowledge, as the philosopher Michael Polanyi pointed out, is often more important than what has been written down. Unions frequently exploit this truth by staging work-to-rule actions. In this type of labor action, workers stay on the job but follow safety regulations and other rules strictly and literally. No improvisation is allowed. They stop following the unwritten rules that make the actual system work and do everything by the book. When that happens, productivity plummets and operations slow to a crawl. Humans, not written words, make organizations work.
The world of tacit understanding, of human relationships, and of just-good-enough work practices is difficult to express in words and written regulations. Yet words are all our large language models (LLMs) have. Hence the value of my colleague’s question: can we model institutional knowledge? We’ve not yet figured out how to capture the tacit know-how or the unwritten practices that underpin our institutions. What that means, in practical terms, is that AI still has a blind spot. How big is it? And does it warrant concern? We don’t know right now, but these questions are worth asking and pursuing.