There are some things AI will never do, no matter how advanced the technology. In college, I majored in History and French. As noted in my December 4, 2024 post, I don’t know why I loved those fields. I just did. Now, as I enter the latter part of my career, I’m learning Italian and studying the Italian Middle Ages. At least now, I have a good reason to do so. As my department’s official AI liaison to the Arts and Humanities, it’s essential that I immerse myself in the same kind of activities my clients are engaged in. I call it learning from the inside out. “Eating your own dog food” is how they say it in business. In my case, that translates into creating AI-enabled tools that I, too, would use. Naturally, I have no pretensions of becoming a professional historian. Still, I can execute limited research projects to better understand a particular research workflow and its inherent challenges.
Over the past year, I’ve bounced around a bit, searching for a specific historical period on which to focus. Okay, I’ll come clean. I’ve been all over the place! But as the process has unfolded, I’ve gained a better understanding of myself and the effort it takes to settle on an area of specialization. History graduate students face a similar task when they matriculate into a graduate program. What period interests me? What topics? What do I find fascinating? And yes, human emotion plays a critical role in all this. Without it, the chances of success are slim to none. The typical graduate program requires at least four years of intense study, followed by the writing of a dissertation. To make it, you had better love what you’re doing.
Today’s computers do not experience emotion. An AI model doesn’t become excited as it masters new material, as back-propagation updates its many parameters (weights). A model isn’t conscious. It doesn’t set goals or acquire a passion for Byzantine mosaics. Humans do.
In recent years, computer scientists have begun to study how to impart human-like emotions to machines. This emerging field of study is called affective computing. Emotions are essential in decision-making, perception, learning, and more. Research suggests, for example, that they assist in forming long-term memories. Apparently, a pinch of passion is just what you need to master a given subject or skill. Emotions are interesting things. On the one hand, they fuel and drive one’s learning arc. This works well when one has a singular focus. But when that’s missing, emotions can be less helpful, pushing the learner in many directions, resulting in much energy expenditure with little forward progress.
Lacking emotions, deep learning models are never pulled one way or another. A model’s focus never wavers during supervised learning, a popular way of training AI models. Imagine you’re a superhuman student. Where others take an hour or so to finish a test, you complete it in minutes. Recognizing your special abilities, your instructor (supervisor) asks you to take a thousand tests. Each time you take a test, you immediately receive a score with the wrong answers highlighted and the correct responses given. After each exam, you take some time to figure out what went right and wrong. The test is then placed back in the file for you to take again in the next round of exams. Because you’re so fast, you see the same test repeatedly. Each time, your error rate goes down, little by little. At some point, you master the content, and the training ends.
Now, in the scenario I just described, only your final grade counts. That’s great news because there’s no need to get upset by poor scores on earlier rounds of tests. You have to focus on just one thing: minimizing those errors! This is essentially how deep learning models are trained. The supervised learning process mimics goal-directed behavior minus all the emotional baggage.
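The test-taking analogy can be sketched in a few lines of Python. This is a toy, hypothetical example of my own devising, not any real framework's API: a one-parameter linear model "retakes the same test" (invented training data), gets graded by a mean-squared error, and nudges its weight downhill after each round.

```python
# A minimal sketch of supervised learning, assuming a made-up task:
# fit y = w * x by gradient descent on mean squared error.

def train(data, epochs=200, lr=0.05):
    """Repeatedly 'take the test' (score all examples), then adjust
    the single weight w to reduce the error. Returns (w, error history)."""
    w = 0.0  # the model's lone parameter, initially uninformed
    history = []
    for _ in range(epochs):
        # "Grade the test": mean squared error over all examples.
        error = sum((w * x - y) ** 2 for x, y in data) / len(data)
        history.append(error)
        # "Study the corrections": gradient of the error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # a small step that lowers the error
    return w, history

if __name__ == "__main__":
    # The "answer key": examples generated by the rule y = 3x.
    data = [(x, 3.0 * x) for x in range(1, 6)]
    w, history = train(data)
    print(w)                          # should end up close to 3.0
    print(history[-1] < history[0])   # error fell over the rounds
```

Note that nothing in the loop celebrates a good round or despairs over a bad one; the update rule just follows the error signal, which is the whole point of the analogy.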
Over the past year, an algorithmic approach to my various historical explorations might have been nice. But wait! When seeking a research focus, there are no right answers, just options. My decision to focus on Venetian and Byzantine history from 500 to 1204 AD is neither correct nor incorrect. An algorithm cannot grade a response like that or even calculate its error. All one has are preferences, choices driven by human emotions and desires. And with no right answer, supervised learning falls apart. Here, at last, is something AI can never do.
Have I finally found my research passion? I think so, but I’m not sure. I remain, after all, an emotional human with too many interests and too little time.