Artificial intelligence (AI) isn’t just on its way to humanities classrooms—it’s already here!
From students asking philosophy questions to ChatGPT to professors using AI platforms for sharpening writing and research skills, AI is transforming the humanities world every bit as much as computer science labs.
And despite fears that it may encourage cheating or erode basic skills, some humanities scholars are seizing AI’s classroom potential—like self-proclaimed AI “early adopter” Alexa Alice Joubin, professor of English, theatre, international affairs, East Asian languages and cultures and women’s, gender and sexuality studies.
Joubin has made AI a centerpiece of her scholarship. She’s an affiliate of the National Science Foundation and National Institute of Standards and Technology’s Institute for Trustworthy AI in Law & Society, a faculty member of the GW Trustworthy AI Initiative, founding co-director of the Digital Humanities Institute and an inaugural GW Public Interest Technology (PIT) Scholar.
In the classroom, Joubin has embraced AI as a technology tool that can be as instructive to the humanities as an encyclopedia—or even the written word itself. And to reluctant colleagues, Joubin stresses that the technology is here to stay.
“AI will always be in the classroom. Instructors who cannot pick up on that may feel frustrated and may not acknowledge it—but it’ll still be there,” she said. “The other alternative is to actively engage with AI as one of many promising learning tools.”
In her courses, Joubin uses AI platforms to help students learn how to ask quality questions, conduct in-depth research and refine their critical questioning skills. As a PIT scholar, she’s pioneering trustworthy AI projects, including creating an open-access AI tutor based on her own teaching model. And she also champions the technology’s potential to promote a more inclusive classroom for international students who may struggle with English and students with varying learning needs. “It’s an empowering tool if you deploy it responsibly,” she said.
In a recent conversation, Joubin explained what AI can bring to the humanities landscape—and how humanities can help shape the future of AI.
Q: You describe yourself as an early adopter of AI in the classroom. How did you first become interested in the technology?
A: I’m very interested in the relationship between art and technology. Technology relies on art. When you launch a new technology, you are telling a story, a narrative. There is technicity in art, and artistic imagination brings forth new technologies.
And, of course, art needs technology. If you think about it, what is a quill pen? It’s a craft for writing—a technology. Technology is any application of conceptual knowledge for practical goals. As early as ancient Greece, people were dreaming of machines that could do things autonomously. And even in the 20th century, [mathematician] Alan Turing famously gave us the Turing Test on whether there is consciousness in the computer—and consciousness is a humanities question. So, this didn’t start with ChatGPT. It’s one famous iteration over a long history.
When generative AI came along in late 2022, I was thrilled. I jumped on it right away. I was disappointed in the early days. But I’ve been steadily teaching with AI and urging my students to look at it realistically and critically. It’s not a devil and it’s not an angel. But AI is in our mix and it’s not going away.
Q: Where are we in the relationship between AI and the humanities? What does that landscape look like?
A: AI really is a humanistic issue, and it has ignited broad interest in questions about free will, mind and body and moral agency. When people talk about ChatGPT, they talk about these questions. That’s why the humanities are front and center in this [debate].
Humanities provide a range of tools for people to think critically about our relationship to technology and about the so-called eternal questions. What makes us human? How do you define consciousness? These classic philosophical questions have gone mainstream thanks to all the debate about ChatGPT. Free will has suddenly become an important topic.
Q: How do you think the humanities world is adapting to AI? It seems that most people are either pro-AI or anti-AI—and the humanities largely fall into the anti-AI camp. Am I wrong?
A: Unfortunately, there seems to be a lot of fear and uncertainty. Even worse, there’s an indifference—a thinking that this has nothing to do with humanists. But it actually has everything to do with everyone in fields ranging from humanities to social sciences and theory. It is forcing us to pause and rethink some fundamental assumptions.
But technophobia, fear and indifference can lead to a shunning of AI. And that translates into an unhealthy classroom. We know students are using it. When they graduate, they are expected to have literacy in it. And writing, critical thinking and meta-cognition are becoming all the more central because of AI’s challenges. The bar is being raised.
Q: Can you give me an example of what AI technology can bring to the humanities classroom?
A: It can bring a level of self-awareness—because AI is a social simulation machine. It cannot create new knowledge, but it’s a repository of social attitudes. I teach my students to treat it like a shadow image of society. It allows you to think at a meta-level about your role in society and how society reacts to certain things.
For example, when I teach Romeo and Juliet in my drama class, students invariably have ideas about performing the play in a modern setting. AI can generate visuals for the scenes they describe in their heads.
But students often come back to me and say: Why are Romeo and Juliet always white? Why aren’t they Black or Latinx or a queer couple? It forces them to rethink how they phrase their questions and their default assumptions. It’s an extremely fun and eye-opening exercise, but it also helps us examine our unspoken, unconscious racism or sexism.
Q: As a PIT Scholar, one of your priorities has been to explore issues around trustworthy AI. How do you see humanities contributing to that conversation?
A: How do you build trust? That’s fundamentally a humanistic question. And there are many ways to define it—transparency, ethics, accountability, interpretability.
The humanities are particularly good at exploring these critical theories in complex domains that deal with open-endedness. They require agile thinking. You have to be dynamic, always assessing and reassessing the context. Humanities scholars know that there’s no single universal morality. It depends on perspective. And a key humanities contribution is the ability to entertain ambiguity and multiple perspectives at once.
Q: What would you say to humanities colleagues who are skeptical—even hostile—toward using AI in the classroom?
A: I want to urge colleagues to try their hand at the open-access version of ChatGPT. There are a lot of articles floating around, but I don’t think they’re very helpful. I know there is technophobia in the humanities. But it’s just an interface; there are really no technological skills required. Just fool around with it and gain a concrete sense of what it can and cannot do. It’s all theoretical until you actually try it.