> Hello, I am Eliza. I'll be your therapist today.
* I'm nervous about the future of communication because of ChatGPT.
> Did you come to me because you are nervous about the future of communication because of ChatGPT?
> I see. Let's explore that a bit more.
—Conversation with an ELIZA chatbot
In 1966, Joseph Weizenbaum, then a professor at the Massachusetts Institute of Technology (MIT), published a paper introducing ELIZA, a computer program that made “certain kinds of natural language conversation between man [sic] and computer possible.” ELIZA was a chatbot: a program that allows a human user to hold a seemingly natural conversation with a machine. Weizenbaum’s chatbot imitates a Rogerian psychotherapist, who places the burden of the conversation primarily on the client by prompting them to share more and expand on their experience, as shown above. ELIZA is considered one of the first chatbots ever made, and its convincing dialogue inspired the term “ELIZA Effect” to describe how users anthropomorphize conversational machines.
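The transcript above hints at ELIZA's core trick: matching a keyword pattern in the user's statement and reflecting it back as a question. A minimal sketch of that kind of rule-based reflection follows; the patterns and pronoun swaps here are illustrative stand-ins, not Weizenbaum's original DOCTOR script.

```python
import re

# Pronoun swaps used to "reflect" the user's statement back at them.
REFLECTIONS = {"i": "you", "i'm": "you are", "my": "your", "me": "you"}

# Keyword rules: a regex pattern and a response template.
# Illustrative examples only, not Weizenbaum's actual rule set.
RULES = [
    (re.compile(r"i'?m (.*)", re.IGNORECASE),
     "Did you come to me because you are {0}?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE),
     "Why do you feel {0}?"),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in fragment.split())

def eliza_respond(statement: str) -> str:
    """Return the first matching rule's reply, or a generic prompt."""
    statement = statement.rstrip(".!")
    for pattern, template in RULES:
        match = pattern.match(statement)
        if match:
            return template.format(reflect(match.group(1)))
    # No keyword matched: fall back to a content-free prompt.
    return "I see. Let's explore that a bit more."
```

With these two rules, `eliza_respond("I'm nervous about the future of communication because of ChatGPT.")` reproduces the reply in the transcript above, and any unmatched input falls through to the generic prompt.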
Like Weizenbaum, I have created a bot. Mine, @RhetoricTweeter, is a Twitter bot that produces a hypothetical thesis for rhetorical criticism once per day, e.g., “In my next paper, I will use decolonial theory to uncover that Gay Instagram Memes cultivated the critic. #TeamRhetoric.” I created it in my first year of graduate school, experimenting with a more productive use of automation than the popular examples of election-meddling and anti-vaccine bots. @RhetoricTweeter joined a legion of “good bots”—like Protest Bots and Art Bots—on Twitter, most of which have since been killed off under Elon Musk’s ownership.
After creating @RhetoricTweeter, I hosted a “Deliberative Bots” workshop in 2019 during a department colloquium at the University of Maryland, College Park. My colleagues and I created and published our own Twitter bot, and I attempted to demonstrate why experimenting with bots and automation—when there were plenty of humans to engage with!—might be worth their time. In the last nine months or so, those of us who are studying artificial intelligence (AI) and communication have rarely needed to explain the relevance of our research. That’s because one chatbot has fundamentally reshaped our cultural conversations: ChatGPT.
What is ChatGPT?
In November 2022, ChatGPT was made available to the public as a web app: head to chat.openai.com, make an account, and you can start chatting. ChatGPT, an example of “generative AI,” is powered by a sophisticated machine learning model that allows it to generate clear written responses to prompts. Back in 2019, when I was playing with Twitter bots, OpenAI, the AI research and deployment company responsible for creating ChatGPT, was tinkering with GPT-2, a precursor to the models that power the eminent chatbot. GPT stands for “generative pre-trained transformer,” a specific type of Large Language Model (LLM). As OpenAI explains, ChatGPT has been trained on “vast amounts of data from the internet written by humans, including conversations,” and as a result has “learned” to generate responses to prompts.
My Twitter bot @RhetoricTweeter doesn’t really “chat” and doesn’t respond to human prompting. It runs on what is effectively an automated “Mad Lib,” generating a thesis from fill-in-the-blank sentences populated with banks of keywords I have fed it over time. It hasn’t “learned” anything and can’t process any prompts; it just follows the rules I’ve laid out for it. ELIZA and ChatGPT, in contrast, both process natural language: each receives inputs or “prompts” from a user, processes them, and generates an output—an appropriate reply (hopefully). But ELIZA has no large corpus of knowledge to draw on to understand and generate material. ELIZA only recognizes the limited set of keywords Weizenbaum programmed; ChatGPT, on the other hand, was trained on a dataset of millions of documents.
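The bot's fill-in-the-blank approach can be sketched in a few lines of Python. The word banks and template below are invented stand-ins for the bot's actual keyword files, but the logic is the same: pick one word from each bank, drop it into the template, done. No training, no prompts.

```python
import random

# Invented stand-in word banks; the real bot draws on keyword
# lists accumulated over time.
THEORIES = ["decolonial theory", "agonistic rhetoric", "media ecology"]
ARTIFACTS = ["Gay Instagram Memes", "presidential debate tweets", "TikTok duets"]
FINDINGS = ["cultivated the critic", "constituted a counterpublic",
            "reframed the body politic"]

TEMPLATE = ("In my next paper, I will use {theory} to uncover that "
            "{artifact} {finding}. #TeamRhetoric")

def generate_thesis(rng: random.Random = random) -> str:
    """Fill the template with randomly chosen keywords -- no learning involved."""
    return TEMPLATE.format(
        theory=rng.choice(THEORIES),
        artifact=rng.choice(ARTIFACTS),
        finding=rng.choice(FINDINGS),
    )
```

Every output is one of a fixed, countable set of sentences; the bot can never say anything its author didn't already put in a bank.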
The incredibly sophisticated technology that powers ChatGPT allows the chatbot to give much more advanced responses than my bot or Weizenbaum’s ELIZA. When I told ChatGPT the same thing I told ELIZA—“I'm nervous about the future of communication because of ChatGPT”—it attempted to reassure me with a nearly 300-word reply. “I understand that you may have concerns about the future of communication, especially with the advancements in natural language processing and artificial intelligence like ChatGPT. It's natural to feel a mix of excitement and apprehension when new technologies emerge and start playing a significant role in our lives… As developers, researchers, and users, it is our responsibility to ensure that AI tools are developed ethically, with transparency, and with a focus on augmenting human capabilities rather than replacing them.” This output, albeit a bit patronizing, is remarkably sophisticated. As The New York Times explains, ChatGPT’s huge training dataset gives it enough material to accurately assess context and generate an appropriate response.
Ready or not
Few—if any—have felt properly prepared to reckon with the effects of such a lucid chatbot. ChatGPT has fueled an outright crisis, especially in education. One story in The Chronicle of Higher Education, widely shared on social media and department listservs, carried this headline: “I’m a Student. You Have No Idea How Much We’re Using ChatGPT. No professor or software could ever pick up on it.” At Texas A&M University at Commerce, an entire animal science class was threatened with incompletes when their professor, misunderstanding how ChatGPT works, concluded that every student had used it to cheat.
But let’s not miss the forest for the trees: ChatGPT represents an advance in artificial intelligence that should fuel concerns far beyond academic dishonesty. The Washington Post reports that “economists predict artificial intelligence technology like ChatGPT could replace hundreds of millions of jobs, in a cataclysmic reorganization of the workforce mirroring the industrial revolution.” Digital humanities scholar Matthew Kirschenbaum predicts a coming “Textpocalypse” where a “tsunami of text” will make it “functionally impossible to reliably communicate in any digital setting.” How should we face these daunting issues?
Communication, Rhetoric, and Ethical AI
I met fellow NCA member Misti Yang when I was a brand-new Ph.D. student in 2018, and she introduced me then to the premise that automation and artificial intelligence were already pressing concerns for communication scholars. Her research on rhetorical theory and ethical AI explored important topics like the use of AI chatbots in therapeutic contexts.
Misti, who was the Mellon Assistant Professor of the Public Communication of Science and Technology at Vanderbilt University, passed away in March 2023. We have lost a promising voice to guide us as we face the unsettling new challenges and opportunities that come with AI development. We should be guided by Misti’s directive for us as communication scholars and participants in a shared world. She believed that communication and rhetorical studies scholars need to be at the center of conversations about the development of technology.
Misti’s dissertation, “Code Me a Good Reason: Joseph Weizenbaum and a Rhetoric of Ethical AI,” is a rhetorical biography of the man considered one of the pioneers of artificial intelligence. Working from Weizenbaum’s published works and papers housed in MIT archives, Misti identifies him as a “rhetorical theorist of computation” who recognized how the ethical reasoning of human engineers influences the tools they create and how, in turn, those tools influence capacities for ethical reasoning. She argues that “recognizing the practice of rhetoric inherent in engineering and ethics can better equip engineers and the public to manage scientific and technological uncertainty with the care necessary for a humane future.”
Like Misti, I believe that engineering and ethics are inherently rhetorical. Through communication, we invent and modify the world we live in—our values and our tools. We know, for example, that decision-making algorithms encoded into AI tools reinforce discrimination, inequality, and human biases, a phenomenon data scientist and mathematician Cathy O’Neil provocatively calls “weapons of math destruction.” Those tools then influence our values. Misti and I co-authored a paper, published in the NCA journal Critical Studies in Media Communication, that discusses how AI and automated tools attempt to manage human emotions at the expense of “building capacities for nuanced, challenging conversation.”
ChatGPT and other generative AI tools are bringing this conversation to the forefront of public attention—and with a sense of urgency. Now, people must communicate about how to create a humane future, and what that future looks like.
Facing AI in Communication Studies
We need communication theory, research, and ethical communication practice to guide conversations about our values and tools, about our lives as creatures who share the earth. With Misti’s perspective in mind, these are some ways that I believe communication scholars can and should serve as needed contributors and guides:
- Holding developers accountable for AI bias: Communication researchers like Heather S. Woods and Taylor C. Moran have already published important work on AI virtual assistants (like Siri and Alexa), showing how the gender and racial stereotypes built into these tools reflect and reify human biases and values. More research will be needed in this area as generative AI and new applications are introduced. Researchers can also investigate how publics hold accountable the organizations that produce AI software reinforcing such biases.
- Addressing AI and communication careers: “When confronted with the cold efficiency of the machine, the human appears as just an organic collection of potential errors,” writes Joshua Reeves regarding communication, automation, and labor. OpenAI research predicts that “80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs,” like ChatGPT. To make matters worse, “19% of workers may see at least 50% of their tasks impacted.” Public relations professionals, copywriters, graphic designers, social media marketers—these are all common careers for communication majors, and all could be threatened by generative AI that quickly produces text and images. We need research to identify how to ethically use AI tools in various communication workplace contexts, and we need advocacy from professional organizations to protect our profession.
- Examining AI, communication, and classrooms: Communication education researchers like Autumn Edwards and Chad Edwards are showing how AI might change the nature of communication instruction. In addition, the bevy of discourse about teaching and academic dishonesty offers an opportunity for scholars in the discipline to investigate and challenge our assumptions about teaching and learning, enlivening conversations about AI, detection software, and academic dishonesty. We should also identify practices in which working with AI would advance student learning and enhance teaching, drawing on the discipline’s long history of incorporating new media technologies into our classrooms.
Matthew Salzano, Ph.D., is an IDEA Fellow in Ethical AI, Information Systems, and Data Science and Literacy in the School of Communication and Journalism and the Program in Writing and Rhetoric at Stony Brook University (SUNY). He researches the intersection(s) of digital media, social change, and rhetoric and communication theory.