New Series, Vol. 2, No. 11
The course syllabus is “one of the most recognizable instantiations of academic genres,” through which an instructor communicates a substantial amount of information to students. Syllabi serve several communication purposes, which can be more or less congruent: conveying logistical information about course objectives, assignments, and policies; signaling an instructor’s approach to learning, authority, and expectations of students; and transmitting socializing messages about class and institutional culture. Thus, the authors of this study conducted a content analysis of the style and substance of syllabi policies on generative artificial intelligence (AI) specifically; theirs is among the first studies to investigate student-teacher interaction on this crucial classroom topic.
According to Common Sense Media, around 40% of high school students and 30% of college students report using AI—technology designed to simulate human thought to perform cognitive tasks—to complete their assignments. Research on AI in higher education settings is mixed, demonstrating both its potential for enhancing faculty and staff efficiency and challenges regarding its ethical use. Research also suggests that, while institutions generally grant instructors the agency to define their own AI policies, instructors often lack institutional support to make informed policy decisions about technical tools with which they have little experience.
In addition to examining syllabi policies on AI for emergent themes, the authors were interested in several elements of linguistic style. These included modal elements (i.e., reflecting permissiveness vs. necessity), stance elements (i.e., reflecting the degree of commitment to a policy), hedging (i.e., reflecting the degree of flexibility in the policy), and how the use of pronouns communicates power relations and/or inclusivity in the classroom. Researchers were also interested in the degree to which policies were learner-centered vs. instructor-centered and in the overall tone (i.e., warm vs. cold) of the policies. The authors used a combination of human coding and a linguistic software package to examine the AI policies from 92 English-language syllabi, voluntarily provided by instructors from various disciplines and institutions in the U.S., for these elements. This study did not investigate student reactions to or perceptions of these policies.
Regarding themes, 76 syllabi explicitly defined AI, 51 cautioned students about the potential for AI to generate inaccurate information, 32 cautioned students about discrimination and bias in AI output, and a mere 16 mentioned integrity-related concerns (e.g., plagiarism). The majority of policies (n = 52) allowed restricted use of AI, whereas 27 encouraged its use but with restrictions, 12 were “zero tolerance” policies that forbade AI use entirely, and two encouraged AI’s use with no restrictions. Broadly, hedges, stance verbs, and modal verbs were uncommon, and the tone of policies was generally positive and warm. Policies also typically included learner-centered elements such as a rationale for the AI policy, statements of student responsibility, and a description of penalties for policy violation. Invitations for instructor feedback were less common, though these may have appeared in other portions of the syllabi that fell outside this study’s empirical focus. Second-person pronouns (i.e., “you”) were most commonly used. The presence of more learner-centered elements, use of personal pronouns (i.e., “I” and “we”), more permissive language, use of hedges, and warnings about inaccuracy and bias were positively correlated with warmer perceptual tones of policies.
In a post-hoc analysis, researchers also investigated correlations between learner-centered elements in syllabi AI policies and software-generated variables of clout (i.e., “the relative social status, confidence, or leadership that people display through their writing or talking”) and authenticity (i.e., language that feels spontaneous rather than regulated or filtered). They found that both providing rationales for policies and general perceptual warmth were positively associated with both clout and authenticity.
These findings offer several points of practical guidance for instructors as they design AI use policies and tailor their syllabi more broadly. The authors concluded that most instructors who allow AI use in their classrooms are also attempting to guide their students toward accountability and ethical use of this technology, with 90% of the sampled syllabi policies placing the onus of AI competency on students themselves. The authors also advocated for more conscious and intentional use of pronouns in syllabi policies, such that instructors clearly signal where they want to assert authority and where they want to create a sense of inclusiveness and community with their students. Moreover, including invitations for instructor feedback would likely facilitate students’ ethical use of, and competent decision-making surrounding, AI.
Communication Currents Discussion Questions
- Think about a time when you used AI. How did you decide whether or not that use was ethical? Do you think students should be taught how to use AI tools responsibly in college? Why or why not?
- Why do you think AI tools might be biased or make errors? What are some possible consequences of relying too heavily on AI-generated information, both in school and in society at large? What are some other ethical or practical concerns related to AI (e.g., its environmental impact)?
- Have you ever had a syllabus that made you feel empowered or welcome—or one that made you feel the opposite? What specific features contributed to that feeling? What, in your opinion, makes a syllabus “good” or “bad” and how does a syllabus shape your expectations for the class or instructor?
For additional suggestions about how to use this and other Communication Currents in the classroom, see: https://www.natcom.org/publications/communication-currents/integrating-communication-currents-classroom
ABOUT THE AUTHORS
Stephanie Tom Tong is a Professor in the Department of Communication at Wayne State University.
Ashley DeTone is a Ph.D. student in the Department of Communication at Wayne State University.
Austin Frederick is a Ph.D. student in the Department of Communication at Wayne State University.
Stephen Odebiyi is a Ph.D. student in the Department of Communication at Wayne State University.
This essay, by R. E. Purtell, translates the scholarly journal article, S. Tom Tong, A. DeTone, A. Frederick & S. Odebiyi (2025). What are we telling our students about AI? An exploratory analysis of university instructors’ generative AI syllabi policies. Communication Education. Advance online publication. https://doi.org/10.1080/03634523.2025.2477479
The full copyright and privacy policy link is available at natcom.org/privacy-policy/