We have reached a point where Artificial Intelligence is everywhere, bringing with it ethical and logistical challenges. In educational spaces, this new technology means reckoning with conflicting truths: On one hand, AI is a useful tool with increasing relevance in everyday life. On the other hand, it can serve as a false substitute for exercises in learning and critical thinking. As these considerations persist, the Brandeis community continues to grapple with AI's place in our classrooms.

Just as many students have differing opinions about using AI, members of this editorial board have observed a wide range of AI regulations across our classes: some professors encourage students to use AI to complete their assignments, some openly use AI themselves and others vehemently denounce it to their students. As students, how can we know how and when it is permissible to use AI if there is no clear consensus among the faculty? How can we learn the best way to interact with AI as we prepare to enter a workforce that itself engages with it in so many different ways?

The University provides resources for students to navigate and understand how AI will be used in each class. However, with each professor setting their own policy — policies that are often unclear to students because of vague, general language in course syllabi — some students struggle to know how and when AI is acceptable in each class unless they explicitly ask their professor about it, and even then the rules can remain unclear. This uncertainty is indicative of a larger problem: a lack of consensus about when and how to use AI in our classes. 

This editorial board recognizes that the University has launched a smattering of resources addressing AI usage in classes from the instructor's perspective. The University’s AI Steering Committee’s website features an Artificial Intelligence Acceptable Use policy, and last February the Center for Teaching and Learning shared Preliminary Guidelines for Teaching with Gen AI. Unfortunately, these guidelines place the responsibility for deciding how to implement AI on each individual professor. The Acceptable Use policy states that “under this policy, there will be disciplinary differences in how AI is integrated into coursework depending on learning objectives.”

This language is noncommittal and vague, offering no specific guidance on how each department or discipline should expect to interact with AI in its classrooms. Many of the proposed guidelines for teaching with AI suggest asking students to incorporate personal experiences or current events into their assignments, making it more difficult for AI to complete the work. But not all assignments can logically include such elements. Assignments built to foil AI rather than to inspire students’ critical thinking and learning can feel like an unproductive use of time for students already juggling heavy workloads.

This editorial board calls on the University to implement department-level policies on AI usage for students. It is unfair and confusing to expect each professor to curate an individual AI policy for their class without clear guidelines from the departmental or administrative level. While it is nearly impossible to write a clear AI policy at the university level, given the breadth of possible uses, this editorial board believes that department-level guidelines would benefit both students and faculty. Ideally, faculty from each department would collaborate on a detailed set of guidelines applying to all classes across the department. These documents would provide specific guidance for assignments typical of that department, relieving individual professors of having to craft their own policies and providing consistent expectations for students.

Furthermore, this editorial board calls for transparent guidelines on appropriate AI usage by faculty. If the purpose of defining appropriate AI usage for students is to preserve the integrity of our liberal arts education, it only makes sense that faculty be held to similar standards. The need for this kind of regulation is becoming clear at universities across the country: a Northeastern University student has already asked for an $8,000 refund after learning that her professor was using ChatGPT to generate materials. While no such event has taken place at Brandeis, members of this editorial board have expressed similar concerns that their professors are consulting AI or using it to generate course material. Discrepancies between student and faculty AI standards foster mistrust and discontent, a problem the University would be wise to examine sooner rather than later.

It is clear that concerns surrounding AI usage on college campuses will not go away anytime soon. If Brandeis truly wants to become a leader in liberal arts education, the time is now to implement curated, intentional AI policies that will effectively protect our school’s standards of academic integrity and excellence.