Before I began to write this article, one of my professors suggested that I use ChatGPT to create a title for the piece. I did not do that, and I will be very offended if you think I did. However, I did decide to give ChatGPT a chance and typed, “Can you please create a title for a school newspaper article which features three interviews with professors at Brandeis University discussing the potential benefits and drawbacks of ChatGPT in their respective fields of study and the classrooms in which they teach?” In response, I got:

“Exploring the Impact of ChatGPT: Perspectives from Brandeis University Professors”

Aside from title making and the temptation to lighten one’s never-ending workload, AI use has become a rising concern in the education sector, where it can serve as a resource but also threaten the purpose of education in the first place. I was able to speak with three professors in the Brandeis community, all teaching different subjects and all with different experiences with ChatGPT and other forms of artificial intelligence in their classrooms.

On Feb. 13, I spoke with Prof. Elizabeth Bradfield (ENG). As a poet, Bradfield believes that AI should have no role in the creative process of writing poetry and other creative pieces. ChatGPT could be useful for things like getting lists of poems or finding useful information for a poem, but, Bradfield said, “I still have to do the reading and the thinking.” She said using artificial intelligence would be “the opposite of creating art.”

When talking about the joy and emotion that accompany writing and the writing process, Bradfield added, “Why would I give that away to AI?” As an educator, Bradfield would not encourage her students to use AI to create a poem. If someone handed her a poem created by AI, Bradfield said, it would be “a huge betrayal of trust. And why would I want to waste my time writing feedback for an AI poem?”

After speaking with Bradfield, I also had the opportunity to speak with Prof. Dylan Cashman (COSI) on Feb. 29. Cashman teaches two computer science elective courses as well as a few introductory courses. Discussing the advent of AI and its rising popularity, Cashman said that “it has changed a lot of people’s lives,” pointing to the ethical and professional questions that have arisen out of its increased usage. When asked what measures he would take if a student handed in a coding assignment completed using AI, Cashman replied, “I think we are still learning what to do in that case.”

On the use of artificial intelligence in elective computer science courses versus introductory ones, Cashman said his greatest concern would be, “Do you care about the product that they are producing, or the process that they undergo while doing it? And I think it’s a case by case basis by class.”

Cashman also raised the question of fairness when grading an assignment completed with AI against one completed without it, noting that much “AI detecting” software is not very accurate. A growing concern for Cashman has been preserving the essence of the learning process: “In a formative assessment: I want them to hit a wall and I want them to get over that wall. That is truly the value of education. If someone uses AI … I worry about that a lot.”

However, Cashman believes that in some cases, like editing, writing and advanced electives more concerned with short-term research, using artificial intelligence can have a positive outcome. As a final remark, Cashman said, “I think people are trying to decide what policies and cultural norms about AI should be based on how AI is being used right now. And people should get aware of how it will get better.”

Finally, on March 1, I was able to speak briefly about AI in the field of legal studies with Prof. Douglas Smith (LGLS), who began working at Brandeis as a Guberman Teaching Fellow. Smith works as the director of Legal and Education Programs with The Right to Immigration Institute. When asked about the use of AI in his professional career, Smith replied, “I used it at a conference we just had, a law and society conference in Puerto Rico. I think it’s great. I don’t think I would rely on it, but it’s great to talk to.”

As an educator, Smith is not opposed to the use of ChatGPT by his students when used properly. “I love ChatGPT. I encourage students to use it as a tool, as a research tool, and as a research tool they should cite it,” said Smith.

From the insights of these three educators, the consensus seems to be that we are still figuring it out. ChatGPT and other artificial intelligence platforms and applications can be useful as a guide or supporting resource, but they also present larger problems, such as undermining academic integrity, and carry broader implications for professional fields such as medicine and law.


— Editor’s Note: Justice Arts & Culture Editor Nemma Kalra ’26 is associated with The Right to Immigration Institute and was not consulted, did not contribute to, nor edit any parts of this article.