On Nov. 13, students gathered in the Mandel Atrium for a panel on artificial intelligence and politics, hosted by the Brandeis Society for International Affairs and the Alexander Hamilton Society. The event brought together Prof. Steven Wilson (POL), Prof. Ayumi Teraoka (IGS) and Prof. Constantine Lignos (COSI/LING) to discuss how AI is shaping political systems, global competition and everyday life.

Prof. Wilson’s research centers on the relationship between the internet and politics, particularly how internet use changes the way people communicate and interact. He explained that he participates in events like this one because part of being a professor is sharing research with the public. “If your expertise is valuable to people in such a way that they would like you to participate in something, that’s a good and constructive thing to do,” he said in a Nov. 17 interview with The Justice.

During the panel, Wilson discussed the impact of AI-generated misinformation. He described concerns about “dead internet theory,” the idea that online spaces have become dominated by bots and mass-produced content. “Democracy doesn’t function without communication,” he said. When communication channels are flooded with bad information, “it’s horrible for democracy,” Wilson added.

Building on Wilson’s points about online misinformation, Prof. Lignos discussed how AI systems determine what users encounter online. Lignos, whose work centers on natural language processing, explained that recommendation algorithms on apps like TikTok and Instagram already influence the political content people see. “Almost everything that you see is in some way affected by a recommendation system,” Lignos told The Justice in a Nov. 21 interview. He highlighted recent examples, such as a 2024 deepfake that falsely depicted President Joe Biden telling voters not to vote, to show how easily AI-generated media can spread.

Lignos also noted that AI tools allow individuals to create large amounts of misleading information at once. “It may be that a person came up with a disinformation campaign, but they were able to be much more effective by using AI to write maybe 10 different articles,” he said. 

He also pointed to the challenge of identifying AI-generated text. Lignos explained that current detection tools are unreliable, which makes it difficult to keep AI output out of the data that new models are trained on. “As these models become better at hiding the [fact] that they are models, their output becomes less distinguishable from people,” Lignos said. He explained that researchers are still unsure how to keep training data from becoming “heavily contaminated,” an issue he described as an open problem for the field.

The conversation also extended beyond the United States. Moving past misinformation and algorithms, Prof. Teraoka raised concerns about the development race AI has set off. She focused on the growing competition between the United States and China, including the adoption of cheaper models like DeepSeek in countries that cannot afford more expensive systems. She also noted that militaries are becoming more dependent on AI-driven data and that the energy demands of new data centers raise national security questions.

Despite these concerns, Wilson believes AI can also give ordinary citizens new ways to organize and express themselves. He described cases where people use digital tools to highlight corruption or challenge government narratives. “Being able to leverage that tool is democratizing in the sense that it puts power in the hands of anybody who’s willing to use it,” he said.

Wilson also noted that AI is reshaping research by giving scholars tools they did not have before. He said political scientists can now analyze large amounts of text and track online patterns that would have been impossible to study manually. “There’s not enough time in the universe for you to do that by hand,” he said, explaining that automated systems allow researchers to classify texts and take on projects that were previously out of reach.

According to the panel, governments and companies also play an important role in guiding the use of AI technology. Wilson explained that progress will rely on incentives and regulations that encourage responsible use. “We have a responsibility to try to guide it in ways that it can be socially constructive,” Wilson said.

The panel highlighted how quickly AI is becoming part of political communication, global competition and everyday decision making. The professors emphasized how important it is for students to understand how these systems work and how information circulates online.

The discussion also encouraged other members of the community to get involved. The moderator, Stephen Gaughan '26, said in a Nov. 17 interview with The Justice that one goal of the event was to increase student engagement across different fields. “I think this event was a really great opportunity to think about things in different ways, to apply different perspectives and to learn about how important things can relate to each other,” Gaughan said.

Gaughan believes events like this panel matter because they bring together people across campus who might not normally interact. He emphasized that meaningful conversations happen when students and faculty are able to share their perspectives. “We’ve got some very knowledgeable people who work here, and we have some very knowledgeable people who study here,” he said. “And I think it’s really great to get those people in conversation with each other.”