Over the last couple of years, the use of artificial intelligence in academic settings has been a point of contention. Some students use AI to generate ideas and act as a personal editor, while professors have mixed feelings about its use. While many have embraced the convenience that AI offers, Peizhao Li Ph.D. ’24 has focused on its risks: he was recently awarded a $55,000 fellowship through the National Institute of Justice’s Graduate Research Fellowship Program for his research to better understand bias in artificial intelligence and machine learning and to regulate its potential discriminatory impact.

This February, Li graduated from the University with his Ph.D. in computer science. His dissertation, titled “Harmonizing Fairness with Utility in Data and Learning,” uses data collection and algorithms to develop machine learning models that provide fair and non-discriminatory predictions while minimizing the cost to predictive performance. Li’s interest in AI dates to 2019, when he began pursuing his Ph.D. “I saw a lot of news saying that AI is the next generation technology. It’s very powerful,” Li said in a Feb. 9 interview with The Justice. “After I started my Ph.D., this field became a very interesting topic and at that time, there was very limited literature. It’s really important and critical for the later deployment of AI to ensure that such a new technology won’t bring us any negative impact in our daily life.”

Li’s dissertation has everyday relevance. For example, the acceptance or denial of a credit card application could be heavily influenced by AI software that was not trained independently of social differences, or what Li calls “sensitive attributes,” like gender, race and age.

Two individuals who share similar financial profiles but different sensitive attributes should receive similar treatment; in practice, however, the predictions for the two applicants can differ. “There are more benefits or more resources allocated to our privileged group compared to [the] underprivileged group,” Li said. “We do expect a system to give a higher probability to assign credit cards to a male applicant, rather than a female applicant.”
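To make the credit card example concrete, here is a minimal sketch of how a sensitive attribute can sway a score. The model and its weights are entirely hypothetical and are not Li’s work; they only illustrate the failure mode he studies.

```python
# Hypothetical credit-scoring sketch: NOT Li's model, just an
# illustration of how a sensitive attribute can sway a decision.
import math

def credit_score(income, debt, gender_is_male):
    """Toy logistic model whose weights were (hypothetically)
    learned from biased historical data."""
    z = 0.8 * income - 0.6 * debt + 0.5 * gender_is_male - 1.0
    return 1 / (1 + math.exp(-z))  # probability of approval

# Two applicants with identical finances, differing only in gender.
p_male = credit_score(income=2.0, debt=1.0, gender_is_male=1)
p_female = credit_score(income=2.0, debt=1.0, gender_is_male=0)

print(f"male applicant:   {p_male:.2f}")    # ~0.62, higher approval odds
print(f"female applicant: {p_female:.2f}")  # ~0.50, despite same profile
```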

The International Business Machines Corporation points out that bias found within AI can be inherited from the judgments and preferences of the humans who develop the programs. Bias in data selection or evaluation can also lead to flawed algorithms that perpetuate errors and unfair outcomes. Additionally, flawed and biased training data can amplify inaccurate output. IBM gives the example of how training data for a facial recognition algorithm that over-represents white people can lead to inaccurate results when the same algorithm is used to recognize people of color.
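One way to surface this kind of skew is to measure a model’s accuracy separately for each demographic group. The sketch below uses made-up evaluation results in the spirit of IBM’s facial recognition example; the numbers are illustrative, not drawn from any real system.

```python
# Sketch of a per-group accuracy audit. Data and model are hypothetical.
from collections import defaultdict

# (group, correct?) pairs, as might come from evaluating a face
# recognition model trained on data that over-represents one group.
results = [("white", True)] * 95 + [("white", False)] * 5 \
        + [("poc", True)] * 70 + [("poc", False)] * 30

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok

for group in totals:
    print(f"{group}: accuracy {correct[group] / totals[group]:.0%}")
# white: accuracy 95% / poc: accuracy 70% -- the disparity a skewed
# training set can produce.
```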

“We want the decision to be entirely independent of the gender or any other sensitive attribute we are considering,” Li said. 
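The statistical criterion behind Li’s statement is often called demographic parity: a decision is independent of a sensitive attribute when approval rates match across groups. A minimal sketch, with made-up decisions, shows how the gap is measured:

```python
# Demographic parity check: if decisions are independent of the
# sensitive attribute, approval rates should match across groups.
# The decisions below are made-up numbers for illustration.
def approval_rate(decisions):
    return sum(decisions) / len(decisions)

male_decisions = [1, 1, 0, 1, 1, 0, 1, 1]    # 1 = approved
female_decisions = [1, 0, 0, 1, 0, 0, 1, 0]

gap = approval_rate(male_decisions) - approval_rate(female_decisions)
print(f"demographic parity gap: {gap:.2f}")  # 0.00 would mean parity
```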

Li himself must keep his own bias in mind while conducting his research. “I think everyone [who] does research is biased,” Li told The Justice. “So I’m glad to collaborate with my advisor and many other collaborators. I think that the perspective coming from everyone can help us to ... reduce our bias.” Li worked on his dissertation under the guidance of Hongfu Liu, assistant professor of computer science, and plans to continue his research in collaboration with Liu.

Since finishing his Ph.D., Li has relocated to Bellevue, Washington, where he works as an AI scientist at GE HealthCare. His current work focuses on using AI to allocate medical resources, predict patient behavior and facilitate decisions for patient care. The medical industry in particular has a long history of gender and racial disparities that disproportionately impact historically marginalized groups, and with the added layer of technology, a focus on AI bias is needed to keep those preexisting disparities from being reinforced. “It’s a very critical topic in this field as well,” said Li. “Responsible AI is something [GE HealthCare] is considering. They want to ensure all the decisions coming from the health care AI system doesn’t contain any bias towards different races or gender. They want to allocate equal resources to different demographics.”

In addition to continuing his collaboration with Brandeis and Liu, Li hopes to use the NIJ fellowship to expand his research in AI fairness, connect with incoming Ph.D. students and develop workshops and events that would bring his findings to a larger audience. “I think that’s a very good opportunity to attract people to get to know about the technology and how we solve the problem programmably and methodologically,” Li said.

Addressing bias in AI is not just an academic pursuit but a necessity for everyday life. The literature on AI fairness is still young, but ongoing research will help uncover how AI can be used to create equitable and just systems that benefit all demographics.