
School Apologizes After Using ChatGPT to Write Email About Mass Shooting

Students at Vanderbilt University called the use of the AI tool “disgusting” after a school shooting killed three and injured five in Michigan last week.
Janus Rose
New York, US
Image: NurPhoto / Getty Images

Last Friday, students at a university in Tennessee received an email from school administrators about the mass shooting that killed three and injured five at Michigan State University. It was a typical response to a tragically common occurrence in the U.S., except for one thing: the email stated it was written using ChatGPT, the automated tool that uses AI language models to generate text.

“In the wake of the Michigan shootings, let us come together as a community to reaffirm our commitment to caring for one another and promoting a culture of inclusivity on our campus,” reads the email sent to students, which came from the Office of Equity, Diversity and Inclusion at Vanderbilt University’s Peabody College and was first reported by the school’s student newspaper, the Vanderbilt Hustler. “By doing so, we can honor the victims of this tragedy and work towards a safer, more compassionate future for all.”


A final line in parentheses at the bottom of the email reads: “Paraphrase from OpenAI’s ChatGPT language model, personal communication, February 15, 2023.”

A screenshot of the email allegedly generated by ChatGPT

Image credit: The Vanderbilt Hustler

Understandably, Peabody students were horrified and angry that the school apparently used the automated tool to generate a message about the mass-shooting, with one calling it “disgusting.”

“There is a sick and twisted irony to making a computer write your message about community and togetherness because you can’t be bothered to reflect on it yourself,” Laith Kayat, a senior from Michigan, said, according to the Vanderbilt Hustler. “[Administrators] only care about perception and their institutional politics of saving face.”

Administrators at the college were quick to issue an apology, telling the Hustler that the use of the AI tool to comment on the tragedy was “poor judgment.”

“As with all new technologies that affect higher education, this moment gives us all an opportunity to reflect on what we know and what we still must learn about AI,” wrote Nicole Joseph, the college’s Associate Dean for Equity, Diversity and Inclusion, in a follow-up email sent to the student newspaper. 

When contacted by Motherboard, a Peabody College spokesperson sent a statement from Camilla P. Benbow, the school’s Dean of Education and Human Development.

“The development and distribution of the initial email did not follow Peabody’s normal processes providing for multiple layers of review before being sent. The university’s administrators, including myself, were unaware of the email before it was sent,” wrote Benbow in the statement emailed to Motherboard. Benbow also said the incident would be reviewed by the school’s administration, and that the deans in charge of the equity and diversity office would temporarily step back from their roles.

The incident is not the first time OpenAI’s automated language tool has caused conflict in academia. Large language models do not understand the content they generate and are not conscious machines. But their ability to produce text that seems like it was written by humans has led to worries about students using the tools to generate essays and cheat. Some schools have banned ChatGPT outright, while some instructors are cautiously incorporating AI generators into their curricula.