
Vanderbilt Staff Used AI to Email Students About the Michigan State Shooting

Vanderbilt University. SeanPavonePhoto/Getty


After the recent shooting at Michigan State University, staff at Vanderbilt University in Nashville sent an email offering support and comfort to students. “In the wake of the Michigan shootings,” it read, referring to the February 13 attack that left three students dead and five injured, “let us come together as a community to reaffirm our commitment to caring for one another and promoting a culture of inclusivity on our campus.”

Then, in small text at the end of the message, was a line revealing that the 300-word email had been, at least in part, generated by artificial intelligence: “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication, February 15, 2023,” it read, the Vanderbilt Hustler reported.

The email was signed by three affiliates of the Office of Equity, Diversity, and Inclusion in Vanderbilt’s Peabody College of Education and Human Development, including Associate Dean Nicole Joseph.

Joseph, in an email to students the following day, reportedly apologized for the incident, calling it “poor judgment.” “While we believe in the message of inclusivity expressed in the email, using ChatGPT to generate communications on behalf of our community in a time of sorrow and in response to a tragedy contradicts the values that characterize Peabody College,” it read, according to the Hustler.

Camilla P. Benbow, Peabody’s dean of education and human development, apologized further in a statement noting that her office would conduct “a complete review” of how the email was generated and sent. Her statement added that Joseph and another signee, an assistant dean, would “step back from their responsibilities with the EDI office.”

“The development and distribution of the initial email did not follow Peabody’s normal processes providing for multiple layers of review before being sent,” Benbow’s statement reads. “The university’s administrators, including myself, were unaware of the email before it was sent.”

The faux pas only adds to concerns over the inappropriate use of AI to generate false news articles, breakup scripts, fraudulent school papers, and college admissions essays. It also illustrates how AI-generated messages can backfire in situations that demand human empathy.

As one Vanderbilt student, sophomore Samuel Lu, told the Hustler, “It’s hard to take a message seriously when I know that the sender didn’t even take the time to put their genuine thoughts and feelings into words. In times of tragedies such as this, we need more, not less humanity.”
