ChatGPT-wary universities in the US scramble to prepare for new school year

Generative AI tools, like ChatGPT, are fed vast amounts of data and then use that training to answer users’ queries — often with eerie accuracy.
Manu Ros/Unsplash
  • ChatGPT set the academic world ablaze after it was introduced in November, when the AI chatbot suddenly gave students a hard-to-detect shortcut for completing essays and assignments.
  • Professors and administrators seeking to integrate generative AI into their curriculums are left with a big question: How?


ChatGPT set the academic world ablaze after it was introduced in November, when the AI chatbot suddenly gave students a hard-to-detect shortcut for completing essays and assignments.

Nine months later, as a new school year nears, many universities are still crafting their response. Colleges around the world spent much of the previous academic year adopting ad hoc approaches to the software — or no policy at all.

Some professors banned the use of it outright, citing plagiarism, while others looked to incorporate it more intentionally into their curriculum. That led to inconsistent approaches across classes and departments.

The situation is only slowly changing now: Without clear guidelines that apply to various departments, universities risk repeating the free-for-all they experienced during 2023 final exams. But many are realizing they need to find a way to live with artificial intelligence.

“It’s moving so quickly,” said Eric Fournier, director of educational development at Washington University in St. Louis.

ChatGPT reached 100 million users in under two months, leaving academic officials in the dark as students latched on to the technology. “It went from curiosity to panic to a grudging acceptance that these tools are here,” he said.

From the outset, professors suspected that students were cheating, said Madison White, a student at Stetson University. “Without professors fully looking into the software, they often immediately assumed that it was a hack for students to get away from doing readings or homework.”

Generative AI tools like ChatGPT, developed by the Microsoft-backed startup OpenAI, are fed vast amounts of data and then use that training to answer users’ queries — often with eerie accuracy.

The software represents one of the biggest shifts in the tech world in decades, bringing a trillion-dollar opportunity, which makes it all the harder for schools to ban or ignore it.

But professors and administrators seeking to integrate generative AI into their curriculums are left with a big question: How? They need to find the right middle ground, said Steve Weber, vice provost of undergraduate curriculum and education at Drexel University.

Educators can’t completely prohibit use of the tool and neglect to teach it, but they also can’t allow its use with no constraints, he said.

Weber said:

It may be a good tool to use in certain later courses, especially those that are preparing students for careers in industries.

One professor at Washington University structured his final exam so students would generate ChatGPT responses with a prompt and correct the text in a way that only a human well-versed in the topic could do.

At the University of Southern California, business professors are experimenting with “TA chatbots” that will help answer logistical questions about the class syllabus.

Harvard University, meanwhile, relies on a duck-themed bot to answer student questions about its CS50 introductory computer science course. The “CS50 Duck” is designed to explain lines of code and advise students on how to improve their programming.

Such tools could work for all sorts of university departments, said David Malan, a Harvard professor who teaches the CS50 course. For now, though, integrating AI into classroom work is mostly relegated to technical fields.

“I’m sure it will take time for folks to decide for themselves how they’d like to address, if not incorporate as well, these new tools into their classrooms,” Malan said.

In some cases, professor-approved AI is spreading beyond the computer lab. At the University of Pennsylvania’s Wharton business school, Ethan Mollick was one of the first educators to add an AI policy to his syllabus.

The associate professor expects students to use AI and ChatGPT thoughtfully, while knowing the technology’s limits.

ChatGPT has helped make it clear that many students are just trying to pass classes to obtain their degree, said Arya Thapar, a rising junior at Chapman University. Unchecked, it’s not going to foster a love of learning or build critical thinking skills.

But universitywide policies have been slow to take shape. Drexel University is still hammering out its guidelines, but they’re expected to include the idea that students “don’t use it if it is not permitted, and if you do use it, then the usage must be cited,” according to Weber.

At Washington University and the University of Southern California, the use of AI in classrooms remains at the professor's discretion.

“The technology is evolving so quickly,” said Peter Cardon, professor of business communication at USC, “you really depend on the community to help you make informed decisions.”

But the uncertainty can create gray zones for students. If a professor doesn’t say anything about using AI in class, is it allowed — or could students face disciplinary actions?

That makes it a threat unlike other classroom technology helpers, like calculators. “It feels more like a profound change,” Washington University’s Fournier said.


A student at Santa Clara University said that ChatGPT single-handedly improved their grades in economics.

The chatbot would generate answers that the student didn’t fully understand, but were good enough to get full scores on problem sets and quizzes.

The student, who asked not to be identified because of the ethical questions surrounding ChatGPT, compared the situation to being a child of divorce: Each parent has different rules, and the guidelines become confusing without a unified approach.

A key step is to educate faculty on what ChatGPT actually can and can’t do, said Ramandeep Randhawa, senior vice dean for the USC Marshall School of Business.

“Our goal would be that we don’t think backwards like last semester,” he said. “Everyone is going to be racing against the clock continuously.”


