Cal Poly professor Jason Peters
While the artificial intelligence (AI) chatbot ChatGPT has caused disruption and concern in many educational and professional settings across the nation, one Cal Poly English professor said the program is only as good as its user — at least for now.
“I’ve been experimenting with making ChatGPT produce what I would consider to be college-level writing,” said Jason Peters, associate professor of English, “the kind of prose a highly capable first-year Cal Poly student might compose — and what immediately became apparent to me is that the quality of ChatGPT’s output is only as good as the prompts you feed it.”
As an expert in composition and linguistics, Peters, coordinator of the Cal Poly English Department’s first-year composition program in the College of Liberal Arts, has experimented with the program since its release in November 2022. He found that while the AI is effective at composing essays, it currently lacks the ability and sophistication needed for college-level writing.
“You can tell it to write a five-paragraph essay on a specific topic, a business letter, a legal contract or even formulas for Excel or Python,” Peters said. “However, in terms of using the AI to produce college-level writing, it works best when you know how to write good prompts for it and get it to focus closely on small pieces of text.”
Through his testing, Peters found that the program cannot quote from a text or integrate source material into its prose; it doesn’t know how to select quotes to emphasize specific points; and if asked to quote a specific source, it will simply invent a quote “out of thin air.”
For faculty who worry about students using ChatGPT to produce essays for their writing assignments, Peters said that students may find it isn’t as easy as it seems.
“The technology works best when you already know your topic well, when you know the specific argument or analysis you want to make, when you understand the conventions for writing in your discipline, and when you understand the specific audience you’re trying to reach and how best to reach them,” Peters said. “It’s actually very labor-intensive to feed it the right kinds of prompts that will get it to produce good writing.”
Peters noted, however, that his findings are provisional and apply only to the current version of ChatGPT. Research at UCLA is examining how GPT-3 language models are capable of using logic and abstract thinking to reason about emergent issues and novel problems — things that only human cognition was thought to be capable of doing.
Until a more capable version is released, Peters said there are ways to both recognize AI writing and reduce student use of it. Faculty can input text into GPTZero, a tool that estimates whether a human wrote it. They can also design writing tasks around an inquiry-driven writing process, incorporate drafting and written reflection, and include in-class interactions such as peer reviews and presentations.
“But the best way to identify whether your students are doing their own writing is to change the way you assign and teach writing,” he added. “If your writing assignments can be credibly completed by a bot, there’s something wrong with your writing assignments.”
Peters remains optimistic about the future uses of the program. For example, AI can be leveraged to teach critical thinking skills and transform the writing process. He noted a webinar hosted by UC Irvine’s WRITE Center, which explored how to use ChatGPT as a teaching tool.
“Teachers can use it to create rubrics, lesson plans, writing assignments and even to evaluate student writing,” Peters said. “Writing is a technology, and you can’t do it without tools. Better writing-assistive technologies enable us to devote less labor and intellectual energy to the writing process and more of that labor and energy to the rhetorical purposes underlying our writing — analyzing the situations, audiences, purposes, and motives for what we want to say and what we want our writing to do.”