Artificial, not yet intelligent

Many students now face the temptation to utilize AI software for assignments. Photo by Henry Gramling.

AI language models such as ChatGPT have ushered in a new era for academic dishonesty. While at one time the average plagiarist relied on a block of text swiped from Wikipedia, the new school of cheaters can have a machine write an entire essay on the spot; they need only supply the program with a sufficient prompt.

For a dishonest student, this is a seemingly perfect crime. There is virtually no effort required, and as the text is freshly generated, there is no out-and-out proof of cheating. While this new technology is not without its limitations, it still seems to pose a threat to academic institutions.

AI is “a new sophistry, which promises to make us successful without making us any wiser. But note, success in education is just this: growing in wisdom,” said Dr. Chad Engelland, professor of philosophy.

On this point, plagiarists who use AI language models stunt themselves in two directions: they not only prevent themselves from developing their own thoughts, but they also block their professors from being able to assist them.

“Assignments are not a matter of providing good reading opportunities for professors: they are occasions for appropriating the truth for oneself and to discover when one has failed to do so effectively,” said Engelland.

The use of AI specifically undermines UD’s objective to produce effective independent thinkers. Language models incorporate an impossibly large body of information from databases, but they are limited to producing content that merely appears intelligent. While essay writing requires time and effort, the UD student is capable of achieving insights that dwarf ChatGPT’s ability to string facts together.

Cheating with the use of AI is a cheap way of seeming insightful. But, “Who wants seeming when being is there for the taking?” asked Engelland.

Dr. Theresa Kenney of the English Department cautions that, beyond the level of ideas alone, AI-driven tools can actually corrupt the quality of work that a student has already produced. While discussing the popular editing software Grammarly, Kenney noted, “We are seeing errors in student writing that are not their errors.”

In Kenney’s experience, students commonly place an implicit trust in Grammarly to correct errors in spelling or punctuation. However, the site “continually fixes these problems wrongly,” said Kenney. Her example was Grammarly’s consistent failure to recognize the word “probable,” which it frequently autocorrects to “probably,” causing major issues with a student’s prose.

We may be tempted to worry about what further trouble this technology could cause as it continues to improve. Here, Kenney has no anxieties. While AI will become more formidable over time, “the AI that allows Brightspace to find AI is getting continuously improved as well,” said Kenney.

While plagiarism is typically caught because a student pulled something from an uncited source, AI plagiarism has a very particular Achilles’ heel — it invents its own sources and quotations. “It’s not finding them anywhere, so it’s making them up,” said Kenney. “It’s a good way of catching plagiarism even if you don’t have a button on Brightspace that will do it for you.”

According to Kenney, the most apparent flaw of AI language models is their inability to reason adequately. “Even if you have a student who is having a rough time or is not really as good a writer as some other students, they are usually not going to have problems with basic logical connections,” said Kenney. “Even at a rudimentary level, you are thinking logically, and ChatGPT is not necessarily doing that.”

AI systems may promise a fully formed product, but in reality they have a very limited grasp of hard facts and lack the ability to make a real argument. If they did not have these weaknesses, their invention of quotations and sources would not be necessary.

Overall, while AI is constantly becoming stronger, it still has quite a way to go before it can pose as a convincing stand-in for the capacity of most students. AI is not only antithetical to a student’s purpose for attending a university, but it also betrays them on the fundamental levels of grammar and logical reasoning.

When a student uses AI systems to cheat, it is a more profound offense than basic plagiarism; it is a reliance on a product far less effective than what the student could produce alone. It is cheating in the worst sense, for it cheats students out of their own potential.
