The infinite ‘Seinfeld’ problem

How should we approach artificial intelligence?

Around the midway point of 2021, I remember having my first experience with true, widely available “artificial intelligence” through DALL-E. The app’s promise was intriguing — type anything in and it’ll generate an image of it! Sometimes it worked, too.

However, after getting more specific with my prompts, I soon realized that its results were far closer to a pastiche of its namesake, Salvador Dalí, than to its ambitions, creating pictures that veered between laughably surreal and entirely incomprehensible. The technology was certainly interesting, but at the time DALL-E struck me as an amusing novelty, not the technology of the future.

A year and a half later, I must admit that I was incredibly, bafflingly wrong. In such a short amount of time, the reality of public AI technology has shifted from “we gave a robot 100 gangster movie scripts, and it wrote this funny mess!” to “could we be replacing entire industries with computers next year?” Among other advancements, DALL-E can now make pieces that win art competitions, chatbots have been able to pass exams at some of the most prestigious schools in the country, and an AI on the streaming site Twitch writes new, endlessly running episodes of “Seinfeld” with tens of thousands of viewers — at least until recently, when it was suspended for making anti-transgender jokes.

ChatGPT, the most notable of these AI services, reached over 100 million users just two months after launch, proving itself to be adept at everything from basic coding to writing hip-hop verses in the style of Herman Melville. 

A prime example of the handiwork that ChatGPT deftly crafts on the fly. Photo courtesy of Kelly Dougherty via ChatGPT.

The speed and ubiquity with which these programs have entered the public consciousness have raised many concerns. One of the most notable is factual accuracy, as chatbots have been found to get basic questions wrong or even make up false answers. For example, Google’s newest ChatGPT competitor, Bard, dropped Google’s market value by 8 percent at its launch event when it claimed that the recently launched James Webb Space Telescope had taken “the very first pictures of planets outside our solar system,” an achievement that in reality dates back to 2004.

Another pressing issue with chatbots, especially here at the University of Dallas, is the danger they present to academic integrity, as they open the potential for homework answers and even entire essays to be artificially written.

Dr. Jonathan Sanford, president of the University of Dallas, argues that the biggest threat AI poses in this regard isn’t strictly the threat of cheating, but the negative effects it can have on students’ academic ability. “The writing is not particularly reflective,” he says, “and even if you could generate essays that look perfect for a Lit Trad III class, I think there are ways to encourage students to write themselves because that is critical to their education. So anyone who’s looking to cheat, whether it’s through [ChatGPT] or some other mechanism, is just really depriving themselves of a great good.” 

If these concerns have become this serious after only a few months, what will come of AI as it continues to learn and advance? Could it even be sentient? Before anyone begins stockpiling weapons for the war against Skynet, though, it should be reassuring to know that while AI can easily outclass humanity in purely informational knowledge, it’s hard to say that it could ever attain the qualities that make humans, human.

AIs are smart but entirely analytic; they see facts and devise solutions based on known circumstances. They are hardly able to read situations beyond their own data, understand emotional consequences or hold actual beliefs. Humans are not machines; we can be stupid, irrational and unpredictable, sure, but also compassionate and philosophical, seeking something great even if we can’t grasp it right away.

One way to demonstrate this is to consider the aforementioned “Seinfeld” AI, “Nothing, Forever.” Most of its conversations are entirely unremarkable, with characters usually making vague small talk about trying a new restaurant or how the traffic was on the way home. When actual jokes are told, the AI seems to understand a vague outline of humor, setup then punchline, but the jokes either have existed since the Victorian era or end up not being jokes at all. Most of the show’s real humor is derived from situational absurdity, pure meta-humor like “oh, they’re talking about TV shows, in a TV show!”

It cannot, and probably never will, grasp things like timing, vocal inflection or subtlety: the on-the-spot human quirks that can’t be mathematically explained, yet make the difference between a scene that is amusing and one that is genuinely hilarious. For AI, what can be done is only what can be given a rational explanation, fitting within countless lines of code on a limited chip. Humanity’s ability to break out of that and do something for the sake of conviction, taking a leap of faith, is our greatest comfort in securing ourselves against the computers.

Of course, at this point it seems that any proper predictions about AI are a matter of “when,” not “if.” Our most ludicrous ideas about its abilities will probably be fulfilled at some point, maybe sooner than we think. 

It’s entirely likely that there will be a future where politicians will have to defend themselves against artificially generated videos of their likenesses committing unspeakable — for a Catholic school newspaper — misdeeds, or where the same computer that helps you pick out your dream car will be hard at work at Marvel Studios writing the script for “Spider-Man 16: He’s Almost Home, We Swear!” But a future where the line between humans and machines is gone? Call me an old geezer, but it doesn’t seem likely.
