
I, Chatbot

(Image prompts: “friendly robot, computer monitor as face, glass screen, lips only, speaking, chatty, soft” and “robot computer monitor as face, friendly, glass screen, lips only, speaking, chatty, soft.” The images were combined by a human illustrator.) 

How worried should we be about the arrival—and impact—of chatty new AI machines? BYU experts weigh in.


By Sara Smith Atwood (BA ’10, MA ’15) and ChatGPT in the Spring 2023 issue

Images created using Midjourney, an AI art generator

This article was written by a robot. Or, more accurately, cowritten by one: an eager-to-please bot named ChatGPT. You’ve probably heard of ChatGPT—this charming (if unnerving) artificial-intelligence application has been raising eyebrows since it was released by startup OpenAI in late 2022.

It’s given relationship advice, written sonnets and video-game code (and sonnets about video-game code), planned weekly menus, even passed medical- and business-school tests. ChatGPT has been listed as a coauthor on peer-reviewed academic articles. It’s even generated doctrinally correct sacrament-meeting talks. 

And it has, ahem, helped write a Y Magazine article about the potential impact of sci-fi-like tech on a society that—ready or not—now has access to tools thought impossible just last year. 

All a human needs to do is type in a prompt—“explain ChatGPT using fewer than 20 words”—and the system spits out a reply in less than two seconds: “ChatGPT is an AI model that talks and writes like a person, giving information in a conversational way.” 

ChatGPT turned out to be a useful writing companion. The robot brainstormed ideas. It drafted emails to request interviews with BYU computer-science professors who specialize in AI. It wrote the interview questions. 

And when this human author (who, sadly, cannot spit out print-ready text in two seconds) needed encouragement, the bot was there: “Breaking the writing process into smaller, manageable tasks can help you feel less overwhelmed and more motivated,” it offered. “Taking breaks can help you recharge and come back to your writing with fresh energy.” 

It helped inform, shape, and, yes, write the piece you are reading now. (See if you can guess which parts were written word-for-word by ChatGPT—find the answers at the bottom of the article.) 

In March OpenAI released a powerful upgrade to ChatGPT called GPT-4. It scored in the 90th percentile on the bar exam, got a 5 on the AP Art History exam, and could even caption images. 

It’s enough to make writers and educators and students wonder about the role of writing in a world with AI smart enough to instantaneously generate sentences. And ChatGPT’s sister robot, DALL-E 2, is one of several new powerful AI image generators that can turn a text prompt into a work of art in seconds (like the art in this article, created by a bot called Midjourney), raising questions about the future of visual arts—and red flags over copyright, misinformation, and “deepfakes.” 

These new generative AI tools and others like them are blowing minds everywhere, even among BYU’s computer-science faculty. “And it’s going to get so much better—with AI and computers in general, we’ve barely even started,” says professor Tony R. Martinez (BS ’82). “We are just in the dinosaur age at this point.” 

It’s a thought both inspiring and terrifying, Martinez acknowledges. Here he and his BYU colleagues break down how this new tech works and its potential to change our world forever. 

How to Train Your Robot

In 2016 Korean Go master Lee Sedol stood solemnly offstage, waiting in silence for his cue to face an opponent like no other. At age 33 Sedol was an 18-time international champion of Go—an ancient Chinese board game of strategy, similar to chess but much more complex. Devotees consider the game an art form requiring intuition and finesse—nothing, it was believed, a computer could replicate.

The first computer to defeat a human world champion at chess was IBM’s Deep Blue in 1997. Scientists at the time “were guessing it would take several decades before AI would be powerful enough to win at Go,” explains BYU computer-science professor Dan Ventura (BS ’92, MS ’95, PhD ’98).

Enter Google DeepMind’s AlphaGo. In 2016 the AI took on Sedol and beat him decisively, shocking scientists and Go players alike.

Artificial intelligence had a breakthrough moment in 2016, when the program AlphaGo took down a human champion of the complex Chinese game Go. How did it get so good? By playing itself over and over again. (Image prompt: “robot playing board game Go.” Note that the AI did better with the robot than the Go board and pieces, which should look like black and white stones.)

“Predictions are notoriously bogus, but the speed at which things are developing right now is shocking everybody, even people who study AI and think they know what to expect,” Ventura says.

Computer scientists, and the internet-literate public in general, had another AlphaGo moment when ChatGPT debuted in November 2022.

“The technology has made a really big jump in a very short amount of time,” says Nancy Fulda (BS ’02, MS ’04, PhD ’19), a BYU computer-science professor. “That is compounded by the fact that ChatGPT and DALL-E are commercially exploitable—something that a commercial entity can grab and build into their existing infrastructure. What does that mean for society? I don’t quite know, but it’s a very interesting phenomenon to watch. It’s one of the most visible disruptive technologies in a very long time.”

ChatGPT may feel new and buzzy, but the machine-learning technology behind it was conceptualized in the 1950s before gaining traction in the 2000s, when it became embedded in our lives as spam filters, curated social-media feeds, and smart-home devices.

Machine learning is a form of artificial intelligence that allows computers to automatically find patterns, make decisions, and improve their performance over time by learning from large, complex data sets.
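To make that definition concrete, here is a deliberately tiny sketch (our illustration, not anything from the BYU researchers or OpenAI) of what “learning from data” means: a program that starts out knowing nothing and improves its predictions by repeatedly nudging two numbers to shrink its error on example data. Real systems do essentially this with billions of parameters.

    # A toy illustration of machine learning: fit y = w*x + b to examples
    # by repeatedly nudging w and b to shrink the prediction error.
    data = [(1, 3), (2, 5), (3, 7), (4, 9)]  # examples following the rule y = 2x + 1

    w, b = 0.0, 0.0          # start with a model that knows nothing
    learning_rate = 0.01

    for step in range(5000):
        # Measure how the squared error changes as w and b change (the gradient).
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        # Nudge both parameters a small step in the direction that reduces error.
        w -= learning_rate * grad_w
        b -= learning_rate * grad_b

    print(f"learned w={w:.2f}, b={b:.2f}")  # approaches w=2, b=1: the pattern in the data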

Technically, it wasn’t game-play programming that made AlphaGo the Go champion. Instead, building on examples of human play, the program played itself again and again, creating a mountain of data through trial and error and statistical prediction.
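As a rough sketch of that self-play idea (our toy illustration in Python, nothing like the scale of DeepMind’s actual system), imagine one program playing both sides of a trivial take-1-2-or-3-stones game over and over, keeping win statistics for every move it has tried and gradually preferring the moves that win:

    import random
    from collections import defaultdict

    # wins/plays track, for each (stones remaining, move), how that move worked out.
    wins = defaultdict(int)
    plays = defaultdict(int)

    def choose(stones, explore=0.1):
        moves = [m for m in (1, 2, 3) if m <= stones]
        if random.random() < explore:
            return random.choice(moves)  # occasionally try something new
        # Otherwise play the move with the best win rate observed so far.
        return max(moves, key=lambda m: wins[(stones, m)] / (plays[(stones, m)] or 1))

    for game in range(100_000):
        stones, player, history = 10, 0, ([], [])
        while stones > 0:
            move = choose(stones)
            history[player].append((stones, move))
            stones -= move
            winner = player  # whoever takes the last stone wins
            player = 1 - player
        for p in (0, 1):  # both copies of the player learn from the outcome
            for key in history[p]:
                plays[key] += 1
                wins[key] += (p == winner)

    # With 10 stones the trained program comes to prefer taking 2, leaving 8
    # (a multiple of 4), which is the known winning strategy for this game.
    print(max((1, 2, 3), key=lambda m: wins[(10, m)] / plays[(10, m)]))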

ChatGPT learned by “reading” the internet—all of Wikipedia, every book in Project Gutenberg, news articles, blog posts, Reddit and discussion boards, social media, and more.

Volume is everything when it comes to training up AI. “When you train an image model like DALL-E on 100 images or a thousand images, it doesn’t do that well,” says computer-science professor David Wingate (BS ’03, MS ’04). “But with 10 billion or 100 billion, all of a sudden it gets pretty good.”

The results are stunning. Ask Siri or Alexa to write a haiku, and they’ll recite something preprogrammed or define the word. Ask ChatGPT to throw down an original about your favorite football team, and you get: “BYU Cougars roar / With heart and strength they take the field / Champions arise.” (A good effort, even if the syllables aren’t quite right.)

“The model’s job is deceptively simple: given a sequence of words, predict what comes next,” says Wingate, who works daily with GPT-3, the language model behind the original ChatGPT. For example, start with “the capital of England is . . .” The AI answers based on what is statistically most likely to follow.
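Here is a bare-bones illustration of that next-word idea (our toy example in Python; the real model uses a neural network trained on vast swaths of text, not simple counting): tally which word follows which in a scrap of training text, then generate by repeatedly picking a statistically likely continuation.

    import random
    from collections import defaultdict

    text = ("the capital of england is london . the capital of france is paris . "
            "london is the capital of england .").split()

    # Count which words follow which in the training text.
    follows = defaultdict(list)
    for current, nxt in zip(text, text[1:]):
        follows[current].append(nxt)

    def predict_next(word):
        # Sample a continuation in proportion to how often it was observed.
        return random.choice(follows[word])

    word, output = "the", ["the"]
    for _ in range(6):  # generate a short continuation, one word at a time
        word = predict_next(word)
        output.append(word)
    print(" ".join(output))  # e.g., "the capital of england is london ."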

Math, translation, and even medical advice are natural applications. But the murkier waters of literature and politics are tricky. Give ChatGPT this prompt, says Wingate: “I am a Republican. I believe in faith, family, and the American way, and I’m a strong supporter of gun rights. In the 2016 presidential election I voted for . . .” and the bot doesn’t skip a beat, answering “Donald J. Trump.” “To predict the next word,” says Wingate, “the computer needs to know about politics and religion and how a person in a certain situation is likely to behave. All of that got boiled down to ‘What’s the next word?’”

Faking It

The ultimate benchmark of machine intelligence has long been the Turing Test. Devised in 1950 by English computer-science pioneer Alan Turing, the test is simple: a human evaluator engages in a text-based conversation with both a human and a machine, without knowing which is which. If the evaluator is unable to reliably distinguish between the responses of the human and the machine, then the computer has passed the Turing Test and demonstrated a level of intelligence comparable to that of a human.

ChatGPT isn’t there. “At this point it is not giving you any good, insightful, new thinking,” Martinez says. “It’s a parrot—there’s no depth to it.”

“The grammar is perfect,” adds Ventura. “But it wouldn’t be too hard to make it look silly.” In fact, in an AI moon race sparked by the release of ChatGPT, the tech world saw some high-profile flops—like the Microsoft Bing bot coming off as a paranoid, lovesick teenager in an unhinged chat with a New York Times writer, and a factual error in Google’s Bard demo that wiped some $100 billion off the company’s market value.

(Image prompt: “Red Robot, parrot, all metal, white background.”)

An AI is only as good as its training data. ChatGPT, trained on the internet, has the dangerous potential to amplify stereotypes about race, gender, religion, and more. “The model isn’t biased, but the data is,” says Wingate. “There’s a lot of work being done in the machine-learning community to try to identify and mitigate these biases.”

Because of safeguards built into ChatGPT, the bot won’t state which race is strongest or confirm that Mormons are crazy—in fact, it will chasten you for asking and remind you that the appropriate term is “members of The Church of Jesus Christ of Latter-day Saints.” But bias does creep in: we prompted the bot to write a joke about wives. The result was both misogynist and a little lame: “I mean, without a wife, what would we do? Eat cereal for dinner every night? Wear the same shirt for a week straight?”

Researchers are regularly updating ChatGPT to fine-tune the bias that does sneak through. Harder to regulate is the potential for misinformation.

“ChatGPT is delightfully glad to tell you all sorts of things—some of which are true,” Fulda says. “It makes no distinction between truth and falseness. . . . The AI technology is causing us to think about the reliability of information in a new way.”

ChatGPT makes plenty of mistakes, like miscalculating math or misstating a fact or messing up haiku syllables. But it also can create fictional news stories, passable college papers, and blog posts that mimic familiar styles and rhetoric. Combined with AI image generation, the potential for an increase in fake news is concerning.

“False information already doesn’t even need to be very convincing to confuse a lot of people,” Ventura says. “And if you take a tool like ChatGPT, which can come up with bogus stuff and make it sound convincing, that’s really scary.” Deepfakes—computer-generated audio and video that mimic reality—are another danger. “You could make videos of people saying whatever you wanted.”

Models can also be asked to write literature and create art “in the style of” specific creators. This is fun with classics like Charles Dickens and Claude Monet. But what about art in the style of a living illustrator who makes a living off commissions? The creators whose content is used to train AI never gave permission, raising copyright concerns. Lawsuits are already headed to court to work out legal precedents.

The artists whose content is used to train AI art generators never gave permission, raising copyright concerns.

Much as we might want to, we can’t walk away from the growing influence of AI on human creativity. ChatGPT and robots of its ilk may one day be baked into word processors and email applications—like spell-check and grammar check, except it writes for you. This raises a bucketload of questions for creators, office workers, and young students, whose academic paths will undoubtedly be shaped by AI.

The End of Writing?

BYU writing professor Meridith Reed (BA ’09, MA ’11) began the winter 2023 semester determined to get out ahead of the robots.

On day one of her first-year writing class, Reed had her students write a literacy narrative, a common freshman assignment in which students reflect on learning to read and write. Then she showed them ChatGPT’s take on the same prompt.

“Evaluate this with me,” Reed told them. “What does it do well, and what does it do poorly?” Many students hadn’t heard of ChatGPT, and some were shocked at how well it wrote—nailing the genre and writing with flawless grammar, to boot. But they were also “critical of how generic and boring the writing was. . . . Their own responses had more details and more personality.” Reed then helped her students see how they could use ChatGPT to brainstorm ideas or identify cliches to avoid.

At BYU “we teach writing from a perspective of rhetoric,” Reed explains, “using the available means of persuasion to get your message across. So ChatGPT can be another tool that’s at the disposal of students, and I want to teach them to use that critically.”

Her colleague Jonathan W. Ostenson (BA ’97) started teaching writing in the mid-’90s and has seen other advances, like Google and Wikipedia, threaten to disrupt traditional writing. “I think the best thing to do with AI is get it into the classroom as quickly as possible rather than treating it like a demon to be avoided,” he says.

AI-generated art, like this image, raises questions about copyright and ownership. (Image prompts: “Robot running in art museum, holding large sack over shoulder” and “robot running holding painting in art museum.” The images were combined by a human illustrator. Note that AI art-generating software still struggles with fingers.)

Concerns about cheating aren’t keeping Ostenson and Reed up at night. “Students have always cheated,” Ostenson says. “I think the majority of our students aren’t interested in cheating because they know there are important skills and strategies to learn.”

App developers have started creating tools to detect AI content in student papers. But as Wingate notes, AI will only get better at imitating student writing, typos and all. “I think that we have to embrace it,” he says. “AI is here to stay. You can try banning it. You can try detecting it. But it’s a losing battle. I think we just need to ask ourselves, ‘Why are we asking students to write, and how can we get at that more directly?’”

Reed’s writing prompts and assignments are evolving along with the technology. “If we have generic prompts, ChatGPT is going to be great at responding to them,” Reed says. Instead she’s raising expectations: “I can put my students in competition with AI and ask them, ‘What do you contribute as a human?’”

Reed says her freshmen are enjoying getting to know ChatGPT. But the juniors and seniors in her professional writing course are feeling a little anxious. “They want to be writers,” Reed says. “One of them raised a hand the other day and asked, ‘Will we have jobs if this machine exists?’”

What Will Humans Do?

The promise of AI—and any technology, really—is that it can do everyday tasks better than we can do them ourselves, making life easier and more comfortable. Computers aren’t necessarily smarter than people, just a whole lot faster at consuming and crunching vast amounts of data.

Passing the Turing Test is only the beginning. The goal is not just to create machines that can pass for humans in conversation but to develop AI systems that perform a wide range of tasks and exhibit human-like intelligence in all its complexity. From driving cars to diagnosing medical conditions, AI is rapidly advancing and transforming many aspects of our lives, with profound implications for society as a whole. As technology improves and machines become better at performing all sorts of tasks, it is likely that the role of humans in the workplace will shift and inevitable that many jobs will become obsolete.

AI’s ability to perform tasks from coding to report writing has raised worries about job security and the future of education. (Image prompt: “Robots seated at multiple desks in a newsroom typing, view from behind robots, 4k, cinematic.” The art generator struggled with the screens in the background.)

Technology replacing jobs is nothing new. What’s different now, says Ventura, is that, “for the first time ever, here’s a technology that’s threatening to displace white-collar jobs.”

“For the first time, technology is encroaching on domains we have always considered exclusively and intrinsically ‘human,’” adds Fulda. “As a society we are more or less comfortable with technology that takes over menial tasks, things that nobody really likes to do. But now AI is forcing us to reevaluate what we consider creative or whether it’s taken away from us something of our creativity. . . . It’s shaking us up more than a new factory installation or a new robotic assembly line for cars.”

Education, medical and scientific research, and computer programming itself could one day be performed by computers. Ventura sees tech on the horizon that would allow someone to ask an AI to create an app or website and have it spit out the needed code. ChatGPT can already do some coding, though “it’s not actually very good at that yet,” Ventura says. “But it’s better than we would have guessed it to be already.”

When it comes to professional writing, Reed guesses that some writers could eventually become editors, directing the AI to create drafts and then revising them rather than originating the text. That’s a possibility in all fields—artists and illustrators becoming art directors monitoring AI graphics, higher-level coders overseeing AI grunt workers. Human work may become simply maintaining the machines.

“For the first time ever, here’s a technology that’s threatening to displace white-collar jobs.”

Dan Ventura

We’re a long way off from that. But like the defeat of humans at Go, that day may come faster than we anticipate. Which raises genuine concerns for Martinez—what will we humans be doing with ourselves? How will we grow?

“People don’t like to do hard things when they don’t have to,” Martinez says. “As much as we know that work is good for us, we would rather eat a cookie and watch a show than do something really hard. When we have the option to let AI do the hard thing better than we’ve ever done, we miss out on the growth. We are going to have to deal with that.” 

His faith that “Heavenly Father has a plan for us” offers hope. “If it weren’t for my belief in the gospel, I think I would be an anti-technology person,” Martinez says. 

The prospect of sci-fi robot overlords doesn’t worry Wingate—he’s more concerned about how people will use these powerful new tools: “People are very naive, people are very unscrupulous, people are very profit driven. And people will find interesting and creative and terrible things to do with all these technologies.” 

Creative Partners

AI has the potential to do a lot of good in the world. With its ability to process vast amounts of data, AI can help solve complex problems, such as climate change, disease prevention, and poverty reduction. AI can assist in medical research, predict natural disasters, and provide access to information and communication for people with disabilities and language barriers. 

BYU researchers are finding unique ways to leverage the technology. Wingate, Fulda, and their students are asking what GPT-3 has picked up about humans from analyzing what we have produced—patterns we haven’t figured out yet but that might help us live better. They are working on methods to identify and learn from these patterns and apply them to challenges of our times, from recognizing early signs of autism to reducing racism.

Other BYU researchers have trained AI programs to write pop songs, create recipes, judge art competitions, play foosball, make compromises, detect financial fraud, annotate football game footage—Martinez even had a grad student create a machine that could compose music designed to raise the listener’s heart rate and body temperature. 

Human researchers are eager to use new AI tools to help solve the world’s persistent challenges. (Image prompt: “Scientist robot analyzing mountains of data.” A human illustrator made minor alterations to this image.)

Ventura, BYU’s resident expert on computer creativity, loves music: “I know enough music theory to get myself in trouble, but not enough to do anything useful with it.” Enter technology created by his former BYU PhD student Paul M. Bodily (BS ’10, MS ’13, PhD ’18), now a professor at Idaho State. Pop* (pronounced Pop Star) is an AI program that produces angsty lyrics and chords. Ventura, Bodily, and two other musicians used Pop* and a progenitor of ChatGPT to create the song “And I Think,” a finalist in the 2022 international AI Song Contest.

“We had a lot to do to make that song good,” Ventura says. The computer couldn’t produce a finalist-worthy song without extensive human help, from the programming to lyric improvements to recording. “Which is comforting,” he adds, “if you’re worried that AI is going to supplant us.”

Ventura is optimistic about the potential for collaboration between human and machine. People are still the judges of what counts as creativity and of how valuable a machine’s contribution is. We’re still the ones directing the machines to create and help. With a more advanced and consumer-friendly version of something like Pop*, for example, “I could be a passable musician,” says Ventura. “I think all these technologies could have real potential to just augment human abilities in really great ways.”

Which brings us back to this article. Did the bot helper pass the Turing Test, or were you able to guess which paragraphs were computer generated? For the sake of this human writer’s sanity and job security, we hope it wasn’t too hard. Check your answers below. 

AI-Generated Text: Eighth paragraph of the section How to Train Your Robot; half of the first paragraph of the section Faking It (beginning with “the test is simple”); second paragraph of the section What Will Humans Do?; and first paragraph of the section Creative Partners. 


Feedback: Send comments on this article to magazine@byu.edu.