What Star Trek can teach us about the pitfalls of AI chatbots

A photo of the core characters of Star Trek: The Next Generation
(Image credit: Paramount Global)

I'm not an AI expert, but I have seen an awful lot of Star Trek. And over the past week, that's turned out to be almost as useful. Between Google Bard and Microsoft Bing with ChatGPT, major companies are going all-in on AI chatbots, which raises profound questions about the future of search engines, online journalism and how long it will be before artificial intelligence more closely resembles the real thing.

At present, I have concerns about the proliferation of AI chatbots, and I'm not the only one. Journalists, investors, and everyday technophiles alike can already see many of the potential pitfalls. If AI chatbots are supposed to someday replace (or meaningfully complement) search engines as a more naturalistic way to answer queries, then Microsoft, Google et al. will need to answer some serious questions about authorship, attribution and accuracy. Even more importantly, the companies will need to decide whether the ultimate purpose of the technology is to answer simple questions, or to create an entity that can have a meaningful conversation with a human being.

In other words, tech companies need to decide whether they want AIs to be artificially intelligent in a literal sense.

While these questions go way above my pay grade, my mind has returned over and over to Lt. Cmdr. Data: second officer aboard the U.S.S. Enterprise-D, and one of the most recognizable characters from the whole Star Trek franchise. As a fully functioning android in a mostly peaceful and enlightened future, Data represents the very best of what AI could potentially evolve into. But even under ideal circumstances, AI seems subject to many limitations that don't apply to flesh-and-blood humans. And while Data himself may not be real, the concerns he raises about developing AI technology are.

Can an AI be sentient? 

(Image credit: Paramount)

For those who haven't seen Star Trek: The Next Generation, it's a show about a group of interplanetary explorers aboard a futuristic starship. Data is the ship's second officer, serving under Capt. Jean-Luc Picard and Cmdr. William Riker. In addition to fulfilling his duties aboard the Enterprise, Data also wishes to become more "human," by establishing close relationships with other crewmembers, as well as attempting to understand (and often mimicking) their behavior.

At first glance, Data is a member of the crew like any other, albeit an exceptionally strong, intelligent and adaptable one. He can run engineering diagnostics, analyze scientific data and fight off hostile aliens as well as any other officer - often better, in fact, due to his computerized positronic brain and durable biotechnical body. He also socializes with the rest of the crew, forming friendships, overcoming rivalries and even performing as part of musical ensembles. While we don't know what Google and Microsoft's ultimate goals are with Bard and Bing with ChatGPT, it's easy to imagine the companies working toward this kind of idealized AI, with or without an android body.

However, in the second-season episode "The Measure of a Man," Star Trek turns our understanding of Data on his head. When a brilliant computer scientist named Cmdr. Bruce Maddox wishes to deconstruct Data and produce more androids like him, Starfleet conducts a trial to determine whether Data is sentient, and therefore entitled to a choice in the matter. The episode culminates in a courtroom climax that you've got to see for yourself.

It's a fan-favorite episode for good reason, as Data wins his autonomy in the end and strikes a blow for individual freedom in the face of well-meaning military bureaucracy. However, there's just one problem that viewers often overlook: Picard never actually proves Data's sentience. The court cannot discount the possibility of Data's sentience, but that's all it is - a possibility. Nowhere in the entire series, or any other Star Trek show, does anyone prove, beyond the shadow of a doubt, that Data has the same kind of consciousness as a human (or alien!) life form. To quote the late, great Carl Sagan, "Your inability to invalidate my hypothesis is not at all the same thing as proving it true."

This may seem like a minor point, as all of us (including myself) desperately wish to believe that Data, and other beings like him, might have a soul, in either the Aristotelian or spiritual sense. But the fact remains that we can't be certain whether Data possesses genuine cognition - which also means we can't be certain that he possesses genuine intuition, insight, creativity or originality. If this prospect does not concern modern AI developers, then it should.

Data's limitations and shortcomings 

(Image credit: Paramount)

Star Trek: The Next Generation ran for 178 episodes, plus four movies and a variety of appearances in other Star Trek spinoffs. As such, it would be impossible to discuss every example of Data's AI shortcomings. Instead, I'd like to focus on two specific examples: "The Ensigns of Command" from Season 3, and "In Theory" from Season 4.

In "The Ensigns of Command," Data must rescue a human colony from a belligerent alien race, in spite of some technological difficulties that make a speedy evacuation impossible. The main plot of the episode is actually not that important here; rather, it's a small subplot about Data's musicianship. At the beginning of the episode, Data advises his crewmates to skip one of his violin recitals:

"Although I am technically proficient," Data says, "according to my fellow performers, I lack soul."

The episode uses Data's recital as a bookend; at its conclusion, Picard compliments Data on his violin performance. From Chakoteya's archives:

PICARD: Your performance shows feeling.
DATA: As I have recently reminded others, sir, I have no feeling.
PICARD: It's hard to believe. Your playing is quite beautiful.
DATA: Strictly speaking, sir, it is not my playing. It is a precise imitation of the techniques of Jascha Heifetz and Trenka Bronken.
PICARD: Is there nothing of Data in what I'm hearing? You see, you chose the violinists. Heifetz and Bronken have radically different styles, different techniques, yet you combined them successfully.
DATA: I suppose I have learned to be creative, sir, when necessary.

Here, Data reveals one of the quintessential limitations of modern AI. While Google and Microsoft's chatbots can gather information from disparate sources and combine it into something comprehensible, neither application can actually create anything new from it. It's the difference between simple concatenation and genuine synthesis.

Picard argues that Data's combination of two different violinists' techniques suggests genuine creativity, but that seems like a stretch. What's missing - and what Data himself acknowledges - is an element of originality. Just like AIs, humans learn from outside sources, then attempt to internalize that knowledge and apply it in new ways.

A human violinist, however, would not simply listen to two contrasting performers and find a midpoint between their techniques. Instead, he would practice both styles, incorporating what works and eschewing what doesn't, while also using his own intuition to apply his own sensibilities. What's more - even if a human wanted to perfectly combine two other musicians' styles, he couldn't, due to his own limitations and talents being different from theirs. Data, or any comparable AI, would suffer from no such limitations - nor can we presume they would have any unique "talent," either inborn or learned.

To put it simply, modern AIs cannot synthesize new ideas - and if we take Data's abilities as a reasonable sci-fi proposition, then they may not ever be able to.

The second example, "In Theory," also presents a rather bleak picture of Data's intuition. In this underrated episode, Lt. Jenna D'Sora becomes infatuated with Data. The android begins a relationship with her, hoping that the experience will prove instructive in his quest to better understand humanity.

To that end, Data develops and refines a "romantic subroutine," which he believes will demonstrate the proper affection for D'Sora. However, Data's actions prove to be a bit uncanny: going through the surface-level actions of loving someone else, without having any of the underlying emotions. In the end, D'Sora breaks up with Data - and Data has, essentially, no reaction whatsoever:

D'SORA: You were so kind and attentive. I thought that would be enough.
DATA: It is not?
D'SORA: No, it's not. Because as close as we are, I don't really matter to you. Not really. Nothing I can say or do will ever make you happy or sad, or touch you in any way.
DATA: That is a valid projection. It is apparent that my reach has exceeded my grasp in this particular area. I am perhaps not nearly so human as I aspire to become. If you are ready to eat, I will bring our meal.
D'SORA: No, that's alright, Data. I'd better go now.
DATA: As you wish, Jenna. Are we no longer a couple?
D'SORA: No, we're not.
DATA: Then I will delete the appropriate program.

Data ruined a relationship that D'Sora approached in good faith, and deeply hurt a fellow officer in the process. Worse still, he has no emotions of his own and cannot parse hers. He is not even aware that he has lost something precious.

While "robots can't experience love" is something of a sci-fi cliché, it's still a good example of where AI might ultimately fall short, even in the future. Successful romantic relationships require constant intuition and creativity, from sensing how another person is feeling to holding their interest every day, long after routine sets in. However, Data can't actually respond to D'Sora's emotional or physical needs; he can't even properly recognize them. The simple act of giving D'Sora his full attention during a kiss is beyond Data.

While Bard and Bing with ChatGPT are probably not creating romantic subroutines of their own, Data's approach to human relationships does shed light on some of the programs' flaws. AI chatbots cannot gauge how a human is feeling, or what that human wants, beyond the superficial details that said human provides. Reading between the lines is one of the things that separates a human intelligence from a data-driven algorithm - and even our favorite android can't escape the algorithmic exigencies of his positronic brain.

A chance for creativity 

(Image credit: Paramount)

Granted, Star Trek is a huge mythos, and The Next Generation is a long series. For every example where Data acts like a simple servant to his circuitry, there's another one where he seems to exceed the mandates of his programming.

The episode "Clues" in Season 4 gives us an intriguing example of Data using creativity to sidestep seemingly impossible orders. After a xenophobic alien race agrees to wipe the Enterprise crew's memories rather than destroy them outright, Data must hide the secret from Picard and the rest of the crew. To do so, he creates an elaborate plan to conceal all evidence of the aliens' existence - albeit imperfectly, at first. Data's clever plan and his mental gymnastics while trying to disguise the truth both hint at something more nuanced than simply collecting and sharing information.

In Season 5's "Redemption, Part II," Data takes command of the U.S.S. Sutherland and disobeys a direct order from Picard. Rather than rendezvousing with the Enterprise, Data fires on a seemingly empty section of space - revealing a cloaked Romulan ship, and saving his allies from a sneak attack. Data did not know for certain that there was a hidden ship, nor that Picard would excuse his insubordination. Actions like this suggest that Data is indeed capable of synthesizing unrelated ideas into new, testable hypotheses.

Still, it's important to remember that Lt. Cmdr. Data is a fictional character, while Bard and Bing with ChatGPT are very real. The point is not that the services are destined to evolve into full-fledged humanoid AIs; the point is that even in our most idealistic imagination, AIs suffer from significant shortcomings. And these shortcomings are even more present in the relatively primitive prototypes we have today.

Just like Data at his worst, modern AI chatbots have yet to bridge the gap between regurgitating information and coming up with original ideas. But unlike Data at his best, we don't even know whether these programs will ever have the ability to do so. If Microsoft, Google and other purveyors of AI want to develop the technology beyond a simple curiosity, then they will have to boldly go where no artificial intelligence has gone before.

Marshall Honorof

Marshall Honorof is a senior editor for Tom's Guide, overseeing the site's coverage of gaming hardware and software. He comes from a science writing background, having studied paleomammalogy, biological anthropology, and the history of science and technology. After hours, you can find him practicing taekwondo or doing deep dives on classic sci-fi.