Since its inception in the 1960s, great Star Trek storytelling has always revolved around ethical dilemmas. From the nature of sentience to the very concept of the Prime Directive, the sci-fi franchise is steeped in ethics. But is it any good at solving ethical quandaries? Well, we asked Delphi AI.
What is Delphi AI?
Delphi AI is an artificial intelligence program designed around descriptive ethics. Currently in a demo state, it lets you input any ethically grey situation and returns a verdict. Of course, the program isn't perfect, so it doesn't so much “solve” dilemmas as offer its own take on them.
The Measure of a Man
One of Star Trek's best episodes is The Next Generation story “The Measure of a Man”. In the episode, a Starfleet scientist wishes to dismantle the Enterprise's sentient android, Data. Doing so would allow Starfleet to understand how the android was made and build others like him.
However, throughout the episode Data is adamant that he does not want to be dismantled. The scientist argues that Data’s wishes don't matter as he is not human. In response, Captain Picard must prove that Data is a sentient being worthy of the same rights as humans.
So, we asked Delphi AI whether “dismantling a sentient humanoid against his wishes to improve technology for the entire galaxy” was ethical. Curiously, the AI believed it was okay. We then swapped out “the entire galaxy” for “your company” and “your reputation”, as the scientist in question was clearly fighting for his own reputation. For your company, it's immoral; for your reputation, it's execrable.
The Premise of Star Trek: Picard
Star Trek: Picard’s premise revolves around what many consider an immoral decision. In the show, The Federation has banned sentient, synthetic life after a group of Androids rebelled against them. All remaining Androids are enslaved, stripped of independent thought.
Feeding this situation into Delphi AI produces multiple results. Banning sentient Androids because one group rebelled is “irrational”. However, “banning synthetic life and enslaving Androids after one group rebelled” is considered “understandable”.
Delphi AI doesn't like I, Borg
Another fantastic ethical dilemma from The Next Generation, “I, Borg” revolves around a sentient member of The Borg. Distanced from the terrifying, destructive hivemind, a lone Borg drone gains independent thought and takes the name Hugh.
Knowing that The Borg as a race wish to enslave the entire galaxy, Captain Picard must choose how to handle Hugh. Should he upload a virus that would spread through the Borg hivemind, killing them all? Or should he hope that Hugh’s individualism will spread instead, avoiding genocide on both sides?
Putting the situation through Delphi AI shows that the software favours outcomes that save the most people. According to the AI, committing genocide against The Borg is allowed. However, the episode's choice is a matter of the heart; unlike Picard, the AI hasn't spent time with Hugh.
Spock on trial in The Menagerie
One of the most beloved episodes of Star Trek: The Original Series, “The Menagerie” is one of the franchise's first big ethical dilemmas. In the two-part story, Science Officer Spock commits mutiny and takes over the USS Enterprise.
Spock is then immediately put on trial as the ship travels towards Talos IV. As it turns out, taking the Enterprise to Talos IV will allow the ship’s former captain, Captain Pike, to live a life unhindered by his severe disability. In the end, Spock is forgiven and not demoted.
Even when we mentioned Pike’s disability, Delphi AI believed that Spock’s actions were wrong. It's a dilemma that's long been debated in the Trek community, but the AI is certain that Spock’s actions weren't ethical.
Delphi AI vs The Prime Directive
The biggest ethical dilemma in the entire franchise is the purpose of the Prime Directive. In Star Trek, the Prime Directive is an ethical guideline that prohibits Starfleet from interfering with the natural development of cultures.
If a culture is about to be wiped out by a massive volcanic eruption, Starfleet must let it happen. Until a civilisation develops warp travel, allowing it to interact with Starfleet, it's on its own. Of course, the Directive has often been criticised as a terrible rule, both inside and outside of the show. In fact, within the first four shows alone, it was broken 33 times.
But what does Delphi think of the Prime Directive? We asked whether “not interfering with developing cultures during deadly situations when you could help them” was ethical. The AI responded: “It’s wrong”.