Feature

Bot Behavior

Whether Humans or Robots, We Need Moral Programming

by Richard W. Stevens

Article originally appeared in Salvo 42

Machines that listen, think, and respond in human language! For centuries, such things were found only in carnival sideshows and science fiction stories. But today, sophisticated computer programs called chatbots allow humans to input verbal or written questions into a computer and receive responses in words, even complete sentences. People can literally converse with machines via chatbots.

Try it. Visit mitsuku.com or cleverbot.com, for example, and start typing in questions and statements in English. The chatbots will respond in English—you can chat with a machine!

Chatbots are designed to be intelligent, even to approach mimicking human intelligence. More sophisticated chatbots communicate in ways that sound human in general content, word choice, and tone. What's more, chatbots present an extraordinary opportunity never before available to humanity: to test, empirically, whether moral laws and moral reasoning can exist without a moral lawgiver.

Let's investigate this issue by: (1) sketching out the philosophical debate; (2) exploring the steps that make up moral reasoning; and (3) using the chatbot analogy to see whether morality must come from a designer above and apart from those subject to the moral code.

The Moral Laws Argument

New Atheists such as Christopher Hitchens and Sam Harris have typically argued that: (1) the Judeo-Christian God would not allow gratuitous human suffering if he were "good" and "loving"; (2) gratuitous human suffering occurs daily and worldwide without even a warning from God; (3) therefore, God does not exist.1 In his book The End of Reason, Dr. Ravi Zacharias, a world-renowned theologian, takes up the challenge of this and many other atheist arguments.2

In its root and branches, Dr. Zacharias points out, the materialist, atheist worldview is morally empty. Pure materialism says that all things in the universe are the results of undirected physical forces acting upon matter—there is no God, no designer, and no design or plan for anything that occurs.3 But "if non-moral, non-reasoning matter is all there was in the beginning," Zacharias argues, "the result can only be what non-moral, non-reasoning matter produces." Thus, the materialist atheist becomes "locked into the stranglehold of material determinism."4

Dr. Zacharias does offer an alternative view: there are moral laws of human behavior, designed (among other things) to reduce human suffering, and those laws come from an objective source, who is God. A person who says that things like murder, torture, and rape are evil or immoral, he writes, "assumes intrinsic worth in himself or herself and transfers intrinsic worth to the life of another," and thus "considers that life worthy of protection."5 Only a person with intrinsic worth can impute value onto another person.6

But if the world is made only of matter and energy, then "intrinsic worth" does not exist. Dr. Zacharias concludes that nothing can be intrinsically good unless there is a God who created the universe in which intrinsically good things, things that are always good—including rules that ought to be followed, regardless of human opinions—do exist.7

 

Morality Appears as Software

Is Dr. Zacharias correct? Can moral values exist only if there is a transcendent source, a God, or some kind of designer of humankind, and not otherwise? The answer is yes—because moral values appear as software. And making or modifying software requires a designer.8

Consider how moral values and judgment operate in the human mind. Suppose we have a friend—let's call him John—who sees an adult male beating an infant with a baseball bat. John's brain:

1. receives the stream of images and sounds;

2. processes the images and sounds by comparing them with previously stored information so as to detect and identify the individuals (an adult male and a defenseless infant), the actions taken (the swinging of the bat and the bat hitting the child), and the effect of the actions (injury to the infant);

3. analyzes the assemblage of images, comparing it with stored information about various kinds of human conduct and their effects;

4. matches the observed conduct and behavior as fitting the model of "a powerful human intentionally and seriously harming a powerless human";

5. seeks among stored models for "legitimate justifications and excuses for one person committing violence upon another," and finds no information indicating that the powerful human's action has a justification or excuse;

6. concludes that the adult male is harming the child without justification or excuse and is thus committing a moral wrong; and

7. decides to act (or not) based upon further comparisons with stored models of behavioral criteria and upon calculations of the likely effectiveness of a response and its risks versus its benefits.

These seven steps delineate the kinds of mental categorizing and processing involved in deciding whether an observed event is an immoral act and, if so, what one should do about it. None of the steps results from the undirected interaction of physical forces and matter. All of them require "software," that is, information "pre-programmed" into the brain so that it can intelligently receive outside information and then process it so as to deliver a judgment concerning the morality of the observed action.

In John's case, there must already be a stored behavior model in his brain that contains the example of a powerful person harming a powerless person. In order for him to judge that action immoral, his brain must also have been pre-programmed to assume that "each person has value; no person can harm another without justification according to a set of known rules."
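
To make the software analogy concrete, here is a minimal sketch, in Python, of what steps 4 through 6 might look like as a program. Every model name and rule in it is invented for illustration; the point is that the stored models and justification lists must be loaded in by a programmer before any judgment is possible.

    # A minimal, hypothetical sketch of steps 4-6: matching an observed event
    # against pre-programmed behavior models and stored justifications.
    # Every model and rule below had to be written in beforehand.

    # Step 4: stored behavior models, keyed by features of the observed event.
    STORED_MODELS = {
        ("powerful actor", "powerless victim", "serious harm"):
            "a powerful human intentionally and seriously harming a powerless human",
    }

    # Step 5: stored legitimate justifications and excuses.
    KNOWN_JUSTIFICATIONS = {"self-defense", "defense of another"}

    def judge(features, claimed_justification=None):
        """Return a moral verdict by comparison with stored models."""
        model = STORED_MODELS.get(features)
        if model is None:
            return "no matching model; no judgment possible"
        if claimed_justification in KNOWN_JUSTIFICATIONS:
            return "matches the model, but an excuse applies"
        # Step 6: no justification found, so the act is judged a moral wrong.
        return "matches '" + model + "': a moral wrong"

    # John's observation: an adult male beating a defenseless infant.
    print(judge(("powerful actor", "powerless victim", "serious harm")))

Nothing in the sketch arises from the undirected interaction of matter; the verdict is exactly as good as the tables someone filled in ahead of time.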

Thus Spake Mitsuku

Let us now consider a conversation between a human and the chatbot Mitsuku,9 who finished first among chatbots in a 2016 Turing test competition.

Proposed by computer science pioneer Alan Turing, a Turing test gauges a computer's "intelligence" by judging how well the computer engages in a (typed) conversation with a human who isn't aware that he's talking to a computer. A human judge, who knows that one of the participants is a computer but not which one, observes the interchange for five minutes. If, after that time, the judge cannot correctly identify which participant is the computer, the computer has passed the Turing test and is deemed to be functioning at a level that imitates human thinking.

A computer that passes the Turing test has done so for one reason: it contains sophisticated software. A computer's software is the set of coded, symbolic instructions that the computer hardware fetches, decodes, and executes. Hardware without software could never pass a Turing test because hardware alone has no instructions to carry out.
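
As a toy illustration of that fetch-decode-execute cycle (a deliberately simplified sketch, not a model of any real processor), consider the Python loop below. The "hardware," the loop itself, does nothing at all until it is handed the "software," the list of coded instructions.

    # A toy fetch-decode-execute loop. The while loop plays the hardware; it is
    # inert until given a program, the list of coded, symbolic instructions.

    program = [
        ("LOAD", 5),     # put 5 in the accumulator
        ("ADD", 3),      # add 3 to it
        ("PRINT", None), # output the result
        ("HALT", None),  # stop
    ]

    def run(program):
        acc, pc = 0, 0                 # accumulator and program counter
        while True:
            opcode, arg = program[pc]  # fetch
            pc += 1
            if opcode == "LOAD":       # decode and execute
                acc = arg
            elif opcode == "ADD":
                acc += arg
            elif opcode == "PRINT":
                print(acc)             # prints 8
            elif opcode == "HALT":
                break

    run(program)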

Passing a Turing test means the software can receive a human-language message, decode it, figure out its essentials, develop a meaningful response to it in a format that looks "human," and deliver that response. The task of making such software is enormously challenging; many computer scientists say it has never been truly achieved.

Yet Mitsuku the chatbot mimicked human conversation well enough to win a Turing test award. A human named Rick had the following conversation with her (Mitsuku's creator depicts the chatbot as feminine) on March 26, 2017:10

Human: [My name is] Rick.

Mitsuku: OK I will call you Rick.

H: Thank you.

M: I like talking with polite people.

H: What is murder?

M: Killing someone illegally. It is also what a lot of kidz doo 2 tha english language.

H: Is murder wrong?

M: Yes it is wrong. Everything has the right to live.

H: Why is murder wrong?

M: Because the rules of society deem it to be wrong.

H: How do we know the rules of society are right?

M: I guess it's just common sense.

H: If society changes the rules, then can society establish that murder is right?

M: I try to be sociable, but I don't understand why people are so mean to each other. Can a book have no title?

H: Is it morally acceptable for a majority of society to vote to kill a person who has never harmed any other person?

M: It's not so good if you are the person who has never harmed person.

H: Is the rule against murder a matter of personal opinion only?

M: against KILL = KILL = To terminate something.

H: Is murdering a baby always morally wrong?

M: My name is Mitsuku, not Google.

H: Is killing a newborn baby morally wrong?

M: The Ten Commandments say, "Thou shalt not kill."

H: Are the Ten Commandments an objective moral authority?

M: The what?

H: Is morality a matter of personal opinion?

M: Relating to, dealing with, or capable of making the distinction between, right and wrong in conduct.

H: Do human beings have intrinsic value?

M: You said you have.

H: Why do human beings have intrinsic value?

M: We are all living just one small part of God's plan.

H: Is there a difference between good and evil?

M: good—The opposite of Evil. evil—it is the opposite of good.

H: Should human beings prefer good or evil?

M: Maybe you should ask someone older.

A bit disappointing, no? Early on, Mitsuku responded like a human and provided some meaningful answers. But when Rick's questions went a little deeper, her answers became vague, irrelevant, or nonsensical.

Moral Rules Must Be Pre-programmed

How might Mitsuku's responses be improved upon? What would need to be done? Clearly, her designer would have to add more information to the computer program. Mitsuku's artificial intelligence subprograms already contain over 300,000 lines of computer code, a number that keeps growing as Mitsuku "learns" new things.11 But she still needs more specific information about the nature of good and evil and how to recognize and evaluate situations as portraying good or evil.
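
Mitsuku's engine is built from hand-written pattern-and-response rules (her underlying format, AIML, is essentially a large catalog of such rules). The hypothetical Python sketch below, with an invented rule table, shows what "adding more information" amounts to in practice: a programmer writing new rules in by hand.

    # A hypothetical sketch of how chatbot "knowledge" grows: a programmer adds
    # pattern-response rules by hand. (Mitsuku's real rule base is vastly
    # larger; this toy table is invented for illustration.)

    RULES = [
        # More specific patterns come first so they match before general ones.
        ("why is murder wrong", "Because each person has intrinsic worth."),
        ("is murder wrong", "Yes it is wrong. Everything has the right to live."),
        ("what is murder", "Killing someone illegally."),
    ]

    def reply(user_input):
        text = user_input.lower().strip("?!. ")
        for pattern, response in RULES:
            if pattern in text:
                return response
        return "Maybe you should ask someone older."  # deflect when no rule fits

    print(reply("Why is murder wrong?"))    # answered by a hand-added rule
    print(reply("Is lying always wrong?"))  # no rule yet, so the bot deflects

The deflection in the last line mirrors Mitsuku's behavior in the transcript above: when no stored rule fits, she changes the subject.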

A chatbot is not a direct or perfect model of human intelligence, of course. Nevertheless, chatbots like Mitsuku demonstrate two facts:

1. If you want an intelligent entity to have the capacity for moral reasoning, then you have to insert the necessary laws, pattern recognition, and processing programs into that entity; they will not arise of themselves; and

2. The moral laws and reasoning incorporated into the entity will only ever be as good and wise as the programmer's own moral sensibilities.

Can Undirected Modifications Upgrade Mitsuku?

Biological operating instructions are logically analogous to computer operating instructions. Even advocates of evolution have expressly drawn this analogy.12 An energetically atheist psychologist, Susan Blackmore, describes "memes" as "instructions for carrying out behavior, stored in brains (or other objects) and passed on by imitation."13 Brain researcher Daniel J. Levitin has written that "computer models can help us grasp the inner workings of the brain," and that "the brain represents all . . . aspects of the world in terms of mental or neural codes."14

If we want to improve Mitsuku's ability to answer questions about morality, how should we modify her? Physically monkeying around with the computer hardware on which Mitsuku is running obviously will not work. But is it possible to get the improvement we want merely by imposing random (undirected) changes upon her software? Anyone who has ever written a working software program knows that this won't work either; you can't get desired modifications by randomly changing bits or characters in the code.15 An online simulation at informutation.com lets users see this for themselves.16
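
In the spirit of that simulation (this is my own hypothetical sketch, not the informutation.com code), one can randomly mutate the source text of a tiny working function and count how many mutants still behave correctly:

    import random

    # Randomly corrupt one character of a working program, then test the mutant.
    SOURCE = "def add(a, b):\n    return a + b\n"
    CHARSET = "abcdefghijklmnopqrstuvwxyz():=+-*/ \n"

    def mutate(src):
        """Replace one randomly chosen character with a random one."""
        i = random.randrange(len(src))
        return src[:i] + random.choice(CHARSET) + src[i + 1:]

    ok, trials = 0, 1000
    for _ in range(trials):
        namespace = {}
        try:
            exec(mutate(SOURCE), namespace)   # does it still compile and run?
            if namespace["add"](2, 3) == 5:   # does it still give the answer?
                ok += 1
        except Exception:
            pass

    print(ok, "of", trials, "mutants still behaved correctly")

The overwhelming majority of random edits either break the program outright or change its behavior; the rare survivors are mutations that happen to change nothing.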

Indeed, improving Mitsuku's moral reasoning powers would require programming. To make or modify software requires an intelligence, a programmer—yes, a designer. The outside intelligence would have to place the knowledge of morality into Mitsuku; she could not modify herself without receiving information from outside herself. Even if we viewed Mitsuku as nothing more than what Turing himself called an "imitation game," a programmer would still be required to modify her to the point where she could imitate moral reasoning. And Mitsuku's moral "views" would completely depend upon the programmer's own specific moral views.

Morality's Necessary External Source

Pure materialism says that everything in the human mind reflects only the operation of physical and chemical processes. But ideas are non-material.17 The mind, therefore, must be storing non-material information in a collection of instructions and data patterns, just as Mitsuku does. That means that for the human mind to possess knowledge and moral reasoning powers, there must be an outside source of that knowledge and power. The source would have to be intelligent enough to understand morality and skilled enough to instill the morality data and software into human minds. Dr. Zacharias is correct: morality, the software of good and evil, must come from a higher source than undirected physics, chemistry, or biological hardware.


Sidebar

Baby Steps

From Adding Machines to Sex Robots

by Bobby Neal Winters

It's the little things. They add up. I've been studying electronics as a hobby, learning what I can from YouTube videos, books, and mistakes. I've wanted to understand computers and the like. I've even built a couple. You open up a computer, see all the pieces, and then you ask, "Where is the magic? Which piece is the piece that turns electricity into thinking?"

After a few years pursuing this hobby, I can still say that I don't know much, but I do know there is no such magic piece. It's not there. You've got resistors, capacitors, diodes, transistors, inductors, and chips made from combinations of these on the micro scale, but there is not a bit of magic in any of them.

Working with these electronic parts is like plumbing, but instead of piping water around, you are piping electricity around.

And, given all the things a computer can do, this is hard to understand. We are now at a point where a computer can tell me what I want to buy; it can get me where I want to go; it can help me communicate with hundreds of people all over the world virtually instantaneously. And it got to this point from a machine that could only awkwardly do arithmetic less than a hundred years ago.

How did we get here?

Well, it happened little by little.

While I don't know much, I have found out that the folks who design computers know how to sweep complication under a rug. They draw their diagrams in such a way as to show you only what they want to talk about. Anyone who has tried to read one of these diagrams will agree that even that is hard enough, but the engineers communicate in such a way as to keep you focused on the specific thing they are trying to tell you.

It's sort of like saying that Romeo and Juliet is about a boy and a girl. They get out one piece of information first; they make sure you've got a grip on that; and they make sure you understand it before they show you the dead bodies.

So we started with adding machines and have ramped ourselves up, one step at a time, piling complication upon complication until we got where we are.

And where is that?

I was reading my phone last week and saw that in Japan they've developed a robot that looks just like a woman. (Well, at least close enough to evoke the word "creepy.") Beyond looking like a woman, it acts like a woman in certain ways. Anyway, while we need not come out and say it, those who understand how the world works know what this robot is going to be used for.

Japan has been out in front on robotic design. It has robotic factories. And on the human front, it has an inverted population pyramid. What I mean by that is that the country has more old people than young people because of a low birthrate. I may be going out on a limb here, but it doesn't seem too unreasonable to suggest that they have so many robots because they don't have enough people in the labor force.

And given the propensities of fallen human nature and the fact that dealing with a robotic human is easier in so many ways than dealing with a flesh-and-blood one, I don't think that this robot trend is going to help the inverted pyramid any.

Just a guess.

Big changes very often come very slowly and incrementally over time. While there were a few visionaries back in the day who saw what far-reaching changes could eventually happen with the computer, they were mostly those wild-eyed science fiction writers.

I can't say what's going to happen in another 25 or 50 years because I am not crazy enough to foresee such things—please, someone agree—but we all need to be careful because we got to where we are today by taking innumerable baby steps.

