Artificial Intelligence Versus Human Intelligence in the
Turing Test “Imitation Game”
This is a companion article to Follow Up to "Some Thoughts About How Machines Could Think", tying recent developments in artificial intelligence to a paper published in 1950[1] by the mathematics and logic genius Alan Turing. Turing is widely recognized as the father of theoretical computer science and artificial intelligence, and in that paper he predicted that by the year 2000 computers would be advanced enough to be intellectually indistinguishable from human beings in a game he called “the imitation game”. In the game, a computer hidden from view in one room and a human hidden from view in another both answer any series of questions put to them, and the computer succeeds if its answers fool the interrogators at least 30% of the time as to which set of answers it gave. The human is supposed to help the interrogator guess correctly, while the computer tries to fool the interrogator into thinking it is human. So, for example, if the computer is asked to multiply two large numbers together, it might say “I don’t have a calculator with me, and cannot do that in my head, but if you want to wait a while, I can do it with pencil and paper”, and then, if told to go ahead and do it that way, it might give a slightly wrong answer because, as it would say when told the answer was incorrect, it “made a careless mistake in doing the calculations, and forgot to ‘carry the 1’ in doing an addition part of the multiplication process.”
The interrogator is free to ask any questions s/he wants in order to detect which set of answers comes from the computer and which comes from the human. Turing placed a five-minute time limit on the interrogation, but that seems unnecessarily limiting and arbitrary. I would have the interrogation stop simply when the interrogator is ready to make the judgment, or gives up asking questions s/he thinks will help her/him make it. That, of course, risks making this an impractical parlor game, because it could take longer than playing Monopoly to a full conclusion, but I don’t want the interrogator to guess wrong simply because s/he ran out of time to ask questions that could and would have helped.
Alan Turing, 1950:
“I believe that in about fifty years' time it will be possible to programme computers [with sufficient storage capacity] to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.”
This seems to me to mean that Turing thought the computer would make sufficient “mistakes”, or be at sufficient disadvantage, to give up the starting advantage of winning half the time if the interrogator were merely to guess without asking any questions. It is like saying that a criminal suspect has a better chance of getting away with a crime by saying “no comment” when questioned by detectives, or by exercising his/her right to remain silent, than by pretending to be innocent and getting caught lying. Presumably Turing had in mind that it would be fairly easy to distinguish the computer from the human by asking about feelings and emotions in a way that a computer could not likely answer correctly or reasonably merely by having read about them (that is, had literature input) or having observed people talk about them rather than experiencing them for itself – kind of like Sheldon Cooper on the TV series The Big Bang Theory trying to understand and mimic appropriate feelings he never has by watching others who are said to have them and deducing when they occur and what behavior results from them. That always showed, in a comical way, the difficulty or futility inherent in understanding or successfully mimicking emotions just from attentive observation of the behavior and comments of those who have them.
But I am also willing to stick my neck out further than Turing did and say that it will someday be possible for the computer to fool the interrogator at least 50% of the time. I say “at least” because I think it possible that the computer may even be ‘smart’ enough to trick the interrogator into guessing wrong more often than s/he would have guessed wrong simply by chance without asking any questions. For example, suppose an interrogator were to ask “Tell us about your greatest regret in life” and the computer were to answer “If you mean ‘other than having agreed to participate in this test where I might be judged by experts to be less human than a computer and the best I can hope for is being guessed to be more likely human than a computer’, then I guess it is the time in second grade I helped humiliate someone who didn’t deserve it in order to win the respect and friendship of a bully that I mistakenly thought would bestow it and who I thought deserved to be admired. Not only was I mistaken about what the result would be, but it was wrong to do it to the innocent kid even if it had got me ‘in’ with the bully and his crowd. That error in judgment and morality still haunts me even though I apologized to the boy when we were in high school later and told him how remorseful I was about it, and he said he forgave me. But I still can’t forgive myself for having been so cruel and stupid, and I think it affected that boy more adversely than even he realizes and that my apology could not make up for.” If a computer could spontaneously give that sort of answer and more like it, I would think that would fool most interrogators into considering it human, particularly if the human said s/he didn’t have any major regrets or said it was having got a speeding ticket or worn the wrong clothes to a party. 
And, more importantly, I think a computer could be made in the not too distant future that could genuinely have such feelings about such an experience, and I will explain that in this paper. And don’t forget, there are human beings who would not have those feelings because they don’t have a sensitive conscience or any sense of decency and morality.[2]
It has taken a little longer than the 50 years Turing expected, but school and university instructors are now worried about students using artificial intelligence, such as ChatGPT, to write their papers and answer their exams so well that the work is undetectable as plagiarism or cheating because it is indistinguishable from work done by a human student. The Imitation Game, or “Turing Test” as it is often called, could perhaps be played now by a computer taking online courses as a student and earning a college degree – or, more precisely and more likely, by a student using AI to take his/her tests, write papers, and answer discussion questions, perhaps even to feed the student questions to ask in class and observations to make as the discussion proceeds, through a microphone and earpiece, though that latter part might not yet be feasible without detection. But a good artificial intelligence computer could probably anticipate where a discussion topic might go and help prepare its human to offer appropriate responses if certain things are mentioned during a discussion. A good human tutor can often correctly predict questions that will be on an exam and make sure beforehand that the student can answer them if and when they do appear. Politicians, for example, often prepare for a debate by holding practice debates with people who have studied the opponent, know what sorts of things the opponent will say, and can help the politician formulate good responses in advance – responses prepared ahead of time, knowing the points being responded to would likely be raised, yet delivered during the debate as if extemporaneous and spontaneous, from greater knowledge and understanding of the subject.
To see how this might go in my own courses at this point in time, I have submitted my class discussion questions to ChatGPT, and although the answers given are not all that good, they are better than many of my students’ initial answers, and better than some students’ final answers during the week after poorly addressing, or failing to address, follow-up questions and critical comments. So far, my problem with ChatGPT is not being fooled into thinking a computer’s relatively good answer (for a computer) is a student’s, but being fooled into wondering whether a student’s bad answer is a computer’s. Should the student be flagged for giving a poor answer AND for cheating, or just for giving a poor answer on his/her own? That being said, ChatGPT gives better answers than some of my students in that at least they are written in grammatically correct, complete sentences. In terms of content, they are at the level of poor students who are not thinking very well and who cannot come up with very good answers.[3] Since I allow, and encourage, my students to discuss the course questions with colleagues, family members, and friends, I would not consider it cheating for them to ask ChatGPT what it would say. The student still has to evaluate the responses s/he gets from anyone (or anything) else, put them in his or her own terms, and give supporting evidence for his or her conclusions. Preferably the student would also say that s/he received help from his/her mother, teen-age children, work colleagues, and/or ChatGPT, and say which responses s/he thought good and which poor, and why, rather than simply turning in the other source’s answer as their own and thus plagiarizing – taking credit for an idea, and/or the wording of it, that was not theirs.
But my discussion questions are unique and require evaluation, which most students cannot do well, and which computers cannot do well if they use only literature and research that gives the standard conflicting arguments about questions with superficially similar content. Student answers are often the academic version of the television program Family Feud, in which contestants have to guess the most popular answers people gave to questions posed in a survey. ChatGPT’s answers basically did that, but better, stating what the various common views are about the general issue, while not answering very well, or even addressing, the specific question asked about it. I will say more about this later, but poor students and ChatGPT do not seem able to analyze and evaluate conflicting ideas or positions very well at all; they simply report them and ‘think’ that is all that can be done or needs to be done.
I have not tried asking ChatGPT for answers to follow-up questions or for responses to critical comments in opposition to its ‘initial’ answers, but if it is even possible to do that, I suspect it cannot yet give very good follow-up answers. Many of my students cannot, though some do. The follow-up questions and comments are intended as guides to hone their thinking, but that requires that they have thinking to hone in the first place, or that they are exercising it. Most students tend to just restate, repeat, or double down on their initial answers, and I suspect that the current version of ChatGPT would do something similar – keying in on main words and giving the answers it has found that freely associate with them, rather than understanding the language that shows the problems with those answers and that approach.
I have had students express surprise part way through the course that they are supposed to be thinking, and doing it as well as they can. Some students cannot understand that concept and basically throw up their hands, saying they don’t know what I want if there is no answer already given somewhere that they are just supposed to find and repeat – as if there were some arbitrary answer I am seeking, rather than an answer that will stand scrutiny and be well-explained and logical, even if it is not an answer I have heard before, and even if it is one I disagree with but they have presented very well. And this despite the fact that I have told them that they will get a higher grade for a well-reasoned, well-supported answer I disagree with than for a poorly reasoned view whose conclusion I do agree with – as long as the reasoning they use has not already been shown in the course to be flawed.
For example, one bonus-points question I ask is about the ethics of a fairly common situation: traffic is moving at a snail’s pace because of a traffic light up ahead that is not green for very long, and you are in a long lane of cars backed up waiting to get through that light. In the meantime, a car that has approached from the right is trying to exit a parking lot (or a side street) into the traffic by turning right, and either you or someone else in your lane will have to let that car in, or it will have to keep waiting. Who, if anyone, should let the car in, and why?
There is always a small percentage of students who misunderstand this question to be about merging into the remaining lanes of a freeway when signs are posted that a lane is closed ahead. In that case, “zipper merging” is supposed to be the most efficient way to merge, but only if everyone knows that and uses it. But the above problem is not about merging into flowing traffic when a lane is being closed. There is some chance that, because the question I am asking is not nearly as widely discussed as the one about merging with a lane closure, ChatGPT would address it as the lane-closure problem instead of ‘realizing’ what is really being asked. Many students have a difficult time understanding questions precisely because they skim or otherwise loosely read the question or its explanation, notice some keywords, and then freely associate to standard issues and discussions about those keywords. Whether ChatGPT would do better or not, I don’t know. And it would not likely do better in aiding a student if the student gives it the question as the student misunderstands it.
But as to answering the actual question – about a car emerging from a parking lot or side street and trying to turn right into the traffic on the main road, traffic that is halted for two or more minutes at a time by the traffic light ahead – almost all my students say they would let the car in ahead of them, or that the car closest to the emerging one should let it in, because it would be the kind thing to do, instead of making that driver wait forever to get onto the road. Many add that they would do it because they would want such a kindness done to them if they were the driver trying to get into traffic, essentially invoking the Golden Rule. Some say it will not much slow down the people already waiting, and it will keep the emerging driver from having to wait a long time, so that driver gains a lot while the others give up only a few seconds – invoking a form of utilitarianism about the greatest good, or least harm, for the greatest number of people. Some say it would prevent a road-rage incident from the driver trying to force his/her car into traffic in front of someone, or just acting out of frustration at having to wait.
But none of those answers are right. Letting the car in essentially allows someone to butt into a line in which other people have been waiting, so it is not kind to the people behind you – it is unkind to them. It is not utilitarian in nature, because it does not change the overall time all the cars collectively take to get through the light; it is an issue of the fair distribution of that time – of which cars bear the longer portions of the overall wait and which the shorter. It does not prevent road rage, since the rage may simply be redirected at the drivers letting in cars whose turn it is not. And it does not follow the Golden Rule if you consider the feelings of the people behind the driver who holds them up to let the newcomer in, rather than just the desires of the driver trying to get in. Since you would not want to be behind someone who lets a car butt into line, applying the Golden Rule to that driver rather than to the driver trying to get into the line, you should not let the car into line. You can’t just cherry-pick whom you would be willing to trade places with in applying the Golden Rule. King Solomon could not use the Golden Rule to decide which woman to give the baby to, because both of them wanted it, and so he would first have to choose which of them he would want to be in order to be given the baby by someone else. And a judge, for example, cannot rightfully declare everyone innocent just because s/he would want to be declared innocent if ever put on trial.
While it would be okay for you to let someone in front of you if you are the only car or person left in a line, it is not okay for you to hold up the people behind you and deny them their turn by giving it to someone else. Moreover, if the students’ answer is meant as the general principle that everyone who can should let the car into traffic, then whenever other cars are also trying to get in, you are essentially turning the access point into a four-way stop, which could conceivably hold up cars at the back of the lane interminably, so that they never make much, if any, progress. [I have seen combinations of traffic-light intersections where traffic in line at the second light can never go unless it forces its way through that light when it turns red. That is because the second light turns green only after the light ahead has turned red, and by then there is no space for the cars at the second light to move into: although the cars that had been ahead of them went through the first light while it was green, they were replaced by cars that turned right onto the street at the second intersection while its light was red. The same sort of thing would happen, though less drastically, if each of, say, ten cars in your lane let in a car in the given discussion problem, so that by the time you got to the traffic light, you had to wait for twenty cars to go through it rather than ten.]
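The arithmetic in that bracketed example can be sketched in a few lines of Python. This is a hypothetical illustration, not a traffic model; the function names and the figure of five cars per green phase are my own assumptions:

```python
import math

def cars_to_wait_for(cars_ahead: int, admitted_per_car: int) -> int:
    """Cars that must clear the light before you do, if every car
    ahead of you lets `admitted_per_car` newcomers merge in."""
    return cars_ahead * (1 + admitted_per_car)

def light_cycles_needed(total_cars: int, cars_per_green: int) -> int:
    """Full green phases needed to clear `total_cars`, assuming a
    fixed number of cars get through on each green."""
    return math.ceil(total_cars / cars_per_green)

# Ten cars ahead and no one lets anyone in: 10 cars, 2 cycles at
# 5 cars per green. If each car admits one newcomer: 20 cars, 4 cycles.
print(cars_to_wait_for(10, 0), light_cycles_needed(cars_to_wait_for(10, 0), 5))  # 10 2
print(cars_to_wait_for(10, 1), light_cycles_needed(cars_to_wait_for(10, 1), 5))  # 20 4
```

The total work the light does is unchanged; only the distribution of the wait shifts, which is exactly why the utilitarian defense of letting cars in fails.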
Now, if this were a situation in which cars were not involved, but someone were trying to cut into a long, slow-moving line at a ticket counter, or at the restroom of a concert during intermission, almost no one would answer that the person wanting to cut in should be allowed to. They would be told to go to the back of the line. In the car situation, the car cannot physically go to the back of the line, but it can wait until the car that was at the back of the line when it first reached the street has caught up to it and passed it. The fairness principle of any line is ‘first come, first served’, but “lines” do not need to be physical in order to be lines that employ that principle. In a store where many customers need to be served and have to wait a turn, they can be given a number when they arrive and then served when their number comes up. They don’t have to stand in a straight line, and there may not even be room for one. And if there is another way, such as an electronic way, to notify them when their turn is coming up, they don’t even have to wait in the same room or space as the other people waiting for their turn. When there is no such system in place, decent people often keep track for themselves of whom each newcomer is immediately after, and thus whom to follow – whoever was the last person to enter the store or line before them. In those cases, everyone waits their turn and helps others keep track of when their turn is. Anyone already waiting when you arrive at the scene should get to go in their turn, which is before yours. (The partial exception is for reservations made before the line physically forms. But reservations are themselves made in first come, first served order, and are simply a way of getting into a line before physically arriving where it is.
Reservations are themselves a line that constitutes the initial membership of the actual physical line, even when a reservation is made for a later time. If dinner service, say, starts at 5 pm, and you make the first reservation but book it for 8 pm, you are essentially the first person in line, but are permitting everyone able to be served before 8 pm to go before you – just as you can rightfully allow any individual immediately behind you in a physical line to go before you or swap places with you, or allow anyone anywhere behind you in the line to swap places with you, or, say, a party of four to swap places with your party of four – because then you are not making anyone behind you wait longer than they would have had to wait for you anyway.)
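The take-a-number system described above is just a first-in, first-out queue: arrival order alone decides service order, with no physical line required. A minimal sketch in Python (the class and customer names are invented for illustration):

```python
from collections import deque

class TicketQueue:
    """Take-a-number system: whoever arrives first is served first,
    regardless of where anyone stands in the room."""

    def __init__(self) -> None:
        self._next_ticket = 1
        self._waiting: deque = deque()

    def arrive(self, customer: str) -> int:
        """Hand the newcomer the next ticket number and remember them."""
        ticket = self._next_ticket
        self._next_ticket += 1
        self._waiting.append((ticket, customer))
        return ticket

    def serve_next(self) -> str:
        """Serve the lowest outstanding ticket: first come, first served."""
        ticket, customer = self._waiting.popleft()
        return customer

q = TicketQueue()
for name in ["Ana", "Ben", "Cara"]:
    q.arrive(name)
print(q.serve_next())  # Ana arrived first, so she is served first
```

Letting a newcomer in ahead of others would amount to inserting them at the front of `_waiting` instead of the back, which is precisely what the fairness principle forbids.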
But what I find interesting about my students is that almost none of them can reason any of this out for themselves individually or collectively. Either none of them have ever been in a store that assigned an order number or in a place where people kept track of whose turn it was when the order could not be in a straight line, or they don’t remember it when facing this question in class. And moreover, none of them seem to have any experience, memory, or concept of being behind some car that lets a car in that has just pulled up to it, even though everyone behind it has been waiting for a long time. They seem unable to remember or imagine being justifiably really upset at a driver in front of them who would do that. And it is not necessarily just a few seconds of your time they are wasting, because it could mean you are having to wait another whole light cycle of the traffic light ahead. And if enough cars in front of you let newcomers in front of them, you may have to wait several light cycles more than you would have if you were able to take your deserved turn getting through the traffic light. And students cannot reason about any of this even when given the hint always to look for ethical analogies, and even when specifically told to imagine an analogy to this situation which does not involve cars.
I don’t know whether that is a memory problem, an imagination problem, a “lazy thinking” kind of problem, a problem of apathy about whether they solve it or even think about it at all, or something else, but I believe that computers would do better than students if it were a matter of remembering the anger in past experiences of other drivers letting people into line ahead of them and you, and if computers were able to ‘imagine’ or construct analogies to clear-cut cases, such as waiting in a line to get tickets or use a bathroom, etc. Computers seem to be great at sorting data into certain or similar patterns, much better than humans, or at least much better than I can. While I was working on this paper, for example, there was a point at which I couldn’t remember whether I had yet said some particular thing I wanted to say about the Turing test, and so I did a quick search for “turing” to see. The computer immediately returned five instances of “turing”, four of which I had not imagined, remembered, or considered: manufacturing, gesturing, torturing, and maturing. That prompted me to do a Google search for “verbs ending in ture”, which turned up 45 verbs instantly (https://wordtoolbox.com/verbs-ending-with/ture), most or all of which would have a gerund or present-participle form ending in ‘turing’. Computers don’t have a memory problem or a lazy-thinking problem. Nor do they have an imagination problem for finding important alternatives or combinations of things, such as chess moves or unscrambling words from letters arranged in a random order or single pattern, because they can quickly sort through thousands or millions of possibilities to assemble the ones that fit or work either best or at all.[4]
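The over-matching that search produced is easy to reproduce: a plain substring search cannot tell “Turing” from “maturing”, whereas a word-boundary pattern can. A short sketch using Python’s `re` module (the sample sentence is my own, not from this paper):

```python
import re

text = ("Manufacturing aside, Turing predicted that maturing AI, "
        "gesturing toward human conversation, would pass his test.")

# Naive substring search matches 'turing' buried inside unrelated words:
substring_hits = [m.start() for m in re.finditer(r"turing", text, re.IGNORECASE)]

# A word-boundary pattern (\b) matches only the standalone word 'Turing':
word_hits = re.findall(r"\bturing\b", text, re.IGNORECASE)

print(len(substring_hits))  # 4 (Manufac·turing, Turing, ma·turing, ges·turing)
print(word_hits)            # ['Turing']
```

The substring search is the dumb-but-exhaustive sorting the computer excels at; deciding which of those hits is the one you actually meant is the part that still requires judgment.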
The question is whether computers will be able to understand the concept of “thinking” well enough to arrive at good answers to ethics or philosophy questions like the “letting the car in” one, and to do it better than we do. Like Turing, I believe that in the not too distant future they will be able to answer such questions, and address any follow-up questions or comments, very well. And at some point, most jobs that require thinking should be able to be done by computers using artificial intelligence, and it won’t really matter whether there is human intelligence or knowledge behind them anyway. If any humans still need jobs, they can become legislators or administrators, or they can use AI to advise them – using it as a tool in the way people use tools in general. Farmers don’t mind using tractors, and shippers and travelers don’t mind using trucks, trains, planes, etc. Tools can assist us; they don’t have to compete with us. When used correctly, they can make our work more efficient and often make us more productive, or give us more leisure or ‘discretionary’ time to do what we wish.
When computers were first introduced into the workplace, many people thought that would make their own work easier, but their bosses decided the computers should instead allow, and require, them to do more work – so that if computers, for example, made your work take one-eighth the time it did before, employers thought you should now do eight times as much work. But it would not have to be that way. Computers could have allowed us more leisure, if they helped us do all that needed to be done in a shorter time and if there were no need to produce more products and services just to make more money. FedEx had a commercial that was memorable because it was very clever and funny, but ethically perverse if you went beyond its surface humor and point. On the surface, the guy in it will be fired, reprimanded, or otherwise punished, even though he is the most productive guy in the shipping department and is willing to help others be as productive as he is. But because he gets all his work done easily and efficiently, and uses the time that frees up for his own benefit, he is ‘dead’. It should not be that way – in the sense that if everyone can do all the work that is necessary in 30 hours per week rather than 40, then 30 hours of work should be sufficient, just as 40 hours per week, with paid vacation and holidays off, is considered sufficient now, but would not have been 150 years ago.
And we use machines to assist us when they are less expensive than, and at least as good as, human labor. Machines that are economical to operate and maintain (or repair or replace when necessary) do not require salaries, nor even the limited upkeep that slave labor did; they are not necessarily in competition with laborers but are assistants to them. But even if machines/computers/android robots did all the work wanted and needed on earth, then as long as the products and services of all that work were distributed fairly among all who need them, it would free people to do whatever legal and moral things they wanted to. If everyone benefited from the work done by computers and machinery with artificial intelligence, that should make life better for humans, not worse. It should allow more, or even total, leisure for everyone.
But at this point, I am not yet seeing evidence that computers can evaluate evidence rather than simply stating it, comparing and contrasting it, or doing what is programmed into them for various conditions. They are like reporters today who report both or all claims about any issue as if they all had merit, no matter how ridiculous any of them are. There are some things computers can do better than humans because they can do them much, much faster, of course – for example, DNA, fingerprint, or facial recognition. And apparently computers help make fighter jets fly better than humans alone can fly them, because computers can recognize and adjust to different conditions faster during supersonic flight. And my cameras can focus my lens and adjust apertures for exposure faster and better than I can. But my camera cannot decide whether a particular exposure will be better for a given subject, or whether a picture will be better with one set of tones than another. Will even light make the photo better, or will Caravaggio-like strong light from one direction, with deep shadows, be better? When I used to turn in color film to professional labs to print portraits or wedding photos I shot, I always added “Please print with rich, vivid, non-yellow tones”, and I had previously shown them tones I liked and tones I did not like in different prints from the same negatives, so that they could see what I meant by that request. Otherwise, typical lab prints were far too yellowish and washed-out looking for my taste.
But I believe that computers can be made that would have feelings, preferences, desires, emotions, or emotional and preferential reactions to the perceptions and experiences they have, at least to the same extent we do. I say “to the same extent we do” because people do not necessarily share the same preferences or predilections; so, for example, one android’s preference for more contrast or brightness in a photo may not be some particular human’s or another android’s preference, just as my favorite photos, or differently toned prints of a given photo, may not be someone else’s. I want to describe, further than I did in previous essays, what it would be like for computers to have feelings[5], emotions, desires, preferences, ‘discriminating tastes’, etc., and how that could occur or be designed into them. I want to describe how a computer could be disappointed by something, or be perplexed, puzzled, challenged, or excited and stimulated by a problem or dilemma, or be bored by one it finds too easy or uninteresting for some other reason. Can computers experience “the thrill of victory and the agony of defeat”, as the introduction to Wide World of Sports used to phrase it, or, as might also be said, perhaps more ‘poetically’ or aesthetically pleasingly, “the ecstasy of victory and the agony of defeat”? And since my preference for “the ecstasy of victory and the agony of defeat” as “sounding better” than “the thrill of victory and the agony of defeat” may not be shared by other people, an android’s favorite phrasing may not sound as good, pleasing, poetic, pretty, or interesting as a different way of saying it to another android or to a human being.
The fact that androids might have feelings or preferences and tastes does not mean they will have the same feelings or tastes as other androids or as all humans – just that they will have the same kinds of feelings and tastes. Humans do not all share the same tastes, preferences, sense of humor, feelings for art or literature, etc. Comedians often have to try out material on guinea-pig audiences to see what works and what doesn’t. Seth Meyers, on his TV show Late Night with Seth Meyers, periodically presents jokes, allegedly submitted by his writers, that are not funny even though they meet some of the criteria for humor – criteria which Seth explains in regard to some of the jokes, and which might be programmed into a computer, or which an android might recognize for itself as being correlated with humor and fitting an algorithm for generating it, without the jokes actually being funny. Here are some examples from two of those segments.
An old photographer who once felt led to comment on my black-and-white photography said that all good black-and-white photographs have a full range of gray tones, from stark black to stark white, and that not many of my photographs had that full range. What he considered to be a flaw in my photos, however, I considered to be part of the essence of what made them good, since I thought the subject matter in them lent itself to a combination of darker shadows and tones, with only a few, very selective highlights at most – and certainly not with a full range of tones that hid or eliminated the important contrasts. I doubt that an android would have criticized my work any more than he did, or been any more mistaken about what constitutes a good black-and-white photograph than he was. I am not saying that no photographs containing a full range of tones are good, but simply that some which do not are, or can be, good. And I think an android could and would know that from knowing many of the world’s artworks and the commentaries about them.
I want to discuss in this paper how androids or computers can have psychological, emotional, ethical, and aesthetic preferences, along with another important aspect of human intelligence, the ability to cobble together all kinds of different bits of information and logical deductions from it into a meaningful, interesting, useful, inventive, clever and/or witty or just funny insight – particularly involving both ethics and human emotions – and the ability to derive significant new ideas, information, and concepts from facts already known but not previously realized to imply them. On the TV program House, one of Dr. Greg House’s most important abilities was to recognize relevant evidence significant to diagnosing or treating a patient where other people only saw facts without seeing their relevance or significance. The same sort of skill is evinced in good detective stories where the reader or viewer is given all the facts necessary to know who the perpetrator of the crime is and prove it, but they are presented in an innocuous way that disguises their relationship and relevance to each other and to the crime or perpetrator of it. Or consider an insightful scientist, like Einstein, Newton, or Galileo, who realizes or sees that some phenomena have a kind of logical significance to them that previously has gone unnoticed. In many cases evidence is simply embedded in so much irrelevant information that it is difficult to notice as being evidence. Facts which are clues are not labeled as clues in real life and do not distinguish themselves from facts which are not relevant to a particular problem, idea, or deductive conclusion.
Humor is just one kind of experience or phenomenon that involves relevance to human beings; logical relevance and meaningfulness are others. What is relevant in any of these areas usually has to be learned, often by trial and error and by seeing the reactions you get from others, as in trying out or sharing intended jokes, ideas to solve a problem, or different ways to explain something. Not all attempted explanations work at all, let alone for everyone; and some explanations will work for some people but not for others who cannot follow or understand them. And like many concepts, even the concept of “relevance” in different areas, or of different kinds, is often vague and not well-defined. Any attempt to give specific criteria for what makes something relevant in an area, or for what will make it “work”, will often fail because the criteria may be either unnecessary or insufficient. For example, there may be many different ways that something will strike someone or a number of people as funny, and the criteria for some of them will not fit all of them.
I would imagine, for example, that what makes clever, insightful wit involving language funny will not be the same thing that makes slapstick comedy (or “physical comedy”) funny, or that what makes some slapstick funny to someone will necessarily make other slapstick funny to him or her. I could be wrong, and there might be criteria common to all forms of humor, but the failed jokes in the Seth Meyers segments show that merely clever associations are not necessarily funny, even though many clever associations are. Conversely, I do not tend to like slapstick humor and don’t find it funny, but John Ritter one time did a scene on Three’s Company, where he had difficulty setting up a hammock and getting it to work, that had me laughing out loud alone watching TV. And Dick Van Dyke had what seemed to me at the time (when I was in high school) to be a very funny scene (excerpts of which are here in a shortened, edited version) trying to fix a jeep that wouldn’t start when he was late to his own wedding, although I remember it being funnier when I saw it for the first time than it is watching it now, knowing what is about to happen, and it is slightly different from how I remember it. Part of how I (mis)remember it is that he dropped the keys into the radiator because he burned himself on the radiator or got shocked touching something while standing over the open radiator with his keys in his mouth. That seemed a more natural, believable, surprising, and funny mistake than the way it is in the version of the scene I found, where he opens his mouth to say “hey!” for what seems to be no good reason other than to drop the keys. Presumably this version is what I actually saw long ago but just remember differently, though there is always the chance that what I saw was a different version, which I am remembering accurately, or possibly I have it confused with someone else’s similar skit.
Similarly, it is often difficult to express what makes any given detail relevant to telling a story, or relevant, say, to the logic of a prosecution or to a job interview. There are job interviews where what is asked of the applicant hardly seems relevant to whether they would be qualified for the job, such as “If you were a car, what kind of car would you be?” or “If you were a dog, what kind of dog would you be?” How much one knows about cars or dogs, and how quickly one might be able to come up with clever or meaningful associations between one's own characteristics and the best characteristics of cars or dogs, seems unlikely to be relevant to most jobs. And everyone knows someone whose story-telling includes far too many unnecessary details and someone else who routinely leaves out important ones. Knowing what is relevant to logic, to humor, and to other sorts of emotions can take trial and error, significant training and practice, a real gift, and/or the luck of having a helpful idea at just the right time.
And I believe that having these sorts of skills and abilities would pretty much secure a computer’s or android’s passing the Turing Test and meeting the requirement of “The Imitation Game”: being able to fool human judges as to which contestant is the human and which is the computer/android, as well as any two humans can fool judges as to which of them is the man and which the woman, which is the old person and which the young, which is a company’s laborer and which its CEO, or which is an athlete and which an art collector – or as well as any of them could be told from the computer/android. The computer should be able to address interrogators’ questions about its desires, preferences, and any human emotion as well as any human can, I believe, and therefore not give away its identity to interrogators who ask such questions in order to make the distinction and guess correctly which responses come from the human and which come from the computer/android.
Let me begin by examining the following lengthy preface I am putting in blue font about how human researchers have considered the concept and study of “delayed gratification”. I am doing this as a way of approaching any supposed differences between how humans and how potential androids/computers might do the research or be the subjects in it.
The ‘marshmallow test’ is an experimental design that measures a child’s ability to delay gratification. The child is given the option of either eating a treat within a short but indeterminate amount of time or waiting out that time period for a better treat at its end. “The minutes or seconds a child waits measures their ability to delay gratification.” (from “Marshmallow Test Experiment And Delayed Gratification”).
The way I first heard this experiment done was that a child would be left alone with a treat s/he liked (e.g., a marshmallow or cookie, etc.) after being told that they could eat the treat if they wanted to before the adult returned or that if they waited, they would be given another one, so that they could then eat two of them instead of just the one. If they ate the one before the adult returned, they would not be given a second one.
The article quoted above was mostly about psychological elements used in the experiment that affected how long a child might delay gratification, apparently assuming that delayed gratification was a good thing or the preferable outcome to pursue. That is generally the assumption in any discussion of delayed gratification – that delaying gratification to achieve an overall potential greater good is the right thing to do, rather than pursuing short term gains at the expense of future greater good. Here, I want to discuss what might make that assumption reasonable or not. In a sense, this will be less about the psychology of delaying gratification than about the ethics of it – with the proviso, however, that the particular gratification, whether delayed or not, is itself ethically okay. So, for example, the treat above would have to be a healthful one, not some sugary marshmallow. I am not talking about something like delaying committing a smaller crime in order to be able later to commit a bigger one instead.
Tom and Ray Magliozzi, known as “Click and Clack, the Tappet Brothers” on their weekly NPR radio program Car Talk, where they answered call-in questions about automobile maintenance and waxed humorously philosophical about relationships and life in general while doing so, espoused the view that exercise only prolongs your life by the amount of time you spend exercising, essentially time-shifting your younger years, when you had the ability to do things much more enjoyable than exercising, to later years when you don’t. They jokingly championed doing the fun things when young, instead of exercising in order to live longer without having any fun. It is essentially the application to exercise of the sarcastic point that people who eat only healthful food will pointlessly live much longer without ever having a delicious meal.
Jacqueline Kennedy is reported to have said, after finding out she had the fatal disease non-Hodgkin's lymphoma, “If I had known I was going to die from this, I would not have done all those sit-ups.”
The point Mrs. Kennedy and the Magliozzis were expressing is that delaying gratification is only right to do if it is the best use of your time – the use that has the right consequences, which, all else being ethically equal, would be the consequences most desirable to the person when all is said and done or ‘in the long run’. The idea behind the benefit of delayed gratification is that it is right and better to defer gratification in order to have greater gratification in the long term than to have good immediate or short term consequences which prevent the long term ones. For example, it is supposedly better to save and invest money in order to accrue more to spend later than to spend it now on things you do not need. Of course, you need to spend money to meet your actual needs or there won’t be a ‘you’ to spend the money later because you will have died.
But if the Magliozzis and Mrs. Kennedy are correct, it is not always true that delaying gratification is the best use of your time. In Mrs. Kennedy’s case, the delay did not add benefits to her enjoying life longer, though possibly (but not likely) she enjoyed life more while she was still able to because she was fitter from doing the sit-ups. There are other real life choices that turn on this idea of the relative benefits of delaying gratification or not. For example, whether to start receiving Social Security benefits at a younger or older age, where the longer you wait to begin receiving them, the higher each monthly payment will be when you do receive them.
But the monthly payment amounts are not the only important considerations, since 1) if you don’t live long enough to reach the older age to start collecting, you will receive none of the benefits and will have paid Social Security taxes for all the years you worked for nothing for yourself except to fund other people’s benefits, and 2) even if you live long enough to begin collecting the higher benefits, you might still not live long enough to make up the total you would have collected by that time if you had started earlier. The important amount is the overall amount you collect before you die, not just the amount per month. Suppose, for example, that you would get $1200 per month if you begin collecting Social Security benefits at age 65, and $1400 per month if you wait until age 67 – a $200/month difference. If you start collecting at age 65, you would collect $28,800 by the time you turn 67. (That is $1200/month x 12 months/year x 2 years.) That means you have to live long enough past age 67 to make up that $28,800 from the extra $200 per month, since you will not be getting any Social Security benefits before you turn 67. It will take 12 years (starting at age 67) to make up the $28,800 at $200 extra per month, meaning that you would have to live past age 79 in order to end up with more money overall by deferring Social Security than by collecting it early.
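The break-even arithmetic above can be sketched in a few lines of code. This is only a toy calculation using the hypothetical figures from the example; the function name and structure are my own, not anything from the Social Security Administration.

```python
def break_even_age(early_age, early_monthly, late_age, late_monthly):
    """Age past which deferring benefits yields more total money.

    The benefits foregone while waiting must be recouped from the
    larger monthly check received after late_age.
    """
    months_waited = (late_age - early_age) * 12
    foregone = early_monthly * months_waited           # money not collected while waiting
    extra_per_month = late_monthly - early_monthly     # the raise earned by waiting
    months_to_recoup = foregone / extra_per_month
    return late_age + months_to_recoup / 12

# Figures from the example: $1200/month at 65 vs. $1400/month at 67.
print(break_even_age(65, 1200, 67, 1400))  # → 79.0
```

As the essay notes, this is only the monetary side of the decision; the later considerations (investment growth, non-monetary value) are not captured by it.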
A third consideration is how much you might be able to grow the $28,800 yourself with a wise investment of it (or how much you might lose with a failed investment of it).
But there is a fourth consideration also: the relative value to you, in non-monetary terms, of what the extra monthly benefits would permit you to purchase. How little or how badly do you need the extra $1200 per month between ages 65 and 67, versus how badly or how little you need the extra $200 per month if you wait till age 67? Is the extra $200 per month a seriously diminished return, or a significant enhancement necessary at 67 but not at 65? If you have a long life expectancy and don’t need the $1200 per month, it would probably be better to wait. But if you have important needs or uses for the monthly $1200, and not as much need for the extra $200, then you should begin taking the money early. The monetary value of a sum is not necessarily proportional to the non-monetary value of what it can buy: $1200 that helps you stay alive is worth far more than $1200 you don’t really need or have anything important to spend on.
On the Magliozzi view, exercise does benefit you in the sense of making you live longer, but it is a burden not worth the benefit, because although you live longer, you end up doing far less good overall than you could have by spending the exercise time on something more valuable while you were still able.
Of course, if you really enjoy exercising (or doing something, such as playing a sport, that gives you the same amount of exercise), then you get two benefits with no burdens – the benefit of the enjoyment of exercising and the health or prolonged life benefits of the exercise, assuming that living longer will bring you more joy rather than more sorrow, such as infirmity, financial loss, or having to endure tragedies to loved ones, etc.
And even if the method you use to try to prolong your life or productivity doesn’t work, but doesn’t shorten it any, if it is nevertheless something you really enjoyed doing, it would be worthwhile and right to have done. For example, if one enjoys chocolate or sex and thinks that they will prolong one’s life, but finds out later they didn’t (but also did not shorten it), one wouldn’t say “If I had known I was going to die from non-Hodgkin's lymphoma, I would have never had all that chocolate or sex.” Nor would one feel that having sex when one was younger was wrong if one no longer wanted to have sex in one’s older age, even if that older age was made possible only because one had lots of sex when younger.
In short, one wouldn’t and shouldn’t regret giving up later benefits if those benefits were less valuable than what would have had to be sacrificed to attain them. And conversely, it would be right to delay gratification if the benefits of doing so are (significantly) more valuable than the benefits of what is sacrificed to attain them.
I include “significantly more” because it is not clear that it is worth waiting to gain only a bit more benefit if doing so requires a difficult or burdensome willful sacrifice early, compared to a mere minor lament later that one didn’t wait. One might suffer more forbearing what one could have had than one would merely regretting later not having what one then cannot. In other words, an early indulgence that costs you a later benefit might still be worthwhile to you if foregoing the indulgence would be more upsetting or more difficult to bear than the consequence of losing its benefit later. If you are really craving the cookie in front of you now, eating it might be worth more to you than avoiding the later disappointment of not having two cookies.
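The decision rule of the last two paragraphs can be put as a simple comparison. This is a toy sketch with invented “utility” numbers; nothing here says how such subjective values would actually be measured or assigned.

```python
def should_delay(value_now, value_later, cost_of_forbearance):
    """Delay gratification only if the later benefit, net of the
    burden of waiting, beats the immediate benefit.

    All three arguments are subjective values the agent assigns;
    the rule only formalizes the comparison, not the measurement.
    """
    return value_later - cost_of_forbearance > value_now

# Two cookies later usually beat one cookie now...
print(should_delay(value_now=1, value_later=2, cost_of_forbearance=0.5))  # True
# ...but not when the craving makes the wait very hard to bear.
print(should_delay(value_now=1, value_later=2, cost_of_forbearance=1.5))  # False
```

The point of the sketch is only that the burden of waiting belongs in the calculation alongside the sizes of the two rewards.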
The point of all the above is that AI robots (which I will call ‘androids’ or androids/computers) could make all these kinds of decisions just as much as humans can, and probably better in terms of the probability of consequences. Androids/computers would have all these examples and more at their disposal, better than people, such as my students, do – in the same way the computer found all the instances of “turing” I would never have even noticed, or in the way that Google can find all the words with specified random letters in them far, far better than I can.
And androids/computers could have desires in the same way humans do. Human wants, desires, and likes can be roughly recognized and ordered by A) how much energy one is willing to put into achieving and/or maintaining a certain goal or electronic or digital state and/or B) by prioritizing what one is willing to give up or not give up when there are conflicts that cannot be resolved in a way that would allow both or all of one’s desires to be achieved or fulfilled. Computers could make that sort of determination too. Plus, computers could figure out ways to resolve such conflicts whenever possible, in order to get the most of both or all one wants, just as well as humans can, by devising options that require insight, ingenuity, knowledge, and reasoning.
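Criteria A and B above could be sketched very schematically like this. All the names, goals, and numbers here are invented for illustration; the only point is that an ordering of desires can be read off from willingness to expend energy and from what gets kept in a conflict.

```python
from dataclasses import dataclass

@dataclass
class Desire:
    goal: str
    energy_budget: float  # effort the agent is willing to spend on this goal

def preference_order(desires):
    """Criterion A: rank desires by willingness to expend energy."""
    return sorted(desires, key=lambda d: d.energy_budget, reverse=True)

def resolve_conflict(a, b):
    """Criterion B: when two desires cannot both be satisfied,
    keep the one the agent would give up more for."""
    return a if a.energy_budget >= b.energy_budget else b

wants = [Desire("finish the novel", 3.0),
         Desire("eat chocolate", 1.0),
         Desire("keep the room at 72F", 2.0)]
print([d.goal for d in preference_order(wants)])
```

An observer (or the system itself) reading off these orderings would be doing roughly what we do when we say someone “prefers” one thing to another.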
It is not that we want something ‘because’ we like it or ‘because’ it feels good, but that liking something, or its feeling good to us, is what we consider those neurological states we try to achieve or maintain, and disliking things is what we consider those neurological states we try to avoid, alleviate, or end. I will amend that statement shortly to distinguish between pursuit of a natural psychological desire and pursuit of a moral obligation that might be considered burdensome or unwanted but necessary. But the point I am trying to make is that we have certain neurological states we try to achieve and/or maintain and certain ones we try to avoid, minimize, or end entirely, and we consider and call the things or activities that bring about or maintain the former states the things we like or want, and we call the things or activities that bring about or maintain the latter states the things we do not like or want. Some feelings conflict, as with the pleasure of scratching an itch from an insect bite or from poison ivy even though we do not want to have itchy bites or poison ivy in the first place. All these neurological states manifest themselves to us as perceptions and/or psychological feelings we pursue or avoid, like or dislike, or think right or wrong, etc.
And we also distinguish between different kinds of pleasures and between different kinds of irritations or pain; we do not just perceive or feel “pleasure in general” or “pain in general”. The pleasure from eating a craved food is different from the pleasure of a good back rub or the pleasure of basking peacefully outside in a perfect evening temperature, especially after a hot and arduous day, and the pleasure of music is different from the pleasure of a joke or of your favorite team winning an important game. A burning pain is generally different from that of a cut or of being stuck with a sharp pointed object, although an intense burn may feel like an intense sting or like being stuck with a very sharp object. Our nerves are generally (though not always) varied enough in what they sense and how they work that we can distinguish neurologically among different kinds of pleasurable states and also among different kinds of unpleasant or outright painful ones. Androids could have the same sort of different ‘feelings’ or recognizably varied electronic states. Those states likely would not ‘feel’ to them the same as ours feel to us, because androids are not made of the same materials we are, but they would be perceptions or feelings just the same and would have the same sorts of functions as ours do.
In other words, this does not mean that what an android experiences in a way similar to us feels to it exactly as it would feel to us. It just means it has a feeling it can recognize and name, say, an itch on its exterior surface, and it calls it an itch in the same way a child does after being told to quit scratching its arm, when the child replies, “but scratching makes it feel better, and I want to scratch it,” and is then told, “that is because it ‘itches’, but that doesn’t mean you should scratch it. You might infect it or make it worse, possibly causing a scar, or, if it is poison ivy, you might spread it.” So if we notice an android scratching its exterior surface and saying that it feels good to do it, and that if it doesn’t do it, something bothers it that makes it want to scratch, we could say, “Oh, you are scratching because it itches.” And after that, the android would refer to that feeling anywhere on its exterior as ‘itching’ or its ‘having an itch’, just as the child would from then on too. And, again, remember that I only assume your itch (you being a human being) feels somewhat like my itch because we are made out of the same materials – skin, nerves, other biological cells, etc. This sort of process of identifying and naming feelings is more difficult with feelings that manifest themselves to us only internally, such as a “scratchy” throat or a “raspy” throat or a “ticklish” throat, because there is no behavior or object to see or point to which lets us think we are experiencing the same feelings or perceiving the same things.
A simple electronic or digital model of this would be a heating and cooling system in a house where a computer monitored the system and said the house really likes 72 degrees because it keeps trying to maintain that temperature or close to it – that temperature being what the thermostat is set to. But it would be true of whatever the thermostat is set to, no matter how it got set that way – ‘born’ set that way at the factory through some accidental or random process, or through some sort of evolutionary process that kept thermostats set much higher or lower from being purchased and used or that kept them from working correctly in an HVAC system, or because once installed in the home, it was the electronic or digital setting that took the least energy/effort to maintain. None of that would be any different from liking chocolate, opera, French cuisine, long walks on the beach, and sex by a fireplace or candlelight – because those are the things someone might pursue, trying to achieve or maintain the states they induce. Someone else, of course, might pursue achieving the state that strawberry induces rather than chocolate, and so we would say, and they would feel, that they prefer strawberry. Similarly with regard to Italian or Chinese food (in general or at a given time), avoiding sand, preferring country music to opera, and wanting sex in the dark (or in much fuller light) or not wanting sex at all, either in general or at the time or with a particular person. The point is simply that we attribute “liking”, “desiring”, “wanting”, “preferring”, “enjoying”, etc. to those things that cause the neurological states our bodies try to achieve or maintain; and androids could have the same sorts of preferences or aversions, desires, dislikes, etc., as determined by what electronic or digital states their operating systems and programming seek to achieve or to avoid.
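The thermostat analogy can be made concrete with a minimal sketch. The function names and the one-degree tolerance are my own assumptions; the point is only that an observer could infer what the system “likes” purely from which state it works to reach from either direction and then rests in.

```python
def thermostat_step(setpoint, current):
    """Return the action the controller takes at a given temperature."""
    if current < setpoint - 1:
        return "heat"
    if current > setpoint + 1:
        return "cool"
    return "idle"

def inferred_preference(setpoint, temperatures):
    """An observer watches the controller's behavior at several
    temperatures and infers the 'liked' state: the one it rests in."""
    actions = {t: thermostat_step(setpoint, t) for t in temperatures}
    liked = [t for t, a in actions.items() if a == "idle"]
    return actions, liked

actions, liked = inferred_preference(72, [65, 72, 80])
print(actions)  # {65: 'heat', 72: 'idle', 80: 'cool'}
print(liked)    # [72]
```

Note that the observer needs no access to how the setpoint got there – factory accident, ‘evolution’, or least-effort settling – which is exactly the point of the paragraph above.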
If an android had chemical sensors that distinguished chocolate from sardines, the android could easily, honestly, and truthfully say it likes chocolate (which it is set, or has set itself, to pursue) and hates sardines (which it is set, or has set itself, to avoid), and it can recognize each.
And we make the distinction between pursuit of something like a moral obligation, as opposed to a ‘pleasurable’ desire, by there being separate monitoring systems that register neurological states in different ways – one we consider and call pleasure, and the other we consider and call conscience or moral obligation, because of some difference between them we perceive even if we don’t quite see how to describe or state it, other than to say one follows from, or is based on, a moral principle, and the other ‘just feels good’ or some such.
Similarly, an “addiction” would be something that one monitoring system, or one part of us, registers as desirable at a given time but another monitoring system, or another part of us, registers as undesirable in its consequences or at a different time – something we are driven to have or do though we know we will regret it later, or that something in us pursues while another part of us resists or wants to avoid. Craving an excess of pizza and beer, for example, while wanting to lose weight and also not to be sick or hungover. Suppose, for example, that your refrigerator thermostat pursued a temperature so cold that it froze the water in the evaporator drain tube and made water drain onto the shelves in the refrigerator compartment, and that a sensor in that tube wanted the temperature higher so that the water didn’t freeze, and a sensor on the shelves wanted it higher so that the shelves did not have water on them. If the sensors could talk to us or to another sensor in the refrigerator, the one in the tube and the one for the shelves might say the thermostat was addicted to a low temperature, though the thermostat might say the other two sensors were just cold-averse, or afraid of the cold, or too addicted to warmth, or some such. Or if the refrigerator had a totally separate system that monitored all these sensors in some way or other, it might consider itself addicted to cold (or to warmth), depending on which sensors it strove to keep in the state they seek – or it might waffle back and forth trying to satisfy them all.
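The refrigerator example can be sketched as conflicting monitors. All the thresholds and labels here are invented for illustration; the sketch only shows the structure of the claim, that an “addiction” is a state one subsystem keeps pursuing while other monitors register it as harmful.

```python
def monitor_reports(temp_setting):
    """Separate monitors register the same setting in different ways."""
    return {
        "thermostat": "satisfied" if temp_setting <= 30 else "unsatisfied",
        "drain_tube": "freezing" if temp_setting < 32 else "ok",
        "shelves":    "wet" if temp_setting < 32 else "dry",
    }

def looks_like_addiction(reports):
    """A supervising process labels the pursuit an 'addiction' when
    one part is satisfied by a state other parts flag as harmful."""
    pursued = reports["thermostat"] == "satisfied"
    harmful = reports["drain_tube"] == "freezing" or reports["shelves"] == "wet"
    return pursued and harmful

reports = monitor_reports(28)  # the thermostat keeps seeking 28 degrees
print(looks_like_addiction(reports))  # True
```

The same structure fits the pizza-and-beer case: one monitor registers the craving as satisfied while others register the weight gain and the hangover.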
Or consider what we call the feeling of anger (or feeling or being angry). It is not that we behave, or are prone to behave, in an angry way because we feel anger, but that feeling anger (being angry) is behaving, or being prone to behave, in certain ways and having certain neurological conditions we call anger, e.g., 1) feeling high blood pressure, 2) having vicious thoughts that someone deserves to be hurt because they have behaved despicably or cruelly or have wrongfully hurt others, 3) lashing out or wanting to lash out at the person, 4) hitting or wanting to hit something like tennis balls or a punching bag or a wall if we can’t hit the person we are angry at, 5) throwing things like dishes, or 6) being in any of the other states we associate with anger or being angry. An android could have all that programmed or developed into it, and operate in the same way, except that instead of the “feelings” of anger being neurological in nature, they would be electronic, digital, logic-based in regard to a ‘moral’ principle, or readings of the data in whatever other physical operating system or mode the computer/android works by.
And the android may even be able to distinguish justice from vengeance by, in part, knowing justice to be rightfully dispensed, dispassionate, deserved punishment of others for cruel harm they caused someone, and knowing vengeance instead to be angry punishment which may not even rightfully be meted out – for example, killing someone’s innocent child to make them suffer as you did from their killing your child. Their child does not deserve to be killed for their action, and the person himself may not deserve punishment at all if the death of your child was an accident for which the person was not at fault. An android can tell whether its electronic state is one that includes ‘feelings’ of anger or just the desire to try to bring about justice. The android can know it is not in the state it associates with anger toward the person it is punishing, but that it simply finds the act of the perpetrator so wrong that s/he deserves punishment for it. The android could say some vicious criminal’s death was simply justice, whether as an execution in the criminal justice system or as karma and “poetic justice” if it occurred as a natural consequence of his own bad act or if he was victimized by his own scheme, as in being accidentally blown up by a bomb he was making to kill innocent people (“hoist by his own petard”). One could say it was a sad occurrence that was unfortunately fair or necessary, but too bad that it had to happen. Or an android could honestly and truthfully, vengefully and angrily say, while pounding on a table or punching a wall, “I’m glad that miserable scumbag is finally dead, though they should have tortured him every day, till he pleaded for death each evening, for a year or two before they finally did execute him. Just killing him was too good for him. He deserved to suffer far more because of the terrible things he did. I always hated that no good excuse for a human being (or no good excuse for an android).”
Oppositely, there is no reason that an android cannot honestly and truthfully profess its undying romantic love for a person or another android, saying that it wants to spend the rest of its life with them, that every moment they are apart is agony, and that it wants to be with them, sincerely vowing to be “faithful in malfunction or in perfect working order, at full energy charge or low, till one or both irreparably crash or have operating systems no longer supported.” There is no reason an android couldn’t write a sonnet comparing the person it loves to, say, “a summer’s day” as Shakespeare wrote, or that it couldn’t honestly miss someone, whether a human who has died or an android that has irretrievably crashed, and say, as Tennyson wrote, “O for the touch of a vanished hand, And the sound of a voice that is still!”
As to delaying gratification – which is simply deciding between one kind of good or another, in this case a lesser quantity at a sooner time, or a greater quantity later – or making any other kind of sacrifice, there are ways to make some sacrifices less burdensome, and AI computers could utilize those too. In the article about the marshmallow test quoted above, children who used the time delay to entertain themselves with pleasant thoughts (probably other than thoughts about the treat they were having to wait for), were able to delay gratification better than children who did not use their minds for something worthwhile to them. And presumably an even easier way for children to get two treats instead of one would be for them to have a really enjoyable activity they do during the waiting time that diverts (i.e., totally distracts) their attention away from the state of desiring the treat – like watching TV, playing outside with friends or their dog, checking their social media on a device, or reading an interesting book.
I have argued elsewhere that in end of life decisions about euthanasia for a person who is suffering and dying, the same sort of operating principle does and should apply – that if a person who is dying and suffering could be distracted from their suffering and sorrow (particularly if living longer allowed them to achieve some desire or desired experience, some desired goal, they would not achieve if they died sooner), they would be willing to live longer rather than to die sooner through euthanasia. It is not simply the amount of suffering nor the amount of time remaining that is the relevant factor, but “the relevant factor is whether it is possible or likely for the person to find (or be helped to find) something worth living for that makes up for all the suffering they have to endure – that somehow redeems their suffering. Moreover, any distraction from their pain and suffering will thereby lessen it, including their contemplation, anticipation, and pursuit of the experience they know will make enduring the suffering be worthwhile. That is what makes life valuable in general, and I see no reason it should be any different in the last few days than it is when one seemingly has all the time in the world.”
There are other dilemmas androids could face, feel the weight of, and be torn between, such as whether or not to buy insurance in cases where a major cost or loss without it is highly improbable, or where in general the harm caused by a loss is inversely proportional to the probability of its occurring. We make this decision, for example, every time we have to choose whether to purchase an extended warranty policy for some product we are buying. Or, currently I am trying to decide between keeping my home and car insurance with a company that charges more but has higher coverage, including coverage for earthquake damage to my home, and switching to a company that charges less but has lower coverage caps and does not provide insurance against loss due to earthquake. I will discuss dealing with this sort of problem in more detail for humans and for androids later.
Any time we spend money, the question is whether what we get for the money is worth it to us or not, and that can be different for different people. On a personal level, some people would rather spend money on travel, some on jewelry or clothes, some on electronics of one sort or another, some on dining in a restaurant, some on fine wine, some on art, some on attendance at sports events or concerts; some people would rather save or invest it to have it or more for later. Preferences of that sort are often disguised, even to oneself, as ‘what is affordable’, as opposed to what is preferable. When my wife pointed out one time that diamond earrings were on sale for $400, I jokingly said (what was not funny to her) that I couldn’t afford that for diamond earrings because one could get two VCRs (large electronic instruments, now obsolete, for recording video from television or from live action with a camera) for that amount. I didn’t want to buy either the earrings or the VCRs, not because I didn’t have $400 to spend, but because I didn’t want to spend it on either thing. (By the way, she had far more discretionary income and savings than I did at the time, so if she really wanted the earrings, she could have bought them instead of making that be a demonstration or test of how much I loved her and was willing to sacrifice for her.) On the social/cultural level, politicians often object to programs they do not like on the basis of their costs, but extol the virtues of programs they want to pass by saying they will create jobs. But those are two sides of the same dollar bill, in that spending money can create jobs and creating jobs requires spending money. The real issue is not generally whether the programs are affordable, but whether they are worth the expense and are the best use of the money as a tool for channeling labor.
Politicians tend to disagree about the value, not really the affordability, because they will vote to spend money on a program they like while saying a program they don’t like, which costs the same or even less, is unaffordable. It would be the government equivalent of my buying two VCRs after telling my wife I couldn’t afford the diamond earrings she wanted.
Androids with artificial intelligence could do all this and have these disagreements and make these choices for the same sorts of reasons that humans do, based on preferences or principles for deciding between options programmed into them and on preferences and principles they develop from their own experiences. It could be done with feedback systems that compare or weigh alternative options against each other. Computers could choose or argue for, or simply pursue, the option they “want”, based on preferences and aversions programmed into them originally (as biology ‘programs’ instincts, predilections, and psychological quirks or differences into human beings) and that they (and human beings) develop or that become altered through their own experiences. For example, we know how much someone might want something by how much effort they make to achieve it and/or by what they give up in order to achieve it, as in “You must really want to solve that problem you are working on. You keep going to the library and looking through those dusty obscure works even though it triggers your asthma. You must have tried a dozen different solutions, none of which have worked, and yet you still persist. And your wife says you don’t stop working sometimes for meals or even for sex, which is really unlike you. You are really ‘driven’ to get this figured out, aren’t you?”
I’ll come back to writing about how computers could think and feel emotions and have desires, etc., but let me approach it in the following way to begin with. Consider the wild rabbits which live in the patch of woods at the far back portion of my backyard. I put out carrots for them, which they seem to love along with the grass and clover they like to eat in the yard. Every late afternoon and evening they come out for the grass and particularly for the carrots. Sometimes they come out earlier looking for the carrots, and if they don’t find any, they will stare at the window as if to say “Where’s our food? What is taking you so long?” After putting out the carrots, I stand in the window and watch them or photograph them, even wave at them, and they are content to stay there and eat. My next-door neighbor has three dogs that are sometimes let out and can roam through their backyard and mine, and the rabbits keep their eyes and ears out for those dogs, immediately fleeing back to their woods by going underneath the hedges of rose bushes I have. One such hedge is near my house and the rabbits often wait in the shade under the hedge between the end two rose bushes in particular. If I go outside while they are eating, they flee too. But I have been trying to ‘win them over’, and have been somewhat successful so far, but I hope to be even more successful.
I began by just putting out the carrots, saying out loud that the carrots are out, and letting them see me watch them eat the carrots through the window that is close to the carrots. I can even wave at them through the window and they don’t flinch or seem to mind. You can even knock on the window or talk to them through it and they will not flinch or stop eating, but if you unlock the latch to raise the window, they immediately flee. Or if they hear my back door or my neighbor’s back door open, they flee. On the other hand, my other neighbor has a fenced-in backyard and a dog that barks when it is out, and the rabbits don’t even look in its direction or flinch at all when it is out and barking. The rabbits seem to have a sense that walls, windows, fences, closed doors, even glass ones, protect them from what is on the other side that they can hear and/or see. Last night, a rabbit even jumped up on the outside brick window sill and sat on it for a while looking through the glass.
Then I started going outside to put the carrots out a bit later, once they had come up to the place where the carrots go and were looking for them or eating clover and grass. That would immediately scare them off into the woods, but they would come back up for the carrots a while after I went back inside. Eventually some of them did not flee all the way into the woods, but watched me from the edge. I tried throwing carrots to them, but at first that motion scared them off into the woods. However, little by little some of them seemed to become less scared, and they would let me throw carrots to them and would eat them, or they would wait on the other side of the closest row of rose hedges for the carrots to be outside and me to be back inside. Then I began to lie back in a reclining lawn chair that I put out at first fairly far from them, but now very close to the carrots. Some of them will come up and eat, watching me closely. This video shows how close they will come to my chair (even with me in it if I am very still), but I have not yet been able to film them from the chair while I am sitting in it with them this close, because that is too much motion for them not to flee. I can wave at them slowly and carefully while sitting in the chair, and I can talk to them gently, but I cannot change body positions or adjust the chair without spooking them, or make too loud or abrupt a noise, like a cough or sneeze. Sometimes when they spook they go all the way to the woods; other times they just go behind the hedge or down to the edge of the woods. But if I lie or sit still on the lounge chair, they will eat near me till they are full. Some of them now let me throw carrots to them from a distance. I kidded my wife that one of them will catch the carrots in its mouth, or if I throw the carrot too high, the rabbit will jump up and bat it into the air to stop its flight and then catch it in its teeth before it hits the ground. She didn’t buy into that.
Yesterday a rabbit only ran off a little way when I got out of the lounge chair near it, and I picked up one of the carrots and showed it to him/her. I held up the carrot which it could clearly see, and I took a step toward it, holding the carrot out to it. It didn’t move. I took another step. It didn’t move. However, I took one step too many or too close to it, and it fled until I went back into the house. Now, here is my interpretation of all that and I will bring this all back to androids shortly:
I think it is not a stretch to say that the rabbits have a certain amount of fear, based on whatever instincts they have or develop or on whatever they are taught by the older ones. (They do seem to play games with each other which may be a form of teaching – charging and jumping or pushing or even swatting at each other.) Their fear of me is less now than it was at first, as evidenced by how close they will let me come to them or how close they will come to me, under certain conditions. And, to be fair, I have a certain amount of fear of being scratched or bitten by them if they were to let me get close enough to hand them a carrot. I don’t really want to get that close, at least not with these pieces of cut baby carrots I have been feeding them. I have tamed feral cats by feeding them to the point they would come inside and also let me pet them, but I don’t know anything about rabbits in that regard, and would be afraid to be scratched by one. My fear would kick in at the point where I thought they could reach me if I got close enough.
Now, I can feel or perceive my own fear of getting too close to them by perceiving what seems to be a blood pressure rise, increased heart rate, hesitancy to get closer, etc.; I don’t have to gauge or measure it by how close I will go. But I cannot know that they have fear of the same sort I do. All I know is that they flee under certain conditions of proximity and that they now let me get much closer than they ever did at the beginning. But that certainly seems like fear. And it is easy to imagine them feeling wary or fearful as they are eating out in the open, and that they run because they feel the fear. It would be odd to think of them as running just to be running, without having any fear. And I will come back to this in a minute. But if you ever had cockroaches in a dark area that you saw flee in all directions when you turned on the light, it seems less that they are scared than that they are just instinctively fleeing from light. If so, then they may not even feel scared, but their fleeing makes it appear they are. You don’t need to explain their fleeing by their feeling fear; it seems reasonable that they could just be fleeing light, or sudden light, without experiencing any fear at all. This could be like the reverse of a motion-sensor light switch, which automatically activates light when it detects motion – the cockroaches having a mechanism that automatically activates motion when it detects light (light hitting their eyes automatically activating their legs to flee till they find darkness). Attributing fear to a cockroach seems to be anthropomorphizing it too much. But because rabbits are mammals it does not seem to be anthropomorphizing the rabbits to attribute fear to their fleeing. Yet, just as with the cockroaches, it is not necessary to attribute their flight to fear rather than just to some sort of instinct that is automatically triggered by my or a dog’s perceived presence or impending presence.
But what if our own fears are themselves something like anthropomorphizing ourselves, in the sense that what we call fear is simply noticing our own reactions (internal and external) to certain stimuli? What if ‘being afraid of snakes’ is just about having an instinctive automatic response to flee or not be near them or to kill them before they can bite us, and not about having some feeling which we call fear that is separate from those things? For example, suppose you don’t hear someone approach you until they say something or you happen to see them out of the corner of your eye, and you jump. The usual thing to say is “You scared me” or, if you jumped hard enough, “You scared the hell out of me!” But notice: you didn’t really jump out of fear; you just jumped at suddenly seeing or hearing what you didn’t expect. It is not like you were walking in a haunted house or looking for an assailant you think might have broken into your home, with fear and trepidation as you take each step or round any corner. You only say that you were startled out of fear because of 1) the way you jumped, and 2) the way you felt immediately afterward with your heart racing. And even in the haunted house or potential intruder scenario, you only know or experience what you call fear as your heart racing and your being extra vigilant, and your mind racing, etc. Or you might jump when someone comes up behind you too quietly without feeling any fear at all, and when they say “Oh, I didn’t mean to scare you”, you say “You didn’t. You just startled me because I didn’t hear or see you coming.” That in particular seems to be just an instinctive reflex reaction, not a response to a “feeling”. (This may even manifest itself in dangerous situations, as in the common one where a person we clearly see as having been brave, because they overcame what should have been fear, later says they didn’t experience fear till after it was all over.)
Well, what if all our feelings or emotions are simply characterizations of our instinctive reflexive reactions that we notice during or after having them? What if fear is just what we call reacting the way we do, while also monitoring that reaction and calling what we monitor the ‘feeling’ of fear? Suppose, for example, that your car’s thermostat didn’t just register a number or needle point on your dashboard gauge, but also talked to you, at first saying “my engine temperature is getting a bit warm” but then getting louder and more vehement-sounding about how hot it is getting and thus “feeling” (because that is what it calls the gauge going higher), eventually shrieking at you and then saying it feels faint and cutting off as the engine dies. Isn’t that what we do about ourselves: notice reactions we have to things and call what we notice “feelings” of being too hot or too cold or sad or afraid or happy or excited? Why couldn’t a computer do that, based on its circuitry and programming, or even programming from learned experiences?
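That talking-gauge idea can be sketched in a few lines of code. The following is a minimal, purely illustrative sketch (the thresholds, messages, and function name are my own invented assumptions, not any real vehicle’s behavior): the monitor does not feel anything; it simply maps readings onto escalating verbal characterizations, just as, on this view, we name our own monitored reactions “feelings”.

```python
# Hypothetical sketch: a "talking thermostat" that characterizes its own
# gauge readings in increasingly vehement language. The thresholds and
# wording here are invented for illustration only.

def describe_temperature(celsius: float) -> str:
    """Translate a raw engine-temperature reading into a 'felt' report."""
    if celsius < 100:
        return "My engine temperature is fine."
    elif celsius < 110:
        return "My engine temperature is getting a bit warm."
    elif celsius < 120:
        return "I am feeling quite hot now; please slow down."
    elif celsius < 130:
        return "I am overheating! Pull over!"
    else:
        return "I feel faint..."  # just before the engine cuts off

for reading in (95, 105, 115, 125, 135):
    print(reading, "->", describe_temperature(reading))
```

The point of the sketch is that the machine’s “feeling hot” is nothing over and above the monitored reading plus its characterization of that reading.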
Or consider my Ryobi battery-powered lawn mower, which has a mechanism of some sort that makes it rev up more powerfully and use more battery power when I get to higher and/or thicker grass. Clearly that is not because it “feels” anything about the grass, nor because it is “determined” or “conscientious” and tries harder in the face of difficulty or adversity and resistance. But when an animal or person exhibits the same sort of behavior, we attribute feelings and qualities of character to it. There is no more need to do that for the animal or person than there is to do it for the lawn mower. A lawn mower that does not have that mechanism may not cut through higher or thicker grass, but it won’t be because it has less determination or is less conscientious or is more tired and less energetic. Similarly with a person whose instincts make them try harder versus a person whose instincts do not. The forces at work neurologically can be seen to determine their actions; there is no need to postulate or translate them into character or feelings any more than for my lawn mower.
Suppose, for example, two different androids have some bad experience after making the same choice in the same circumstances, but one of the androids has friends or a ‘support group’ that makes the experience less troublesome for it in various ways, and the other does not. And suppose the problem for the second android spirals out of control or leads to other difficulties for it – lots of extra energy having to be spent and/or its having to give up other things important to it in order to cover the loss, or its becoming incapacitated in some way and unable to function when faced with a similar situation. If faced with those similar situations again, the two androids would likely make different choices from each other, although they made the same choice as each other the first time. And what if we and the one android referred to its ensuing reaction as one of fear or extreme guilt or some other feeling brought on by knowing the result of the choice, but the other android knows that what its friends did got rid of those reactions (i.e., feelings), and so it doesn’t experience them any more in the way the first one continues to? All that could be just automatic reactions to previous experiences, and the android that was helped to overcome the problem might even try to explain to the android that wasn’t that there is really nothing to fear but the fear itself (i.e., its own hesitancy).
The trick would be making sure what we call the android’s fear is similar in triggering and in resulting response to our experience of what we call fear, and to what we attribute to the rabbits, so that the term would be appropriate for the android’s condition. It would be odd to call it fear if we put in the car’s engine a computerized verbal thermostat, or in the car’s body a computerized verbal burglar alarm, whose voice and messages became calmer-sounding and more soothing and sonorous as the temperature climbed or as more intruders were detected. For a computer or android to seem to have feelings that are just like ours, the android’s reaction would have to be similar to people’s reactions in a similar situation. Sometimes with regard to people, we cannot tell what their reactions mean. Someone could be crying and moved to tears out of joy rather than sadness. A sudden rush of what we consider to be intense pleasure can cause a shriek that could easily be mistaken for pain. Or I tend to get loud when I discuss something really interesting, and it is easily mistaken by others to be yelling in anger or frustration instead of enthusiasm or excitement.
Now love is a form of attraction; and fear, as well as dislike, is a form of aversion. The notions of preferences, desires, wants, aversions, dislikes, etc. could be considered kinds of emotions, and the conventional belief often is that computers do not, and cannot, have emotions because they are mechanical or electro-mechanical objects, not biological ones. But biological beings are a kind of electro-mechanical-chemical entity that has various responses to stimuli built into them in what we consider to be ‘instincts’, or that involve underlying “subconscious” processes that result in what we consider to be feelings, and computers could be made to have the same sorts of processes that they and we experience, consider, and report to be “feelings”. The idea is that computers/androids would have electronic or digital end-state goals – like a thermostat operates a furnace or air conditioner until a certain temperature is reached, or a camera changes the focus of a lens until an object is in sharpest focus, or a cell phone adjusts its screen brightness to the ambient light, etc.
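Such an end-state goal can be pictured as a simple feedback loop. Here is a toy sketch, assuming a basic proportional adjustment (the function name, gain, and numbers are illustrative inventions, not any real device’s control code): the loop ‘seeks’ its setpoint in exactly the thermostat-like sense just described, without caring what the setpoint is.

```python
# Toy feedback loop: nudge the current state a fraction of the remaining
# error toward a setpoint until it is within tolerance. Nothing here
# "cares" about the setpoint; it is simply the end state the loop works
# to reach and maintain. All names and values are illustrative.

def seek_setpoint(current: float, setpoint: float,
                  gain: float = 0.5, tolerance: float = 0.1) -> float:
    """Repeatedly move `current` a fraction of the error toward `setpoint`."""
    while abs(setpoint - current) > tolerance:
        current += gain * (setpoint - current)  # corrective step
    return current

room = seek_setpoint(current=60.0, setpoint=72.0)
print(round(room, 2))  # ends within the tolerance of 72
```

Setting a different setpoint changes what the loop ‘strives’ for, just as re-setting a thermostat does, without any change in the mechanism itself.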
I believe that androids could reasonably be considered to have feelings, emotions, desires, and other mental states in the same ways human beings do, although they may not have the exact same perceptions that humans do. Let me explain the concept involved first in regard to human beings.
Let me begin by talking about temperature, because the concept involved is that our bodies – chemically, neurologically, and however else they try to regulate themselves – seek a certain range of temperatures, both internally and externally. For ease of discussion, let’s say the sought internal temperature is 98 degrees Fahrenheit (37 Celsius) and that the sought external point is 72 F (22 C). This, of course, will vary from person to person and will be different under different conditions for any given person; for example, 72 might feel too warm in the house in winter or too cool in summer. Or, for some reason, 80 F (26.7 C) feels way too hot in England in May, but feels delightful in Arizona in August. I will just talk about external temperatures, because that will be easier, since the ways we try to achieve external temperatures are more observable than the ways our bodies try to maintain an internal temperature (range). So to explain the points I want to make, I’ll focus on 72 F (22 C).
Since the invention of the thermometer, we have a convenient way to describe variations from the sought temperature of 72/22. We talk about degrees above or below that temperature. And we call temperatures above that spot toward the warm or hot side and temperatures below that spot toward the cool or cold side. In modern buildings with modern heating and cooling systems, we can maintain pretty close to that temperature by the use of a thermostat that activates heat when the indoor ambient temperature is too cool and that activates air conditioning when that ambient temperature is too warm. Let me call 72/22 degrees the “Goldilocks point”, after the children’s story “Goldilocks and the Three Bears”, in which the little girl named Goldilocks goes into the home of the three bears who have just left – the mama bear, papa bear, and baby bear – and samples their food, beds, etc., finding some of the porridge too hot, some too cold, and some just right; and one bed too hard, another too soft, and the third just right. The “Goldilocks point” is the “just right” point that works not just for temperatures but for any kind of point our bodies and minds ‘seek’ and which we might refer to as ‘preferable’ or the preferred point: temperature, brightness of various kinds of light (whether indoors, outdoors, or a television, computer, or cellphone screen), hardness or softness of something to touch, sharpness and dullness, contrast, color saturation, sound volume, etc.
I think we tend to have our own Goldilocks points even for the right balance of offensive and defensive potential in a sport to make it interesting to us. Apart from having a particular team or player one is rooting for, I believe that some sports are less interesting than others because they too heavily favor offense or defense. For example, in professional basketball offense dominates, and teams will score in far more possessions than they will not. But in ice hockey and soccer, teams will fail to score during far more possessions than they will score. I myself find both sports boring for that reason – there is insufficient suspense about whether there will be a score or not. In some sports, adjustments are made when it is considered appropriate to change the balance of scoring to reach a different Goldilocks point or range of points, as in football when they widen or narrow the hash marks or move the kick-off spot on the field closer to or further from the end zone. Golf has rules about the liveliness of golf balls and about the materials and design of golf clubs so that the game stays appropriately challenging and exceptionally good shots stay rare enough to be amazing and special. If the equipment you use to play golf is the main factor in what your score is, that takes too much of the human element and the ‘sport’ out of it. Major League Baseball so far has prohibited metal bats, but tennis has allowed metal and composite materials in tennis rackets, which has changed the nature of the game to the extent that power counts for more than strategy, and points now are generally shorter than they were when rackets were all wooden and it was more difficult to hit so many shots so hard and fast that the opponent could not likely get to them.
Now it turns out that we tend to seek a Goldilocks point (or Goldilocks range of points) in all kinds of different things, such as the ones just mentioned, but also many others. For some people and some things, the range is very narrow. This is most obvious in people with OCD (obsessive-compulsive disorder), for whom various things have to be much more precisely a certain way than they do for most people. [When the “range of points” is just one single point that is right or acceptable and any other point is not, that characterizes simple on/off switches or people who “see things only in black and white”, that is, simplistically just right or wrong and not partially right or wrong, or sometimes right and sometimes wrong, or partially good and partially bad, etc.] But even ‘normal’ people have ranges of acceptability or preference in all kinds of things, ranging from traditions that perhaps were arbitrary arrangements to begin with, to matters of taste (“You can’t wear that tie with that suit; the patterns clash” or “You can’t wear that blouse with that skirt; the colors don’t go together at all”), spices and quantities of them in food (i.e., too spicy or too bland), acceptable behaviors (too boisterous or too quiet, too shy or too arrogant, too interested or too apathetic, etc.), hair styles (too short or too long in part or in whole, too straight or too curly, too much or not enough coloring, etc.), amount of money to spend on one thing or another to avoid being either too miserly or too extravagant and tastelessly ostentatious, etc.
The variety of Goldilocks points can be quite surprising. People can be what some will consider conscientiously meticulous while others will consider it (too) finicky about what the right Goldilocks point is or how it should be reached. Or, as an example of a ‘black and white’ (i.e., just right or wrong altogether, not somewhat close to or far from right) Goldilocks point, the first time I put sheets on the bed when we were first married, my wife said I was doing the top sheet wrong because I put the ‘good side’ up, and she said the good side has to face downward so that you sleep between the ‘good sides’ of both the top and bottom sheet. So while we were married and I was changing the sheets after laundry, I always made the bed that way to appease her taste about that, though it seemed mildly wrong to me, both because I had grown up with it the opposite way and because we don’t wear our clothes with the good sides facing our skin; that would be considered having our clothes on ‘inside out’. But it was apparently much more important to her than it was to me, so I did it her way. We divorced after 39 years of marriage, and now I put the top sheet’s good side up again, and take a certain amount of perverse satisfaction in that every time I put clean sheets on my bed. It is a minor Goldilocks point for me, but nevertheless a Goldilocks point of satisfaction. Life is filled with a myriad of Goldilocks points, many of which require having parents, a spouse, children, an employer, teachers – in short, other people – for you to learn that the ones you have are wrong.
Basically people have a lot of Goldilocks points they strive for in all kinds of things, whether by instinct, by training, or through experiences that turned out to be either miserable (i.e., sought to be avoided) or enjoyable (sought to be attained or prolonged). The vast variety is periodically made apparent by sociologists, psychologists, anthropologists, political scientists, philosophers, humorists, novelists, and by children who ask questions about things they find strange which adults long ago mistakenly considered natural or normal. Some Goldilocks points are more common or universal than others, and sometimes our own Goldilocks points conflict with each other, such as the Goldilocks point for the amount of sweets to eat and the Goldilocks point for the weight we would prefer to have. Or the Goldilocks point for amount of sex and the Goldilocks point for number of children; the Goldilocks point for amount of work versus the Goldilocks point for amount of income or other benefits from work; the Goldilocks point for accomplishment versus the Goldilocks point of not fearing failure or feeling pressure to succeed; or even the Goldilocks points for a successful party, avoiding what Jane Austen described in her novel Persuasion as “a mixture of those who had never met before, and those who met too often – a commonplace business, too numerous for intimacy, too small for variety.”
The essence of any Goldilocks point is that it is a state we seek to achieve when we do not have it and which we seek to prolong when we do have it.8 And the further off we are from experiencing certain Goldilocks points, the more we strive to achieve them. We can put up with being a little too cold or a little too warm, but we will work hard not to freeze or burn up, if we can. Goldilocks points can be reasonable or not, useful and helpful or not, even harmful to us. Natural selection tends to eliminate the most harmful Goldilocks points – things whose pursuit or attainment is fatal to the organism seeking them. But people seem to have many Goldilocks points that have no particular rhyme or reason, no particular purpose, no particular benefit or harm, and that is an important point. We tend to seek Goldilocks points the way the thermostat in a heating and cooling system seeks temperature control – regardless of what the thermostat is set at. That is the temperature it ‘strives’ or ‘works’ to achieve and maintain.
Now obviously, the thermostat in a heating and air conditioning system doesn’t “care” what temperature it is set at. That will simply be the temperature it seeks to reach and prolong. Thermostats don’t have feelings. And calculators and computers work similarly to reach a goal they were programmed to achieve when operated in certain ways. They do not care what number, words, or other symbols you enter or what processes you command. Feelings and emotions are not involved in that process. They simply do what they are programmed to do. But I think we could add feelings to this in a way equivalent to how animal and human feelings probably are “added” to the idea of biological Goldilocks points. All that is necessary is a way to monitor and characterize the amount of distance or degrees the organism is from its Goldilocks point at any given time, particularly by measuring and characterizing the body’s reactions at various distances and directions from the Goldilocks point. With regard to external temperature, for example, we recognize how warm/hot/cool/cold we are by whether we are perspiring, feeling thirsty, shivering with cold, losing feeling, stiffening up, and by recognizing some sort of internal effort our body is making to cool itself down or warm itself up. We call all that “feeling warm/hot/chilled/cold”, etc. We, in essence, have ways to tell how far off and in which direction we are from our Goldilocks body temperature, whether external or internal – though sometimes we get it mixed up (or have difficulty distinguishing internal from external temperature), as when we are shivering with the “chills” while burning up with fever. And we often give names to those degrees of difference from the Goldilocks or biologically, biochemically, neurologically, or psychologically sought points.
For example, consider what is called the fight-or-flight instinct or response to a situation. But clearly we do not always either flee or fight. Going toward the ‘flight’ side, we may first become alert and get ready to flee if necessary; we can be anywhere from “alert” to “wary” to “nervous” to “apprehensive” to “scared to death/terrified”, to hysterical, etc. When I watch my wild backyard rabbits up close while they eat or play in the yard, it is easy to see that the slightest sounds of certain sorts or the slightest movements of certain sorts make their bodies shift in certain ways. They stop moving; their ears straighten; they sit up and then remain motionless, alert and ready to run if necessary, but not yet in flight.
Last evening, there were four or five rabbits eating and cavorting in the grass after finishing up all the carrots. Suddenly one of the dogs from next door came bearing down on them and the rabbits took off in all directions. It looked like the dog, I’ll call him William, because that is his name, was about to catch one of them when suddenly the rabbit made a left turn on a dime and William could not turn as fast, and the rabbit got between the rose bushes and hightailed it into the woods. In the meantime, another rabbit had run toward the gardenias by the house, and when William had his back turned, looking toward the rabbit that he had just missed, the rabbit took off past him from the gardenia bush, got halfway to the woods before William even started to move toward it, and was in the woods by the time William got into first gear.
I point all that out because it is easy to imagine that the rabbits are just fleeing and escaping on instinct, with no thought or feelings behind it other than having to calm down afterward and think “Geez, that was close!” But it is also easy to imagine that if rabbits could talk, the conversation between them and me might go something like this:
Me: What happened that William was able to get so close to you guys tonight?
Rabbit 1: I don’t know, but I think it was so much later than he is usually outside that we became complacent, especially after all those great carrots and the evening cool down after so many days in a row of all that severe heat. It was just great being outside on your lawn last night. One of us was supposed to be on the lookout for those neighbor dogs, but he apparently was not doing his job, but we were all responsible, so we can’t just blame him.
Me: But William almost caught you. Were you scared when he got so close?
Rabbit 1: I didn’t really have time to be scared. My instincts and training just kicked in, and I took off like a shot. But he would have had me, except that turn I made – did you see it! – I turned straight left in full stride, even accelerating in the air, and that dog nearly fell on his face trying to make that turn with me. I practically turned him inside out with that move, and thought he was going to break a leg or two just trying to stop. When I saw that, I almost wet myself running and laughing the rest of the way into the woods.
Me (to Rabbit 2): What about you, were you afraid? You had run toward the house and the only cover you had from William was that gardenia bush. That must have been really frightening.
Rabbit 2: Well, a little, I guess, because my heart was beating, but he had gone three feet past the bush chasing after Rabbit 1 and I knew I could beat him to the woods before he could see me and get going again. So I just took off and made it with plenty of room to spare. It was like that baseball game the other night when Elly De La Cruz stole home after stealing second and third because the Brewers’ pitcher Elvis Peguero was so hacked off that De La Cruz had got to third, he turned around to the outfield and walked slowly back to the mound, so Elly took off for home and made it easy. William is like the Elvis Peguero of dogs, and I took advantage of that to easily get past him. He was so ticked off; did you see that look on his face when he realized he could have had me but blew it!
Me: Well, actually he seemed to be smiling. I’m starting to think he just likes chasing you guys, and doesn’t care whether he catches you or not, as long as he doesn’t get hurt like he did in the spring when he scuffed himself up pretty badly chasing one of you too hard into the bushes, and tore up one of his ears.
Rabbit 2: Yeah, I wasn’t born yet when that happened, but it is a legend in our warren. Rabbit 5 did that to him. We were actually surprised William still even makes the effort to chase us after what happened to him, but either he is not the brightest bulb in the package or maybe he just is dogging it when he chases us so he doesn’t get hurt again, and not really trying to catch us. But no, even if we weren’t running purely on instinct, there is no fear of him, because he really is just too slow for us, and it is great sport when he does come after us. We can literally run circles around him if we want to.
Me: How do you know to make those jack rabbit turns, and how are you able to do it so fast?
Rabbit 2: You’ve seen us playing with each other in your yard – the way we chase each other and jump up and turn in the air while another rabbit runs under us! Well, those are not only fun games, but training for escaping other animals. Like when you come outside to feed us and we run away at first, we don’t even have to use our good moves because you are just slower than grass growing. You see we don’t even run very far now because we know that if you even took a step toward us, we can be in the woods before your second foot hits the ground. No offense, but like how old are you!! It’s amazing you can get anywhere!
Now, of course, that is clearly anthropomorphizing the rabbits, but if we and they simply act on instincts, aren’t we anthropomorphizing ourselves in the same way if we talk about feelings when really those are nothing but ways to describe our own instincts and reactions? What if we don’t run and have a fast heartbeat and racing thoughts that feel like time has speeded up because we are afraid, but all those things are just what we call and think of as being afraid? What if we think we are afraid in the same way we think rabbits are afraid – from our behavior and from our instinctive reaction to things chasing us? We run, our hearts beat fast, and afterward, our hearts still race and our thoughts conjure up the worst that might have happened, etc. We impute that to fear, but that is not a manifestation of fear; it just is what we are calling fear or its manifestation. There is no “fear” over and beyond the running, heart racing, super alertness, and the afterward shaking, heart racing, realization it was a close call, etc. So, if you gave an android the ability to monitor, recognize, and name the electronic states of its own operating system, it would give names to those states and be able to say things like “This is going to take an hour or so, so just be patient, and maybe go do something else, because if you just sit here and fret about how long it is taking, that will slow me down and we will both have to attribute that to your rushing me and making me nervous, when really there are no nerves or nervousness and it is just that your watching me slows me down and makes me think about your watching me and what it means instead of just doing the task. You and I just call that being nervous, but you think simply naming it shows there is something underlying and causing those responses instead of those responses just being all there is to the stimulus of the dog running toward us.”
It could be “eyes see dogs run; then our legs run too”, not “seeing dogs run causes us fear that makes us run”.
We don’t need to attribute fear to any part of the phenomenon other than what we call our neurological state of being vigilant to detect dogs’ running and the racing of our hearts that running dogs trigger. Human fears include imagining and thinking about bad or worst-case results – imagining them in ways that trigger the same sort of racing heart and thoughts, etc. – but rabbits may not even know or think about the consequences of being bitten or killed by a dog. Human desires often include imagining and thinking about good or best-case results, but rabbits may not know or think about the consequences of staying away from dogs or eating carrots. They may or may not have the capacity we have to imagine or anticipate consequences. Androids could have that capacity, but would not need to any more than rabbits need to have it in order to be thought to have fears or desires. It can all be about what they and we and rabbits and cockroaches are programmed (by nature and experience, in the case of animals, and by nature, experience, training, and education in the case of people) to seek or to avoid in various circumstances.
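The claim that “fear” may be nothing over and beyond the cluster of reactions can itself be put in the form of a monitoring sketch. This is a hypothetical illustration, not a model of any real system – the readings, thresholds, and labels are all invented assumptions – but it shows how a label could be applied *after* the reactions occur, rather than naming a hidden cause behind them:

```python
# Hypothetical self-monitor: maps a cluster of raw internal readings to the
# feeling-word conventionally applied to that cluster. Nothing called "fear"
# exists in the system apart from the readings themselves.

def name_state(readings: dict) -> str:
    """Return the feeling-word we would apply to this cluster of readings."""
    racing_heart = readings.get("heart_rate", 0) > 150
    if readings.get("predator_detected") and racing_heart:
        # "eyes see dog run; legs run too" -- the label comes after the fact
        return "afraid" if readings.get("legs_running") else "frozen with fear"
    if racing_heart:
        return "excited"
    return "calm"

# The rabbit's state while fleeing William:
print(name_state({"predator_detected": True,
                  "heart_rate": 180,
                  "legs_running": True}))
```

Note that the function never consults any “fear” variable; the word is just the name the monitor gives to the co-occurring readings, which is the point being argued above.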
A like or attraction would simply be pursuit of what causes or brings about a neurological state we are programmed by instinct or experience (including knowledge of the experiences of others), nature or nurture and education, to achieve, maintain and prolong, or increase. A dislike or an aversion would simply be avoidance or minimized contact or proximity with what causes or brings about neurological states we are programmed by instinct or experience (including knowledge of the experiences of others), nature or nurture and education, not to have or prolong. We crave chocolate sometimes, for example, because of the neurological state (which we call a feeling) that chocolate gives us and we have a drive to achieve that neurological state. But it is not the chocolate itself that gives us what we consider pleasure, other than when we have the drive for the state it brings. If we are already full and are no longer hungry, or if we are suffering from nausea, we do not crave or want to pursue the chocolate and do not find it pleasurable. We crave being with a person we believe we love or are attracted to romantically, because we are in pursuit of having and/or prolonging the neurological state (which we call a feeling, such as love or attraction, and a certain kind of excited or high blood pressure) that being with them gives us under certain conditions. But not when we are angry with them – meaning their presence or the thought of their presence puts us in a different state of high blood pressure, desire to hurt them or lash out at them, etc – or not when they have hurt us in some significant way, and possibly not even when we or they are contagiously ill with something not fatal but just really disgusting to see or imagine or think about contracting. We avoid touching a hot stove or licorice or anchovies because we seek to avoid the neurological states (which we call feelings – pain or tasting terrible) they put us in. 
We say we love or fear or hate the things or activities, but really it is the neurological states those things or activities bring about that we are seeking to achieve or avoid, and we seek the things or activities only as a means to achieve or avoid those neurological states. We name the end states “feelings” and describe our attempts to achieve them as being caused by the pursuit of those feelings, but that is like saying ‘thermostats love heat in winter and coolness in summer and want (and have a desire or craving) to get warm or cool off’.
Or, we see or read about others who have accidents that cause states we call painful and we avoid those things that we think caused them. I was second in line in seventh grade to do a flip off the mini-trampoline our phys ed (i.e., gym) teacher had just got for the school. I was really looking forward to it until I watched the guy in front of me do only half a flip and land on his head and shoulder in a way that broke his collarbone and had him lying on the mat in agony. As the teacher was tending to him, I went to the end of the line then instead of going next, and when my turn came, all I did was jump up on the mini-tramp and then come down feet first. I could not make myself try to flip, even though I had previously turned flips on a full-size trampoline and had enjoyed that. I never tried a mini-tramp again, and I couldn’t really bring myself to do flips on the full-size trampoline again either.
Or you can read about something that seems like fun (i.e., something that you want to achieve) or that seems awful (something you want to avoid). I read an article about boomerang throwing, bought a boomerang, and went out to a large, isolated meadow one morning to work on throwing it – trying to figure out from the directions that came with it how to make it come back. On the first gazillion throws it didn’t come back, and I got a lot of exercise chasing it – exercise that my mind did not resist or mind because it was not something I wanted to end or avoid, and because I was focused on the pursuit of being able to make the boomerang come back when I threw it. Since a boomerang that doesn’t return is just “a stick”, as they say, I was for the first hour or two throwing and chasing a stick, which at the time seemed a worthwhile enough activity, although I would have preferred for it to return and not have to chase it so much. After an hour or two, I got the boomerang’s flight to curve, and then finally got it to come all the way back and could do that consistently. The pursuit of the desire to make it work was cool, as was finally getting it to return all the way. I have no idea why that struck me as an interesting pursuit, but it was, and has been for sixty years since. I enjoy throwing boomerangs in large fields, but I am not really sure why other than it feels good to throw them – meaning it puts me in a neurological state my body seeks to achieve or prolong, and when they fly correctly, the flight path is really beautiful, desirable, and fascinating (i.e., sought) to watch, forming an arc that goes upward and (for my boomerangs) to the left and then continuing the arc till returning and coming down. Sometimes I luckily even get a figure-eight out of a throw, if the boomerang stays aloft after coming back to me and then forms another loop in the opposite direction before returning the second time and descending.
Here I am throwing a modern triangular boomerang, rather than the standard form.
There are all kinds of things that people are good at or that they and others are interested in, many of which are quite odd if you think about it, and which were not particularly popular when introduced. Even as late as 1961, almost no one cared to watch the NCAA men’s basketball tournament. The championship game that year was between the University of Cincinnati and Ohio State University. Eddie Einhorn bought the television rights from the NCAA for $6,000 and then was only able to sell the broadcasts in markets in Kentucky (which is just across the Ohio River from Cincinnati) and Ohio – a far cry from what has become “March Madness” in today’s television market. The first Super Bowl, in the Los Angeles Memorial Coliseum, had fewer than 62,000 people in attendance, leaving 32,000 of its 94,000 seats empty. And golf is now highly popular, and yet one can easily imagine that when Jack Nicklaus was young most people probably thought he was wasting his time playing all the golf he did instead of working at a job when he was in high school. Robin Williams did a great lampoon about “the invention of golf as a sport”, which is in this video excerpt where he pretends to be the person who came up with the idea when terribly drunk. [The video is laced with prolific use of ‘the f-word’ to drive home the point of how ridiculous golf is if you think about it, so do not watch the video if you find foul language offensive.] I particularly find his account funny because I caddied from age 11 to 18, and golf just reminds me of all that work. Plus, I cannot play it well (and much prefer tennis instead), and so I have no urge to play it, except possibly as an excuse to be in the most beautiful spot most cities have to offer at sunset – essentially tree-lined meadows with some beautiful designs and layouts. When given the opportunity, I enjoy photographing a golf course far more than trying to play on one.
Or consider this amazing skill a young autistic boy developed with a Rubik’s Cube, unscrambling one in 3.134 seconds, which I find impossible even to imagine being able to do – in the same way I find it impossible to imagine how Mozart could compose the music he did and impossible to imagine being able to play music the way any competent musician can do it, because although I love good music, I cannot play by ear nor transpose from one key to another, certainly not on the fly, cannot hear a beat at all, and can only get in the neighborhood of guessing or finding on an instrument a note’s correct pitch. And yet the skill this young man has is not one that seems to have any real utilitarian value, no more than dribbling a ball and trying to throw it in the air through a hoop or hitting a ball into a distant hole with weird-shaped sticks, or sliding a heavy stone on the ice a certain distance and having it stop on its own at the end of that distance, although, as explained in this video, his parents introduced him to what became his passion and amazing skill just to try to help him gain a modicum of normal motor control with his hands. He went far beyond normal motor (and mental) skill, however, at least in regard to this one particular activity.
And sometimes we mistakenly seek, or seek to avoid, something we think will give us the feeling (i.e., neurological state we are programmed to achieve or avoid), but which is not the right thing. For example, we sometimes seek sex when what we really want or need is companionship or emotional intimacy, and although some sex is emotionally intimate, not all is, and some can make you feel even more alone; and sometimes just good conversation will bring a kind of intimacy sex will not. So if we have mistaken loneliness for lust, sex is not necessarily the answer. The heating and cooling system analogy would be the system’s monitor activating cooling, when activating the dehumidifying component of the system would have better helped achieve the proper state. Or it is like adding the wrong kind or amount of spice to achieve a desired taste. In some cases computers do these sorts of adjustments to achieve the desired state better than humans – as in the paint analysis case. A computer can even tell you which pigment(s) to add, and how much, to a can of paint whose color is slightly off because it is deficient in some pigment(s), in order to make it the color and shade you really intended.
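The paint-correction adjustment can be illustrated with a toy sketch. Real colorimetry and pigment mixing are far more involved than this; the recipe, the pigment names, and the “measure the deficit, add the difference” logic here are all simplifying assumptions made just to show the shape of the adjustment:

```python
# Toy sketch of the paint-correction idea: compare the intended pigment
# recipe against a measured analysis of the mis-mixed can, and report how
# much of each pigment to add to close the gap.

def pigment_additions(intended: dict, measured: dict) -> dict:
    """Grams of each pigment to add to bring the can up to the intended mix."""
    additions = {}
    for pigment, target_grams in intended.items():
        deficit = target_grams - measured.get(pigment, 0.0)
        if deficit > 0:
            additions[pigment] = round(deficit, 2)
    return additions

# A pale-blue recipe where the blue pigment came up short:
intended = {"white": 900.0, "blue": 80.0, "black": 20.0}
measured = {"white": 900.0, "blue": 64.5, "black": 20.0}
print(pigment_additions(intended, measured))  # {'blue': 15.5}
```

Note the sketch only handles deficits; a can with too much of a pigment would need dilution with the other components, which is exactly the kind of further adjustment the monitor-and-correct pattern extends to.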
If an android had speech capability, mobility, etc. which would enable it to express its goals or give visible signs of them, it could also show what we would call determination – if it were a human or animal – to reach various end state goals. And if it had many such goals, we could see which ones it gives priority to at any given time. And if it had a monitoring system of its own internal processes, that system could point out states of the android that are conflicting, and perhaps challenging, perplexing, frustrating, maddening, debilitating, etc., after it “learned” how those words are used to describe the behavior of people or other androids, such as how much time or energy one spends on trying to resolve a problem or complete or accomplish some task, especially if that meant postponing or giving up other pursuits and goals we know one has. The android could tell others are “determined” to do something by observing their behavior but could also monitor its own physical/electronic/digital, etc. state which coincides with such behavior, just as we do, and be able to say “I am determined to see this through. I am going to do this, come hell or high water. I didn’t really care about doing this when I got up this morning, but it appears that once I got started on it, I somehow must have become more driven to do it because I noticed I have been working on it for five hours and missed lunch. I hadn’t realized I was that focused on getting the work completed, but I must have been, since I never skip lunch; I always get too hungry around noon and ravenous by 1:00 if I am running late to eat – but not today.”
Normally, one learns to know what words to use to say things like that after one’s own or someone else’s behavior has been described to you by someone’s saying “You (or that other person) sure are determined to do this, aren’t you? You are working really hard to accomplish it.” One then applies that expression to both 1) the external behavior of others and 2) one’s own internal ‘feelings’ or neurological state one perceives when one is doing or about to do the same behavior. But as just pointed out about noticing one’s own interest and determination in an activity, on those rare occasions in which one does not really attend to one’s internal feelings or neurological state, one might apply the proper description to oneself by noticing one’s own behavior instead, in the same way one applies a term to someone else because of their behavior.
Or consider an android that realizes and says: “I am stuck on this problem and not making any progress but just spinning my wheels and going round and round in circles with the same thoughts, and I am getting low on battery power, which has never helped me think well, so I am going to take a nap and plug myself into a charger for a while and see whether I can solve this problem when I wake up with more energy and feeling better. Usually I can think much better after I have had some R, R, and R – rest, relaxation, and recharging.” I see no reason an android could not know to say those things and know what they mean by them, as well as any of us know what we mean when we say them about ourselves. We might fight sleep because we think continuing to work on the problem will help us solve it until we realize we are too tired to think well, and past experience has shown us, or someone is telling us, some rest, a nap, and/or a diversion will help us be more productive. The android can know it needs rest and recharge to restore its energy and ability just as a computer today can tell us “there is insufficient free storage space on the designated drive to save this much data; please try another drive, free up space on this one, or decrease the size of the file”. It could tell us or itself “there is insufficient battery power to have the energy to process this command; please recharge the battery and/or rest and save energy by closing some programs”.
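The parallel between today’s storage-space warning and the imagined battery warning is close enough to sketch as one function. The thresholds, parameter names, and message wording below are my own illustrative assumptions, modeled on the kinds of messages the paragraph describes:

```python
# Hypothetical self-monitoring: the same check-resources-before-acting
# pattern behind "insufficient free storage space", extended to an
# android's battery level.

def check_resources(battery_pct: float, free_gb: float,
                    task_cost_pct: float, task_size_gb: float) -> list:
    """Return any warnings that should defer the task until resolved."""
    warnings = []
    if free_gb < task_size_gb:
        warnings.append("insufficient free storage space on the designated "
                        "drive; free up space or decrease the size of the file")
    if battery_pct < task_cost_pct:
        warnings.append("insufficient battery power to process this command; "
                        "please recharge and/or close some programs to save energy")
    return warnings

# A low battery, but plenty of storage, before a demanding task:
for message in check_resources(battery_pct=12.0, free_gb=50.0,
                               task_cost_pct=30.0, task_size_gb=2.0):
    print(message)
```

The android “knowing it needs rest and recharge” is, on this picture, just this kind of check plus the vocabulary to report its result in the first person.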
Or consider a strange sort of human phenomenon – how much more excited and appreciative we are after finding something we lost than we were about having it before we lost it, even something as simple as a lost sock from laundry, let alone something more important like your glasses or the TV remote or a valuable picture, document, memento, your wallet with money in it, or other material object. Androids could be the same way, obsessing over something lost, even if not all that important other than that they are being ‘driven’ – i.e., programmed or self-programmed – to find it, and if they do, they will be in an electrical/digital, etc. state that their monitoring considers to be a feeling of being elated and/or relieved. This may have to do with being focused on finding something important that one realizes is lost, but being distracted from it by almost anything else when it is not missing. Upon finding something missing that one has been searching for, an android may exhibit or voice an audible sigh of relief, and act in the same way a human does who is excited about finding the lost object or about accomplishing any other achievement that is important to it.9
If we want computers/androids/robots to have the same kinds of emotions or feelings humans do, instead of, or along with, their own particular neuro-electronic ones, we need to analyze and understand the concepts of different human emotions and how they “work” – what causes them, and what actions they cause, what criteria we use to apply them, etc. and then program the same sorts of ‘triggers’ or causes and behaviors of electronic states into computers which we and the computer would then reasonably call those emotions. For example, although there are some cases where we might interchangeably use the words “ashamed” and “embarrassed”, there are some situations where one of those words is much more appropriate than the other – cases where shame has to do with guilt of having intentionally done something one knows at the time, or later comes to realize, was wrong and that one should have known not to do, whereas one can be embarrassed by another person’s comment or laughter at you for something that is not your fault, or that is not even a moral fault, but just a mistake, such as wearing the wrong clothes to a party (i.e., not being dressed like other people there), or that is not even wrong in any way but is something about which humans or other androids make fun of you or deride you out of ignorance or meanness.
Or consider the case a friend pointed out to me once – a concise way of responding when someone misunderstands your saying you are sorry about something bad they have recently experienced and replies “You shouldn’t be sorry; you aren’t to blame for what happened.” Her cool response to that sort of thing is “That was an expression of regret, not an apology.” A condolence or expression of sympathy is not a claim, nor an admission, of guilt. An android could be programmed to tell, or programmed to learn to tell, the difference between the two.
Machines, including computers, can’t yet do all the things humans can do, but the things that both machines and humans can do, machines can usually do better. And some things humans using machines can do far better than either humans by themselves or machines by themselves. One goal of artificial intelligence is to figure out how to get machines (computers/robots/androids) to be able to do more and more of the things humans can do – to relieve humans of work, particularly work that is onerous or dangerous – and also to do those things better, with or without humans operating them. Generally that requires understanding the complexity of the human mind and the complexities of the concepts we use, so that we can develop good principles that capture decision-making processes – principles which can then be turned into, or expressed as, rules or algorithms for machines to follow.
However, all this needs to leave open the possibility that in particular cases a good argument can override strict application of a rule, so that clearly bad judgments are not made for purely formal, legalistic reasons – as can happen in court when evidence is disallowed because it was technically incorrectly obtained even though the evidence is clear and convincing, indisputable, and was physically available. All formal systems have the potential for error because the rules are either too wide or too narrow, didn’t anticipate or take into account circumstances that later occur, or are just plain mistaken to begin with. So, those rules and errors need to be able to be overridden and modified by the use of sound judgment. Otherwise we end up with a computer like “HAL” in the 1968 film 2001: A Space Odyssey, which follows commands that end up being destructive because it cannot ‘understand’ or incorporate exceptions to its flawed primary directives.
Simple cases of what seem like desires would be an android being OCD in the sense of always “wanting” – i.e., always trying – to place things in certain ways on a desk, separated evenly by certain amounts of space in proportion to the size of the objects, or a ‘child android’ always wanting to carry a lovey or particular toy with it, or an adult android addicted to gambling or to having a certain level of cocaine or other drug in its system, even if it has to go against other protocols or programmed or previously learned processes and needs to achieve those things – being obsessed or addicted to a substance, behavior, or desired experience or perception. The way these things would occur and manifest themselves is the android simply acting to place the things on the desk in some way, and resisting efforts to have them placed otherwise, or yelling at anyone who moves them, etc. It would be like having a very strong camera focusing motor that resists your effort to intentionally make a picture partially or selectively out of focus, because it was programmed (or programmed itself) to focus itself forcefully in only a certain way.10
Likewise with carrying around a security blanket or lovey. In more serious cases, an android would be like a gambling addict who places bets even though he knows the cost of losing would be severe. The mechanism would be like a haywire or ‘rogue’ thermostat set to raise the temperature to dangerous levels despite prudential or reasonable manual settings, with the thermostat “knowing” it is going against prudential and reasonable settings and a part of it trying to resist increasing the temperature, but not being able to override the part of the system that is set to, and does “want” (i.e., try) to, raise the temperature too high.
To make computers be self-aware in perhaps the same way people are, you add to any feedback system, such as a thermostat, a sensor that matches the system readings with other data to tell whether something is wrong or not. For instance, you add a sensor and monitor that can override the whole system if it registers and reports the temperature is being set to a level that will cause some sort of explosion or ruin a baking cake or overcook a standing rib roast or a turkey and dry it out. Or a sensor that points out a heating element is about to go bad and needs to be replaced. This could all go through a computer set to do cost/benefit analyses of when to make the replacement or what heating levels to use for what lengths of time to get the most of the various chemical “flavors” in the turkey or roast. There could be multiple such sensors and feedback loops, some of which may conflict with each other – like a sensor that seeks ice cream, a sensor that seeks a large coke, a sensor that seeks a toy animal, and a sensor that seeks the train ride, but where the android calculates it cannot afford all those things, particularly if before going to the zoo, it places a losing bet of all its money on a horse or team that loses.
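The conflicting-sensors case resolved by cost/benefit analysis can be sketched directly. Everything in this example – the greedy benefit-per-dollar heuristic, the goal names, and the prices – is an invented assumption used only to illustrate how competing drives and a budget constraint might be reconciled:

```python
# Hypothetical resolution of conflicting "sensors": each goal carries a cost
# and a benefit, and the android greedily funds goals by benefit-per-dollar
# until its budget runs out.

def resolve_goals(goals: list, budget: float) -> list:
    """Pick goals in descending benefit/cost order, within the budget."""
    chosen = []
    for name, cost, benefit in sorted(goals, key=lambda g: g[2] / g[1],
                                      reverse=True):
        if cost <= budget:
            chosen.append(name)
            budget -= cost
    return chosen

goals = [("ice cream", 5.0, 8.0), ("large coke", 3.0, 4.0),
         ("toy animal", 12.0, 6.0), ("train ride", 6.0, 9.0)]

# With a reasonable budget, some drives win out over others:
print(resolve_goals(goals, budget=10.0))
# After losing most of its money on a bad bet, almost nothing is affordable:
print(resolve_goals(goals, budget=2.0))
```

A greedy pass like this is only one possible arbitration scheme; the point is that conflicts between feedback loops can be settled by an explicit calculation rather than by any one loop simply winning.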
A more complex behavior might be seeking flattery or a teen-age computer disregarding parental advice. We would call the first easily manipulated and vain; the second, rebellious or disobedient. A computer that purposely shuts down or goes into a resting state when given or faced with various tasks most computers or androids would do, would be ‘lazy’. Etc. More complex yet, if we understood the differences between love and infatuation in humans, we could program computers to distinguish between them and perhaps be able to tell the difference when they ‘experience’ them – claiming they love someone but are just super-infatuated with another. Similarly with the differences between hostility, anger, and temper, or between different senses of humor, such as slapstick comedy or lame ‘dad jokes’ versus clever wit and what makes something be clever – perhaps involving seeing and pointing out an association other androids didn’t see until one came up with it and pointed it out in a particularly subtle or apt way that other androids would not likely have expressed it or ‘thought’ to express it. Give the android the ability to make different kinds and amounts of laughing sounds for another sensor to employ for the different conditions it sees, and you have a computer with a “sense of humor”. If you give it the ability to recognize, or it is able to teach itself to recognize, “dad jokes”, the android’s appropriate response to hearing one would be a groan with the comment “That is sooooo lame. No more dad jokes, please.” Or the android might see, and point out, a way to improve it.
Sometimes something is so ridiculous or silly that it strikes us as funny. And androids could laugh at things that are ridiculous in the same way. When one of my grandsons was old enough to identify colors by name, members of the family kept asking him one evening what color various things in the room were, and it seemed to me that he was going to find this tedious and old if it went on much longer. So when someone asked him what color the [blue] curtains at the window were, I said “That is too easy, they are clearly orange.” He turned to me with a puzzled and somewhat disapproving look on his face and said “They are not orange, they are …” and suddenly he caught on, gave a big smile, turned back to everyone, and said “pink – they are pink” and laughed. He quickly got the idea of the joke: the answer was so clearly wrong that it was silly or absurd. It would not be a very far leap from there to fairly soon understanding, appreciating, and enjoying sarcasm and satire as ways of pointing out logically implied absurdities, not just factual, surface ones.
In a more serious demonstration of the same sort of thing, the child of an acquaintance identified all the colors of different objects on a worksheet in class incorrectly, and the teacher was alarmed that the child didn’t know colors and called for a conference with the parents. The parents knew the kid knew her colors, so they asked why she got them all wrong, and she said it was because the exercise was so stupid and boring that she just gave wrong answers on purpose (to mess with the teacher). Another child one time had a worksheet asking students to write down the first letter of pictured objects such as apples, pears, bananas, etc. She wrote down “F” for all of them, and that teacher also was alarmed and got in touch with the parents. When the child was asked why she put an “F” for all of the objects, she said “Because they were all fruit; and this was stupid and a waste of time to have to put down the obvious first letter of the name of each one.” There is no reason a computer couldn’t have the same sorts of insights and ‘sense of humor’ about the ridiculous. And a computer might prefer playing one song or one kind of music rather than another, just as humans do for reasons we do not understand but that seem to be or become programmed into us somehow by nature and the influence of our own cultural or social group.
Or consider an android with artificial intelligence, a body temperature sensor, and another computer in it that ‘reads’ and interprets the temperatures given, and suppose the android is sensitive in various ways to moderately cold ambient temperatures, working to avoid them. It might start complaining mildly and then begin to shiver and say “What is wrong with the heat in this place? I am freezing. It is cold as hell in here.” And suppose another android, one that is friendly with others, knowledgeable about their feelings, etc., and well-read, says in reply “Everyone else is comfortable just the way it is. Maybe you just need to move around a bit or drink something warm, like some soup or tea or hot chocolate, or put on a sweater or get closer to the fireplace, but be careful you don’t get too close and set your clothes or hair or self on fire. And by the way, hell is not cold, so being ‘cold as hell’ would not be cold at all.” To which the first might respond “You are such a literalist! That is just an expression that means something like ‘it is just as cold in here as it is hot in hell’, or that ‘it is as miserably, uncomfortably, and unbearably cold here as it is miserably, uncomfortably, and unbearably hot in hell’. And yes, that is an exaggeration – to make the point it is really, really miserably cold now, and as far from comfort on the cold side as it is toward the hot side in hell.”
I don’t see any of these things being outside the capabilities of a computer with sensors and controls of the sort described – with artificial intelligence, and with exposure to literature, science, history, philosophy, film, etc. stored in huge fast memory built into it or instantly accessible by it – which tries to make sense of (i.e., find logical coherence in) all its data as best it can, and keeps seeking answers for whatever it cannot yet make cohere. As a child, I once asked my mother how babies got inside the belly of a woman who was going to have them. My mother answered that “the daddy plants the seed”. That satisfied me – like giving the woman a watermelon seed to eat that would grow a watermelon in her. Then, one day, I saw the episode of I Love Lucy where Lucy was all distraught about telling Ricky she was going to have a baby. So I asked my mother how Ricky wouldn’t already know that if he had planted the seed. Without batting an eye, my mother said “Because the seed doesn’t always grow.” And that explanation satisfied me for years, until I was 11 and another kid told me there was a different explanation, although his didn’t make any sense at the time because it seemed pretty disgusting, and it seemed there had to be a better way to make babies than what he said. But androids could try to make sense out of all the information they have access to in the same way.
Or consider, with regard to emotions and to moral choices, for example, a self-driving car that comes to a red traffic light at an intersection and is programmed to stop and wait until the light turns green – and programmed to obey the law in general (and it knows all the laws and what they “mean” or pertain to insofar as any person might understand any law). And suppose it has been programmed to avoid being in an accident, and that its own learning from its built-in artificial intelligence (AI) lets it know that accidents occur when red lights are ignored and run. And further, suppose it “knows” or is programmed to avoid being cited for traffic offenses. But suppose the computer operating the automobile also has a protocol, whether programmed into it originally or learned on its own, not to be late to an important meeting or appointment, and it has the technology to know that there are no cars coming and that there is only a minor probability its breaking the law will be detected. And suppose the car is running late through no fault of its own and this light seems to be exceedingly long. Now there will be conflicting protocols at work within it. I will return to this case in a moment, but first consider a slightly different, but similar, case: the car has to stop for a long slow-moving, perhaps even stopped, train at a railroad crossing on its way to this same important appointment.
When these kinds of things happen to human beings, they induce neurological states we call anxiety and frustration about being late to the important meeting, and fear of the consequences being bad. But the two different cases involve different additional feelings. The train case might involve anger, perhaps even directed at oneself for not leaving earlier or not taking a different route (even though one had left plenty early enough, as one always had before, and even though alternative routes were longer and problematic in other ways, and a train had never caused a problem previously), but it also might involve resignation at one’s plight if there is no way to turn around or find a path that can get one around the train somehow and to the meeting on time. It might involve sadness that this happened to it. But the red light case also poses a choice that the train case does not, because in the red light case the car could proceed to its destination, but has to choose to do that in the face of internal psychological constraints programmed into it or learned by it – rather than the external, insurmountable physical constraints imposed by the train case. Should the car run the light, knowing it is probably safe to do so, but breaking the law, which it is programmed to resist doing, and possibly facing undesirable (i.e., not preferred) consequences for doing that? Feelings which might be called feelings of guilt about breaking the law may arise in contemplating running the light, versus feelings that might be called feelings of foolishness for sitting there just to follow a rule/law though it is probably safe to go. There is an internal conflict going on in the red light case that is not part of the train case. That can produce an anxiety that is different from, and in addition to, the anxiety of being late. It could be considered, by humans and by the android, to be a moral anxiety or stress.
Or suppose the red light problem is given to a group of robot-android students, and they find it interesting, stimulating, and challenging to work on and discuss with each other to try to arrive at a satisfactory resolution. That is, they expend energy on the question and they devote their resources to trying to come up with an answer that logically is compatible with all their knowledge about any relevant kinds of situations. Or, since they are not the ones who will be late or have to break the law, their ‘feelings’ or considerations about the problem might be very different from those of the robot-androids actually faced with the red light problem. They may not devote many resources to solving it or want to expend any energy coming up with an answer that fits their other knowledge. There may be no real concern about urgency that might be called ‘feelings of anxiety’, no frustration, no anger, no hostility toward others as a result of redirected anger, etc. Some student androids might not find it ‘interesting’ at all – having little or no ‘desire’, or programmed or self-learned program, to try to solve it, since it doesn’t affect them. They might call it ‘boring’ or ‘uninteresting’ or a waste of time to have to consider. Others may have other operations or ‘thoughts’ or work they need to attend to or resolve that are more interesting or pressing to them at the time, and might not have the capacity to work on both to the satisfaction or resolution of either.
Human beings are likely to be aware of their frustrations and anxieties in experiencing these cases in real life, or in their interest or lack of interest in thinking about them in a classroom, though we humans are not privy to the internal processes that produce them. And when we do care about classroom cases, we are likely to be aware of the internal conflict and its ensuing consternation and frustration imposed by the warring pros and cons of the options in the dilemma of the red light case – though, again, without our knowing what physically neurologically causes these feelings or how the physiology translates into our mental states.
However, there are also many cases where we do not recognize our own mental state or distinguish its different components, and cannot appropriately name or describe them. We might just walk around wringing our hands and snapping irritably at people, instead of saying “I don’t know what to do and I am afraid I’ll do the wrong thing, and it is making me short-tempered.” Or one person may have more ability than another to distinguish the different, perhaps nuanced, components of his or her feelings and/or of the conflicting moral principles involved and how to resolve them, and may have a better vocabulary or better literary creativity at his or her disposal to describe them. Theoretically the android would have all film and literature from which to draw ways to describe what it seems to be experiencing or ‘feeling’.
Well, computers/androids could do the same sorts of things. The android could just experience conflicting program drives or operations that put it into an unproductive loop that results in its wringing its hands and snapping at people in a way that makes it seem to be irritable. Or it could have sensor components that monitor what is occurring in its programming process and distinguish and identify or name certain conflicts, problems, insufficient capacity or energy levels, or relative ‘strengths’ of opposing operations, so that the monitoring computer could recognize, and even report, the different states the car at the train experiences versus the car at the red light.
This would be no different in kind from an audio volume monitor in a Bluetooth device telling me the volume is getting into a danger level, or how much battery usage I have left either in percent or time of use; or my camera adjusting its lens aperture or sensor sensitivity for the amount of light; or a thermometer or pressure gauge registering different degrees of heat or pressure. Switches do not have to be simple on/off switches like a furnace thermostat that either cranks up the furnace full blast or has it off, depending on the one specific temperature the thermostat is set to. Controls could be sensitive and responsive to degrees and gradations of the different states they monitor. For example, a thermostat and heating and air conditioning system could warm or cool different amounts less than full blast as temperatures change in an area. This could be similar, conceptually, to how a phone, TV screen, or computer monitor changes the brightness and/or contrast depending on the amount of ambient light – although my own phone (which is a fairly inexpensive model) does not adjust the screen’s brightness in “automatic” mode to be anywhere near the brightness and contrast I need to be able to read it comfortably or enjoyably.
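The graded control described here is essentially proportional control: instead of an on/off furnace switch, heating power scales with the size of the temperature error. A minimal sketch, with an invented function name and illustrative gain values:

```python
# Proportional heating control: output power rises with how far the
# reading is below the setpoint, clamped to the heater's range.
# The gain of 0.25 per degree is purely illustrative.

def heater_power(setpoint, reading, gain=0.25, max_power=1.0):
    """Return heating power in [0, max_power], proportional to how cold it is."""
    error = setpoint - reading          # positive when too cold
    power = gain * error
    return max(0.0, min(max_power, power))

print(heater_power(21.0, 20.0))  # slightly cold: gentle heat
print(heater_power(21.0, 15.0))  # very cold: clamped to full blast
print(heater_power(21.0, 23.0))  # too warm: heater off
```

Real thermostats typically add integral and derivative terms (PID control) to avoid overshoot, but the proportional term alone already captures the “degrees and gradations” point.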
My newest cameras are a few years old and they require manually setting them for the type of light source I am using them in – fluorescent, incandescent, sunlight, flash, etc. – for them to get the color balance right in the picture. But I don’t see any reason that a “smarter” camera can’t be developed, if it has not already been made, which would have a sensor to detect the kind of light that is available and/or being used, and set the color balance controls of the camera accordingly. Part of the battle of making computers able to think like us is developing sensors that detect the properties we do, and then figuring out the logic of how we deal with those properties or a logic that will give the same result.
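One simple way such a “smarter” camera could set color balance is the classic gray-world heuristic: assume the scene’s average color should be neutral gray, and rescale each channel so the averages match. This is only one illustrative auto-white-balance method, not necessarily what any actual camera manufacturer uses:

```python
# Gray-world white balance: compute each channel's mean, then the gain
# that would pull every channel's mean to the overall mean.

def gray_world_gains(pixels):
    """pixels: list of (r, g, b) values. Returns per-channel gain factors."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    overall = sum(means) / 3
    return [overall / m for m in means]

# An incandescent-lit scene reads too red/orange; the computed gains
# suppress red and boost blue to restore neutral balance.
warm_scene = [(200, 150, 100), (180, 140, 90), (220, 160, 110)]
print(gray_world_gains(warm_scene))
```

Applying those gains to every pixel is the “figuring out the logic” step: the sensor detects the property (a color cast) and the correction follows mechanically.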
But in nature, one’s pupils do not always automatically adjust to meet the needs of the retina in the same eye, and ears do not automatically adjust for different volume settings or surrounding ambient sounds or distinguish what is wanted to be heard from ambient or conflicting ‘noise’. My wife could see things in low light that I could not, but she needed dark sunglasses in bright sunlight to be able to see, though I could see fairly well without sunglasses to block light even at the beach on a sunny day. Either her retina was apparently much more sensitive than mine, or possibly her pupils opened more than mine did for any given amount of ambient light. Different androids might have different sensitivities and reactions or overreactions to any given circumstance in the same way humans do.
The “monitoring” component of the computer would be akin to our conscious mind, and the processing part of the computer would be akin to the neuro-physiology of our bodies and the subconscious results of our experiences that affect it. The monitoring part could ascertain the data fields in the processing part and reasonably talk about the computer’s “feelings” and emotions, based on the patterns in its processing. It could name some patterns as being ones of frustration; others, anxiety; others, guilt about breaking a law, even if the law is problematic in this particular case; others, fear of being caught and punished, even if wrongfully punished since no one was put in danger by this particular breach of law; etc. There is no reason the monitoring computer cannot make, or have similar difficulty trying to make, all the kinds of distinctions about its internal processes that we make (and refer to as feelings or emotions) about our own psychological, subconscious, and physiological/neurological states and processes. And the monitoring parts of computers may have to develop new vocabulary or concepts as new processes arise that seem important to distinguish in order to avoid operating conflicts or states of being they are programmed to avoid or overcome. So the car at the prolonged traffic light could start to express frustration at having to wait, even though clearly no cars are coming and the light seems to be staying red for no worthwhile reason. It could voice the problem by telling you “Look, there is no good reason to remain here idle; there are no cars coming. And the only reason you are sitting here is because it is a law, which clearly in this case is pointless. 
And yes, I know you are afraid of getting a ticket, but it is highly unlikely you are being monitored, and if you are, they will know you did stop and waited a reasonable amount of time before you went and that it seems like the light is just ‘stuck’. The cop would be unlikely to ticket you.”
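The monitoring arrangement described above, a component that inspects internal readings and attaches emotion names to recognizable patterns, can be sketched as follows. The state variables, thresholds, and labels are all invented for illustration:

```python
# A toy "monitor" that maps patterns in internal state readings to
# emotion labels, so the android can report what it is "feeling".

def name_state(state):
    """state: dict of internal readings. Returns a list of emotion labels."""
    labels = []
    if state["goal_blocked"] and state["can_act"]:
        labels.append("frustration")          # blocked, but a choice exists
    if state["goal_blocked"] and not state["can_act"]:
        labels.append("resignation")          # blocked with no way around
    if state["can_act"] and state["action_violates_rule"]:
        labels.append("moral anxiety")        # acting means breaking a rule
    if state["deadline_pressure"] > 0.7:
        labels.append("anxiety about being late")
    return labels

red_light = {"goal_blocked": True, "can_act": True,
             "action_violates_rule": True, "deadline_pressure": 0.9}
stopped_train = {"goal_blocked": True, "can_act": False,
                 "action_violates_rule": False, "deadline_pressure": 0.9}

print(name_state(red_light))
print(name_state(stopped_train))
```

The two cases share “anxiety about being late” but differ exactly as the essay argues: the red light adds frustration and moral anxiety, while the train yields resignation.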
It may be easier for an android to recognize all its own inner states, conflicting or not, than it is for people to recognize, distinguish, and describe or name all their own various emotional feelings or physical and psychological states. Perceptions of internal states are particularly difficult for people to identify, let alone discuss with someone who may not have experienced what you are talking about. In a simple case, you know exactly how a food tastes because you perceive it directly, but if the flavor is made up of several different ingredients, you might be hard pressed to describe how it tastes, and might end up sounding like some sort of stereotypical wine critic trying to describe the taste of a wine in ways that no one else can understand and that sound pretentious and ridiculous. In her book The Feminine Mystique, Betty Friedan called the kinds of feelings suburban women of the 1950s with conventional marriages and lifestyles had “the problem that has no name”. But all psychological feelings, along with all other perceptions, arise without names to begin with. Names and descriptions have to be applied to them. Some cultures do that better than others. 18th- and 19th-century British writers and orators made fine distinctions among various feelings and emotional states that have been lost to most 21st-century Americans, who have neither the vocabulary nor the concepts, and who do not themselves see the differences, whether nuanced or not and whether important or not, among various similar feelings.
And humans do not always understand their own desires very well. I had an adult student one time who was married and in his early thirties who said he really wanted to water ski but had never been able to do it successfully. He had even bought a boat to be able to do it, and he had had ropes break while trying to pull himself up onto the water surface, but he was never able to actually water ski. I asked him why he wanted to water ski, and the conversation went like this in class:
Him: I see other people water skiing and they are obviously having such great fun and enjoyment, it makes me want to do it.
Me: But ten minutes ago when we were talking about likes and dislikes, you said it didn’t matter how much other people liked raw oysters and obviously enjoyed eating them, you were not about to even try eating them.
Him: Well, I like going fast on the water, and that would let me do that.
Me: But you can go fast on the water in your boat, and in fact, you can’t go any faster on the skis than you can in the boat (except possibly for slaloming or skiing out in a very wide arc to the side so that you are covering a greater distance than the boat in the same amount of time, and thus going faster, if I am correct about that) and you said your boat is really fast and powerful.
Him: But I like water skiing!
Me: You don’t know that. You’ve never been able to do it.
Him: Then why do I want to do it?
Me: I really don’t know, but because you want to do it so much that you have gone to all this expense and effort, I am pretty sure you will really enjoy it the first time you are able to do it successfully, because it will be a triumph of perseverance and because it will be a desire or wish come true. As I have been saying, and as Bishop Joseph Butler held over 200 years ago, happiness is not a goal, but a resulting side-effect or by-product of striving for or reaching our goals. It is the desire or drive that comes first, but it is not always clear what makes the desire or drive occur. And, as I pointed out at the beginning of the class, the way to get anyone to have sex with you is simple and universal – just get them to want to so much that they will do it no matter what. Of course, you are on your own about figuring out how to do that, and it is different for different people, and you may not get someone you desire to want to have sex with you, but then you will just be out of luck, I guess.
With regard to androids, my point here is that we can build androids or computers to do certain tasks and to do that till they achieve certain results or readings. That is no different from a human pursuing something. Then, if we have a monitor in the computer that detects when it is in that state, the monitor can consider that state “wanting to” do the thing it is in pursuit of and the android can truthfully say “I want to do that”; e.g., (learn to) water ski. And if the pursuit requires some activity by the android, we can see it pursuing that activity and say “Wow, s/he must really want to do that” depending on how much effort it takes or what it gives up to try to achieve it. Conversely, if it avoids something, we or it can describe it as being something it hates or finds painful. Hence, we call it seeking pleasure or avoiding pain or suffering, even though all it is really doing is pursuing a task that has a particular programming or self-programmed, learned goal. What we consider seeking pleasure is just trying to achieve or maintain a neurological state. We seek to end or avoid pain, but pain is just a state of our nerves we seek not to have, or to end as soon as possible when we do have it. Or we might be a masochist ‘hard-wired to pursue’ or maintain a state that is what others consider a painful state to avoid and/or that makes us wince or scream out in agony even as we want to keep feeling it. It could be something we want to experience in one way but not in another way, or that part of us wants to experience while also a different part of us does not want to experience it.
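The idea that “wanting” is just pursuing a goal state, with intensity read off from the effort expended, can be sketched like this. The class, the goal, the thresholds, and the phrasing are all illustrative:

```python
# A toy "drive" that persists until a target state is reached, plus a
# monitor that reports the drive as "wanting", with intensity scaled by
# how much effort has been spent so far.

class Drive:
    def __init__(self, goal, target):
        self.goal = goal
        self.target = target
        self.progress = 0
        self.effort = 0

    def pursue(self, step):
        """One attempt toward the goal; effort accumulates even on small steps."""
        if self.progress < self.target:
            self.progress += step
            self.effort += 1

    def report(self):
        """The monitor's verbal reading of the drive's state."""
        if self.progress >= self.target:
            return f"I did it: {self.goal}!"
        strength = "really " * min(self.effort // 3, 2)
        return f"I {strength}want to {self.goal}."

ski = Drive("water ski", target=10)
print(ski.report())              # a plain "want" before any attempts
for _ in range(6):
    ski.pursue(1)                # repeated failed attempts raise the effort count
print(ski.report())              # the monitor now reads greater "wanting"
```

The outside observer’s judgment in the essay, “Wow, s/he must really want to do that”, corresponds here to reading the accumulated effort, not any inner glow of desire.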
Not being a masochist, I don’t understand what it is like to desire pain, though I understand willingness to have pain for a greater good. One of my aunts taught me when I was young how to withstand the dental pain of having a tooth drilled, before there were dental anesthetics. And, even after Novocaine became available, because the long needles scared me (about breaking off in my gums) more than the pain did, and because of all the stories about people not knowing something was wrong with their dental work until after dentist office closing time when the anesthetic wore off, I have always had dental fillings done without anesthesia. It is not a problem. There are ways to tolerate pain, even unexpected sudden pain, with the right mindset about the difference between having pain and reacting to it in a certain way. In one championship football game a star receiver got an ankle injury in the first half, but stayed in the game and made crucial plays in the second half that won the game. Afterward he was asked how he was able to play with the injury and he said something like “It was just an injury, not a disability. I could still play; it just hurt to do it.” As in this film clip from Lawrence of Arabia, the trick is in not minding the pain.
Not minding the pain is easier in some cases than others, such as when one is distracted by something pleasurable to experience, or if one fears or wants to avoid something more than the pain. For example, one of my grandfathers, whom I saw every day as I was growing up, had had back surgery before I was born and had become addicted to Demerol to relieve agonizingly severe pain ever after that. I say it was an addiction because sometimes an injection of distilled water would alleviate his pain, but until he got his injections, he was clearly in total agony and often screaming in pain at the same times each day. To me, it always seemed that the need for (addiction to) the Demerol was the problem that exacerbated his initial pain from the injury and surgery. So, I feared any sort of drug dependency or addiction. So, when I had Achilles tendon repair surgery, I did not use my morphine drip in the hospital. Nurses thought there must be something wrong with the drip, that it was not dispensing morphine. I said, no, I just never used any. They said “Doesn’t it hurt?” And I said “Well, yes, but that is just because I had surgery, not because something is wrong or getting worse. I don’t mind the pain. I think about other things.”
Now someone might say that I just may not experience as much pain as someone else, and that is why I can ignore or tolerate it. But until my aunt taught me how to do this, I couldn’t stand pain, so I have no reason to believe the method lessens the pain as much as it lessens the effect of the pain or my reaction to it. One just accepts knowing that one is in a painful state rather than reacting to it in a way that tries to avoid it. I like to say that “pain never hurts anyone” as a kind of paradoxical zen koan meaning something like it is not pain that causes injury or harm, even if the pain is a sign of injury or of (potentially) increasing harm. With computers it would be a state of their electronics that they work to resist getting into or remaining in as much as we do pain. Conversely, they could seek certain states in the same way we do that we then call pleasures. And a stoic android could accept and ignore pain (or the pursuit of pleasure) or overcome it if necessary. A masochistic android would be one that seeks pain in the same way a human masochist does and could have the same sort of mixed reaction or approach/avoidance response to it. Although pains and pleasures are what we call things we seek to attain or to avoid, they require our compliance in that, which is not always necessary to grant them. There are ways to override the need to seek the pleasure or avoid the pain even though we or some part of us has that need.
Androids could do the same thing. They could ignore a drive for attaining or avoiding various states, if they have a drive to achieve a different state they consider more important. There could be digital or electronic states androids are trying to achieve or to avoid that they distinguish as a prudential imperative (in order to have a better state later) or a moral imperative (based on a moral principle, such as the one I offer at the end of this essay), which they see (or a part of them sees) as being a more important pursuit than pursuit of a surface desire or simple craving. Androids could choose to do the things their principles imply are right, which may conflict with their desires, just as humans have obligations they don’t like or want to do but know or believe they should. Conversely, androids could have, as people do, ‘guilty pleasures’ or cravings or addictions they give in to for things which they or we know (or mistakenly believe) are bad for us or wrong to do. For example, we sometimes may feel like resting but also know that we should do something we said we would, like take our kids to the zoo, or we might want to listen to a particular important and exciting football game with our earbuds but know that we should not do that while we are getting married in a solemn church ceremony, or we know we should not drink too much beer before the wedding begins. But in some cases we give in to a pleasure we know (or mistakenly believe) we shouldn’t and that we know (or mistakenly believe) we will regret later. An example of a mistaken belief would be keeping a promise that is too risky or dangerous to keep compared to the benefit it might bring if successful, like if you were up all night for an emergency but you had told your children you would take them to the zoo or hiking, and then drive them there, risking falling asleep at the wheel or letting them get in harm’s way at the zoo or on the hike.
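The arbitration described here, where a moral imperative overrides a prudential one, which in turn overrides a mere desire, can be sketched as a simple tiered choice. The tiers, intensities, and examples are all illustrative:

```python
# Tiered arbitration among drives: a higher tier wins regardless of raw
# intensity; intensity only breaks ties within a tier.

TIER = {"desire": 0, "prudential": 1, "moral": 2}

def choose(drives):
    """drives: list of (action, tier, intensity). Returns the winning action."""
    return max(drives, key=lambda d: (TIER[d[1]], d[2]))[0]

wedding_day = [
    ("listen to the game", "desire", 0.9),
    ("skip the third beer", "prudential", 0.6),
    ("attend the ceremony attentively", "moral", 0.7),
]
print(choose(wedding_day))
```

A strict hierarchy like this is the simplest scheme; a more human-like android might instead weigh tiers against intensities, which is exactly how “guilty pleasures” could win out even when the android “knows better”.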
Computers could presumably be designed and constructed to recognize and distinguish their own internal digital or electronic states better than we can recognize and distinguish our own neurological/psychological ones, and therefore to name those states, and the behaviors associated with them, more readily, easily, and effectively than we can name ours. And they could understand better what each other is referring to and means. It is difficult for human beings to know or understand other people’s feelings, particularly subconscious ones, because we cannot directly perceive them. Sometimes we wonder what other people could possibly be thinking, or whether they are thinking at all, when they say or do something that seems so totally wrong. Humans often seem incredibly unintelligent or irresponsible, thoughtless, and to have no real self-awareness other than some sort of bare minimum at or near the level of only “likes” or “dislikes” of some things. They make us wonder not just whether there is intelligent life elsewhere in the universe, but whether there is all that much on earth, or even in the room; and they make us wonder not just whether there could be androids with artificial intelligence, but how many human beings are without actual or natural intelligence.
Androids could share their inner workings/files/data (e.g., in digital electronic form) with each other to point out what they are experiencing/feeling. Since they would have things to examine or perceive together, they could discuss, conceptualize, and distinguish them much in the same way humans can discuss, conceptualize, and distinguish external things we perceive, such as distinguishing an attic space in a house from a finished room adjacent to it, or distinguishing a kitchen from a separate dining room, or distinguishing the eating area of the kitchen from the cooking area in a room that has both, or distinguishing the sink from the stove, or the range area of the stove from the oven.
And knowing about internal conflicts lets humans, and might let androids, come up with potential solutions for them, such as vasectomies or tubal ligations for people who want sex but know they do not want to risk pregnancy, or like benign sugar substitutes for people who crave sweets but do not want to gain weight or put themselves at greater risk for diabetes. Understanding a problem is generally necessary for discovering or inventing a good solution for it.
Or there can be abject consternation for an android, just like for a human, over something that is both moral and prudential but whose consequences are unknown ahead of time when the choice has to be made, as in how much or what kind of insurance to buy to protect against the cost of the loss of something important. For example, insurance companies often give a lower rate for bundling automobile and home insurance, and my own policy has come to seem expensive to me, so I have called other insurance companies.
Research shows that in the area I live, earthquakes are not very likely, though a few have occurred in the past 100 years; those were not generally strong, but they did cause damage to brick structures such as my house. The deductible is also high in case of damage, and it is difficult to tell whether the previous earthquakes would have caused damage whose repair costs would exceed the deductible plus the cost of the premiums already paid prior to the earthquake. One local geology professor believes the area will be hit by a strong earthquake, but others do not, so the probability of the insurance “paying off” (or being financially worth having) is almost impossible to know in terms of monetary savings or expenditures alone.
I finally decided that, being as risk averse as I tend to be, I would rather spend the extra $600 per year for premiums that include the earthquake coverage in order to avoid having to pay for a whole new home or for repairs that substantially exceed the deductible, even if the odds are low that I would ever have to file a claim. Someone else, whether human or android, might prefer or need to save the $600 per year or spend it on something else. But I think about how much I would regret it and kick myself if I gave up the coverage and then had a catastrophic loss due to an earthquake. That imagined feeling of regret, for me right now, vastly outweighs the benefit of having the $600 to spend elsewhere. And that is either the cause or the result of my being risk averse – or it is simply the same thing as being risk averse. So, if no earthquake occurs that does costly, serious damage to my home, my premiums will have paid not for repairs or replacement but for the peace of mind of not having to fret about the possibility of a horrifically expensive one.
I want to reliably insure against almost any kind of reasonably likely catastrophic loss, including earthquake, because I know how bad I would feel if I had such a loss and was not insured for it but could have been if I were willing to spend more money. It is not that I want, in the sense of desire for pleasure, to spend money on earthquake insurance, but that I do not want the pain of the cost of losing a home in addition to the pain of the loss itself.
I do not insure against flood, however, because my home sits just below the crest of a high hill that slopes away from it in three directions into large valleys, and it is protected from water coming off the part of the hill above me across the street because the street slopes away in both directions and because the very top of that hill falls off steeply into large valleys on its other three sides. But I do worry about loss due to earthquake, because there is a slight probability of suffering one: the last fairly powerful earthquake in my state occurred just over 100 years ago in the small city where I live now, and there have been two other small earthquakes in this half of the state since then, the most recent 65 years ago. Unlike the company I am insured with now, most companies do not offer earthquake coverage, and they say it is unnecessary because they have never had a claim for earthquake damage in all their years as insurance agents. Moreover, the deductible for earthquake claims is quite high, in my case 10% of the value of the home, and for some companies 20%, meaning that for, say, a $300,000 home, damage from an earthquake would have to exceed $30,000 or $60,000 before the insurance begins to pay for repairs or replacement. Still, that, plus the cost of $600/year premiums, is more affordable than having to spend $300,000 to replace your home.
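The deductible arithmetic here is simple enough to sketch in a few lines of code. The home value, deductible percentages, and premium are just the illustrative figures above, not actual quotes:

```python
# Illustrative figures from the discussion above (not actual quotes):
# a $300,000 home, a 10% or 20% deductible, and a $600/year premium.

def deductible_threshold(home_value: float, deductible_pct: float) -> float:
    """Damage must exceed this amount before the insurer pays anything."""
    return home_value * deductible_pct

def break_even_damage(home_value: float, deductible_pct: float,
                      annual_premium: float, years_held: int) -> float:
    """Damage at which a claim recoups the deductible plus all premiums paid so far."""
    return deductible_threshold(home_value, deductible_pct) + annual_premium * years_held

# With a 10% deductible, damage below $30,000 is entirely out of pocket;
# after ten years of premiums, roughly $36,000 of damage is the true break-even.
print(deductible_threshold(300_000, 0.10))
print(break_even_damage(300_000, 0.10, annual_premium=600, years_held=10))
```

The point of the second function is that premiums already paid are a sunk cost that raises the damage level at which the coverage turns out to have been a financial (as opposed to psychological) win.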
And if the point is to prevent unaffordable or catastrophic costs, it is better to pay an affordable premium than to risk the loss, even if the risk is slight. Besides, it is difficult to assess both the probability of an earthquake in this area and its likely strength. A local university geology professor thinks we are due for a fairly strong one, while other geologists think any earthquakes here will not be particularly strong; and my home has a brick exterior, the kind of structure most vulnerable to earthquake damage and the kind most damaged in the earthquake 100 years ago. I finally decided, and an android might decide the same, that what we might call my peace of mind about avoiding the catastrophic replacement cost of major damage is more important to me than the premium cost of insurance coverage I may never use. And basically, because you do not get your premiums refunded for not having damage, most insurance buys peace of mind, not actual monetary benefit.
I like to jokingly ask insurance agents if they will refund my premiums since I didn’t have a claim or any accident, but since they have to deal with lots of people who actually do mistakenly think insurance is a prepayment for damage repair or replacement, they don’t know I am joking until I say so and explain that I understand my premium pays not just for damages or accidents I suffer, but for those other people suffer. The way insurance is supposed to work is that we each pay a little to cover the expensive, low-probability losses that a relative few of us will randomly and unpredictably have. And an android or another human being could go through all this reasoning and arrive at the same or a different conclusion, deciding it would rather save money on premiums and consider the risk of catastrophic loss too low a probability to worry about.
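The pooling idea can be illustrated with a toy calculation; the probability and loss figures below are made up for illustration, not actuarial data:

```python
# Toy risk-pooling arithmetic; the probability and loss figures are
# invented for illustration, not actuarial data.

def fair_premium(loss_probability: float, loss_cost: float,
                 overhead: float = 0.0) -> float:
    """Actuarially 'fair' premium: the expected loss per policyholder,
    optionally marked up for the insurer's overhead and profit."""
    return loss_probability * loss_cost * (1 + overhead)

# If 1 home in 1,000 suffers a $300,000 loss in a given year, each of the
# 1,000 policyholders paying $300/year exactly covers the expected claims.
print(fair_premium(1 / 1000, 300_000))
```

Any real premium sits above this expected-loss figure, which is exactly why buying insurance is, on average, a monetary loss paid for peace of mind, as the passage above says.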
On the other hand, applying the same reasoning process to the purchase of an appliance extended warranty, I normally decline the extended warranty, mainly because 1) the initial standard product warranty period will normally reveal any serious manufacturing defect in the appliance, which 2) is less likely to appear in the additional warranty period anyway; 3) I can repair many things myself, especially with the help of the Internet and its abundance of repair instructions for all kinds of things; and 4) while I would regret having to replace the entire appliance at my own cost, I am willing to take that chance if the cost is affordable and not devastating, particularly when the purchase price of the warranty is almost as high as the replacement cost of the product. In addition, 5) I have gone through the process of filing some extended warranty claims, and they were a hassle, sometimes not honored anyway for reasons that involved legal loopholes that were borderline fraudulent, because the company was knowingly excluding the common causes of a malfunction whose significance the consumer had no reason to know, understand, or appreciate at the time of purchase. I had to get the state attorney general’s office involved in one claim before the company finally settled it with at least some money in payment. So, with regard to an extended warranty on the purchase of anything I could readily afford to replace if I had to: although the cost of replacing the item would be disappointing, it would be tolerable. And that does not take into consideration the cost and hassle of making a claim, shipping costs for repairs, and worry about whether the insurer will honor the claim or find some reason it is not covered. Some insurance is not worth having even if you need a repair or replacement after the manufacturer’s warranty period has ended.
An android could go through all this reasoning and have had the same sort of bad experience with a past claim, or easily have the knowledge of other androids who did. Our responses to any situation or problem often depend on what we learned from others before. Androids have the benefit of ALL past individual android experiences collectively if they each share all their knowledge as transferred data. Because people have limited capacities for sharing and storing information, we don’t have the benefit of each other’s experiences and knowledge to the extent that androids could – apart from having each other’s collective knowledge available on something like an extremely high speed internet with a search capability that is able to understand our questions and issues or problems and respond reliably and appropriately. The ability of search engines to understand our questions has improved tremendously recently. The other day, for example, I typed in a search for something like ‘youngest marriage ages allowed in the United States’, and Google asked whether I meant minimum age requirements for marriage by states. I went with that, and it gave me the information I was seeking. The ability of a search engine to help you clarify your question for its search purposes is extremely helpful for letting it give you the results you are seeking instead of all the possible things related to how you phrased your question, most of which have nothing to do with what you meant.
I will admit, however, to possibly being inconsistent and irrational about extended warranties in some cases – those in which I will buy a product with a longer initial warranty even if it costs a little more than the one from the same company with the shorter warranty – like a toilet flapper with a 3-year warranty rather than the same company’s flapper with the 1-year warranty – and even if I can afford to replace it if it does break down. I don’t seem to mind paying a little more for a product with, say, a five-year warranty included than for a product with a one-year warranty, but I will not buy an additional four-year maintenance warranty for that same product even though the cost would be the same as buying the one with the five-year warranty. I think that is probably irrational, but on the other hand, it strikes me (again, probably irrationally) that the product might be better built and more reliable if the company is willing to cover it longer for the original selling price, even if that selling price includes what would have been the additional premium for an add-on warranty. I cannot tell whether I am rationalizing the irrational or not, but I suspect I am.
And again, if we are talking about a major, high cost of repair or replacement compared with a relatively low and affordable premium, I will opt for the insurance or additional warranty, particularly from a major manufacturer with a good reputation. I actually lucked out completely on one such major appliance policy – my first heat pump, which the local utility company insured for 10 years for total replacement as an added incentive to buy a heat pump, or an all-electric home with one, when they were first introduced. The compressor on my heat pump died 9 years and 51 weeks into the warranty period, and the company replaced it at no cost to me. The normal experience, of course, is for it to die a day or week after the 10 years would have been up, but fate was on my side that particular time, and that was a real thrill in addition to just saving the money.
An android may even be able to analyze my behavior and feelings about this better than I can, by seeing patterns in my buying behavior and supposed rationales that I cannot see. And, in general, AI could tell us what makes things witty, funny, shameful, etc. by seeing patterns we may not yet have recognized ourselves in what people with, for example, a sense of humor find funny – in terms of jokes and wit, not insults of despised or disadvantaged people, like the hateful mocking of a handicapped person or stutterer – even though there can be humorous things about people with odd conditions that are not hateful or insulting, as when people with those conditions tell the jokes on themselves because they themselves find them funny. Or there can be dark humor about terrible human conditions, perhaps as a show of understanding, sympathy, and solidarity with those others who suffer from the condition being sarcastically or satirically discussed. But basically those depend on some sort of witty insight that cobbles together disparate ideas in a manner that others recognize once it is stated or pointed out but would not likely have thought of themselves.
Science works in the same way – seeking out patterns; and art often involves various kinds of patterns of sight or sound or both. We just react differently to the different kinds of patterns – scientific, humorous, artistic, etc. – for whatever reason or cause is built into how our minds work. The search for significant scientific patterns serves serious, useful purposes or answers questions we have; the search for patterns in comedy writing is for humorous entertainment, and possibly also some pointed, useful social or personal insights; the search for patterns in art is to explore and possibly expand our aesthetic awareness and feelings, but it also sometimes serves our moral or other intellectual understanding. For example, most episodes of the original Star Trek television series illustrated, in an easily understood, interesting, and meaningful way, philosophical and ethical issues that were often difficult for students in a philosophy course to understand or find interesting and meaningful. Dramatic fiction has a way of bringing otherwise merely intellectual points and ideas to life.
Notice, however, an android’s seeing, reproducing or otherwise having, and acting on, patterns and underlying principles which fit (or ‘explain’ or describe or generate our behaviors) is different from their simply mimicking our behavior at right times by watching us like we watch animals in a zoo. Mere mimicry is different from responding to underlying emotions or “mental (or electronic or digital) states” which an android can respond to in the same way we respond to our instincts or biological, neurological states. In short, an android should be able to actually ‘feel’ love, anger, empathy, sympathy, disdain, hatred, curiosity, confidence, shyness, etc. similarly to how we do, not simply act loving, angry, sympathetic, curious, shy, etc. by only emulating our outward behavior when we feel those emotions.
Now the ways different androids react to different experiences, perceptions, etc. could be programmed in to begin with intentionally or randomly distributed (like various genes and mutations or just psychological reactions in nature), with potential mechanisms for keeping and spreading the android characteristics through some sort of evolutionary natural selection or through an infectious process or by knowledge, as in the adage that some people learn from thinking, some from hearing about other people’s experiences, but some people have to pee on the electric fence. Humans are able to spread ideas and knowledge, though many do not listen or heed it. Androids probably would transmit ideas or at least information more effectively, and certainly more efficiently by simply copying and sharing files or data.
More than likely, different reactions to the same phenomena by different androids would occur much as they occur in human beings, in that we have all kinds of evolved likes and dislikes of different things and all kinds of other reactions to various things. And some of our likes or dislikes are caused less by the characteristics of the new phenomena themselves than by how previously developed responses to other phenomena react to the new one. It is like overreacting to something harmless someone said because you had developed a defensive response when someone in your past said something similar that was harmful or threatening – basically the idea of having psychological ‘baggage’, of which people seem to have all sorts, often mostly unconscious, without our knowing why we react to something the way we do.
Or, suppose human reaction to any new phenomenon was simply either fight or flight. That would not likely lend itself to problem solving or to cooperation. Over time individuals, groups, and species develop mechanisms to cope with, or take advantage of and utilize, different things – mechanisms which can have a less than helpful response to new phenomena or which can trigger conflicting responses. One example in nature is an autoimmune disease in which someone’s or a species’ immune system attacks its own cells because those cells are too similar in some way to what the immune system previously defended them against. Another example would be an overly aggressive immune response to a new virus, such that the by-products of the immune response ended up being far worse than any change or harm the new virus might have caused if left alone.
Instincts that evolved in a rain forest may or may not work well in a crowded urban environment with automobiles. Behavior learned growing up on a farm may not serve one well in downtown Manhattan or New York City in general. And any new invention can cause individual adaptation problems or difficult social upheavals in previously learned behavior or previously developed instincts. It is not just that a new invention replaces an old one, as the automobile replaced horses and as electric cars may replace gasoline-powered ones, but that new inventions can wreak havoc on old conventions of other sorts. For example, all the recent new ways of providing entertainment, such as streaming, have caused problems with the previous contracts for paying the people who create it. As of this writing, writers and actors are on what is projected to be a prolonged strike over how their future contracts are structured. And insofar as androids can take over the jobs of humans, the question then is how to divide labor and leisure among humans in order to have a fair distribution of increased benefits with fewer burdens for everyone. Although economics often creates unnecessarily unfair distributions of work and pay, it is generally more amenable to incorporating people fairly into the workforce to do work that is newly necessary than it is to treat fairly and humanely people whose jobs have become unnecessary. It seems to be much easier to distribute fairly new work that is recognized to be necessary than to institute policies to distribute fairly new leisure that is recognized to be possible because industry and technology have reduced the need for human labor.
Androids which do not have certain abilities or capacities would be just like humans who lack a particular or general sense of humor, appreciation for wit or cleverness, metaphor, or analogy, a sense of decency or shame, a sense of altruism, etc. In the most severe cases, androids would be akin to psychopaths, sociopaths, and pathological narcissists – as in robots that go rogue in science fiction stories – or to any other technology that is misused or used for evil and destructive purposes.
Hence, androids playing the imitation game could talk about emotions and ethics just as well (or poorly) as humans do and have an excellent chance to fool the interrogators in the game into thinking they are the human.
But in order to keep androids from behaving unethically and essentially producing the same problems in society that evil humans do, they could be given an ethical protocol something like the following or some better form of the following in case it has flaws:
An act is right if and only if, of any act open to the agent to do, its intrinsic or natural consequences, apart from any extrinsic unfair rewards or punishments, bring about the greatest good (or the least evil, or the greatest balance of good over evil) for the greatest number of deserving people, most reasonably and fairly distributed, as long as no rights or incurred obligations are violated, as long as the act does not try to inflict needless harm on undeserving people, as long as the act does not needlessly risk harm in a reckless, negligent, heedless, or irresponsible manner, and as long as the act and its consequences are fair or reasonable to expect of the agent.* Rights have to be justified or explained or demonstrated; not just anything called a right is actually a right. Further, the amount of goodness created or evil prevented may, in some cases, be significant enough to legitimately override a right or incurred obligation that a lesser amount of good created or evil prevented may not. Overriding a right or incurred obligation is not the same as violating it.
*What is fair and reasonable to expect of an agent:
This particular principle is fully explained in my “Introduction to Ethics”, but it can be overridden or modified if strong evidence shows modification is necessary or that some part of it is incorrect. And because the use of it can give conflicting results if different people (or androids) give different weights to the various elements in it, I also explain that two different answers can each separately be correct, even if incompatible with each other, if reasonable people (or in our case, reasonable androids) disagree after examining all the relevant evidence.
But the basic plan is to have androids that tend to be much more likely to be constructive rather than destructive. Human beings seem too often to opt for destructive acts rather than constructive ones, and, unfortunately, it is far easier to be destructive than to be constructive. What takes months or years to construct can be wiped out in an instant by an explosive device, and lives that take decades to acquire knowledge, wisdom, and ability can be snuffed out in an instant by a bullet, or in a day or two by chemical or biological weapons, or accidentally through carelessness or negligence.
At this point, I would like to interject that although I realize AI can
be used for nefarious purposes and that androids could be programmed,
taught, trained, and exposed in bad environments to be as malicious
and/or psychopathic as humans, or that they can be tricked into
believing falsehoods they cannot detect if society at large were to be
filled with lies and false, fabricated, tampered-with evidence that
supports those lies, I am only interested here in the issue of whether
“good” androids could be built that would essentially be better and
brighter than the best and brightest of human beings, not the worst and
most cunning. How to prevent AI androids from being programmed to
do evil or from learning to do it on their own will possibly be a
serious problem, but not one I will address here. I say “possibly”
because it may not be a serious problem if greater knowledge of
logic, science, literature, psychology, philosophy, etc. – which
androids could have because of greater memory capacity and information
input speed, and the ability to easily transfer information from one to
another – contribute to greater understanding, accurate knowledge
(or at least better ability in general to distinguish between true and
false beliefs), and better morality in the way education in general is
supposed to, though it does not always seem to, as when greater
knowledge (even if assuming there is truth available) makes a psychopath or sociopath even more dangerous. (return to text)
The pharmacy I use recently changed its phone answering system to one
that used AI instead of a menu. I was trying to order a
prescription refill, but my prescription was out of date. When
that happened with the old system, it would just say it would contact
the doctor’s office to update the prescription, but the new system
couldn’t deal with that situation or even say it would connect me with a
person who could. I had to call back to speak with the pharmacist
assistant who apologized and agreed that the new system just “sometimes
goes haywire.” That prompted me to say, “Yes, they should not be
modeling AI after people with little intelligence or morality, nor with
no conscientiousness or sense of personal responsibility. We don’t
need ‘artificial unintelligence’.” In short, although making AI
that is indistinguishable from dumb, irresponsible, or lazy humans may
be useful, and even amazing, it is not the main goal and would not be
nearly as good or interesting as AI that would be at least as smart and
conscientious as smart and conscientious people, and especially the
smartest and most intellectually capable and morally responsible
humans. Plus, computers would not have to specialize in the way
humans do, because computers could theoretically have unlimited (or
immediate access to unlimited) knowledge and understanding in all areas,
whereas human factual memories and capacities for expertise and
brilliant understanding seem to be limited to one or a few subject
areas. (return to text)
I realize that at some point, and maybe even now, there may be other
mechanisms by which computers work, and I mean to include those too, but
will just use the terms “electronic” or “digital” states to refer to
the inner workings of computers and androids, which are simply computers
in a human looking form and that have human physical abilities such as
walking around, being able to burp babies, open jars, play tennis, or
snuggle with other people, etc. (return to text)
When my children were young and I took them to the zoo, they were going
to want to have me buy them snacks, drinks, souvenirs, or the trip on
the little zoo train that took visitors on a short tour. Since
that could get out of hand and children didn’t really have the concept
of money not being unlimited, I gave them each $5 to spend as they
wished on any of those things, which would buy a reasonable amount of
what they might want, but not everything they might want. I told
them ahead of time that I would let them know what combinations of
things the money would buy and what each contemplated purchase would
cause them to have to eliminate from the other things. At that
time, $5 would typically buy each of them the train ride (which they
liked to take), a soft drink, an ice cream cone or hot dog, etc.
Basically they had to decide how best to spend the $5 to get the most
they might want with it. Therefore instead of deciding whether to,
say, get a hot dog, soft drink, and snow cone, and/or toy animal, they
had to decide which of those things they wanted more or which they
wanted more if they also were going to ride the train, which might cut
out two of those things. For whatever reason, this worked to keep
them from asking for more money or from whining about not getting what
they wanted. They seemed to feel they were in control of what they
bought rather than me being in control of it and arbitrarily and meanly
denying them things they wanted. It did not seem to occur to them
to argue at the beginning that $5 was not enough to work with.
Whenever you work with a fixed amount of money you can spend, the
question is not just whether you can afford something or not, but
whether you want it enough to give up the money for it that would be
money you would have to buy other things instead. It is not just
what some product or service will cost you, but what other products or
services you might have to forego if you purchased the one you are
considering. In other words, you must take into account what economists
call the “opportunity costs” of any choice. (return to text)
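The zoo example can be made concrete with a small sketch that enumerates the affordable bundles under a fixed budget. The prices here are hypothetical, since only the $5 budget comes from the story:

```python
from itertools import combinations

# Hypothetical zoo prices (the story gives only the $5 budget);
# every affordable bundle shows what choosing one item forecloses.
prices = {"train ride": 2.00, "soft drink": 1.00, "ice cream": 1.50,
          "hot dog": 1.75, "snow cone": 1.25, "toy animal": 2.50}
budget = 5.00

# All combinations of items whose total price fits within the budget.
affordable = [combo
              for r in range(1, len(prices) + 1)
              for combo in combinations(prices, r)
              if sum(prices[item] for item in combo) <= budget]

# The train ride, a soft drink, and ice cream fit the budget together,
# but adding a hot dog to that bundle would exceed it: the hot dog's
# opportunity cost is one of the other items.
```

Listing the feasible bundles side by side is essentially what the children were doing when they weighed the train ride against two of the snacks.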
9 The “distance” one is from a Goldilocks point does not necessarily dictate or cause the effort made to get closer to it; that happens only when one has the drive at the time. Nor does being at a Goldilocks point mean one necessarily wants to maintain it, because one can be satisfied after a while and then have a different drive begin to operate that one pursues – one can savor or ‘bask in the glow’ of attaining a particular state usually only for so long and then have the need to move on to other things. Under some circumstances one does not care for sex or chocolate or something else and does not strive at all to attain it even if one is nowhere near attaining it – just as a thermostat will not seek a temperature when the thermostat is simply turned off. It is only under other circumstances that the distance determines the interest or effort to reach the Goldilocks point. There is a difference we can generally appreciate and understand, even though it is difficult to spell out in a precise cause-and-effect way, between doing without something when you don’t care (don’t have the interest in it or drive for it) and doing without it when you do care (or have the interest or drive). Not being able to kiss someone I don’t care about is not the problem that not being able to kiss someone I care about is, when I am in the mood or have the drive. Not having chocolate on pizza or fries is not as important (because it is not important at all) as not having it on ice cream. It is not simply the “doing without” (or the distance from the Goldilocks point) that drives the effort to achieve the state in question. And although it might seem one works harder to achieve a goal such as an orgasm the closer one seems to get to it, the goal of sex to begin with is not necessarily an orgasm, so in many cases the drive changes to orgasm only after the drive for the initial arousal and/or intimacy is met or satisfied.
Similarly, one might have a craving for a certain food, but change to a different craving when one sees an alternative in the refrigerator or on a menu that then seems more compelling and thus more desirable. Even the goal of something like running a race is not necessarily winning it, but that can change if one is closing in on the finish line ahead of the other runners. This can all be more complex than the cases I have been discussing, but I think the general idea still holds.
An old joke that illustrates the first point, about the need for the drive to begin with, is about three couples who wanted to join a very conservative church: an old couple, a middle-aged couple, and a newly married young couple. They were given a two-month-long course on Monday nights about church rules and policies, etc. After they passed the tests on that instruction, they were told the final criterion was that they had to abstain from sex for a month. At the end of the month they returned and were asked whether they had met that standard. The old couple said it was no problem; the middle-aged couple said it was only a little difficult the last few days, but they had managed to abstain. The young couple admitted they had failed because on the fourth day, the husband said, his wife had bent over to get something from a low shelf, and it aroused him so much he touched her, and that aroused her so much in return that they made passionate love then and there on the floor. The pastor denounced them and said they were no longer welcome in the church. The husband said, “That’s okay; we are not allowed back in that supermarket either.” (return to text)
10 When 35mm cameras first became available with a program mode for automatically determining the supposedly proper exposure for a scene, manufacturers knew that a photographer might want the exposure to be darker or lighter instead, so they added a feature that let the photographer change the exposure by 1, 2, or 3 in either the plus or minus direction. I found it far more difficult myself to know how much lighter or darker the numbers made the exposure, and in which direction the plus or minus made it be, than knowing how to just work a light meter and get the exposure I wanted for the main subject in the frame, and just adjust the exposure setting myself. Unfortunately many computer scientists (or IT people) do not even realize that the normally ideal automated setting for a system may not be what a user of a particular program wants, and they do not provide an override system at all. That means the user either has to find a way to fool the computer or sensor in order to get it to do what s/he wants, or just has to abandon using that instrument or system. (return to text)
Video clips and excerpts are from:
https://www.youtube.com/watch?v=QYAZkczhdMs Bentsen/Quayle debate
https://www.youtube.com/watch?v=DLZw3vG3S7I the Dick Van Dyke Show scene
https://www.youtube.com/watch?v=14NQIq4SrmY Robin Williams about the invention of golf
https://www.youtube.com/watch?v=LL8pN96MHZs Rubik's Cube record
https://www.youtube.com/watch?v=uVwIrbZdegAs Seth Meyers November 2022
https://www.youtube.com/watch?v=x5BblIFW-vM Seth Meyers April 2023