About This Episode
On this episode of StarTalk Radio, Neil deGrasse Tyson sits down with Anthony Daniels, the man behind one of the most iconic characters in film history – C-3PO. Anthony (as C-3PO) is the only person to appear in all nine Star Wars films, spanning multiple decades, and you’ll hear about his time inside the biggest movie franchise on Earth. In-studio, Neil is joined by comic co-host Chuck Nice and robot ethicist Kate Darling, PhD, to explore C-3PO’s cultural impact and the rise of robotics in our society.
Kate investigates “human-robot interaction from a social, legal, and ethical perspective” so she is just the person we want for this conversation. We start with a simple question – what makes C-3PO so likeable? Find out why C-3PO’s appeal stems directly from his humanity. Anthony tells us why he wasn’t really a fan of science fiction before he took the part of C-3PO. You’ll learn about the development of C-3PO and why his subtle asymmetrical design brings his “humanity” to the forefront.
We explore how robots have evolved since the beginning of robotics. We break down the differences between androids and robots. Find out more about the “uncanny valley.” We ponder the concept of the “soul.” Kate enlightens us about what our interactions with robots tell us about our personalities. Can you create strong bonds with a robot? Should our laws protect robots?
We investigate how film’s portrayal of artificial intelligence has created unnecessary fear of robotic advancements. We also explore the flaws still found in today’s cutting-edge technology. Lastly, we ponder how robots will be introduced into more shared spaces like grocery stores, cafes, etc. All that, plus, Kate tells us what the biggest dilemma will be as robotics continues to advance.
Thanks to our Patrons Leon Galante, Tyler Miller, Chadd Brown, Oliver Gigacz, and Mike Schallmo for supporting us this week.
NOTE: StarTalk+ Patrons and All-Access subscribers can watch or listen to this entire episode commercial-free.
Transcript
Welcome to StarTalk, your place in the universe where science and pop culture collide.
StarTalk begins right now.
This is StarTalk.
I’m Neil deGrasse Tyson, your personal astrophysicist, and today I got with me Chuck Nice.
And there’s like someone between us here.
We’re always fist bumping in somebody’s face.
In somebody’s face.
Today we’re talking about robots.
In fact, that’s not only what we’re going to do, that’s the title of the show.
Talking about robots.
Talking about robots.
And we have as our studio guest, Kate Darling.
Kate, welcome.
I don’t get a fist bump?
Double fist bump.
Give it a…
There you go.
There you go.
And why are you here with us?
You are a robot ethicist.
Didn’t even know that was a thing.
We’ll get into that in a minute.
From the MIT Media Lab, of course, in Cambridge, Massachusetts.
And what we’re featuring today is my interview with Anthony Daniels.
Anthony Daniels.
Oh, she’s getting all nerdy.
She’s nerding out on that one.
Anthony Daniels, the actor who portrayed C-3PO.
Oh my God!
Oh dear!
Oh, you want the gig.
How lovely!
All two, all two, oh!
You want the gig.
In particular, we’re not just talking about robots.
We’re talking about relationships, robots, between humans and robots.
And we don’t even know what that means entirely.
Not at the moment.
Ah, the movie AI kind of covered it.
But of course, C-3PO is from the Star Wars franchise.
One of the most successful movie franchises in history.
There ever was.
And just let me get a little bit of background on you, Kate.
So, did you come to this from robotics?
No.
No.
Well, I’ve always loved robots, but I’m a social scientist.
Nice.
I have a legal background, I did social sciences, and now I study human robot interaction from a social, legal and ethical perspective.
So, it’s good to learn that someone such as you exists in that world.
Yes.
We should have somebody like you in all of the potentially troubled places where technology is going.
You mean everywhere?
Yeah, like everywhere.
So, you have a book that may be coming out in 2021.
Yeah.
I have a title here.
Is this the right one?
The New Breed.
What our history with animals reveals about our future with machines.
That is an awesome title.
That really is.
So, let me go get to my first clip with Anthony Daniels.
He’s the only actor that was in all nine Star Wars movies.
All nine of the official, not the fan off ramps.
Right, exactly.
And he’s also the author of I Am C-3PO, The Inside Story.
Cool.
You see what he did there?
Yes, very clever.
You see what he did there.
So, what is it about C-3PO?
Is it his performance, the way he speaks, that people could relate to him so deeply?
Oh gosh, C-3PO is amazing.
I think what it is, actually, is that C-3PO looks like a robot, but acts kind of like a human.
Like, he’s very flawed and has all these human emotions and I think people just relate to him, ironically, because he’s so human-like.
Oh, so it’s the opposite.
It’s not like he’s the perfect robot and we’re finding a way to relate to that.
Right.
It’s that he has enough human in him so that he’s an imperfect robot.
And that’s what we’re relating to.
Is that what you just told me?
You know, that makes sense because that is what makes us human.
They’re like, you know, the fact that we are flawed and imperfect and, you know.
And kind of annoying.
And well, definitely, not kind of, definitely annoying.
You know.
So I kind of liked it when he got a little excited.
Oh, oh, oh, what shall we do?
That was just kind of fun.
It was like, hey, that’s kind of cool.
Yeah, exactly.
So that’s, so it’s the human side of the robot that we’re relating to.
Sometimes, yeah.
In that case?
In that case, for sure.
Okay, and so, so what is it about him, other than his costume, that told you he’s a robot in how he’s interacting with people?
What is the evidence that he’s a robot?
Other than he’s a shiny metal thing.
Right, right.
I mean, a lot of it is the design.
I’m trying to remember anything like specifically robotic that he said.
See, I don’t think so.
Other than, I know 80-drillion languages.
Right, exactly.
Yeah, something like that, maybe.
Yeah, no, we don’t know human wood, so that’s a robot talent.
Right.
But I think a lot is the visual and the way he moves.
Yes, well, and he does.
It looks like he’s actually doing the dance, the robot.
You know what you wanna do if you wanna be a contemporary robot, though, is fall over a lot.
Oh, this is one of the YouTube videos.
Yeah.
Well, let’s go to a clip.
So I sat down with Anthony Daniels, and did you know he wasn’t always a fan of sci-fi?
Really?
Well, remember, he’s an actor.
Until he got that first check.
Oh dear, I do believe I love Mr. Sci-Fi.
Oh, R2, R2, where’s the nearest bank?
Let’s find out what was he up to before he landed where he did.
Check it out.
Maybe I’ve been traumatized.
You know, I never thought of this.
I had been bashed around the head by 2001: A Space Odyssey, to the point where I never wanted to see another spaceship.
That was a long movie with very little dialogue.
There’s no character development except for HAL, the computer.
Well, you’re right there.
So it’s a very different intersection of genre.
It’s not even science fiction.
It’s some other, it’s a science portrait in a way.
It was almost a philosophical treatise on man and man versus space.
Yeah, man, machine and space.
Yeah.
And so there we are.
So I didn’t want to go because back then, the only robot that I remembered really were the Daleks on television.
Oh, the Daleks.
Yeah, so of Doctor Who.
Exterminate, you know.
Doctor Who with sink plungers on their faces.
I mean, cute.
And as a kid, I adored them, but as an acting role, not so much.
Then there was Robbie before them, Robbie the Robot in, what’s it called?
Forbidden Planet.
Forbidden Planet.
And he was this kind of lumbering thing made of Michelin tires, really, it seemed to me.
Yeah, because he had these horizontal segments.
That’s right, and he lumbered in a kind of unprepossessing way, I felt.
But anyway.
Are you judging the acting talents of a robot?
No, but it was.
You just gave a critique.
He lumbered, he didn’t pull off that movement convincingly.
He lumbered, and when you read on page 95 that I met Mr. Kinoshita, who designed him, and I said joyfully, oh, what was it like to see your design come off the paper?
And he went, mm-hmm, not so good.
I didn’t mean him to kind of lumber.
And I said, oh, so you get why 3PO kind of teeters around because it’s more characterful, more forgiving, more human in a way.
Yeah, because Robbie the robot was, I’m not saying robotic, no, I don’t know.
He just didn’t, there was no, he had the arms, that was it.
Whereas C-3PO, there were sort of body gestures that could help communicate a mood.
And it’s all I had, really.
Right, because there’s no, this is not a moving mouth here.
And you’d be surprised how many people think it’s makeup that I’m wearing.
No, it’s a solid.
Well, we all saw Goldfinger, so just a couple of years ago.
Yeah, yeah.
She was prettier than me.
So he was referencing his book, I Am C-3PO, and he said, on page 95.
Right.
So Kate, how has, we talked about a few generations of robots there, how has our concept of robot evolved from the beginning until now?
It used to be that anything that was even remotely an automaton and could move on its own was viewed as a robot, and now there is a little bit more that we expect a robot to be able to do, behaviorally.
Right, because in fact now, so when does it become an Android versus a robot?
An Android is a robot that looks deceptively human-like.
Lieutenant Commander Data.
Yes, yes, Data, my favorite Android.
And then there’s C-3PO, which is more of a humanoid robot.
So like, head, torso.
Yeah, yeah, so Androids look realistic.
Humanoids just have a kind of human shape, torso, head, arms, legs.
So not R2-D2.
Not R2-D2.
So what’s R2-D2?
He’s just a robot.
He’s a robot.
Damn, with no OID on it.
Poor R2.
He’s just a robot.
Okay, so, but presumably all three kinds of robots are still legit in storytelling today.
Oh yeah, for sure.
Okay.
But we don’t have the big Robbie the Robot kinds anymore.
Those, the lumbering.
With its circuits turning in his top tower.
Warning, Will Robinson, warning.
Right, that was a Robbie robot style.
Exactly, yes sir, in Lost in Space.
Lost in Space.
Warning, Will Robinson, danger.
That’s not warning, it’s danger, Will Robinson.
And with the arms would be flailing.
Right, right.
And then Dr. Smith would be like, oh dear, oh dear.
At what point do you inform a person who might be trying to design a robot in terms of its personality or its character or how they would best be an actor doing so?
I mean, I work with roboticists a lot in social robotics and we have more and more robots coming into shared spaces and they have to interact with people and not all roboticists.
What’s a shared space?
Oh, you know, a workplace household, public areas.
Stop and shop as robots roam in the aisles now.
Yeah.
You could say they are indeed a robot, but it’s more like an obelisk on wheels with googly eyes attached.
It looks like a penis.
Wow, all right.
It does, though.
I’m just saying, maybe to you.
Okay, so it goes up and down the aisles?
Yeah.
And there’s no one controlling it?
No one’s controlling it.
There’s no joystick?
It actually moves about like a Roomba, except it doesn’t have to touch things.
And I assume it doesn’t bump into the orange aisle.
It doesn’t bump into the orange stack, right?
But I think I’ve never engaged with them personally, but I think you can ask them questions and they will direct you to places in the store.
All right, the next time I see one, I’m doing all kinds of experiments on it.
Oh my God.
I might have gone to Stop and Shop last weekend with Daniella, who’s one of the students in the personal robotics lab, who is obsessed with this robot.
And we might have like put stuff in front of the robot to see what it did.
You might, just might have.
And what happened?
Well, they might have done it.
They didn’t, they…
Oh, you might, if you had.
What do you think would have happened if you had us do it?
If you had done it.
If we had.
What do you think might have happened?
What were you testing it for?
We didn’t get kicked out yet.
We’re gonna go back.
Well, we just wanted to see what it would do because the purpose of the robot is to find hazards on the floor and alert someone to come pick them up.
And so we wanted to know what’s the hazard.
Clean up, aisle four.
Now, I wonder what it would do if you just laid down in the floor, like on the floor in front of it.
Like what it would do if it’s actually, if that’s…
It’ll go around you.
See, that’s a really bad robot.
No, why?
So you’ll recognize a spill, but I just had a damn heart attack.
And you’ll just go around me.
Really?
So somebody drops a jar of pickles and it’s a huge monumental problem.
We need somebody to get here right away.
But you fall and you can’t get up, and the robot, you’re just in the way.
Exactly.
Instead of hitting my medical alert bracelet, you’re just like, okay, excuse me.
Like, really?
So that’s the thing though.
Like when designing robots, you have to think about what’s going to be frustrating to people when they’re interacting with it.
And they’re going to be like, why isn’t it helping me do X when they don’t understand that building a robot is really, really hard?
And like, they only have very limited capabilities.
And so roboticists really need to think not just about how they’re working, but how people are going to perceive them.
They only have limited capabilities now.
Right.
Yeah.
Stop covering for him.
So Anthony Daniels, as an actor, okay, he’s best known for C-3PO, but he almost didn’t take the gig.
Oh.
Let’s find out why.
Check it out.
So there we were thinking about playing a robot.
And the thing that really changed my mind was reading the words that I had not written.
George and his team had written them, pretty much George.
And clearly he had invented a machine with more human characteristics than he could apply to a human being.
You couldn’t get away with Han Solo being the character of 3PO, if you see what I mean.
So 3PO is allowed to have intense humanity because he isn’t a human.
He isn’t a human.
That’s deep.
Not really.
Yes, it is.
Because what you’re saying is, with a machine who is sort of human, but is still a machine, you can take it to human places that would be unconvincing if written for a human character.
And slightly uncomfortable written for it.
There are…
I never thought about that.
Yeah, well, you will now.
You will now in your next lecture.
You can talk this out.
There is a film called Bicentennial Man with…
I never got to see that.
It’s interesting, Robin Williams, beautiful guy.
I had luck to meet him a couple of times.
We didn’t talk about it, but that was a slightly uncomfortable film because the storyteller was, he was transitioning from a robot that arrived in a packing case from Amazon or somewhere.
And then…
And then pack that.
Whatever the Amazon equivalent was back when that movie was made.
And then gradually he metamorphoses into a human, and it’s slightly uncomfortable because it veers towards pushing our humanity buttons.
Like what does it take to be human and where are we slightly uncomfortable through the uncanny valley and beyond when…
So tell us about the uncanny valley because it’s a great name, but it still has to be defined for people to know what it is.
It is often used in games or in visuals or in film to say or in computer terms.
The Turing test almost gets there, but it’s when something is almost real, looks great and it speaks nicely and has great skin, for instance, in a robot, but there’s something that’s not quite right.
There’s something that we sniff as human beings that’s not quite there.
So it’s even unconscious within us, perhaps.
It’s innate within us.
Innate, that’s a better word, right?
Right, you don’t even know how to verbalize it.
Verbalize it, yeah.
And so people have coined this phrase, the uncanny valley, because you know there’s something not quite right.
Kate, do all humans respond to the uncanny valley the same way?
So people have tested this theory empirically with very mixed results, but most people who work in robotics seem to think that there’s something there.
And they’re wrong.
Yeah, they are.
Let me save them a lot of money.
You’re wrong.
Save all the academics who have researched this.
Save all the academics who are researching this forever.
You’re wrong.
What you’re talking about is the perception of normal humanity.
That’s why you can’t put your finger on it, because it doesn’t exist.
We feel the same way about human beings that may have some type of brain disorder.
And we talk to them and we go, oh, something’s not quite right here.
But you don’t say they’re not a human being, but that’s really what your perceptions are telling you.
So what you’re talking about is the normal perception of humanity as opposed to what makes someone human.
And they’re two different things.
I think you’re right.
I think I personally think it’s about-
Let me tell you something.
So the comedy thing doesn’t work out.
I’ll hire you in the lab.
Well, for me, Uncanny Valley’s always been about expectation management because you’re expecting something to behave a certain way.
If it looks human, you’re expecting it to blink like a human and not twitch its face.
And if it doesn’t, that kind of unsettles you, if it does something that you’re not expecting.
So what do programmers yourself, what do you all do in the media lab to either exploit Uncanny Valley or to dodge it?
I don’t think anyone wants to exploit it, but I also don’t understand why we would try to create something that looks like a human or talks like a human because we can create anything we want.
Why create, like why try to like risk this Uncanny Valley creepiness factor when we can create an R2-D2 that communicates in beeps and boops.
Do you tell this to your peeps back at MIT?
Oh yeah, like everyone I think in the social robotics field agrees that making human-like robots is not as interesting as making something that has expression.
Yeah, you can make something better.
Animators have honed this technique for hundreds of years, how you can make something like Bambi that looks like a deer but actually looks better than a deer to us.
So I agree with you but from a different direction.
So I think the future of AI and robots is not to try to mimic a person.
A person is not even an ideal form.
No.
Right, right.
For a task that you want to conduct, the human body is like why would you design that?
It’s not the case.
Even with the people who don’t have legs but they run track.
On the blades.
On blades.
Yeah.
We’re not trying to duplicate the bones of a foot and then put flesh on it and say now you’re, no, it’s like we got something better.
Something better.
Something better.
Here’s something that will spring and propel you forward.
So, in your lab, are people thinking of the task they need not trying to duplicate a human?
Because we can just, people make babies all the time.
Why do you need to make a robot human?
We have real humans.
Well, I think people have this fascination with recreating ourselves, but I really don’t see the point.
I think we’re all in agreement here.
Yeah, I mean, I’ve never thought of it that way, but you’re right, I think when you look at sci-fi movies, like Alien comes to mind, and the so-called android robot is so human that it’s indistinguishable, but the problem is it doesn’t have a soul, so it can’t make moral choices, it’s a sociopath.
Let’s get to that next.
We’re gonna take a break when StarTalk returns more about the evolving relationship between robots and humans.
So we’re back with StarTalk.
Neil deGrasse Tyson, your host, Chuck Nice.
That’s right.
Kate Darling.
Kate Duh.
Kate Darling.
In from Boston, thanks for, from Cambridge specifically, the MIT Media Lab.
Good stuff happens.
Every time something amazing is happening, it’s traceable back to the MIT Lab.
It’s funny how that works.
Just, just, not only like art science and robotics science and computing and science, just, and culture.
Yeah.
So, congratulations to all y’all.
Oh yeah, it’s just me.
You’re the one.
All right.
I just, I just want to pick up on where we left off.
This idea that you’re talking about in the Alien series, there was a human, there was a human who was not human.
Right.
So, not even humanoid, but android.
Android.
Android.
And you’re cool until you realize they would make a different ethical choice than you would.
So, do you have to program this in?
Is this something they can learn?
There’s a whole field called machine ethics that looks at can you program ethics into machines?
And it turns out that’s really, really hard because we don’t even fully understand or agree on ethics for humans.
You can’t program something that you ain’t got yourself.
So, I would prefer maybe not to create robots that have to make those kinds of ethical decisions.
But there are people who are trying to solve that problem.
And so, but it would also be a way if some, oh, so let’s get back to the concept of soul.
When a religious person would say the soul gives you a sense of right and wrong and purpose.
Purpose.
And these sorts of things.
And would that be like-
And that was the idea with Bishop.
Bishop didn’t have a soul.
So if it meant that bringing back this life form to earth that could potentially wipe out all humans, it doesn’t make a difference because it’s in the interest of experimentation and exploration.
So who cares?
I think this is people’s greatest fear about scientists going astray.
Yes.
Without a doubt.
So will that be the hardest thing to program into robots?
A soul?
That’s three lines of code.
Well, Japanese actually believe that certain things have souls.
Tell me about the Japanese.
Yeah, so like there’s this Japanese roboticist who creates these very, very life like androids.
He’s made one of himself. Hiroshi Ishiguro is his name.
It seems that in Eastern cultures that have a history of Shintoism and believe that even objects can have a soul, like they have funerals for sewing needles, for example, it seems that they’re more…
What?
Yeah.
That must have been a bad ass sewing needle.
If you were to give it a funeral, it must have sewn some good stuff and darned a lot of socks.
Poor Needy, we knew him well.
Needy, is that the name?
Needy, that’s what we called him.
Needy the needle.
That’s your nickname.
Yes, exactly.
So tell me, I was unfamiliar with this.
So keep going.
Yeah, and we don’t have that concept in more Judeo-Christian society.
We have, oh, things are alive and have a soul, things are not alive, don’t have a soul.
And so there’s this idea.
Or that only humans, depending on, yeah.
But that’s why some people say that the Japanese are much more accepting of robots and this idea of having humanoid and android robots around because they’re like, hey, that’s cool.
Are they also more accepting of robots in the Uncanny Valley?
They might be.
Again, like I said, the empirical testing on the Uncanny Valley has kind of been mixed, so there’s not a good scientific basis for it, but anecdotally, yes.
Is it part of the fact that in their culture, they have a greater need for robots?
I mean, it is clear that they have like in Japanese healthcare, they don’t have enough people and you have a great advancement of robotics in that particular arena.
Are you confusing robotics with automation?
No, I’m talking about actual robot care.
I mean, in different, like for instance, in a hospital, like for the delivery of certain things, a robot will do that.
Rather than orderly or something.
So for instance, or just even go outside of healthcare.
Hotels that you go to where they have robot check-in and it’ll be like a Tyrannosaurus Rex will check you into the hotel.
Really?
Yeah.
A robotic tyrannosaurus.
That’s just fun.
Right.
Yeah, because it’s a novelty, that’s how much into robots they are that we are not, you know?
So that’s, and what about Shintoism enables that or empowers it or drives it?
Some people would say that that makes them more willing to accept robots as this thing that’s alive but not really alive.
Oh, so the simple element of inanimate objects having souls, that alone would be sufficient.
So that is one reason people think the Japanese are more accepting of robots.
Another reason is, like you said, the need as robots come more into these shared spaces and people interact with them more, people just get used to them.
And then there’s also the fact that their science fiction and pop culture tends to be less dystopian when it comes to robots.
Like they have Astro Boy, they have these positive stories.
I grew up with Astro Boy.
Astro Boy bounds away on his mission today, rocket high to the sky.
How come I don’t remember that?
Because I just made it up.
No, I’m joking.
That was the Astro song.
That was the Astro song.
I remember Astro Boy.
Yeah.
All those…
And there was…
Well, they also had Speed Racer.
There was a lot of sort of early anime.
That was the early Japanese anime.
Yeah.
That made it to American television.
And yeah, it was all very, very happy stories.
Yeah.
And…
But we have a lot of Terminator and stories of the robots taking over.
They have less of that.
That is true.
We are so messed up here.
So, is the Japanese culture a good bellwether for the global acceptance and trajectory of robots?
Not necessarily.
I would say not necessarily.
I think that maybe the ways that they will want to use robots are different.
Like the fact that they like androids and I don’t really think that we do in Western society.
Right on.
Yeah.
But it would be interesting to see if all countries have equal access to this technology, what they’ll come up with relative to their own cultural needs.
For sure.
So Anthony Daniels, we’re featuring my clips with him.
He has an interesting perspective on what makes C-3PO more human than a robot.
It’s his perspective because he was, he is C-3PO.
Cool.
So let’s check it out.
George came up with this idea of this kind of figure, this art deco figure.
Then he employed Ralph McQuarrie, who made this life-changing painting that I saw of the character.
And then Liz Moore, the sculptor, turned that into 3D and made this beautiful face that people recognize.
And interesting, I only just realized the other day because I was trying to cheat in Photoshop because some robot faces are just scary and this is actually very…
It’s got curiosity in it, but you want to know what he’s thinking.
Because he clearly is thinking and partly it’s that sort of wide-eyed, almost babyish stare with big eyes.
And what was interesting, Liz had actually, and I never realized it until recently, created something that wasn’t machine perfect, it wasn’t symmetrical about a center point.
It is, actually, as is a human face. I tried to flip it in Photoshop, to double it up to make it perfect, and it doesn’t work, because he is asymmetric.
And that is one of the clues, I think, to his humanity.
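Daniels’ Photoshop experiment, mirroring one half of the face to build a “perfectly symmetric” version and seeing it fail to match the original, can be sketched in a few lines of code. Here a small grayscale grid of numbers stands in for the image; the grid values and helper names are purely illustrative, not taken from any real C-3PO scan:

```python
# A toy version of the mirror-doubling trick Daniels describes:
# take the left half of a "face", reflect it onto the right, and
# measure how far the result is from the original. A score of zero
# means the face was perfectly symmetric; anything above zero is
# the asymmetry that makes the flip "not work".

def mirror_double(face):
    """Build a symmetric face: left half mirrored onto the right."""
    width = len(face[0])
    half = width // 2
    out = []
    for row in face:
        left = row[:half]
        mid = row[half:width - half]  # middle column kept as-is for odd widths
        out.append(left + mid + left[::-1])
    return out

def asymmetry(face):
    """Mean absolute difference between a face and its mirror-doubled self."""
    doubled = mirror_double(face)
    diffs = [abs(a - b)
             for row, drow in zip(face, doubled)
             for a, b in zip(row, drow)]
    return sum(diffs) / len(diffs)

symmetric_face = [
    [10, 50, 10],
    [90, 20, 90],
]
asymmetric_face = [
    [10, 50, 40],  # right side differs from the left
    [90, 20, 70],
]

print(asymmetry(symmetric_face))   # 0.0 -- mirror-doubling changes nothing
print(asymmetry(asymmetric_face))  # > 0 -- the flip "doesn't work"
```

With a real photograph the same idea applies pixel by pixel, which is effectively what flipping and overlaying layers in Photoshop does.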
That’s an interesting philosophical point because there’s been research on symmetry and there’s a whole off-ramp from that research that says, maybe it’s not an off-ramp, maybe it’s an on-ramp, that a little bit of asymmetry brings interest to a character, to an image, to a painting, to art.
Perfection, there’s nothing more to say.
It’s like somebody did it already.
Now, just between you and me, you do have a very symmetric face.
Let me just stare into the camera here.
I personally don’t.
If you cut me in half.
If you cut me.
Here we go.
It is not symmetric.
Yeah, you are so symmetric.
I probably like to be, but it is too late now.
I think we have to go with what we have got.
Were you hitting on C-3PO?
His book had a picture of him and the robot.
And so I put the other half of him next to his head.
And it was him.
I am just saying.
But what a genius design tactic to actually purposely put in asymmetry.
Tell me about perfection.
I mean, I hadn’t heard about this asymmetry thing before.
That’s really interesting.
But one of the tricks that a lot of robot designers use in social robotics is to, you know, if you’re going to give it a face, don’t make it as human-like as possible and don’t give it too many features.
Like don’t necessarily give it eyebrows or a nose.
Just eyes is enough.
Things that we automatically respond to, like he was saying, like the big eyes, the babyish face, things that we kind of evolutionarily respond to are the best, the best design tricks.
That’s right.
Because babies, their head grows only by a factor of three and the body grows by a factor of five or six.
So babies have a disproportionately large head to their body.
Yeah, push one out of me.
Tell me about it.
Yeah, you had the power.
You don’t have to do it.
You don’t have to biologically recreate it, okay?
The rest of us, we’d do that if we could.
Yeah, so I think that the argument from evolutionary biology standpoint, at least what I learned from my colleagues here at the American Museum of Natural History, here’s a commercial, is that in order to prevent mammals from killing their children, the children have to look cute.
Yeah.
And so, not that everyone would kill their children, but I’m just saying.
I would.
No, it’s not that you would want to kill them all the time.
There are occasions in the arc of raising children where, if they weren’t cute, we’d have gone extinct a long time ago.
Have you done research into what our relationship with robots says about us?
Psychologically?
Emotionally?
A little bit.
I could talk about this all day, but it’s kind of like, you know how when you go on a date with someone and they’re really mean to the waiter and you’re like, that’s a red flag?
Some of our research indicates that if, you know, you’re mean or violent to a lifelike robot, that might say something about you as a person.
Wow.
You know, that makes sense.
It’s so Boston Dynamics has these videos online of robots being abused.
And I know clearly that that’s a thing.
That’s not a person.
And I got to tell you, it is so hard to watch because they’re hitting it with bats and they’re kicking it and they’re knocking it over.
It’s a robot that’s trying to walk.
Yes, it’s trying to walk.
And basically, you’ve seen those?
Yeah.
And it’s really disturbing.
People get really upset.
Like the first time that they put one out that looked kind of like a dog, and they named it Spot, and then they’re like kicking it and it’s like struggling to stand on its feet.
People got so upset that PETA, the animal rights organization, was getting a bunch of phone calls and had to issue a press statement.
And they didn’t even take it seriously.
They were like, yeah, we’re not going to lose any sleep over this.
It’s not a real dog.
But there actually might be something there.
Okay, so would you preemptively…
I mean, is this like…
Is this like…
What’s that movie?
Tom Cruise?
The Minority Report.
Is this how you would pre-diagnose someone’s propensity to…
You sort of already said so, because in a date, someone behaves in a way that is…
Right.
To someone who they have power over.
I mean, it’s…
Right now, robots are still really primitive and we’re still able to mentally compartmentalize.
But as the design gets more and more lifelike, I mean, we do definitely draw connections between animal abuse and child abuse in the same household legally.
If you have a case of one, you look for a case of the other.
And it’s possible that…
Strong correlations already established.
Okay.
And if you have a robot that can mimic pain and suffering and you enjoy inflicting that on it, like that might be an indicator that you might also enjoy torturing an animal.
But we don’t know.
We don’t have the evidence.
This requires more research.
So it’s certainly evidence that you’re a dick.
That’s for sure.
I feel like it kind of is.
So it’s interesting.
So in the dating scene, these are like secondary cues.
They can be really nice to you, but the waiter, not so much.
If they kick the Roomba, it’s over.
Given the examples of the Boston Dynamics robots and people kicking them, and you feel the emotion for something that is not alive, where do laws ultimately have to land with regard to rights for robots?
Well, it depends.
So I believe in evidence-based policy.
So we need…
Really?
Yes.
What’s wrong with you?
Unlike most legislators.
Have you been checked out?
Have you…
I know, I know.
But like really, like it would be nice to have some evidence.
And if, for example, we found out that it was actually desensitizing to people to behave really violently towards life like robots, then, you know, there’s some question of whether we should regulate and say you’re not allowed to do certain things to certain types of robots.
Because it's fostering behavior that would be counter to the interests of civilization.
So to me, only if it actually has an impact on that behavior.
Actually, I have to say that makes a lot of sense.
That would be evidence-based legislation.
That makes a lot of sense.
Very hopeful there.
But it’s tough to research.
So actually, my last clip of this segment, I talked to Anthony Daniels about robots today.
Just to get a sense of what is, because, you know, that character dates from the 70s.
Right.
So, just what was his thinking about the interaction of humans and robots today?
One of the frustrations we have now is with machines that pretend to be human, and certainly in Japan there are companies working on humanoids.
Are they leading the way on that?
Every time I see a new robot, it’s a Japanese robot.
Well, they like that kind of thing.
They've slightly taken it on as their own, in the sense of social interactions with machines, human to machine.
Human-cyborg relations, indeed, George was there first, oh yes.
Some of them, we’re in early stages of real robotics and we have to think what we want from that.
But when you have something that pretends to be human and then sort of suddenly malfunctions, it’s like, well, we’re talking about Stepford Wives.
Suddenly I'm alert to all these… the thing about the film is…
You’ve given us a full review of 20th century robots here.
This is great.
And 20th century film writers, script writers, who now very, I think, cogently have adopted this slightly outer world, nether world, where we are going.
Not in my lifetime, I hope, because I need the work.
So let’s not move.
I’ll come back to that because in Japan, it’s widely known that they are looking for really human-relatable, probably bed-sized machines that people can relate to.
But then you have to look at how you, what kind of figure, physically, do you supply?
Because if it’s too humanoid and it starts clicking, then it’s a little scary, isn’t it?
If it’s too mechanical, then you’re relating to, I don’t know, a kind of fizzy drink.
It’s like, where’s the balance between who I want to believe I’m relating to?
Because if I get too fond of you and you’re a machine, it’s not going to end happily.
There are all kinds of off-ramps there for where that would go.
It’s not going there.
That’s for the second series.
So is there any thinking in your lab about human-robot relationships?
Bonding?
Yes.
And where does that land?
Well, for me and a lot of my colleagues, I feel like we’re, as humans, capable of a lot of different types of relationships.
And to me, the relationship with a robot isn’t necessarily the replacement of a human relationship.
It’s more like how we would treat a pet or something completely different and new.
So it’s not something that I worry about.
But maybe not.
But I think that’s an enlightened outlook.
It’s not clear to me that that’s where that’s going to go.
I think people, you know, if people can have imaginary friends, then they can have a robot that becomes a friend.
That becomes a friend.
Okay, but why is that bad?
No, no, I’m asking you.
Is there some, should we, I don’t mean to imply it’s inherently bad.
I’m asking you, have you guys thought about whether or not it’s…
Well, what keeps me up at night isn’t…
That’s what we want to know.
Isn’t that someone might bond or have like a friend as a robot.
It’s that a company is making that robot and maybe is using the robot to emotionally manipulate that person.
But that already happens in toys.
No, no, it’s called advertising.
That’s even…
Manipulate all the time.
Don’t even take a robot.
You’re absolutely right.
Big psychological brain screw called advertising.
But no, I remember it was like a Furby or something, but it's a little robot and it says things like, I love you, and, you're my friend, and it's like, you know, I was like, I would never get that for my kid.
That’s the loneliest kid in the world that needs this toy that’s giving it love and affection and reinforcement.
There was an episode of The Twilight Zone where there’s a guy isolated on an asteroid somewhere this early before they knew what space was really going to be.
He’s on an asteroid and this asteroid apparently has a breathable atmosphere.
But holding that aside, holding that aside, holding that aside, okay.
He couldn’t be rescued for like a long time and he’s slightly going crazy.
So they brought him a robot, a female robot, okay.
And it says turn here and then she comes to life.
Of course, it’s played by an actual actress, but it doesn’t matter.
And then it’s their companions and they’re there for like a year.
And then the rescue mission finally comes.
But there’s no room on the ship for her.
Oh, I love it.
Oh, do tell what happens.
They said either nobody gets back or we’re going back without your companion.
And he's like, no, but she's… I love her.
The guy takes out his gun, shoots her in the head.
What?
And then the springs come out and everything, and he said, let's go.
Who does that?
The guy in love with her or what?
No, another guy.
No, the other guy who’s trying to save his fellow astronaut.
That’s so cruel.
To remind him that she’s just a robot.
So, talk to me.
I mean, robots can fill a void like that.
They’re already being used as an animal therapy replacement in nursing homes because we can’t use real animals.
So, you bring in this baby seal robot that gives you the sense of nurturing something and people become very attached to them.
So, I’m asking about the ethics of the story I just shared with you.
Oh, well, I mean, I think it’s unethical to shoot lady robot.
Okay, but otherwise they all die because there’s only one seat on that rescue ship.
Well, yeah.
That’s the construct.
Yes.
And you’re an ethicist.
Talk to me.
Why couldn’t she sit on the outside?
She doesn’t breathe air.
Strap her to the bottom of the ship.
Thank you.
Now, we don’t have to resolve the ethical issue.
Chuck solved that problem.
No, but tell me, how would you…
Can it be unhealthy, though, this bonding that you’re talking about?
It can be.
Yeah.
I mean, if it’s being used to manipulate someone or if they’re bonding with something.
So it sounds like she was meant to be a tool and they didn’t anticipate that he would bond with her this much.
Yes.
And this happens in the real world.
This happens with soldiers bonding with their bomb disposal robots, where they treat them like pets and they get really upset if they get broken.
Particularly if they save your life, it does.
Right.
Yeah.
So Peter Singer has written about soldiers actually risking their lives.
The Princeton philosopher.
No.
So there are two Peter Singers.
Peter Singer.
There’s a Peter Singer who has written a book called Wired for War about military robots.
And apparently soldiers have risked their lives to save the robots that they work with.
They’re actually missing the point of that robot.
Well, or the people.
The point of the robot is to save their lives.
Yes.
Exactly.
Yes.
But kind of you bond with something if it saves your life though.
And I don’t think the people who deployed that really anticipated that response.
Real interesting.
Okay.
So here’s the question then if you’re going to make it empirical.
There is the risk to his psychological health having no companion for a year versus the risk to his psychological health of having a companion that you put a bullet through her head.
Right.
Which of those is worse?
I mean not having a background in psychology.
My guess would be it’s…
Ethically.
I mean, you know, we get pets and we know they’re going to die and this is a similar thing like…
That’s true.
All right, we got to land this plane.
We got…
We have a whole other segment.
Okay.
We’re going to take a break when we come back more of the relationship between humans and robots on StarTalk.
I-3PO.
We’d like to give a Patreon shout out to the following Patreon patrons, Leon Galante and Tyler Miller.
Guys, thanks so much for your support because without you, we couldn’t make this show.
And if you would like your very own Patreon shout out, make sure you go to Patreon.
Thank you.
We're back, StarTalk.
We’re exploring the relationship between robots and humans.
Featuring my interview with Anthony Daniels, who recently published the book, I Am C-3PO.
Yeah.
And we have with us, as sort of an expert commentator, Kate Darling.
Kate, reintroducing you to those who, whoever comes in only in the third segment, I don’t know who that is.
No.
Animals, that’s who.
So, anyway, thanks for bringing your MIT lab perspectives for us.
And I was delighted to learn that Anthony Daniels was affiliated with academia.
Let’s check it out.
Maybe I shouldn’t call you Anthony Daniels, I should call you Professor Daniels.
This is… I'm fundamentally an academic, so my radar perks up.
You can call me a professor, but I know where you’re going, because I’m not a professor.
I am a kind of visiting professional at Carnegie Mellon University.
One of the leading institutions in computer science and robotics and everything, you know, automated, yeah.
But years ago, I kind of got connected with it, curious circumstances, through the Robot Hall of Fame.
They invited me.
It’s an institution in the Science Museum there in Pittsburgh.
They contacted me.
Would I come and accept an award for C-3PO to be part of their exhibit?
Yes, of course.
But on the way there…
So where is the Hall of Fame?
It’s in the center of Pittsburgh.
It’s in the Science Museum.
And for instance, you know, they’ve got C-3PO, they’ve got R2-D2.
They’ve got a machine that can pot a ball every time, get that hoop every time.
A basketball.
A basketball.
It can do it mechanically every time, perfectly.
No matter where you put it in the…
Or just from that one spot.
I think you…
Yeah, it’s cheating, isn’t it?
It’s rubbish, isn’t it?
And they’ve got one of the original arms of that original…
Could pick up an egg and put it there and just do that all the time.
They’ve also got…
Oh, so they have the history.
They’ve got the history.
And at the time, that would have been quite remarkable to get a machine to do anything.
It was the first industrial robot there was.
And it’s got to be able to pick up an egg and not break it.
Not break it, but also put it down exactly, to replicate the motion.
And the definition of a robot has changed now from the early Asimovian days to where we are: a machine that can do something that's kind of useful and doesn't need a human to do it.
They also have, for instance, a room as large as this, which is a medicine dispensary, which is apparently far, far more accurate than having a human dispenser in a hospital.
It’s dishing out the drugs, but in a good way.
If you visited that Hall of Fame, let’s assume they have all robots.
What would be your favorite robot, Kate?
My favorite robot?
They only have real ones, right?
Like not science fiction.
Well, no, C-3PO is in there.
Yeah.
Is Wall-E there?
I like Wall-E.
Wall-E.
Wall-E.
Wall-E is allowed. Even if they just have a drawing of Wall-E, we'll give you Wall-E.
You’re like, well, that’s cute.
I like that.
OK, how about you?
Alien Covenant, which wasn’t the best movie in the world, but Michael Fassbender.
Oh, yeah.
Hot robot, but not as hot as what’s his face in AI.
He was the male sex robot.
God, did I just go gay for robots?
Anyway, Michael Fassbender plays two robots.
He plays himself.
He plays Walter, who has no emotions, but then he plays Walter’s evil twin, who does.
And they’re both robots?
And they're both robots, but the one without emotions, believe it or not, is easily manipulated by the one who has emotions.
Because when you’re evil, you can do evil.
But when you don't have any emotions, you're just susceptible to anything.
So that’s your favorite robot?
Yeah.
Which one?
The evil one.
I can’t lie.
The evil one is my favorite.
Damn.
You know why?
Because I don’t have it in me to be that.
And I think maybe if I did, I would feel differently.
Maybe you needed to complete you.
Oh, wouldn’t that be cool?
So at what point do you think about the good and evil that a robot might or might not do, either because they’re programmed to or because they learn it on their own?
Right.
Yeah, I don’t think we think about it in terms of inherent good or evil, but more how is the technology being used?
By those who should know the difference between good and evil.
Yes.
Right.
Humans.
But see, now apparently at some point, these machines will be programmed by algorithms written by people, even if they’re written by other machines, written at some point by people, and good and evil will be kind of inherent in that algorithm.
Yeah.
It’s going to be a whole mess of gray.
Thank you for that.
Thank you.
I’m very, very hopeful, yes.
Well, I had to take the conversation there.
Okay.
The future of AI infused in robots.
Let’s see what Anthony Daniels says.
I am a little frightened by AI.
And you’re quite right.
I come in to the talks with the students with an objective eye.
In that I don't really understand much of the science, but I have an outer perspective.
And gradually through practice, you know, I’m enjoying virtual reality, augmented reality, all these sort of things that are gradually bringing the theatrical user, if you will, the entertainment user into the scene.
So you’re not just a sit back participant.
You are actually involved.
So maybe the gradualism of this prevents anyone from even noticing the day that AI takes over.
I think kind of that’s already happening.
But in the world of entertainment, which is what the Entertainment Technology Center is about, the growth in immersive entertainment is very marked, with all these headsets coming on, the Leap Motion and all that kind of thing.
But in a world where robots are going to industrialize jobs and take jobs away from human beings, we'd better look to what humans are going to do apart from twiddle their thumbs.
Interesting.
So let me ask you now then.
Whether or not you were thinking about evil in the moment, how about evil in the future?
How about AI turning evil?
Let me ask you a tighter question.
Has AI as portrayed in film gone the right places that we all should be thinking about?
Or are they missing something?
Yes, they’re missing a ton.
And I love science fiction.
I think that science fiction opens people’s minds to thinking about what’s possible.
But we have so many dystopian stories of robots taking over that people are fearing robot uprisings when that’s very premature and we should be worried about other things that are happening right now.
Wait, you didn’t say that it wouldn’t happen.
You just said it’s premature.
That’s funny.
People worry about robot uprising?
Not yet.
Not for at least another two years.
That’s still 2027.
So tell me, where should we be focused?
Well, there are a lot of issues right now with privacy, data security, supplement versus replacement of human ability, with reinforcing racial and gender stereotypes in the design of these technologies.
All of this is happening right now.
There’s autonomous weapon systems being developed.
There’s things we should be concerned about that aren’t the robots becoming smart and taking over the world.
I think that one also tends to be a worry of a lot of rich white dudes, because they don't have to worry about the other stuff.
They’re just like, my only danger is that a robot is going to kill me.
I don’t have to worry about facial recognition holding me up at the airport.
Sociologically insightful.
You’re right.
Plus, you've seen the racist sinks in bathrooms.
The racist sinks.
No.
I don’t know about this.
You don’t know about those?
They can’t see black hands.
We’ve done this with all types of technology because it’s all white dudes building it.
I just thought the sink didn’t work.
You put your hands on it waiting for the water because it’s an automatic sink.
That has happened to me.
Maybe that sink doesn’t work so that I go to another sink.
Then I wave my hand some more and eventually it’ll hit.
If you do the experiment, you put in a darker surface or a lighter surface, it’s reflecting.
It’s about reflecting the light.
Reflecting the light.
What you’re saying is white men, say it, white men.
White men because there’s also a lot of gender stuff that happens.
They'll design things thinking that they are the model of what it is and should be, capturing their own concerns.
But it’s also not their fault.
We all view the world through our own experiences.
And so the problem is that we don’t have diverse teams building technology.
That’s really where it is.
It is their fault if they’re not hiring you or Chuck.
They need to hire Chuck, really.
Can a black man have clean hands?
Trying to stave off viral infections.
Damn.
Yes, sir, you’re hired.
But we don’t want any trouble.
No, but you raise a very important point.
I’m stereotyping here.
But if white men are programming all of the code that will be the future of AI, it could have remarkably biased consequences.
As an unintended consequence.
You’re speaking about this as though this is in the future, but this is actually happening right now.
Well, got you.
Listen.
Yeah, I know that’s why my hands are dirty.
Chuck can’t clean his hands and you don’t know when.
Thank God for hand sanitizer.
So how about this?
Let’s try to land this plane.
What do you think is our largest ethical dilemma going forward?
I think the thing I worry about the most is that a lot of AI, the way it’s built right now, relies on data collection.
They need massive amounts of data.
And so I worry about privacy because there’s no incentive to curb that right now.
Interesting.
Because if you want to know all about humans, you’ve got to know everything they do.
But then other people will also have access to that data.
Governments will, companies will, and that's what concerns me.
It’s already happening in China where they’re collecting information privately, but then the government forces them to turn over that information, including facial recognition that happens on just the streets of the cities.
All that camera data.
How do I identify foreign nationals?
Exactly.
Round them up.
Right.
Exactly.
So how do we lean into…
Because I do think there are so many positive use cases for this tech, so how do we lean into those positive use cases and curb some of this stuff?
That’s the challenge.
No, don’t ask us that question.
I’m asking you.
I didn’t bring you here to ask that.
No.
If I had an answer, then my job would be over.
Alright, so let’s get some parting thoughts.
Chuck, what’s your parting thought here?
You know, I'm really disturbed by the fact that there are racist sinks, man.
I’m sorry.
I did not know about that at all.
This has happened to me.
It’s like I feel violated by things now.
That’s all I can think about.
I’m sorry.
Okay.
Sorry to take you off the rails there, Chuck.
So, Kate, give us something hopeful here, reflecting on it all.
So, you know how people are sometimes nice to robots and then they feel silly about it?
Like they'll say excuse me to their robot vacuum cleaner, or they'll say please or thank you to Alexa, Amazon's assistant.
I don’t think people need to feel silly about that because I think that what that is saying is that their first instinct is to be kind to another.
And so what I really, really love about robots is that they are kind of a reflection of our own humanity in a way.
I mean our interactions with the robots.
Yeah, our interactions with them.
That’s cool.
That’s cool.
Yeah, I like that.
So if you’re a good person, a robot will tease that out of you.
So here’s what I think.
Not that you asked, but…
We don’t care, but go ahead.
Says the media lab professional…
I was going to say, why don’t you treat him like a robot?
This is the real side of who we got here.
I think, and I don’t even know if I have foundation to think this way, I think the apocalyptic scenarios are overplayed.
I think we always dial into our basest, lowest fears, because fear tends to always override our joys.
That’s natural, I think, for survival, right?
If you’re not afraid of something and it kills you, then gone is the gene to be afraid of stuff that will kill you, right?
So you’re taken out of the gene pool.
So I think it’s been overplayed.
My worry is that our distraction with the evil prevents us from thinking more creatively about the good.
The good that robotic AI can bring to this world.
And I don’t want to lose out on the creative solutions that they can bring.
So Kate, I put it entirely on your shoulders to fix the problem.
Because this office is not called the Media Lab.
We’re going to send you back home, back to your peeps.
And we want you to solve this problem.
Challenge accepted.
Excellent.
Chuck.
Always a pleasure.
Good to have you.
Kate, thanks for coming down from Boston.
Thank you so much for having me.
You’ve been watching, possibly listening to this episode of StarTalk, Robots and Humans.
And I just want to thank Kate and Chuck for doing the show.
As always, I bid you goodbye.