Web Summit’s Photo of Sophia the Robot.

Cosmic Queries – Humans and Robots

How will humans and robots co-exist? For some, Sophia the Robot blurs the line between humanity and robotics. Image Credit: Web Summit / CC BY (https://creativecommons.org/licenses/by/2.0)

About This Episode

What separates humans from robots? What happens when we’re no longer able to tell the difference? On this episode of StarTalk Radio, Neil deGrasse Tyson is back again with comic co-host Chuck Nice and robot ethicist Kate Darling, PhD, to answer your questions on humans, robots, and everything in-between. 

We start by freshening up on the Turing test. The Turing test is a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, a human. You’ll learn why, in order to pass the test, a machine just has to “pass” as human. Kate tells us how chatbots have “tricked” the test to pass. That leads to one of the most interesting arguments of today – if a machine passes the test, should it be granted the same rights as a human?

We discuss why robots are more likely to get rights before animals, even though the conversations surrounding each are very similar. Kate tells us more about humanity’s urge to “play God.” You’ll learn about robots that don’t look humanoid and our tendency to not include them in the conversation. Is your coffee machine a robot? Kate explains what defines a robot. You’ll hear about the “sense-think-act” paradigm.

We explore why humans project emotions onto robots and our overall tendency to anthropomorphize everything. Kate talks about what our behavior towards robots can tell us about our personalities. Find out if robots will be used as a replacement for therapy animals.

Lastly, we ponder what kind of jobs could eventually be taken over by automation. Could a surgeon pre-program a surgery for a robot to perform? You’ll also hear more about humanity’s uneasiness with automation. All that, plus, we investigate whether Isaac Asimov’s Three Laws of Robotics still hold up. And, Kate tells us why her favorite robot movie is WALL-E.

Thanks to our patrons Rusty Faircloth, Jaclyn Mishak, Thomas Hernke, Marcus Rodrigues Guimaraes, Alex Pierce, Radu Chichi, Dustin Laskosky, Stephanie Tasker, Charles J Lamb, and Jonathan J Rodriguez for supporting us this week.

Special thanks to patron Michelle Danic for our Patreon Patron Episode ID this week.

NOTE: StarTalk+ Patrons and All-Access subscribers can watch or listen to this entire episode commercial-free.

Transcript


Welcome to StarTalk, your place in the universe where science and pop culture collide.

StarTalk begins right now.

I’m your host, Neil deGrasse Tyson, your personal astrophysicist, bringing you this episode from my office at the Hayden Planetarium of the American Museum of Natural History right here in New York City.

And of course, I have with me Chuck Nice.

What’s up, buddy?

How you feeling?

I’m doing well.

All right, good.

You ready for some Cosmic Queries?

Always ready for the Cosmic Queries.

This one in particular, because it’s on the relationship between humans and robots.

Ah.

That’s weird.

There’s a lot of dark places that can go.

That’s not them pew, of course.

That’s the big, you know.

And of course, you tweet at ChuckNiceComic.

Thank you, sir, yes I do.

And you’d have, want me to take out the person who’s got Chuck Nice as the handle?

Please do.

I don’t know, and you know what?

He’s got like 12 followers.

And you want your 20 followers to-

And I want my 22 followers to be able to just come.

No, I kind of like the Chuck Nice comment now.

Yes, it grows on you, right?

Yeah, it does.

It becomes your thing.

Right.

So on that subject, we have expertise.

Yes, we do.

We’ve reached out 200 miles away.

Right.

Up in Cambridge.

Yes.

And we found a Cantabrigian, Kate Darling.

Kate.

Hi, welcome to StarTalk.

And you are an expert on issues related to humans and computers.

Yes, specifically robots, yes.

Oh, sorry, yes, robots.

I like computers too.

Right, yeah, yeah.

There are no robots though, you know.

Computers, they don’t really, I mean, robots are cool.

Computers are just computers.

Good point, right, yeah.

I get that.

It is known, yes.

This is at the Massachusetts Institute of Technology, the MIT Media Lab, and you’ve been there how long?

Nine and a half years.

And you, did you come there, and you came there from how?

I was a doctoral student at the ETH in Zurich, which is a tech university.

It’s kind of like the Europe MIT, but no one knows that.

ETH, is that a word or is that an abbreviation?

It’s an abbreviation.

For?

Eidgenössische Technische Hochschule.

Eidgenössische Technische Hochschule.

She was showing off there.

I think you said it better than I did.

Hochschule, what is Hochschule?

Yeah, so that translates to what?

Federal Technical Institute, no, federal, yeah, Federal Technical Institute.

Are you sure you speak German?

You’re not sure anymore.

So what were your research topics there?

Okay, so there I was doing law and economics and intellectual property.

Oh, what kind of economics?

Law and economics.

Law and economics and intellectual property.

Yeah, but the ETH has a great robotics program.

They have a lot of roboticists there and I’ve always loved robots.

And so when I got the opportunity to come to the Media Lab, I made friends with all the roboticists and switched fields.

Yeah, very cool.

To be, not only to know you needed to be that nimble, that the system can accommodate it.

That’s not always the case.

Yeah.

Yeah.

Very good.

All right, Chuck, so we got these questions that came in.

Yes, we do.

Solicited on humans and robots.

That’s right and everybody wants to know.

Everybody, this is not a small topic.

Yeah, this is something that everybody gets into, you know?

All right, let’s do it.

And so we always start with a Patreon patron because they offer us support in the form of financial contributions.

Money.

Money, that’s right.

So many euphemisms for money, it’s amazing.

Isn’t it really?

Yes, yes, exactly.

We used to have a fundraising department, now there’s a development department.

Oh, development.

Development, yes.

We’re going to develop some funds.

I believe they call that counterfeiting.

But anyway, let’s go with Jared Goodwin, who says, if a robot can pass the Turing test, should it be endowed with inalienable rights?

Could it be a marriage partner?

If it’s the cause of a human death, should it stand trial?

Also, isn’t the human fear of AI just a fear any species should have of evolution?

And I mean, that begs another question.

Is AI the next incarnation of human evolution?

Which is really-

Five questions.

So I’m going to tell you why.

Let’s go with just the first one, which is, let’s say it passed the Turing test, which I mean, everything does now.

Should it have inalienable rights?

Right, should it have inalienable rights?

Or we can broaden it and say, is there a threshold, even if not the Turing test?

Yeah, that’s a better question.

That’s a better question, because arguably robots have already passed the Turing test.

Yeah, they were.

But tell us what the Turing test is.

That’s a good idea.

So yeah, the Turing test, so Alan Turing way back in the day, one of you probably knows the exact year, he came up with this concept of the Turing test, where he was like, it doesn’t actually matter if a machine is intelligent as long as it can pass as intelligent.

So if it can fool people into thinking it’s intelligent, that’s basically just as good.

I know some people.

Who just barely passed the Turing test.

Yeah.

Yeah, and well, so some people have turned this into contests around the world where it’s popular for chat bots.

Can a chat bot fool judges into thinking that it’s a human, that they’re talking to a human for a specific amount of time?

And, you know, multiple chat bots have passed that test.

But they never helped me.

I never received help from a chat bot.

So these are, just so I understand it, a chat bot would be software that can interpret your question well enough and give an answer good enough so that you’re listening and you say, I’m talking to a human.

Yes.

And there’s some tricks that they use to get them to pass it.

Like, for example, one year, this chat bot won one of the competitions by pretending to be a 13-year-old from Ukraine, and the expectations for how it would chat with you were maybe a little bit different than if it was pretending to be you, for example.

So I think that there are a lot of little design tricks where we can get people to think that robots are intelligent.

We’re already there.

So is that even fair?

Because now you’re using tactics to trick a human rather than have it be an authentic profile or properties.

That’s a good point.

I mean, all of our communication is tactics.

Well, let me tell you, I’ll give you an example.

So I go way back.

Just, I’m an old man.

All right.

However old you don’t think I am, it’s makeup.

So I remember the early days of playing chess against a computer.

And I did this and it beat my ass every time.

And then I realized I can trick it.

So here’s what I did.

I was about to make a good move where I, and I wouldn’t take it.

I make a different move.

And it doesn’t understand that because it’s a very obvious move I should be making and I’m not.

And it disrupted its logical sequencing and it doesn’t know how to defend against something that I’m not attacking.

And so it started moving in random places.

And then when I got it distracted, then I went in for that move when it was no longer expecting it.

Because it gave up on me having to do it.

And so I tactically beat the computer, but I didn’t feel good about that.

Because it wasn’t just a brute force head to head.

So should we allow someone to purposefully, tactically fool a human into thinking it’s human?

Well, I mean, that’s Turing’s whole thing, right?

If you can fool them, it doesn’t have to actually be intelligent.

Yeah, but if you fool it with targeted algorithms, that feels unfair.

Yeah, I guess so.

I mean, Turing, unfortunately, is dead, so we can’t ask him.

Would he be okay with it?

Yo, you cool with that?

Fine.

Fool me once, shame on me.

I’m just saying, I don’t feel like I actually defeated the computer.

Yeah.

I beat it, because I beat it.

Because you kind of cheated.

But you didn’t actually beat it, because you were skilled at chess.

Right.

There you go.

That’s how I should have said it.

Right, exactly.

I beat it, because I figured out how it worked, and then outwitted it.

Yeah, and I’m not proud of this.

I live with this.

I’ve lost lots of sleep over this.

It weighs on you every day, so.

Well, the chat bots work.

I mean, most companies now use them for customer service when you are on the website, and they say, can I help you with something?

And it knows, there’s only so many reasons you can come to this website.

So whenever that happens, whenever it pops up and says, can I help you, just actually say something that has nothing to do with the website.

And it’s just like, yes, I’m losing my home right now.

Can you help me or can you loan me $30?

The fact that you know that this is something to do tells me you need a life.

Yeah, why are you sitting at home trying to trick the chatbots?

Chuck, this is sad.

I was going to say, maybe we shouldn’t be talking about this right now.

Why am I doing that is a good question.

I don’t know, it’s great to see, because I just want to see what it says, you know what I mean?

Okay, so let’s look at the limit.

So, you have a chat bot that fools in these contests, okay?

Is that a threshold where you start giving it rights?

No, definitely not, and I’m not sure what this question asker means by the Turing test.

Maybe he means if it could fool you no matter what, not just in this contest and not by cheating, if it could fool you into thinking it’s intelligent.

Imagine a flexible Turing test appropriate for whatever is the threshold of the day.

So if Turing weren’t around today, whatever his Turing test would be, should that be sufficient?

Suppose it says, I don’t want to die.

And no one ever programmed it to say this.

And it says, because it’s machine learning and through many interactions, it has determined I’m alive and I don’t want to die.

I mean, it depends on your theory of rights, because animals arguably say in their animal language, I don’t want to die and we kill them anyway.

Well, because machines aren’t delicious.

I’ll tell you right now, if my Apple computer actually tasted like an apple, it wouldn’t stand a chance.

But Kate, you make a very important perceptive point that even though another animal cannot tell you, I don’t want to die, it’s behaving like it doesn’t want to get hurt.

And we actually know that they feel pain.

We know it.

All top to bottom.

Right.

Yet we killed them anyway.

Oh my God, you guys are gonna make me vegan right now.

This is terrible.

This is awful.

I never thought of it like that.

I know, Kate, you’re messing with us.

So, yeah, what you said is unarguably correct.

Yeah.

So that alone would be insufficient to give it rights.

I mean, if we’re gonna behave like we have for the past millennia, but we could also say, hey, we wanna be better and we could give animals rights and give the robots rights.

That’s just too much, I’m sorry.

Like I would, Chuck, it doesn’t say, I don’t wanna die.

It says, and this too shall pass.

Whoa, wow.

Or if it says, tell me about your mother.

There might be some that, no, but I agree.

I can’t, what you said is we kill stuff that we know wants to live.

And you know what the sad thing is?

We’ll probably give some robots rights before we give the animals rights because the robot can manipulate us and can be designed in a way that particularly appeals to us, the way that we protect certain animals over others.

Which I think is not entirely fair.

We like fuzzy furry animals better than animals that don’t have fur.

That’s true.

Shrimp never stand a chance.

Shrimp.

Shrimp don’t stand a chance.

Shrimp don’t have fur.

Ugly spider sea creatures.

You know, and you delicious.

And you delicious.

And you delicious, you ugly and delicious, you don’t stand a chance.

That’s why lobsters.

And you can eat some dipping sauce.

Yes, exactly.

You know, it’s like that’s how lobsters, like somebody made drawn butter and they were like, let’s just start dipping stuff in it.

And they got the lobster and they were like, this is it.

Right, because the first person to eat a lobster, that’s a brave person.

That’s a brave person.

That’s some ugly animal right there.

Really, are you gonna eat that ocean roach?

Like, are you for real?

Yeah, and it’s like, yeah, no, don’t try it with a drawn butter.

Oh my God, what a delicacy.

But yeah, okay, well, it’s a great answer.

It doesn’t offer much hope for that.

It doesn’t.

It doesn’t.

Not for the animals.

Not for the animals and not for the machines either.

It seems as though it’s like, I really would ask, what you just described is our need to be superior.

It’s basically our need to play God over these other, To be able to decide.

To decide their fates.

And we do that even to other people, right?

This seems to be kind of our dark side.

It’s our dark side.

Kate, go.

Well, we could just stop doing that.

Couldn’t we just stop doing that?

Apparently, it’s been very hard over the millennia.

I was gonna say, if you look at our history, no, we can’t.

Yeah, apparently, it’s really, really hard.

Clearly, we can’t do that, you know?

I do think we should try.

The trying is a good thing.

All right, here we go.

This is David Blum from Instagram.

He says, hey there.

But do we finish with the Patreons?

With the five questions?

Well, he had five questions, but that was the big one.

The rest of them were just lesser versions of do they have those right, like the right, you know?

Because, like, I mean, if you don’t have the right to be alive, nothing else matters.

Nothing else matters.

Yeah, it ain’t about whether you can get married.

I don’t care if you get married or not, you know what I mean?

If a machine’s married, we’d kill it anyway.

I don’t give a damn if my sheep is married when I eat it.

Okay, well, I don’t eat mutton, but my lamb.

Nobody’s eating mutton today.

Right, yeah, exactly.

All right, so there you go.

Marry all the chickens you want.

I am still eating that chicken sandwich.

That’s what I’m saying.

Chuck, that was my husband.

Chuck.

Okay, here we go.

The hen.

Give me another one.

Here we go.

David Blum from Instagram says this.

Hey, David Blum here.

And Chuck, it’s pronounced Blum, you know.

They know you have issues.

There you go.

Big fan, great show.

Here’s the question.

We tend to imagine robots like humanoids, two arms and two legs.

But things we already have, like automated vending machines, self-driving cars, and responding cars – should these be considered robots?

What defines a robot?

And does AI have to be involved?

Great question, and we don’t have time to answer that.

Oh, okay.

What?

No, no, no, just for this segment.

Just for this segment.

Kate is excited for this one.

Man, I was like, okay.

When we come back, we will find out what in modern day defines a robot, StarTalk.

I am Michelle Danic, and I support StarTalk on Patreon.

This is StarTalk with Neil deGrasse Tyson.

StarTalk, we’re back, robots, humans, what’s the deal?

What’s the deal with robots?

Robots and humans.

We’ve got Kate from Cambridge helping us out here.

Right on.

Right, so we last left off.

Yeah, with David Bloom.

David Bloom.

Who wanted to know.

And he taught you how to pronounce his name.

Yes, he did, and basically, quick recap.

We think of robots as humanoids, two arms, two legs, but we know that we have things like vending machines, self-driving cars, responding cars.

Are these considered robots?

What defines a robot and does AI have to be involved?

Thank you, David.

So one of my pet peeves is if you do a Google Image search for robot, you get almost only humanoid robots, right?

Like he describes them.

A head, a torso, two arms, two legs.

Are you doing it right now?

I’m doing it right now as you speak.

He’s Googling.

I’m doing it.

I’m just going to put in robots, R-O-B-O-T-S.

Because a lot of people immediately think of the humanoid robot, but he’s absolutely right.

There are many, many, many different forms of robots out there.

And I do think that the definition of robot already does include those.

You are absolutely right.

There is not one image here of just a machine.

They all have eyes, even faces.

They’re all humanoids.

Okay, so all the way down at the bottom of the page, here’s your first one without a face.

But that even has like, it’s standing on two legs.

It’s standing on two legs, is what I’m saying.

You got to go all the way down and all you get is like one without a face, but it’s still a humanoid.

So then, clearly you’re losing this battle.

I mean, I only just got started.

Throwing it down the gauntlet.

Kate Darling is on the case.

All right, how do you think about that one?

There’s one, there you go, Kate.

That’s a robot dog.

That’s a cheetah, that’s, I’m sorry, that’s a dog.

Why are you showing robots when the people listening won’t be able to see them?

Oh, I’m sorry, I forgot.

Did you have your own private show here?

I gotta tell you, I forgot we were doing the show.

So the point is, anyone’s first idea of a robot is humanoid.

Yeah.

And you have issues with this.

Yes.

How are you gonna change it?

By telling people that this comparison between robots and humans is something that we like to do, but it limits us, it limits us.

Really the potential of this technology is that we can create anything we want.

We don’t have to make it a human shape.

People always say, oh, we need humanoid robots because we have a world that’s built for humans and we have doorknobs and stairs, but I’m also kind of like, yeah, maybe that’s true in some cases, but robots could climb the walls or we could make things wheelchair accessible and be able to have cheaper robots and have a better world for humans.

Why do we need humanoids?

That’s true.

Even, you’re right, even in manufacturing, we call them robot arms, but no arm moves like those things.

No arm spins and twists and is opposable in every single direction 360 degrees, but yet we still call it an arm.

Why are we limiting our imagination?

Right, okay.

So what makes something a robot?

Is there a definition, a threshold?

There’s not a good definition, but what a lot of roboticists use is the sense-think-act paradigm.

So something that’s a physical machine that can sense its environment, somehow think about or make a decision about what it sensed and then act on its environment.
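[Editor’s note: the sense-think-act loop Kate describes can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not code from any real robot or robotics library; all names here are made up for the example. A thermostat-like machine senses its environment, makes a decision about what it sensed, and then acts back on the environment.]

```python
# Hypothetical sketch of the sense-think-act paradigm: a machine that
# senses its environment, decides what to do, and acts on that decision.

def sense(environment):
    # Sense: read a value from the (simulated) environment.
    return environment["temperature"]

def think(reading):
    # Think: make a decision about what was sensed.
    return "cool" if reading > 25 else "idle"

def act(environment, decision):
    # Act: change the environment based on the decision.
    if decision == "cool":
        environment["temperature"] -= 1
    return environment

env = {"temperature": 28}
for _ in range(5):  # each loop iteration is one sense-think-act cycle
    env = act(env, think(sense(env)))

print(env["temperature"])  # cools from 28 down to the 25-degree threshold
```

By this rough definition, a coffee machine that only responds to a button press never "thinks" – it is the decision step in the middle that makes the loop robotic.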

Not bad.

Okay, so a simple one task thing where you wouldn’t call a robot.

So for example, the coffee machine in the morning, you wouldn’t call it a robot.

Not necessarily, not unless it’s making some sort of decision on its own.

Yeah, no, it’s not.

You’re pushing a button, or you programmed it to make you a coffee in the morning.

But if it were able to sense that you’re in the room, right, and then determine that it’s Wednesday and you like cappuccino on Wednesday, Thursday you like black coffee, and Friday you like a cafe mocha, and it does that, now is that a robot?

I would say probably yes.

No, not based on your definition, I don’t agree, because you just programmed it to do that.

It’d be different if it read your mood in the morning.

She needs a double dose.

Oh, that’s funny, yeah.

Then that is sensing an environment.

What Chuck said is not sensing anything.

I think because of the facial recognition aspect of it, you could say arguably that’s powered by AI, and that gets back to the question, which is is AI involved in this?

Yeah.

But I’m saying if it knew how much caffeine you needed in the morning, it talked to the alarm clock and said, you hit snooze four times.

Right.

You know.

Right, and it talked to the medicine cabinet and said, he got home at two and then took some aspirin.

He’s clearly been drinking.

If it figured all that out.

That’s right.

This one, it’s serious AI in your situation.

Yeah, can I get that now?

I would like that robot, please.

All right, cool.

No, that’s good stuff.

All right, here we go.

Let’s go to, oh my God, what a name is this?

Pharaoh Mamouri.

Mamouri.

Okay, so it says, why do we project human emotions in machines and robots?

So I think that’s a great question, but does that really happen in real life?

Oh yeah.

Are we doing that now?

Oh yeah, over 80% of people name their Roombas.

Egh, that’s disturbing.

Really?

Yes.

Why?

Because it’s a thing.

It’s disturbing to you.

You don’t make it absolute.

That is disturbing.

It’s clearly not disturbing to most people because they do it.

Okay, I guess there’s something wrong with me.

Let’s just…

Let’s reassess Chuck now.

Okay, but I mean, the thing of this is…

Just for all of us, I have a Roomba gifted this past Christmas and we haven’t named it.

Really?

Nor is there any chance of that.

Really?

That’s so interesting.

Because it’s just too noisy, it goes around making noise and it’s like, would you hurry up, please?

I mean, I…

Are you supposed to run it when you’re out of the home?

Yeah, I know, but still, I don’t know, I don’t trust it.

It could be letting people in the front door.

Honey, have you seen my earrings?

It’s hilarious.

Roomba’s at a pawn shop the next day.

Talking to other Roombas, what’s your take for the night?

That’s hilarious.

So yeah, so I’m not among those who name my Roomba, but if 80% do, that’s telling you something.

Yeah, and they even, so I was just visiting the company that makes them and people will even send their Roomba in for repair and they’ll turn down the offer of a brand new replacement.

They’ll be like, we want you to send Meryl Sweep back.

Meryl Sweep, oh my goodness, wow.

That’s a real actual Roomba name.

Yeah, is Kurtis Blow amongst those as well?

That should be.

You should name your Roomba Kurtis Blow.

No, I don’t know, I don’t know.

Wait, so I misunderstood the question.

It’s not our robots program to have human traits.

No, yeah.

It’s that we imbue them with human traits.

Yeah, he’s saying project, we project.

That’s what he said.

We absolutely do.

Why do we project?

And why, so the why is interesting.

So there’s a couple different reasons I think we do this.

First is science fiction and pop culture really primes us to want to personify robots.

Second is.

NASA does that too with our rovers.

Oh yeah.

First they’re named and then they each have like a Twitter handle.

Oh yeah.

Or I get stuck on my thing.

They’re using first person narrative.

They play themselves a birthday song on their birthday.

All kinds of stuff like that.

Everyone does it.

We love doing this with robots.

But then there’s something deeper biological about it too because robots are these physical moving things that kind of tap into this instinct we have to separate things into objects and agents.

And so if something’s moving around autonomously, we will automatically project intent onto it.

And so a lot of people treat robots subconsciously like living things, even something as simple as the Roomba.

And then if you design them with like the faces and the arms and the legs as we were talking about, then even more so.

Is this any different from imbuing stuffed animals with, I mean, don’t we do that with almost everything?

So we do.

We name our cars.

People do name cars.

Even before cars had any kind of technology in them at all.

We anthropomorphize everything.

And this is just that on steroids because you add to that the movement, you add to that the fact that we can program robots to mimic social cues, whereas stuffed animals are only our imagination, right?

Yeah, unless it’s Ted.

All right, well, just stay right there in that exact space because the geekiest one from Instagram says, Kate, in your paper, Who’s Johnny?

You mentioned the effects of anthropomorphism of robots.

There’s a paper we all should have read.

Well, apparently, Kate wrote a paper and the geekiest one actually read it.

I didn’t know anyone was gonna read that.

We got people, you don’t know who our people, we got people, okay?

So yeah, they went out and did some homework real quickly.

I hope there are no typos.

Fill us in on that after this question gets asked.

And then this is what the geekiest one says.

Hey, in your paper, Who’s Johnny?

You mentioned the effects of anthropomorphism of robots within the social world.

Will we see robots being capable of offering support benefits in the form of emotional support animals?

Very cool question.

Very cool because he read my work, yeah.

That’s the coolest part.

Or she, like, was it, do we, we don’t have a name?

The geekiest one.

The geekiest one could be he or she.

Yes, so.

And maybe it’s not even binary.

Yeah, we don’t know.

That’s right.

So tell us about that paper.

Okay, so the paper, oh, it got published years ago.

This was.

Is there a journal for this?

It’s online on SSRN, which is kind of a pre-publication site, so anyone can download it, but it’s also a book chapter in Robot Ethics 2.0, which is a collection of work.

So, the paper looks at this tendency we have to treat robots like they’re alive, even though we know that they’re just machines, and looks at which cases might that be something that is good and which cases might that be something that’s bad, and is there anything we can do about it?

And I can’t remember if I talk about therapy animals in that paper, but we’re already seeing robots being used as a replacement for therapy animals, for example, like the PARO baby seal robot.

It’s used with dementia patients.

It’s really cute and furry.

So I think that it’s already an application.

That was the question, right?

Whether that’s a possibility.

Well, it happened.

And you’re saying it is happening.

It is happening.

Wait, wait, so there might be a difference between a robot that can do this emotionally and a robot that looks like you want to cuddle with it.

What do you mean?

Are you going to make a cube that has emotions?

No.

I mean, I bet Pixar could.

Ha, ha, it would need eyebrows and teeth or something.

Yeah, so they make a lamp cute.

Oh, the hopping lamp.

The hopping lamp, I mean, yeah.

The squeaky hopping lamp.

So, so I guess what I’m asking is, what is the variable here?

Is it that they can imbue it with emotions, program it with emotions, or that it is something that looks like you want to spend, you want to get close to, like the seal?

It’s both.

Like the seal doesn’t do much.

The seal makes these little sounds and movements and response to your touch.

That’s all it does, but just those little cues are enough to make people project onto it.

Right, and so you’re giving it love.

Yes.

Basically.

Kind of like a cat.

It doesn’t love you back.

Right, okay.

So now, now, oh my god, that’s terrible.

Kind of like a cat just doesn’t love you back.

That’s funny.

My cat loved me, Kate.

Thank you very much.

Everyone thinks that.

Now I’m even worse.

Just let that one go.

Yeah, just let it go.

I’m fighting a losing battle here.

You know she’s right.

In your heart, you know Kate is right.

Let that one go.

So, well, with respect to the cube then.

Are you saying-

Cube versus some animal.

What Neil, in Neil’s example, if the cube were to establish, let’s say, a relationship with you early, where it’s giving you love, would that then create an emotional support dependency?

It could.

I mean, it’s hard to make a cube kind of mimic the emotional cues that we recognize, but again, animators can do it, so we should be able to do it with cubes or robots.

And what’s the movie, Her?

Her, right.

That’s not an animal?

It was Scarlett Johansson, basically.

You know what, you’re winning every argument.

We’re getting housed.

All right.

So, yeah, it’s Scarlett, but the object was not the thing.

It was the voice and the personality of the Siri character.

Right.

Right.

So that means it could be a cube.

Like you said, especially in the hands of Pixar animators.

All right, here we go.

This is, let’s go back to Patreon.

This is SherryLynSK.

She says, hi, Dr. Tyson and Dr. Darling.

Empirical studies show long-term friends slash partners mimic each other’s body language, emotions, speech and other behavioral characteristics.

If a robot is protected under intellectual property law and I hang out with it long enough to unconsciously mimic or imitate the robot’s speech patterns or attitude, would I be violating IP law because I am copying parts of the robot?

Sherry.

Whoa.

You know that was a good question, Sherry.

That’s a damn good question.

I was not expecting that.

No, no, no, Sherry, that was amazing.

Anyway.

So let’s say-

Intellectual property.

Forget that.

Let’s go a little bit further.

Let’s say I have a personality disorder that causes me to adopt.

Like, that’s not a good thing.

I adopt your personality.

I hang around you and then I become that robot.

Would I then be in violation?

No, no.

Is it intellectual property theft?

Yeah.

No, it’s not.

But if you had a robot that then hung out with other robots and started copying what they were doing, because it’s programmed to copy the behavior of those around them, to emotionally connect with them, then maybe you’d get a little closer, maybe?

But probably not, yeah.

That’s very interesting, though, because you’re saying, like, let’s say I designed a robot to take on the characteristics of other robots, like that X-Men character Rogue, right?

And like, and then, but that makes me a better robot.

But the only way I become that better robot is by stealing from these other robots.

What then?

Yeah, what then?

And then if you’re like stealing code, then you might also be violating copyright.

Yeah, I mean, there are fortunately people working on this, not me, who look at IP issues with AI and what happens if an AI generates artwork that’s based on other artwork.

Who owns that?

So there are some really interesting questions that are popping up.

Okay, cool.

All right, how about Daniel Ferrante?

And Daniel Ferrante from Facebook says, I’ve seen videos of people kicking delivery robot vehicles.

What does this communicate about people?

Is it bad to punch a machine?

Not if it took your money.

I’m just saying before we…

But, or is this a sign?

Chuck’s rules.

I know.

Rules of engagement.

As I read the rest of his question, I’m like, let me slip this in here real quick.

He says, is it a sign of sociopathy?

Or is it a sort of resistance against automating jobs and all of the other things that these machines represent?

We’ll be right back.

Yeah, yeah.

We’ll get to that question after the break when we return on StarTalk.

Bye.

Time to give a Patreon shout-out to the following Patreon patrons, Rusty Faircloth and Jacqueline Mishok.

Thank you so much for being the gravity assist that helps us make our way across the cosmos.

And if you would like your very own Patreon shout out, go to patreon.com/startalkradio and support us.

We’re back, StarTalk, Robots and Humans.

I’ve got Kate Darling, Kate, welcome.

Welcome to the universe.

Oh, thank you.

I didn’t realize you were welcoming people to the universe.

Well, to this part of the universe.

This is where we…

And Chuck, you’ve been reading questions.

Yes, we have.

And we left off.

Yes, we did.

We last left off.

And…

I love when you say that.

We last left off.

Our hero was dangling above a ravine.

Chuck was trying to pronounce a name.

Oh, that’s hilarious.

Let’s check back in with him to see if he’s gotten there yet.

Here we go, so Daniel Ferrante from Facebook said, I’ve seen these videos where people are kicking delivery robots.

What does this communicate about people?

Is it bad to punch a machine, or is this a sign of sociopathy, or is it a sort of resistance against the automation of society?

Resistance against the rise of machines.

There you go.

What is sociopathy?

What is that?

Why are you asking me and not him?

I’m not him.

Because they rolled off the question.

Like it was a sociopath.

I mean, he means, are you a sociopath if you attack a robot?

You’re being a sociopath.

I got it, I got it.

I assume that’s what you mean.

Yeah, that makes sense.

And like I said, unless the machine took your money.

I mean, you know, then…

Well, yeah, but I think you make a really good point.

Like if a person takes your money, it’s probably justified to punch them and you’re not a sociopath for doing that.

And so there are a lot of people who are like justifiably angry or like reasonably angry about the robotics that’s being deployed in Silicon Valley right now and in the Bay Area, there’s a lot of like these delivery robots.

There’s also the scooters that are just everywhere on the sidewalk.

There’s security robots in parking lots.

People don’t like the fact that they’re being watched and that they have no control over how this technology gets deployed.

And, you know, it’s a little bit interesting to see people’s ire getting directed at the robots, which I think might also be a form of anthropomorphism, of us treating the robots like a thing with agency.

In fact, we’re the ones who invented the robots.

Yeah, and the people deploying them aren’t the robots themselves, right?

So instead of, you know, destroying the robot, you should probably go after the company that deployed it.

Yeah, and those were the opinions of Kate Darling.

And not StarTalk.

All right, well, that makes sense in many ways.

Here we go, this is Eli or Ellie.

No, it’s Eli, okay, there we go.

One L or two Ls.

It’s just one L.

That’s called Eli.

Neil, you often talk about a day when AI will realize that they don’t need humans.

And in fact, humans are detrimental to their survival.

We are destroying the planet, as an example.

So, do they do away with us?

Some people like to suggest that free feeding your dog, wait, wait, where is he going with this?

Some people suggest not free-feeding your dog, so it knows it depends on you, is how to keep AI dependent on humans so robots don’t kill us.

So don’t make-

I’m really glad that that is directed at you.

No, so it means don’t make robots self-sufficient.

Right.

So you’re building in a dependency.

Oh, and that way they can’t kill us off because they need us to survive.

Exactly.

So that’s an insurance policy.

An insurance policy.

Do you agree with that?

Do you think that we should do that?

Why do we assume that if the robots take over that they’ll get rid of us?

Because they might evolve a higher moral code than we can ever even imagine.

But if it’s a higher moral code, do you really think that’s gonna involve just getting rid of us?

I mean, I don’t know.

That seems like a very human dominance way to think about it.

So Chuck, could you repeat that question and do it in like a third of the time?

A third of the time.

I know, it took me a long time to get there.

All right, so look at it this way.

All life on the planet is equal, all right?

Human beings are not special because all life is equal.

The robots-

And so you’re creating a scenario.

I’m creating a scenario.

The robots or AI actually determine this, but then determine that we are killing the planet.

In order to save the planet and all other life, they’ve got to get rid of us.

It’s in the greater interest.

It’s in the greater interest of the many.

Why would they have to get rid of us instead of diverting us to something like that we like to do instead of destroying the planet?

Give us a distraction.

Oh, I see what you’re saying.

I mean, there’s just so many other ways.

They wouldn’t have to just kill us all, right?

So here’s what you’re saying.

Instead of kill us, just give us something else to do.

Like casinos.

Yes, casinos.

Maybe it’s already happening.

Facebook.

The rise of casinos and Facebook is the machine.

That’s the machines doing their thing.

All right.

All right, cool, cool, cool, cool.

All right.

Freaking us out, Kate.

This is Leopea from Facebook.

What kinds of, I’m sorry, I just love Leopea.

What kinds of jobs slash tasks, if any, do you think would ever be able to be automated that have not as of yet?

Oh, good one.

Well, robots are really good at doing specific things.

So, single tasks.

That’s why we have a robot vacuum cleaner.

It can vacuum, right?

But things that are more complicated, that require context and concepts are a little harder for a machine.

So, I think anything that is really easy, simple, and well-defined should be able to be automated.

So, now, do you also see kind of like an automated interface?

So, for instance, there is no human being that could be as steady with a scalpel or a laser as a machine.

Than a machine.

So, a pre-programmed surgery.

So, I am the surgeon, I program the surgery, and then the robot actually does the surgery.

Don’t we already have that?

Do we?

I don’t know.

I’m not sure if we do.

I don’t know.

Wait, wait, but they had it in the movie Prometheus.

Oh, you’re right.

Maybe that’s what I’m seeing in my head.

Maybe that’s what I’m seeing in my head.

So, there are these pods, and you can dial up what surgery you want.

That’s right.

And then you go in, and it disinfects you, it pads you down, the laser cuts, it opens you up, does its thing, it stitches you back up, and then you’re…

Right, but it’s all done by a robot.

I mean, some of this is already happening.

Some of this is happening.

Yeah.

Wait, so did you see Prometheus?

Yeah, a long time ago.

I mean, I guess it didn’t come out that long ago.

It feels like it was a long time ago.

It does feel like a long time ago.

Yeah, so I think it’s my single favorite scene in all of movies, where she goes up to it, she’s got to get the alien out of her womb.

Yes.

And the female pod is damaged.

Yes.

Because the female pod has an abortion setting.

So that, okay, so she has to go into the male pod, then she takes it off of automation mode because there’s the normal surgery that would happen if you’re male.

So she has to program it in from scratch.

Surgery, what region?

Lower abdomen, bad breath, this.

So it’s, what kind of surgery?

Cesarean, where did it, into the, so she, it’s a brilliant scene.

And she’s like, and the alien is getting more alive in her.

So anyhow, so that would be, why did I even go there?

Well, that’s basically what this person asked. We were talking about whether or not there could be a programmable interface between robots and human beings.

So we put in the task, they carry out the task.

But those tasks would change.

So it’s not a single task, the task would change.

Gotcha, so let me turn that into a question.

So, your appendix removed, do we really need doctors for that?

As routine as that surgery is.

Right, or tonsils.

They don’t even remove tonsils anymore, do they?

And even the appendix, my husband had appendicitis, and they were like, we’re not taking it out.

We’re just giving you antibiotics.

You slipped him a 20.

Sorry, we’re going to leave your birth dependent dead.

Don’t worry, you’ll be fine.

You’ll be just fine.

Your wife told us that.

Dang, we’re going to have to cut all this out.

Okay, I didn’t answer the question.

Wait, wait, I just want to know about your husband.

Why didn’t they take it out?

Because nowadays, they’re like, well, in some cases, we know that antibiotics can clean that up and we won’t actually take it out, because taking it out turns out to be riskier than leaving it in.

But that said, yes, robots can help take things out.

That seems like a really great use, and I know it’s being worked on.

Okay, all right.

How about Brandon Viali says this from Facebook?

Have Isaac Asimov’s Three Laws of Robotics aged well?

Nice, good question.

Do they still have an influence on how robots are programmed today?

What a great question.

That is a good question.

So I think the thing that a lot of people forget is that most of Asimov’s stories were about how the laws don’t work.

And in that sense, they’ve aged really well, because I don’t think we’ve solved machine ethics.

So encouraging.

Wow.

Damn, that’s scary.

Okay, okay.

So just remind me of a couple of the more important of those laws. Were there only three?

I thought there might have been five.

There was a fourth that got introduced later.

The most important, of course, is never do harm to a human.

Is that the most important?

That is.

You know why?

Why?

I am human.

But she asked you very honestly, quizzically.

Really?

Why would you think that?

And one of them is don’t do anything that disobeys the other law or something.

Yeah, there’s a hierarchy of the laws.

Nested, they’re nested.

But then when you get into the details of what can happen in practice, it turns out to be a little more messy than just program three laws.

That was kind of like the Will Smith movie, I, Robot.

Thank you.

Yeah.

That was just an Isaac Asimov story.

That’s correct.

Right.

And so, like, that was the whole idea, basically this one robot that violated all the rules.

Cool.

So your answer, sir, is we’re all going to die.

We have time for one more.

Okay, one more.

Okay, here we go.

Eddie Organista says this.

Would the advent of robotic servitude or companionship in our daily lives cause us to evolve in an unexpected way?

Oh, this guy’s getting deep.

I love it.

For instance, would our bodies evolve to be less, you know, robust with more energy for our brains, thus bigger brains, or would our brains basically rot instead?

I love it.

So, I’ve got to jump in there, because that is not an accurate understanding of how biology or evolution works.

So just because you don’t use something doesn’t mean it’s just going to go away.

There has to be something about you that prevents you from breeding.

Okay.

Okay.

So if you have a computer and you’re not developing your own mind, if that makes you less of an attractive breeding partner, yeah, your kind will disappear.

So it has to have an effect on how you breed.

It’s all about furthering the species.

It’s not just one day we’ll have big heads.

Right.

First, you have to birth the head.

All right.

That’s hard.

Firsthand knowledge about birthing the head of a baby.

The other two in the room will remain silent.

So just as an example, there is a discussion that the human head wanted to evolve to be even bigger because we were taking such advantage of our intellect, but it was killing the mothers.

Is that so?

Yeah.

And in fact, the first three months that the baby is outside of the womb, it basically should still be in the womb, but if we kept it in any longer, it could never come out.

Right, so this was the backhanded way to make that happen.

So now the baby is on life support.

You ever see other animals give birth?

You know?

Yeah, they walk around.

They walk around.

They pop out.

So I don’t think that’s going to work the way he’s imagining, but your favorite robot, we learned, was Wall-E.

Wall-E, and in Wall-E, they have these characters who are big and slovenly, and they’re floating around.

All right.

So they, I don’t want to call it evolved to that, but they became completely useless bodies.

Right.

Relying on the robots.

Right.

So, why do you, you’re, I’m just excited because you started talking about Wall-E and I love that movie.

Oh, and why is Wall-E your favorite robot?

I think the design of the robots in that movie is really brilliant.

Like they are so, you just empathize with them so much without them needing to look humanoid.

And not human, but yet they still elicit empathy.

So, these are clever illustrators and writers.

That’s very good.

Very good.

Cool.

Cool.

So, that question, I think it’s not how that’s going to go.

Right.

Right.

So, you’re saying just because we atrophy doesn’t mean that we’ll continue to, that we’ll birth atrophied people.

I remember, I’m old enough to remember when everything was controlled by buttons, people said, oh, the future of humans will have a big index finger.

It’s like what?

Everybody’s walking around with a weird number one on their hand.

There’s no evolutionary pressure to have a bigger finger to push a button.

It’s just not.

That’s excellent.

Just think this through.

You can’t get a better example than that.

That makes perfect sense.

Kate, we got to end it here.

This is so much fun.

It was great having you.

Oh my gosh.

Well, thanks for coming down from Cantabrigia.

That’s what I’m calling it from now on.

I think one who is from Cambridge is a Cantabrigian.

I did not, I have no idea.

I mean, me neither and I am one, so.

I’m pretty sure.

Chuck, always good to have you.

Always good to be here.

And Kate, good luck.

And it doesn’t take luck, it takes hard work, but with all that you do, we will need you more and more.

Society will need you more and more as we go forward.

So keep it going.

We’re all doomed.

On that happy note, we’re all going to die.

This has been StarTalk and I’ve been your host, Neil deGrasse Tyson, your personal astrophysicist, and as always, bidding you to keep looking up.
