A computer circuit board.

Cosmic Queries – Robot Ethics with Dr. Kate Darling

Photo: Harland Quarrington/MOD, OGL v1.0, via Wikimedia Commons

About This Episode

Are robots going to take over the world? On this episode, Neil deGrasse Tyson and comic co-host Negin Farsad explore the future of artificial intelligence by looking at our past with the animal kingdom, joined by robot ethicist Dr. Kate Darling, author of The New Breed.

What is the biggest ethical challenge we’re facing with robots? Find out why there might be bigger challenges ahead with robots than a science fiction takeover. We answer patron questions and discuss how humans already use other beings to supplement our skills. Would we use robots to build a habitat on Mars? What are the challenges of making a robot for space? If robots were construction workers, would they still catcall?

Is robot Tinder going to happen? We break down robot ethics and ask: just because we can do something, does that mean we should? Could robots ever breed or self-program their own behaviors? What is our definition of intelligence? How do robots change the nature of warfare? Should there be a ban on autonomous weapons? What sort of responsibility do we bear in creating, or not creating, artificially intelligent weapons?

What happens when robots achieve consciousness? Discover why this watershed moment is more nuanced than meets the eye. Could we make robots that have brains as complex as our own? Also find out how robot intelligence functions differently than human intelligence. How are our biases influencing what we build into machines? Does AGI, artificial general intelligence, hold real potential? How do we define consciousness? All that, plus, Negin explains her goth mime phase, all on another episode of StarTalk!

Thanks to our Patrons Dino Vidić, Violetta + my mom, Izzy, Jeni Morrow, Sian Alam, Leonard Drikus Jansen Van Vuuren, Marc Wolff, LaylaNicoleXO, Eric Colombel, Jonathan Siebern, and Chris Beck for supporting us this week.

NOTE: StarTalk+ Patrons can watch or listen to this entire episode commercial-free.

About the prints that flank Neil in this video:
“Black Swan” & “White Swan” limited edition serigraph prints by Coast Salish artist Jane Kwatleematt Marston. For more information about this artist and her work, visit Inuit Gallery of Vancouver.

Transcript

Welcome to StarTalk, your place in the universe where science and pop culture collide.

StarTalk begins right now.

This is StarTalk.

I’m your host, Neil deGrasse Tyson, your personal astrophysicist, and this is a Cosmic Queries StarTalk, all about AI and the ethics of AI and what it means and what are robots and their relationship with us and our relationship with animals and what does all that mean and how and why and are we all going to die?

Negin, you’re going to help me, help us figure this one out.

Oh, my, first of all, I’m glad that you started with are we all going to die because that’s the first question on everybody’s mind.

Like, let’s just lay it out.

That’s the way we’re all tuning in today, Neil.

Thank you.

Thank you.

Thank you, Negin Farsad.

This is your, I’ve lost count of how many times you’ve been my co-host, and it’s a delight to have you back.

And Negin, you’re the host of the podcast Fake the Nation, on which I was delighted to be a guest, but apparently only once.

That invitation never came back.

I don’t know.

You gotta earn it, Neil.

You gotta earn it.

You’re also a voice of someone on a new animated Adult Swim cartoon.

What’s the name of that?

It’s called Bird Girl, and it’s really fun and really ridiculous.

I am, I’m not Bird Girl.

I am Meredith the Mind Taker.

So I go into people’s minds, and I can tell what they’re doing, and then I can also change their minds.

So I feel like it’s a superpower that I use sparingly, but I do use it, Neil, so be careful.

Crazy, and one of my favorite things you’ve done is your book, How to Make White People Laugh.

Did I get that title right?

Oh yeah.

And you get to say that because you’re like you were from the Middle East or something, so you’re allowed to address light-skinned people that way.

Yeah, the Iranian American Muslims, you know?

We get to say stuff like that sometimes, you know what I mean?

Yeah.

So while I read a lot about AI, I claim no particular expertise.

For that, we had to go to the source.

So up to Cambridge, Massachusetts, and the one and only Kate Darling.

Kate, welcome back to StarTalk.

Yeah, you’re an expert in robot ethics.

That’s just a crazy thing that, like, I don’t want to have to think about that, but you know we have to think about that, right?

And, you know, human-robot interactions, tech policy, of course, policy is influenced by this.

And you have a doctor of sciences from the Swiss Federal Institute of Technology.

Did I get that right?

And I’m afraid, I’m not even gonna ask you about this.

I’m just gonna read it and we’ll move on.

You’re a caretaker of several domestic robots.

Let’s move on.

I don’t know.

I don’t know.

Your latest book came out in April 2021: The New Breed: What Our History with Animals Reveals About Our Future with Robots.

And this is a Cosmic Queries, and so our whole fan base is like ready.

They’re like at the gates at the start of the race, trying to understand what all this is about.

But I want to just lead off with a few questions.

What would you say was our single biggest ethical challenge with regard to robots, other than they’ll kill us all?

Oh, it’s interesting because I actually wrote the book because I don’t think the question whether they will kill us all is the single biggest ethical challenge, even though it’s the one we always focus on.

There’s a bigger challenge than that.

Oh, oh, oh, Negin, what, what?

The bigger challenge is like, will they kill us all but be really nice while they’re doing it?

Is that the bigger ethical challenge?

Will they put the forks on the right side of the plate in the process of killing, like what’s their etiquette skill?

Yeah, that, you know, I never thought about that one, but that might be the single biggest question, you know, will they have good table manners?

No, no, I think that there are a lot of ethical questions actually, and we’re at a very unique moment in time because robots have been around for many decades, but they’ve been kind of behind the scenes in factories and behind walls and in cages, and now they’re coming into shared spaces.

And we’re kind of trying to figure out how to live with them and what they can be used for.

And one of the things that I try to do with the book is move away from this constant comparison we have of robots to humans and artificial intelligence to human intelligence.

And these narratives we have about them taking over and replacing us, and I tried to push a different analogy, which is animals.

And I look at the ways that we’ve harnessed animals for work, for weaponry, for companionship, for millennia, and how we’ve partnered with animals, not because they do what we do, but because their skill sets are so different from ours.

And so as we move into this future of artificial intelligence and robotics, we should be thinking of these technologies as a partner in what we’re trying to achieve.

And if we can do that and get rid of some of the moral panic that we have, then we can start addressing some of the actual issues that I think are at play that often have to do much less with the technology than they have to do with humans making choices against the backdrop of corporate capitalism or oppressive governments.

And so it’s really all up to us in the end and not up to the robots to determine the future.

Okay, I have to push back.

I have to push back.

With your permission, may I push back?

Okay, so when we harness oxen to pull the plow and horses to do other kinds of farming, and we have dogs that sniff out whatever, so we are using certain talents that each of these creatures possess.

At no time are we relying on any animal’s intelligence, really, not in the way we think of intelligence.

I don’t go to my dog and say, I’m having problems with this calculus question.

Can you help me out here?

No.

Okay, stop licking your butt and help me out.

You know, this is funny.

I’m just saying.

Hold on, Neil.

I take offense on behalf of my Pomeranian who does excellent calculus theorems.

Okay, continue.

So, oh, it’s a Pomeranian.

So, even if it gets the wrong answer, it’ll do it cute, right?

Yeah, and shed a lot of fur while he’s doing it.

It’s fantastic.

So, whereas AI in our dreams is smarter than us, so to say let’s partner with something smarter than us feels scary.

And I’m thinking they’d just really rather make us their pets.

And that’s the animal robot analogy that you should be exploring.

What kind of pets would humans make for the robots?

See, I feel like that’s been explored a lot.

And I feel like that vision of the future relies on a very narrow definition of intelligence and is kind of caught up in this idea that the artificial intelligence we’re creating is like us but smarter.

It’s only a matter of time before it is smarter than us and can outsmart us.

I don’t think that’s how it works, and I don’t think that’s how it’s currently happening, and that’s not the trajectory we’re on.

Because we already have machines that are much smarter than us.

We have machines that can do calculus.

We have machines that can beat us at chess and Go and at Jeopardy! and do endless calculations and see patterns in data.

They’re way better than us at so many things.

And then there are many other areas where we are still much, much, much smarter than the machines.

It used to be that if you asked Apple’s voice assistant Siri to call you an ambulance, she would say, okay, from now on, I will call you an ambulance.

Because she didn’t understand the context.

And Apple probably had to fix that by hand because machines don’t perceive the world or learn about the world or understand the world the way humans do.
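
That Siri anecdote is a nice concrete illustration of the context problem. Here is a minimal sketch of the failure mode, assuming a toy keyword-matching assistant rather than Apple’s actual implementation:

```python
import re

# A toy keyword-matching "assistant" (illustrative only, not Apple's
# actual code). "Call me X" is treated as a nickname command, so an
# emergency request matches the same literal template.

def naive_assistant(utterance: str) -> str:
    nickname = re.match(r"call me (.+)", utterance, re.IGNORECASE)
    if nickname:
        return f"OK, from now on I'll call you '{nickname.group(1)}'."
    return "Sorry, I didn't catch that."

print(naive_assistant("Call me an ambulance"))
# -> OK, from now on I'll call you 'an ambulance'.
print(naive_assistant("Call me Al"))
# -> OK, from now on I'll call you 'Al'.
```

The pattern match succeeds either way; nothing in the program represents the difference between a nickname and an emergency, which is exactly the contextual understanding Darling says machines lack.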

But even if we could recreate our own intelligence and go along that path of making it better and better, I just don’t think that that’s a very interesting path to pursue because rather than recreate what we already have, why aren’t we trying to create something different that we can benefit from?

So you say that we haven’t used animal intelligence.

I don’t think that’s true.

That’s only true if you view intelligence as human intelligence because animals clearly have a very different skill set, a different type of intelligence than humans do, but they can perceive the world through their senses in ways that we cannot, and that has been really useful to us for a long time.

And so we’ve partnered with them and we’ve used them to supplement our own ability rather than partnering with them because they can do calculus.

And also the point of the book is not to say that animals and robots are the same or that we should treat them exactly the same.

Obviously, they have very different skill sets as well, but I’m just trying to open up our minds to more opportunities and more possibilities than just recreating what we already have.

So, Negin, I think the robots at the MIT lab created Kate Darling and made her say exactly this.

Exactly what the robots want us to think about the robots.

Yeah, who is controlling who, Kate Darling?

The other thing I want to point out is that, Kate Darling, you have the perfect name for a character in a movie in which the robots become self-aware and take over.

You’re like the expert in the background who’s like, I’ve been warning you guys all along or whatever.

So, Negin, let’s get straight to the questions.

These are all Patreon members.

We changed that rule now.

In order to ask a question, you have to be a member.

And that just keeps the wheels turning of our entire operation.

So, I just want to publicly thank Patreon members for this.

And here’s your reward.

All right, Negin, give it to us.

Okay, here we go.

Our first question comes from patron Sean Grossman, who asks, could robots build a habitat for humans on Mars?

Could robots build a telescope on the moon?

What are the physical challenges of building a robot that can operate in space?

It kind of bridges the Neil-Kate Darling divide right there with that question.

We can tag team on this one.

So Kate, why don’t you begin?

And what a perfect example for something that we should be using robots for, right?

Anything that’s difficult for us to do, like head to Mars and hang out and build a habitat, we absolutely need machines that can go to these places where we currently can’t and do work for us.

So that’s a great use case of supplemental technology in line with how we used to use animals to help us do things.

But in terms of the difficulty, and I’m sure Neil will have a lot to say about this as well, it is very, very difficult to create robots straight up.

And then to create robots that can go to space, I am just in awe every day of NASA and people who have built robots that can actually not only function in space, but also they have to get everything so exact and it has to be so precise, and nothing can go wrong.

And even working at MIT, I see many, many things go wrong all the time with the technology in the labs, so it’s very impressive what we’ve been able to do and there are many challenges with it.

I don’t even know where to begin.

It’s a very challenging job.

Something I’ve thought about a lot is, when robots take over, they will only keep two kinds of humans: stand-up comedians, because robots don’t know how to do that, and construction workers, because I can’t picture construction workers being replaced by robots. They’re doing such different things, carrying things, making decisions on the spot. And I’m just curious.

So I would wonder how soon, if ever, we’re going to send robots to Mars and have the robots build something.

I guess that’s what I’m trying to think of.

Yeah, I think that’s right.

Robots can help build something.

They are usually good at helping people do their jobs, but they’re not great at straight-up replacing human workers, so it’ll probably be a while before we can have robots just autonomously do something.

I mean, we don’t even have automated car factories yet.

Elon Musk tried to automate his Tesla factory and even in a space where everything’s very predictable and you would think we could have robots just do the assembly line, he ended up tweeting that humans are underrated because there’s always something that can go wrong.

A screw can fall on the floor.

Something can happen and robots don’t know how to deal with that.

You need a human.

And so I think it’s more than just construction workers.

I think they’re going to need quite a few humans around for quite a while.

And I don’t even think that the robots would want to get rid of us because we, again, have skill sets that are so different from theirs that they would probably want to keep us all around.

And I think they want to keep Negin, right?

I think they want to keep her.

I hope so.

The king has the court jester, you know.

Right, exactly.

They need to have some form of entertainment.

I have a follow-up on the robots as construction workers, which is that if robots did become construction workers, would they also still catcall female passersby?

Is that just built into the job?

Only during the lunch break, right?

Is that what they eat?

That’s how science fiction movies would…

Like, for example, in The Jetsons, the maid who’s a robot was actually female, but it’s a robot.

It was hard to break out of these gender stereotypes that people were…

The maid didn’t have to have any gender at all, but it was a female maid.

Even though it was on wheels, it was quite the thing.

I talk about that in the book too, about how the design of these robots, a lot of our own biases flow into that.

So if you had construction workers build a robot, a construction-working robot, and they liked to catcall, they might make the robot catcall as well.

We do this constantly in less funny ways.

Yeah, because if we program it, it’s got us in it.

Whether we want it to or not, maybe that’s what we think about.

Even when we think it doesn’t, it does.

That’s the more pernicious biases that filter in to what it is we do.

Well, let’s go for another question, Negin.

Okay, Gary Manneberg asks, we know how to model cognitive intelligence and probably even emotional behavior in machines.

How close are we to building machines that can find an appropriate mate and then produce offspring and then teach the offspring behaviors that are not encoded in the algorithms?

I love this question because I also envision a world where there’s like robot Tinder.

You know what I mean?

Kate, is that happening?

Well, let’s hold on to that.

Let’s take a break.

And when we come back, we’ll return to the subject of robot Tinder on StarTalk with Kate Darling.

Hi, I’m Chris Cohen from Haworth, New Jersey, and I support StarTalk on Patreon.

Please enjoy this episode of StarTalk Radio with your and my favorite personal astrophysicist, Neil deGrasse Tyson.

And we’re back, StarTalk, talking about AI ethics.

Kate Darling has a new book out, comparing our relationship with animals and how that might give us insight to the future of our relationship with robots.

It’s been out for several months, and check it out, she’s at the center of all we need to be thinking about with regard to robot ethics.

And we’re all going to die.

I have to end every comment with that in that way.

So, Negin, you left off with a fun question.

Just tell us what that question is again.

Yeah, well, the question is about, you know, we know how to model cognitive intelligence and probably even emotional behavior in machines.

How close are we to building machines that can find an appropriate mate and then produce offspring and then teach the offspring behaviors that are not encoded in the algorithm?

Okay, why would we do this?

I mean, there’s always this technological determinism that because we can do something, we should do something or that it will happen.

I think that we have this total fascination with recreating ourselves.

I think that for art and entertainment purposes, we will always be chasing these particular goals.

I don’t think we’re as close to modeling all of human cognition and emotional behavior as some people may think we are.

I actually think we’re quite a ways away, and I don’t really see the purpose of creating robot Tinder, as Negin said, other than that that would be hilarious, and I would love to swipe through that.

Which is reason alone, if you ask me, but continue.

I mean, that’s fair.

And in that sense, maybe it will happen soon.

Actually, I might go back to the lab and suggest that we create a robot Tinder, just because I really want to see that now.

But the flip side of that, or an additional element to that question was, if robots are programmed in a way where they can maybe learn emotions, would a hybrid of those two robots have to be derivative of what those two robots were?

Or can what emerges from it acquire or self-program a brand new behavior that was not seen in what made it?

I don’t see any reason why we couldn’t, at some point, do something along those lines.

I don’t think there’s anything encoded in us that we couldn’t somehow recreate.

So in theory, I agree that that’s possible.

I just don’t see it happening anytime soon.

I’m kind of a crotchety skeptic in that sense.

And I think there are plenty of other questions that we need to be focused on right now before we even get to that one.

But I do love the theoretical question of what could that look like?

And I am open to us getting there.
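
The “offspring with behaviors not encoded in the algorithm” idea has a modest real-world analogue in evolutionary computation, where crossover and mutation can yield parameter combinations neither parent ever exhibited. A minimal sketch, with hypothetical trait names and numbers:

```python
import random

# Evolutionary-computation sketch (illustrative assumptions, not a real
# robotics system): an "offspring" behavior vector is produced by
# crossover plus mutation, so it can differ from both parents.

def crossover(parent_a, parent_b):
    # Inherit each behavior "gene" from a randomly chosen parent.
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def mutate(genes, rate=0.2):
    # Occasionally perturb a gene: the source of genuinely new behavior.
    return [g + random.gauss(0, 0.5) if random.random() < rate else g
            for g in genes]

random.seed(0)
parent_a = [0.9, 0.1, 0.5]  # hypothetical traits: speed, caution, curiosity
parent_b = [0.2, 0.8, 0.4]
child = mutate(crossover(parent_a, parent_b))
print(child)  # may contain values found in neither parent
```

This is a far cry from robots choosing mates, but it shows why “derivative of the parents” is not a hard limit once randomness and selection enter the loop.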

Negin, keep it coming.

And I do think just as someone, I’m the dot.

My mom is in real estate and my dad’s a surgeon.

And then together they made a comedian.

So I feel like they can’t.

I see a future where robots will come up with a third crazy thing we didn’t even anticipate.

Yes, from Dean Clunk, we have the question: since we have a sort of narrow idea of what intelligence and knowledge are, due to humans only seeing humans as such, do you think an alternate outcome of the evolution of robotics and AI is possible, where we don’t necessarily make them leaps and bounds smarter than us, but they become intelligent in ways we can’t even foresee or understand, given our current understanding?

Yes, so this is what my book is all about, that that is actually the ideal future.

We don’t want to recreate human intelligence.

We want to create something new and something different, something supplemental.

The one thing that I would add though to that question is that we have a lot of agency in this.

We as humans can decide what technology we create, right?

So it’s not just, you know, is something going to happen or is something possible?

It depends.

It depends on whether we read my book and agree that this is a direction we should go in or whether people just buy it and throw it out.

I don’t know.

I think that we have so much agency and choice in shaping the future.

It’s not up to the machines.

So that’s a warning shot, really.

What you’re saying is you are offering a path to the future where we don’t all die, but that it becomes a sensible invocation of robot technologies and robot intelligence.

And if we don’t read your book, it’s the end of civilization.

This is what I got from you.

Yes, because let me be clear.

We could all die, but it wouldn’t be the robot’s fault.

It would be your fault for not reading my book.

I love it.

I love it.

That’s the answer.

All right, Negin, give me some more.

Hyperactive Jedi says, I’m curious to know how far robots will get.

I’m so sorry, Kate, but this is back to we’re all going to die.

Okay.

How far will robots get with warfare?

And what would long-term use of robots, such as drones or even a robot that wipes your butt, do to the psychological standing of a person controlling/using such machines?

You lost me at the wiping-the-butt part. How did that connect to warfare?

Let me lead off here and say, the original Star Trek from 1966, 1967, there was an episode where a civilization got so advanced that they conducted warfare via computer.

And the computer would log losses on one side or the other, and all the people who were killed in that war game would then have to go into this chamber and be destroyed.

And that’s how they were fighting war.

The war reached a level where it was just that organized.

And of course, the Star Trek crew, they’re not supposed to interfere with it, but of course they do every single episode.

And they say, no, you have to know that war is hell, and war is bloodshed, and war is pain, and war is not just this machine you walk into and just disappear because the computer told you to.

So the death and bloodshed that comes with war seems to be an important force in making sure we don’t fight wars in the future.

Has that really prevented us from fighting wars because it feels like we still fight enough?

No, but maybe you’ll think twice.

I don’t know.

So getting back to the question and then landing in your lap: if we get better and better at having machines wage our wars, what is a drone that fires missiles while someone with a joystick sits 2,000 miles away?

That person doesn’t hear or feel the bloodshed wrought by the drone.

And the drone is a computer.

It’s a robot.

So where do you think this goes, Kate?

Yeah, I mean, this is actually a really important question because the use of technology and warfare is really changing the nature of it.

I will point out that we did try to use animals as autonomous weapons for many, many years, going back to ancient times, which is equally a kind of setting an autonomous technology loose, like a flaming pig, in order to wreak havoc and destruction.

But obviously with the machines today, we can do much different things and more precise things.

And it is in some cases helping soldiers stay out of the battlefield and be out of harm’s way, but it’s also allowing us to make kill decisions without the same cost of needing to put people in danger.

So there is a lot of debate over to what extent we should be allowing weapon systems that are autonomous or semi-autonomous on battlefields.

And there are even movements before the UN to ban autonomous weapon systems.

And I think that the direction that this goes in ultimately depends on where we want it to go, right?

I think we should be having these conversations.

I think these are the very important conversations to be having rather than are the robots going to come and kill us all?

No.

Which countries are going to use robots in which way to harm people?

And to what extent does removing people from the battlefield actually lead to more harm because you don’t have that as much of a cost to the people making the decisions?

Forgive me for not remembering who said this, but the first time someone was able to kill another person at a distance, I don’t remember if it was a bow and arrow or some military advance where they’re not right in front of you to kill them.

The person commented, this is the end of valor.

Interesting.

Are you brave by launching a missile over a wall?

Where is the bravery in that?

Where is the valor in that?

Where is the heroism in that?

Or put another way, where is the responsibility in that?

Do you feel as responsible for the harm that you’ve caused if you just pressed a button and it happened many miles away?

That’s way better put than I just said.

Exactly.

And can I also, I don’t want us to forget about the other really important thread in this question, which is the psychological impact of a robot wiping your butt, if that was the other…

Don’t the Toto toilets basically do that?

Do you lose sense of your own valor in your own butt wiping?

No, the Japanese toilets do that already, wipe your butt.

They do, they rinse them off and everything and dry them.

It’s everything.

Yes.

It rinses and dries and everything.

I love them.

Some people prefer that because if you can’t wipe your own butt and you can choose to have a person do it or a robot, some people would choose a robot.

I’m hoping everyone would choose them.

I’m going to say most people are going to go robot on that one, on butt wiping in particular.

It depends on how often they go haywire and wreak havoc.

I don’t…

Yeah.

Time for a couple more before the end of the segment.

So, let’s see.

Violetta and Mom and Izzy ask the following question.

I’m going out for my junior high’s archery team this coming school year, so my question is inspired by that and the robotic bow and arrow seen in The Hunger Games.

Will there be robotic or AI weaponry in the future and how will it be used?

So a little of a sister question to the earlier one.

Yeah, I have bad news for you.

We already have…

We don’t have the…

I don’t believe we have the bow and arrow.

I’d have to…

I read The Hunger Games but didn’t watch the movies, so I’m not sure, and I don’t really remember.

But we already have robots that can, for example, aim and shoot a weapon, although currently people aren’t allowed to let them just do this autonomously.

There always has to be a human in the loop making a decision.

We already have robots that could do it.

And so the question isn’t when will we have those or will we?

The question is what are we going to do with them?

Which has been your mantra the whole time here.

Yes.

It’s never the robot’s fault.

It’s your fault.

Someone made that robot.

All right.

We have Pat Elvin coming in with a question.

As machines become more sophisticated, will they become self-aware?

And here’s the key.

How do we protect them from abuse?

Which sounds like how do we protect the robots from abuse?

Rather than how do we protect humans from robot abuse?

Anyways, but both of those questions.

Yeah, and I want to put extra emphasis on the question, achieve self-awareness, consciousness.

This seems to be the big turning point in all plot lines.

When did Skynet achieve consciousness?

And then you had Terminator, right?

So can you comment, because you haven’t yet, on achieving consciousness and self-awareness?

Yes.

So this is, like you said, the big plot line, the thing that we’re very interested in about robots and AI and all of science fiction, what happens when they become conscious and self-aware.

And one of the things that I look at in my book…

Wait, Dan, I just realized we’re out of time.

No, just for this segment.

When we come back to StarTalk, our third and final segment, we’ll find out from Kate Darling what happens when robots achieve consciousness on StarTalk.

Time to acknowledge our Patreon patrons who support this show.

Dino Vidić, Violetta and my mom Izzy, and Jeni Morrow.

Guys, thanks so much for what you do for us by giving us your support through Patreon.

Without you, we couldn’t do this show.

And for anyone else listening who would like their very own personal Patreon shout out, please go to patreon.com/startalkradio and support us.

We got Kate Darling, not her first rodeo with us, because this is a topic that comes up all the time.

And I got Negin Farsad.

Negin, what’s your social media handle?

Oh, at Negin Farsad on all of the socials, and I’ve newly entered the world of TikTok.

Oh, welcome to TikTok.

Welcome, Negin, N-E-G-I-N and F-A-R-S-A-D.

Yes, on all platforms.

How about you, Kate?

Are you socially active?

I am, mostly on Twitter, GROK underscore.

What?

Grok.

I told you that.

That is such a robot’s handle, Kate.

It was six, two on the nose.

Negin, I told you.

I know, I don’t even know who’s on here right now.

She calls herself Kate.

All right, so Kate, we were trying to find out from you from a question, what happens when the robots achieve consciousness and has it already happened?

And if it hasn’t, how soon will it happen?

And if it does happen, is that a watershed moment in civilization?

That’s, it’s such a great question.

According to our science fiction, it is going to be a watershed moment.

And no, it hasn’t happened yet.

Although we don’t have a good definition of what consciousness even is.

So depending on how you would define it, maybe we have achieved that if you have a very low bar.

I actually love to compare this though to our history of animal rights and our history of not really caring that animals are conscious in Western society.

Because when you look at how we’ve treated other non-humans that have arguably achieved consciousness, we haven’t really protected them or done anything about it; it has not really been a watershed moment.

The watershed moments have been more about the animals that we find very cute or that we relate to in some way emotionally.

Like Pomeranians?

Yes, Pomeranians.

Just as an example.

I just pulled that out, I don’t know why I said that.

Yeah, we might think that we care that the Pomeranian is conscious, when really we care that it is a cute little fluff ball.

Those are the fluffy ones that look like balls, right?

Yes, they are.

Yeah.

But we haven’t really, it hasn’t been a watershed moment in the animal kingdom, so why would it be for robots?

Okay, so the thing that’s really popular in movies is that the robots become self-aware and they want to destroy us.

Is it possible that the robots become self-aware and it turns out they’re super delightful and we just want to do like brunch with them all the time?

You know what I mean?

Like, why do we always assume the evil part?

You know, the guy, I forget his name, the guy who runs Pinboard, he’s like an entrepreneur guy.

He has said that what if when the robots become self-aware, they are just crippled by existential angst and they just sit around all day worrying about artificial super, super, super intelligence?

Oh, yeah, reading Kierkegaard and, you know.

Like, just because someone’s smart doesn’t mean that they’re not going to be like depressed.

What if they become drug addicts because they can’t handle the reality of, you don’t know, it’s not necessary.

What is their drug?

Like extra USB cords?

I know, that’s what I’m trying to figure out.

But I need some more USB-C.

I need that 5G.

I need that 5G.

More bandwidth.

So that’s an interesting point, Kate.

I mean, I don’t want to undersell the point that you’re making here, that there’s a lot to be gleaned by studying our prior relationship with animals.

There are a lot of insights to be drawn from that, and not enough of us are taking advantage of those lessons.

And if we read your book, then we will know how to.

Yes, I think that it’s an analogy that works very well, that we’re all familiar with, and yet we somehow always are comparing robots to us instead of to the other non-humans that are autonomous that we’ve dealt with previously.

You know, we had as a guest on StarTalk the actor who played C-3PO in Star Wars, and he said he’s the only person in the world who knows what it’s like to be a robot.

You played, you’re an actor, you played a robot.

He said, no, no, no.

He’s there in the robot outfit, and other people, humans, are talking to each other completely ignoring him until the moment they need him to do something for them.

And so this is the robot servant, right?

Not the autonomous robot.

And so he felt very lonely.

I don’t want to put words in his mouth, but he was describing this feeling where he’s only relevant when they deem it so.

Otherwise, he’s just there.

And that’s a weird psychological state that I wonder, we might need robot psychologists.

I don’t know, isn’t that just called being an actor on set?

Like, do you have to be wearing the robot costume for that to happen?

That is true.

Because when you’re not needed, no one needs you, right, as an actor?

Yeah, they’re always telling you what to do.

It’s also like being everyone’s little sister, right?

Isn’t it just like being a sibling?

The younger sibling, yes.

Exactly, the younger sibling.

Nobody wants you around until they want you around.

Until they need you, right.

So Negin, give me some more.

All right, so from Lorenzo and Elizabeth, we have the question: do you think that we will ever create an artificial intelligence as complex as our brain, with emotions controlled by electricity that mimics our biological hormones?

And are we going to have a digital conscious mind that can think for itself?

That’s a lot of questions at once, actually.

Let’s unpack it.

What’s this about uploading your consciousness and then it’s in a jar and it’s electronic, so it’s living a whole life in a jar, like The Matrix?

I mean, look, and I think I can answer all of these questions the same way, which is it’s really hard to make predictions and I never say never, right?

A lot of these things could happen or something that no one has anticipated could happen as we keep playing with these technologies.

I’m less interested in like uploading my consciousness to a jar than some people are and more interested in how we’re going to deal with robots as entities in our lives.

But yeah, I would not say no to any of those things.

What do you think, Neil?

What are your predictions for those?

I strongly align with so much of where you’re coming from.

Among them is just because it’s something that may be even possible, is anyone really going to want to do that?

And people talk the talk, but what are you accomplishing by that?

And of all the advances as they come, is that going to be your highest priority?

Or is it going to be, I want to make a better cup of coffee, I want to get to Detroit faster, or I want to…

You know, there are other things that might just simply have higher priority in our lives.

And that’s how I kind of think about it.

And the science fiction writer bypasses all the natural needs and desires and priorities we might have, and goes to an extreme one, we’re all going to die, and then they sell movie tickets.

I think that’s really what’s driving it.

But also, like, the idea that biological hormones would be replicated somehow in robots, I just want to say, like, I hope we don’t give them, like, you know, menstrual cycles.

Like, we don’t need to have another race of thing be set by menstrual cycles.

Like, we did it to women.

But what I do know is that since men have been in charge for most of civilization, it was they who got to say how hormones are affecting women.

And whereas if women were in charge, they would have gotten to say, men, you’re messing up the world with your testosterone.

Put the weapons down.

And so we don’t have a self-awareness of it because, like, we’re just the guys, right?

But the world is so messed up because of testosterone.

And by the way, I can say as a guy, I feel it.

I mean, I don’t know, you know, you’re, you know, there’s the person.

I mean, here’s the question.

And is this duplicatable in robots, I guess, is the ultimate question here: the rage a man feels when someone cuts him off at the red light or something or whatever, right?

The number of men putting their head out the window screaming is incalculably higher than the number of women who are reacting to that same incident the same way.

And why not just say, guys, put down your hormones, you’re being hormonally influenced.

But the men are in charge, so we don’t get to say that.

Well, some people have even said that this idea that artificial superintelligence would want to kill us all is a straight up projection of this male dominance that is in our society and not anything that we would build into machines.

That’s interesting.

Like you said earlier, there’s a bias you’re going to put in whether you even are self-aware of it or not.

And so here are all the science fiction stories and all the horror stories and the apocalyptic stories, and they’re all having the robots behave as men would.

But in fact, they’re just robots.

Yes, we do a lot of projecting of human qualities onto these machines, and we could build them that way, and we might if we don’t stop to think about it, but we don’t have to.

And then you’d have to have gender.

Yeah.

And also, I just want to make a case for like projecting onto robots Pomeranian qualities.

So just do with that what you will, Kate Darling, if that’s what I’m pitching.

Are you guys working on any fuzzy cuddly robots?

We actually are.

The lap robots.

Those actually exist.

I wouldn’t want to try to completely replicate your Pomeranian because we could never get that right.

But any robot that looks kind of like something that people are familiar with but isn’t quite it, it doesn’t have to be a dog, it could be like a baby harp seal.

That one exists.

There’s a baby harp seal robot that’s very cuddly.

So we’re already there.

Time for just a few more questions.

So we have from Abby Sheikmatur: I would love to hear Dr. Darling’s thoughts on artificial general intelligence, AGI, and its future.

Are there any other minds working on AGI other than Dr. Ben Goertzel?

How promising do you think this approach is towards achieving strong AI?

So first tell us what AGI is.

Well, right now, basically anything that people are working on in artificial intelligence is very narrowly focused.

So machines can do a task or a thing within very narrow limitations.

But the ideal that some people are chasing is artificial general intelligence, which is something that’s more like human intelligence.

Humans are able to do a lot of different things.

Like I’m talking to you here right now, but if one of these plants burst into flames behind me, I would be able to leap out of my chair and do something about it because I have that contextual awareness and I can task switch and I can do a lot of different things and I’m flexible.

But the machines that we’ve created so far are not able to do that at all.

And so some people are chasing this goal of trying to create more general intelligence in machines.

Unfortunately, we have no clue how that can even happen.

We don’t even fully understand how human intelligence works, so it’s very difficult to create machines that can do that.

And there’s a couple different camps of people.

Some people believe that that will be possible soon.

Many people that I work with don’t believe that that’s going to be possible.

Or at the very least, it’s going to require so many smaller breakthroughs before we even get close to it that by then we’ll have a much better prediction of what it would look like, because right now we don’t even know what it would look like.

Could it be so that you will never achieve artificial general intelligence because you will always have a targeted AI that will do its task better than any AGI could possibly do it?

That almost has to be the case.

I mean, you know me, I always say, why are we trying to recreate AGI when humans can do it?

Why don’t we create something that’s more useful, that can do something that humans can’t do?

We have great AGI already.

Okay, I like that.

This is the first…

You’re very pro-human, a pro-human robot expert.

Yes, yes, I’m feeling better.

Yeah, I feel like we don’t need to end every sentence with, we all might die.

Although, do you want to stress that we could all die?

Again, yeah.

So, time for like one, maybe two more questions.

Alec asks, if you believe the brain is nothing more than the sum of its parts, is it fair to say we can one day recreate not just artificial intelligence, but artificial consciousness?

So, that kind of goes back to the becoming self-aware thing.

Like, is it even in the cards technologically right now?

And can you make a circuit that’s sufficiently complex?

There have been a lot of assumptions that if it is sufficiently complex, it is a natural next step to achieve consciousness, whatever that is.

And is that a fair guess?

I mean, yeah, we would need to define consciousness first in order to answer that question.

But I do, I am very much in camp, yes, we are just a sum of all of our parts.

And in theory, you should be able to replicate the parts that we have.

We just have no idea how to do that right now.

So the sum of the parts thing, there are things that come together and become more than the sum of the parts, right?

Like you can analyze a bird in great detail, and nowhere in there will you have an understanding that a group of birds will flock together.

Right?

So that’s an emergent phenomenon that in fact only exists in the group because one bird cannot flock, right?

That makes no sense.

So that’s how we’ve defined the word emergent.

So people have wondered whether consciousness is an emergent feature that wasn’t built in from the beginning, but sort of shows up as a natural consequence of the evolution of neurocomplexity.

Yeah.

I mean, that makes sense to me.

But again, it depends on how we define consciousness, right?

Right.
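
The flocking example is the textbook illustration of emergence, and it is small enough to sketch. In this boids-style toy (all parameters are illustrative assumptions), no rule mentions a flock, yet a flock appears:

```python
import random

# Boids-style sketch: each bird steers toward the group's center,
# matches the average heading, and avoids crowding anyone too close.
# "Flock" appears nowhere in the rules; it emerges from the group.

N, STEPS = 30, 200
birds = [{"x": random.uniform(0, 100), "y": random.uniform(0, 100),
          "vx": random.uniform(-1, 1), "vy": random.uniform(-1, 1)}
         for _ in range(N)]

def step():
    cx = sum(b["x"] for b in birds) / N    # center of the group
    cy = sum(b["y"] for b in birds) / N
    avx = sum(b["vx"] for b in birds) / N  # average heading
    avy = sum(b["vy"] for b in birds) / N
    for b in birds:
        b["vx"] += 0.005 * (cx - b["x"]) + 0.05 * (avx - b["vx"])
        b["vy"] += 0.005 * (cy - b["y"]) + 0.05 * (avy - b["vy"])
        for o in birds:  # separation: nudge away from anyone too close
            if o is not b and abs(o["x"] - b["x"]) + abs(o["y"] - b["y"]) < 2:
                b["vx"] += 0.05 * (b["x"] - o["x"])
                b["vy"] += 0.05 * (b["y"] - o["y"])
    for b in birds:
        b["x"] += b["vx"]
        b["y"] += b["vy"]

for _ in range(STEPS):
    step()
# After enough steps the headings align: one bird cannot flock,
# but thirty birds following these local rules do.
```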

And you know my best evidence for why I know we don’t understand consciousness?

What?

Best evidence, you ready?

People continue to write books about it.

So if you go to the shelf in the library and say, where are the physics books?

It’s like this wide on the shelf.

It’s got like the Newton physics, the Einstein, and that’s it.

Right?

Whereas the books on consciousness and the mind?

It’s shelf after shelf after shelf.

And so I think if we really understood it, we wouldn’t have to keep writing books on it.

That’s my measure.

We don’t understand it, but we really want to.

But it’s funny because I feel like I was obsessed with it too.

All those books are like people who were in high school.

You know what I mean?

It’s like I questioned consciousness in high school and college, and then I was just like, I’m good.

I’ll ignore this question.

I’ll ignore this life’s great question.

Who cares?

You know what I mean?

So in high school, you were contemplating your consciousness in high school.

That’s cool.

Like, yeah, I feel like I went through that phase.

Right?

I was also goth, you know, and I also went through like a punk gypsy phase.

You know, there was a lot going on with me, but I feel like I came to terms with like, I can’t answer this question, so I will move on forever.

So you were punk, goth, Iranian, American.

This is all of this?

Yeah.

Neil, I don’t even want to get into, I did mime for a while, but I did.

So anyways, there was a lot going on.

Is that the last time I’m on the show?

We have a mime rule here.

Neil is creating a very convincing box for people who are listening to the podcast.

They don’t see this.

He is excellent.

I can’t speak because I’m miming a box.

So I think we got to call it quits there.

But Kate, it’s great to have you back on the show.

And you have to promise us that when you do create a robot that achieves consciousness, you call us first.

All right.

So again, it’s been great to have you.

We love this topic.

It should be obvious, Kate.

And you know we’re going to find you again.

Thanks again, Negin and Kate.

Negin, we can find her on Adult Swim.

Birdbrain, what’s it called?

Bird Girl.

Bird Girl.

And your character really scares me getting inside people’s heads and changing their mind.

But that’s scary.

I have to catch a few episodes and get back to you on that.

And Kate, keep it going there up at MIT.

We all love the Media Lab.

Such really cool things come out of there.

And your work is no exception to that.

So everyone, check out Kate’s book.

And give me the full title so I don’t mangle it.

It’s The New Breed: What Our History with Animals Reveals About Our Future with Robots.

And there’s a lot of insights there that I think can benefit us all.

So that every time I have this podcast, I don’t have to say we’re all going to die.

Thank you, Kate, for saving us from that thing.

Thank you.

That’s all we have time for.

I’m Neil deGrasse Tyson, your personal astrophysicist.
