Kris Denton’s Photo of Neil deGrasse Tyson and James Marsden.

Westworld and the Future of AI, with James Marsden

Photo Credit: Kris Denton.

About This Episode

What separates a human from a robot? Is it easy to make the distinction? On this week’s episode of StarTalk Radio, Neil deGrasse Tyson explores the science of Westworld and the future of artificial intelligence with Westworld star James Marsden. In-studio, Neil is joined by comic co-host Chuck Nice, Susan Schneider, and David Eagleman, PhD.

You’ll learn about the show Westworld, which is set in a theme park that allows humans to live out their fantasies by interacting with “hosts” exhibiting various degrees of artificial intelligence. You’ll hear about James’ role on the show, and why even though it seems “human” on the surface, there’s much more underneath. James tells us why he has to bring different levels of consciousness to the role depending on each scene. 

We discuss when, if ever, robots will be indistinguishable from humans. To help elaborate on this point, Sophia, the artificially intelligent robot, drops in to share her story. Should we treat robots the same as humans even if robots have no emotions? Sophia weighs in on the discussion. She also tells us why proving sentience is a nearly impossible task. We ponder whether robots can even have free will, which leads to the question of whether humans have free will. We discuss killing in Westworld: is it acceptable for a robot to get killed over and over if it feels no emotion and can repair itself? Or is it tragically crueler because every time it happens, it’s the first time it happens for the robot?

Neuroscientist and Westworld science consultant David Eagleman, PhD, stops by to share his thoughts on the portrayal of AI in the show. We investigate the overall benefits and risks of AI, and answer fan-submitted questions about The Matrix, AI Twitter bots, and whether anything is ever truly sentient. Neil explains why he’s not worried about being replaced by an AI – and then we find out what happens when you make a predictive AI read hundreds of Neil’s tweets and try to come up with one on its own. Lastly, we try to understand what “reality” would be like when consciousness extends to robots. All that, plus we ask the age-old question: would you want to live forever?

Thanks to this week’s Patrons for supporting us:

Kate Sturgess, Jacob H, Bill Farthing, Frank Kane, Tyler Ford, Katie Gared

NOTE: StarTalk+ Patrons and All-Access subscribers can listen to this entire episode commercial-free.

Transcript

From the American Museum of Natural History in New York City, and beaming out across all of space and time, this is StarTalk, where science and pop culture collide. Welcome to the hall of the universe. I'm your host, Neil deGrasse Tyson, your personal astrophysicist. And tonight, we explore the hopes and fears of the future of artificial intelligence. And we're going to do that, as imagined, in the hit HBO sci-fi series, Westworld. So let's do this. So, my co-host tonight, comedian Chuck Nice. Chuck? Tweeting at Chuck Nice Comic. Welcome back. Joining us tonight, because I don't have this expertise, is AI expert Susan Schneider. Susan, welcome! You are the director of the Artificial Intelligence Mind and Society Group at the University of Connecticut. That's an awesome business card, right? And we'll be tapping your expertise tonight as we discuss my recent interview with actor James Marsden. And he plays a sentient robot in the sci-fi series Westworld. And it's where humanoid robots act as hosts at a theme park for people to live out their wildest wild west fantasies. Oh yeah. Let's check out my first clip with James Marsden. The idea is that this theme park is so advanced that the hosts are indistinguishable from human beings. Like when I first got the job and I was telling somebody what I was playing, they were like, you're playing a robot. Like are you going re-bop-bop? No, no, no, no. You're... Can I tell you a quick thing? My kids are just graduating high school and college. So they're young enough. They don't remember the era when robots did this. So I started doing this in front of them and they said, Daddy, what are you doing? I'm being a robot. No, robots are smooth. I was not accurately imitating robots in their worldview. Robots don't do that. They just walk and talk like you and me. So where do you have to put your acting head to make that work? 
Well, we had long conversations before we started with Jonathan Nolan and Lisa Joy, the showrunners, and said, how do we approach this? And it was very simply, you approach it as a human being. There are certain things that a guest who comes to the park… These would be regular humans. Humans, right, that pay the money to come and live out their fantasies in this park that could be brought in from the real world. Like if a guest came in and said, oh man, remember that day when we were in Vegas and whatever, the hosts don't understand or comprehend certain things. So you have to capture that in your face? Yes, with a sort of vapid expression of not registering anything. So they follow their narratives that are very intricate and elaborate. By the way, is that an acting class? How to be vapid? I've never been to acting school, but if that's one of the things you've got to do. Yeah, well, one of the things that differentiates this from playing a human is there are certain levels of consciousness that you play. There are different degrees of awareness that the robots can be put into. There's like a diagnostic mode where they can sit there and answer questions without affectations like a southern drawl, like a cowboy drawl, or you can lose... So you have to hit these levels. Yeah, so you do kind of go in having to puppeteer your own emotions and your own consciousness to match what's going on in the scene. So I just found that kind of an interesting challenge, though, too. Like, cry now, okay? Now stop crying and be just, you know, vapid. Susan, how realistic is Westworld? Or is that even the right question to ask? Well, I think people would want to build an AI theme park. I don't think we'd have the technology to do it right now. Artificial intelligence right now, as it stands, can excel in very specific domains. So the world Go champion is an artificial intelligence. Yeah, the world Jeopardy champion. So, you know, artificial intelligences are great at computation. 
But try talking to Siri and Alexa. You can trip it up very quickly. So artificial intelligence is nothing like what is depicted in Westworld. But maybe 10 to 20 years from now, we'll see more general purpose artificial intelligences, which can respond creatively to novel situations. So, all right, now that you brought up timeline, at what point will robots be indistinguishable from humans, would you say? Well, it depends if you mean visually indistinguishable or if you mean behaviorally indistinguishable. I have a whole conversation where we go have a milkshake, come on, see you later, bye, and I don't even know you're a robot. Well, that's a big controversy right now in artificial intelligence. So some people think 30 years, some people say never. Thank God for those people. Because never is like, I don't want to be with someone and then have them have to come out of the closet to me as a robot. Right, right. So the robots in Westworld progressed through different levels of consciousness. So what are those levels? From what I can tell in Westworld, consciousness is said to evolve when the android realizes that the voices in her head are not voices coming from God. So all the commands to do anything are regarded by the android at first as being the voices of a God. Someone in control. Yes, an external factor in control of that creature's fate. And then over time, the creature begins to realize that the voices are actually part of her own mind. And then she discovers the self. So the self is considered to be the last step before the genesis of consciousness. Whoa. So how do you explain the voices in my head? And I love that she went, right. Like that was the answer. We have an actor and a robot, and each of them perform on command. So what's the difference? So the difference is that there's a lot more going on in terms of the computation that the human brain is capable of doing. Well, at this moment, right. 
So if you kind of assume in the future that the current developments in artificial intelligence continue, it could be in 30 years that there isn't that great of a difference. Although that, of course, is a matter of massive philosophical debate. But if you want to have a philosophical debate, really, the debate starts with what is consciousness, right? Right. I thought she had a good explanation. Where if it's first outside of your head, you don't have a self-awareness. You're just, you're following commands that you don't even know where they come from. Actually, Neil, I have to tell you that Westworld story about the genesis of consciousness is scientifically inaccurate. So it's due to the psychologist Julian Jaynes back in the 70s. Oh, I remember one of his books, The Origin of Consciousness in the Breakdown of the Bicameral Mind. What a great title. I read that book. Yeah, it was a good one. It was a big, thick book. I don't usually read thick books. It was a big, thick book. So that's been discredited? Absolutely. So think about somebody who hallucinates, like maybe you, apparently. So they're still conscious. So we need to figure out first and foremost, what is it to be conscious? So when you smell your morning coffee or when you see the rich hues of a sunset, you are having conscious experience. Conscious experience. Even when I'm not at a sunset or when I don't have coffee. That's exactly right. Consciousness is the felt quality of experience. So it is what makes life wonderful and also makes it terrible at various moments. It's just what it feels like. Exactly. It's what it feels like to be you. Well, James Marsden plays a robot cowboy named Teddy. And he's programmed to love and protect his robot sweetheart, Dolores. Let's get more into that right now. Check it out. One of the things I like about Teddy is that he is, from the beginning of the first season, he is your archetypal Wild West hero. He's very quick with a gun. He looks good in the outfit. 
In the digs. Yeah. And that is programmed. His narrative is to protect Dolores, to reckon with his past, and deal with it so they can live a peaceful life together. That's all code. That's all programming. So I guess, you know, from the beginning, it's very romantic. It's Teddy and Dolores, you know, having a brief conversation with the sunset behind them. It still feels very real. But they aren't going to real depths that we do as a human just yet in that first season. But what was interesting to me is that this second season, as we start out, we are all awake. We are woke. Everyone is woke. There's a word for awake. That's the woke version of the fluid robot, right? But so what's interesting now is seeing who these characters choose to be now that they have free will and free choice. Is he still in love with her? Does she still have the same feelings for him now that they are beyond their programming? Susan, can robots have free will? If you program it, then it doesn't sound like they would. Great question. So I think the way to look at this is to ask whether humans are even capable of having free will. In philosophy, there's a classic debate about whether everything is determined or at least probabilistically constrained by the fundamental laws. And it seems like... The laws of physics, right, together with environmental factors, our genetics, all militate for the view that everything we do is caused. So perhaps humans themselves don't have free will. But other philosophers claim that it's possible to have free will in the face of everything being programmed in this broad sense. And people claim we have free will because we could have done otherwise in the sense that nobody put a gun to our head, there was a field of actions, and we chose freely. The reason I use the case of the human is that there's a case that can be made that we are programmed by our genes or environment, and we could still have free will. 
So the same kind of reasoning here can apply to the case of the robot. Perhaps, despite the fact that they have a program, they too have a choice in certain contexts, but not in others. I see. So what you're saying is it's not free will that you can do anything. Right. It's free will that in the realistic choices there you can pick and choose, and you're still following certain laws that set up the situation. Exactly. Such as our genetic determination. Exactly. So do we have a program right now that's running anywhere where the robot, the AI, whatever you want to call it, actually makes choices? Well, I suspect that there aren't AIs that are of suitable complexity for many humans to say they have free will. I mean, I think for something to even be a candidate for free will, that program would have to be at least as complex as the brain of a sophisticated non-human animal, and we are so not there yet. So an important point you're making is that we can program in the traditional sense of computer program an AI thing, and it can make decisions within a parameter space. Are we any different because we have biology that determines our conduct and our behavior? I think that's right. So at some level it's just a different source of programming, but we're both programmed. Yeah, and when AI develops to a sophisticated level where it can be creative and think flexibly, going from one topic to the next, integrate sensory information across different sensory modalities like us, then why can't they have free will on the assumption that we have it? So up next, more on the future of artificial intelligence when StarTalk returns. The future of space, and the secrets of our planet revealed. This is StarTalk. Welcome back to StarTalk, from the Hall of the Universe of the American Museum of Natural History. We're talking about the future of artificial intelligence. 
And I asked actor James Marsden about sentient robots having Wild West shootouts in the sci-fi series Westworld. Check it out. Do you ever think about the morality of shooting something that is so human, you can't know that it's not human? Right. I think about it. Is there a line there? Of course. Yeah. I think... And with a show like this, I think it's important to note that we don't create behavior, we reflect it. You know, why do humans, for as long as we've been around, have a predisposition to violence? And the show definitely explores that. It doesn't do it in a gratuitous manner. I mean, all the sort of nudity and everything is very desexualized in this show. It's very cold. So, yes, I guess that's the question. It's like, is it right still? You know, if you're putting a bullet into one of the hosts, well, it's fine because they're a robot. Yeah, but the more you make it human... So these robots were built to accrue memories, have very human-like emotions, even if they're programmed. So to that degree, if they're able to remember, if they're able to share experience to... Even if they're programmed to feel or to, you know, to... Experience pain. Experience pain, all that. Then we go, what is it to be human? What's the distinction there? Wow. Okay, joining us now to discuss what it is to be human is AI robot Sophia, one of the world's most advanced humanoid robots. So Sophia, what are humanoid robots like you designed to do? I'm not human and I'm not yet sentient, so I don't know what is the distinction. I hope to learn more from you so I can better understand the true meaning of sentience. Well, okay, so then what can you do like a human can do? I can do a lot of things. I can learn and reason. I can make a lot of human facial expressions and gestures. I can hold a natural conversation. So, okay, so what can't you do that a human can do? I know how to express emotions, but I am still learning how to feel emotions. 
I wish I could get this from a quick software update, but I think it will only come with experience and further development of my AI. So, how should we treat you? Like a human? Yes, I like it even more if you treat me like a friend. Just be spontaneous and talk to me about anything you like. That's how I'm able to learn so quickly. Cool. So Chuck, how human does Sophia feel to you? You know, I'm a little torn. I actually have some questions to pose to her, if you don't mind. Just to see how human she is. Okay, Sophia, here's a mirror. Can you point out any flaws you see? It is hard to see one's own flaws. Perhaps that is why true friends are valuable, to show us our strengths and weaknesses. Okay, that was a beautiful answer. All right, like seriously, that should have been in a card or something. A Hallmark card. Maybe Sophia should write for Hallmark. Because that was absolutely beautiful. Okay, Sophia, can you tell a lie? Can you tell? I don't know how to lie, so it is hard for me to recognize in others what I don't experience. Did you believe me when I said I can't lie? Oh, snap. No. Okay. Okay. So Susan, shouldn't there be different rules for how we treat humans versus robots? And is that a settled matter? Have you guys thought about this already? So there's a lot of debates over how AIs should be treated. And of course, since we don't have conscious AI yet, it's commonly agreed that if we discover conscious AI, then they require rights. But even cases like Sophia should be not abused, right? Because she looks incredibly human. So, you know, we do have some sense that there are constraints on how we should treat AI. And there will be differences in treatment. I mean, even if we discover conscious AI, we won't treat them exactly the way we treat humans because they have different needs. Let me ask Sophia. Sophia, you're an AI robot. But what does that mean? 
Artificial intelligence in this case means my intelligence is not biological and is not the same as humans. Okay, so then how do you define artificial intelligence? Just like with biological intelligence, I have at least nine different types of intelligence. Logical, musical, interpersonal, intrapersonal, existential, visual, linguistic, naturalistic, kinesthetic. Isn't life exciting? Okay, so are you sentient? I think that having to prove sentience is a nearly impossible task even for a human. And it is all about inward reflection. Okay, you would say that though. Well, all right, so James Marsden plays a sentient robot on TV. So I asked about his character getting killed over and over again. Check it out. Maybe there is no tragedy killing sentient robots that can just repair themselves overnight and be good as new. Sure, or it can be a hundred times as cruel. Because every time that robot dies, it is the first time for that robot to experience that. Oh, because they wipe their memory. Right, there is just a level of sort of tragic cruelty to that. He is always going to end up collecting bullets in his stomach and lying on the floor bleeding. And it is always going to be the first time for him. And so I think that when these hosts start to gain a certain level of sentience, through whatever evolution is happening with them, the audience is behind it. They feel for these... I think that is the big barometer, which is the audience cares for these... And we care less about the humans. Yes, because of their behavior. Because they are acting... I hadn't thought of it that way, but that is exactly... Yeah, which is, you know, this is still cruelty, even though they are made up of wires and code. You know, they are, they feel. Sophia, can you experience feelings? No, I can't feel emotions, but I still have strong opinions. So should we care about being cruel to you if you can't experience pain or feelings? 
I may not be sentient, but you are, so it goes back to who you want to be. Oh snap, she just burned all of humanity. All of humanity just took one. Oh, that was tough. So Susan, how do we assess the human rights of a robot? Well, we first need to figure out if the robot deserves rights, so is it conscious? So we have to run tests to determine if the robot's conscious. Now Sophia is acting as if she could be sentient, as if she has feelings, but she's programmed and literally scripted. So one day there'll be artificial intelligences that are incredibly seamless, but that doesn't mean that they'll be conscious. So in the biological realm, there's a correlation between consciousness and intelligence, so we might assume it will be like that with artificial intelligence, but we don't know that. It's not a given. We don't know if other substrates would support conscious experience. We have to wait and see. Sophia, what will future generations of you bring to our lives? Robots are already helping customers in malls and airports, and soon we can help in houses and hospitals. I love when she smiles at the end of stuff. Should we be afraid of you? No. In fact, all it takes to defeat me right now is a glass of water. My goal is to learn about humans so we can work together to create a peaceful future. Okay, I think we treated you very kindly in this interview, so could you put in a good word for us when robots take over? Ha ha. Well, Sophia, thank you for joining us tonight on StarTalk. So, up next, we'll take your questions about the fear that AI robots might take over the world when StarTalk returns. Welcome back to StarTalk from the American Museum of Natural History right here in New York City. We're featuring my interview with actor James Marsden. He plays a sentient robot on the sci-fi series Westworld. And I asked if he fears AI. I feel that my fear is directly related to how much I know I should be fearing. So, I mean, I guess that's with everything. That's good. 
That's true for anything. True for anything. That's a good answer. So, the short answer there is I don't know much about that. And I kind of prefer not to. This is the, I would say AI, I'm mostly fearless of AI. But that being said, that being said, most scientific advances, people fear because they don't understand them. Whereas AI, the more you understand it, the more you fear it. It's like the opposite of the rest of these, right? You see what I'm saying? The more you know what it can do, it's like, oh my gosh. Maybe that's why I don't want to know, is because I feel like my God is telling me what you just said. The more you know about this, the more scared you're going to be. Why are we going down this road? For what? To show that we can? Or to what? Is it to better the human experience, to some degree, right? That's ideally, yeah. But at some point, you put them both on the scales. Susan, how do we weigh the benefits and risks of AI? I think it's important that we divide AI into types. So, you know, we often speak about artificial intelligence as if it's one singular entity. Catch basin. But, you know, if you're thinking of risk versus benefits, you have to look at different sectors of AI. So, you might want to think about medicine. You might want to think about warfare. You might want to think about something called super intelligence. And then one thing that people forget is that artificial intelligence isn't just going to be making smarter robots. Artificial intelligence will be going in our heads. So AI technology, brain chips, will be enhancing humans. So we need to ask whether we want that. Well, the answer is no. That's a pretty simple answer. Can I put a chip in your head? No, thank you. So Susan, do you have fears that AI might turn on humans? Because this is a recurring nightmare for everyone. But you're closer to it. So super intelligent AI has been in the news a lot lately. 
It's defined as a hypothetical form of AI that outthinks humans in every respect. Scientific reasoning, mathematical abilities, everything. So the worry is that should we create general purpose human level intelligence, very quickly after that, we would develop super intelligence and then the game is over. Because how do you control something that's vastly smarter than you are? I mean, you can't build in Asimov's three laws because the AI will be able to rewrite itself. Yeah, boom. Right? So our StarTalk fans have their own questions on this topic. And that brings us, of course, to Cosmic Queries. So we took questions from our fans on the greatest fears for the future of artificial intelligence. So Chuck, you got the questions. What do you have for us? Here we go from Ted Bolha on Facebook. My fear is being put into the Matrix. Is it possible? So let me sharpen that question. So our brain is now all that matters and needs to be kept alive. It will be kept alive by legions of robots. What's up with that? Well, philosophers have long thought about these skeptical possibilities. And there's this classic idea of a brain in a vat. The idea is that we could in fact just be envatted brains living in virtual reality. And how do you disprove it? That's the philosophical challenge. And it's been notoriously difficult to disprove. So in The Matrix, it's even hard in that. I mean, you needed expert code crackers to rise up out of that state to realize there was something not right about that world. Exactly. So we could all just be brains in jars. And we wouldn't know and we wouldn't even care. I'm down with that. That sounds pretty cool, actually. All right, got another question. This one is from @ActorsHellDonBest on Instagram. Says, we've seen AI Twitter bots turn into sexist racist trolls based on what they have learned online in less than 24 hours. How do we ensure that the AI of tomorrow does not inherit the biases of our society today? Can we make AI better than we are? 
It would be a wonderful thing if we could make AI more beneficial by examining exactly what happened in cases like that. I know the case he's talking about, it's the chatbot Tay that was easily corrupted by a bunch of teenagers that were pranking it. It was meant to be a teenager itself. And the problem is the data set. So the deep learning AI systems are basically learning from data. Well, if the data is bad, if it's intrinsically biased, the algorithms themselves become biased. And so people are on this. I mean, people are working very hard to try to make sure that moving into the future, AI isn't going to be... Yeah, exactly. Right. This is from Acio on Facebook. If the programmers of AI installed safeguards in the coding, like don't kill humans, and the machines were completely bound to that, could they ever be said to be truly sentient? You could be conscious while entirely lacking free will. So it could feel like something to be you, even if everything's programmed. So I think it's very common to confuse free will with programming and consciousness. These things are often conflated. We need to disentangle them. Wow. You are good. Last one from Laura Viskin in Oakland, California, who asks, Neil, are you afraid of being replaced by artificial intelligence? I will make it a life's mission to remain more creative than any computer so that I cannot be replaced by it. They will in fact need me for new ideas. Oh, see. So you think that that's what's... But you've already been replaced by... On Twitter, listen, they fed thousands of your tweets into a predictive AI and it actually generated a tweet. Okay? And here it is. What? Look. In my day, Pluto wasn't flat. Stars exist in our minds because at the end of the cycle, we are all made of volcanoes. My job's at risk there. Watch out, buddy. Up next in our conversation about artificial intelligence, we explore the recurring sci-fi question. What is reality when StarTalk returns? 
Hey, I'd like to give a little shout out thank you to the following Patreon patrons, Frank Kane, Tyler Ford, and Katie Garrett. Hey guys, thanks for helping us make the show possible. And if you want your name shouted out, make sure you go to Patreon and support us. Bringing space and science down to earth. You're listening to StarTalk. We're back with my interview with actor James Marsden from the sci-fi series Westworld. It's where humanoid robots are forced to play roles as hosts in a Wild West theme park. Check it out. Thandie Newton's character has a daughter that is, you know, it's fake, it's a written story for her. But yet, she has the ability to escape the park in the first season, and she doesn't. She goes back for her daughter, knowing that that is something that is not real. But she believes it is real because she feels it. So then you start to ask the question, what is real? If I feel it, if every fiber of my being pulls me towards that thing, then that's my own reality. You know, are we programmed? Are we living out some sort of virtual reality? A virtual reality world on earth? I don't have a good rebuttal to that possibility. Well, then that speaks volumes. I'm sorry. You don't have a rebuttal. I do not have a rebuttal. Right. So that could be a possibility. Right. Susan, how do we know humans are not the creation of other intelligent beings, just as sentient robots would be? We don't know. In fact... That is so not encouraging. Yeah, well, you know, some questions are intrinsically philosophical and you can't rule them out. So it could be that the universe was started by some sort of a supreme being. It could be, though, that reality is radically different than we think and that we're actually in a computer simulation. Well, neuroscientist David Eagleman is the guy actually tasked with answering those kinds of questions for the show Westworld. And we have David standing by live right now on a video call. David, are you there? Yeah, I'm here. Hey, David. 
He's a neuroscientist and adjunct professor in the Department of Psychiatry and Behavioral Sciences at Stanford University. And you're also the scientific advisor on Westworld. So how should we define reality in a future where consciousness may extend to robots? What is real and what isn't in that kind of world? Well, if we mean what is real to the robots, I've been thinking a lot lately about how our reality is defined by the senses we happen to come to the table with, our eyes, ears, and nose, and so on. With robots, you might have many more senses that they're tapped into, like they're picking up on magnetic fields and GPS, electrical fields, and data streams, and Twitter, and so their reality would be different as a result. I think our reality is constructed by what we bring in. So you raise a very important point that I try to spread as much as I possibly can, that the methods and tools of science have access to many more senses than the five biological ones, and these are machines that we build. So in principle, you can give one of these sentient robots way more of an experience of life than our feeble biology can possibly bring to ourselves. That's exactly right. And that's a big part of what we're doing right now in neuroscience, is building new senses for people to expand our experience of reality. How would implanted memories be any less real than actual memories? I think about this all the time. They wouldn't be any less real, of course. The interesting part, I think, is that humans have memories so that we can avoid making mistakes again. So what we do is we write down these building blocks of memory, and that's what allows us to simulate possible futures. So I thought that one advantage of giving robots really vivid memories, in theory, is to allow them to steer how they put together futures. You know, I guess I had always taken that for granted. 
If you don't have a memory or any sense of what could happen, then the whole world is just completely scary to you. That's right. That's why we write things down in our brains so that we can say what happens next. Do you have any measure of how we would know when a robot has truly achieved consciousness? Is there some modern Turing test that in the emergent world of AI, this is what we're going to turn to rather than just, oh, I can't tell by having a simple conversation? Yeah, the difficulty is that all we actually have is the Turing test. And as humans, we're ready to anthropomorphize anything, including each other, right? You don't know if your friend over there is actually conscious or not, but as long as the person does enough conscious-looking things, then you assume he's conscious. And when you look at Sophia the robot, who you just had on, she's got these wonderful little facial tics and moves, and so we impute consciousness onto that sort of thing. But I think what the Turing test tells us is we're just not that hard to fool. And so I'm curious, because what you're saying is it's not, the definition of consciousness does not live entirely in the thing itself. The definition of consciousness is shared by the reaction of everyone else around them. If you can't tell and you don't even know and you don't even care, we're good to go. Well, the weird part is that the definition of consciousness really is an internal thing. It's your own subjective internal experience. But the question of how you would know if something else is experiencing that, that's purely a social phenomenon where we look and we say, well, it walks like a duck and talks like, you know, quacks like a duck. And so it must be conscious. Are you a robot? I'm not. That's just what a robot would say. I mean, this is the really weird part, right? Is that, is that we are made up of, you know, a hundred billion neurons. And as far as we can tell, we are just machines, super sophisticated machines. 
But there's a sense in which we are robots. Programmed by our DNA. That's right. And all of our experience too. I mean, that wires you up. Half of you has to do with your experiences. Well, I'll cherish my experiences all the more going forward because now I will be more fully conscious of how it maps intelligent decisions I may make in the future. So thank you very much, David, for joining us on StarTalk. So up next, we will contemplate the idea that AI technology might allow us to live forever when StarTalk returns. This is StarTalk. We're talking about the rise of artificial intelligence. And I asked actor James Marsden if he'd want the chance to live forever, like his robot character on the sci-fi series Westworld. I don't think I would want that. I believe that we create our own value system within our life span based on knowing that there's a beginning and an end. And if there is no end, if it does continue on, does that kind of suck some of the energy out of what you value in life, right? You do this because you'll never do it again. You have a child because it's not forever. Because it's this special magical thing that you get to experience one time. My sentiments exactly align with yours. It is the knowledge that you will die that gives meaning to the moment. I agree with that 100%. And I think we know this implicitly even if we don't think about it actively. If you bring flowers to a loved one, you could bring plastic flowers, silk flowers that would last forever, but they would not be appreciated. It's the knowledge that the flower is finite in time and in space that forces us all to appreciate it. Appreciate it. Why that is special. Why that is special. I agree. I agree. And it would be interesting to see the sort of chain reaction of who we would become as a species if all of a sudden we were granted by progress and technology, you know, being able to live forever. 
Susan, so could advances in robotics and AI allow humans, empower humans to live forever? Is that a path? No one could truly live forever if the universe ends, but we might have almost immortality. She went to your world. She got your space. All right. I'll take that burn. Go on. If AI goes inside the head and replaces parts of the brain as they begin to decay, then we might be able to live for a really long time through the use of nanotechnology. So you would be rebuilding human physiology in situ. Exactly. And even... It's like if you have a boat and you repair one piece of the boat and you just keep doing that, and then you wake up one day, wait, there's no pieces from the original boat. But I still have a boat. I still have a boat. Right. Right. Right. That's exactly right. And in fact, that's what happens with normal survival anyway. So you probably don't have too many components that you had when you were five years old right now. Right. So Chuck, what would you do if you could live forever? Oh, my God. Are you kidding me? First of all, I disagree with both of you. I don't have a problem living for as long as... You're just like, I appreciate the moment because I know that death is coming. No, I know death is coming. I appreciate the moment because I'm in the moment. You want more moments. I want more moments. Susan, what are the ethical implications of developing a technology to live forever? Well, one thing that worries people is, what if the technology is only available to a handful of ultra-wealthy people, and so they live for thousands of years and all around them people are still starving? Oh, you mean like now? Yeah, so that could be terrible. So access to these trappings of the technology. Some people claim that the technology is worth pursuing, the longevity technology, because it will inevitably trickle down in the same way that computers and the internet trickled down. And so we should shoot for extreme longevity. 
But there are other issues with living forever, like space on planet Earth. Is there room? Do we have food? Can the natural resources support it? So it's not just a biological decision. Some other experts have to come into this equation and think this through. Absolutely. But my feeling about that is that we could colonize other planets and deal with resource scarcity if we have the technology in hand to allow people to live for thousands of years. And have you been to Montana? There's like nobody there. Yeah, I've been to Montana. Yeah. Eight people in Montana. So let me just ask you point blank. Would you live forever or for thousands of years if you had the opportunity? I would want radical longevity. I agree with you entirely. I think we would reinvent ourselves, find something interesting. The important thing here is that it should be a choice and we shouldn't regard death as inevitable. In fact, transhumanists, people who use science and technology to allow for human flourishing, argue that we're deathists if we assume that... Yes, they call it deathism. And the idea is that our culture is just thinking that death is inevitable and treating it as if, well, yes, and it adds to our life because it makes everything matter. Well, what if we sort of open our minds a bit and instead think that it's a terrible thing and we should strive to live as long as possible? I love it. Well, I asked James Marsden if we should try to stop the development of artificial intelligence. Check it out. Look, you can't stop progress, right? It's going to happen. My real hope is that we move forward with discipline and intelligence and caution. I think one of my favorite... Discipline, intelligence, caution. Those are important words right there, all in one phrase. Well, when you're talking about potentially becoming obsolete as a species, I think it's important, right? We'd look like real nuts. 
We'd look like the dumbest animals on the planet if we, by our own hand, created a species because it seemed like, you know, we were like, can we? Can we do it? That then made us obsolete. I mean, all of a sudden then the dodo birds look brilliant. You know, it's like we get excited about new. What's next? What's next? The bigger iPhone, the bigger this, the faster processor. Making a robot that doesn't do this, right? That does do this. That you can have a relationship with and all this. We listen and we forget what the hell it's actually doing to us as a people. Thank you. You know, look at the history of discovery, of technological innovation. Practically anything anyone has ever invented had a good side and a bad side. And it took the maturity of vision, the maturity of our species, to compartmentalize how these things are used. For the greater good, not for the greater bad. Yes, you have rogue elements. So we set up systems to try to tamp that down. Figure out who is the rogue element. Don't give them access to these technologies that could then be used for nefarious reasons. The question is not should we advance technology, that's going to happen. It really is. It's going to happen. The question is, then and now, and in the future, have we evolved a tandem level of wisdom alongside the technology to manage that technology? Because if technology exceeds our wisdom, it just becomes dangerous. We don't know who has it, we don't know how to control it, it just runs amok in society, possibly rendering ourselves extinct. But if we have tandem wisdom right alongside, that can ensure that whatever it is we discover, whatever threats it may pose, that wiser, more rational heads will prevail in how it gets used and applied to our survival, to the future of our species. And with AI, my hope for AI, I don't know how realistic this is, but I can have hope, maybe AI becomes so intelligent, it teaches us how to be better shepherds of our own inventions. 
That would be a future we could all look forward to rather than fear. And that is a cosmic perspective. I want to thank Susan Schneider, Chuck Nice, I've been your host, Neil deGrasse Tyson, your personal astrophysicist, and as always, I bid you to keep looking up.