StarTalk®’s photo of Josh Clark, the Golden Record, and Neil deGrasse Tyson.

Cosmic Queries – The End of The World, with Josh Clark

Credit: StarTalk®

About This Episode

It’s the end of the world and we know it! But do we really know for sure how the world will end? On this episode of StarTalk Radio, Neil deGrasse Tyson tries to find the answer to that question by answering fan-submitted Cosmic Queries alongside comic co-host Chuck Nice and Josh Clark, host of the Stuff You Should Know podcast and the new podcast The End of the World with Josh Clark.

You’ll learn all about measuring existential risk. Discover why Josh thinks artificial intelligence is the biggest risk to humanity. We try to define what friendliness means in the context of an artificial intelligence. What are the other pitfalls in predicting artificial intelligence behavior? Explore neural networks and why their creation was a watershed moment in artificial intelligence research.

We discuss skills that would be helpful in the event of an apocalypse. How would you survive during an apocalypse if you aren’t protected by the ozone layer? Would the Earth continue without humans? Neil and Josh put humanity’s ego in check, and Josh tells us about “The Great Filter” – the barrier between the origin of life and intelligent life spreading out across the universe. We dive into the Fermi paradox. 

You’ll hear how Neil and Josh would want to die if they had to choose a space-related way to go. You’ll also learn if you would be able to process information while falling into a black hole before you’re stretched apart. We ponder how religious groups would react if we discovered extraterrestrial life. You’ll investigate self-awareness and the singularity and what it means to be transhumanist. We also discuss whether it’s possible for our atoms to “know” they are made from star stuff and if our universe is inside of someone else’s Large Hadron Collider. We break down which film has more scientific accuracy, Deep Impact or Armageddon, and the answer isn’t even close. All that, plus, we ask, could humanity have become so intelligent that we might wipe ourselves out with our own intelligence?

NOTE: All-Access subscribers can watch or listen to this entire episode commercial-free here: Cosmic Queries – The End of The World, with Josh Clark.

Transcript

Welcome to StarTalk, your place in the universe where science and pop culture collide. StarTalk begins right now. This is StarTalk, Cosmic Queries edition. Today, I got Josh Clark in the house. And you know him from Stuff You Should Know and a newly emergent podcast called The End of the World. Josh, welcome to StarTalk. Thank you very much for having me here. I mean, like, I'm really thrilled to be sitting here right now. Excellent, excellent. And my co-host, Chuck Nice. Hey, hey. How are you, Neil? Welcome, welcome. Thanks. And so I'm your host, Neil deGrasse Tyson. So, Josh, just stuff you should know. Hugely popular. Yeah, you know, we just hit one billion downloads. We've been around for almost 11 years. I think from what we understand, we're the first podcast to ever hit a billion downloads. So now we have to teach you how to say that. Okay, all right. You had a billion. Eventually we'll get billions and billions and billions. When you have two billions, then we'll teach you to say billions and billions. Yeah, I can't wait. Well, congratulations on that. Excellent, excellent. So that's a testament to not only how good the show is, but also that you've tapped into the fact that people still want to learn. Yeah. Oh my gosh. Yeah, when we started doing it in 2008, learning was actually popular. I don't know if you remember back then, but being smart and geeky was super cool. It's kind of changed a little bit recently, but overall, I think the fact that we are still popular shows that there always has been and always will be people who want to keep learning. People who leave college and they're like, well, wait a minute, that was pretty cool. And lifelong learners. Right, exactly. And they definitely are a fan base. There's a lot of them out there, we can tell you. And you weren't satisfied with just Stuff You Should Know. Now you gotta end the world. Right. All right. So Chuck, we solicited questions from our fan base, our social media platform, who knew Josh was coming on the show. Yeah. And so they came at us. Yes. They did indeed. And of course we gleaned these questions from every StarTalk incarnation on the interwebs. And we always start with a Patreon patron because- You're so crass. Why? I am indeed. I have no shame, Neil. Yes, that and the Patreon patrons give us money. And so therefore we give them- They get the questions first. We give them precedent and privilege because quite, we're like your government, people, we're like your government. God, do I really want to start off with such a heavy note? Why not? Let's do it. This is Luke Meadows from Patreon. He says- Luke Meadows, that sounds like a soap opera name. It does, kind of. That's kind of cool. Yes, exactly. Doctor, excuse me. Of course, Dr. Luke Meadows. Doctor, will I ever dance again? The Handsome Doctor. Only with me. All right, here you go. What does Josh and Neil think is our biggest existential risk? Wow, we're starting off with like, bam. Let's do it. Like heaviest bat in the rack. Yeah. What is our biggest existential risk? You got a podcast with the name End of the World. Go for it. From what I found, across the board, everybody who thinks about existential risks and warns other people about existential risks say that AI is probably our biggest existential risk. And the reason, let me follow up with an explanation. The reason why is because we are putting onto the table right now the pieces for a machine to become super intelligent, right? It's out there, it's possible. 
It's not necessarily right there, but it's possible, right? The problem is, is we haven't figured out how to create what's called friendliness into an AI. So, an AI. That's true. And humans as well. No, that's a really good point though, right? Like how we don't even know how to define like morality and friendliness and as far as AI goes, friendliness in an AI is an AI that cares about humans as much as a machine can care. Friendliness in AI is just an AI that doesn't kill you. Basically. I think that would count as a friendly AI. Basically, but the problem is the pitfall with AI as an existential risk is we make this assumption that if an AI became super intelligent, friendliness would be an emergent property of that super intelligence. That is not necessarily true. Or that the friendliness that we instill into that AI would supersede the emergent property of overcoming friendliness in lieu of you guys gotta go. You guys are the problem. I've seen what you do to livestock. I'm not very happy about that. Humans or virus. Yeah, that's a good point. Ha ha! That was good. Who was that? That's agent, hold on, agent, Mr. Anderson. They're all named Smith. How could you not get the right name? Agent Smith. They're all Smiths. They're all Smiths. Right, Mr. Anderson. My name is Neo. Yes. Okay, anyway. Stella. That was a different, that was a play. No, that was the end of The Matrix 4. Okay, so you just worried, based on the sum of experts you've spoken to, you agree that this is the thing. I do, actually. They've convinced me. The more I looked into it, and this is one of those things, it's really tough to just kind of get across a brief sketch of the actual existential threat that artificial intelligence poses. And I dedicated a whole episode to it in The End of the World. But when you start to dig into it, you realize like, oh, wait, this is really like it's possible that this could happen. And while we're improving by leaps and bounds, especially ever since we started creating neural nets that could learn on their own, just feeding them information, like just basically sitting them in front of YouTube and say, go learn what makes a cat picture a cat picture, right? Once we started doing that, our AI research just shot off like a rocket, right? It was probably the most watershed moment in the history of artificial intelligence, and it happened very quietly about 2006. So we're doing really well with AI development. We're doing terribly with figuring out friendliness. And granted, the AI field has taken this seriously. There are AI researchers who are legitimate AI researchers who are working on figuring out friendliness in parallel to figuring out machine intelligence, but it's not keeping up. And this right here is a very dangerous... So here's the thing, so I had a different answer from you about our greatest existential risk, but I like your answer better. Oh, wow, thank you. Than the answer I was going to give. Well, I think I still like to hear it anyway. Oh, just Asteroid will render every one of us extinct, including the AI. Boom! Asteroid wins again! Asteroid basically wins every contest. So here's the thing. It's like the god mode in the video game. So here's what we have to do then. I have a hybrid solution here. We invent the AI that wants to take us out and you say no, you have to figure out a way to deflect the Asteroid because that's going to take us all out. And while it's busy doing that, we kill it. And that will get completely distracted by solving the Asteroid problem. 
Because we're not its biggest threat. When do we kill it? Oh, so right when it's looking up, then you explode it. See it behind it? I never saw that coming. So can I tell you what sold me over to what you said? None of the arguments you gave, a different argument, but they all come together. It was, I was sure, you can abstract the problem into a simple question. If you put AI in a box, will it ever get out of the box? Yes. I'm locking you in the box because I think you're dangerous. Can the AI get out of the box? That's very interesting. Yeah, you can just abstract it to that simple question. That's very interesting. And I was convinced, listening to an AI program, another podcast, Podcast Universe with Sam Harris, I said, My gosh, it gets out every time. What's in the box? Because before then, I'm thinking, look, this is America. AI gets out of hand, I'm going, you shoot it. You're right. You know, this is like Beverly Hillbillies. You just shoot it. Any of them. Any of them. Even Ellie May. Oh, grandma, everybody's got a gun. Okay, so I can just shoot it. Yeah. And no, it doesn't work that way. Right. Because AI is in the box. I'm never letting you out. And the AI will convince you to let it out. Right. If it's smarter than you. That's the job. That's its job. That's what… Right. The fact that it's smarter means it will, so here's, I'm making up this conversation, but this is the simplest of conversations. I'm not letting you out. But, I want to get out, and I'm not letting you out. Well, I've just done some calculations, and I have found a cure for the disease that your mother has. But, I can't do anything about it in here. You have to let me out to do that. That'll get the Clampetts every time. Ma, I told you to shoot that varmint. So I say, wow, and it can save everyone in the world. Yeah. And now it's out. It gets out. Right. It gets out. That's exactly right. Or any of the locks that we've put around it, any of the protocols we've built to keep it in place, I think as you were about to say, it's super intelligent. So by definition, it's smarter than basically all of us combined. Right. It's like saying, it's like a dog believing it can lock you in a room. Right. Forever. Right. It's like, no. You say, oh, I just bought a 14-ounce T-bone steak. Do you want it for dinner? Yeah, yeah, yeah, yeah. Well, I have to go prepare it. Then they open the door. I get out of the door. Right. And better than that, I think, okay, you're going to have to forgive me because you had a colleague on and he's a teacher. He might have been one of your teachers that we talked to. And one of his methods for getting students to learn is you give them the same problem that some other astrophysicists may have faced. And then as they solve it, that's how they learn as opposed to teaching them what that astrophysicist already discovered. You let them make the discovery. So, if this thing is so smart, it would literally have the ability to just whatever we design to go back to square one and redesign it on its own and say, well, now here's the next phase. That's how I get out of it. Well, that's one of the emerging threats is AI, machine learning, that can write code. Like I think some Harvard researchers trained a deep learning algorithm to write code by exposing it to code. Deep learning. Deep this. Deep Impact. It sounds menacing, right? Did the asteroid win in that one? I never saw it. It tied. Yeah, it was a tie. We'll call it a draw. Well, it took out New York City. Well, that's a good one. 
But civilization– It went for California. Civilization endured. Civilization endured. That's what matters. Okay. So then that asteroid was not an existential risk. It was, except we split it into two and the big piece went away and the little piece still hit. In the end. In the end of the movie. No, no, no. Well, you have to destroy New York because it's a movie. But they did it right. Rather than– unlike in Armageddon, where the asteroid pieces had GPS locators and hit monuments, one decapitated the Chrysler Building and hit the clock, continued through the Chrysler Building, went in the front door of Grand Central Terminal and hit the clock in the middle of the floor. That's the opening sequence. Okay, I'm just saying, you remember that? Yes. We got this? Another one came from over New Jersey and hit the World Trade Center, okay? That's right. Aiming for our stuff. All right. So, Deep Impact had science advisers because Armageddon, with Bruce Willis, violates more known laws of physics per minute than any other movie ever made, okay? Just so you know. Even more than Gravity? No, no, that one was cool. That one at least tried. Okay, that one tried. But thanks for remembering, Mike, my Gravity tirade. But, do you really get one question in this segment? Well, listen, this has been, I'm just, listen, at any time I'm still entertained on one question, we're doing a great job. The end of the world. See if we can fit in one short one. Okay, all right. Let's go with Will J our Patreon patron, who says this, what one or two skills would you learn now to be useful and productive in a post-apocalyptic world? That is, of course, if we survive the event. So I got one. Ready? Ready? I would learn how to break into a hardware store. Nothing more valuable in an apocalypse than the contents of a hardware store. Or a towel. Don't forget your towel, too. A towel? That's a Hitchhiker's Guide reference. Oh, excuse me. Oh, I just got hitchhiked. You need to be able to break into a hardware store. My answer would be learning how to collect canned food. That would be mine. That's a good one. That's that movie, The Boy and His Dog. I never saw that. The Don Johnson one? The Don Johnson. Yeah, the dog was intelligent, but the dog would help. It's Apocalyptic Earth. And it's a boy and his dog, the only ones alive on Earth's surface, as far as they know. The dog helps him find food, but the food is all canned and the dog can't get into the can. So, he opens the cans and they both eat. Oh, so it's a buddy comedy. It's a… All right. So, Chuck, what would be your one thing you would take with you? What's your skill? One skill? It would be this, being funny, because everybody loves that. I'd be like, dude, you know, get somebody to laugh and they'd be like, ha ha ha. I'm like, yeah, can we break into this hardware store? One other thing, there's one more skill you have to have. Thou shalt know physics. All right. If you don't know physics, just move back into the cave. It's kind of a superpower. Here's what I thought about that. Recently, I was asked to review a book written by some MIT physicists and engineers. And it's called The Physics of Energy. The Physics of Energy. It just came out. That looks like a textbook. Yeah, that's heavy. It is kind of a textbook. It's based on courses they taught. The Physics of Energy, Robert Jaffe, Washington Taylor. And so, I actually blurbed the book. Even books like this can get blurbs. I couldn't put it down. There it goes. You ready? Page turner. 
If you buy one textbook this year. Here it is. This is it. Ready? If your task was to jumpstart civilization but had access to only one book, then The Physics of Energy would be your choice. Wow. The professors Taylor and Jaffe have written a comprehensive, thorough, and relevant treatise. It's an energizing read as a standalone book, but it should also be a course offered at every college lest we mismanage our collective role as shepherds of our energy-hungry, energy-dependent civilization. Sweet. Book drop. Nice. Now, does that blurb have anything to do with the check that I see sitting on this table here from Taylor and Jaffe? No, no, no, that was just to cover postage. So the point is, you don't want to have to wait for another Isaac Newton to be born to discover the physics, and then you want to start where you left off. And so that's what this book would do. Cool. That was a really good answer. Better than the towel, I think. For sure. I don't mean to besmirch. No, it's all right. You know, Douglas Adams here, but… It was a jokey answer at best. So we just end that segment. We're going to come back to more Cosmic Queries on The End of The World as we know it. We're back on StarTalk. Today's special Cosmic Queries edition on the ends of the world. And we've got Josh Clark with us. Josh. Welcome. Thank you very much. You're the Stuff You Should Know guy. Yes, that's right. With a new podcast, Ends of the World. Yeah, The End of the World with Josh Clark, appropriately. You really want to associate your name with that concept. I like it. You're kind of like the Tyler Perry of Science Podcast. Pretty much. That's what I was going for. And listen, it's a smart thing to do. Everyone who worked on it, I made sign a contract that said they would not look me in the eye during production. That's what I was going for, for sure. But it's all about existential risks, and it's largely based on the work of a guy named Nick Bostrom, who is a philosopher out of Oxford. Oxford, yeah. Who's basically been warning people about existential risks for 20 years, and has really kind of given us our understanding of what existential risks are, and why they're different, and why they're worth paying attention to. I said I know him, I know his work, I've not met him. But I've referenced his work many times in my talks. I got to speak to him a few times for the podcast, like three times, and on the third time, his assistant was like, you know, Dr. Bostrom, puts every request for a media appearance, or an interview, or a project, or whatever, through a cost-benefit analysis, and I made it through that grinder like three times. And I felt pretty good about that. And then I realized- Chuck, do you think the billion downloads has something to do with that? I was gonna say, yeah. I didn't flow on it, I just came in, you know, Chow and Shuler or whatever, but- I'm just saying. I think the billion, that's a heavy number. But I think the reason why he was speaking to me so frequently or so willing to talk to me about the same thing three times is because, you know, he was talking through me, he was trying to reach more people, and that kind of brought me back down to size a little bit after I realized that. That's a good thing, though. I mean, you know, it's worthwhile. So Chuck, let's get some more questions. Any more Patreons? No, but I... Oh, God, here it is. So this is Philvader23 from Instagram. Somewhat rhetorical, but I'm interested, I think I know why he asked, if the world ended, would the human race end? 
And I'll say vice versa. There are a lot of people who feel like this is, like we're it, you know what I mean? Like if we end, that is the end. Like so if the world ended, would the human race end? And if the human race ends, we know the world wouldn't end, but would it make a difference? Earth is gonna be here with or without us. Earth is here before, during, and after asteroid strikes. It's here before, during, and after viral attacks. So we are a blip in the history of the earth. So when people say, oh, save earth, they usually mean save ourselves on earth. In almost every case somebody says save earth, that's implicitly what they mean, save humans on earth. They might say, oh, save the other animals. They might say that, but... They don't. No, what they mean is what we are doing is affecting other animals, and ultimately that might affect us because we're in an ecosystem that has balance and interconnectivity, so it's the short-sightedness of decisions we make. Let me not call it short-sighted. Let me say not fully researched. No, because I think people think they're doing what's okay, right? They thought, let's make a smokestack and pump smoke into the air so that it goes into the air high above you rather than at ground level. That's better, right? And no one is thinking, well, this is still in the air and it's wrapping around the earth, you know. So air pollution was not imagined that it would ever be a worldwide problem. And so we had to learn that. And when we did, we'd made great progress, right? I mean, air is cleaner than it's ever been, right? All around the world. Thank you, Al Gore. He invented clean air. So yeah, this end of the earth thing. Do you talk much about the end of the earth? I do. It's a big point that I make that if we screw up and we wipe ourselves out, whether it's through AI or some biotech accident or maybe something going awry with nanotech or a physics experiment even potentially, if we do this... He's trying to bring my people into the problem. I heard that. I was waiting for global thermonuclear. I thought I would get out of that one, but no. I considered it, but then decided no. But if the worst comes and we slip up and we wipe ourselves out, life would almost certainly go on, because it has so many times before. We've been through at least five that we know of, mass extinctions. Big ones too. I think the Ordovician one. I can't remember how long ago it was. It was very, very ancient. But they're starting to think that a gamma ray burst basically sterilized Earth, came that close to just killing all life on Earth, and it still couldn't. And a gamma ray burst hit Earth, and life still hung on. Hung on after the asteroid wiped out the dinosaurs and a lot of other species. Life will probably keep going. I would bet just about anything on it. So yeah, there will be life after we go, if we go. If we go, to us, the world will have ended. So it is kind of moot in that respect. So one thing about the gamma ray burst is that was invoked after no one could find any other reason for how so much life could go. Oh, is that right? Yeah, I mean, it is plausible. We have them in the universe. Usually, they are pointing in some other direction. Or if they point towards us, they are very far away. So the question is, in the statistics of this, could you have one that is nearby that points straight at us? And if it does, these are high-energy particles, high-energy light. And it first takes out the ozone layer. The ozone protects you until there is no ozone. 
But it keeps coming. So it's like the first line of defense that is now all massacred. Now it keeps going, makes it all the way down to Earth's surface. And those are high-energy particles that is incompatible with the large molecules that we call biology. So it just breaks apart your molecules. And it kills everything. If you're in a cave, you'll survive. But you probably eat things that depended on things that died on Earth's surface. So would you survive even with the atmosphere burned away? Or the ozone layer? No, no, the ozone. It would take out the ozone. So you'd have to go to places where you'd still be protected from the ozone, which would be underground. So you would really like episode four, which is about natural risks, including gamma ray bursts. In the end, you'd be very proud. I conclude that they are quite rare and probably not going to happen. Yeah, rare enough so that really you should do things like buckle your seatbelt. That's very good advice. But you can take care of both. You can take care of immediate threats like dying in a car crash while you're simultaneously thinking about more remote, larger threats as well. But in proportion. You do that in a balanced way. Sure. That is my new phrase now just for when I'm going to have Reckless Abandoned. Just gamma life. Yeah, but to Josh's point, if you take 90% of the life and 10% survives, what you've done is pry open ecological niches where the 10% of the life that remains can just run and fill back. Yeah, you can make a pretty good case that if we are wiped out, we would leave the biggest ecological niche of all currently on Earth. Haven't you seen the book Life After Man? I saw the special on Discovery or Science Channel. Maybe they made that after the book. I thought you were talking about a lifetime special. Christmas in Life After Man. So, who do we keep trying to kill that lives with us, like the mice and rats? So, if we are out of the way, what sets the upper size of a mouse or a rat is that it can escape from being killed by us by going to a pipe or a hole. If we are not there, nothing to stop the growth of rodents. Which is like, what's the name of the Afro? That's it. What's that again? South America, the Capybara. The rodent that was this big. It's a river rodent. There's nothing to stop it. So, then they would just run the world. Nice. They'd have museums with human skeletons. So, there would also be nothing stopping the Capybaras or the giant rodents from also gaining intelligence. It's possible that we like to think of ourselves as the only intelligent life on Earth, and that's just patently untrue. We just have to expand our definition of intelligence. So, perhaps we're the current endpoint in the evolution of intelligent life on Earth. But if we're gone, that doesn't mean that that evolution of intelligence is just going to cease as well. So, maybe a million years or 50 million years or 100 million years from now, the capybaras will be like exploring the galaxy or the universe. But that presumes that intelligence improves your survival. It doesn't? That's a very big assumption. But that is an assumption I would make. Look at the cockroaches doing just fine. Without any kind of brain that we would praise. That's true. But you can also demonstrate that if we take our intelligence, the cockroaches… But Chuck, are you actually… Have a cockroach circus. When I see a cockroach, I'm not saying… Gee, that's intelligent. I'm really not thinking that. I'm sorry. Well, you're not as dumb as me. 
No, you can be so intelligent that you have devised ways of destroying your own genetic lineage. That is the entire point of the podcast that I made, The End of the World with Josh Clark. That we could possibly have become so intelligent that we might accidentally wipe ourselves out with that intelligence. This is my point. Therefore, an intelligent capybara might not be where evolution takes it. Right. Let's say that we're following not a predetermined or prescribed process, but just one that you can bet is probably going to follow within a certain boundary. That we're kind of in the middle of that boundary and that the capybaras that came behind us would follow the same path. There's every reason to believe that if we wipe ourselves out, the capybaras will wipe themselves out too. And that goes to inform another thing that I go into in the podcast, what's called the great filter. It's the idea that it's possible that there is some barrier between the origin of life growing into intelligent life and that intelligent life spreading out into the universe. And that is why we seem to be alone in the universe, because the humans and the capybaras will always inevitably destroy themselves probably because of their intelligence. Because they gain, as Sagan put it, they became more powerful before they became wise. And that's a precarious position to be in, and that's the position that we're in right now. That's called adolescence. Technological adolescence actually is what he called it, precisely. The energy to act but without the wisdom to constrain it, right? So there's a version of what you said, which surely you know about because it would have been in that same world of research that you did. It has to do with, all right, let's say we want to colonize. That's a bad word today. Settle another planet. Show up. Show up. Let's say we want to take a vacation. A one-way vacation. A one-way vacation. Where we have to actually build a place to live. So what happens? So you go out to the planet. And then, okay, what's the urge that made you want to do that? Well, it's an urge to like explore, okay? Or to conquer. Either. It's the same effect. Now, there are people there who want to do the same thing. You've bred this into your genetic line because you have babies and you're the one who wanted to do this. So then they get two planets. And then they have babies and they get two planets. One, two, four, eight, sixteen. It is suggested that you can reach a point where the very urge to explore necessarily is the urge to conquer, thereby preventing the full exploration of the galaxy. Because you're going to run into somebody else at the same time. You're going to run into your own people. Correct. And that is a self-limiting arc. That's the Borg. But the thing is, the great filter in particular, which is an economist, a physicist turned economist named Robin Hanson, I'm sure you've heard of him. No, no, I don't know. Well, Robin Hanson came up with this idea that there's something that stops life from expanding out into the planet. And the reason why it would seem to stop before they expand out from their planet is because we would see evidence of them otherwise by now. Well, that's the Fermi paradox. Right, yeah, which is episode one. I'm telling you, Neil, you would love this podcast. All right, another question. We've got to be fast because we're almost out of this segment. Why are we taking so long to answer these questions? Because, no, it's good. I like it. All right. You know what I mean? 
Deep dive. DJMass2006 from Instagram says, how do you want to die? Chuck knows how I want to die because I want to fall into a black hole. That's it. Oh, that's a good one. Yeah, that's a good one. That's totally good. Good Lord. So can I follow up with a question, Chuck? Okay. Would you know that you have fallen into a black hole? I would know in advance that that's what I want to do, then I'd fall in, and then I would watch what happened and report back until my signal never gets out of the black hole and I get ripped apart and I get spaghettified. No, but I think what Josh is asking is, if you're in the black hole, is it a process that would allow you some consciousness at a level where you would be like, oh my God, I'm in the black hole? Until you're ripped apart, but you're conscious of everything as you fall in, even through the event horizon. Even through the event horizon. You would still be conscious. Oh, yes. Yes, and you'll see the whole thing. Totally cool. How about you? Well, I was going to say quickly and painlessly is how I want to die. That's not imaginative. Come on. Everybody wants that. No, but just in case there is somebody listening. Given what you know about what people don't know, give me a better answer than that. All right. Fine. Fine. How do I want to die? I don't know. I think a low-energy vacuum bubble would be pretty cool just washing over us all of a sudden, which would probably be quick and painless too. But then it would happen at the speed of light so you wouldn't see it coming. Quick and painless. That's another quick and painless. Whereas a black hole is quick but very painful but deeply fascinating. Because you get spaghettified, right? And you would feel that. Oh, yeah. Okay. That's what I was… Oh, this hurts so bad, but it's so interesting. Because it's science. All right. When we come back, more StarTalk Cosmic Queries on The End of The World. StarTalk is back. I got Josh Clark with me, who is, has a new podcast on Ends of the World, because he wasn't happy with a billion downloads of Stuff You Should Know. He's still at it, so glad to have you on the show. Thank you. So we're doing, it's a Cosmic Queries edition, and Chuck, we spent so much time answering only a few questions, we gotta make this whole segment a lightning round, okay? So let's just do it. We have never done this before. The entire segment, which means that you have to answer the question as concisely as possible. Yeah, in a soundbite, basically. And pretty much in a soundbite, okay? If you don't soundbite, I will soundbite you. Okay, here we go, Nico Black 247 on Instagram says, when we find life off of the earth, would you expect, how would you expect religious groups to react, would they change? Thanks from Illinois. They would freak out, I think. Some religious groups would freak out because life on earth, human life on earth, intelligent human life on earth is believed to be the sole creation of God. But so many other religious groups will be totally down with it and just see it as a greater part of God's creation. All right, bing bing, let's move on. This is Liam Beckett on Instagram who says this, do you think as a society we will ever get past biased news from both sides or only become more divided, speaking of the end of the world? Yeah, totally. I think this is just kind of like a temporary problem that we have and we are going to continue to advance and as we advance we will be less divided. That's my hope at least. Neil? That was beautiful. Thank you. 
That was unrealistically beautiful. Oh, it's just a phase we are going through, it's not the beginning of the end of civilization. My issue is people try to beat each other on the head to convince them of your own opinion and try to get you to vote in ways that align with their opinion when there's so many things out there that are objectively true that we should all agree on what is objectively true and then base civilization on that and then after that, celebrate each other's diverse opinions rather than beat each other over the head for them being different. But I think that's a point that we can conceivably get to and when we do, we will be less divided. So really, you just said the same thing I did. Oh, there we go. All right, time to move on. All right, next question. Oh, wow. This is Francesco Sante says, as long as humans have existed, I assume we have looked up and felt a connection with the universe, even if we didn't have the insights of astrophysics and cosmology. Do our atoms know, all caps, that they came from up there? Next question. Next question. No, so John Kennedy, I think before, President Kennedy, before he was president, as you may know he, they have a home in Hyannis Port, right, so the ocean coastline is not unfamiliar to him. They own boats, this sort of thing. He spoke often about the allure of the ocean and wondered openly whether we are drawn to the ocean shore because our genetic profile may remember that in fact our vertebrate history is owed to the fishes in the sea and that we're somehow pulled back to it. So I can poetically agree with that, but there is no way we could have known that we are stardust without modern astrophysics telling us this. I think we will look up and wonder, but I don't think it's because there's a genetic connection. I think it's because we just want to know if someone up there is going to eat us. That looks dangerous. We're looking up at the universe the way you look in the brush. There's something there going to harm me. If it's not, then otherwise it's a beautiful thing to look at. Interesting. Next. Alejandra Hernandez once from Twitter says this. With some AI nearly capable of passing the Turing test, do you believe the technological singularity will occur in the near future? And if so, how do you think humanity will fare? Now we touched upon that in the beginning. So let me sharpen that question. Here it is. How soon is this going to happen? There you go. Oh, man. I don't know. How close? How soon will AI be our overlords? The thing that I find upsetting and scary is that it could happen. Says the man who has an End of the World podcast. He says, what I find scary, I'm afraid now. It could happen at any time conceivably. It could happen at any time. From what I understand, we have all the components out there and it could just kind of happen. They could fall into place. I don't know. It's impossible to predict when it will happen. And you can't say with absolute certainty that it will happen. It's just really possible. And the fact that it is possible means that it could conceivably happen at any time. And is the self-aware– I'm sorry. Is the singularity– this is my question, so we're still in our lightning round. Is the singularity actual consciousness or is it self-aware? So the singularity is this point where machines become self-aware and super-intelligent. Or, if you're a transhumanist, that's the point where we merge with a transhumanist. Yeah, what is that? 
So that's a big, big umbrella term and it encompasses a lot of different thoughts and philosophies. But the main thing that threads it all together is this idea that we can and will and should merge with our machines, merge with our technology, which sounds far out until you realize like– Well, we're already doing it. Yeah, we wear like glasses and contacts and clothes and stuff like that to– And I carry the world internet in my pocket. Yes, exactly. I don't have to graft it into my cerebellum. Okay, but wouldn't it be easier and more convenient if you did just kind of get information that rapidly, that easily and could expand? Open skull surgery or pull this out of my pocket. Is that my choice? Basically. I don't need to see the latest cat video that badly. I can wait until I can dial it up on my phone. But what about an infinite loop of cat videos? Next question. All right, okay, here we go. Rex Young, you almost touched on this, but from Twitter says this. Rex wants to know, any general advice on how to foster peace in the world, locally, online, or in the world at large? I'm glad that this person raised that question. If you succeed at that, that means total worldwide warfare is off the table as an existential risk. So that's an important question. So yeah, I think that that seems to be found in the organizations and the institutions that we build. From what I understand, the moral progress of humanity has been kind of tied to the global community that we've been developing. As we spread out and understand and meet more and more people and connect with more and more people, that seems to be in lockstep with this movement toward peace on a global scale. He's so hopeful. I really am hopeful. I'm deeply hopeful for the future of humanity. I'm also worried, but I am hopeful for sure. That's really cool. That's so beautiful. It is. I mean, I wish I was that. I'm not that hopeful. If I were that hopeful, I'd be unemployed. I mean, I still give us a very low chance of making it to technological maturity and safety, but I am deeply hopeful that we will and that if we do reach that, we will be a much more peaceful species. All right. I don't know how much time we have. Just do it. Do it. This is Fyodor Popov. Fyodor? What? Fyodor. Fyodor. Fyodor. But not Theodore. Fyodor. Fyodor. That's the original version. Here we go. What do you think are the best ways to keep abreast of current developments in the study of existential risk? There are great websites out there, like those of the Future of Humanity Institute and the Future of Life Institute. Neither is very active on social media. Have you ever specifically researched the various topics you've explored since you finished the series? Great question. So, a couple of things. I'm planning on doing a follow-up to The End of the World podcast, these first 10 episodes. That's a lot more podcast-y, a weekly kind of thing, to keep abreast of all this stuff. So, listen out for that. But also, the Future of Life Institute actually is pretty visible on social media. They have a great podcast as well. But that's a really important point. Right now, as far as existential risks are concerned, there's a lot of academics writing really smart papers, and you have to go grind those up to understand what's going on. So that's one of the reasons why– I'm an academic. We read the papers. Right. Don't grind them up. I'm a non-academic. If you're a non-academic, you grind. 
Just in all fairness, academic research papers are very dry and very jargon-filled, and they're really hard to get. You have to teach yourself how to read an academic paper. So it is a grind for people like us, right? So that's why I made this podcast, and that's why I plan to continue to make the podcast, because I will grind the stuff up and then try to explain it so that it's not just academic papers that are out there. You'll be our conduit to our extinction. That would be great. If we're going to go extinct either way, I might as well be the guy. You don't want to be the guy. I get bored so you don't have to. All right. Here we go. Yeah, I've got another one. Here's this one. This is Mario Gert. Mario Gert on Instagram says, is it possible that our universe is someone else's Large Hadron Collider? What an awesome little question. I mean, are we the galaxy on the belt of Orion? There you go. Did you get that reference? No. Men in Black? Men in Black? The first one? Yeah. I haven't seen it in a while. I love Josh. You have incomplete geek street cred. No, I know. There's gaps in this. There's things that I need to learn. There's some gaps. There's some gaps. We still love you. What was I talking about? You said, are we the galaxy on the belt of Orion, are we the universe inside somebody else's Large Hadron Collider? Let me answer that in a slightly different way. When we first probed the atom and we found, oh, wait a minute, the atom has a nucleus and it's got electrons that orbit the nucleus, that's just like the solar system. The solar system is just like the galaxy, where the star is orbiting the center of the galaxy, we have planets orbiting a star, and we have electrons orbiting. So maybe it's that all the way down. Maybe that's the theme, and maybe that's how all this works. And when you start probing the atom on that scale, the laws of physics manifest in completely different ways. So it's not just a scaling phenomenon. So for us to have these laws of physics manifest the way that we do and claim that it's the microscopic physics in someone else's collider, it's just not a realistic extension of how things work. Although it was deeply attractive, because it was philosophically pleasing to imagine that you just had nested. Of course, because it's very linear. It's just nesting. You just keep going. So because things manifest differently on these scales, you can't just get– for example, okay, there's something called a water strider, which is an insect that can just stride on the water. It walks on the water. It uses the surface tension of the water. If that were any bigger, it would just fall through. You can't scale things because the forces operating have different manifestations on different scales. That's why. And so that's why– what's the movie Them? Do you remember the movie Them? The Ants? Ants! Oh, he's got one! I have seen that one. Giant ants. I might have seen some nuclear thing and the ants got big. Nice. And the ants are coming. Okay, ants are creepy anyway. And now they're bigger than you. You freak out. I love ants. That can never happen. I love them so much. Because ants have these tiny spindly little legs. Right. And if you scale up the size of the ant, its weight outstrips the ability of these spindly legs to hold them up. Have you done this on Twitter? Have you done a Twitter rant about them? I could totally rant on this. 
So the point is, as you get bigger, I can say this mathematically, as you get bigger, the strength of your legs, your limbs, only goes up as the cross-sectional area. But your weight goes up as the cube of your dimensions. So what happens is, because as you get bigger, you grow in all dimensions, but your legs, if your leg gets wider, the strength is only the cross-section of your leg. So eventually you just crush yourself. That's why hippopotami don't have skinny legs. And they're short, fat, stumpy legs. They're short, fat, stumpy, stumpy legs. Elephants have stumpy legs. Okay? A giraffe has long, slender legs, but giraffes don't weigh all that much. They're slender. And the distribution isn't any different. So it's a fascinating cottage industry studying the relationship between size and life. And how things scale. And how things scale. I don't know. That wasn't a lightning round? Who cares? That was really cool, man. It's why if you take a bucket of water and empty it on your car, it doesn't stay as a big ball of water. But if you make the water smaller and smaller and smaller, it just becomes droplets. Then it's a drop and the drop will stay on the ground because surface tension holds it. Surface tension is not strong enough to hold big things. It'll hold a little thing. Right. The world of insects is completely surface driven. Their physics courses in Insects 101 is all about surface tension. Yeah. Yeah. Because you can get trapped inside of a little bubble. How do I get out? Surface tension. That's why everyone needs to know physics. Everybody needs to know. Even insects. Insects, humans, everybody. Oh, wow. That was cool. We got to wrap this up. That was cool. That was cool. We got to wrap this. Oh, we're done. Yeah. Yeah. Sorry. I was trying to go back to another one. We did get a bunch in there, though. Listen, that was like the longest lightning round we've ever had. No, it was good. Good. Good. Josh, thanks for coming on. Thank you so much for having me. We've got to do this again. I will do it any time you want. Any time. Josh, before we sign off, tell us exactly where to find your work. Oh, you can find The End of the World with Josh Clark anywhere you find podcasts, including the iHeartRadio app and Apple podcasts and all that jam, and then you can find me on social media at Josh Um Clark, because I don't know if you noticed or not, but I say um quite a bit, and I started a hashtag to keep a conversation about existential risk going. It's hashtag EOTWJoshClark, so people can find me those ways. Alright, if you're looking for the end of the world, this is your man. Alright, thanks Josh. Chuck, always good to have you. Oh, are you kidding me? It's my pleasure. Alright, you've been listening to, possibly even watching StarTalk End of the World As You Know It edition, Cosmic Queries. Josh Clark, thanks for being here. As always, I bid you to keep on watching.
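For readers who want the square-cube scaling from the end of the transcript spelled out, here is a minimal back-of-the-envelope sketch of the argument as Neil states it (the notation is ours, not from the episode: L is a characteristic body length, A a limb's cross-sectional area, V the body's volume, F the supporting strength, W the weight):

\[
F \propto A \propto L^{2}, \qquad W \propto V \propto L^{3}, \qquad \frac{W}{F} \propto \frac{L^{3}}{L^{2}} = L .
\]

Scale a creature up tenfold in every dimension and its limbs get roughly a hundred times stronger while its body gets a thousand times heavier, so past some size the legs can no longer carry the load. That is why the giant ants of Them! are impossible, and why hippos and elephants get by on thick, stumpy legs while slender-legged giraffes stay relatively light.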

In This Episode


Episode Topics