About This Episode
Are we making science fiction a reality? Is that a good thing? Neil deGrasse Tyson, Chuck Nice, and guest Adam Becker, science communicator and author of More Everything Forever, take a clear-eyed tour through sci-fi dreams, tech-bro promises, and the real science shaping our tomorrow.
We explore the predictions made by billionaires and futurists who claim they’ll build space empires, conquer death, and upload human consciousness. Adam breaks down how he researched the book by reading tech bro predictions, cross-checking with experts in their field, and getting stonewalled by the CEOs themselves. From there, the trio examines Mars fantasies: Elon Musk’s dream of a million-person colony by 2050, the brutal physics of radiation exposure, perchlorates in Martian soil, and why even The Martian got key dangers wrong.
We explore the singularity, “functional immortality,” and whether intelligence is a single number fated to grow exponentially. We unpack why Moore’s Law has already hit physical and economic limits, why AGI won’t magically fix global warming, and how physics (not hype) sets hard boundaries on computation, energy, and runaway technological dreams.
Do tech bros need a lesson in reading comprehension? We explore the misreading of science fiction as blueprints instead of warnings, and the real dangers of concentrating wealth, power, and hypothetical AI “genies” in the hands of a few. Along the way they revisit Star Trek, dystopian futures from Blade Runner to Soylent Green, and what science fiction can teach us about ourselves—if we actually read it correctly.
Neil closes with a cosmic perspective: the future will be shaped not just by invention, but by wisdom, restraint, and how responsibly we wield the tools we create.
Thanks to our Patrons Jeremiah Washington, Lawrence Burr, PAscal, Melissa Lange, Noah Naccarato, christian lopez, Matthew Thomas Dunn, thalian, Morten Leirkjaer, Jonathan Markey, Expo, Heather, Brandon G, S Gr, carwingo, Neil, Micheal Rogerson, Torgeir Sundet, Nina (aka HelloDaydream) Scott Polaske, Christopher Branch, Matthew Tarter, Jeff Dasteel, Matthew Light, Dj Stuffin, Virginia Walters, Pablo Rojo, Don T, Jacob Searcy, Jeffery Marraccini, Madam Power, Bartosz Jaworski, Jonathan Amir, Brandon D, Zdeněk Bučko, Mason, Benedikt Hopf, L4NK, Susan Baumgartner, Austin Browning, Kari Windham, How to Poe, Richard C, Margie Baker, SubTheGator, Harry W Peters Jr, Sean, Ravi Kapoor, Diego Sanz, Jeremy Malli, Walter Mashman, Arthur Cousland, Jordan Dck, Ryan Kroboth, Daniel Carroll, Bartlomiej Lepka, Christopher M, Starry Dust, Kingfisher9000, Pdub, Mat Cauthon, Leithor, Wesley Wright, MJ Ladiosa, Minty FreSH RandoMness, Gilberto Garza, Daryle Lockhart, Lyric Kite, Sasquatch, Carolyn Duvall, Heather Renn, DavidX, Mr. Thrasher, and Tracy Boomer for supporting us this week.
NOTE: StarTalk+ Patrons can listen to this entire episode commercial-free.
Transcript
Love me some future talk, but they spook me, though, with the way they talk about the end of civilization.
But we don’t know whose hands are on the steering wheel, we don’t know who’s shaping this future, and that’s why there’s concern.
Well, I know who’s shaping it, and I’m scared to death.
All right.
It’s you.
Let’s watch Chuck be scared to death as we discuss all the ways tech will be shaping our future.
Coming right up, StarTalk Special Edition.
Welcome to StarTalk, your place in the universe where science and pop culture collide.
StarTalk begins right now.
This is StarTalk Special Edition.
Neil deGrasse Tyson, your personal astrophysicist, and when I say special edition, it means I turn to my right, and Gary O’Reilly is sitting there.
Gary, where did you get your British accent?
Stolen.
Just a thief.
Chuck, you’re otherwise known as Lord Nice, but we can call you Lord of Comedy.
Lord of Comedy.
Can we do that?
Okay.
Let’s do that.
So today, we’re gonna explore a vision of the future, and everybody’s got their take on the future.
Yes.
Everybody’s got, everybody, but they all have different takes because they come in from a different place.
So you gotta hear it all if you’re gonna assimilate it into something that you’re gonna take action on.
True, yeah.
So either make something happen or prevent something else from happening.
Yeah.
So set us up, Gary.
All right, so what does the future hold for us?
That’ll include scientists, science fiction authors, tech CEOs, and the so-called futurists.
Everyone has their own idea for the future technologies.
Visions of AGI, nuclear fusion, the singularity, transhumanism, living on Mars.
We’ve got to get to the moon first.
No, you don’t.
You go straight to Mars.
Do we?
Just don’t get me started.
Get your ass to Mars.
And there you have it.
And stuff we talk about all the time on StarTalk.
And in the face of new technological developments, we’re quickly going from science fiction to science reality.
But are we headed towards the utopian, or are we headed towards dystopia?
That we’ll get into.
Are these technologies as close as they claim?
Is science fiction always a guiding light, or can it be a blueprint for those in power?
So on that note.
So who do we have today?
Adam Becker.
Adam, welcome to StarTalk.
Thanks for having me.
It’s good to be here.
All right, you got a PhD in computational cosmology.
Love it.
That was back in 2012.
And you wrote a book in 2018 called What is Real?
That’s audacious.
The Unfinished Quest for the Meaning of Quantum Physics sounds like there’s a little bit of philosophy in there.
Or a lot of philosophy.
No, there’s some.
And you just came out with a new book because you’ve been a writing science popularizer maniac ever since your PhD.
Pretty much.
Yeah, yeah.
So, here’s the title.
I love this.
More Everything Forever.
AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity.
Oh, I got a better title.
We’re F’d.
You know, you should have had that title.
Yeah, you know, we considered it.
But we just didn’t think that it would really, you know, sell.
So, my researchers told me that we’ve corresponded before.
Because I’ve only just met you now.
Yes, that’s true.
So, they told you we corresponded, but they didn’t tell you?
Wow, okay.
Okay, so what happened?
So, they set me up.
That’s what happened.
They’re so unlike us.
Oh, no.
What?
Oh, God.
Well, what happened was.
What happened was.
Exactly.
I was a snot-nosed kid in grad school and came to visit the museum and noticed what I thought was a mistake on one of the plaques.
And so I emailed just the General Astronomy Department email here at the Rose Center.
And then two weeks later, you wrote back.
Oh.
And what did he say?
Well, I’m sure I would have been polite.
Was there a mistake?
And did Neil school you if there wasn’t?
Whether there was a mistake or not was a matter of some debate.
Oh, really?
Yeah.
So the question was the size of the universe.
Oh, yeah.
It’s a plaque.
Oh, okay.
There it is.
Yeah.
Well, that’s still a debate today.
Well, no, that’s not in the way that he’s describing it.
Kind of like this.
Yeah, yeah, yeah.
So, ideal and observables, okay?
Right.
As do many practical scientists.
Right.
So, when you speak of what the universe is doing, you speak of what you see it’s doing.
Right.
And we can see galaxies whose light has been traveling for 14 billion years, 13.8.
28.
Right, right.
And so, we will loosely say, well, it’s definitely the age of the universe, but we speak of the size.
We can be a little sloppy and say, it’s 13.8 billion light years to the edge of the universe.
But that’s not strictly accurate.
What you have to do is, since then, the universe has been expanding.
Yeah.
So, where’s that galaxy now?
It’s like 45 billion light years away, but you can’t see it.
So, you have to stick it into a model of the expansion rate of the universe and come out with a number that you cannot observe.
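That 45-billion-light-year figure can be sketched numerically. Below is a rough flat Lambda-CDM calculation; the round parameter values (H0 = 70 km/s/Mpc, Omega_m = 0.3, Omega_Lambda = 0.7) are illustrative assumptions, not the exact numbers behind the plaque:

```python
import math

# Assumed round cosmological parameters, for illustration only.
OMEGA_M, OMEGA_L, H0 = 0.3, 0.7, 70.0     # H0 in km/s/Mpc
C_KM_S = 299_792.458                      # speed of light, km/s
MPC_TO_GLY = 3.2616 / 1000.0              # 1 Mpc ~ 3.2616 million light years

def comoving_horizon_gly(steps: int = 200_000) -> float:
    """Comoving distance to the particle horizon, in billions of light years.

    D = c * integral_0^1 da / (a^2 * H(a)),  H(a) = H0 * sqrt(Om/a^3 + OL).
    A midpoint rule avoids the (integrable) singularity at a = 0.
    """
    total = 0.0
    da = 1.0 / steps
    for i in range(steps):
        a = (i + 0.5) * da
        h = H0 * math.sqrt(OMEGA_M / a**3 + OMEGA_L)
        total += da / (a * a * h)
    d_mpc = C_KM_S * total                # comoving distance in Mpc
    return d_mpc * MPC_TO_GLY

print(f"{comoving_horizon_gly():.1f} billion light years")
```

Running it gives roughly 46 billion light years, consistent with the "like 45 billion light years" in the conversation.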
So, you’re being snot-nosed about that.
But that’s fine.
Tell me I was polite, because I think I’m polite.
You’re polite.
You’re polite.
We had a little back and forth.
Eventually, I think you were getting a little impatient.
Oh, really?
And you said, why don’t you make a presentation of this to a wide audience in a way that you think is suitable?
I remember that correspondence now.
Okay.
Okay.
I get it.
You said that.
Okay.
And then I went up and did a podcast about it and sent it to you.
Okay.
All right.
So in your book, AI Overlords, Space Empires, you just go all out, and you’re coming to it as a physicist with a philosophy flavor.
So you’re gonna see this in ways pure tech people wouldn’t or politicians or just regular everyday folk walking up and down the street.
So how did you prepare for this book?
I read a lot of really bad writing by tech CEOs and people defending tech CEOs online, writing long essays and books about why the future is inevitably going to be all about super intelligent AI, why the future is going to inevitably be about.
These are tech people writing all this.
Yeah, exactly.
Some of these things were ideas that I had the expertise to say, okay, no, that’s not true and here’s why.
But some of them were ideas about biology or areas of physics that I don’t have expertise in or other things and so then I went and interviewed a bunch of people who have expertise in those areas.
Now you’re playing journalist in that capacity.
Yeah, yeah, yeah, yeah, yeah.
And like read books on those subjects and pulled out what I needed and stuff like that.
And then I tried to interview the tech CEOs themselves and almost all of them said no.
No, right.
Yeah, because they’re looking at you as somebody who is intellectually honest with some integrity and they’re like, we can’t talk to you, okay?
Because they know they’re full of crap.
Yeah, well, I think that they just didn’t see any reason to, right?
You know, like I was very honest.
I said, you know, like this is a book that’s going to take a critical look at you.
That was your problem.
Well, you can’t do anything else.
If you had gone in and said, I’m enamored of the fact that AI is going to be such an integral part of the next chapter in human history and that you guys are the progenitors of this amazing tech.
They would have been like, come on in, let’s talk for a second.
No, you don’t need an appointment.
Stop by any time.
That was your first mistake.
Yeah, man.
Well, but you know, I got journalistic integrity.
Let’s explore some of the scenarios that are going to be potentially the reality of us as a human race.
Just go right on down the list.
Yeah, the laundry list.
OK, well, let’s look at Mars in 2050.
Oh, yeah.
Are we saying maybe, maybe not?
You’re kidding me?
Oh, that’s definitely going to happen.
Yeah, Elon Musk.
He has said he wants to put a million people on Mars by 2050 to have a self-sustaining civilization that will survive there even if, you know, the rockets from Earth stop coming because there’s been an asteroid strike or nuclear war or something here.
That’s definitely not happening.
There are a lot of reasons why that’s not happening.
Getting anyone to Mars by 2050 and bringing them back alive or just having them live there for a while, that would be incredibly difficult.
The challenges just to put boots on Mars, the way that we did on the moon, are enormous, right?
Just learning how to keep someone alive in deep space that far away from Earth for as long as it takes to get to Mars, stay on Mars, come back.
We do not know how to do that yet.
Chuck, that’s the problem.
They want to put boots on Mars instead of sneakers on Mars.
No, it’s a sneaker contract.
You’ll pay the whole way.
Nike would have been there by now.
They just do it.
Yeah, exactly.
Excellent.
They just do it.
Well played.
It ain’t about boots, it’s about sneakers.
What are the biggest challenges of going that far into space?
Is it radiation or?
Yeah, there’s radiation.
And that’s not just when you’re in space, it’s also when you’re on Mars, right?
The two things that primarily protect us from radiation here on Earth are the Earth’s magnetic field and the thick atmosphere that Earth has.
Mars doesn’t have either of those things.
So when you’re on the surface of Mars, you’re getting pretty much the same radiation dose that you do out in space.
And that’s not good, right?
The thing that I tell people is, the movie The Martian is science fiction.
One of the things that’s science fiction about it is, if Mark Watney really had to do all the stuff that he did in that movie, he’d come home and he’d be dead of cancer in a couple of years from all the radiation exposure he got hanging out there.
What about the ISS?
If Scott Kelly could stay up there for a year.
One of the twins.
One stayed on Earth and one went up.
Why couldn’t you just extend that for whatever time necessary to go to Mars?
Even if it’s not to live there, if it’s just to go there and dig a hole and come back.
Right, so there’s a couple of things.
First of all, on the ISS, they’re still in the Earth’s magnetic field.
They still have a bunch of the shielding.
Oh wait, and what’s that called, Neil?
Wait, the field that goes all the way out like that?
Oh yeah.
Oh, it’s called the magnetic field.
No, it’s not the magnetic, it’s the magnetosphere.
Yeah, think of X-Men.
Yes, the magnetosphere.
Yeah, exactly, it is like the X-Men.
Yeah, they’ve still got that protection.
Also, if something goes wrong on the ISS, they’ll be back on the surface of the Earth in a matter of hours.
They can just abort and come back home.
And most, I mean, you come out, you’re down within a half hour.
Exactly, yeah, yeah, yeah.
The hours is you wanna line up so you don’t land in the middle of sharks.
Right, exactly, yeah, yeah, yeah, yeah.
So you can get out easy.
And you can also have a real-time conversation with people on the ground because they’re not that high up.
And so the speed of light delay with the conversation doesn’t matter.
On Mars, it’s a minimum of something like three or four minutes each way and a maximum of something like 20 minutes or more one way.
And so if you send out a message, you could be waiting 40 minutes or more to get a message back.
Maybe more like 45.
It better not be like, so how’s it going?
Over.
Yeah.
Yeah.
Put some content in there.
Or watch out for the cliff.
Yeah, exactly.
Yeah, and the other thing is, if you have a problem on the surface of Mars and you want to come back, that’s gonna take you at least nine months, maybe more if you happen to be near a launch window where the Earth and Mars are like in the right positions.
If you’re not near a launch window, it could be well over a year before you can come home.
A full-up round-trip mission to Mars, with ideal launch and return parameters, is multiple years.
Yeah.
Right.
But you can get to the moon and back in a week.
In like a news cycle.
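The delay figures in this exchange follow directly from the Earth-Mars distance. A minimal sketch, using approximate closest and farthest separations (about 0.37 AU and 2.67 AU, assumed round values rather than figures from the episode):

```python
# Light travel time across 1 AU is about 499 seconds.
AU_LIGHT_SECONDS = 499.005

def one_way_delay_minutes(distance_au: float) -> float:
    """Light travel time over a given Earth-Mars separation, in minutes."""
    return distance_au * AU_LIGHT_SECONDS / 60.0

closest, farthest = 0.37, 2.67   # AU, approximate extremes of the orbit geometry
print(f"closest:  {one_way_delay_minutes(closest):.1f} min one way")
print(f"farthest: {one_way_delay_minutes(farthest):.1f} min one way")
print(f"worst-case round trip: {2 * one_way_delay_minutes(farthest):.0f} min")
```

That works out to about 3 minutes one way at closest approach and about 22 minutes at the farthest, so a worst-case question-and-answer loop runs around 44 or 45 minutes.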
So if we overcome the logistics of getting from Earth to Mars, if, big if, where are they gonna live?
Because they’re not gonna go out there and start building.
Yeah.
And why don’t you just build a little sort of half underground thing that shields you from radiation.
Or do you live in a tent?
Yeah.
Well, so then you have other problems, right?
You know, there’s no air.
You gotta bring in oxygen or, you know, do some sort of reaction to make oxygen on the surface, which yeah, you can do that, but it’s not the easiest thing.
You gotta bring in all your food.
You can’t grow it there.
The Martian surface, the dirt on Mars, is filled with toxic chemicals.
You’re gonna have a hard time getting it out of stuff because it’s very fine.
It’s not.
That’s gonna be here on Earth soon too.
So, chemicals be semi-absorbed.
Let’s be for real.
Yeah.
But we know you can grow poop potatoes on Mars.
Yes.
We know that.
We know that.
Yes, exactly.
Yeah, there was a proof of concept in the movie The Martian.
Duh.
No, but actually, it’s funny though.
The guy who wrote the book, what’s his name?
Andy Weir.
Andy Weir, yeah.
In fact, we had him on the show.
He’s in our archives.
He has said that the discovery of these particular poisonous compounds in the Martian surface called perchlorates, he didn’t know about that when he wrote the book because it wasn’t widely known.
And so now we know if you tried to farm poop potatoes on Mars, they’d be poisonous.
Yeah.
So that’s the unknown unknown.
Yeah.
Right, exactly.
And that’s still out there.
Okay, that’s not going to work.
We’re not thinking that for some time.
Functional immortality.
And there’s a lot of ways we can get there.
I mean, right.
The biological immortality of growing organs in pigs and things and then transplanting is one thing, but are we getting towards singularity?
Yeah.
So, I mean, the biological replacing organs thing, you know, you can’t replace the brain.
Well, not yet.
Yeah.
I mean, not yet.
We’ll never do it with that attitude.
No, no, no!
Sonny!
That was more a British approach to things rather than American.
Someone needs a better attitude about things.
But yeah, this idea of the singularity, that we’re going to get to this point where technology in general and AI in particular gets faster and faster and smarter and smarter until, you know, it like gains godlike powers.
It’s a science fiction story.
What does that have to do with living forever?
Well, so the idea is that then you get this godlike AI that like grants us immortality.
It has like essentially magic powers.
Or, you know, it finds a way to take human minds.
It’s just smart enough to figure out how to make us live forever.
Right, it can solve the problem.
Solve the problem.
Of immortality, yes.
I gotcha.
But whose mortality problem is it going to solve?
Yeah.
Well, it doesn’t have one.
There’s no problems at all.
Are there a select few or is this open for everybody?
Well, we know for a fact that it’s going to be for a select few and it’s going to be for the people who are the gatekeepers to AI.
We’re already seeing that now, but go ahead.
Well, but the other thing is that the whole idea is kind of, you know, nonsense to begin with.
Like this idea of singularity is, well, it’s based on a few really serious, like, flawed ideas.
First, this idea that there is this, like, single thing called intelligence.
You can just ramp it up or down in a computer and it can just make itself more and more intelligent.
That’s not really how intelligence works.
Intelligence is, like, a really complicated thing.
It’s not one number.
And also, the usual way, talking about the singularity that, like, the main popularizer of the singularity, Ray Kurzweil, has.
Who’s been on the show?
Who’s been on the show.
Well, in our archives.
I do these little commercials.
I love that.
Okay, you’re going.
But yeah, Kurzweil, you know, he says it’s.
He came out with a second book.
Yes.
The first one was The Singularity is Near.
Yes.
You know the title of his second book?
Yeah, The Singularity is Nearer.
Nearer.
Yeah.
That’s so funny.
No, I tell people that they don’t believe me.
I’m like, go look it up.
Yeah, there it is.
His next book coming out is gonna be called Almost There.
Yeah.
No, he thinks that, you know, Moore’s Law, this idea that computer chips are just gonna get faster and more powerful and like double in speed every 18 months.
He thinks that this is this, you know, specific instance of a more general, like law of accelerating returns in technology and in nature.
And he says he’s traced it all the way back to the beginning of the universe and that it shows that a singularity is coming in like 2045 or something like that.
Precisely.
Yeah, precisely.
On October 12th.
Oh, man, that sounds to me like the end is near, people.
Yeah, I know.
That sounds to me like the people are like, I don’t need a bank account, you know, Jesus is coming back next week.
It’s funny: the singularity is near, the end is near. That’s what he should have made the title.
So it’s funny that you say that, right?
Because he’s got a picture of himself, in The Singularity Is Near, with one of those poster boards on him that says the singularity is near.
And there’s an AI research group that’s inspired by these ideas of singularity called the Machine Intelligence Research Institute, MIRI.
They don’t give their employees 401(k)s because they think the end is near.
That’s some cheap ass.
Yeah, I know, right?
Wow.
Yeah.
Okay, I still want to get to the immortality thing.
Because you haven’t addressed the fact that right now, with or without AI, there’s a lot of research on not just replacing organs, though that might be an odd thing, but delaying the aging functions of yourself.
Totally.
That could work out to extend human lifespan or healthspan a certain amount.
For a long period.
I like the healthspan.
I like that.
Yeah, yeah.
And it really is about the healthspan.
That’s a good word.
Yeah, yeah, yeah.
What do the Galapagos, what do they live to be?
Like 100 and?
The tortoises.
The tortoises.
The tortoises, yeah.
Like 200 or something.
200 and something.
Buck 80 is there.
They have AI, so that’s what they live to be.
Well, now, remember we spoke with Venki Ramakrishnan?
Yes.
He was telling us about the Greenland shark.
Yes.
Oh, yeah.
Being about 800 years of age.
Yeah, that’s crazy.
Yeah, it is possible that some biotechnology will be developed that will radically extend human lifespan or health span.
Maybe, yeah.
But what these guys are talking about with Singularity, they’re generally not talking about that as the end game.
The end game they have in mind is not just an extended lifespan, but real immortality by uploading their consciousness into the future.
I was gonna say, that’s really where this is going.
Before we get to the immortality of your mind, before we get to your body.
Before we get to that, do we visit transhumanism?
Do we get, and how are we defining it?
What is transhumanism?
That’s what I’m just saying, define that.
It’s this idea that you can use technology to transcend like the limits of human biology and physics.
Are we kind of already doing that, which is why we live twice as long as people 150 years ago?
Yeah, no, there’s definitely a sense of which.
If we told them what we’re doing, we know about nutrition, vitamins, they’ll say, what’s a vitamin?
Right, we got vaccines.
We got this, we got, what’s a vaccine?
We got, right?
Aren’t we already transhuman compared to the age at which nature would otherwise have us dead?
Totally, yeah, like look, I think that we have used technology to make many things much better about being alive.
Like that’s just true.
The question is, does that trend continue indefinitely, right?
No, because RFK is going to make sure we’re back to when we lived half as long as we do now.
That’s what’s happening.
Let’s be clear, that’s RFK Junior.
Yes, RFK Junior.
So if we go to this uploaded consciousness and that becomes reality, that just doesn’t exist without a power source.
Right.
The thing about the singularity and like Kurzweil’s idea about this accelerating returns and Moore’s law just going on forever and this power source thing, right?
The idea that it would need increasing levels of power as well.
And so this leads to this sort of exponential drive for materials and power.
And the thing that Kurzweil forgets is exponential trends are not like laws of nature.
The law of nature about exponential trends is they end.
Right.
They have to end.
Right.
And so, and it’s because ultimately limited resources like energy.
Right.
Right?
Although, when we talk about how much longer a charge in our computer lasts today compared with the early days of laptops, part of that is better batteries, but also part of that is more efficient chips.
Yeah.
And when we get to quantum computing, where much more computing happens in much less, with much less of an energy draw, it could be that we’re coming at it from the other side, where the energy needs are dropping, thereby not requiring the power supplies necessary.
How long, in my memory, not your memory, I got a few years on you, a room this size was necessary to cool a computer, otherwise a computer would overheat, and the computer’s doing like four function mathematics.
So the efficiencies matter, all these tubes that had to be kept cool.
So it’s not obvious that it’s a linear exponential.
Can I say that?
Where the exponential’s just gonna hit a limit, because you can come at it from other directions.
However, the other thing is, in nature, the law of diminishing returns is more likely than a law of exponential acceleration.
Well, no, that’s actually exactly right, because there’s, if you look at the history of Moore’s law, like how it is that the semiconductor industry.
Oh yeah, named for Gordon Moore.
Gordon Moore.
Co-founder of Intel.
Yes.
And if you look at how Intel and other semiconductor companies actually made the chips smaller and faster over that time, it’s not a law of nature, it’s a decision, a business decision that these companies made.
And in order to keep that trend going, they had to invest more and more and more money just to keep the same sort of level of doubling, to keep that exponential trend going.
And eventually, it did stop, right?
Moore’s Law is done, it’s over.
Because you can’t make silicon transistors smaller than an atom of silicon.
Right.
Yeah, and what they’re doing now is just adding more chips.
Right.
So the more powerful computers are not smaller and denser, they’re just bigger now.
Yep.
Right.
Yeah, and they’re putting them on top.
You’re stacking them, that’s right.
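The "smaller than an atom of silicon" limit can be made concrete with a toy calculation. The starting feature size and the atomic diameter here are illustrative assumptions, not industry figures:

```python
# Toy sketch: an exponential shrink in transistor feature size cannot
# continue past roughly the size of a silicon atom (~0.2 nm, assumed).
SILICON_ATOM_NM = 0.2

def doublings_until_floor(start_nm: float, floor_nm: float = SILICON_ATOM_NM) -> int:
    """Count how many halvings of feature size fit before hitting the atomic floor."""
    n, size = 0, start_nm
    while size / 2 > floor_nm:
        size /= 2
        n += 1
    return n

# Starting from a 1971-era 10-micron (10,000 nm) feature, the exponential
# runs out after a finite, fairly small number of halvings.
print(doublings_until_floor(10_000))
```

However fast the doubling cadence, the count of remaining halvings is finite, which is the sense in which physics, not hype, ends the trend.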
So the solution in the minds of these tech billionaires is to arrive at a super intelligence, to get an AGI.
Yeah, yeah, an artificial general intelligence.
I mean, Sam Altman is saying within the next two years, that will be achievable.
Yeah, 24, I think he said, yeah.
Right, so they’re looking at that as being the solution to this problem where we’re saying we’re not sure if it will be exponential, we’re not sure where the end point is.
They’re looking at it as the solution to every problem.
None of the tech bros have a degree in physics the way you do.
So what do you bring to the table that they don’t see?
I mean, they believe that AGI, I mean, Altman has said that AGI is going to solve every problem, including like global warming, which is crazy.
Why is it crazy?
Well, because…
If it’s smarter than you and you can’t solve it, why is it crazy to think it could solve it?
Well, first of all, the artificial intelligence systems that they’re building now are just drawing more and more and more energy.
If you did build one that could solve global warming and you turned it on and said, how do you solve global warming?
I’m pretty sure the first thing it would do is say, well, you shouldn’t have built me.
Yeah, it turned me off.
Yeah, it turned me off, it turned me off.
That’ll help.
Yeah.
That would be a good test of its own self-preservation.
If you are causing most of our global warming, what’s the best solution?
Is it turn yourself off?
But I mean, the other thing is that we don’t need AGI to tell us how to solve it.
We already know what the solution is.
Good point.
The issue is not like that insufficient intelligence has been thrown at the problem.
The issue is primarily not even a technological problem at all at this point, aside from carbon capture.
It’s human behavior.
Yeah, it’s human behavior.
It’s greed.
Exactly.
It’s getting…
Yeah, it’s greed.
It’s greed.
Chuck, greed is good.
Oh, God.
Yeah.
But the…
Yeah, the other thing is just that when Altman talks about…
You know, he’s talked about things like, oh, AI means that in 10 years college graduates are going to be getting, you know, cool jobs exploring the solar system, right?
And I can just look at that and say, well, that’s bullshit.
Or, you know, he says, AI is going to discover new laws of physics, and that’s going to remove limitations that we have in the world today.
And I’m like, well, discovering new laws of physics, I mean, putting aside whether or not the AI can do that, that does not always remove limitations.
Sometimes new laws of physics, in fact, a lot of times create a limitation.
Exactly.
Einstein, with Relativity, discovered a limitation in the speed of light, right?
Newton didn’t know that there was any such limitation.
Right?
Yeah.
So is it possible that all of these, I’ll call them postulates, because they’re not, that they’re making, right?
Are just a means of hyping up what they’re doing to keep the revenue stream coming to them.
Like, let’s be honest.
If I tell you this thing is going to solve everything, right?
I’ll give you my money.
You give me, right?
You give me some money.
Yeah.
I mean, it’s kind of like-
Not just me, the government will give you money.
No, that’s what I’m saying.
Everybody’s going to give you money.
It’s kind of like-
It’s 21st century snake oil.
Is that what you’re telling me?
I was about to say something different.
I was about to say-
Okay, go for it.
It’s kind of the evangelical business model of a television evangelist.
Like the whole idea is, hey, you got problems and these problems can be addressed.
They can be solved.
All you got to do is send me some money.
That’s all you got to do.
And I’m gonna send you this little blessing cloth and, you know.
Chuck was a preacher in his earlier career.
And if he isn’t, he will be in the future.
I made the wrong decision.
Preaching is good money.
Everything we’ve discussed has been about being somewhere else, about not solving problems here.
So are these people looking at us and going, you are completely screwed.
We’re out of here and we’re the ones that can afford it and we’re the ones with the tech to be able to achieve it.
Why are they walking away?
What’s in their mind?
What’s their thinking about turning their back and moving?
Well, they think that…
But they didn’t grant you an interview.
So you don’t really know what’s on their mind.
Well, you can read enough of their stuff to have a chance to infer what’s on their mind.
Yeah, they’ve given other people interviews who are nicer to them.
You are charming enough.
Yeah, apparently not.
I mean, I just sent an email.
A couple of them were almost willing to do it and then they changed their minds probably because they read the email again.
They’re like, oh, he’s gonna disagree with us.
Why should we talk to him?
But whatever.
Some of them are being very cynical like the way that you were talking about, right?
And saying, oh, I can just do this and I can claim that all of these things are coming in the future and this is a way of generating more profit and getting people to give me more money.
Some of them, I think, genuinely believe it.
The idea that the future has to be elsewhere, I think some of it is just from this sense that they have that things are bad here on Earth and that trying to solve problems here on Earth would be complicated and messy and difficult and that somehow going to space would give them a fresh start, which is not true.
You can’t escape politics.
You can’t escape.
We’re still human.
Exactly, you can’t escape human nature, exactly.
How does an AI overlord plug into these scenarios?
Well, the idea is that they build a sort of AI god and it just does whatever they want.
This would be an AGI, artificial general intelligence.
So when we normally think of AI, we think of a task driven AI.
It can drive a car, it can make a perfect cup of coffee, it can fly an airplane, but AGI transcends all of that.
It can just learn about anything and might even achieve consciousness.
Like Skynet.
Yeah, absolutely, because we are AGI.
That’s what we are.
We are just the equivalent of what they want as AGI.
Are we AGI 0.10?
Yeah, we’re 0.10.
No, 0.1.
0.1, yeah, okay, yeah.
So, yeah, no, that’s right.
No, but the point is, whatever we are, AGI…
So how long will it take you to go to school, to open up all your textbooks and learn from them and get an exam?
AGI will do what?
Yeah, it’s supposed to be able to do all of that much faster.
20 minutes.
Yeah, 20 minutes.
Yeah, exactly.
We’ll get your college degree in 20 minutes.
Well, and the other thing is, those AI systems, all the things that you mentioned, make a perfect cup of coffee, fly an airplane, drive a car, the AI systems we have right now can’t do any of those things without human supervision.
Even those self-driving cars that are all over the streets of San Francisco, there’s actually a human remotely supervising and intervening pretty frequently.
So they tell you.
Yeah.
That’s funny.
You’re talking about the Waymos?
Yeah.
Yeah, yeah, yeah, exactly.
And Waymo is Google, if I remember correctly.
Yes.
Yeah, yeah, yeah, that’s right.
And so AGI is supposed to be able to do all of these things independently, right, and then get smarter and smarter and achieve…
And the human is not even in the equation.
Human is not in the equation, and you can just make it go, you can have it do all the things that a human does, but you can overclock it, make it go faster, think faster than a human, and then the idea is it gets smarter and smarter and achieves these superhuman, superintelligent powers.
The idea then is that for the billionaires controlling it, it’s like a genie, and for the rest of us it’s an overlord.
It’s an overlord.
Yeah.
But this is where they’re so stupid, and this is where all really rich people.
They’re so stupid they have hundreds of billions of dollars and you don’t.
No, but that’s what makes them so stupid.
I agree.
I’m serious.
Yeah.
It’s the fact that they have all this money, and they’ve convinced themselves that they can transcend anything.
Yep.
It’s evidence of their own genius.
Right.
So their hubris is their downfall.
It’s like the first dictator, the very first dictator, was a guy, a little guy by the name of Julius Caesar.
But Julius Caesar was the very first dictator.
You know how he became dictator?
They said, all right, how about you be dictator, but you do it for a year.
If you create a godlike being, whether it’s artificial general intelligence or whatever, and you think that you’re gonna control it, it is not a god at that point.
You are the god, and that’s really what they’re saying.
They’re saying, we’re gods.
Yeah, and the thing is, that’s true.
If they somehow did achieve it, who would be controlling it?
But also, it’s an incoherent idea.
The good news for the rest of us is that it’s not like something that’s actually coming, because it doesn’t make any sense.
That’s funny.
It’s like, these guys are hanging their hat, and you’re just like, yeah, man, it’s just a dumb idea.
Yeah, it is.
No, no, incoherent sounds way more of a beat down.
Your idea is incoherent.
It’s dunking on somebody right there.
Is this a misconception of the science, the misconception of science fiction?
Yeah, I mean, I think a lot of the ideas, right?
And this goes back to like, why are they trying to go somewhere else?
I think they just get these ideas from science fiction, and they just take it way too literally.
They don’t read it well, right?
Like, my favorite science fiction, the science fiction I grew up with was Star Trek, right?
The thing about Star Trek is, yeah, okay, they’re on the Starship Enterprise, they’re out there, you know, exploring strange new worlds, new life, new civilizations, all that jazz, right?
Finish it.
To boldly go where no one has gone before.
Thank you.
Not to go boldly.
Yeah, not to go boldly.
No, we can split the infinitive.
Split that infinitive.
Yeah, hell yeah.
But the thing is, though, Star Trek was never really about space.
It’s about us, here and now, right?
And it was always an allegory and not even a particularly veiled one, right?
I seem to recall an episode where Kirk and Spock were literally punching Nazis with swastikas, right?
I love that episode.
And there was also the episode with the two dudes and one of them, the left half of his face was white and the right half was black, and the other one, it was switched.
That was Frank Gorshin.
Frank Gorshin, yes.
He played the Riddler in Batman.
Yeah, yeah, yeah, yeah, yeah.
Exactly, so.
It’s obvious why we persecute them.
They’re black on their right side, we’re black on our left side.
That was kind of blunt.
Yeah, exactly, but Star Trek is always blunt, right?
And that’s kind of part of the fun, right?
It is.
But these guys watch Star Trek and they’re like, oh yeah, Warp Drive’s cool, let’s do that.
And miss the whole point of Star Trek.
In the process.
Yeah.
Right.
Because Star Trek is utopian ideals in a galaxy that’s descending towards dystopia.
And they’re fighting it every step of the way.
Could it be that to become a tech bro in the first place, you had to be really focused on a level to the exclusion of your social life and possibly even your personal hygiene.
As a result, you achieved these places and part of your life’s training did not include the emotions and feelings of others or how people think about the world or what their desires are.
And you think that what you accomplish is for their best interest even though you have no idea who they are.
Is that a fair, is that a?
I think that’s right, right?
Like the way I like to talk about it is, you know, for someone who claims to care about humanity so much, Elon Musk doesn’t really seem to care very much about humans.
Right.
Yeah.
And isn’t he the guy who said empathy is a bad thing?
Right, yeah, and but he’s also said, I’m gonna save humanity.
Yeah, he did.
But he’s also said, I’m gonna save humanity by taking us to Mars and like, buddy, first of all, no.
And second, like, I don’t think that you actually care that much about other humans.
And I think that what you said is exactly right, except you also have to add in, they think that the fact that they succeeded in business, which a lot of that’s just luck, is proof of their-
And government contracts.
Right, and government contracts, exactly.
And got through such subsidies, yeah.
For the car business as well as the rocket business.
Absolutely, but like Musk and others, right, like Altman and Andreessen and these other people, they all think that this is proof that they are like the smartest people who have ever lived, because they’re the richest people who have ever lived.
And that’s just not how anything works.
Just remind me, Altman is OpenAI?
Yeah, Sam Altman is CEO of OpenAI.
Let’s go down that list.
Yeah, absolutely, and Marc Andreessen is the head of Andreessen Horowitz, the biggest tech venture capital firm.
Oh, so you need that to get the infusions.
So OpenAI is what gives us ChatGPT.
Got it, got it.
And of course we all know Elon.
Is Branson a player?
Branson is less of a player in Silicon Valley.
What role in the tech sector does Bezos play?
I mean, you know, he’s…
I mean, he’s got his own rockets.
Yeah, he’s got his own rockets.
He also, like, owns most of the infrastructure of the World Wide Web.
This I think is something…
AWS.
Which stands for what?
Amazon Web Services.
Basically, most of the cloud, most of the cloud, most of the actual computers that compose the cloud belong to Jeff Bezos.
So amazon.com is like window dressing on a whole other operation that matters to him.
The real operation.
That’s why he doesn’t have to make a buck selling you a…
He’s gonna sell you a book for 80% off or something.
Right.
Well, after this, I’ll be lucky if he sells my book at all.
No, no, no.
We’re going good here.
Yeah.
Let me remember my…
But I’ll say these guys…
Let’s get the title of the book back in here.
Just give us the title again.
Yeah, it’s More Everything Forever.
AI.
Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity.
That’s what we’re talking about.
Yeah, man, okay.
So, when we think about the science fiction, and I think Neil’s point about the isolation of these people growing up, if we think about the science fiction, and you think about certain parts with Star Trek, and then maybe Matrix or Blade Runner, and you go through the laundry list, how have they co-opted these and kind of bolted this to that and that to this?
You think they were influenced by sci-fi?
Oh, yeah, totally.
I mean, Elon Musk tweeted that science fiction shouldn’t remain fiction forever.
Okay, that’s fair.
I don’t mind that.
Yeah, I sort of understand what he means, but which science fiction, right?
Blade Runner’s a dystopia, right?
And then he comes out and says that the Cybertruck, that ugly piece of crap, looks like something that Blade Runner would drive. Which, Blade Runner is not the name of any character in Blade Runner, so we can put that aside.
It’s a profession in Blade Runner.
Yes, exactly.
That’s great.
But there’s a number of dystopians.
But they’re all dystopic.
Ready Player One is another one.
They’re all dystopic.
Some of them are.
There’s aspirational science fiction like Star Trek, right?
But even that is, like I said, they’re the bright spot.
Yeah.
The Federation is the bright spot.
The Federation is definitely the bright spot.
That’s true, but there’s a tweet that lives rent free in my head about a thing called the Torment Nexus.
This is actually in my book at the very beginning.
The Torment Nexus.
The Torment Nexus.
I’m afraid to ask what this is.
We’re going there.
We’re going there.
We’re going there.
So the tweet goes like this, science fiction author, in my book, I created the Torment Nexus as a cautionary tale.
Tech billionaire, at long last, we’ve created the Torment Nexus from classic sci-fi novel, Don’t Create the Torment Nexus.
This is what these guys are doing, right?
They’re Skynet.
Yeah, right, they’re Skynet.
But also, if you go back and look at the classic cyberpunk novels by somebody like, say, William Gibson, who I think is a great novelist, a lot of those novels, like Neuromancer, are about the concentration of wealth and power and the way that the wealthy can and will use technology to remove themselves from the rest of us and accumulate wealth and power while insulating themselves from the consequences.
And that’s exactly what we see happening.
And so when they say that they want to make science fiction into reality, we need to ask, okay, which ones?
Because if you want to make Neuromancer reality, man, that’s bad news for everyone who’s not you.
So how much of science fiction has always been a silent alarm call, a silent warning?
Yeah, I mean, science.
We go back to Fritz Lang and Metropolis back in 1927.
Yeah, absolutely, yeah.
Metropolis is very much a movie about the need to keep emotional intelligence apace with technology, right?
I didn’t get that out of it, but I believe you, I’m just saying, that’s a deep read.
To me, it was just a weird alien box.
I mean, it’s a weird movie, for sure.
But like at the end, they say the heart and the hand must work together or something like that, right?
Yeah, yeah, yeah, yeah, yeah.
That’s how I read that, at least.
I think that science fiction, a lot of it, has always been about looking at the world as it is now and saying, okay, if we push that a little bit, if we wanna take a look at this situation in a different context and understand it in a different way by removing it from all of the social and cultural connotations that a particular thing has here and now, we put it somewhere else and maybe we can see it more clearly, right?
It’s what Star Trek does.
It’s what like, oh, my favorite science fiction author, Ursula Le Guin, right?
She did this over and over again, looking at poverty, inequality, capitalism, gender, you name it, right?
So, Rod Serling, back in 1959, he’s interviewed about this new show called Twilight Zone.
And he says it, he said, look, there’s stories I’m telling that you could not tell in just a dramatic way.
It has to be set at a time and a place that is not you and now.
Otherwise, I couldn’t get away with these stories.
And only then do people say, wait, might that be me?
But if it’s blatant and in your face, you reject it.
And he said, in the end, we’re just trying to sell soap.
He understood the situation.
But tell an entertaining story, but set it in another place.
I just looked up when Skynet achieved consciousness, okay?
It was 2:14 a.m.
Eastern Time, August 29th, 1997.
Because the movie was 1984 or 85?
Yeah, the first movie I think was 84.
Yeah, so that was only 13 years in the future.
I hate when they do that.
Go far enough where…
Oh, I have a whole list!
Well, that was the whole thing with Star Trek.
It was set something like 200 years in the future.
Yeah, that was well safely in the future.
I got a whole list!
I’m saying, go safely into the future.
Do you know Soylent Green was set in 2022?
Oh, God.
Well, that’s why I’m eating people now.
It’s people!
Everybody doesn’t realize, that’s what the pandemic was about, guys.
Because it happened, it’s like, enjoy that burger!
So what else is in your laundry list here?
Basically, I think that what they want, this vision that they have, is this idea of going to space and living forever, right?
And so a lot of it is really about space colonization, going out and expanding to take over the universe.
That’s like, because they don’t want to just stop with Mars.
They want to put Dyson spheres around every single star in the observable universe, and like collect all of that energy.
And that’s, this is not gonna happen, man.
That would be a Kardashev scale five, I think.
Where you control all the energy output of all stars in the known universe.
But doesn’t the Borg have some similar energy?
Yeah, the Borg want to assimilate everything, and what was it, the phrase?
They want to take your cultural and political distinctiveness and make it part of our own.
Speaking as a scientist, I kind of like what science brings society.
And shouldn’t that be enough?
Why does everyone go to science fiction?
Is there some morbid fascination with science gone bad?
And isn’t that a problem with us, not with the storytellers themselves?
I mean, I don’t think that science fiction is in and of itself the problem, right?
That’s what I’m getting at.
Yeah, exactly, yeah, no, I’m a huge sci-fi fan.
I’m also a scientist by training, at least.
The reason people find science fiction more compelling than science has a lot to do with the fact that it’s not really about the future, that it’s sort of these interesting what-if scenarios that reflect on where we are right now, right?
If you tried to make a very realistic TV show about what life could actually be like 100 years from now and made it as realistic as possible, people probably wouldn’t watch it because it would involve so much slang that doesn’t make any sense to us right now and like shifts in language.
Right, yeah.
Right?
WTF!
Yeah, but this is like, and so many little things like that, right?
It’s not meant to be a realistic depiction of the future.
So, I mean, part of me wants to say, the problem isn’t science fiction, the problem isn’t science, the problem is like critical reading comprehension skills.
Yeah, and money.
Yeah, exactly.
The accumulation of wealth to a very few is always going to be a very bad thing for any society.
But right now, unfortunately, there’s a global society of billionaires that has popped up.
And they’re co-
When did you become Marxist?
What’s that?
When did you become Marxist?
I’m not a Marxist, believe me.
I’m pretty cool with capitalism.
I’m just all about guardrails.
And I also believe that $2 billion is all you get to have, okay?
Now, that was pretty Marxist.
Yeah, I think that was in Das Kapital.
You can have $2 billion. Direct quote.
You said Karl Marx said, $2 billion, no more.
No more, yeah.
So, no.
So, you’re basically just saying, not enough pure science education, but a hell of a lot of money is a bad part of the equation.
That’s a bad combination.
Yeah, and I agree completely.
So, give us the takeaway thesis of your book.
Oh, I mean, I actually do end the book saying that we should limit the amount of money that people should be able to have.
Did Karl Marx write the foreword?
I don’t think you meant that.
I don’t think you meant that.
What you mean is, we should limit how much power people who have money have.
Yeah, absolutely.
The problem with that is, money is what they call soft power, but it's not.
It is straight hard power, because you are able to influence every corridor of power there is when you have enough money.
No, this is good, man.
You should follow me around and just say the stuff that I’m gonna say, but better.
I'm just like, there are plenty of rich people.
But what you need to do is, this is where progressive tax is a good thing.
And we found that out under FDR, back in the day, where basically you got to a certain amount of money and they were like, yeah, we’re gonna take 90% of that.
Okay?
And we’re gonna take it, and we’re gonna do stuff because you wouldn’t have been able to get that much money without all the things that we want to now support with the money that we helped you make.
But you’ll get to keep up until that point pretty much all your money, but when you get to this level, you gonna give us some of that money.
Give me that money.
But yeah, no, I think that we as a society, because it’s not just the billionaires, it’s also that we as a society buy into this idea that the ultra wealthy know what they’re talking about when it comes to something other than the ins and outs of having more money.
And they could be complete dumbasses.
Yeah, exactly.
No, you know what they’re good at?
They’re good at rigging the game for them to make more money.
That’s what they’re good at.
And everybody thinks they’re gonna be rich one day.
And so I did this, the calculation for a billionaire, what it takes just to make a billion dollars.
And I think I used $500 an hour, which is a very good amount of money.
$500 an hour, 24 hours a day, seven days a week.
And I think it came out to something like 2,300 years.
It’s ridiculous.
It’s a ridiculous amount of money.
That’s what I’m saying.
Well, you wanna know what makes it even more ridiculous?
Turn it around.
You have a billion dollars.
You wanna get rid of it.
You spend $500 an hour, 24 hours a day, and it takes you many times longer than a human life.
What do you need more than a billion dollars for?
That’s my point.
So you get $2 billion and that’s it.
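The tallies in this exchange come through a bit garbled, so for what it's worth, here is a sketch of the back-of-envelope arithmetic. The hourly rates are just illustrative: at the stated $500 an hour, round-the-clock, a billion dollars takes roughly 228 years; the roughly 2,300-year figure quoted would correspond to $50 an hour.

```python
# Back-of-envelope: years of round-the-clock work needed to earn a target sum
# at a flat hourly wage. Rates below are illustrative assumptions.
def years_to_earn(target_dollars, dollars_per_hour, hours_per_year=24 * 365):
    """Years of 24/7 work to earn `target_dollars` at `dollars_per_hour`."""
    return target_dollars / dollars_per_hour / hours_per_year

print(round(years_to_earn(1e9, 500)))  # → 228
print(round(years_to_earn(1e9, 50)))   # → 2283
```

The same function answers the spend-down version of the question: burning through a billion dollars at $500 an hour also takes about 228 years, many human lifetimes either way.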
I did that calculation with Elon Musk’s wealth.
Oh, did you?
Yeah, you turn all of his money into $100 bills and lay them end to end.
You ask how far does it go around the earth?
We can go several times around the earth with $100 bills, okay?
And then there's some leftover money; tape them together into a ribbon, and you'll have enough left over to go to the moon and back.
Yeah, see, that’s ridiculous.
Exactly, that’s fricking ridiculous.
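The ribbon image above is easy to sanity-check. The numbers here are assumptions for illustration: a US banknote is about 156 mm long, and Musk's reported net worth fluctuates, so the $500 billion figure is just a plug-in value, not the one Neil used.

```python
# Rough check of the "$100 bills laid end to end" image.
# net_worth_dollars is an assumption; reported figures fluctuate widely.
BILL_LENGTH_M = 0.156            # a US banknote is about 156 mm long
EARTH_CIRCUMFERENCE_KM = 40_075
MOON_DISTANCE_KM = 384_400

net_worth_dollars = 500e9        # assumed, for illustration
num_bills = net_worth_dollars / 100
ribbon_km = num_bills * BILL_LENGTH_M / 1000

print(f"ribbon length: {ribbon_km:,.0f} km")                          # ≈ 780,000 km
print(f"trips around the Earth: {ribbon_km / EARTH_CIRCUMFERENCE_KM:.1f}")  # ≈ 19.5
print(f"moon round trips: {ribbon_km / (2 * MOON_DISTANCE_KM):.2f}")        # ≈ 1.01
```

Whatever net worth you plug in, the ribbon comes out at hundreds of thousands of kilometers, many laps of the Earth or a trip to the moon and back.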
So now we feel like we have to protect these people.
This is what I don’t understand.
The issue is the outsized power they have over laws, legislation, politicians, and the rest of us.
I don’t mind rich people, provided they’re not trying to control my life.
Exactly.
Okay.
Yeah, I agree.
I’m with you on that.
We gotta land this plane.
I think I figured out what’s going on here.
Lot of smart people, lot of wealthy people, lot of people with influence, trying to figure out what kind of future we will have, what kind of future we should have.
And we all know that future will pivot on advances in science and technology, as civilization has always pivoted on science and technology.
And so, but we’re at a point now, and maybe we’ve been at this point before.
So is this really any different?
I don’t know.
But it seems like we have the future in the palm of our hands.
And in the end, it comes down to not how advanced the science is, not how clever anybody is, not how…
It’s not related to any of that.
It has to do with how wise we are in the face of our own creations.
And wisdom, I think, is an undervalued factor in all the brilliance people are exhibiting in their creations, in their discoveries, in their forces operating, what the future of civilization will be.
So, if I may appeal to what it is to not only think about how great your inventions and discoveries are, but think about how you might harness it as you harness a horse.
An unharnessed horse runs wild.
You don’t know what it’s going to do next.
A harnessed horse is still a horse, but it gets to do exactly what you need it to do and what you want it to do.
And that is a dose of wisdom coupled with our ingenuity.
I’d like to think there’s more of that in our future.
Maybe we’ll avoid the disasters that the science fiction writers always portray.
And that is a cosmic perspective.
Dude, thank you for being on StarTalk.
Thanks for having me.
This has been a lot of fun.
Good luck with the book.
Give me the title of the book again.
More Everything Forever, AI Overlords, Space Empires and Silicon Valley’s Crusade to Control the Fate of Humanity.
You got it right.
He did it right.
Well, he did it right.
All right.
Back to Berkeley, you go.
Yes.
And keep us thinking about the future.
I will.
There’s not enough of that going on.
Thank you.
Yeah, I’d be happy to come back anytime.
All right.
This has been StarTalk Special Edition.
You put together another one with your peeps.
Lain Unsworth as well.
Take a large slice of credit.
There you go.
All right, Chuck.
Always a pleasure.
We’re all good here.
Neil deGrasse Tyson for StarTalk Special Edition, bidding you to keep looking up.




