mikemacmarketing’s image of a robot solving math problems.

Will AI Replace Us? with Matt Ginsberg

mikemacmarketing, CC BY 2.0, via Wikimedia Commons

About This Episode

Is artificial intelligence taking over? Neil deGrasse Tyson and co-hosts Chuck Nice and Gary O’Reilly discuss deepfakes, AI hallucinations, and whether AI really is intelligent with Matt Ginsberg, a software engineer at X, the moonshot factory.

Have we been using AI in the right way? We explore deepfakes and the technology to verify an image’s legitimacy. Is artificial intelligence really intelligent? We break down what AI can and cannot understand. Does AI understand causality? Or even reality?

What is AI bad at? Could it help us discover objects and phenomena we never knew to look for? We discuss how it could help us explore space or call plays in a football game. Could we one day see AI calling plays instead of coaches? Do two machine coaches cancel each other out?

Can AI predict the weather? We discuss the concept of chaos and how close we are to artificial general intelligence. Find out about AI’s relationship to the truth and how we can combat people using it to spread misinformation. Finally, could deepfakes become so good that it’s the end of the internet?

Thanks to our Patrons Kathleen Kussman, Craig Hamilton, Denis de Oliveira, Jim, Ryan, and Krishna for supporting us this week.

NOTE: StarTalk+ Patrons can watch or listen to this entire episode commercial-free.

Transcript


It’s time to unmosquito your life with Thermacell Zone Mosquito Repellent.

By the way, I use this, and it works.

Thermacell uses heat to diffuse repellent into the air, creating an invisible barrier around you that keeps mosquitoes away with a 20-foot zone of protection per device.

It’s people and pet friendly, no more having to use those itchy sprays on your skin, plus it starts working quickly and is 100% satisfaction guaranteed.

You will be satisfied.

I’ve already done the testing for you.

Find Thermacell’s own mosquito repellent at retailers nationwide, including Amazon, Walmart, Home Depot and Target, and my house too, but you know, you can’t shop there.

How great is it to be able to communicate with somebody in their language?

Makes you both feel good.

How about job prospects and how much more plentiful they are when you can speak another language?

Well, how about you learn a new language with Rosetta Stone, the most trusted language learning program to make you a more competitive job applicant.

It’s available on desktop and as an app and teaches through immersion.

Find convenient 10-minute lessons.

It’s used by millions because it works.

Rosetta Stone’s true accent feature even provides feedback on your pronunciation.

For a very limited time, StarTalk Radio listeners can get Rosetta Stone’s lifetime membership for 40% off.

That’s $179 for unlimited access to 25 language courses for the rest of your life.

Redeem your 40% off at rosettastone.com/startalk today.

Muchas gracias, you say.

Siempre de nada, I say.

Coming up on StarTalk Special Edition, will AI be the end of civilization?

But before that happens, will it help us coach sports better?

Will it make deepfakes that will destabilize all that we know and trust?

Also, will it help us explore space?

Stay tuned.

Welcome to StarTalk, your place in the universe where science and pop culture collide.

StarTalk begins right now.

This is StarTalk’s special edition.

An entire show right now devoted to AI.

We don’t know, is AI good or bad?

You heard all about it.

Everybody’s opining on it.

Is it a problem solver or does it create problems?

How do we use it in space, in media, in all kinds of places?

Here we are back with Chuck Nice.

Chuck, how you doing, man?

My co-host.

Sorry to tell you, but this is not me.

I’m not here.

This is my AI doppelganger.

That’s what this is.

That’s your doppelganger.

Right now.

I have ways to test for that.

We’ll find out in a minute.

The AI has me tied up in the closet.

Help me, please.

Someone help me.

Gary, how you doing, man?

I’m good.

I’m interested in this because it’s not a subject I have any real depth of knowledge about, which is most subjects.

I agree.

But this might be less so.

He’s lying.

That’s not Gary.

That is not Gary.

He’s lying.

Gary’s also tied up in the closet.

Gary is right next to me in the closet, man.

All right.

So, I’ll do my best with your two doppelgangers.

So, Gary, set the scene.

What do we have here?

All right.

Today’s guest has a doctorate in mathematics from Oxford.

That’s the English Oxford.

Was researching astrophysics and then decided to switch to AI.

Author, yes, published author of a book in 2018 called Factor Man.

Developed, and I love this thing, developed a computer program called Dr. Fill.

That’s F-I-L-L, and it solves crossword puzzles.

And Dr. Fill has competed in professional crossword puzzle tournaments successfully.

And now this gentleman works for the delightful people at Google.

So, today’s guest, none other than a dear friend, Matt Ginsberg.

Who’s returning to StarTalk.

Matt Ginsberg, welcome back.

It’s great to be talking to you all again.

I think at the end, we probably should have a vote on whether we want to let Chuck and Gary out of the closet or whether we prefer the doppelgangers.

And we can just go with what we have.

That’s right.

Whether these are better versions, right.

So, Matt, what are you doing with Google right now?

Or do you want some kind of NDA, non-disclosure agreement?

So I work for an organization called X.

It is part of Alphabet.

We are Alphabet’s moonshot factory, which means…

But just to be clear, Alphabet is the holding company of Google.

Alphabet is the holding company of Google and other organizations.

So Waymo, for example…

And X is part of Google or X in Alphabet?

It sounds like it belongs in Alphabet.

X is in Alphabet.

We’re not part of Google.

We’re in Alphabet.

I got to tell you something.

For a guy who went to Oxford, I ain’t so impressed that you know that X is in the Alphabet.

Okay?

That’s fair.

We do the hardest stuff we can think of.

It’s an unbelievably fun place to work.

The people are incredibly smart.

We have a project called Tapestry.

This is like the X Prize.

It is.

The X Prize was money for just some, as we say, moonshot.

Something that, who thought you could make this happen?

And you do.

You put enough smart people funded by enough money.

And then there it is.

Is it true that they call it the failure division of Google?

Because they don’t care if you fail.

It’s all about the discovery of information and advancement through doing stuff that you would never otherwise attempt.

So we have this project called Tapestry.

The goal is to decarbonize the electric grid.

So everybody uses renewables and huge climate impact.

That is so hard that if we can’t do it, nobody’s going to be stunned.

Any specific project at X is probably more likely to fail than succeed, but there are some amazing successes.

So Waymo, which is Alphabet’s self-driving car division, came out of X.

We have another project called Mineral that just graduated that has these weird vehicles that drive around the tracks between crops on a farm, and they use cameras and machine vision to figure out how the plants are doing and just generally to make farming more efficient.

All of these things are things that were just as hard as tapestry when they started, but they’ve actually succeeded, and now they’re out as, I mean, they’re bets.

They’re part of Alphabet, so we call them bets.

And these are other divisions.

Clever.

I like that.

Uh-huh.

But not, it’s, I have to tell you, it’s great working for a company that expects you to do unbelievably hard things and realizes that when you try and do things that hard, you’re not always going to pull it off.

Well, of course, science in general has many, many failures.

The press only talks about the successes.

So you starting out in science, this would not have been a foreign concept to you other than that there’s a whole company that’s cool with it.

Normally, if you don’t make the bottom line, you’re on the street the next quarter.

Exactly.

I tell my friends who are in projects that eventually get shut down because they didn’t work, I say, you always learn more from a failure than from a success.

So we should celebrate the failures.

And we do.

We actually, when an effort gets shut down, there’s a big meeting, everybody applauds, it’s like a party.

Because we know that we’ve learned stuff, we know that we’ve tried stuff, and we know that now we’re going to try something new, and enough of it works that the whole enterprise is something that continues.

And how do you justify that to the short-sighted desires of shareholders?

Because honestly, that’s…

No, here’s the deal.

We ask a different question.

So Matt, what percent of Google’s annual revenue gets directed towards X?

So that is covered by an NDA.

This is an R&D number.

That is covered by an NDA.

It is an R&D effort that tries to do the crazy things.

All the stuff that we’re going to talk about, about generative AI, is based on a technology called Transformers that came out of a part of Google called Brain.

Brain came out of X.

So Waymo came out of X.

We’ve done amazingly impactful things.

We have a long timeframe.

So when we start a project, it’s not, you know, it’s common for us to say this is going to take 10 years.

We might kill it in two because we can tell that it’s not working, but if it succeeds, it’s going to take 10 years, we’re okay with that.

So the hard part from the shareholders is not arguing that we’re adding value.

I think we’re clearly adding value.

We have to get them to be patient enough to see that value materialize.

So you’re lucky, Matt, because there’s a culture there.

I saw it once or twice when I was playing where if you did lose, got defeated, beaten heavily, people went away, licked their wounds, but considered where the things went wrong, came back with solutions.

If you build that culture, you can achieve things by using that way.

But the pressure to get results, and as Chuck was so rightly pointing out, it’s dollars.

I don’t have time because I’m committing such a phenomenal amount of money to this, and if we lose too many games, my coach gets fired, all sorts of things.

So it’s a really, really fascinating place.

If you can develop a sustained culture like that.

We do have the culture.

And Alphabet has been fantastic about recognizing that there is room in an entity as successful as Alphabet to have a bunch of people, to let them take the long view, to let them try and do incredibly hard things and see what happens.

And obviously, we can’t just keep failing.

Some stuff eventually has to work, but some stuff does work.

And the people at X who decide what are we going to work on, when are we going to kill it, when are we going to keep pushing it, they seem to be very good about ensuring that net-net we’re a positive.

Can you comment, just reflect on a recent AI news story about a John Lennon song where they sampled John Lennon’s voice and then had him finish the song because he died before it was recorded?

Can you just reflect on, is that a good thing or a bad thing?

That’s a complicated thing.

I think that…

Instead of asking what would Jesus do, we say what would John Lennon do?

If he were here, would he punch you in the nose?

What would he do?

More Beatles songs I think are an undeniably good thing.

I think that the ability…

There are two things I think you want to take away from this.

First, look at what they chose to synthesize.

They synthesized Lennon’s voice.

What made the Beatles so magical is, I think, the words.

What they chose to say.

That’s much harder.

Synthesizing a person’s voice is relatively speaking easy.

So the first thing I think we need to think about is synthesizing a voice and synthesizing the idea, the essence, different things.

And the second thing is the fact that you can synthesize someone’s voice is scary because it’s going to make deep fakes so much more of a problem than they currently are.

Because now we can make a picture of whoever doing whatever.

And we can even attach some voice to it to convince you that it really happened when in fact it didn’t really happen.

So I think that’s a big issue that will need to be addressed if we’re going to keep all of society grounded in reality as opposed to these fabrications.

But it has to be addressed right now because if we don’t come up with some way to watermark this technology on a digital level, I mean, what is going to stop people from utilizing it in the most heinous ways possible?

Nothing, nothing.

So you’re absolutely right.

Watermarking is tricky because I mean you can pass a law saying all digitally created images must be watermarked.

Okay.

And then somebody creates an image in a country that doesn’t have the law and it’s on the internet and now what do you do?

I think the way we have to deal with this, I think there are a couple of things.

One is we need to develop the technology.

So right now there are programs that can recognize BARD, which is Google’s generative AI you can talk to.

And if you give it a text that was written by BARD, they can say, yeah, 95% that was written by BARD, not by a human.

We need to use those to understand what was created and what was not.

And the same thing can be said of images.

There are non-watermark traces that we need to better understand and better take advantage of.

And the second thing is we need to recognize that there are trusted sources.

And we need to pay attention to this came from somewhere that I actually am willing to believe.

If they say they took the picture, they took the picture, it actually happened.

So a news agency potentially can serve this role in the way that some guy somewhere far away who just creates a picture might not be a trusted source.

And we as a society have to be more suspicious, sadly, than we have been, that just because somebody shows me a picture, it doesn’t mean it’s true.

That is true.

Matt, we’ve discussed some of the positivity and some of the negativity.

But if we take the intelligence part of AI, have we actually, since we’ve been starting to play with it and develop it, kind of pointed it in the right direction to do the right things, or have we kind of just wasted our time with it so far?

I don’t think we’ve wasted our time.

Okay.

Okay, first of all, so I think there are lots of applications of AI that have been phenomenally successful, incredibly important.

The car that you just bought was probably manufactured mostly by robots.

The cars have fewer defects coming off the line.

Absolutely.

They’re cheaper.

It’s a good thing.

Aren’t they?

Well, there’s inflation, but I think Pathfinder, right?

We want to send a robot to Mars because we can.

We don’t have the technology to send people to Mars yet, but we can send our agents there in the form of these automated devices.

They have to be pretty independent because round-trip message time to Mars is long.

You know, the robot has to avoid a rock all by itself because you can’t talk to it quickly enough.

Watch out for the cliff.

20 minutes later, it’s too late.

The Martians.

Clearly the Martians.

So I think we’ve done good things.

I think that AI is moving very fast at the moment, and when technology moves quickly, it’s challenging.

It’s always been challenging for society to keep up and for society to say, okay, here’s what I have to think about.

Here’s how the world has actually changed.

And what…

It’s important that society distinguish the apparent changes, which aren’t actually grounded in reality, from what has actually happened that matters.

So for example, all these generative programs, BARD and others, they’re not actually smart.

They don’t actually know what’s going on, but it’s very easy for people to think they’re smart and to ask them for explanations they’re poorly equipped to give, as opposed to asking them things that they can respond to.

So for example, somebody asked me recently if BARD understood causality.

They wanted to understand some causal thing or lack of a causal thing.

So they said, can BARD understand causality?

And I said, OK.

And I went to BARD and I said, is there a correlation between the phase of the moon and the amount of chicken eaten in Denmark?

And it said, absolutely.

And it quoted a paper that had never actually been written and a survey that had never actually been done.

It just made all this stuff up.

That was, and I told my son about this and he said, Dad, why are you asking BARD questions like this?

It’s not going to answer them.

But if I want, I’m a new business and I want to have a website and I can’t afford a developer, I can go to BARD and say, make me a website that does this.

And BARD will do great.

So we, all of us, need to understand what these entities can do, what they can’t do, what they’re going to be effective at, what they’re going to be ineffective at.

And we need to ask them to do what they are good at, which is a lot.

It’s just not everything.

They’re not going to, you know, replace all of us.

All of us.

All of us being the operative phrase.

Well, Chuck, they replaced you, apparently, fine, because you’re in the closet, and this is just the Chuck avatar.

Well played, Matt.

It doesn’t make a difference if you’re driving across country or on your daily commute.

The time in your car is perfect for listening to podcasts.

Use T-Mobile’s network to help keep you connected to all your favorite podcasts when you’re out and about.

Now here in New York, people don’t really drive that much, but you do use ride share and you’re on public transportation, you might be on a ferry or you might be on a bus.

And you know what you’re doing?

You’re listening to something.

And if you want that something to come through on the fastest 5G network, well then you better be using T-Mobile.

T-Mobile covers more highway miles with 5G than anyone.

Seems obvious if you need great coverage, especially when you’re on the go, check out T-Mobile.

They’re the largest and fastest 5G network.

Find out more at tmobile.com/seewhy, that’s S-E-E-W-H-Y.

Fastest based on median overall combined 5G speeds, according to analysis by Ookla of Speedtest Intelligence data download speeds for Q4 2022.

See 5G device coverage and access details at tmobile.com.

You can cross your fingers and all your toes during a data center migration.

You can knock on wood, pluck a dozen four leaf clovers or look to your lucky stars for a successful office expansion.

You could hold your breath, shut your eyes and make all the wishes in the world to help avoid cyber attacks.

But none of that truly helps you.

Because next level moments need the next level network.

With the security, reliability and expertise to take your business further.

AT&T Business, the network you can rely on.

Matt, seeing as we’re talking about robots and Mars, what would you have to build into a program for it to look into deep space and find things that we don’t know to look for just yet?

How do you go about tweaking AI to be able to achieve that, or is it not quite there yet?

I think it’s mostly not quite there yet.

So the way these things work is, here’s how I often think of it.

The world breaks down into what I call 51-49 problems, where being 51% right is good.

So if you’re playing the stock market and you can accurately pick stocks where you’re going up 51% of the time, you’re about to be really rich.

And then you have 100-0 problems, where 51% is not good enough and 99% is not good enough.

If you’re trying to shut down a nuclear reactor in an emergency, you really need the 100% answer.

All of these machine learning systems are incredibly good at the 51-49 stuff, and they’re not so good at the 100-0 stuff.
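The 51-49 versus 100-0 framing can be illustrated with a toy simulation. Every figure below, the 51% picker, the 1% bet size, the 99% reliable shutdown, is a hypothetical chosen only to make the contrast visible, not a claim about real markets or reactors.

```python
import random

random.seed(0)

# 51-49 problem: a hypothetical stock picker that is right 51% of the
# time, betting 1% of bankroll on each call. A tiny edge compounds.
bankroll = 1.0
for _ in range(100_000):
    bankroll *= 1.01 if random.random() < 0.51 else 0.99
# With 100,000 bets, the 2% edge is overwhelmingly likely to dominate noise.
print(f"bankroll after 100,000 bets: {bankroll:,.2f}")

# 100-0 problem: a safety system that works 99% of the time still fails
# almost surely over enough emergencies, so 99% is nowhere near enough.
p_all_ok = 0.99 ** 1000
print(f"chance 1,000 emergencies all go well at 99%: {p_all_ok:.1e}")
```

The asymmetry is the point: repeated small-stakes decisions reward any edge over 50%, while one-shot catastrophic decisions demand essentially perfect reliability.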

So if you want to look into deep space and you’re really interested in something that you thought was probability zero, an alien talking to us or a new kind of supernova or something that we have never seen before, that’s sort of a 100-0 problem.

Recognizing things whose probability is actually zero, and it’s a huge surprise.

Machine learning systems are not great at that.

But I don’t agree.

Nerd fight in progress here.

I don’t agree.

Gloves on.

Nerd fight.

Excellent.

So, Matt, I agree with you in principle, but there’s an important nuance here, because I can program the computer to show me something that I don’t recognize because I have a huge catalog of things I do recognize.

No matter what it is.

So, I know what’s…

It’s extremely broad, but it’s something that we know we haven’t seen, because the computer knows our entire catalog.

Correct.

The computer knows everything that we know and something shows up that we don’t know and I say, show me everything we don’t know.

Because that has to be anomalous, because we don’t know it.

It’s gonna be an anomalous one in a gazillion thing.

Yes, or it could just be a glitch in the matrix, but it’ll find it though.

And so, that’s why I don’t entirely agree it’s not good, at least astrophysically, in finding the lone wolf out there.

So, the trick is, what you’re asking is, find the one that doesn’t belong, basically.

Correct.

And these programs will classify everything.

You give them eight things and they’ll say, oh, this belongs in pot seven.

Oh my God.

So, whether you want it to or not is what you’re saying, is it makes the decision to do so on its own.

Even if it creates a category that didn’t exist and says, well, now it’s in this category.

Well, it’ll probably put it in an existing category.

What you can do, wait, wait, wait, hang on, hang on, hang on.

What you can do is you can-

Don’t make me come out there.

Somebody give me some popcorn, this nerd fight’s getting good.

What you can do is you can say, if the frequency is outside of a frequency I’ve ever seen, flag it.

If the periodicity is shorter than anything I’ve ever seen, flag it.

But what you’re doing now is you’re actually creating sort of a new category of surprising things that you are defining for the system.

And then saying what belongs in that category.

Absolutely you can do that.
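The flagging rule just described, treat anything outside the catalog’s observed range as surprising, can be sketched roughly like this. The catalog values, field names, and thresholds are all invented for illustration.

```python
# Hypothetical catalog of previously observed source properties.
known_frequencies = [1.4, 2.7, 5.0, 8.3]   # e.g. GHz values we've seen
known_periods = [0.3, 1.1, 4.2]            # e.g. seconds

def flag_surprises(observations):
    """Flag observations outside the range of anything in the catalog."""
    f_lo, f_hi = min(known_frequencies), max(known_frequencies)
    p_lo = min(known_periods)
    surprises = []
    for obs in observations:
        if not (f_lo <= obs["frequency"] <= f_hi):
            surprises.append((obs, "frequency outside anything we've seen"))
        elif obs["period"] < p_lo:
            surprises.append((obs, "periodicity shorter than anything we've seen"))
    return surprises

candidates = [
    {"frequency": 3.0, "period": 0.05},  # unusually short period: flagged
    {"frequency": 3.0, "period": 2.0},   # routine: not flagged
]
for obs, reason in flag_surprises(candidates):
    print(obs, reason)
```

Note that this only catches surprises someone thought to define thresholds for, which is exactly Matt’s caveat: a pair of synchronized pulsars would sail through, since each one individually looks routine.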

But if you see a true surprise, something that you actually had no idea was gonna exist.

So for example, imagine that you find a pulsar and miraculously the phase of the pulsar is 100% correlated with the phase of another pulsar two light years away.

That’s amazing.

Something totally bizarre is going on.

But the machine won’t know because it has no idea to look.

It just looks at it and goes, oh, pulsar A, pulsar B.

It’s not a new object, it’s a new phenomenon.

It’s a new thing.

Right.

So these things that are brand new things, it’ll just look at that and say, pulsar, next.

Right.

Okay.

So I’ll give you that.

Yeah.

So what you’re saying is it doesn’t know to look for it because we don’t know to look for it.

Correct.

So I was being blunt about it and saying objects and phenomena that are sort of singular, that you would just put in a catalog with properties.

We use neural nets searching through data to find weird stuff all the time.

But you are right.

If there are two pulsars that are synchronized, we know what pulsars are, we know what their pulses look like, and they’re synchronized because aliens are getting ready for the invasion, we would have no idea.

Nobody would find that.

Exactly.

But what would happen if this happened, people would say, holy cow, these pulsars are synchronized, and then they would define a new thing, which is a synchronized pulsar pair.

And then all of a sudden, that would go into our category, and now we would talk about that pulsar pair as an object, and all of a sudden, the neural net would say, oh, pulsar pair, I’ve seen that before, this is a pulsar pair.

But the first time you see one of these fundamentally new phenomena, which is what makes science fun, in all honesty, machines don’t know, it’s too far outside, you know, what they’ve been trained to do.

Mm-hmm.

So we’ve got to retrain them, or retrain ourselves.

I think, so I don’t think so.

But it puts a limit on AI’s ability to explore for us.

So it does, so right now, and I’ve said this before on the show, what we’re good at, and what machines are good at, are different.

We are incredibly good at, holy cow, two synchronized pulsars, who would have thought it?

I gotta pay attention to that.

We’re amazing at that.

Machines are, they’re amazing at, this thing that you can barely see over here, I looked at it 18 different ways, it’s probably a pulsar.

They’re better than we are.

But right now, we can do more with machines at our side than either of us can do in isolation.

And I think that’s great.

I’m incredibly optimistic because of that.

I don’t see, and the day is coming somewhere way far away.

But right now, I don’t see that they can get by without us any better than we can get by without them.

We’re good together.

Let’s flip it into my backyard sports.

Could AI become a live in-game play coach, a head coach?

Could it react in real time?

The answer to that is yes.

And I have built an NFL play caller.

I have run it in simulation against the choices made by actual play callers, and it crushes them.

It just annihilates them.

And it’s easily fast enough.

I actually, I played with it before joining X.

I played with it and I had it, and I actually was watching football games with it.

And it would put in real time, this is what you should do.

And I would watch the coach do what he did.

And then I ran all these simulations.

People are, unfortunately, I guess, not terribly good play callers.

Play calling is this giant statistics problem.

Machines are going to be great at that, and they’re great at it fast.

However, does your program take into account the adjustments that are made by quarterbacks who recognize coverage in real time?

So the defense plays a call and the offense plays a call.

So the answer is yes.

No, that’s the wrong question.

Chuck, that’s the wrong question.

Does your program deflate the ball?

No.

That’s the right question.

So the answer to Chuck’s question is that part of my software was who’s the quarterback.

That’s enough because the quarterbacks that are effective at adapting to what they see when they come to the line.

Exactly.

They are going to have slightly different statistics than the quarterbacks who are not effective.

So when the program decides, do I want to call a run?

Do I want to call a pass?

Where do I want to call a pass to?

It does know.

But how?

Wait, I missed something here.

Since you…

What do you mean you do better than the actual play callers?

Because you don’t have an outcome that you can look at.

You just say, shouldn’t have done that, should have done what I said.

Had he done what you said, how do you know what the outcome would have been?

So there…

You don’t.

So there are two ways.

First, the AI coach is based on a simulation engine, so I can run that simulation.

Now it’s sort of a self-licking ice cream cone because the simulation…

Yes, it is.

Okay.

So that’s way one.

And I think there’s still merit there because you can test the accuracy of the simulation.

And the second thing I can do is I can just go back and look at the game and say, okay, the plays where the actual human coach made my play, A, B, C, D, E.

The plays where he made a different play, F, G, H, I, J.

And then I can look at them and say, oh, A, B, C worked out pretty well.

F, G, H, J, they were duds.

And it’s easy to look after the fact and see whether a play was a success or a failure.
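The after-the-fact check Matt describes can be sketched as splitting real plays by whether the coach’s call matched the model’s, then comparing success rates in the two groups. The play records below are invented, just enough data to show the shape of the comparison.

```python
# Hypothetical play-by-play records: what the model would have called,
# what the coach actually called, and whether the play succeeded.
plays = [
    {"model_call": "pass", "coach_call": "pass", "success": True},
    {"model_call": "run",  "coach_call": "run",  "success": True},
    {"model_call": "pass", "coach_call": "pass", "success": False},
    {"model_call": "pass", "coach_call": "run",  "success": False},
    {"model_call": "run",  "coach_call": "pass", "success": False},
    {"model_call": "run",  "coach_call": "pass", "success": True},
]

def success_rate(group):
    return sum(p["success"] for p in group) / len(group) if group else 0.0

matched  = [p for p in plays if p["coach_call"] == p["model_call"]]
differed = [p for p in plays if p["coach_call"] != p["model_call"]]

print(f"coach agreed with model:   {success_rate(matched):.0%} successful")
print(f"coach differed from model: {success_rate(differed):.0%} successful")
```

The appeal of this design is that it sidesteps the simulator entirely: it only uses outcomes of plays that actually happened, which is why it answers the "how do you know?" objection at least partially.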

But aren’t you still just invoking the statistics of past events to predict a future event?

I am invoking, but that’s reasonable, right?

So there’s a reason.

No, no, no.

If that’s the case, isn’t that what they’re doing?

Every single pitch in baseball today?

They are.

Everything is on a, they’re overanalyzing it, right?

In baseball.

Why are they not doing that in football?

So that I don’t know.

I do think that baseball is a little more committed to the statistics of the sport than the other sports are.

Nothing else is happening between.

I also think that at some level, what you’re asking is how hard is it to build a machine learning or other model that tells you how effective a pitch will be, or how effective a football play will be, one that is able to predict the outcome of a particular sporting choice with reasonable accuracy.

The answer is, it’s not super hard and it’s not super easy.

You have to get the data, there’s a lot of curation involved, you have to use reasonably modern techniques.

That’s what I did with the NFL thing, and it worked pretty well.

I was able to tell, what was the actual answer?

It was a while ago.

I think I was able to predict run versus pass with very high accuracy, and I was able to predict exactly what play would be called, something like 20% of the time.

Damn.

That’s very, very high for football.

20%, that’s insane.

Could you imagine a football team being on the sidelines, aside from a Belichick team that’s stealing the signals, if you could imagine being on the sideline and with high accuracy, knowing 20%, one out of every five plays, you would know what they’re going to do.

You would win, you would win every game.

This is part of why the software, why the simulations indicated that a machine coach is really going to do very well against a human coach.

That’s a ways off.

I know, but what if you put one against another?

You know, my AI is better than your AI.

Oh, I love it.

Machine coach against machine coach.

Oh, it’s the Decepticons versus the Autobots.

Do they cancel each other out?

They sort of cancel each other out.

So, you know, if you, let’s go to chess, which is sort of less controversial.

So you can have, even a relatively poor chess program now is way better than the best human.

Right.

But there are still better programs and worse programs.

Right.

So the same thing can happen here.

Now you have additional factors in sports because one of the teams may well be simply more physically talented than another team.

True.

And then the question becomes, can a difference in the quality of the coaches, whether they’re AIs or not, overcome the difference in the physical qualities of the team and do you want to allow that?

I don’t know.

But there will be better AIs and worse AIs.

How big those differences are, I don’t know, because right now there are no AIs.

But that’s going to be another facet of what makes a team good.

What likelihood, Matt, is there that sports organizations, whichever sport, are already using AI technology for in-game situations?

I think it’s pretty small.

When I joined X, I was trying to talk to the NFL about using the software developed for play calling.

And it seemed pretty clear to me that they weren’t doing anything like that yet.

Now, I’ve been in X for a couple of years, so we’re looking back two years.

I think that eventually this is going to happen, but of course, not yet.

But it’s moving towards that because right now you’re using Next Gen Stats.

And what they’re looking at is percentage likelihood and probabilities for certain plays at certain times.

You’ve never seen more two point conversions in the NFL than you have right now.

That is a direct result of statistically, you should do it.

You’ve never seen more teams going for fourth and whatever because statistically, you should do it.

So hang on.

So we’re moving towards that direction.

So about that specifically: it was probably eight years ago now, I did some statistical work for the Oregon Ducks.

And I told the coach at the time, I gave him these huge printouts with what you should do in every situation on 4th down.

And I said, just stop punting between the 35 yard lines.

That’s really what it all says.

And he stopped.

And that was the year that the Oregon Ducks were at their best.

And the other college teams noticed and they stopped punting between the 35 yard lines.

And you’re absolutely right.

What has happened at the NFL level is people have noticed.

So when I see someone not punting between the 35s, going for it on 4th down, I smile because it’s basically my work.

The statistical work I did a while ago to get the Oregon Ducks to stop doing it.

And then it’s propagated out.

So.

But just so I understand: on 4th down, rather than punt, which returns the ball to the opposing team, they would go for a first down.

And in some percent of the cases, they don’t.

But if I’m on my own 35 yard line, I’m handing you the ball at the 35 yard line.

Right.

And you’re saying that risk is not as great as just handing over the ball, because you might have gotten a first down.

You’re only starting 10 yards further back than you would if you had made a fair catch.

Yeah.

So you’re only conceding 10 yards.

You’re conceding 10 yards by going for it.

I mean, by not going for it.

That’s all you’re really giving up.

Unless there’s a really good return.

Now does it take that into account?

Cause that’s fascinating.

It took everything into account.

So yeah, if you’ve got a great return guy, you know.

It’s also a function of how late in the game it is and how much you’re ahead or behind.

The actual rules, it turns out, if we’re doing sports, are: don’t punt between the 35 yard lines, and always go for it on fourth and one, even from your own 10.

Whoa.

And I told that to the Oregon Ducks coach and he said, I can’t do that.

I’ll lose my job.

I lose my job.

Yeah.

But from a straight statistical perspective, the bottom line is: if you punt from your own 10, you’re still screwed.

They’re still going to have unbelievably good field position.

Yes, you’re right.

And if you go for it on your own 10, fourth and one, you have a reasonably good chance of getting it and now all of a sudden you’re back in it.
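A back-of-the-envelope way to see the tradeoff Ginsberg is describing is to compare expected points. Everything below is a hypothetical sketch, not his actual model: the linear expected-points curve, the 70% conversion rate on fourth-and-one, and the 40-yard net punt are all illustrative assumptions.

```python
# Hypothetical sketch: compare going for it on fourth-and-one vs. punting,
# using a crude linear "expected points by field position" curve.
# All numbers are illustrative assumptions, not real NFL analytics.

def expected_points(y):
    """Rough expected net points for the offense with the ball
    y yards from its own goal line (0-100). Illustrative only."""
    return -0.6 + 0.056 * y

def go_for_it_value(y, p_convert=0.7):
    # Convert: keep the ball about one yard downfield.
    keep = expected_points(min(y + 1, 100))
    # Fail: opponent takes over at this spot (flip field, flip sign).
    turnover = -expected_points(100 - y)
    return p_convert * keep + (1 - p_convert) * turnover

def punt_value(y, net_punt=40):
    # Opponent receives the ball net_punt yards downfield (crudely capped).
    spot = min(y + net_punt, 80)
    return -expected_points(100 - spot)

# With these assumptions, going for it wins even from your own 10.
for y in (10, 35, 50):
    print(y, round(go_for_it_value(y), 2), round(punt_value(y), 2))
```

With these toy numbers, going for it beats punting at every spot shown, which is the shape of the conclusion in the conversation; a real analysis would use empirical expected-points curves and situation-specific conversion rates.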

But he just said he couldn’t do that.

He just looked me in the eye and said, I can’t do that.

I’ll lose my job.

Did he lose his job anyway?

He went to the NFL.

He had a great year for the Oregon Ducks.

He went to the NFL.

Well, there you go.

He lost his job upwards.

That’s fantastic.

Bye.

You can crush your fingers and all your toes during a data center migration.

You can knock on wood, pluck a dozen four-leaf clovers, or look to your lucky stars for a successful office expansion.

You could hold your breath, shut your eyes, and say all the well-wishes to help avoid cyber attacks.

But none of that truly helps you.

Because next level moments need the next level network.

With the security, reliability, and expertise to take your business further.

AT&T Business, the network you can rely on.

You said earlier on, Matt, about how machine learning programs are getting quicker.

Are we going to get to the point where we can really start to predict some of the big natural disasters, the earthquakes, the tsunamis, or is this as good as it gets right now?

Well, let me be more precise there.

There’s certain, I don’t know, earthquakes wouldn’t be the best example here, but certainly storms, where we have limits to how many days in advance you can predict the weather because at some point chaos takes over.

How does AI handle chaos any better than we’ve ever handled it before?

So those are sort of the same question, and I think that actually comes back to this 51-49 versus 100-0 thing.

So predicting an earthquake, that’s 100-0 thing.

How many hurricanes are there going to be this season?

That’s more like a 51-49 thing.

Dealing with chaos, very much a 51-49 thing.

The stock market is sort of chaotic, but I don’t have to get it right all the time, I just want to get it right most of the time.

I should just be clear that our audience knows more precisely what we mean by chaos.

So what we learned back, I guess, in the 70s and 80s, is that you can start a system out with certain variables having certain values, run the system, and get a result. Not all systems behave this way, but for some systems, you can then make the tiniest adjustment in your initial parameters, let it go forward, and you get a completely different result.

Exactly.

So that small changes in your initial conditions would not lead to small changes in your outcomes, it led to huge changes in your outcomes, which meant that your ability to predict far into the future for some systems was essentially mathematically impossible.
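The sensitivity being described here is easy to demonstrate in a few lines of code. This is an illustration added for clarity, not anything from the episode: the logistic map, a classic chaotic system, run from two starting values that differ by one part in ten billion.

```python
# Illustrative sketch: sensitive dependence on initial conditions,
# using the logistic map x -> r*x*(1-x) with r = 4, a classic chaotic system.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # a tiny perturbation in the start value

# Early on the trajectories agree almost exactly; after enough steps
# they are completely unrelated.
print(abs(a[5] - b[5]))                               # still tiny
print(max(abs(x - y) for x, y in zip(a, b)))          # grows to order one
```

The gap between the two runs roughly doubles each step, so a difference of 10^-10 becomes order-one within a few dozen iterations, which is exactly why long-range weather prediction hits a wall.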

So this is-

That’s what I meant by chaos here.

A butterfly flaps its wings in East Africa, and there’s a hurricane six weeks later in the Bahamas.

And my answer is, again, it’s the 100-0 versus 51-49 thing.

I cannot tell you there will be a hurricane in the Bahamas on October 27th, but I can tell you there are going to be more hurricanes this year than average.

In general, more hurricanes: that’s 51-49.

This specific hurricane that depends on that specific butterfly, I don’t know any better than I used to.

How about quantum computing when you add that into the effect, because now you’re looking at billions and billions of data points that are being fed to the AI?

It is…

So quantum computing is good.

I know less about it than I want to, and less than I should, because I certainly have the background.

Right now, conventional machines appear to be able to process the data we need them to process.

And quantum computing, I think, will be helpful in other ways.

So the quantum stuff appears to be best at sort of doing an almost uncountable number of things in parallel.

So I want to find an integer that has certain properties.

And I can sort of look at hundreds of billions of integers simultaneously using the quantum stuff.

That’s cool.

But with the machine learning stuff, I’m just trying to look for properties in enormous data sets, which involves sort of looking at all the data and seeing how it all interacts.

And that we seem, at least so far, to be able to do. It takes a lot of computing, a ton of computing, but currently we’re keeping up.

There might be a way to query that same set of data with the higher performance quantum computing in ways we had not thought to even ask.

Maybe.

Of the data.

Maybe.

Interesting.

By the way, about that butterfly, do you remember that article in the journal of Irreproducible Results where…

I think I said this on another episode.

This is a journal where it’s for idle scientists who have some crazy thought that is completely stupid, but they want to publish it anyway.

And so it goes into the journal of Irreproducible Results.

So one of them was the calculation that heaven is hotter than hell.

And it looked at the thermodynamics of souls and how many people are heaven-worthy versus hell-worthy.

And it added up all the energy of the souls going into heaven, and it made heaven much hotter.

So stupid calculations like that.

The problem is that hell does not have air conditioning.

And heaven does.

So this is definitely a bot from…

Not Chuck in the closet.

So this paper has this photo of a butterfly, and it said it captured the butterfly that caused Hurricane Andrew.

You missed that.

That’s fun.

So Matt, what I think is most fearful for people is when AI does not just do the tasks we give it better than we’ve ever done them, but when it self-learns and achieves some mild version of what we might call consciousness. And this sort of artificial general intelligence, I think, is the scariest part of AI that’s been discussed in recent months.

Could you just comment on where that is today?

It is scary.

What we need to understand is what this technology is actually doing.

These things that we’re dealing with, these generative AI programs, they have no notion of truth.

They have no notion of reality.

They have no notion of fact.

All they’re doing…

Or morality even, or morality.

All they’re doing is trying to predict what an expert would say, just what words would come out of his mouth.

And as a result, they sort of don’t know what they’re doing.

They just know what someone might say.
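One way to see why a pure next-word predictor has no notion of fact: here is a toy bigram model (an illustration added for clarity, nothing from the episode) that learns only which word tends to follow which. Trained on a mix of true and false sentences, it will cheerfully continue either, because statistics is all it has.

```python
# Toy sketch: a bigram "language model" that only learns which word tends
# to follow which. It has no representation of truth; it will happily
# continue a false sentence if the word statistics point that way.
import random
from collections import defaultdict

corpus = "the sky is blue . the sky is green . the grass is green .".split()

# Count word -> next-word occurrences.
follows = defaultdict(list)
for w, nxt in zip(corpus, corpus[1:]):
    follows[w].append(nxt)

def continue_text(start, n=3, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        out.append(rng.choice(follows[out[-1]]))
    return " ".join(out)

# May emit "sky is green ." -- the model knows word statistics, not facts.
print(continue_text("sky"))
```

Real large language models are vastly more sophisticated, but the underlying objective, predicting the next token from statistics of text, is the same, which is the point being made here.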

I don’t think we’re anywhere near a point where these things exhibit true general intelligence.

We need to understand when we’re interacting with them.

These things don’t understand, they don’t understand there are facts.

Not that they don’t understand the facts, they don’t understand there are facts.

So when I asked about the phase of the moon and chicken in Denmark, and I got back this long study that didn’t even exist, it had no idea.

And I actually asked it, I said, are you sure?

And it responded, it said, well, I’m not really that sure because this was the only study I could find.

It just stands by what it said; it’s not facts.

And it has no idea that this is not how you look at the world.

What they call mirage?

It’s a hallucination.

A hallucination.

It’s hallucination.

And it doesn’t know it’s made something up.

When we talk about making something up, just the phrase is identifying a distinction between actual reality and whatever you’re saying.

These things don’t know there is an actual reality.

So is that a necessary guardrail?

Is it a necessary guardrail to imbue the digital intelligence with the concept of these things, like what reality is, what a fact is, what truth is?

I mean, is it necessary?

It would be good, but I don’t know how you do it.

These things are so divorced from the notion of a reality.

You can’t just say, hey, there are facts.

Remember that.

It’s just not how they work.

It’s not how they’re architected.

Matt, could the problem be that these language model AI machines are coming of age at a time where the Internet is filled with non-facts?

So it’s not its fault we fed it junk food.

Had it come around right at the beginning of the Internet, where you didn’t have QAnon and all the rest of this, might it have performed a little better?

The fact, and it is a fact, that these things have no notion of truth would still be true.

People have tried to curate the information on which they are trained, so they’re not trained on nonsense, and they still have this problem with hallucinating.

The problem is they don’t understand that there is an objective reality of which they are a part.

And I think you’re right.

The problem is not the programs.

The problem is us.

We need to recognize that these things are divorced from reality.

We need to remember.

If I want a website created, it’s going to do well.

If I want to ask it, is there a correlation between these two crazy things that I pulled out of the air, it’s going to do badly.

And we shouldn’t pay attention.

When I told my son that I’d asked Bard, is there a correlation between the phase of the moon and the amount of chicken eaten in Denmark, his immediate response was, don’t ask Bard, that’s stupid.

I do think there’s going to be a job here, and it’s going to be an important job, which is, how do I take what I want to know, and it’s like prompt programming.

What prompt do I give Bard to get back the most useful answer I can and to avoid all the junk?

That’s going to be a thing.

There are going to be people who are good at it, there are going to be classes that teach you how to do it, it’s going to be a real skill that we are going to need.

Okay, so now, let’s make that a given.

What do you do, and this may be more philosophical than you’re qualified to answer, I’m certainly not.

What do you do with the people who purposefully use the technology for the ends of misinformation, confusion, and chaos?

Because even if you do everything that you just said, those agents can still utilize the technology to do some serious harm to society.

Correct, and I think they will.

And I think this gets back to what we were talking about before.

I think you need to have trusted sources.

Trust is going to become much more valuable because lack of trust is going to be so much more dangerous.

So you need to have trusted sources.

You need, to the extent you can, to have technology that can help identify generated images as opposed to real images.

So there is both a technical problem there.

Can I produce software that tells me this is fake, this is real?

Technical problem.

And there is a social problem.

How do I get people to care that they are looking at real information as opposed to sort of garbage?

That’s half my life.

That’s half my life as an educator.

I believe you.

And I think it is a huge part of what scientists need to do.

And our responsibility to do it is even greater now than it’s ever been.

This is more than one facet, I’m guessing now, because yes, scientists and programmers have a responsibility, but the legislation, and I mean, you can’t make it governmental because if I go do something in another country, that government’s got no power.

So who are you going to get?

Space Force to oversee it?

No, that won’t happen.

So who will bring to bear legislation and these bad actors keep them in their place?

It’s going to be a party time for them.

Well, that would have to be part of what Matt was talking about.

In the recognition portion, you would have to be able to recognize where these bad actors are.

Like most of the dark web, you know where the people are located.

You actually know where they are.

It’s just that they are in a country that’s not going to do anything to them.

I just wonder if bad actors here put civilization at risk.

That calls for some kind of international oversight over this.

I think that the, so first of all, the technical problems here are hard and they’re challenging and they’re important.

Identifying generated text versus non-generated text.

And I am thrilled because I’m a technologist and I get to spend my productive time working on technical problems and I don’t have to solve the social problems.

That’s apparently Neil’s job.

Well, I’m so glad this all works out for you.

I do care about the need to inform people about what the technology can do.

I do think that this particular technology is going to take this problem that you’ve mentioned that we have, right?

Truth has become more elusive.

And I think that this technology potentially can make it more elusive still.

But it is still up to us and I think it’s still possible for us to say no, enough.

We are going to be committed to actually knowing whether the sky is blue before we start telling all of our friends and neighbors what color the sky is. We should check.

The sky really is blue.

Here’s why I’m convinced.

Here’s my source.

Yes, it’s trusted.

So that’s why I’m willing to talk to you about it.

Well, part of me thinks that if AI has these hallucinations and it doesn’t really know what truth is, there’s nothing intelligent about it at all.

So it’s been misnamed.

It’s a disruptive force in our culture and in our society.

And maybe we should rebrand it as artificial idiocy.

Neil, you were trying to get me killed in this closet.

I’ll send them back to the closet.

Let me just bring some summative remarks here and get your final reaction, Matt.

It seems to me if deepfakes become so good that no one can trust them, then that’s basically the end of the internet as any source of information.

And that has a positive side to it.

Because it means, for example, let’s look at QAnon.

It means QAnon won’t even believe that stuff that’s wrong that it thought was true because it doesn’t trust it.

Because the level of misinformation would be so total that people who were previously misinformed will be worried that they’d be misinformed.

I think that the amount of misinformation can go up.

I think there’s likely to always be an internet.

I think that it’s likely to always have valuable, factual, accurate, important information.

And the trick is going to be to find it.

And if you think about the stuff that I talked about, on the technical side, we need the ability to find it.

And on the social side, we need to have the desire to find it.

And I think both of those things will happen.

A positive outcome if QAnon can’t even believe the stuff that’s not true?

Isn’t that a positive outcome?

That might be a positive outcome in isolation, but if QAnon has that problem, then so do all the people who are trying to effect positive change.

Well, then that brings us to something that we haven’t touched upon, and that is our ability as a society where the majority of people are scientifically literate and trained as critical thinkers so that they know exactly where to place their trust.

That is really where the problem is.

Okay, now you put it back on me that I got to train everybody to think this way.

Come on, Neil.

Now you know that that’s your job.

I wanted to leave the blame on Matt at the end of the show.

Before we leave the show, Neil, Matt, you’re saying how AI is learning at quicker speeds and is going to get there sooner, et cetera, et cetera.

Will it not then solve this problem for us, therefore itself?

Thank you.

I don’t see a reason to expect that it will figure this problem out.

Chuck, I do agree with you.

I think that people don’t have to be trained as scientists, but I think they need to be trained as thinkers.

They need to be able to understand the information with which they’re presented and evaluate it relatively dispassionately.

And I think that education, broadly, education is going to become much more important as we need it more.

It’s also the case that as machines start doing the drudgery, education becomes more important because we’ve been freed from the drudgery.

We get to do the fun, hard stuff.

People need to understand what that is, how they can contribute and all that.

And I think that’s going to happen.

You mentioned education there, Matt.

I’m sorry to cut across you, but people are handing in their homework or their assignments created by AI.

Are we not heading down a path where generations in the future will not have any desire to self-educate?

I think the answer to that is no.

I think this thing where kids are handing in homework written by Bard or what have you, I think this is a relatively, I hope, this is a relatively temporary anomaly.

I agree.

I agree, too.

And we’ll sort that out.

Honestly, it’s sort-out-able.

It just means that the school system values your grades more than you value learning.

There you go.

And so you give them grades.

And so that will shift in a good way how the school systems place their value on what it is to teach you something.

And it may awaken in students a reckoning or a recognizing of that value themselves.

Yeah, exactly.

You know?

Yes.

Exactly.

So before AI takes us over and exterminates us, there are these good sides of it.

I think there are some short-term bumps.

I mean, the fact that there are going to be so many deepfakes is going to be a short-term bump.

But, you know, problems are always opportunities in disguise.

So the commitment to being able to recognize what’s true and what isn’t, the realization that there are facts, that’s something that I can see society embracing more than it has because it has to.

Because if you don’t believe in reality, you get overrun.

So maybe at the end of this, we come out better.

All right.

That’s a good point to end this on.

Thank you for finally bringing us around.

So there’s some hope.

Plus, we got to get Chuck and Gary out of the closet.

Please.

It’s been great to talk to you.

Matt, always good to have you on the show.

Thanks for coming around again.

And this will not be the last time we reach out to you.

Always fun.

Always good to have you, Chuck, Gary.

This has been StarTalk Special Edition.

Neil deGrasse Tyson here, your personal astrophysicist.

You know, a lot of you have reached out to me on social media and asked, when am I doing a comedy special?

Well, the time has come.

That’s right, this fall, I will be taping my first science comedy special here in New York City.

I’d love for you to be there because I need the most science literate audience possible.

And that is why we are making exclusive pre-sale tickets available to StarTalk listeners.

So go to chucknicecomic.com and use code STARTalk to get exclusive pre-sale tickets to the show.

That’s chucknicecomic.com and use code STARTalk and we’ll see you in New York at the show with the rest of the STARTalk gang.
