
Cosmic Queries: Minds and Machines

Image Credit: metamorworks/iStock.

About This Episode

“We must consider this question, can machines think?” Alan Turing asked that question many years ago, but these days that question is taking on a whole new dimension. On this episode of StarTalk Radio, Neil deGrasse Tyson, comic co-host Chuck Nice, and neuroscientist Gary Marcus answer fan-submitted Cosmic Queries on the intersection of minds and machines. You’ll learn what distinguishes the human mind from the minds of other mammals. Explore the mysteries of memory and the future of memory storage. You’ll also learn how humans have “context addressable memory” whereas computers have “location addressable memory.” Find out more about human augmentation and hear why Neil is hesitant to combine human biology with technology via implants. Gary explains some of the pitfalls inherent in deep learning. We also ponder whether sentient artificial intelligence should be subject to the same laws and rights as humans. All that, plus we discuss ELIZA, the Matrix, driverless cars, creating toys controlled by thoughts, and whether AI could be used to assist in the mental health field as virtual therapists.

NOTE: All-Access subscribers can watch or listen to this entire episode commercial-free here: Cosmic Queries: Minds & Machines.

Transcript

Welcome to StarTalk, your place in the universe where science and pop culture collide. StarTalk begins right now. This is StarTalk. I'm your host, Neil deGrasse Tyson, your personal astrophysicist, and today is a Cosmic Queries edition of StarTalk. We've solicited your questions on an interesting subject, queries of minds and machines. Oh yeah, something I can't do myself. Had to bring in help for that. We'll get to that in just a moment. Chuck Nice, you're helping me out here. That's right, how are you, buddy? All right, good, good. Have you been practicing how to pronounce names? No, I have not, which is why they will be just as awful as they always are. And quite frankly, I believe that people send in crazy names just to hear me butcher them. I'm totally comfortable with that. Keep telling yourself that, that you're doing this on purpose. So we've got mind and machines. I mean, this is a very intriguing topic that touches everything, like morality and politics and culture, business, all of this. We've got a guy who's like in the middle of that and he's sitting here in the middle of us. Morality and business in one sentence. So Gary Marcus, Gary, this is not your first rodeo here on StarTalk. It's my third time here, thank you. Your third time, welcome. You're a professor at NYU of psychology and neural science. So you are an expert on the intersection of mind and machine, psychology and technology. That's right, my training is in natural intelligence and my work in recent years is mostly in artificial intelligence. And so that is kind of minds and machines and going back and forth between the two. Wow. We ever see a day where a machine will have a mind? Depends what you mean by a mind. We can dig into that if you'd like. Oh, well then in that case, what is a mind? Yeah, yeah. Clearly I do not have a mind. Apparently, you got that question wrong. Let's start a little further back. 
So what is it about a human mind that most distinguishes it from the mind of other mammals? Just so I can get a sense of what it is to be human. Just start there. I think our language is vastly more sophisticated. I think we can talk about and think about not just what's here and now, but what might be, what could have been, what happened before, what will happen eventually. So abstraction, and not just the abstraction of democracy, but also the abstraction of what would happen if the United States were no longer a democracy. So things that we hope are so-called counterfactual, but we don't know for sure, given contemporary politics. So some time ago, I interviewed Ray Kurzweil, and you were our guest in studio, academic guest in response to that show. And he had commented that the next evolution of the human brain, if it's not biological, then it would be mechanical, would be extending what the frontal lobe had done for us. Because as I understand it, the frontal lobe is responsible for this abstract thinking that animals that don't have developed frontal lobes are incapable of attaining. If that's the case, what thoughts are we not having by not having some other lobe in front of the frontal lobe? A fine question. It's sort of like the Rumsfeld known knowns and unknown unknowns. Sort of a question about unknown unknowns. The first thing I would say is that we're really restricted by our memories and the capacity limits on them. Computers have something called location addressable memory. That means everything goes in some sort of master map. And that means like- It's kinda true with the human brain. Humans use something called context addressable memory, where we don't know exactly where things are. Even like the best brain scientist in the world is not gonna be able to tell me exactly where your memory of the Pink Panther movie is. Because they're not there yet. 
Sometimes the memories might not be there, but for the memories you have, they're not very well organized. No, no, no, no. He means maybe the scientists are not there yet. Just because they can't figure it out doesn't mean it's not true. It's not real. For Isaac Newton, the planets would look pretty mysterious going forward and backwards up in the sky. Granted- He writes down an equation and takes away the mystery. Granted that there are lots of mysteries and unknown unknowns and all that, but if you look mechanically at how people's memories work, we are, for example, subject to a phenomenon you might call blurring together of memory. So if you park every day in the same lot- You give that an official term, blurring of memory? You don't have a more scientific term than that? We could call it that. One of the scientific terms is- You get blurry memory today? All of my memories are blurry. You get some glasses for your memory. One of the technical terms is interference. There's proactive interference. That feels a little better, okay. You want the technical terms. So we're very subject to interference in a way that you wouldn't be if you had location-addressable memory. So computers don't get confused between 12 similar memories. They can, for example, use buffers. So if you store, sorry, if you park your car in the same lot every day and then you go out on the 10th day, you won't be like, did I park here or there? Because you blurred together, my technical term again, those memories. There's interference between them. Whereas a computer could have a last entry buffer and it will just forget the first nine. There's a process called garbage collection. Get rid of all of those. You just have the piece of information that you're looking for. Our memories are not very reliable. This is why we can't, for example, give eyewitness testimony that's trustworthy and we can't have time date stamps the way that you can have on a video. 
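Gary's contrast between location-addressable memory (no interference between similar entries) and a last-entry buffer that "garbage-collects" older ones can be sketched in a few lines of Python. The parking-lot scenario and the names `parking_log` and `last_spot` are purely illustrative, not from any real system:

```python
# Location-addressable memory: every item lives at an exact address (the day),
# so ten similar entries never interfere with one another.
parking_log = {}                            # address (day) -> parking spot
for day in range(1, 11):
    parking_log[day] = f"spot-{day % 3}"    # similar, repeated spots
print(parking_log[10])                      # day 10 retrieved exactly: spot-1

# A last-entry buffer: keep only the newest memory and discard
# ("garbage-collect") the rest, so there is nothing to blur together.
last_spot = None
for day in range(1, 11):
    last_spot = f"spot-{day % 3}"           # overwrite; old entries discarded
print(last_spot)                            # only the most recent spot survives
```

A human, by contrast, has neither exact addresses nor a clean overwrite, which is exactly where the "did I park here or there?" interference comes from.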
So there are lots of ways in which our memory is really not as precise as computer memory. Can an experience bias a memory during the making of the memory itself? That's a really hard question to answer. I don't understand the question. What's that? That went above my head. Really? Try that again. Oh my God, let me ask that again. It sounded deep and I just don't, let me hear it, let me hear it. All right, so what I'm asking is if, for instance, we're, I don't know, hanging out in the kitchen and we're having a conversation, and for me, that conversation is like, wow, I was talking to Gary and Neil and I learned all this stuff and it's a great conversation, right? And I'm able, because of my experience, to recall things and to, it's a better experience for me. Could that same experience that we're all sharing and you two are like, Chuck's a dumbass and I hated this conversation, could that then mar your actual memory of the information, the surroundings, how you recall it so that we recall the same experience differently because we're biased by the way we felt about the experience while it was happening? There's kind of two processes there. One we would call. Isn't just the answer, yes. Encoding. And the other we would call retrieval. So one is called re-encoding. And then retrieval. So we know there's lots of distortions made at retrieval time. So you can show people a video of somebody going past a yield sign and then ask them a question. How fast was the car going when it passed through the stoplight? And they'll just be like, oh, I guess it was a stoplight. And so they'll distort the memory by having some new information on top of the old information. Encoding is like how you put that memory down in the first place. And it's less clear. We may have bias even in how we record that information at the time, but it's a little bit harder to do the experiments. We know that at retrieval time, there's lots of distortion. In fact, we reconstruct a lot of our memory.
So computers, like a videotape, you're just pulling out something that is stored. There's no question about it. A lot of what we do is we try to figure out, well, what could it have been like? So if I asked you, we did that episode with Kurzweil and what did I say about Kurzweil? You might sit there and try to remember. Well, at the end, I said nice things about Kurzweil, but I was nicer than Gary. And so what did Gary say? You can go back and try to reconstruct it. Or your viewers can go watch the podcast of it. They'll have a different experience watching the podcast of it, as opposed to you figuring out from your memory. Your memory is not a video recording. And some of your biases. I'm trained to not trust what I don't have explicit memory of. I mean, I have some training to edit that away from any statement, right? So in other words, and I agree with you, there are people who, particularly under pressure to have to remember something, they'll stitch together bits and pieces from things that didn't happen or happened that resembled it and come up with some other reality and that becomes the reality, right? So if I kind of don't remember something, I don't try to buff it up to try to fit in. But you think you don't and you might be better than the average person. No, no, I'm not saying I don't. I'm saying I'm trained to do that. Train yourself to avoid it. There's a process called reconsolidation that humans seem to use or biological creatures in general seem to use by which when you access a memory, that memory actually becomes loose and flexible and then you put it back down and you don't put it back exactly the way you found it. And this is just a fact about how biological creatures use their memory. Again, it's very different from what a computer does. And to go back to the earlier question, if you said, how would I soup up a human brain, I would start with a memory system and make it more reliable. 
So my evidence for whether I fail or succeed at this, and I think we can all test in this way, how well do you remember a scene of a film you saw once 10 years ago, 20 years ago, 30 years ago? And- If it doesn't have the words Keyser Söze, I don't remember. So I'm just saying, that's an example of something you experienced. No, you were not in the scene, but you observed the scene. So think of it as part of your life experience. And there are plenty of people who say, I don't remember who was acting, or no, I forgot the scene. But some people are candid about what they remember, what they don't. I have really acute memory of movie scenes, which tells me that I should also have corresponding acute memory of events of my life. And by the way, it's not that I remember everything. I'm not one of those. But if I think I remember it, chances are I remembered it accurately. There's plenty of stuff I say I have no clue. I was not paying attention, I was ignoring it. Plenty of times I will tell you that. But if I know something, it's pretty much there. So a few years ago, I wrote a piece for Wired, which was called Total Recall. And it was about a woman named Jill Price, who seemed to have perfect memory. But it turned out it was mostly for autobiographical facts. So it was things about her own life. Compartmentalized memory. Compartmentalized. A lot of it, I think, was essentially, I don't know how to say this politely, was narcissism. She kind of practiced her own memories the way I practiced baseball statistics when I was a kid. So when I was a kid, I was known as the walking encyclopedia of baseball. And it's not because I had some phenomenal memory, it's because I kept reading the Baltimore Orioles Information Guide. And so I just knew all of the stats that were in there, because I read it so many times. And she spent a lot of time rehearsing her own life. But when I asked her when the Magna Carta was signed, she said, what, do I look like I'm 500 years old?
Which was way off, because it wasn't autobiographical, and so she didn't know about it. So people can choose, like if you care about movies, I heard you offstage talking about how you like to use movies as a scaffold to teach people about science. So the movies become important to you, you spend a lot of time getting it right. Only if it's a communal knowledge about. My mentor, Steve Pinker, does that a lot with Woody Allen things in his books. He'll use funny Woody Allen skits. You could pick a professor at Harvard. Professor at Harvard, he was at MIT when he was my PhD advisor. And in his books, he uses a lot of pop culture also. I'm not as funny and can't pull it off, but Pinker pulls it off very well. Those books are on the bestseller list. That's why his book, one of mine made it once for a few weeks, but anyway. But his reliable. Which one of these books, I have you here, The Future of the Brain? Did that make it? Or you edited that? I edited that. Guitar Zero was my book that was. Guitar Zero. On the bestseller list. The New Musician and the Science of Learning, very nice. And a failed video game. And a failed video, it's actually a story about, I am awesome at Guitar Zero. What? The game is Guitar Hero. And then the title was a joke. That was the, oh, the title was a joke on the game. Because I started learning about music after failing and then succeeding at the game. So your joke actually. My joke is working. Cuts to my personal history, but that's another story for the day. So I love what you're talking about, how you store memory. And it leads me to wonder, maybe you have some insight into this. If we did have perfect memory storage and recall, would that make us less creative? People have asked that question. We might be anchored to reality, and creativity comes out of a non-reality, no matter what. The science, art, music. It's something that did not exist before, maybe in threads.
And you put it together into something that no one thought of before, and you are not recalling this. So we can complain about how we store and retrieve memory, but maybe that's the basic essence of what it is to be human. I've heard that argument before. I don't buy it, but I think it's open. So I think a lot of what passes for creativity is simply taking two elements from different places and combining them. You can do that if you have perfect memory, you can do that if you have lousy memory. On the other hand, it is the case that we do things like free association, where we just kind of jump from topic to topic, and sometimes it hits pretty well, and that can count as creativity too. So I don't know, there was a... The second creativity was more what I was describing. If you take two perfectly remembered things and put them together, yes, you can come up with something new, but you're still anchored to the reality of the perfect memory. And if you have imperfect memory, so in there are like unicorns and that you think you saw and whatever, and out comes a whole thing that is not derived from anything real that happened to you. Could be. I mean, there was a study in Science a few years ago where they took... The journal Science. The journal Science, probably 10 or 15 years ago now. Love that you two knew that. Study in Science the other year. The capital S. Right, the journal Science. He read my mind and saw them. I'm just trying to... Not just general science. In the journal Science. All of science, no, there's a journal Science. The prestigious journal entitled Science. The American counterpart to the journal Nature in the UK. In which they compared Madison Avenue trainees or something like that with a computer program for advertising. And people just made up things like, I don't know, like a drink that was fast. They would put tennis shoes and soda together or whatever. And the computer could do it just as well as people.
And there, people had the weird memory that we do. Machines didn't, the machines did just fine. So it partly depends on what the task is. That would totally explain Japanese commercials. Because they are crazy. That comes from someplace else. Yeah, exactly. It's just like... See, Japanese everything on television. That's so true, yeah. It's like The Simpsons actually make fun of them. They're like Homer looks like a character and is called Mr. Sparkle. And they actually see the commercial and it just makes no sense at all. Because they're looking at it through American eyes. Cool. So what's the future of this? Where is this gonna go? I mean, we will invent... First, are you a cyborg? Let me just... I am part Apple Watch and part human being. And mostly I rely on my external memory for my phone. My phone is really a game changer. I used to have to remember phone numbers. I used to have to remember all kinds of facts. And your iPhone. You can tell from the watch. You can infer that I'm a fanboy, I guess. The phone extends my cognitive reach greatly. Eventually it might be on board. I worry about Bluetooth hackery and stuff like that. I mean, you put a phone outside of my body and hack it, I can probably still hack it in the other sense of the word hack. If you have something inside my head, cybercrime is gonna happen. I walk by you and make you think... No, no, no, it'll first happen with advertising. Absolutely. I'll make you, yeah, you want a Shake Shack Burger right now. Exactly, right. In this moment. And I'm a vegan. Where does that come from? Where does that come from? How does that even happen? Now, the other side of that, the other side of that is we're suggestible anyway. You just said Shake Shack and I want one. You don't need a brain implant to do it. Ain't that some shit. Right, so why, what is the urge to merge? You like that rhyme? Urge to merge. What is the urge to merge? Got a need for speed and an urge to merge. I am insane for an implant in my brain.
There's like a little too many syllables in there. Oh, come on. Tough crowd here. It's like cut me a break. The third one in never gets it right. I was like, because the second one creates the trend and then you got to stay with the trend and now the pressure is on you. Once, twice, three times. So, what is the urge to merge it into your physiology, biology, when it's perfectly fine sitting in your palm? There's two things. It's within my arm's reach. Why do I have to, why do I need a USB port into my neck? I think some of it's an avatar. They had USB. I think some of it's efficiency and some of it's a false quest for immortality. So, efficiency is, if I don't have to type it, I don't have to say it, it's faster. And if I'm paraplegic and I can't type it, I can't say it. Clearly, in those cases. So, there are some cases where efficiency wins hands down and if I don't have to sit here typing and I can search for those facts that I wanted to give you, faster, that's- Just by thinking. Just by thinking. That would be great. And I think it will happen eventually. So, I have the choice between a neurosurgeon cutting into my brain and sticking chips in it. Using the phone. Hitting my iPhone with my thumb? I'm thumbing, I got the thumb thing. I understand that you got the thumbing, but the analogy I would make is to all kinds of things that people do in sports, where they want an edge. And people are gonna want their kids to get in, I mean, already do want to get their kids into Harvard, and if they think, I can get my kid into Harvard with this implant, if they think it's safe enough, they might do it, just like they'll give their kids steroids so that they can get an athletic scholarship. So, it's a way- It's an edge. Human augmentation. There you go. That's what it is. We're talking about human augmentation. Whoa. All right, let's bring this first segment to a close. And when we come back, it will be Cosmic Queries. Yes. As promised.
As promised, we will get to Cosmic Queries, you watching, possibly listening to StarTalk Radio. We're back on StarTalk. Professor Marcus here from NYU New York University, which does a lot of cool stuff lately, NYU. From the actors, they've got a whole math department. What's it called, the whole? The Courant. The Courant Institute, yeah. Because if your math is not a department, it's an institute. No, they say it's good philosophers there. You got a lot of good stuff going on at NYU, so it's great to have you in our backyard. So thanks for making time for us. You're one of the world's experts on thinking about... It's funny you get to say that about a professor. They don't have to do anything to be famous. They just have to think about it. The world's expert for thinking about this intersection of technology and mind, and we solicited questions on this very subject from our fan base and all the usual cast of sources, Instagram, Facebook, Twitter, what else? Pretty much anywhere that there is an internet, people can send us a question. They can send us questions. So Chuck, what do you have for us? All right, our first question is actually from a name that I can pronounce perfectly, Chuck Nice, sitting here on the couch, who would like to know. Are you taking first question? I am taking first question. Are you a Patreon member? I am indeed a Patreon member. Okay, well there you go, all right, okay. I would like to know, since we know how we download information to computers, how exactly are we downloading memories to our brain? From our brain to a machine? Well no, period, us as biological organisms that have this brain function in the hippocampus, how does that process actually take place? How are we downloading memories? I guess it depends what you mean by downloading. Wait, wait, so here's your brain. People talk about putting your brain in a machine. No, I'm not talking about that. So here's the thing. Everyday ordinary experiences, which we see and we record. 
Right. And then they're downloaded to a place in our brain or uploaded if you want. Okay, okay. If you want to get technical or upload it to the place in our brain, our hippocampus. What is that process? Because there's really two versions of the question. I think we're both thinking that. One is like the ordinary course of events, forget about modern technology. How do I make a memory at all? And then the other is like, am I ever gonna be able to have a way where I can type something in my phone and kind of like airdrop it, if you know the Apple technology, directly into my brain? So like the famous scene. Or somebody else's. Or somebody else's. The famous scene in The Matrix where she like downloads the skill for flying a helicopter. I love that scene. Isn't that an awesome scene? So that's like the second version of the question. The first is like an ordinary experience. If I want to learn to fly a helicopter, I have to practice a lot. And every trial is changing something in my hippocampus, in my prefrontal cortex. The honest answer is we, as neuroscientists, don't yet understand that process. We have looked at some simpler organisms. So the Aplysia is the most famous one. And you can pluck at its gill and eventually it learns, hey, someone's being annoying. I won't pull my gill in every time. And we know something about how the synapses in the nervous system of the Aplysia change over many, many trials. And so that's a kind of gradual learning. But most of the learning that's interesting to us isn't about I tried something 50 million trials. I mean, there's some things like shooting a basketball is many, many trials. Practice makes perfect. Practice makes perfect. My guitar book was about learning to play guitar and learning those things. But there's also like, I saw my friend Gary and he taught me the new word of Chimera. And like, you don't need a million trials to do that. You're like, that's a cool thing. And it kind of rattles around your brain.
We don't know exactly how the brain does that. We don't even know exactly where it does it. So this very quick memory, which is most of what you're talking about, there are a few things we would like to know. We'd like to know where it is. We'd like to know what the biological process is. We'd like to know what the representational scheme is, which is like, is it sort of like a bitmap for a picture? Is it like a set of words in a sentence? Do we use the ASCII code? What is the encoding scheme by which that information is stored? Unfortunately, we mostly don't know. There are some places we know a little bit. So we know something, for example, about motor memories. And so we can read to some extent, if somebody is paralyzed and we stick in implants in their brain, we can guess where they want to move their hands. We're partly reading their memories a little bit. The implants are for you to read what's happening in their brain. Read what's happening in their brain. But we don't actually have a general understanding of memory. It's one of the most basic things. But also like the memory of an Aplysia is pretty different from the memory of a Chuck Nice, right? Hope so. You might not. And we don't want to do the same kind of experiments. Most people don't get too squeamish if you chop open the Aplysia, but probably you don't want to be chopped open and you have a say in it. And your wife might get mad at me if I did it and there might be litigation. She's the only one that's a fan of it. It might not be a lot of lawsuits, but there'd be some. It's a lot of paperwork. And so we, I'm being facetious, of course, but we as scientists don't do the same kinds of experiments on people. So we do things like MRI brain scans, but they're very coarse. MRI, the pixels in an image, or they're called voxels, because they're three-dimensional, has like 70,000 neurons in it. And a memory might be a matter of like 100 neurons in those 70,000 neurons being configured the right way.
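The resolution gap Gary describes can be put in back-of-the-envelope terms, using the counts quoted in the conversation (70,000 neurons per fMRI voxel, a memory trace on the order of 100 neurons):

```python
# Back-of-the-envelope version of the fMRI resolution problem.
neurons_per_voxel = 70_000      # rough neuron count one voxel spans (quoted above)
neurons_per_memory = 100        # rough scale of a single memory trace (quoted above)

# Fraction of a voxel's signal that one memory's neurons could contribute:
fraction = neurons_per_memory / neurons_per_voxel
print(f"{fraction:.4%}")        # ~0.14% -- far below what the scan can resolve
```

In other words, the signal from the neurons that matter is swamped roughly 700-to-1 by the rest of the voxel, which is why "you definitely need a higher voxel machine."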
I wrote an article in the. You need a higher resolution, a higher voxel resolution machine. You definitely need a higher voxel machine. And there has been some work. So in people that have epilepsy, sometimes you have to cut open their brain in order to do surgery. And there are experiments in which scientists have stuck electrodes in the brains of those people and found some pretty interesting things, like they have found neurons that only respond when you see Oprah Winfrey or hear her name. So they're kind of multimodal. Oprah neurons. Oprah neurons. I was gonna say, there are about 50 million women in this country who have that experience. The Oprah neuron. The Oprah neuron. Is it the Jennifer Aniston neuron that was identified? But these are kind of like outputs of a process. So we don't know the circuitry that causes this neuron to actually activate. We just know at the end of some long chain of events, it fires there. There are a bunch of memories that are involved in that that help you know what she looks like, what the name looks like, but we haven't decoded that stuff yet. I guess you're not in a position to say, to tell me where in the brain is your concept of self. No, I mean, I can tell you things like your prefrontal cortex is involved. If I blow away your prefrontal cortex, you're not gonna have much of a concept of self. But there's the old joke, you might know, about the frog and its legs. The scientists are trying to figure out where hearing is in the frog, and they operationalize it by clapping, and the frog jumps. So they cut away its front left leg. They clap, the frog still jumps. So they say hearing isn't in the front left leg and they cut away the front right leg and the frog still jumps. They cut away the back left leg, it still jumps. And then when they cut away the back right leg, the frog doesn't jump anymore. And so they conclude, ah, hearing must be in the back right leg of the frog. This is a pretty shoddy inference.
And unfortunately, a lot of the inferences that we might make about memory and self and so forth are kind of similar. We lesion some part of the brain or we study someone that has a lesion. We don't. Fervently. We don't actually cause lesions too often in humans except to cure epilepsy or something like that. And then something doesn't work anymore, but that doesn't mean it's the only piece involved. It's like. I would say you're stopping the epilepsy. You're not curing it. I would use a different word. Well, fine. Point well taken. I will cut your brain open, cut through some lesion to cure you. You know, there's a long sordid history of that sort of thing. Going back to trepanning when they cut holes in people's skulls. What's the one where they ice pick your thing? Yeah, that's a trepanning. Yeah, okay. So, quick follow up on this. Ahead. A quick, it might be a naive question. In the scene in The Matrix where Trinity gets uploaded the instructions for flying the helicopter, wouldn't she have also needed muscle memory for that rather than just knowledge on how to fly the helicopter? Muscle memory is in your brain. It's not in your muscles. It's a misnomer. And some of it's in your spinal cord, if you want to get technical about it. Fine, so if I can read a book on kung fu and I can know every move, but if I have not performed it, are you implying that you can put performance memory in my brain? Yeah, but it's a really astute and clever question you're asking. So why is it that when you read a book, you don't get the muscle memory for free? So why when I read about guitar and music theory and all the things that you needed to do to play and strumming and read all these books about strumming, could I still not do it very well? And I still had to go practicing and I got at least a little bit better. I think that's a kind of question about which processes are linked in which ways into the brain. It's not a question of whether that stuff is ultimately in the brain.
And we can do brain scans and show the different parts of the brain change as you learn to strum. So it's an access question. So not all parts of the brain are equally accessible to one another. And so even though you can read about it, you don't have a circuit that is responsible. You think about the environment of adaptation, exactly. So you could read the knowledge of the information, but then separately upload the experiential. In principle, you ought to be able to do that. And someday, I won't be here to collect or not collect on the bet, but someday, maybe it's 100 years from now, we will, I think, be able to do that. In principle, there's no reason why the experiential part of it can't be encoded, can't be wired in there using nanobots that change the circuitry of your brain. My book, The Future of the Brain, talks about some of this stuff. There's no reason in principle why you can't do that, but right now, we don't know how to read the code. It's like if a computer dropped from above, it would take a while. There's no other computers, and you had voltmeters and stuff like that. You could sit there and try to figure it out, but it would take a long time before you could say, so that's how Microsoft Word works. There's a lot of complication there. All right, next question is from CatPirates from Twitter, and at CatPirates, since we're on this subject, will it one day be possible at some point to use computers to store and access our memory? So this is just the exact opposite of what we were talking about at my question. I'm gonna offload it, offload. Offload, so can we take what's up here and offload it onto some storage device? I think the answer eventually would be yes. We're stuck in the same place if we don't really know the code yet. There's also a separate question I didn't talk about which is invasiveness. So right now we can use an fMRI, basically a set of magnets, to read stuff but not with enough resolution.
To get the resolution, we have no way of doing it now short of putting stuff in the brain and then even now that doesn't really work. What's that I saw people, they were reconstructing a photograph of somebody out of their brain thoughts? Yeah, so there are studies like that that are actually not it. You know about that, Chuck? I'm asking because I saw it weeks ago. Weeks ago. One of the guys. It was fuzzy, of course, but it's like, whoa, that's a person. That's incredible. It's fuzzy, there's some tricks involved. So you need to have right now, it'll be solved eventually, as a kind of crutch to make these systems work better, these decoding systems, you have to kind of give them a hint. It's almost like animal, mineral or vegetable. So you tell them it's an animal and then given this information, you kind of guess, I'm making a little bit cruder, but you guess what kind of animal it is. It doesn't, the systems we have now can't sort of take an arbitrary picture and reconstruct it, but if you narrow things down, then the system- You help it out. You help it out with what's called a prior and the systems can get somewhere. Eventually, you'll need less and less support because the resolution will get better and better and we'll be able to do things less and less dangerously, there'll be less worry about infections and brains and stuff like that. You will be able to do it. I wanna pause, by the way, and say, I love the Star Trek episode of Black Mirror. I probably, a lot of people saw it. There's something totally wrong with it, which is there, you get the complete set of memories from somebody's DNA and DNA doesn't actually carry memories. It carries the evolutionary memory. It does not carry, well, actually, there's an interesting question there, which is DNA might actually be a substrate for memory, but it would be different. We might use, or strands of RNA. Could store memory in it. You could, that's right, it's a digital thing. 
Maybe even biology does in ways that we don't know, but you don't store it in what we call the germline DNA that they sequence in that show in order to reconstruct the memory. So just taking somebody's hair is not gonna allow you to break into their brain and decide were they looking at the porn or not. Like that is not gonna be recorded in their DNA. Well, thank God for that. Oh, Chuck, I got this hair of yours. You know the answer for you, we don't need that. You naughty, Chuck, you. I remember you, polyamorous roboticist. That's right, polyamorous roboticist, I love it. All right. Here we go. Alex Lander wants to know this. How close are we to toys that can be remotely controlled by thoughts transmitted as instructions via radio? So I did see where there are some things that we can control with our eyes, but that's really just tracking movements that become the joystick. Is there any transmission otherwise that we might be able to do? Funny you mentioned joystick, because I was gonna say if all you want is a joystick, you could probably do that now. There may even be some Kickstarter to do this, where you put an EEG skull cap on people and you can train up low resolution, so you get a few bits of information. I was at Comic Con, they were selling these hats that claim to read some EEG of your brain and there were things that would spin or something. And if you're in love, it would spin one way, and if you hate. So it looked kind of gimmicky, and it wasn't that expensive, so it could be just a fun party trinket. But it's sort of party technology now, and probably not even that reliable. So there's an open question about how much you can get from a skull cap that you wear outside your head. So you can get some bits of information, so forward and backwards or things like that. You're not gonna get subtlety, like I want the toy to go under the chair, around that other chair, up the guitar, next to the wall and back. 
That's too complicated a thought for the skull caps, maybe ever, but not too complicated in principle. We might need different interfaces. If you get into the brain in other ways. If you get into the brain in other ways, eventually, then yes. So you would be, this is basically electromagnetic signals at this point, because the sensors will be reading out of your brain, and now that gets converted to, we know how to communicate across space, but you need some conversion from the electromagnetic signals of your brain to some transmitter at that point. It all, again, comes down to resolution. So right now, we can do that in a kind of low-res kind of way. So you get a limited bit of information. The resolution will get better, and there's a decoding problem. What is the code by which we read this? We don't know how much actually kind of makes it outside the skull. That's an open question, but some of it does, and we'll get better at it. We gotta take a break, and when we come back, we'll finish this up, which I hate to do, because I want this to go on forever. When we come back, Chuck, I want to ask the first question in that segment. All right. Because it's my turn. I got Chuck Nice, I got Gary Marcus, it's Neil Tyson, we'll be right back. We're back on a really cool episode of StarTalk. We're talking about the intersection of mind and machine, psychology and technology. Chuck Nice, helping me out here, as usual. Professor Gary Marcus, thanks for coming back to StarTalk. We last had you on with Ray Kurzweil. Great program, thanks for your contributions there. A question for you. Reading up on your profile, you're a critic of deep learning. And deep learning is a major sort of research angle in Google and in IBM. And so what's your problem with deep learning? This is where a machine sort of teaches itself based on just a few parameters and gets better and better at it on a level where it's better than anything we could have trained it to do. 
Well, it is for some things, but not all. There's an old logical fallacy, the fallacy of composition. You see something is true for X and you think it's true for everything. We do that in astrophysics all the time. It's always a problem. Deep learning is really good at recognizing objects, but not perfect at that. I'll tell you about that in a second. It's very good at speech recognition, so it allows your Siri or whatever to transcribe your sentences. But it's not very good at what some people call artificial general intelligence. So artificial general intelligence means machines, AGI, machines that could answer kind of any question, not just a particular narrow set of questions. So we have seen great advance in, for example, playing Go. But Go is something where you can get as much data as you want for free. It's a Chinese strategy board game. That's right. And DeepMind, a division of Google, has done fantastically well on that. But it's not clear how that translates to real world problems ranging from driverless cars, which seem like they're okay now, but they don't seem like they're maybe getting to where they're safe enough to actually use, to general natural language understanding. They just have to be safer than humans? Well, even safer than humans is pretty hard. So the problem with deep learning and the problem with driverless cars is what we call outlier cases. So deep learning is kind of like a glorified version of memorization. If you've seen some version close to this before, then you can interpolate this is like that. But if you see something that's unusual, the systems don't work that well. So there've been a couple of accidents with Tesla. One of them. In self-drive mode. In self-driving mode. One of them in self-driving mode, a Tesla ran into a semi-truck that was white on a sunny day that was crossing a highway. Well, that's an outlier case. It's unusual. 
If your paradigm is basically to memorize what you've seen before, you get into something unusual, something bad happens. Another case, where we suspect driverless mode was engaged, was just a month or two ago. A Tesla at 65 miles an hour on a highway ran into a stopped fire truck. A human probably would not make that mistake. Now, this is the red fire truck, red. I believe it was a red fire truck. Most of them are. They pretty much only come in two colors, which is bright red and bright yellow. I think it was a red one, but we'll have to have your researchers verify that. It's candy apple red. And you're like, how could that happen? Yes, how could that happen? Well, the way I think about it is deep learning is kind of like the part of your brain that recognizes textures and patterns, but not the part of your brain that reasons about things. So you don't have an experience probably of a fire truck parked on the side of a highway. So you can't look that up in your memorized experience, but you do have part of your brain that can be like, that's a very large object. It's not moving. That's probably not a good thing. I think I will move out of the way or slow down. And it's hard to build something like a driverless car system that can deal with the full variety of human experience. We're near my home in Greenwich Village. I ride a unicycle around here. I really don't want driverless cars. I do, and I do not want driverless cars in Manhattan because they're not gonna have a big data set on unicycles. That's the problem with deep learning: if they don't have a big data set about a particular thing, they don't know what to do with it. And so the term deep learning is actually like a great rhetorical move, like calling something the death tax. Deep learning refers to a particular thing, about how many layers are in a neural network and so on. But not how abstract it is. Okay, so there's an interesting ethical question. 
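Gary's "glorified memorization" point can be sketched with a toy example. The snippet below is not how any real perception stack works; it is a hypothetical 1-nearest-neighbor "classifier" with made-up features and labels, just to show how pure interpolation over memorized examples confidently misreads an outlier like a bright, stopped truck:

```python
from math import dist

# Made-up training memory: (brightness, motion) -> scene label.
# The examples only cover "typical" scenes: dark moving vehicles
# and bright static sky. Both features and labels are invented
# for illustration.
train = [
    ((0.2, 0.8), "vehicle ahead"),  # dark, moving
    ((0.3, 0.9), "vehicle ahead"),
    ((0.9, 0.0), "open sky"),       # bright, static
    ((0.85, 0.05), "open sky"),
]

def classify(features):
    """Return the label of the closest memorized example."""
    return min(train, key=lambda ex: dist(ex[0], features))[1]

# A white truck broadside on a sunny day: bright AND static, unlike
# anything memorized. The nearest neighbor is a sky example, so the
# system interpolates to the wrong answer.
print(classify((0.9, 0.05)))  # -> open sky
```

The failure is baked into the method: with no reasoning about "large object, not moving," anything bright and static lands in the "open sky" bucket, which is the outlier problem Gary describes.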
If deep learning for self-driving cars removes the possibility of death, for most cases that any human would end up killing themselves or someone else, like not seeing someone cross the road because they're putting on makeup or reading or texting, and- Or you're unicycling and juggling. If, no way. So if it- I never do that while I'm driving. If it prevents 100% of those cases. But causes its own problems. But the cases that we would have avoided- Right. A few of those slipped through. But nonetheless, we go from 30,000 deaths a year to 1,000 deaths a year. But every one of those 1,000 deaths could have been avoided by a human. If that guy wasn't juggling on a unicycle. I mean, for me, that's not a hard ethical question. I mean, I think then we should go with the machines. The statistical realities, we're not even close to that yet. And the political realities, they're questions of deep importance. So there is no question in my mind, even though I'm a skeptic about deep learning and so forth, that it is possible to build a driverless car that's safer than a human being. But politically speaking, there are going to be people that die in kind of objectionable ways. Nobody was too worried about the guy who died in the Tesla because he was a rich guy, he was watching Harry Potter and people thought he was spoiled and they kind of let it go. But at some point, there will be a driverless car that kills a bunch of children. And then there'll be a congressional investigation and so forth. And at that point, your question is really important because it might be that in fact, statistically we're just much better off, but they can't sell it to their constituents or think they can't sell it to their constituents and they could cut the whole thing off. And so I worry about that a lot. 
But if what Neil is saying is the case, your outliers notwithstanding, then the answer would be if I'm the company, I'm going to create a pool of other companies where we just take a crapload of money and dump it into this pool that becomes the insurance policy for when that one-in-a-thousand person dies. Well, I mean, there's an economic question about whose liability it is. And there are places like, well, maybe I can't say on the record, but there are big car companies who are thinking about maybe they can self-insure themselves. So there's that side, but there's also the political and legal side of it. So even if there's enough money to pay the families of the victims, nobody wants to be in that category of family of victim. And the people whose families are killed in these very peculiar ways that you're talking about are gonna be very upset. And they're gonna say, we should ban the driverless cars, even if the overall statistics say, actually we would save 20,000 lives a year. The drunk teenagers on prom night who didn't die are not a news story. That's right. Right. Right. That the self-driving car protected. Go quick to AI there, because we don't have much time. All right, let's do that. Any questions? Should, this is Nicodemus Arcelone, who says this, or Arcelone says, should sentient artificial intelligence be subject to the same laws and hold the same rights as humans? Oh my goodness. I mean, I can certainly see that argument. The problem I would say there is we have no idea how to tell whether something is sentient. So it's one thing to be able to say, can a machine behave in all of these kinds of circumstances in ways that are reasonable or whatever. We don't have a measure. I mean, it's like for consciousness, we don't have a consciousness meter. So there's this whole scientific field of trying to figure out consciousness. We've got an argument about philosophy. I'm gonna make it real simple for you, Gary. 
Machine, okay, you've programmed it, blah, blah, blah. And then you say, I'm going to unplug you. And the machine says, please, man, don't kill me, man. Please don't unplug me. Please, Gary. It's not persuasive because. Because. Because. Let's hear him out, let's hear him out, let's hear him out. It's not persuasive for the same reason the Turing test is not persuasive. You can can responses. So it's not that hard for someone to build a robot and have a sensor to see if somebody's unplugging it and say that, just like Siri has this line about Blade Runner being a story about two intelligent assistants or whatever. Some comedian sits there and writes it. You have an assistant who's been contracted to write jokes of this sort. All I, you're reminding me of this comic I saw. I think I've told you about this once. Probably a New Yorker comic. There are two dolphins swimming together and one says to the other, of the humans on the side, "They face each other and make noises, but there's no evidence that they're actually communicating." I love it. That's very funny. Says the bigger-brained mammal. Bigger-brained mammal. Give me another one, quick. Okay, here we go. Ben Sadaj says this, do you think it would be possible for AI to be able to identify and assist with mental health, sort of like a virtual therapist? And I'll go a step further. Do you think that it might be able to identify and then help self-correct someone who maybe is going off their meds or about to go into a psychotic break? The answer is clearly yes. I'm actually talking to a guy named Roger Gould about working on a project with him about digital therapy. There are a number of other companies that are starting to work with this. Actually, early in the history of AI was something called ELIZA, which was not very clever. It had a lot of canned responses. I think I'm older than, I think I'm older. I remember ELIZA when it first came out. Then you are older than me because it came out a little before I was born. 
ELIZA actually uses some of the same kinds of programming techniques as Siri, and it can get a little ways: you mention your wife and it can say, well, tell me more about your family or your mother or whatever. That's what ELIZA would do. Ask me a question. I'm ELIZA, ask me a question. Any question. How are you feeling today, Neil? Why do you ask that? It's called Rogerian therapy, where you redirect everything. Why do you feel so positive about Rogerian therapy? Screw you, ELIZA. No, so you would say something like, my mother, I don't think my mother likes me, and they say, why don't you think your mother likes you? So it would take the sentence, analyze the sentence, the verbs and the nouns, figure out a sentence to send back to you, and it would be like an active listener; if you weren't really thinking about the fact that it's a computer, you'd think it was a sensitive psychologist. Some people actually got fooled by the original ELIZA. It won't fool you for an hour, but it can fool you for five or 10 minutes. There are some advantages to digital therapy. For example, with a real therapist, you have to wait, and usually you feel this acute sense of pain, something like that, emotional pain, and you want to talk to somebody right away, and then you have to wait two weeks or a month or whatever. And digital therapists, in principle, could be there right then, right there, say, what's your problem, and let's try to figure out how to help you. Not only therapists, but also someone who could be a friend, your friend, a consoler. In China, there's something called XiaoIce. Not too many people know about it here. It's made by Microsoft, and millions of people talk to XiaoIce every day, and it's partly a quasi-therapeutic, friendship kind of relationship. But really, it's a government information gathering technique. If it's China, let's be honest. Theoretically, it's not, but I'm not going to touch that part. 
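The trick Gary describes, taking the sentence apart and sending a question back, can be sketched in a few lines. These keyword rules are hypothetical stand-ins in the spirit of ELIZA's Rogerian script, not Weizenbaum's actual 1966 keyword list, but they show the same canned reflection:

```python
import re

# Hypothetical rules: a regex to spot a keyword phrase, and a
# template that reflects the captured noun back as a question.
RULES = [
    (r"i don't think my (\w+) likes me", "Why don't you think your {0} likes you?"),
    (r"my (\w+)", "Tell me more about your {0}."),
    (r"i feel (\w+)", "Why do you feel {0}?"),
]

def respond(sentence):
    """Reflect the user's sentence back as a question, ELIZA-style."""
    text = sentence.lower()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # canned fallback when nothing matches

print(respond("I don't think my mother likes me"))
# -> Why don't you think your mother likes you?
```

Even this toy version hints at why it "won't fool you for an hour": the fallback line repeats, and nothing behind the templates understands the nouns being echoed.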
But Tay, which they made over here, Microsoft made over here, and it became very offensive, is actually somewhat similar technology, but it's sort of trained on a different data set. The other problem with deep learning is it's super sensitive to the data set, and it's hard to get it to kind of step away from the immediate data. So if you have a lot of Donald Trump Twitter bots talking to Tay, it's going to take Tay in a particular direction, and you don't have a sort of abstract enough understanding of what's going on. Chuck, let's see if we can get two more questions in here, but we'd like to go in speed mode. All right, speed mode, here we go. Brandon Christopher from Facebook wants to know this. Is there a concern that we are reaching a tipping point where people psychologically cannot handle the advancement in technology? People are pretty good at adapting to new technologies, so no. That is surely no one under 20 asked that question. They have adapted. Yeah, they have adapted. Okay, next one, Lauren Ploesi says this. What ethical guidelines should be established before these new technologies are developed in order to prevent abuses? Now, you want to talk about AI. That's a doggone good question. What are we doing to make sure that we don't? Who abuses who? AI abuse us or we abuse AI? Or we abuse them, like, yeah, well. I think this is a really hard question. I'll put in a plug for an organization I'm on the board of called Ada.ai, which is partly trying to kind of. Ada, as in Greek letter, Ada? As in the first female computer programmer, first computer programmer was female, Ada Lovelace. Oh, Ada, Ada, yeah, yeah. And it's Ada-ai. And they're trying to, in part, be a kind of consumer organization to help represent consumers' rights in all of this. So AI is being driven by the big companies. 
One of the big problems is you have these ethics panels where the people don't know as much about what it is they want to make ethical laws about as the people who are making the thing itself. You want to make sure you have people maybe with not so much self-interest, but who have knowledge. The other problem is the machines are just so dumb. So I had a New Yorker column about what would happen if- Ethics in the machines? Well, I had a New Yorker article about what would happen if a driverless car went out of control, hit a school bus full of children. Everybody picked it up. Barack Obama picked it up. It really spread pretty widely. And it's a really interesting- The article you wrote in The New Yorker. Yeah, in November, I think, of 2012. And a lot of people started thinking about this. There are conferences where people talk about it now. And the reality is, okay, but right now, they're hitting fire trucks on the side of the road. That's not an ethical problem. That's a perceptual problem. We have to solve those first before we can get to some of the ethical problems. But they are important. I think we gotta wrap this. Gary, thanks for being on, dude. Always a pleasure being back. We gotta get you back. Let's do this all the time. Once a month, we need a brain- A brain machine episode. I'm down. Chuck, always good to have you here. Always good to be here. You've been watching, possibly listening, to StarTalk, a Cosmic Queries edition on the brain and machines. As always, I bid you to keep looking up.