Aug. 23, 2023

Moravec’s Paradox and the Evolution of Surgical Robotics

In Season 4 of the Theory and Practice podcast, we’ve been investigating the powerful new world of AI applications. We’ve explored how to build safety guardrails into AI-driven healthcare, what the future holds for empathetic AI communication, and how humans can control computers with imperceptible movements of their hands.

 

For episode 4, we turn to surgical robots with the help of Dr. Catherine Mohr, President of the Intuitive Foundation, who played an integral role in developing the DaVinci surgical robot system. Before we explore the limits of robotic-assisted surgery, we discuss Moravec’s paradox: computers are good at things we find complicated, including complex calculations and handling large amounts of data, but not as good at perception and mobility tasks.

 

This context explains why Dr. Mohr does not think that haptics, the process of providing tactile feedback, is where the breakthrough lies: humans already have a very sophisticated tactile sense, and robotic instruments can feed back only a fraction of it. She posits that we do not need to recapitulate evolution by having robots mimic human physicality. Instead, she asks, “What is the best technology I can use to solve that problem?” She believes a promising future for surgical robotics is to augment the surgeon’s senses: finding the cellular edges of a cancerous tumor by lighting up a nest of cells at its margins, or helping the surgeon grasp a bleeding artery when the field is obscured by blood.

 

Further down the line, she believes we will be able to move away from extensive surgery, except in trauma, and toward maintenance surgery. For example, routinely doing “precision excision,” where tumors in their earliest form are detected and removed at the cellular level, and “precision installment,” adding regenerative cells before organs and joints are damaged irrevocably.

 

 

Transcript

Anthony  00:05

Hello, welcome to GV Theory and Practice. This series is exploring what it means to be human in the age of human-like AI. I'm Anthony Philippakis

 

Alex  00:15

And I’m Alex Wiltschko

 

Anthony  00:21

Hey, my friend, how you doing?

 

Alex  00:22

I'm doing great. How are you?

 

Anthony  00:24

Good, you know, I was thinking, let's try something different today. Instead of doing Hammer and Nails at the end, let's do it at the beginning. As a reminder to our listeners, Hammer and Nails is the time on the show when we talk about a problem, the nail, or a solution, the hammer, in honor of our in-person meet-up in Boston many moons ago.

 

Alex  00:41

Man, I still have fond memories from that. And I'm down to do it. I'm down to start with Hammer and Nail first. But why put it in front? Usually we do something that's inspired by the conversation that we've just heard.

 

Anthony  00:50

Well, today we're going to shift gears. Instead of talking about forms of AI that impact our understanding of ourselves as human beings, today we're going to talk about robotics. As you know, robots are simply machines that move. But in science fiction films, AI is often a humanoid robot, with or without intentions to dominate the world. We need to understand the difference between AI and robotics before we talk to our guest, Catherine Mohr, about surgical robotics.

 

Alex  01:18

Okay, so how about I start? Robotics is the nail.

 

Anthony  01:24

Go for it! What do you want to talk about?

 

Alex  01:25

So there's a concept that I think about a lot. And it comes from a futurist and philosopher named Hans Moravec. It's from a book that he wrote in the 80s, called “Mind Children”, and inside of it there's something called Moravec’s paradox; he doesn't call it this, it was named afterwards. And I think it's really important for the conversation that we're going to have.

 

Anthony 01:46

All right, tell me more about it. 

 

Alex  01:49

I think the thing to realize is, for all of the advances that we've had in artificial intelligence, and the incredible leaps and bounds that large language models and generative AI have made (and here we're talking in 2023), robots still can't fold a towel. We can walk across the room blindfolded and open the door, and robots are nowhere near being able to do a feat like that. So why is that the case? Moving seems so easy for us, but adding two 12-digit numbers, or multiplying them, or taking their logarithm, seems really hard. And yet computers can effortlessly do arithmetic that completely stumps us, while humans can do motor actions and manipulations of the physical world that are completely out of the realm of the possible for robots today. So why is that? I just want to read a couple of quotes from the book; I've got the book here. I had read about Moravec’s paradox but had never read it in the original. And I really like books, so any opportunity there is to get a book, I jump on it. So I want to read the first sentence from the book, and then I'm going to read the section that contains the paradox. He opens the book saying,

 I believe that robots with human intelligence will be common within 50 years. By comparison, the best of today's machines have minds more like those of insects than humans. Yet, this performance itself represents a giant leap forward in just a few decades. And I want to remind you, he's writing in 1988.

 

Anthony  03:20

Right, so 50 years. Yeah, we're not so bad, not so bad!

 

Alex  03:24

We're kind of in the realm right now of what's happening. So I don't know if we're on or off schedule, but I think we're in the future that he's imagining right now. Here's the quote that has been named the paradox.

It seemed to me in the early 1970s that some of the creators of successful reasoning programs suspected that the poor performance in the robotics work somehow reflected the intellectual abilities of those attracted to that side of the research (a little bit of politics there). Such intellectual snobbery is not unheard of, for instance, between theorists and experimentalists in physics. But as the number of demonstrations has mounted, it has become clear that it is comparatively easy to make computers exhibit adult-level performance in solving problems on intelligence tests or playing checkers, but difficult or impossible to give them the skills of a one year old when it comes to perception and mobility. We're nowhere near a one year old's ability to have free-ranging coordination and goal-directed movement.

 

Now, you've probably seen incredible videos from Boston Dynamics where robots are dancing, doing flips, doing all kinds of things. That's amazing progress. I've just got to be honest, and I'd love our listeners to chime in and educate us: it's not clear to me how goal-directed that movement is, or how pre-programmed it is. They've been making leaps and bounds, and what Boston Dynamics is putting out looks better than anything else out there, but we're always seeing them in the demo context. So when we judge the ability of these robots to have the coordination of a one year old or a five year old or a ten year old, we do need to see them in the wild. We are already seeing that for computer programs that have perception: those are what drive Google image search, and a lot of the production and consumer technology applications that you and I have in our lives today. But we're not really seeing robots with that kind of coordination in our daily lives. We're seeing Roombas, we're seeing vacuum cleaner robots, we're seeing pre-programmed robots in the factory, but we're not seeing free-ranging, coordinated, goal-directed robots in our daily lives.

 

So I just want to stop there. He says two things: perception and mobility. We have computer programs now that exceed the performance of a one year old on perception, but nowhere near the performance of a one year old on mobility. So already, the future that he's imagining is a little bit different from the present that we live in today. And he continues:

In hindsight, this dichotomy is not surprising. Since the first multi-celled animals appeared about a billion years ago, survival in the fierce competition over such limited resources as space, food, or mates has often been awarded to the animal that could most quickly produce a correct action from inconclusive perceptions. Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious Olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100,000 years old, and we have not yet mastered it. It's not all that intrinsically difficult; it just seems so when we do it.

 

Anthony 06:48

Wow!

 

Alex  06:49

That blows me away. I think that that really hits the nail on the head in the spirit of Hammer and Nail. 

 

Anthony  06:55

Yeah

 

Alex  06:56

I'm curious what your reaction is to that. 

 

Anthony  06:58

Totally. You know, it's interesting, I never thought about it that way. But this idea that evolution has been working on our sensorimotor functions much longer than it's been tinkering with our cognitive functions: in hindsight it seems kind of obvious, but at the same time I find it to be a really profound point. And I just heard a great talk recently, kind of related to this, from a good friend at The Broad, Evan Macosko, who's a neuroscientist and one of the people who created the field of single cell genomics. He's been working on this massive project to map all of the cell types in the mouse brain, and now they're starting to work on the human brain. He was explaining to me that there are something like 5,000 cell types, depending on how you count. And what's interesting is, when you think about the cortex, which is the part of our brain that's the most recent and the most expanded, and that is responsible for a lot of our higher cognitive functions, there actually aren't that many cell types. Whereas when you go into the basal ganglia, the cerebellum, and the brain stem itself, which are the more ancient parts of the brain that are much more responsible for coordination and things like that, that's where you find this incredible diversity of cell types. And again, it speaks to this idea that evolution has been tinkering with those parts of the brain much longer than it has with the cortex.

 

Alex  08:09

That makes complete sense. And actually, there's an unexpected link, I think, that we have here. So I spent my early 20s listening to those cell types in rat brains.

 

Anthony 08:18

Oh, wow. 

 

Alex 08:19

So I worked in a neuroscience lab at the University of Michigan for a guy named Professor Josh Berke, a really brilliant neuroscientist, who for one reason or the other trusted me enough to do neural implants in rats. We were studying action selection and action initiation in the basal ganglia. When you put a wire down into the rat's brain and you turn that electricity into sound, you can sit there in the room with the rat as it's behaving and deciding and learning, and you can hear these neurons. You can hear the sound that they make as the electrical signals are converted to sound, and they sound different. It's like being in a rainforest while you're hearing all these different neurons talk to each other. And it's really just this tiny, tiny slice of the brain that you have access to hearing. But you can hear thoughts being constructed from these really elemental pieces of thought, these neurons that are popping off to each other, firing at different rates. They've got different waveform shapes; they have different kinds of timbre, almost. It was a really profound experience, sitting in the dark for hours and hours listening to a brain. So that's been a formative part of my development, I guess, as a person, not just professionally.
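
What Alex describes, turning extracellular electrical signals into audible clicks, is a standard electrophysiology monitoring technique. As a purely illustrative sketch in Python (synthetic spike times, not the lab's actual pipeline), the conversion from a spike train to audio might look like this:

```python
import numpy as np

# Hedged sketch: each detected spike becomes a one-sample click, so a unit's
# firing rate becomes an audible rhythm. Spike times here are synthetic; in
# the lab they would come from thresholded electrode voltages.
fs = 44100                      # audio sample rate (Hz)
duration = 2.0                  # seconds of audio
audio = np.zeros(int(fs * duration))

# A Poisson-like spike train: exponential inter-spike intervals with a 50 ms
# mean, i.e. a unit firing at roughly 20 Hz.
spike_times = np.cumsum(np.random.exponential(0.05, size=60))

for t in spike_times:
    i = int(t * fs)
    if i < audio.size:
        audio[i] = 1.0          # one click per spike

# Writing `audio` to a WAV file (e.g. with the standard `wave` module) lets
# you literally listen to the unit, as Alex did in the lab.
print(int(audio.sum()), "clicks in", duration, "seconds")
```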

 

Anthony  09:27

Yeah, no, I totally understand. I mean, it's interesting to think through this paradox a bit, which is: we now seem to be on the cusp of computers that have human-like language capabilities. In some sense, which would we rather have: robots that are much more able to have human-like qualities of coordination and fine motor skills, or computers that are able to reason and do math problems and write computer programs? In some sense, maybe it's kind of a happy accident that it worked out this way, that the things that are hard for us are maybe the easiest for the computers. And so that's where we can have the greatest progress.

 

Alex  10:04

I think there's something really deep there, which is: natural selection has given us this ability of coordination, but it's not really clear that natural selection is driving our ability to do arithmetic or to form chains of logical or causal reasoning. Technology is picking up where natural selection left off. And I think there's something really profound there. Let's ponder on that and see how this season goes, and what we learn from our guests as well.

 

Anthony  10:30

Well, you know, one point about natural selection working on maybe cognitive abilities. Let me just say, I vote for the universe in which evolutionary pressure votes for math nerds. If math nerds had greater evolutionary fitness, my high school experience would have been a lot better. Let me tell you that.

Alex 10:49

You and me both. 

 

Anthony 10:54

We've been learning in this series about so many different developments in AI that could be incorporated into a surgical robot. So today, let's talk to someone who played an integral role in developing the DaVinci surgical robot system: doctor and engineer Catherine Mohr, President of the Intuitive Foundation. Dr. Mohr, welcome.

 

Mohr  11:12

Thank you. Very happy to be here.

 

Anthony  11:17

So tell us a bit more about how you came to become an engineer. As I understand you went to MIT as an undergrad and then Stanford for med school.

 

Mohr  11:25

Yes. So I shifted to engineering in my sophomore year at MIT, and really started to be excited about robotics. The field was really wide open at that time. There were a lot of industrial robots that were not very back-drivable, but Ken Salisbury's lab at the AI Lab was doing a lot of work in highly back-drivable robots, which meant that you could have a human being near the robot without their life being in danger. Most robots at that time, big industrial robots, were in cages, because being near them was very dangerous for a human. But this whole set of back-drivable robots was very interesting, and a lot of side-by-side tasks could be done with this new kind of back-drivable robot.

 

Anthony  12:20

So why did you go to med school?

 

Mohr  12:22

I discovered that, as I got further and further in my career, I was managing people who were managing people who were doing what I'd originally gotten into engineering for. I missed the deep creativity. And I realized I'd fallen in love with the operating room after observing some surgeries. As a tinkerer, the body is the ultimate machine to tinker with. And you could sort of marginally take it apart and put it back together again, and have it work better afterwards. So I ended up going to medical school in my 30s expressly to train to be a surgeon.

 

Alex  13:01

I was thinking about the visual image that you gave me where in the early days robots were in cages, and you couldn't be near them because they were so dangerous. And now you're putting a robot in an operating room? What do patients think about that?

 

Mohr  13:17

Patients have a wide variety of reactions, as you would imagine; patients are not a monolithic class of people.

 

Alex  13:25

Well, actually, before getting too far into that, what do people actually see? What does the robot actually look like? And then tell me a bit about people's attitudes towards it.

 

Mohr  13:33

So the robot, in its presence in the operating room, is essentially a single pillar that sits on the ground and can roll around, and a set of arms that, depending on the robot, come either from a single point or from multiple different setup arms. They are movable, like a boom arm or something, to bring an individual instrument to a place on the body. So it kind of looks like a set of arms that sort of envelop from that single point. There's also a tower, which has all of the computer processing in it, and a display so that people in the operating room can see what's going on inside the body, and can see any settings and information about the system. And then there's a console, and that's where the surgeon sits. The console, in a way, is like the biggest augmented reality wearable imaginable, but it's not very portable. So the surgeon sits down at that console. Their eyes are in a viewer, and they get an actual 3D image presented to them, not synthetic. There are two separate cameras in the body, and each of those cameras, which are close to one another, is presented to one eye, so you have true 3D binocular vision. And so if the patient is awake when they come into the operating room, which is more often than you might imagine, they see the setup with the robot sort of pushed away from the operating table. They climb on, they are induced with anesthesia, and they generally don't see the robot docked over the top of them, because we do that after they have been anesthetized; we make incisions and then dock the robot to those ports. So in children's hospitals, they put little faces on the individual arms, and they name them.

 

Alex  15:39

What are some names for the arms?

 

Mohr  15:41

I've seen them named after Muppets. So you know, this is Grover, and this is Elmo, just really to make it more friendly and less imposing because it is a big piece of equipment.

 

Anthony  15:53

And say a little bit about how a surgeon learns to do this procedure. I mean, how do you operate all four arms, for example?

 

Mohr  15:59

So the “intuitive” of Intuitive Surgical is that the instruments behave the way you expect them to behave. When you sit down at the console and you put your head in, you see in your view the instruments that have been brought in. When you hold on to the controllers, take control of those individual instruments, and start to make a movement with those controllers, you see the instrument through your viewer superimposed over where you feel your arm to be. The proprioception says, “My hand is in this place in space, and my eyes see that that instrument moves exactly like my hand in space.” And so it is very easy for someone to immediately take on that movement, because they are surrogate hands; they are essentially behaving exactly the way you expect your hands to behave. In fact, when my now 19 year old daughter was five years old, I sat her down at the console, and I said, okay, sweetie, you peek in through here, you squeeze these controllers, and you go. And that's all the instruction I gave her. She took control of the instruments, picked up a piece of paper, handed it back and forth between the two hands, and waved the little piece of paper. And, you know, at five years old, she thought mommy worked on robots that waved little pieces of paper. But later, when I was teasing surgeons about the learning curve, I would tell them that my five year old picked it up in no time at all.
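
To make that "surrogate hands" mapping concrete, here is a minimal, hypothetical sketch of master-to-instrument motion mapping: the instrument tip follows the hand displacement, rotated into the camera frame and optionally scaled down for precision. The scale factor and function names are assumptions for illustration, not Intuitive's actual control code.

```python
import numpy as np

MOTION_SCALE = 0.5  # hypothetical 2:1 scaling: 1 cm of hand travel -> 5 mm at the tip

def instrument_delta(hand_prev, hand_now, camera_rotation):
    """Map a master-controller displacement to an instrument-tip displacement.

    hand_prev, hand_now: 3-vectors of controller position (meters).
    camera_rotation: 3x3 rotation aligning the controller frame with the
    endoscope view, so "up" on the screen matches "up" in the hand.
    """
    delta = np.asarray(hand_now) - np.asarray(hand_prev)
    return MOTION_SCALE * camera_rotation @ delta

# Example: the hand moves 1 cm to the right; with an identity camera rotation
# the tip moves 5 mm to the right, same direction but smaller magnitude.
print(instrument_delta([0, 0, 0], [0.01, 0, 0], np.eye(3)))  # [0.005 0. 0.]
```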

 

Anthony  17:43

And what was the point of actually having four arms instead of just two?

 

Mohr  17:46

Surgery is about exposure and manipulation. In the current configuration, you'll have four arms, but one of them is holding the camera, so one is a motion that you can make with your eyes. Two are what you might think of as your primary manipulating instruments. And the third is often used for retraction. First rule of surgery: don't cut what you can't see. And so you want to be able to do what we call wrangling the bowel; the small intestine tends to obscure everything in the abdomen, so you'll want to sort of sweep it up out of the way and hold it out of the way. You'll have one of those instruments holding it out of the way while your other two are doing the manipulation.

 

Anthony  18:34

Understood. And so what's the scope of surgeries that could be done with this?

 

Mohr  18:39

The DaVinci is largely a soft tissue surgery machine. Experimentally, we've done everything from soft palate and trans-oral surgery on down. In general practice, people do trans-oral and inside-the-throat surgery, neck dissections, thoracic surgery, which includes getting into the mediastinum, that area between your two lungs, surgery on the lungs, and pretty much anything in the abdomen and pelvis. If it's in there and can be repaired, you can generally approach it and repair it, or excise it in the case of endometriosis or tumors. We have done a little bit of work (again, this is experimental) on vascular approaches in the leg, but there isn't anything currently with FDA clearance in that space. But if it's soft tissue, and you can approach it, and you can create a space in which you can work on it, you can work on it.

 

Anthony  19:38

And so what makes it better than just doing things the old-fashioned way?

 

Mohr  19:42

Smaller incisions are a major component of that. The incision serves no therapeutic purpose; it's just the way to get there. And so if you can absolutely minimize the incision, you always want to minimize the incision. We pay a price in laparoscopic surgery for minimizing that incision, in that we're using very long instruments that have counterintuitive motion to them: I want to go up, my hand goes up, and the tip of the instrument goes down. Everything is reversed through that fulcrum, and I lose my wrists. So it's very difficult to do something that I would want very high precision on, and it requires a lot of additional training. What the robot gives you is the small incisions of laparoscopy, higher magnification with binocular vision, and this wrist, with more precision and more dexterity handed back to you, so that it's like doing an open surgery, but through these minimally invasive incisions.
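
To make the fulcrum effect concrete, here is a minimal sketch, with made-up lengths, of why lateral hand motion is reversed and rescaled at the tip of a conventional laparoscopic instrument pivoting at the incision:

```python
def tip_motion(hand_motion_mm, outside_len_mm, inside_len_mm):
    """Lateral tip displacement for a given lateral hand displacement.

    The minus sign is the counterintuitive reversal ("my hand goes up, the
    tip goes down"); the length ratio is the lever arm about the fulcrum.
    """
    return -hand_motion_mm * (inside_len_mm / outside_len_mm)

# Example: the hand moves up 10 mm with 200 mm of shaft outside the body and
# 100 mm inside, so the tip moves DOWN 5 mm.
print(tip_motion(10, outside_len_mm=200, inside_len_mm=100))  # -5.0
```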

 

Alex  20:49

But the operating times are longer with robotic instruments. Why is that?

 

Mohr  20:54

They can be, and in some cases they are shorter. In a lot of cases, when you're looking at people measuring the amounts of time associated with a procedure, they are in their learning curve: they're first figuring out how to put the robot together, and setting up the robot takes them a little bit of extra time. When you look at someone who has a lot of experience, their times have generally come down dramatically. But time isn't really the right measure of the quality of a surgery. Really, what you want to be looking at is: did you achieve the patient's and the surgeon's objectives for that surgery out the other end? Were they able to do everything that they needed to do that first time round? Were they able to give back the functional capability that that patient needed? And were they able to achieve the objective, whether it's full resection of a cancer, or restoration of function where there was a blockage or something like that? So you really want to look at complication rates; you want to look at redo rates, how often somebody needs to go back and have another surgery; and just whether there was any functional compromise associated with the surgery afterwards.

 

Alex  22:19

So if not time, what are the differences in success on those measures for robotic assisted surgery?

 

Mohr  22:25

When you shift from open to robotics, you see a huge decrease in the number of complications. And you also will often see a much faster return to work for that person, and fewer compromises afterwards. They don't tend to think about their life as before surgery and after surgery; it's more of a maintenance thing, a speed bump instead of a brick wall that they've crashed into. And so just in terms of physical well-being following it, their recovery is just so much faster.

 

Anthony  23:07

So who are the people who don't like the DaVinci system? Is it the surgeons? Do they like it? Is it the hospital administrators? It sounds like the patients like it.

 

Mohr  23:15

So it really depends on who and what experience that person is bringing to it. We have surgeons that absolutely love it, and there are surgeons who've tried it and said, “This is too much fuss and bother for me, and I can get equivalent outcomes if I do this in another way.” So again, surgeons are not monolithic, but lots of surgeons feel that this is an indispensable tool, and they love it. Hospital administrators are a really interesting group. Because if they look at just the costs associated with what's inside the operating room, and they only look at that, they say, “Oh, I hate robots, robots are more expensive.” But if they step back and look at all of the downstream costs, the costs associated with extended lengths of stay, and the costs associated with complications and managing all of that, they start to love the robots that they have. But they have to be hospital administrators who understand the entire cost cycle associated with it, not just what happens inside the operating room.

 

Alex  24:30

So let's talk about those surgeons that love it, and let's think about making that experience even better. In this series, we talked to Thomas Reardon in episode three. He started a company called CTRL-labs, and they build EMG, or electromyography, bracelets. Effectively, what that allows you to do is to put a person's hands into virtual reality by reading the activity of their musculature directly. By understanding the muscle flexion and tension state through the electrical readouts on these little forearm bracelets, your hand is in virtual reality. You can then, of course, use that for visual feedback; you can also have haptic feedback. You can close the loop and provide a great deal more presence, and put more of yourself into the digital sphere. That seems like it links with the DaVinci robot, where you're putting controls from the surgeon into the real sphere, into the operating theater. So do you think that more use of haptics, of sensory signals from our musculature, in surgical robots would improve outcomes?
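
For readers curious how raw EMG becomes a control signal, here is a generic textbook-style sketch of the usual first step: rectify the signal and low-pass it into an "envelope" that tracks muscle effort. This is an illustrative assumption about the pipeline, not CTRL-labs' actual algorithm.

```python
import numpy as np

def emg_envelope(raw, window=50):
    """Full-wave rectification followed by a moving-average low-pass filter."""
    rectified = np.abs(raw)
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

fs = 1000                                    # hypothetical 1 kHz sample rate
t = np.arange(0, 1, 1 / fs)
# Synthetic EMG: noise whose amplitude jumps when the "muscle flexes" at 0.5 s.
raw = np.random.randn(t.size) * (0.2 + 0.8 * (t > 0.5))
env = emg_envelope(raw)
print(env[:100].mean() < env[-100:].mean())  # True: the envelope rises with effort
```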

 

Mohr  25:40

I think it's important for us to define haptics here in this context, because the answer is very different depending on the definition we're using. We think about haptics as feedback to the person so that they can feel or explore the environment in a physical way. This gets to some of the things we had been talking about in terms of how refined things are from an evolutionary point of view. We have a very, very rich tapestry of inputs that come to us when we're trying to explore a world. In the world of touch, we have sensors on our fingertips that sense shear, temperature, vibration, and pressure. We have sensors in our muscles that are giving us force feedback, so I get a combination of pressure on my fingertip and force in my muscle. And then proprioception: how bent or how straight is that joint, so that I can get an approximation of where in space I'm experiencing this combination of pressure, temperature, vibration, and shear, and where I am exerting this force. This is the haptic experience of us as humans manipulating the world around us. When we talk haptics in robotics, we reduce all of that to force. We take out most of what is truly our haptic experience of the world, and we say we're going to give people force feedback. So by haptics, we mean force feedback. And we as humans don't just experience the world in terms of force feedback. In terms of these measurements that we're making on our muscles, what the guy who's doing EMG is looking at is the actuation: the intended force that we are planning to put into an environment. Now you can build a model of that human, you can project that avatar into a virtual world, and you can use this information to say how they would be moving around in that world. Pull all of this into the world of surgical robotics. We, as surgeons in an open surgical environment, use all kinds of elements of what I described as the haptic tableau. When hands are in the body, when you're feeling roughness, when you're feeling something that feels fibrotic, you are feeling the vibration associated with that. But all of the measurements that we can make on the body using robotic instruments, and can project back to the hands, are force only. We sort of filter all of this additional information out. And so when you're talking about interacting with the human body through a robot with haptics, we have no way of measuring most of what we as humans experience as haptic feedback, and we have no way of presenting most of it to the user. We're reduced to force and force interactions, which have limited utility.
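
The distinction Dr. Mohr draws can be summarized in a small sketch, using her channel names simply as labels rather than a formal taxonomy: the human haptic tableau is many parallel channels, while robotic haptics as usually implemented collapses everything to force.

```python
# Channels named in Dr. Mohr's description of human touch.
HUMAN_HAPTIC_CHANNELS = {
    "fingertip": ["shear", "temperature", "vibration", "pressure"],
    "muscle": ["force"],
    "joint": ["proprioception"],
}

# What robotic "haptics" typically feeds back to the master controller.
ROBOT_FEEDBACK_CHANNELS = {"master_controller": ["force"]}

human = {c for group in HUMAN_HAPTIC_CHANNELS.values() for c in group}
robot = {c for group in ROBOT_FEEDBACK_CHANNELS.values() for c in group}
print(sorted(human - robot))  # everything a force-only channel filters out
```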

 

Alex  29:12

So it sounds like, on both the input and output side, we can't use a computer to pull out all of the richness of our skin sensations. It's not just one channel of information on our skin; as you say, there are dozens of different channels of information. And we can't actuate, we can't replay, those sensory experiences. But what about vision? Computer vision has gotten really good, particularly with the advent of deep learning, at identifying objects in images or identifying regions in images, and potentially that could help guide surgeons' activities. And we can certainly record the visual world, and we can certainly play it back, I think easily, in the context of the robotic console that you're describing. So what's your take on that?

 

Mohr  29:57

I think that's the right way to look at it. When we try to duplicate the human's experience, I think we're both limiting ourselves and making it too hard. We have millions of years of evolution creating all of these sensors that we have, because they're very useful for us to move around the world. To try to duplicate exactly what we see, feel, hear, and taste, and present it back to the surgeon, I think shows a profound lack of imagination. We feel a tumor because this hand was the tool that we had available to us to try to feel that tumor. If we can make that tumor light up in a wavelength that our eyes can't see, and then use a camera to present that in a range of vision that the human can see, I would argue that finding the boundaries of a tumor with touch is incredibly crude compared to: is it glowing, or is it not glowing? So rather than trying to completely duplicate what the human does, ask what the actual problem is that I'm trying to solve: showing the surgeon where the boundaries of the tumor are, and whether there's a little nest of cells that got left behind that's still cancerous but that they'd never be able to feel, because it would be below the threshold of what they could feel. I think we need to ask not whether we are recapitulating evolution in terms of the capabilities that we're putting in, but whether we are taking a problem-solving approach and saying: what is the best technology I can use to solve that problem? And then, how do I translate it into something that my poor, limited human (with a limited range of vision, a limited range of touch, low discrimination at very small scales, very constrained to the scale that that human lives at) can perceive? How do I take all of these things that I can sense outside of those scales and translate them into something that is perceivable by the human? And therefore I see the robot, the vision systems, and the machine learning algorithms that we could run on top of this data as the ability to extract much more pertinent information in order to support the clinical decision making of the clinician.
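
One way to picture the "glowing tumor" idea is a false-color overlay: a fluorescence channel captured at a wavelength our eyes can't see is painted onto the visible endoscope image. The sketch below is illustrative only; the array shapes, threshold, and function name are assumptions, not any real imaging product's API.

```python
import numpy as np

def highlight_fluorescence(rgb, nir, threshold=0.3):
    """Tint pixels green where the near-infrared signal exceeds a threshold.

    rgb: (H, W, 3) float image in [0, 1] from the visible-light camera.
    nir: (H, W) float image in [0, 1] from the fluorescence camera.
    """
    out = rgb.copy()
    mask = nir > threshold
    # Brighten the green channel in proportion to fluorescence intensity.
    out[mask, 1] = np.clip(out[mask, 1] + nir[mask], 0.0, 1.0)
    return out

# Toy frames just to show the shapes involved.
rgb = np.random.rand(4, 4, 3)
nir = np.random.rand(4, 4)
print(highlight_fluorescence(rgb, nir).shape)  # (4, 4, 3)
```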

 

Alex  32:31

How do you see that playing out live? Is there a vision where you have a sense, moment by moment, of how well the surgery is going, like on a scale from one to 100? Is that possible?

 

Mohr  32:44

You generally want to think in terms of giving people information that's actionable. Giving them a vague sense that things are not going well could contribute to an impending sense of doom, but it doesn't give them something that they're then going to do or not do. I can see situations in which there is something clearly actionable: too much blood loss, you need to pay attention to that. But generally, those kinds of algorithms are masters of stating the obvious; whenever there is something that isn't going very well, something that keeps popping up and intruding on your notice and taking you away can be more distracting than helpful. That said, here is one of the ways in which I have thought about managing hemorrhage. Bleeding happens: humans have blood pressure, blood is flowing through veins and arteries, and you are cutting into tissue, so there will be blood during surgery. Occasionally you can lose control of a bleeding artery or a bleeding vein, and then the problem is that it becomes very obscured by the blood that's welling up. What you want to be able to do is reach into the pool of blood and grab the thing that is bleeding, but you can't see it anymore. That would be a beautiful augmentation of the senses: your AI noticed where that nick happened, knows where in space that actively bleeding element is even though you can no longer see it through the pool of blood, and can guide you to exactly the position, orientation, and grasp you need to grab and contain that bleeding. So instead of someone in the back seat helpfully shouting instructions while you're trying to drive, it's saying: I have a really specific thing that I want to know about right now; you have a history, you've been observing it, tell me where that is. We tend to think about AI as keeping us from doing things, like creating no-fly zones, which has a whole set of problems in and of itself when the AI is overriding clinical judgment on the part of the surgeon about where that surgeon wants to go, and we have an obsession with metrics that are an aggregation of a whole bunch of different things, none of which you can then do anything about. I think we need to think about AI-enabled tools, where I have the ability to do something and the AI helps me do it, as opposed to telling me that in the past I didn't do something.
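
A minimal sketch of the bookkeeping behind that augmentation, assuming a detector for the bleeding point already exists (in practice it would be a vision model): remember the last position at which the source was visible, and keep reporting it once blood obscures the view.

```python
class BleedTracker:
    """Remember where a bleeding source was last seen, even once occluded."""

    def __init__(self):
        self.last_seen = None  # 3-D position of the bleeding point, if known

    def update(self, detection):
        """detection: 3-D position when the source is visible, else None."""
        if detection is not None:
            self.last_seen = detection

    def guidance(self):
        if self.last_seen is None:
            return "no bleeding source on record"
        return f"bleeding source last seen at {self.last_seen}"

tracker = BleedTracker()
tracker.update((0.12, -0.03, 0.08))  # the nick is visible for a moment
tracker.update(None)                 # now obscured by pooling blood
print(tracker.guidance())            # still points at the remembered position
```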

 

Anthony  35:44

So right now, a lot of what you've been talking about is kind of the copilot model, where the AI helps the surgeon. Let's go one step further: what about a world in which the robot acts autonomously, and there's not a human in the loop?

 

Mohr  35:57

We do have an existence proof that every step of the surgery can be done using the robot, once you've docked the robot. There's human manipulation that happens on the outside as people are placing the trocars and starting the insufflation, which is the inflating of the body with carbon dioxide. So we have an existence proof that a natural intelligence, in the form of a surgeon sitting at the console, recording all of those inputs and passing them through the robot, can do a surgery from the beginning to the end. All we need to do is make a generalized artificial intelligence that we can sit down at the console, having taught it all of these things, and it would be able to execute all of the steps. We know all the physical steps can be done. I'm not working on the artificial generalized intelligence; when someone's got that, we can have it doing surgery. I'll go back to the question of why you would want that. If it's for greater access, so that there's more capability of doing surgery, I would say that rather than trying to make it entirely autonomous, we should try to make it help the person: give them advice, tell them what the next step is. Keep the things that are easy for a human to do central to the human, and the things that are easy for an AI to do, like knowing what the next step in the procedure is and keeping track of a whole bunch of things, central to the AI. That melding of the surgeon and the AI is something that we should be thinking about as probably an even better hybrid than an AI doing the surgery entirely on its own.

 

Anthony  37:50

Understood. Now, going back to this question of access, I think that's a really important thread. Right now, there's a lot of attention on the cost of health care, and a lot of these technologies are actually deployed in the US, where there's much more resourcing. But when you think about the developing world, what's the role for these technologies there? And how do you feel about expanding access?

 

Mohr  38:12

When we're talking about expanding access in low resource environments, we really need to look at not whether we're doing icing on the cake, but whether we're really fundamentally addressing the needs. Depending on how you measure it, there are between 10 and 17 million people a year who die in low resource environments because a surgeon wasn't available to give them a life-saving surgery that could have prevented their death. This is an enormous number. 10 million people a year die from cancer, and 3 million people a year die from HIV, TB, and malaria combined. So that lack of access to surgery, those missing surgical procedures we would be doing if we had more people and more resources out there, dwarfs cancer and HIV, TB, and malaria. That doesn't mean we should stop treating HIV, TB, and malaria; it means we should stop ignoring the problem of not having enough trained surgeons and the hospitals they work in. The answer to that is not to put robots in district hospitals all over, this idea that we're never going to be able to train enough people and so we should jump over training people and put robots out there. Instead, we really need to take what we have learned from the training of surgeons and apply it to training surgeons in low resource environments, and not treat the number of people who are in need of surgery and who don't have access to surgeons as a problem, but treat them as the resource of people who could be trained to do this. There's a great Confucius quote: “If you plan for a year, plant rice. If you plan for ten years, plant trees. And if you plan for a hundred years, educate children.” And my focus in these areas is not: can we develop a technology that does this for people? But: can we use these technologies, and this understanding, to train people better? So I would say laparoscopy and open surgery are the first things we should be training people in; technology can come in later. We should be using machine learning and AI to coach somebody through open surgery and through laparoscopic surgery, to be able to give people that kind of training all over. AI should lead robotics in these spaces, and it will get out there first in the form of the best coach you've ever had, teaching you how to do these procedures.

 

Anthony  41:12

You know, I'd love to hear your thoughts about what you would most like to see happen next in the field of robotic surgery.

 

Mohr  41:19

In the field of robotic surgery, there's this idea that we are augmenting the surgeon's senses: giving people changes in scale, changes in their ability to interpret what they're seeing and what they're feeling, and changes in how they're manipulating things. I would love to see us get to the kind of scale where we're detecting cancers long before they become physically debilitating, and where we would be able to go in, either with a straight poke and a needle, or with a flexible catheter that gets somewhere, or even with something tethered that we allow to wander around in the body, and take care of whatever that problematic set of cells is. It becomes less surgery, which we think of as large corrections, and more maintenance: we can detect things early, we can head them off, and we can contribute to just a general sense of wellness. Whether that's patching chondrocytes in the knee so that a joint can repair itself, or going in and re-expanding a spinal disc, or augmenting a lot of these technologies that we're getting in terms of regenerative cells, in terms of being able to grow new body parts, in terms of being able to detect a cancer at very early stages and do precision excision or precision installment of things that will allow growth and repair. Outside of trauma, it's getting away from big surgery, and using robotics to augment all of these other technologies to shift healthcare into just maintenance.

 

Anthony  43:26

I think that's a wonderful place to end. Thank you so much for sharing your insights and wisdom with us today Dr. Mohr.

 

Mohr  43:31

Happy to. Thank you very much for having me.

 

Anthony  43:37

Each episode, we are learning that simply replicating ourselves as humans in the AI and robotic worlds is not always the answer. Next week, we'll be going into another new world of AI, computers that smell, with none other than my good friend, Dr. Alex Wiltschko, as the special guest. Alex, could you take us around your labs and show us Osmo? It's gonna be awesome.

 

Alex  43:56

Absolutely. And nobody calls me Dr. Alex Wiltschko. It feels very special. Thank you for including the honorific. I'm really looking forward to showing you what we're up to. And I'm looking forward to having you smell some of the work that we're doing.

 

Anthony  44:08

I can't wait. It's gonna rock. 

 

And finally, we'd love to know what you think of this series. You can write to us at theoryandpractice@gv.com (Theory and Practice, one word) or tweet @GVT. This is a GV podcast and a Blanchard House production. Our science producer was Hilary Guite, executive producer Duncan Barber, with music by DALO.

 

I'm Anthony Philippakis. 

 

Alex 44:33

And I’m Alex Wiltschko

 

Anthony 44:35

And this is Theory and Practice.