Part 1 with Philip Koopman and William H. Widen
This is the first of a two-part episode with Philip Koopman, Associate Professor at Carnegie Mellon University, and William H. Widen, Professor at the University of Miami School of Law.
The two have been providing testimony to state legislatures about the safety of autonomous vehicles and authoring a number of articles. In this episode they walk us through the liability aspects of self-driving cars and explain who the driver is.
Related Links
- https://assets.ctfassets.net/e6t5diu0txbw/54ngcIlGK4EZnUapYvAjyf/7a5b30a670350cc1d85c9d07ca282b0c/Comparison_of_Waymo_Rider_Only_Crash_Data_to_Human_Benchmarks_at_7_1_Million_Miles_arxiv.pdf
- https://www.jurist.org/commentary/2023/08/widen-koopman-automated-vehicles-criminal-law/
- https://www.wsj.com/business/autos/tesla-to-recall-more-than-two-million-vehicles-for-software-fix-a5fb3626
- https://static.nhtsa.gov/odi/rcl/2024/RCLRPT-24V051-2023.PDF
- https://static.nhtsa.gov/odi/rcl/2024/RCLRPT-24V064-8870.PDF
- https://safeautonomy.blogspot.com/2023/05/a-liability-approach-for-automated.html
Subscribe using your favorite podcast service:
Transcript
note: this is a machine-generated transcript and may not be completely accurate. It is provided for convenience and should not be used for attribution.
Anthony: You are listening to There Auto Be A Law, the Center for Auto Safety Podcast with Executive Director Michael Brooks, Chief Engineer Fred Perkins, and hosted by me, Anthony Cimino. For over 50 years, the Center for Auto Safety has worked to make cars safer.
Hey everyone, thanks. This week we’ve got two special guests. Coming back to the show is Phil Koopman. He is an associate professor at Carnegie Mellon, and essentially the expert in autonomous vehicle safety, I think it’s fair to say. And William Widen, who’s a professor of law at the University of Miami School of Law.
So welcome, guys. Hey, thanks for having us back. Alright, so I wanna start off. Here’s the scenario: let’s pretend I get into a robotaxi and I’m going down the street, going on my merry way, and there’s Michael out on the sidewalk, and my robotaxi hits Michael. Who’s at fault? Am I, as the passenger? Does Michael get to sue me?
Does he get to sue the robotaxi company, or do I just tell the robotaxi to keep going?
Philip Koopman: First, is this the Piggly Wiggly parking lot, or are we out on the...
Fred: Thank you, Phil. Thank you.
Philip Koopman: Or out on the public road. And I’m making jokes ’cause this is hypothetical. It’s not a real person.
Anthony: Yes. This is not a real scenario, but it gets to the question though, the main reason why I think we wanna talk to you guys today is who’s at fault? Who’s the responsible party inside an autonomous vehicle?
Philip Koopman: And the serious answer is what state are you in?
Anthony: Ooh. Right now I guess I’m gonna say California.
’cause that’s really the only place these are on the road. Or Arizona. Let’s say California.
Philip Koopman: There’s trucks on the road in Texas. There’s trucks on the road in Arkansas, I believe, and maybe Oklahoma. Okay.
Anthony: Which state would you like me to be in?
Philip Koopman: One of those.
I think you throw the computer in jail.
William H. Widen: I think that was Oklahoma. And in Washington State there’s proposed legislation that, as drafted, would say that it’s the ADS, the automated driving system, that’s the driver. And it’s hard to know for sure what a court would do with that, but it looks like it’s putting liability on a cyber-physical system, which is not a person and has no assets.
Philip Koopman: Wow. Okay. So we get right into it, right? Yeah. I think the shorter answer is it’s complicated, so it’s good we have some time to hash this out.
Anthony: When that happens right now, so in the case of the Cruise vehicle dragging somebody: Cruise didn’t cause the initial accident, but the secondary accident of dragging somebody underneath their car, they did.
So who’s the responsible party in that case? Does anyone know?
William H. Widen: If you took what Cruise has said in conversations and what was admitted in the case of Nilsson v. GM, Cruise as a company takes the position that their vehicles owe other road users a duty of care. And so if the Cruise vehicle drags a woman, and a human driver who had done that would’ve been liable for aggravating her injuries, then Cruise would, in theory, be liable for those injuries, because they would’ve been caused by what we would call a negligent computer driver. And in the Cruise case, I think it’s important to note, I think it was the case there, that the vehicle was unoccupied at the time and it was owned by Cruise.
Okay? Because you raise the issue: if you’re in your own vehicle and it hit someone, you could have a theory where the owner is vicariously liable for any accidents caused by the vehicle after they hit the button which says, go autonomous. That would be the worst possible outcome for promoting safety, because the person who owns the vehicle has no control whatsoever over the safety of the system.
They’re not able to influence it. The only party that can influence the safety of the system is really the manufacturer or the developer of the ADS.
Philip Koopman: So I wanna go back to the details of Cruise, ’cause it’s complicated. But Bill, are you telling us that the mechanic who pressed the go button that morning is potentially on the hook?
William H. Widen: Potentially, although the way people have been drafting the laws in some states would suggest that it’s the owner who has responsibility to maintain insurance. And so if they’ve maintained insurance, and if you had specified what it took to make a claim under the insurance, which the laws don’t do, then you could have a scenario where the insurance would pay.
But then the question is if the accident exceeded the policy limit, who’s responsible then?
Anthony: Wow. Okay.
Philip Koopman: Okay, before we move on: by the time this airs, I will have a preprint paper with a very detailed recounting of the Cruise incident, based on the official report they made public. It’s a couple hundred pages.
The high-level narrative is one thing, and if you look at the details, you come up with an answer that’s a little bit different. I’m not gonna try and do that paper here, but the relevant thing is that the Cruise robotaxi accelerated towards a pedestrian in a crosswalk on the other side of the intersection before any impact happened.
And we’re talking about an eight-mile-an-hour speed increase. So it was moderately aggressively accelerating towards this pedestrian in its travel lane, who was in a marked crosswalk. Yes, the pedestrian was crossing against the light, and the Cruise prediction was,
she’ll be gone by the time I get there, so it’s cool. But if you look at the relevant California rule of the road, it says that when there’s a pedestrian in a crosswalk in front of you, you are required to slow down and/or otherwise reduce risk. It didn’t do that. Now, the driver of the impact vehicle, the Nissan, was cited for violating that rule of the road.
There’s no mention in the report of whether the Cruise AV was cited, but you can’t actually ticket a robotaxi in California, so who knows what happened there. I’ll hand this off to Bill in a second. In addition to the dragging, where the report admits that a human driver would not have dragged the pedestrian, because they would’ve asked where the pedestrian went before they moved the car,
it’s very clearly in the report that there’s a second issue: you had this car apparently violating a rule of the road by accelerating right into a pedestrian, because the car thought the pedestrian would be out of the way by the time the car got there. It worked for the Cruise. It didn’t work for the Nissan,
’cause another thing buried in the report: the reason that poor woman stopped in front of the Nissan was not that she stopped for no reason. There was opposing traffic and she was trapped in the intersection, trapped in the crosswalk. So the Nissan driver could say, I was accelerating towards her ’cause I thought she’d finish crossing the street, and she couldn’t, ’cause she was blocked.
What if she had run back towards the curb? The Cruise would’ve left itself without enough time to stop, because it was accelerating into a dicey situation. And you have to ask, gee, is that responsible driving? And the apparent fact (I’m not a jury, so I don’t know how that would turn out), the probability that it was violating a California traffic rule
by accelerating towards a pedestrian in a crosswalk, might have made it a worse outcome if the pedestrian had run back. This is counterfactual: if the pedestrian had run back and the Cruise had hit her as she was trying to flee back to the curb. So there are actually two different things going on with that crash, both of which are issues for the Cruise vehicle’s behavior, not just the Nissan driver’s.
William H. Widen: I would say one thing that’s interesting about compliance with traffic laws. First, if you’re violating a traffic law that was designed to prevent harm, then you can have what’s called negligence per se. And if you hurt someone in the process of violating a traffic law, there would be either a conclusive or a strong presumption that you’re liable for that.
Consider a while back, remember when Tesla, I think it was Tesla, was doing rolling stops at stop signs? Apparently the programming team had decided that was okay. In fact, that mimics actual human driver behavior, but it also violates a traffic law.
And I believe they were called out for that and were supposed to fix it. So the question is clearly in the programming: if part of your charge is to have a vehicle that obeys traffic laws, and in this case it didn’t, and it resulted in harm, now who could you say was responsible?
And Phil, I’ll ask you a question. There was some issue at one point about whether remote monitors had given an instruction to the Cruise vehicle when it was pulling over. Do we know whether that occurred or not?
Philip Koopman: Yes. And to be clear, I am assuming that everything in the report is a hundred percent factually correct, but it almost certainly presents the most favorable possible face on the facts, which I’m getting to in the paper I’m writing.
There are some threads that get interesting when you pull on them, but that’s another discussion. The Cruise robotaxi, they say it braked aggressively, but it didn’t actually start braking until the last quarter second, so the aggressive part didn’t happen until after impact.
It could have actually braked soon enough to not have hit the pedestrian at all. The report says that; Sir Isaac Newton says they could have avoided the impact, including brake application delay, but they would’ve had to react instantly, at computer speeds.
But they didn’t, because they lost track of the pedestrian. That’s what happened.
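[For reference, the physics being invoked here is simple stopping distance: reaction distance plus braking distance,
$$d = v\,t_r + \frac{v^2}{2a},$$
where $v$ is speed, $t_r$ is the reaction delay, and $a$ is the braking deceleration. With purely illustrative numbers, not figures from the report: at $v = 8$ m/s (about 18 mph) and firm braking at $a = 6$ m/s$^2$, a computer-speed delay of $t_r = 0.25$ s gives $d \approx 2.0 + 5.3 \approx 7.3$ m, while a human-scale delay of $t_r = 1$ s gives $d \approx 13.3$ m. Reacting at computer speed is what would make avoiding an impact plausible.]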
William H. Widen: So, can they...
Fred: Sorry for interrupting. Can I back up just a second? You’ve talked about duty of care, but what does that exactly mean? Does it have a legal definition, or is it just something that should be obvious from the plain English?
William H. Widen: Duty of care is a term that’s used in tort law to create an obligation on the part of a person to behave as a hypothetical reasonable person would behave. And so when you say you have a duty of care, you are to try to take into account, in your actions and behavior, the welfare of other people and property.
And if you behave in a manner that’s considered unreasonable as a matter of law, then you have negligence. Often what people say when they’re applying a negligence standard, generally, is that you have to take those precautions which are economically sensible. So if you were the least-cost avoider and could have done something to avoid a harm, and the chance of the harm was a certain percentage, and if it occurred there would be a loss of a certain size, you can do a calculation in theory and say, oh, you really didn’t behave in an appropriate manner, because it would’ve been so easy for you to have taken this precaution. Now, on top of that, you have a duty to comply with the traffic laws, and if you didn’t comply with a traffic law, you can be liable per se.
You would’ve violated a duty to the other road users by violating the traffic law you were supposed to comply with.
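[The economic test Widen describes is often summarized as the Learned Hand formula: failing to take a precaution looks negligent when the burden of the precaution $B$ is less than the probability of the harm $P$ times the size of the loss $L$,
$$B < P \times L.$$
With made-up numbers for illustration only: if slowing for a pedestrian costs essentially nothing ($B \approx 0$), the chance of a collision absent slowing is 10% ($P = 0.1$), and the expected loss from the injury is \$500{,}000, then $P \cdot L = \$50{,}000$, far more than $B$, so not slowing fails the test.]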
Fred: So you always have a duty of care to conform to traffic laws, and then a violation is subject to per se liability. Is that what you’re saying?
William H. Widen: Yes, generally.
Although when you look at the traffic laws, in some instances they do make exceptions for situations. For example, if a tree had fallen across a road and you had to go around the tree to avoid the obstruction, that’s the sort of thing that might well be permitted under the rules of the road in a particular state.
And even if that were not spelled out in the state rule, if you look at the matter of enforcement, police typically would not enforce a violation for going into the wrong lane to get around an obstruction.
Philip Koopman: A lot of this has to do with what’s reasonable.
So in the Cruise crash, the question is, would a reasonable person have reduced speed when they saw a pedestrian about to get hit next to them?
William H. Widen: And I would say, would a reasonable person, having hit the pedestrian, then continue to drive for another 20 feet at a certain speed to get to the curb? The vehicle was executing, I think, a minimal risk maneuver or something of that
Philip Koopman: sort, which is a complete misnomer. Yeah. To answer Anthony’s question: it did stop aggressively, but the aggressive part didn’t happen until milliseconds after the pedestrian impact,
’cause it braked late. It did come to a stop for one-tenth of a second, then it started again. So it was actually swaying back and forth on the suspension, but they marked a time where the sprung mass was at zero. That’s actually what they said: sprung mass was at zero. And then a tenth of a second later it takes off on the MRM. It connects to remote assistance within five seconds.
But that’s five seconds too late for them to do anything about it. And what it should have done is recognize: I was just in a collision, I’m not sure where that pedestrian that used to be around went, and I don’t even know what I hit; I had an undifferentiated occupancy of the roadway.
What say I stop and wait for remote assistance to weigh in before moving around. But that’s not what it did. It took off one-tenth of a second after it stopped.
Anthony: Oh, so in this case, the remote assistance, that’s basically people sitting in an office somewhere monitoring all of these?
Philip Koopman: That’s the Cruise remote assistance team, who I believe, based on job ads, is in Scottsdale, Arizona. They’re in Arizona for sure, and I think there’s a job ad in Scottsdale for them, so it’s probably there.
Anthony: So there’s someone sitting in Scottsdale, Arizona, and there’s a car accident in San Francisco?
Philip Koopman: And the car apparently initiates it: hey, I just collided with something. It initiates a phone-home sequence and squirts a three-second video saying, hey guys in Scottsdale, check this out, this just happened, what should I do? But it doesn’t wait for the “what should I do?”
By the time that’s done, it’s already taken off.
Anthony: This is not like the air traffic control system where these people are highly trained and they’re monitoring planes throughout their entire flight cycle.
Philip Koopman: I can’t speak to the training. I dunno.
The people back in San Francisco did not follow their procedures, but we don’t know what happened in Scottsdale. There’s remarkably little discussion of what happened there.
Anthony: Do we have any idea if they’re monitoring a car throughout its entire journey or not?
Philip Koopman: No. Based on the narrative and the reports, they’re waiting for it to phone home.
Now, they may have a status board showing where the vehicles are, no info on that, but there’s nobody riding shotgun. There’s no continuous video feed. It’s very much: if something bad happens, the car phones home and they say, okay, what’s going on? These cars are on their own until they ask for help.
Fred: Even if there were a continuous video stream, you’ve got latency that’s going to cause a delay of response longer than the time that was programmed for the vehicle to begin moving again.
Philip Koopman: Yeah, with real time it would’ve been dicey. An attentive operator had at least three seconds from the impact with the other vehicle to the Cruise impact.
So they had enough time to press a big red button if they were attentive. But then you can’t scale, because you can only look at so many video screens and stay attentive. But that’s not what they did. Both Cruise and Waymo have told regulators that they do not remotely drive.
And the implication, especially with Cruise, is that the cars are on their own until they get in trouble. Then they phone home, and if the car can handle knowing when it doesn’t know, that’s fine. But the problem here was it completely misdiagnosed the situation and had no idea it had done that.
So it didn’t phone home, it just took action.
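[A rough latency-budget sketch of why real-time remote intervention is hard, using illustrative numbers rather than anything from the report: a cellular round trip might run roughly 100 to 300 ms, video encoding and decoding another 100 to 200 ms, and a human operator’s perception-reaction time roughly 1 to 1.5 s, so
$$t_{\text{response}} \approx t_{\text{network}} + t_{\text{video}} + t_{\text{human}} \approx 0.2 + 0.15 + 1.2 \approx 1.6\ \text{s}$$
even for an operator already watching the right screen. Against the roughly three-second window Koopman mentions, that leaves little margin, and it does not scale across many vehicles.]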
Anthony: Imagine if the air traffic system was like this, like there was no one monitoring it and just, Hey, I think I have a problem. There might be another plane nearby. Let me call somebody.
Philip Koopman: Yeah, the FAA has a plan for that, but it’s not in place yet.
William H. Widen: I mean, I think the remote
operators, under current technology and with latency, would be most valuable if a vehicle had stopped, let’s say, in the middle of a road and wasn’t sure how to negotiate its way to the side of the highway or another safe place. It could then get input to get out of the way of, perhaps, first responders, or to avoid some obstruction.
But it’s not really designed, as I understand it, for real-time intervention.
Philip Koopman: That’s right. And that’s what Cruise was trying to do. But the problem is, if it doesn’t know to phone home, you’ve got a problem. And that’s exactly what went wrong, for at least part of this.
Michael: And a lot of that depends on whether Cruise has internet connectivity that day.
Right.
Philip Koopman: We won’t even get into that. Lots of issues.
Michael: So going back just a little, we don’t know a lot about what’s happened with the victim of this Cruise accident. I’m presuming there was a settlement, a non-disclosure agreement.
William H. Widen: We don’t know,
Philip Koopman: But what we do know is that it was not a fatality and she’s reported to be doing better.
Good. That’s all I’ve heard.
Michael: And so in that circumstance, if there’s not a settlement, the injured party needs to seek legal help to go out and file a lawsuit against Cruise, or do whatever needs to be done to make that party whole after an accident.
There seem to be significant barriers to doing so. And that really comes up big in this area: it seems like the law is just not quite ready for automation and a lot of the really complex technical issues that it poses.
Philip Koopman: Let me start with the techie version and then I’ll defer to Bill for the legal side.
So, Fred’s question about duty of care. Duty of care is central to all this, because if there’s no duty of care then you’re probably talking product liability, which is expensive and very difficult. And frankly, if there’s anything less than a catastrophic life-altering injury or a fatality,
there’s probably just no way you can even pursue that, unless you get a lucky break and the circumstances are so obvious that a jury’s gonna rule in your favor. You can’t crack open the source code and do reverse engineering analysis; any potential settlement wouldn’t begin to pay the expense of doing that.
But if you establish a duty of care, the reason that’s magic for this situation is that duty of care means you can bring the tort law system into play. And for tort law, you don’t have to reverse engineer all the software, and good luck reverse engineering machine learning; maybe, but it’s gonna be tough.
You don’t have to reverse engineer. You could just say, look, let’s pretend that’s a human driver. Would that human driver have been acting recklessly? Yes. Then it doesn’t matter if it’s a computer, reckless is reckless. And so that’s it. So duty of care is saying we’re gonna judge the computer driver by the standards we would have used if it were a human driver.
You don’t take a human driver and reverse engineer their brain and ask why they did the stupid thing, right? And so you shouldn’t have to do that to robotaxis. If they have a duty of care to other road users, the same as a human driver, you can just say, did they behave the way a reckless human driver would’ve behaved?
Yes. Okay, then it’s their problem. No engineer is needed.
William H. Widen: You would have, as is typical, fact-finding to try to determine what happened, and that would be true either in the case of a computer driver or a human driver. In the case of the computer driver, with the information that Phil has from this initial report, you could describe exactly what happened:
the acceleration when a pedestrian is in the crosswalk, and the dragging, and how quickly it moved to the side, and all of these things. And the question would be, okay, let’s take those facts as given, right? What would a jury think if a human had been in that position and someone else had initially hit the pedestrian?
Would you think it was reasonable or not that the Cruise vehicle, if it had been driven by a human, would have hit the woman and then, on top of that, aggravated the injuries by driving to the side? And if a human driver would be criticized for that behavior and found negligent, we would then say, okay, if the human driver would have liability, then the computer driver should have liability.
Then the question, if the computer driver has liability, is who is responsible for the computer driver’s breach of the duty of care? The computer driver is just a cyber-physical system; it’s not a legal entity and it owns no assets. The logical person to be responsible in that case would be the manufacturer.
In this case it’s easy, because it’s the manufacturer-operator, and that would be Cruise. And so a jury would just decide: was it negligent or not? That’s effectively what the claim was in Nilsson v. GM. The Cruise vehicle was changing lanes and it hit a motorcyclist. The motorcyclist was injured.
The motorcyclist brought a claim for essentially negligent driving against GM. That case ended up getting settled, but in the response to the complaint, GM admitted that the vehicle owed other road users a duty of care. And once you see that, a little light bulb goes off and says, oh, okay, that makes it easy.
I can collect for an automated vehicle accident without having to prove a design defect, which requires the hiring of expert engineers to identify and trace a design defect as a cause of the accident.
Anthony: So let’s step back for a second. I’m under the impression, and I could be wrong, that right now computer drivers, robotaxis, can legally get away with a little bit more than I can as a human driver,
’cause if I was the human driver that ran over a woman, dragged her, and then lied about it to the police, at the very least, I would imagine I wouldn’t have a driver’s license anymore.
Philip Koopman: You can’t even give a ticket to a Robo Taxi in California at the moment.
Anthony: So they literally, they’re not even above the law.
They’re just exempt from the law, it seems.
William H. Widen: Yeah. The law has not evolved to deal with this complexity.
Anthony: It sounds like in California they exempted them from it. It’s not even that it hasn’t evolved.
Philip Koopman: No, it wasn’t exempted. It’s that the law is written so that a ticket goes to a human.
There’s no human, so now what do we do? We dunno. So in that state it was not an intentional exemption. Now there are other states where the AV industry has come in and written new laws that say, yeah, it’s not the company, it’s not the operator, it’s the computer.
It’s the computer’s fault. That one was just proposed in Washington, and it’s passed in a few states. And you might say that makes no sense, it’s unconstitutional for the state, and it might be, but whoever wants to assert a claim against the company first has to go to the state supreme court and get it ruled unconstitutional before they can do anything else.
So what you’re doing is you’re raising the barriers; you’re putting a big moat around the ability to make any claims beyond insurance, which in some states is as low as twenty-five thousand dollars.
William H. Widen: Right. What the companies would like to do is create a world, and it’s difficult, right? If you’re dealing with an owner-operator that’s running fleets, you can say, okay, the owner-operator ought to be responsible.
But in a world where private third parties own a vehicle, let’s say a Mercedes in Nevada or California, if you’re a Level 3 vehicle, what they would like to do, if they could get away with it, would be to say, okay, the computer driver isn’t really a thing. It’s the ADS, the automated driving system, that’s the responsible party, and the owner has to maintain insurance, which as Phil said could be as low as twenty-five thousand.
In Washington they were proposing five million, which still is not enough, but it’s more, and that’s the top.
Philip Koopman: Most states are less. Most states are a lot less.
William H. Widen: Yeah. But then you would presumably make a claim against the policy if you were injured. But one of the defects that I pointed out in testimony to Washington was that we don’t really understand what it would take to make out a claim with the insurance company.
Would you have to say that in fact there was a design defect before you can claim under the policy? Or would you, at that point, invoke a computer driver idea and say it behaved negligently?
Philip Koopman: Of course, there’s no legal support, so you’re breaking new ground if you wanna make it work.
William H. Widen: Or I suppose you could say that you’re absolutely liable for any accident, but that doesn’t make sense either, because there’s the problem of suicide by automated vehicle, where someone who is distraught jumps in front of a vehicle in a way that couldn’t have been avoided by anyone.
It wouldn’t necessarily make sense to have strict liability in that scenario. But you have to understand that uncertainty in the law benefits the companies, because it makes it much more difficult to assert a claim and successfully prosecute it for anything like a reasonable sum.
Philip Koopman: As a non-lawyer, a lot of what I see state by state is indistinguishable from companies wanting to make the law not make sense as a barrier to entry, so that you have to pull out your machete and whack through that vegetation before you can get at justice.
Anthony: Go ahead. No, I just think you just graduated law school with that statement.
William H. Widen: There you go. Let me give you a good example of technology creating a problem for the law, okay? In an era before we had electronic documents and electronic signatures, you had a statute of frauds which, for certain kinds of transactions to be enforceable, required that you had a piece of paper with a physical signature on it
in order to prove your claim. For example, in all states that had a statute of frauds, a real estate transaction, a sale of property, would have to be evidenced by a writing signed by the party against whom enforcement was sought. Now, that law was developed in, I don’t know, the 1600s, and you didn’t have computers or anything electronic.
And that law had been on the books for hundreds of years. Now enter PDFs and electronic documents and electronic signatures. The question that came up in the law was, does an electronic signature on an electronic document satisfy the statute of frauds? Is it really a writing? We don’t know.
It’s electronic; we’ve never faced it. So what the law would typically do in a case like that, the common law, case by case, is the judges would reason through what the right answer should be. And in that common law process you could get multiple answers in different states.
That became such an untenable outcome for business that businesses lobbied for and got federal legislation and state legislation. One is the Uniform Electronic Transactions Act, which specifically stated that an electronic document and electronic signature would count any time you needed a document.
And so technology created a problem that the law hadn’t thought of and wasn’t equipped to address. And then legislation went into place because business said, we need the certainty, and you ended up with a statute passed. What we are saying, and we have articles where we give sample language as a proof of concept, is shouldn’t you do the same thing with automated vehicles?
We don’t have a system where the law clearly applies, so let’s pass a statute that says, here’s a computer driver, the manufacturer’s responsible, and this is how it works. But the industry doesn’t have an interest here like they do in knowing whether they have an enforceable contract. They don’t have an interest in making their liability clear.
Philip Koopman: To simplify that, here’s the big idea.
By the way, as an aside, it never occurred to me that I would be a co-author on a law journal paper, but here I am. Thanks, Bill. The proposal is to say: just as electronic signatures had to be made like real signatures, or else too much stuff would break and you’d have to reinvent a lot of law,
we can reuse the law just by saying, hey, you know what, an electronic signature, yeah, it’s a signature, all the rest of the law applies. The proposal is to say, hey, computer driver, you know what? The business about reverse engineering and trying to prove a design defect is gonna be so burdensome and cumbersome that for 90-plus percent, I’m gonna say even 99-plus percent, of the crashes, it just makes no sense.
It ran a red light. Why are we here talking about source code? It ran a red light and hit someone. We’re done. That’s a hypothetical crash, by the way. If that’s what you know, it ran a red light, why aren’t we done? And the answer is, just like electronic signatures are real signatures,
we’re saying a computer driver should just be a driver. Everywhere it says driver, a computer driver counts too. And computer drivers are held not to perfection, that’s not what we’re proposing; computer drivers are held to the same standard as human drivers. For better or worse, that won’t guarantee better safety, but it takes care of the vast majority of things so we don’t have to completely break the legal system to deal with all this stuff.
And then whatever’s left over, we have a discussion. But why wouldn’t we wanna solve the ninety-five, ninety-nine percent of the problem the easy way? Why make it all hard? And the answer is because making it hard benefits the car companies.
Anthony: I do want to get to your Jurist.org article where you guys get into that, but something you just said about the computer driver, and you guys mentioned this in a few different things you’ve written:
as a human driver, I had to take a written exam, I had to take an eye exam, I had to take a physical road test and prove all these things, and then years later my driver’s license expires and I have to go do it again. Computer systems don’t have to do this. Why not? Are you proposing that these computer drivers have to take a road test?
Philip Koopman: So let me add something you left out, okay? ’Cause it’s one of my favorite talking points. There’s something you left out of that process: you had to show a birth certificate to prove you’re human. And there’s a whole lot of stuff that comes with being a human, about being able to reason about what happens next in the world,
that machine learning’s pretty bad at, so don’t underestimate that hurdle. That’s actually the biggest problem; we don’t know how to do that for machine learning. We don’t have a maturity-and-reasoning-about-the-world test, and that explains a lot of the bad things that happen. But to get back to the question: in every state law I can think of, that I’ve seen, that I recall,
the computer automatically gets a driver’s license. It just gets one.
Anthony: What happens when they decide to upgrade the software or hardware? Their license is automatically renewed?
Philip Koopman: They’re good. You just get the new one, right?
William H. Widen: The way they do it, like in Florida and other states that are very permissive, they just say that automated vehicles can be operated on the roads.
Philip Koopman: But some of them explicitly say a computer driver is considered to have a license. Right now in California they need an operational permit, which is a little different from a license, right? But in most of the states, the law,
either the proposed law in many states or the passed law, says if you’re a computer, you automatically get a license. Full stop.
Anthony: Even in New York State, I got a Real ID, that upgraded driver’s license ID, a couple years ago, and they even made me take the eye exam again. And I’m like, I had LASIK, you guys have records of this, my eyes are good.
They’re like, yeah, read line five again. But with computer drivers, that software’s updated constantly. Constantly.
Philip Koopman: Anthony, they have sensors. Then again, a rack full of lidars didn’t prevent a Cruise robotaxi from slamming into the back of a bus right in front of it.
But they have sensors.
Anthony: I have sensors, and they fail sometimes.
William H. Widen: It’s just a huge gap in the law. If I want to license a car, I have to have insurance. And so in that sense, if I have an AV, I would have to get insurance. There’s some overlap in the requirements to be on the road for an AV, but there’s no competency test.
Philip Koopman: Yeah. I would characterize them as having administrative requirements. I’m gonna be generic here: there has to be a number for the police to call if there’s a problem. You have to have insurance.
You have to register or get permission or get a permit, or have done enough testing to convince the California DMV that they should give you the next permit. But none of those things equate to what any reasonable person would consider a driving test. Some folks say there should be one; Missy Cummings says there should be a vision test, to your point about eyesight.
But for the companies, we have this thing called self-certification. At risk of pressing one of Fred’s buttons, or maybe it’s Michael’s, I’m not sure which one. It’s all of us. It’s all three of you. Anthony, too, I’ll include you in. We have this thing called self-certification. Self-certification to what? Nothing. Self-certification to whatever they decide. They decide they’re ready to go, and they can basically give their own car a driver’s license with no requirements, no technical and no safety requirements placed on it, other than that they think they’re good to go.
That’s where we are.
Fred: Let me jump in here a second. It seems to me that the courts often defer to industry standards to determine what’s reasonable. And there’s a bunch of groups out there like SAE; there’s the International Standards Organization; there’s the United Nations UNECE that develops standards that are accepted across Europe.
Philip Koopman: Well, Fred, you’re on that committee.
Fred: Yeah, I know. So go ahead and blame me. But anyway, all these committees are staffed by engineers, and the engineers who are staffing them are typically employees of the companies that are developing these technologies. To me that seems like a huge gap. That’s part of the problem: a lot of these reports and documents are put out
by the organizations, and people just defer to them and say, hell, it came from SAE, it’s gotta be good stuff. Even though if you look under the covers, it says this report is not intended to be used by actual human beings for any reasonable purpose. It’s not a standard, it’s just a
William H. Widen: list of words.
Philip Koopman: Well, that’s the SAE J3016 document you’re talking about, Fred. That’s the levels. Whereas the other standards are supposed to be real, to be clear. Let me unpack the standards for a minute. You have SAE J3016, and the fact that I can rattle these numbers off causes me to question my life choices,
but J3016 defines the levels and some terminology, and it is explicitly not a safety standard. So of course that means the federal government and all the states adopted it as the only standard they talk about, and I’ve heard it referred to by bill sponsors as an SAE safety standard, which is completely incorrect.
In fact, it’s an un-safety standard, because it does not require obvious safety things for some of the levels: driver monitoring is not required, and operational design domain enforcement is not required for Level 2, but I will not go down that rant. Among the other standards, though, you do have actual legitimate safety standards
that would be good if adopted, the ISO standards. And every other industry I know of that’s not consumer goods,
so aviation, rail, medical, industrial controls, scary petrochemical plants, they all follow their industry standards. But the automotive industry, especially the AV companies, say, no, we’re special snowflakes, they don’t apply to us. Now, internally I know the engineers actually do look at the standards, but their public messaging is, nope,
they don’t apply to us. And the federal government’s not making them follow them either. So we have this gap. You’re right, Fred, that they have these standards; the industry’s just not following them. Now, NHTSA, the National Highway Traffic Safety Administration, proposed in December 2020 an advance notice of proposed rulemaking:
hey, what say we make the industry follow their own standards that they themselves wrote, because, as we heard, it’s the companies writing these standards. And that’s been gathering dust since December 2020 with no motion. That’s where we are.
William H. Widen: And I would add that it’s not just for deployed vehicles. For the testing of vehicles, there’s a standard for safety drivers, SAE J3018, which companies could follow, which would require a safety driver if you’re testing a vehicle.
Philip Koopman: And to my knowledge, one of them did follow it.
William H. Widen: Yeah. Argo AI followed it, and they got third-party certification from TÜV SÜD, as I recall, for conformity to J3018. And they’re not in business anymore, but no other company that I’m aware of says that they’ll test in accordance with J3018.
So you have a problem with deployment, but even before that, you have a problem with the public road testing.
Philip Koopman: So the standards are there, mostly. It isn’t that there are no standards, it’s that the companies don’t wanna follow ’em. Now, in other places, the supply chain in the auto industry is pretty good about conforming to the ISO standards. But if you read the publicly available safety reports, especially the one from Waymo, it says, yeah, we read the standards, we’re informed by them, whatever that means, and we’re gonna do our own thing.
And all the other companies have a similar playbook.
Fred: But I haven’t seen any of those standards defer to or acknowledge the duty of care. Is it that they just exist, and that’s not what they’re about,
in parallel universes somehow?
Philip Koopman: Yeah, they’re parallel universes. They all deal with absence of unreasonable risk, which is a parallel universe to duty of care.
Anthony: I’ll jump back a little bit. So Bill, you mentioned in Florida that there’s no requirement for getting an autonomous vehicle on the road. You can just get one on the road.
Philip Koopman: Yeah.
William H. Widen: Okay. In Florida and Texas and a lot of these places, they do another thing too, at the state level, for added protection.
They’ll say that you can’t have any local legislation that would prohibit someone deploying an AV, so that all the legislation has to be at the state level. And I think they ended up doing that, Phil, in Pennsylvania, among other places where there are a lot of AV companies headquartered. What they’re worried to death about is that in an urban environment you may have a more, let’s say, liberal or Democratic
electorate who would favor more regulation. They need to test in urban environments because they’re more complicated, and it’s much easier to develop a vehicle that can go in clear weather down a divided highway and follow the white line. But they don’t wanna be shut out of the cities. And so the way you deal with that is to preempt any local legislation dealing with the operation or deployment of an automated vehicle.
And so they try to neuter the local governments, and at the state level they have either limited or no requirements to put an AV on the road.
Philip Koopman: This is the same playbook they used for ride hailing. You get whoever’s on the transportation committee who doesn’t live in the cities and you lobby them however you lobby.
I’m not gonna get into that. You lobby them to pass a law saying, yeah, autonomous vehicles bring jobs and economic opportunity, and I’m sure the safety box is checked. Safety, because reasons. Okay, let’s move on to jobs and money, anywhere in the state. And we don’t want anyone interfering with these great jobs and this economy.
So cities, you’re not allowed to do anything. And we pass the state law and that’s it. That happened in Pennsylvania. Happened in California, yep. And this is why San Francisco was so beside themselves, upset that the companies were coming in acting like, you can’t do anything to us, we’re gonna do whatever we want.
And they were right, ’cause the state law said the city couldn’t really do anything. They couldn’t even give ’em tickets.
William H. Widen: Now, one thing that’s important: as a matter of legal theory, there are a lot of people who would say, look, we don’t need a lot of complicated regulation, because what the tort law will do
is incentivize people to make safe products and to behave reasonably, because if they don’t control their behavior, they will end up having liability, because they will lose a lawsuit. One reason that we like duty of care with manufacturer responsibility is that it streamlines
the ability to put an economic penalty on a manufacturer or a developer, which in a perfect market would induce them to produce a vehicle that was safe enough. But the moment you make their liability unclear, so they don’t have a financial incentive to develop the safe product, they may well fall short of what’s necessary.
They might decide, as a business matter, that they’re prepared to pay for a certain number of fatalities or what have you, and they think that’s fine. So they’ve even ruined the default rule that would otherwise create the incentive to develop a safe product.
Philip Koopman: Let me back up and unpack a buzzword there.
So tort law, and this is the engineer speaking: tort law is when you harm someone by not acting reasonably, and they can come after you for compensation. That’s the simple version, right?
Anthony: If you’d like to find out more about tort law, you can go to the Tort Museum online. Oh, boy. I wanna cut off today’s conversation here.
We’re gonna do the recall roundup in a second. Thank you, gentlemen. I know you’ll be here for our following episode, which, hey, if you wanna know how the sausage is made, we’re gonna start recording in just a minute. But before we do that, Fred, do you have any interest in moving to Florida with me?
We’ve both worked on circuit boards; we can create our own robotaxi company.
Fred: I think we’ve already done that, haven’t we, Anthony? We’ve been bought and sold by Big AV, or Big... who has bought and sold us? I can’t remember. I don’t remember that money. One of those spam bots that came out.
Big Auto, that’s it. We’ve been bought and sold by Big Auto.
Anthony: I’m gonna jump into some recalls. How’s that sound? Hey, there’s a company called Tesla, and they recalled 2.19 million vehicles. Or, if you’re a Tesla fanboy: it wasn’t a recall, man, it was just some, like, thing that happened.
This is pretty much every car they’ve ever made, so I’m not even gonna list out the model years, makes, and models. Actually, it is every car they ever made, isn’t it?
Michael: Yeah. Including the first recall on the Cybertruck.
Anthony: Oh, that Cybertruck is so ugly. This was a visual warning indicator whose letter font size is smaller than 3.2 millimeters, one-eighth of an inch, as prescribed in Federal Motor Vehicle Safety Standards numbers 105 and 135, which could reduce the driver’s detection of the warning
when it’s illuminated, increasing the risk of collision. Now, this just strikes me as being lazy, like, RTFM. How did your engineers not go, oh wait, there’s actually a manual here that says use this? A little less cognitive load for me; I don’t have to think about it.
Michael: It’s not even a manual, it’s the Code of Federal Regulations. It’s in the safety standard that you have to have a font of a certain size so that drivers can read it. It only happened in one specific mode of the vehicle; I think it had to do with parking and braking.
Once again, it shows that Tesla’s not being as strict on itself when it comes to adhering to federal standards. Yes.
Fred: And for those of you at home, you gotta hire a human being who’s actually going to read the standards and then has the authority within the company to make the company adhere to those standards.
That’s not Tesla’s way of doing business.
Anthony: For those of you playing at home, even Tesla refers to this as a recall. Moving on: Honda, potentially 750,000 vehicles. This is 2020 to 2022: Honda Accords, Honda Accord Hybrids, Civics, Civic two-door, four-door, five-door. Huh, interesting. It’s almost everything.
Yeah, it’s a lot of vehicles, the 2020 to 2022 vehicle years. This is pages and pages of stuff. So, in the event of a crash, the front passenger and knee airbags may deploy despite the presence of an occupant, basically despite the presence of a child being there, in which case the airbag should not go off.
And so it seemed that their tier-one manufacturer of the circuit boards had a problem, so they said, let’s go to the second guys. They produced something crappy and no one tested it. Yep.
Michael: It looks like what this is, is the occupant sensor that determines the weight of the person in the seat, so that the airbag doesn’t deploy as aggressively for people of smaller stature and smaller weight, to avoid injuries.
We saw a lot of aggressive deployments in the nineties, and that’s something the occupant detection systems address: they’re attempting to decide how big the human in the seat is so that they can deploy the airbag at an appropriate force. And what it looks like happened here is that Honda had a tier-one supplier, one of their main suppliers, who was supplying the sensor to them
using parts made at a tier-two supplier, which is basically the supplier supplying the supplier. And that tier-two supplier, I’m not sure where they were or what this was, but there was apparently a natural disaster that took out the tier-two supplier’s ability to produce this material for the circuit boards. And when they moved to a secondary supplier to replace that one, whatever material the secondary supplier was producing didn’t work. It led to cracking of the capacitors, short-circuiting, and essentially it ruined the ability of the vehicle to determine how large the human in the seat was.
And so when there’s a crash, the airbag is deploying at full speed, essentially, and threatening small women and children who might be in the front seat.
Anthony: Well, good thing autonomous vehicles don’t use circuit boards or sensors or anything else that can fail. Anyway, hey, thanks again to our guests Phil Koopman and William Widen; they’ll be back for next week’s episode.
Michael: Bye everyone. Alright, I’m gonna hit stop.
Fred: Goodbye. Thank you.
William H. Widen: For more information, visit www.autosafety.org.