Data-driven Insights with QuantivRisk’s Mike Nelson

This week we are joined by Mike Nelson, chair and founder of QuantivRisk, a risk management and technology company. The discussion focuses on QuantivRisk’s innovative approach to analyzing vehicle data and video to objectively assess automotive accidents. This episode covers the challenges of defining and litigating safety in the context of ADAS and autonomous vehicles, particularly the complexity involved in evaluating AI decisions during accidents.

This week's links:

Subscribe using your favorite podcast service:

Transcript

Note: this is a machine-generated transcript and may not be completely accurate. It is provided for convenience and should not be used for attribution.

[00:00:00] Anthony: You're listening to There Auto Be A Law, the Center for Auto Safety podcast with executive director Michael Brooks, chief engineer Fred Perkins, and hosted by me, Anthony Cimino. For over 50 years, the Center for Auto Safety has worked to make cars safer. Hey, welcome to the

Center for Auto Safety podcast, the only podcast tracking that it's been over 3,000 days since Full Self-Driving has been shown to be better than a human, at least according to Elon Musk. Today we have with us a guest: Mike Nelson of QuantivRisk. Mike Nelson is the chairman and founder of QuantivRisk, a risk management and technology company.

Welcome to the show, Mike.

[00:00:48] Mike Nelson: Hey, thanks for having me, guys.

[00:00:50] Anthony: Absolutely. So can you tell us a little bit more about QuantivRisk?

[00:00:56] Mike Nelson: So QuantivRisk is a company that takes data from [00:01:00] vehicles and uses the data and the video to evaluate, in an objective way, what happened in an accident. Our position on a lot of this is somewhat agnostic as to who did what.

It's more or less a transparent way of reporting: here's what the data shows, here's what the video matched with the data shows. To our thinking, as we get further and further down this autonomous or automated vehicle pathway, and we always make the distinction between the two of those, we believe it's going to be important to focus on all the cars and be the Rosetta Stone of what that data looks like, because it's going to look different depending on what car or model update, and in real time you're going to need to know what was going on in the platform that was being driven, and probably more than one [00:02:00] car at a time. We're well past our proof-of-concept stage, and we now have people on both the insurance side and the plaintiff attorney side coming to us and saying, can you tell us in an objective way what happened?

And we think transparency is incredibly important here. I noticed in some of your materials you cite the NTSB position on this, on how important it is to have that objective view of what happened after an accident. And so that's what we've been working on: building out that platform.

We have a tool, very Rosetta Stone, that takes data from different sources and gives a generalized view of what happened for folks that don't really want to get that deep into the data. And then when you sync that with the video, when video is available, it's a really powerful tool.

Anyway, that's what QuantivRisk is [00:03:00] building out, that platform. And we expect to be the data aggregator for everyone to refer to, and really change the way risks are managed as we look at the future of automotive technology.
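As a rough illustration of the "sync the data to the video" idea Mike describes, here is a minimal sketch of one common approach: pair each video frame with the telemetry sample nearest to it in time. This is purely illustrative and not a description of QuantivRisk's actual platform; all names here are hypothetical.

```python
# Minimal sketch: align telemetry samples to video frames by nearest timestamp.
# Illustrative only; not QuantivRisk's pipeline or any OEM's data format.
from bisect import bisect_left

def nearest_sample(sample_times: list[float], frame_time: float) -> int:
    """Index of the telemetry sample closest in time to a video frame (seconds)."""
    i = bisect_left(sample_times, frame_time)
    if i == 0:
        return 0
    if i == len(sample_times):
        return len(sample_times) - 1
    before, after = sample_times[i - 1], sample_times[i]
    return i if (after - frame_time) < (frame_time - before) else i - 1

sample_times = [0.0, 0.2, 0.4, 0.6, 0.8]   # telemetry timestamps (s)
frame_times = [0.05, 0.33, 0.71]           # video frame timestamps (s)
print([nearest_sample(sample_times, t) for t in frame_times])  # [0, 2, 4]
```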

[00:03:15] Anthony: How did this come about? How did you start?

Because every car in the last, I don't know, 10-plus years has a ton, just a ton of sensors on it. It's collecting a lot of data. And we know from a consumer point of view, they're like, hey, can I see my data? I was in an accident and I can't get it. So how did you guys start this, and how are you actually getting access to the data?

[00:03:36] Mike Nelson: Yeah. So I'll handle the first part of that question and then get to the second part. The first part: in 2019, a gentleman is driving on the Long Island Expressway, which is the world's largest parking lot. He's in rush hour, about 4:30 in the afternoon on a Friday, and he's in bumper-to-bumper traffic, and he's in a Tesla, and he's in Traffic-Aware Cruise Control.

Even back [00:04:00] then, it was touted as making bumper-to-bumper traffic less painful and the right way to deal with this stuff. What happened in the accident I'll get to in a minute, but what the gentleman driving the Model X reported was: I don't know what happened, my car went crazy.

And it's easy to understand, once you get the data and the video, what did happen, but you can also understand why he thought his car went crazy. So when I received the data and the video, and there was some arm wrestling with Tesla back then, where I had to sue them first to get it, it was eye-opening, and I thought, and this goes back to the days when I was a claims adjuster for an insurance company, but also a lawyer:

this is going to completely change how people evaluate accidents. Police reports are irrelevant. They're terrible evidence anyway. They're not even evidence; it's mostly built on hearsay. And [00:05:00] police departments don't understand the tech in these cars. They still don't know how to evaluate them.

We're talking several years now since the accident I'm describing. And after several tries of getting through to Tesla, I finally got to the right person. And he said, sorry, we only give this data to people that have badges or subpoenas. And I said, I can fix the subpoena thing.

So anyway, I issued a pre-litigation subpoena and got the data, and we already had some video from the dash cam, but we also got some still images from the car. This accident is interesting, and I know it's a radio broadcast only, so I'll try to describe it: this gentleman's in the extreme right-hand lane.

And he’s doing about three miles an hour,

[00:05:47] Anthony: right?

[00:05:48] Mike Nelson: And somebody comes in off an on-ramp, it's an Audi, and it starts to merge in front of his Tesla, and the Tesla does not pick [00:06:00] up the merging vehicle. It's following the tractor-trailer that's in front of it. And as the tractor-trailer starts to move out, he can see the Audi, but his car doesn't process that there's a new object in my lane.

So his car, moving at three miles an hour, just plows into the vehicle he can see, and he assumes the car's going to stop for it. In this part of the world, that's called a cut-in, and back then Teslas were terrible at this stuff. And again, we'll get into how that kind of technology has been marketed over time.

But the cut-in: you can imagine this human being trying to process, what is my car doing? I don't understand this, because I can see this. He has to almost immediately process what is my car doing, which is different than how am I driving. He's trying to account for both of those. And then you have to put in the [00:07:00] human factors.

Part of this is, how long does it take for a human being to respond to this? It's debatable whether or not he stepped on the gas, the accelerator, but it appears from the data that he did. And so instead of stopping, he compounds the problem by driving very fast. He managed to avoid the Audi, but then he went into two adjacent lanes

and wiped out a few other cars and totaled his car. So that's the beginning of QuantivRisk, because we started to realize, with the data and the video, and we'll get to the point I made earlier, we can understand accidents that nobody else is going to be able to understand. And then, around that time, there were some really noteworthy accidents like the Mountain View crash out in California.

I started talking to the NTSB folks and DOT folks and started to realize, God, nobody else is even on this. So we created a company on the heels of [00:08:00] that. I had to leave the law firm I was associated with, because they would have wanted part of the company if I was building it while I was still there.

And that all happened about a month before the pandemic hit. It was an unusual time to start a company, but I'll hit the pause button. I don't know if there's more questions about it. There was another part to your question, and I forget the second part of it. Yeah.

[00:08:19] Anthony: So my next part is: okay, so you get hired by who? Who hired you in this case, the driver of the vehicle, the insurance company, both?

[00:08:29] Mike Nelson: The insurance company, but also the driver.

[00:08:30] Anthony: Okay. So they want to find out what happened. But the thing we're always curious about, and I think it was the Wall Street Journal in the last month, they hired hackers to basically reverse engineer Tesla's hardware and software so they could see, hey, this is how we can have access to the data and we can figure out what's going on.

So how did you get access to the data? When you said dash cam footage, are you getting this from police? Are you getting this from the Tesla itself?

[00:08:57] Mike Nelson: In that instance, it was [00:09:00] off a dash cam initially. Yeah, but then, that's part of it, we eventually did bring a lawsuit against Tesla, and I can't get into what was shared during discovery.

There's a confidentiality order in place, but I can share very generally, because this has been Tesla's position all along. The instant an event happens in a Tesla, all the data in the vehicle gets beamed up to a cloud. The struggle everybody sees is how to get this out of the car, which is some of what the Wall Street Journal was doing. The easier route now,

and I do think I played a role in this, is that the consumer can access a portal at a Tesla cloud that they have as part of being a customer. And believe it or not, within 30 minutes the data you want, even if it's not accident-related, is yours. They will deliver that to you. Now they're [00:10:00] very limited in what they will give.

They call it only the consumer data, as opposed to proprietary data, and it's just scratching the surface of what data they're collecting. As part of the work we've done, we understand much better now the scope of the data, and that's what the Wall Street Journal was getting into. But if you want to look at an earlier version of what we were working on, the New York Times did an article

related to a different accident, but on the same concept of what we're doing. We gave them some video related to that other accident, and they reimagined exactly what we're doing. I think that article goes back to 2021.

[00:10:47] Anthony: Okay. And so, what you were talking about with the Rosetta Stone for these different OEMs: I have a Toyota, Michael has a Volkswagen, Fred's got a Subaru.

I believe you have a Tesla, right? Yeah. So they [00:11:00] all store data in their own formats. I'm sure for the video formats we can find some standards, but what other telemetry data are they storing? How are these guys storing it? Because there's no standard format saying, hey, this is all in XML, or hey, we encrypted it, or we did some weird hash on it.

Like, how is that done?

[00:11:18] Mike Nelson: Yeah, our primary focus has been on Tesla, because it was further ahead than anybody else, and once we figured out how to access the data and the video. And by the way, whereas in that accident I was talking about in 2019 we had to fight for it, now as a consumer you not only get your data, you also get your video of the accident from the cloud.

But getting back to different car manufacturers: yeah, it's all different. And some of it is being sent to a cloud a lot more than anybody's being honest about. Fred and I were at the TRB meeting, the Transportation Research Board meeting, where inexplicably some of the car manufacturers are getting up and talking about all the latest and greatest tech.

And they're all [00:12:00] talking about how a lot of their data is being sent from the car to a cloud. So there's a very inconsistent approach by car manufacturers to how much is happening that way and where it's being transmitted. But it has to be transmitted, because these cars are all using that data to tune the artificial intelligence aspect of the driving environment.

We know the data is going there, but I think some of the other car manufacturers besides Tesla are struggling, because they're working in many ways on old platforms, not new platforms. They're struggling with how to get that data up to a cloud for their purposes, not for the purposes I'm talking about.

[00:12:39] Anthony: And they're still making the claim, the OEMs, that this is our data: you bought the car, you gave us all the money, but we own the data.

[00:12:46] Mike Nelson: So getting back to Tesla: I sued them a number of times to get data. And then, I think this agreement's now stale, but it was, we'll give you the data, Mike, just don't sue us anymore.

But you can't tell anybody [00:13:00] you're getting it from us. Okay, fine. So then the California Consumer Privacy Act comes along. And the person I'm dealing with at Tesla said, we're no fools. We're going to give it to California residents, and you're going to walk into a court in Florida and say, is it fair that you give California residents data and not a resident of Florida?

And he said, so we've decided as a corporation that we're going to make it available to all consumers through this portal. I'm not sure where I am as far as answering your questions, but other manufacturers haven't gotten to that spot right now. But the same rules are going to apply, do apply, to them.

And I don't want to share too much about ongoing conversations with all the car manufacturers, but it's a push-pull kind of thing we're going through with them right now.

[00:13:53] Anthony: So, without getting into details, in your general sense, are they more, okay, we see this as the future, we'll figure out how to give [00:14:00] it to you?

Or are they more, nope, not going to happen?

[00:14:03] Mike Nelson: I think it depends on who you're talking to inside those organizations, which organization it is, and how enlightened that person is about forward-looking issues. With another car manufacturer the other day, I had to do the same thing, get a court involved, and they were like, no more, don't do that.

And we can see where this is heading. We do see the business need, and we do see this as customer-friendly, so yeah, they should probably want to understand the data. Part of what I envisioned for our organization is that the better people understand how an accident happened, the more confidence they'll have in the technology.

There should be a motivation by car manufacturers to make this data available without being forced to by the law. Now, there are so many people inside that organization, or those organizations, who are going to say, if I give you the data, you're going to sue me. Frankly, [00:15:00] I think that's not a very enlightened position to take. But as I told you before we started, I can talk a lot on this stuff,

so feel free to put a pause button up and I'll stop.

[00:15:12] Fred: No, I've got a question for you, Mike. A lot of the AV manufacturers are touting the safety of their vehicles, and yet none of them seem to have a quantifiable or objective basis for their claims of safety. What do you think about that?

Is there any reason for them to say that my car is safer than X, or my car is safe enough because of Y, or how does that work?

[00:15:37] Mike Nelson: First, I treat autonomous vehicles, which are largely in development, differently from ADAS vehicles, which are a bit more mainstream. But even ADAS is not as drastic as automated vehicles. Maybe for the benefit of the audience: think of an AV as a robot [00:16:00] that's going to do most, if not all, of the driving, versus ADAS, which has been slowly creeping into the current car fleets and handles lane keeping assist and adaptive cruise control and parking.

So the latter is what we call ADAS, which is more and more making its way into the fleet. The part of this that's a struggle is they're both saying these cars are safer. The Insurance Institute for Highway Safety just did a whole evaluation of ADAS-equipped vehicles, which is not as steep a climb as AVs,

and only one car, a Lexus, was found to have an adequate ADAS system. Now, it also depends on what automation we're talking about. If we're talking about automatic emergency braking, yeah, that does make cars safer. And the data is starting to show that that type of technology [00:17:00] will bring a car to a stop.

And I think a lot of people are starting to realize this when they're texting when they shouldn't be, or they're not paying attention, and the car slams on the brakes. We've seen some commercials on that. So to that extent, it's safer. Backup cameras that also will brake if there's an object back there: the data on that is very strong, and that's improved a lot of the backing safety issues.

As far as lane keeping assist and adaptive cruise control, I do not believe those cars are safer. And then you have to put the human factor on top of that. The folks at MIT, for instance, Bryan Reimer's group, if you know them, have done a lot of studies about how lax people get over time.

So picture you're the driver of a car with some of this automation on a highway. It's not even just in that driving event, which might be two hours, but over weeks and months of doing it, you tend to get [00:18:00] lulled into a complacency about being vigilant as you go. You're driving the car, and that gets into a host of other issues and product liability issues I'm happy to talk about. You basically tend to trust the technology too much, and that's a risk in and of itself.

[00:18:17] Fred: Yeah, that's exactly right. One of the issues that we follow, though, is that ADAS is a squishy concept, and it's self-defined by the manufacturers. You look at a company like Tesla, which in fact has fully autonomous driving, but it stinks and it's dangerous, and they just say it's level two and the owner or the driver is responsible. It's really not level three, but functionally it's got the same responsibility for the driver as any other kind of automated vehicle.

The drivers fall asleep, the car drives itself. There's a real definitional problem here, I think.

[00:18:58] Anthony: Let’s be clear. Tesla’s [00:19:00] lawyers define it as level two. Their marketing defines it as level five.

[00:19:05] Fred: But in there I work the steering wheel number four. On their website, they say that the owner, the driver, has to have complete capability to instantly take over, which of course is impossible.

In my opinion, that whole scenario increases the cognitive load for the driver, because you've not only got to be attentive to all of the operational aspects of the car, but you've also got to be attentive to the state of health of the computer that's driving your vehicle. So I don't see the safety advantage.

But narrowly, isn't this a problem that we need to have some authoritative definition of what the driving level is and what the consequences of that are? Just saying that it's an SAE level X, Y, or Z is insufficient for consumer purposes.

[00:19:56] Mike Nelson: I agree with you, Fred. I'm on the Society of Automotive Engineers [00:20:00] committee for EDR devices.

And this is something they even struggle with, because they take the position that EDR data should be a tool that evaluates level two, but level three is coming. And so with level three, they're creating new standards for what are called data loggers, which will have much more information in there.

This line between two and three is so fuzzy. Mercedes has supposedly introduced a level three, and it's only being used in a couple of states. I'm not sure that's even a true level three, because it's ring-fenced: there are only certain areas you can use it in, and it has to be used under a certain miles per hour.

Getting also to this point that Fred just raised, which I think is really important, and this actually is how Fred and I met: we were at a different conference, the Society of Automotive Engineers Government and Industry conference that happens every January. And there were three people up on a dais [00:21:00] and they were talking about a lot of these things.

And time after time you hear these panels say almost the same thing. Some of it is, manufacturers do tell you you have to stay alert, and they disclaim it in their materials. So, problem for you, not for them. And then the second point that was made, and that caused me to react, was when somebody from the audience asked a question and said, how are we going to adjudicate these problems?

And the response back was, that's going to be really hard, developing this legal system, and it's going to take a long time. And I'm at the front of this crowd, and I busted a move. I'm running around, there's like a thousand people, and I grabbed the microphone. They're like, we don't have more time for your questions. I'm like, we're helping sponsor this event, give me the frigging microphone.

And so I said, this is not a question: it is wrong to say manufacturers can disclaim responsibility in [00:22:00] their manuals or their driver customer material. I said, that's crap. And then, I don't know if I used the word crap, I may have.

But then I said, this whole idea that it's going to take a long time to get these legal systems in place: I agree with that, but I've been hearing this now for 10 years. We better get busy. This has to be done now. We've got to start working on it now. And so I sat down in my chair wondering if I had overstepped my bounds, and all of a sudden there was this guy named Fred poking at my shoulder saying, was that you that just got up and said that?

So that's how Fred and I met. But those are my positions on some of the things we're talking about: those disclaimers in the materials and these manuals are wrong. Ultimately, if you get into a court system, I don't think they're going to matter, because product liability doesn't allow for that kind of nonsense.

But it's what everybody says, and the car manufacturers are definitely saying it: we told you, [00:23:00] this is your problem if it doesn't work. And I'll give an extreme example of this, and I'll stop after that to let you guys talk.

[00:23:06] Fred: So you see, Michael, I’m an excellent judge of character. I don’t care what Anthony says.

[00:23:13] Mike Nelson: So if you're using, and again, I don't want to make this all about Tesla, but they're just the best example of some of this. Tesla has this feature.

I've seen numerous accidents where the Tesla, pulling out, not backing up, instead of making an appropriate exit out of the parking space, cuts the corner too short, hitting objects or other cars. There have been near misses of injuries to people. And then when you go to Tesla and say, this is yours, this is a car mistake: no, that driver is supposed to be using the phone to stop the car.

There's no way that you can simply pull your finger off the button in [00:24:00] time when you see the car is not managing the exit from the parking space. That's how silly it is. Well, you have to be ready. No, that's not the way the law is going to work, and that's not even fair.

[00:24:14] Fred: The research we've seen shows that it takes anywhere from 10 to 30 seconds for a driver to take over control of the vehicle. And that's when they're fully engaged: they're in the driver's seat, they know they have to take over. But they may be playing cards with the person next to them, or who knows what.

So there’s a big range, but in any case, 10 seconds in a car is an awfully long time. A lot can happen within that. We’re going to expand this discussion to our friends at Uber and Cruise once we get to the section on Gaslight Illumination, so stand by for that. I’m sure you’ll have some comments.

[00:24:49] Mike Nelson: Yeah, the human factors piece. There was a study done by the Department of Transportation along with some private sector companies in 2015, and [00:25:00] the way I read the study, and again, I'm not an engineer, Fred, you could probably do a better job of digesting this, but what I was reading is that on balance it was 7.2 seconds to execute a takeover response. Seven seconds: in an accident scenario, you might as well have it be seven hours. It's over in seven seconds. It takes less than two seconds, but not much less, just to even realize you're in a problem setting. That's not counting the time to come up with what is going wrong,

and then come up with a plan, and then execute on that plan to get out of a problem. So you can easily see seven-plus seconds to avoid a disaster.
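To put that 7.2-second figure in perspective, here is a quick back-of-the-envelope calculation (simple kinematics at constant speed, not taken from the DOT study itself) of how far a vehicle travels while a driver recognizes the problem, plans, and responds:

```python
# Distance covered during a takeover delay, assuming constant speed.
# The 7.2-second response time is as quoted in the episode; speeds are examples.
MPH_TO_FT_PER_S = 5280 / 3600  # 1 mph ~= 1.47 ft/s

def takeover_distance_ft(speed_mph: float, response_s: float = 7.2) -> float:
    return speed_mph * MPH_TO_FT_PER_S * response_s

for mph in (25, 55, 75):
    print(f"{mph} mph -> {takeover_distance_ft(mph):.0f} ft in 7.2 s")
# 25 mph -> 264 ft, 55 mph -> 581 ft, 75 mph -> 792 ft
```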

[00:25:47] Anthony: But not being a lawyer, I don't understand this discussion around the law having to catch up with this, because there are already product liability laws, there are already traffic laws. All these laws exist. We're just saying, okay, it's not [00:26:00] Bob behind the steering wheel, it's the Waymo driver, which is just some software and hardware behind the wheel.

Is the law having to catch up on figuring out who we give the ticket to? I recommend the executive staff. Or where is the law missing? I don't understand.

[00:26:17] Mike Nelson: I'll give you a very simple example of how I think the law is lagging behind, besides the regulatory standards that haven't been built for any of this stuff.

There are very few that are apropos for this technology; the Federal Motor Vehicle Safety Standards don't address any of this stuff. But I was involved in a case where a woman was driving a Tesla, she was in Full Self-Driving, and the car will make a lane change without the human actually making the lane change.

The car will do it itself. So she's being aggressively followed by a car whose driver thinks she's going too slow. And she starts to move into the right lane, again [00:27:00] by the vehicle, not her. The vehicle gets spooked by a tow truck that's hiding under an underpass, so the car moves back into the left lane.

It's only two lanes, and the car slows from 77 to 55. That's dramatic to a guy coming up on the back of that car. So there's a rear-ending accident. That would be 100 percent liability on that car; that would be negligence. There's an argument as to whether or not somebody slamming on their brakes, which she didn't, could be held responsible for the accident.

But now the car arguably misbehaved when it slowed down so dramatically, and it didn't have to. So how do you litigate, in the common way we think of accidents, between negligence with that other vehicle and then shifting some of that risk over to the manufacturer? [00:28:00] And I've noticed in some of your materials how much you guys object to the arbitration requirement some manufacturers operate under.

So in that scenario, this person unwittingly had not unchecked the box when they purchased the car, so they were subject, arguably subject, to the arbitration requirement. If you have a dispute with Tesla, you have to go to AAA arbitration for that. And then how do you fit in the other side of it, which has to be litigated in state court?

So I think this blended system of legal risks, and getting a better sense of how to move product liability and negligence concepts together, is where the law has not developed. And yes, we can look to what's been done in the past, because surely there have been negligence and product liability lawsuits in the past.

But we're talking about many accidents now, and you won't find any jury charges about automation and human factors that [00:29:00] are really meaningful. We don't have a judicial system that's issued any holdings on this stuff. We don't have a judicial system that's even dealt with who owns the data.

And ownership of data is not even the right way to look at it, but we can talk about that later on if you want.

[00:29:16] Anthony: Okay. Now I have a better understanding of what's missing there. I just still think we can give tickets to the executives every day.

[00:29:25] Michael: Yeah, I'll ask a quick question here.

This is what's going on now: even Tesla has some capabilities where some of the decision making their cars are using is based on artificial intelligence. How much harder does it become when you have to show that an artificial intelligence's decisions were negligent?

Is that something you can even do? Can you go back into the code, or can you go back into the data, and even point to a moment where [00:30:00] an artificial intelligence failed? Or does that bring up a whole other problem of proof and evidence?

[00:30:06] Mike Nelson: Yeah. So I’ll reference a different accident that we worked on.

This gentleman's driving a Model Y. He's in the extreme left lane, and he's doing about 75 on a highway. A tractor-trailer moves from the extreme right-hand lane to the center lane, not his lane. And the Tesla gets spooked by that moving tractor-trailer and aborts Full Self-Driving by moving him into the median.

[00:30:36] Anthony: Oh my god.

[00:30:37] Mike Nelson: And fortunately there's a safety cable system that stops his car, but totals it out. The other guys in my company aren't lawyers, and they're like, how do you know that the car caused this? I can see the data. We can see in the data that the car reacted to the truck. What caused the car to decide to move into the grass?

[00:31:00] I don't know, but that's not right. But I can tell you to a certainty that Tesla is getting this information all the time about events, and they're figuring out what happened in the car and what the car was doing, and not just what the human beings do, because they have to fix these so-called edge cases.

[00:31:22] Anthony: So Tesla clearly has some internal system where they can interpret this data, and it gets fed in, and they get to see, okay, this is what happened, this is what occurred. So they have a clear understanding of what happened. Is there a case where you can basically say, hey, we need that system? You guys essentially have the rule book.

This data is essentially useless without the rule book.

[00:31:43] Mike Nelson: Yeah, so there is. Again, that's part of the law, and it's also part of the science of unwrapping this stuff. The Banner case in Florida is a really important case.

[00:31:54] Anthony: I’m sorry, what’s the Banner case?

[00:31:56] Mike Nelson: So the Banner case is one of two almost identical cases.

There's two cases [00:32:00] in Florida where a Tesla goes under a tractor-trailer.

[00:32:03] Anthony: Oh, and

[00:32:04] Mike Nelson: it decapitates the driver. And so the second one was Banner, and that's been in active litigation for a couple of years. And in that case they got a lot of discovery, but what hampers other people from understanding what's going on is that Tesla's always going to get a protective order in these cases.

And we can talk more about Banner later on if you want, but there’s a limited ability to see what’s being discovered in these cases. And I think that’s wrong. And, again, I’m talking about transparency,

[00:32:36] Anthony: right?

[00:32:36] Mike Nelson: I think confidence in these systems will be higher if people and car manufacturers can be on the same page about what took place.

Human beings don’t know. Now, that guy, the person who was involved in the accident died, so it’s almost unknowable what he was thinking. There’s some evidence that he was playing a video [00:33:00] game or watching a movie or something like that. So he wasn’t paying attention anyway. But why did the car not see the truck?

People back at Tesla understand that. Do we as a society understand that? Not really.

[00:33:13] Fred: Mike, I'm going to take a small issue with one thing you said. You said that once an edge case is identified, Tesla's got to fix it. I don't think they do, because in particular, if you look at Banner, two years later essentially exactly the same thing happened in New Jersey with exactly the same consequences.

The driver there was once again decapitated in the collision. So I don't think there's any evidence that Tesla's looking at these things and saying, this is egregious, we need to fix this. I wonder if Tesla's got an economic decision going on, to say, do we really need to bother to fix this,

because it doesn't happen a whole lot? Have you got any insights into their decision process?

[00:33:57] Mike Nelson: Yeah, I do. There was an article published [00:34:00] a couple of months ago, and I forget who published it, Fred, and I'm not advocating whether or not they should or could or whatever, but they do pay attention to events.

And this is for their own benefit. It's not so much that they're trying to eliminate risk; they're trying to generate a car that can think. And so they were downloading lots of data from all the cars and all the miles, and then somebody back at headquarters said, why are we paying attention to all that?

Let's just look at our best drivers and see when they run into trouble. And then we'll digest what went wrong when they got into trouble, usually when they disconnect from Autopilot, or it could be an accident, and then they figure it out based on that, because it's now a much smaller sample size. So by all reporting they do take those kinds of events [00:35:00] seriously.

And arguably they are making, in their mind, a safer car or a car that handles these situations better. But I'm leaving plenty of room for them to say, nah, it's not worth it, we're not going to do that.

[00:35:13] Fred: We know from Anthony’s reporting earlier that they pay a lot of attention to the roads that the CEO happens to be driving on.

[00:35:21] Anthony: That wasn't my reporting. That was, I believe, Ars Technica or the Washington Post or the Wall Street Journal. I just read things and spit them out and claim they're mine. I'm like ChatGPT. Okay, look, I'm just taking somebody else's work and saying, hey, it's mine. Fred believes it. It works out great. Hey, listener.

Have you gone to autosafety.org and clicked on donate? You have? Ah, you're wonderful. So what we were talking about before is, okay, a lot of this stuff is hidden through discovery, but that's standard in the history of tort law. None of this stuff comes out until somebody doesn't settle a case and they actually go to [00:36:00] trial, and then in trial it becomes public.

Have any of these gone to trial and become public, or does Tesla manage to be like, this is public, but let's keep this on the DL?

[00:36:09] Mike Nelson: Tesla's tried a few of these cases and surprisingly has won some of them. But I think when they see they're vulnerable, they settle. This Florida case is going to be important, because the trial court is allowing punitive damages based on the representations made by Tesla.

And the judge found that a trier of fact could determine that Tesla is exposed to punitive damages, which means the jury is going to get that question. That's appealable; the judge allowing punitive damages in Florida is appealable immediately. So that's now bouncing its way through the Florida appellate courts.

But I can tell you that you're going to be able to sell popcorn at that trial. If it goes forward, I'll probably eat some popcorn, because I just want to be able to process, without the veil, the [00:37:00] shroud of secrecy, how this goes. But I suspect, if punitive damages are allowed, I imagine Tesla's going to settle that case, because it's not just about

a car. It's about the fact that a lot of what happened at the senior levels of management allowed this to fester.

[00:37:17] Anthony: No, they'll disclaim it was one lone rogue engineer. I think Volkswagen tried that with Dieselgate. I, so, both of you and Anthony... I didn't know. Sorry.

[00:37:25] Fred: I didn't know if we had enough time for the whole Gaslight Illumination, but I've got one point that I want to put in here, because it's very much on point, if we can do that.

So there's a blog post that says Uber and Cruise are to deploy autonomous vehicles on the Uber platform. It was published August 22nd, and the announcement said that Cruise is on a mission to leverage driverless technology to create safer streets and redefine urban life, says Marc Whitten, CEO of Cruise.

Quote, we are excited to partner with Uber to bring the benefits of safe, [00:38:00] reliable autonomous driving to even more people, unlocking a new era of urban mobility, end quote. So the questions arise. Safer: okay, it's one of the words that they use, and that's an aspiration, so we'll give them that. But the question is, how are they going to do it?

And I guess you would say, probably by deploying safe vehicles. So my question is, where will they get them? There's no federal or state definition of safety. And when they talk about risk, it has two parameters, consequence and likelihood. For any AV failure the consequences are high, because you've got a car and you've got passengers and people, all that.

Really, the only question is, what is the likelihood of occurrence? With a joint effort by Uber and Cruise, you've got to ask: if it's as frequent as Uber sexual assaults, there's going to be a big problem. So what does safety mean? Per ISO, the International Standards Organization, standard [00:39:00] 26262, quote, it's the absence of unreasonable risk, end quote.

So if that's the definition you're using, you've got to ask who determined what is acceptable, and who has published the Uber-Cruise definition of reasonable risk so that we know what we're dealing with? We'd like to know the limits. If you go with another definition of safety, per Merriam-Webster, it says the condition of being safe from undergoing or causing hurt, injury, or loss.

Then you've got to ask, what does safe mean? And per Merriam-Webster, it means free from harm or risk, unhurt, or secure from threat of danger, harm, or loss. Clearly there's no objective basis for saying this. I think the Uber-Cruise announcement doesn't meet any of the commonly accepted definitions of safe.

They really owe the public their definition of safe and safer, so that people can objectively evaluate this. And here's the [00:40:00] Gaslight Illumination part: since their use of safe is untethered to truth, objective evaluation, or any commonly accepted definition of safe, the Uber-Cruise announcement affirmatively meets the Harry Frankfurt definition of bullshit, and therefore is my candidate for Gaslight of the Week.

[00:40:17] Anthony: Fred, I'm flattered that you'd choose my constant nominee of GM Cruise.

[00:40:24] Fred: It has Uber in it. It slipped over, so I'm not infringing on your territory.

[00:40:25] Anthony: I don't know how I feel. Michael, what do you think? Do you think he's stealing my thunder?

[00:40:31] Michael: Yeah, I do. Yeah. I was worried about that, but you seem like you’re handling it pretty well.

[00:40:37] Anthony: Externally, I can hide my emotions. Internally, it's rage.

But that is the excellent question: this is just marketing nonsense. I think we can all agree on that.

[00:40:50] Fred: Sure. But what is the developer's responsibility to define the safety limits they're imposing on the public, to the public that's [00:41:00] supposed to be consuming their goods or services?

What is their responsibility, and what are the consequences of their not publishing their objective limits, or even their aspirational limits?

[00:41:12] Anthony: Do these companies have that internally, inside their hardware and software development process? Do they have guidelines of, this is safety? When I worked at large software companies, we had definitions of, we have to meet X, Y, and Z.

If you're outside of these parameters, this lovely man named Steve Jobs will meet you in an elevator and scream at you: you have to do this. I get the sense that these AV companies are just, hey, let's imagine the Jetsons, and not have anything behind it.

Or am I just being naive?

[00:41:48] Mike Nelson: I think the standard that Fred mentioned, 26262, is a good start, but it's nowhere near enough. And these are all good examples, Anthony, of when you asked, what do we [00:42:00] mean the legal system has to develop? All of this is in some ways part of the legal

management of risk. I think it's also worth noting, and I'm sure others will note this, that we're talking about Uber and we're talking about

[00:42:16] Anthony: Cruise,

[00:42:17] Mike Nelson: and both companies have had terrible situations that made them stop and supposedly divorce themselves from this stuff. And now they're back at it again.

So their history, their track record on this, is not great. But also we've got to bear in mind that what they're doing is putting these cars in very narrowly ring-fenced environments, traveling at very slow speeds. That doesn't make them safer, but it's less likely to cause a horrendous problem.

Although there have been instances where cars have dragged people down the road at very slow speeds, which is still not a very safe situation. But I do think they'll [00:43:00] continue to work at this, because there's just too much money in developing a fleet that picks you up and takes you somewhere, using buses in an urban environment, which is what we're talking about, even if they're minivans, and drops you off at a destination: something between a taxi and a bus. But do I think they're going to be able to establish what safe is to everybody's satisfaction?

No, because safe right now, for all the reasons that Fred talked about, is hard to quantify. And so you see some of the positions taken by Senators Markey and Blumenthal, that these vehicles have to be as safe as a human being. It's really hard to quantify what that means.

[00:43:50] Fred: And whether it's a drunk human being or a sober human being. Hey, I don't know if you saw this announcement, Mike, but Waymo has now been approved

[00:44:00] for its services on the interstate highways in San Francisco. So where before they were restricted to geofenced, slow maneuvers and slow traffic, they're now basically getting carte blanche to zoom down the highway. So I'm very concerned about that.

[00:44:19] Mike Nelson: I am too, Fred. It wasn't clear from what I read about it whether there was a safety driver in the vehicle, but even Uber's safety drivers didn't manage the risk well, because they were lulled into a false sense of security, and they were after miles, not necessarily safe miles.

So we've seen that movie before. And then if it's not a safety driver in the vehicle, we're talking about a safety driver off-site. I can't imagine that's as good; I'm sure there's some rationale that it is, but it's not. There's too much going on: other cars with drivers waving, saying, let me into the lane.

I want to see the machine that does that, even on a highway. I spend a lot of time in New York City, and if you walk the streets of New York City, [00:45:00] at any intersection you have food delivery guys on e-bikes on the sidewalk going the wrong direction, you have jaywalkers, people pushing food cart trailers out in the middle of traffic.

Tell me there’s going to be a vehicle that can manage all that. Maybe someday, not in my lifetime.

[00:45:20] Fred: All right. I saw a traffic sign just the other day. It said speed limit 25 in a school zone when children are present. So how in the world is a computer driven vehicle going to process that instruction?

Number one, or even recognize it. Number two, how is it going to look around and say, oh, this is a fire hydrant, that's not a child? This is just an enormous task, to do things that are very routine for human beings. I've got very little hope. I'm old, but I don't expect to see any of this happen while I'm alive.

[00:45:56] Mike Nelson: As a lawyer, you would hear people say, I swerved to avoid a squirrel. But that's just [00:46:00] a squirrel; are you supposed to swerve to avoid the moose? So where are the shades of gray? How big is the animal that you're supposed to plow through or plow over, versus the one you're not supposed to hit?

There are a lot of those questions. We're not even talking about another vehicle now, we're talking about a wild animal, and

[00:46:19] Anthony: where’s the food delivery driver? Is he bringing me my food? I want that guy alive. The other one, I didn’t order Indian food tonight. How do you

[00:46:26] Fred: discriminate a moose from

[00:46:27] Anthony: somebody at Comic Con?

Ayo! The more concerning thing about Waymo, and I don't necessarily think they have safety drivers on the freeway, is that anytime I see a Jaguar moving at highway speeds, I want to get away from it.

[00:46:42] Fred: Yeah, my understanding is they do not have safety drivers. They’re going full autonomous on the freeways in San Francisco.

[00:46:49] Michael: I think they were going to have safety drivers for a limited period of time before moving into no-safety-driver territory.

[00:46:56] Anthony: Yeah, one of the product people at [00:47:00] Waymo posted on LinkedIn saying, hey, we're opening this up to just Waymo employees first,

and then we're going to go free. And in the comments thread, if you're on LinkedIn you can see this, there's a guy named Fred Perkins, who asks: did you guys get permission from everybody else who you're putting in as part of this test?

[00:47:15] Fred: Great point, Fred. Who would say something like that on LinkedIn? It was you. Oh, okay. Did the person changing a tire on the side of the road get to give permission?

Oh, okay. Somebody changed the tire on the side of the road to get permission,

[00:47:23] Anthony: so you’re both members of the SAE and this is going to be my naive question of the week. So all of these cars are recording data. So they all have some version of an event data recorder, the car version of a little black box, we’ll say.

Is there any chance that data will be required to be stored in a standardized format?

[00:47:45] Mike Nelson: It’s in a standardized format now.

[00:47:47] Anthony: Oh, it is? So like every manufacturer uses the exact same things?

[00:47:50] Mike Nelson: No.

[00:47:51] Anthony: So that’s what I mean

[00:47:53] Mike Nelson: No, it is, but there are manufacturers that exceed the minimum aspects of it. This is [00:48:00] all the subject of Part 563, which is the federal standard.

So they have to account for it. It's very similar to what they do with windshields: they don't tell you how to build a safe windshield, they just tell you what the parameters of safety mean. So the recorder must do this: it has to record five seconds. Some manufacturers are doing more; some manufacturers are recording some automation.

So there are some minimum standards to this, but the EDR is an antiquated device. It was never built for this. And if you boil it right down to what's required, there are probably 15 signals, because there's some redundancy in there, and it's recorded at two tenths of a second at its closest interval.

In a Tesla, by comparison, the data is being recorded at one one-thousandth of a second.

[00:48:51] Michael: Milliseconds. Yeah.
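For a sense of scale on those recording rates, here is a small sketch comparing sample counts over a five-second event window at the two intervals mentioned (two tenths of a second versus one millisecond). The window length and intervals follow the conversation; actual EDR contents vary by manufacturer and by what Part 563 requires for each signal.

```python
# Samples captured per signal over a 5-second event window at two intervals.
# Interval figures follow the discussion above; real EDR contents vary by vehicle.
def samples(window_s: float, interval_s: float) -> int:
    return int(window_s / interval_s)

edr = samples(5.0, 0.2)        # legacy EDR minimum: 25 samples per signal
high_res = samples(5.0, 0.001) # 1 ms logging: 5,000 samples per signal
print(edr, high_res, high_res // edr)  # 25 5000 200 -> roughly 200x more detail
```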

[00:48:53] Mike Nelson: My saying about this is, a lot happens in a second. Sequentially, to tell the [00:49:00] story, you've got five seconds of data. It's only recorded in the event of an event, and it's looking backwards, and it covers a very narrow set of circumstances.

It was really supposed to be about airbag safety and automatic emergency braking and dynamic stability control, DSC. That's the committee I sit on, and it's mostly populated by OEMs, and they start talking about, oh, you don't know how expensive it is to expand this thing, and five seconds is enough. And then you can look at what this data logger concept is, which is going to capture video if it's available.

It's going to capture what automation was in use. But this is something I think you guys might find interesting, because it's an offshoot of what we're talking about here. Several states have created state-based EDR laws, not just the federal ones. I would [00:50:00] say half of those states that have state-based EDR laws describe an EDR as what we would think of under 563, that little disc, that little device in the car.

But they also describe it as: if the vehicle's capable of transmitting data from the car to a cloud, that's an EDR.

[00:50:20] Anthony: Okay.

[00:50:21] Mike Nelson: We have federal law saying EDR data belongs to the consumer. So if you wrap those two together, you get a very expansive view of what an EDR device is and, by virtue of that, who's entitled to get that data.

The OEMs, when I brought this to their attention in the committee, were like, really? We've got that? Yeah. So there is state law that also has to be evaluated as we look at these kinds of things about EDRs and what the current status is right now.

[00:50:51] Anthony: Okay, so there's no larger standard moving forward to capture all of the data that's available now.

I'm thinking of an analogy: [00:51:00] in the music world, there's something called the MIDI standard. It was designed in the late 70s and it has lasted up till today, and it's a very open kind of standard. Every manufacturer that uses it uses the exact same thing. They haven't changed it; they've made an update to it now, but they haven't changed it forever.

It's incredibly robust and allows a lot of data capture. I'm imagining something like this for auto manufacturers, where it's: we know what data is available now, we know what could be added in the future, and it will last us decades. But the OEMs are, let's just do this stuff proprietary for our own purposes.
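To make the MIDI analogy concrete, a vendor-neutral event record might look something like the sketch below. The field names and structure are purely hypothetical, offered as an illustration of the idea rather than any existing OEM, SAE, or Part 563 format.

```python
# Hypothetical sketch of a standardized, vendor-neutral crash "event record".
# Field names are illustrative only; no existing OEM or regulatory schema is implied.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Sample:
    t_ms: int                  # milliseconds relative to the trigger event
    speed_mph: float
    accel_pedal_pct: float
    brake_applied: bool
    steering_angle_deg: float
    automation_state: str      # e.g. "manual", "acc", "lane_keeping", "full_automation"

@dataclass
class EventRecord:
    vehicle_ref: str           # pseudonymous vehicle reference, not the raw VIN
    trigger: str               # e.g. "airbag_deploy", "aeb_activation", "hard_braking"
    samples: List[Sample] = field(default_factory=list)
    video_uri: Optional[str] = None   # link to synced footage, if available
```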

[00:51:38] Mike Nelson: So it's funny you mention that. I think my next mission will be to develop, with the appropriate community, a set of data that's accessible to understand what happened in the event of an event. And some of the tension on that, though, is what do you do about data privacy? Who has rights to [00:52:00] do this?

And who has rights to get that data, not just create it and disperse it? This is some of the law stuff, Anthony, that I think we really need to work on as a society, and I think we have to start working on it now. Think about it this way: if you're driving a vehicle and the vehicle is recording your speed, does that mean nobody else can access what your speed was at the time of the accident?

Don't you have some societal obligation to disclose what your speed was? And so that's a bit of a hairy Gordian knot that you get into. How do you balance some of these divergent interests: people can see what you're doing in the data, versus, I have some privacy concerns about people seeing where I'm going at any one time. And what about the cockpit cameras that are monitoring human [00:53:00] responsiveness to certain environments?

So we were just working on a case a couple of days ago where a person alleged that they were struck by a vehicle that was making a left-hand turn in front of them. It was a secondary road, so two lanes in each direction, shopping districts on both sides, and this gentleman took off. He happened to be in a Tesla, but the Tesla didn't do anything wrong here.

This gentleman leaves a red light at full acceleration, and within nine seconds was up to 104 miles an hour. And the vehicle that was attempting to make a left-hand turn stopped, didn't hit him, and he then went off the road and took out a telephone pole. He was alleging that he had been struck by the person making the left.

He wasn't. It's clearly shown in the video, but nobody had actually understood how fast he was going. And there's no way you can drive 104 miles an hour in that environment [00:54:00] from a standing stop and be safe, not negligent, whatever. And so it does show you there's this space we have to make as a society to say, what the heck went on?

And don't we have access, even if that person doesn't want to give it up, to take that information and understand what took place? That level of transparency, I think, is critical and is owed to the public, so they can evaluate legal issues and whether the equipment's right and who's right or who's wrong. And I do applaud the NTSB, which after the Mountain View analysis said we have to have greater transparency.

They don't really talk about who has rights to the investigative data, but they say people investigating these accidents need to have access to it.
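Just to gauge how aggressive the driving in that last example was, here is a rough check of the quoted numbers (0 to 104 mph in about nine seconds, treated as constant acceleration purely for illustration):

```python
# Back-of-the-envelope check of "0 to 104 mph within nine seconds".
final_mph, t_s = 104, 9.0
final_mps = final_mph * 0.44704            # mph -> m/s, ~46.5 m/s
accel = final_mps / t_s                    # ~5.2 m/s^2 average
g_frac = accel / 9.81                      # ~0.53 g, held for nine seconds
dist_ft = 0.5 * accel * t_s**2 * 3.28084   # ~686 ft covered while accelerating
print(round(accel, 1), round(g_frac, 2), round(dist_ft))
```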

[00:54:56] Fred: We're here to help, Mike, but that depends upon our listeners doing what, [00:55:00] Anthony?

[00:55:02] Anthony: Going to autosafety.org and clicking the donate button. Click it once, click it twice.

Just click it once, and then put in your credit card information. Clicking it more than once, you're just reloading the page. It's just weird; why would you do that? Michael Brooks, I have a question for you. Maybe, or actually maybe the other Mike will have this answer. So, what you were just talking about there in terms of privacy and whatnot: I thought, as a legal concept, once you're out in public and you're seen by other people, your expectations of privacy are nil. I can't be private, I'm driving a car.

As a legal concept, once you’re out in public, you’re seen by other people, your expectations of privacy are nil. I can’t be, I’m driving a car.

[00:55:31] Michael: That would be my argument, and that's always my argument, particularly if you're violating the law, or the speed limit in this guy's case. But yes, you don't have an expectation of privacy if you're traveling on a public road, would be my argument there.

[00:55:45] Anthony: That’s what I thought. I agree. Counselor. Objections, Fred? None here. Perfect. Great. Case dismissed.

[00:55:51] Mike Nelson: I think there's a social compact here too. If you're using the roadways and you're potentially exposing other people, not [00:56:00] just other cars but pedestrians, you've made a compact, almost like a DUI concept.

If you're operating this vehicle and the police have good grounds to suspect you're inebriated, and you refuse a breathalyzer test or some other test to evaluate whether you're sober or not, you lose your license, because there's a certain element of this that is for the good of society, for the safety of society, as opposed to your individual rights.

But again, Anthony, I can go on all day about the ways the legal system has to evolve around these issues. These are very difficult issues. But it gets back to my bigger point, which is, we better start working on it now. And I do think getting to data standards, about what should be subject to interrogation

in the event of an event, is where we have to go now. Draw that line for me, and I'll have ten different people disagree that the line's in the right spot. [00:57:00]

[00:57:00] Anthony: Fair enough. Ah, I think we've taken up enough of our listeners' time at this point. Does anyone have anything to add before we wrap up?

[00:57:09] Fred: Only that we've had some discussions about duty of care, which perhaps you've heard on our podcast, Mike, and which seems to bear on this issue as well.

[00:57:20] Anthony: We're going to have you come back and talk some more. How's that sound? That's great. All right. Thank you, Mike Nelson of QuantivRisk, for coming to join us today and giving us a whole bunch of questions for the next time

we bring you on. So thanks, listeners, till next week. Next week we'll have more gaslights. If Fred chooses GM Cruise again, I'm driving up to Massachusetts and having a discussion with him. And we'll have recalls. Oh, we missed a bunch of recalls this week, and all sorts of fun stuff from the news. Until next time.

Bye bye.

[00:57:49] Fred: All right. Thank you.

[00:57:50] Anthony: Bye bye.

[00:57:51] Fred: Yes.

[00:57:52] Anthony: Hi everybody.

[00:57:52] Michael: For more information, visit

[00:57:56] Mike Nelson: www.autosafety.org.