Trust, but regulate

The autonomous vehicle industry has its own PR team that is trying to compete with our Consumer Autonomous Vehicle Bill of Rights. It’s a bunch of hogwash that they regurgitate every few years. A few Senators who can see through this have written a nice letter to NHTSA telling the agency to regulate AVs, British Columbia bans AVs, and another Tesla driver makes the fatal mistake of believing his car is an AV rather than lipstick on a pig. Fred gets into the dangers of AI through the story of mushrooms. Plus a recall roundup that starts with a… Tesla.

This week’s links:

Subscribe using your favorite podcast service:

Transcript

note: this is a machine-generated transcript and may not be completely accurate. It is provided for convenience and should not be used for attribution.

All right. Welcome to the show. Good morning, world. Yes. Yes. So, hey guys, something we’ve talked about a number of times on this show, I got to witness firsthand outside my window the other day: a baby trapped in a locked car.

Michael: Oh, no.

Anthony: Yeah, well, okay, it turned out fine, because I live in Manhattan and the fire department is always just a few minutes away.

But we hear all these sirens, look outside the window, and see the fire department at a Mercedes SUV, using little wedges to try and pry open the front doors and these little wire hooks to try and pop the handles. It took them a while. The baby was fine. The baby slept through the entire thing.

Even with the car alarm going off, the kid stayed passed out. But it was my second incident I got to see out my window that day. The other one was two people trying to make the same left turn at the same time. Ha! Never ends well.

Fred: That’s, no. Hey, my daughter in Philadelphia, outside of Philadelphia, experienced one of the things that we were talking about.

Or, at least a close neighbor. A Kia in a garage burst into flames spontaneously and burned the house down. Oh my God! Yeah. Everyone okay? Okay. Everyone’s okay. House is completely gone.

Anthony: Oh, that’s horrible. Hey! For those of you who just tuned in, this is not the world’s most depressing podcast. Because all of them have happy endings.

Everyone gets out alive. So let’s, we have this thing called the Consumer Autonomous Vehicle Bill of Rights. We’ve been working on it for, say, over a year. Is that about right? Yep. And hey, there’s this other group called the Autonomous Vehicle Industry Association. Ooh, they use very nice blue and green color combinations, as if it was the web back in the late 90s.

And they have something called the Trust Principles to advance the safe operation of AVs in communities across America. This is corporate nonsense, just putting that out there. They’re like, hey, these auto safety nuts have something. I bet we can come up with something better. Ooh! Remember fourth grade, where you had to take a word and create an acronym based off of it?

What’s a good word we can use? Lies. No, that’s not gonna be it. Bullshit. No, too many things to go with. Trust. And so they have their Trust Principles. And my favorite part, and we have a link to this, is that throughout, it’s peppered with “we’ll disclose information as required by law.” For every little thing, they’ll disclose as required by law, realizing that there are very few laws around what they have to do. As we’ve just talked about, with autonomous vehicles it’s the Wild West, but they’ll disclose whatever you need, as required by law.

Fred: Well, the industry trots out these bullshit organizations whenever they confront public disinterest or distrust. There was one called the AV Safety Coalition that popped up a couple of years ago, which is equally bullshit, but it seems to be a vehicle for getting sub rosa information into some of the standards, because it poses as a technical standards group when in fact it’s nothing but an advertising group. And the part of this AVIA Trust Principles document that I thought was notable is their first sentence, which says: AVs are being developed to make America’s roads safer, offer additional mobility options to people who do not currently have them, increase accessibility opportunities for people with disabilities, and ease supply chain challenges by making goods movement more efficient and sustainable.

But the truth is that AVs are being developed to increase the rate of return on investment by the automotive industry. The rest is just aspirational window dressing. And I guess, as Goebbels showed, if you make a lie big enough and repeat it persistently enough, people will start to accept it.

Michael: Yeah, that’s what they’ve been doing: repeating the same thing over and over again. A lot of what’s behind this “trust us” pitch is something the industry has used for decades now, well over 50 years, mainly when they want to fend off regulatory action by NHTSA or a law by Congress that might actually hold their feet to the fire and make them do some of the things they’re saying they’re going to do.

So, it’s a big reason why Europe has a different type of certification system than we have in America. In Europe, the authorities can actually look over some of the vehicle performance and the vehicle tests before the parts or the vehicles are approved.

In America, we have self-certification, where we have to trust the manufacturers. And it’s why, for instance, a modern example might be why it’s taken so long to get good automatic emergency braking into vehicles. We’ll talk about that more later today, but it’s taken well over a decade to require a technology that’s proven to save lives and to require it to go into vehicles.

So, I guess our point to the AVIA, which is basically Waymo, Cruise, Aurora, Zoox, all of the big companies coming together and forming a policy group that does things like testify before Congress on their behalf, is that it all comes down to actions speaking louder than words, right?

So far, all the actions from industry show they want to proliferate the tech and make money off of it more than they want to ensure safety. And the public trust, or mistrust, I should say, has been relatively stable at around two-thirds of the public since they started putting these things on the road.

People aren’t being persuaded, and that’s not because they’re not telling us the right things; it’s because they’re not showing us anything. They’re not showing us the data proving these vehicles are safer than the really bad human driver they keep talking about. They’re not showing us how these things are going to be life-changing for people with disabilities or people who lack mobility options. They’re not showing us how these things are going to be sustainable, especially when they’re clogging up roads and causing even more traffic.

So, we keep getting all of these, like Fred says, aspirational statements from the industry. But ultimately there’s no proof behind anything they’re saying.

Fred: Well, let’s get back to the principles, Michael. Here are the principles. It spells out their acronym: Transparent interactions with government officials and the public. Responsible integration of AVs into communities and deep engagement with law enforcement and first responders. Upholding the highest cybersecurity and privacy standards. Safety-first culture and governance. Transportation policies that will increase safety and public trust of AVs. Well, come on, if they were serious about this, they would have done all that before they put the damn cars on the road.

They’re coming in after the fact saying, oh, now that we’ve got these things out there, we might as well spend some energy trying to make them actually work safely and not kill people. It’s astonishing that only now, after years of operating on the public roads, they’re finally calling for development of what seems to be the bare minimum of some kind of standards that should have been in place long ago.

Anthony: Well, Fred, so how does this compare to the Consumer AV Bill of Rights? One of the things they have is: the AVIA commits to continuing robust engagement with policymakers at the federal and state levels on the cybersecurity and privacy practices of AV operators. Now, if I’m drunk and I read that, I’m like, ah yeah, that sounds great. But as a sober person,

I’m like, someone’s trying to pull a fast one on me. So with something like that, how do we cover it in the Consumer Autonomous Vehicle Bill of Rights?

Fred: Well, we have our requirements. We don’t have aspirational statements saying, maybe someday, and God willing, and the creek don’t rise, we’re going to do something, the right thing.

What we’ve established with the Consumer AV Bill of Rights, affectionately known as CAVBOR, is a set of minimum engineering requirements and also legal requirements to govern the operation of AVs on the streets. And these are not maximum standards.

These are minimum standards before we think they should even be allowed to be tested on the roads. So I encourage people to go back and read the AV Bill of Rights in that context. And we will be publishing an updated version here in a few days, so it’s a good time to check in on that.

Anthony: One thing they have listed in there that I think all of us will agree with 100%, and I know you’re like, well, okay, what’s the punchline?

No, it’s not. It’s increased funding for NHTSA and the Federal Motor Carrier Safety Administration. I think we’re all for that, right? So we can agree on something.

Fred: Absolutely. Yes.

Michael: There we go. But the difference is, we say that and mean it; they say that, and then some of the folks on that list will lobby against additional funding for NHTSA.

There’s no question about it.

Anthony: Wait, are they talking out of two sides of their mouth? Three sides. Three sides. Three sides. Oh boy. Wow. It’s a new Picasso. On a different tack, from an actual U.S. Senator, someone who actually proposes laws, because the AVIA

Michael: This is from multiple senators.

Anthony: Multiple senators, because the AVIA, it sounds impressive. It sounds like, hey, Cargill, we bring good things to life. Monsanto, we’re not poisoning the entire planet. We’re hugging you with blue-green colors. This is out of Senator Markey’s office. I think he’s the lead on that. I can say that, right?

He is the lead. Okay, great. Because the headline, in all caps, I don’t know why they do this, is: Senators Markey, Blumenthal Lead Call for Stronger Action from Federal Regulator on Autonomous Vehicle Safety After High-Profile Crashes. Thank you. In their letter, the Senators wrote: we cannot allow partially automated driving systems and automated driving systems to accelerate the road safety crisis.

NHTSA must take firm control of the wheel and steer manufacturers towards prioritizing safety. It’s poetic.

Michael: Yeah, and they’ve gone through basically, there are six senators involved in this effort: Senators Markey, Blumenthal, Warren, Sanders, Luján, and Peter Welch.

And so they all signed a letter to NHTSA saying, hey, look, we need to get some things done here. NHTSA’s kind of in limbo on many of these issues and hasn’t taken a lot of firm steps. NHTSA’s collecting data right now, but we’d love the agency, and we want the agency, to consider some policy actions to make autonomous vehicles safer.

The industry, with their Trust Principles, is basically saying, yeah, we’re going to do these things, but ultimately, we don’t want you putting any of this into law or regulation. We don’t want our feet to be held to the fire. And the senators are saying, look, we need concrete steps here.

And a lot of these steps are things that you’ve heard about on this podcast before, they want to geofence vehicles or restrict vehicles to their operational design domain, which is basically, if you’re going to put an autonomous vehicle on the road, you can’t let it drive in places where it’s not ready to drive.

They want to have NHTSA look into the confusion around things like Autopilot and Full Self-Driving, and this is a confusing area: naming vehicles in ways that imply they have a lot more capability than they actually do, which makes drivers fall into the trap.

We’ve talked on a number of occasions about the Level 2 or Level 3 trap, where drivers overestimate the vehicle’s capabilities and it gets them into lots of trouble. They want better and more publicly transparent data. NHTSA right now is collecting a lot of data on autonomous vehicles and conditional automation through its Standing General Order.

But a lot of that data, as it comes to the public, is being redacted. The crash descriptions, for instance, are being redacted. So while NHTSA may be getting some pretty good data there, the public’s not getting much. And NHTSA needs more and better data under the Standing General Order. Right now, the Standing General Order is just an order from NHTSA to the manufacturers requiring them to report crashes, essentially, involving this AV technology or advanced driver assistance.

What it’s not is a permanent law or regulation. If a new administration comes in next January, they could eliminate the entire program with the stroke of a pen, and the government wouldn’t be collecting any data. So that’s something that needs to be put into law. And then there are a few other things they want: more caution before you exempt vehicles and allow them to go on the road.

They need more information on anything you’re exempting from federal safety standards, but particularly autonomous vehicles. They want to regulate remote assistance operators; we’ve talked about some of the issues there with latency and other problems. And they want new standards.

We don’t have any federal motor vehicle safety standards on autonomous vehicles, and we want federal vehicle standards that every vehicle has to comply with. If it’s going to say, hey, I’m an automated vehicle, I’m going to drive you, it’s going to have to comply with certain standards to ensure safety.

So this initial segment of the podcast is basically saying: there are people out there who want concrete solutions, and there are people who don’t want concrete solutions. And the people who don’t want concrete solutions are the ones who stand to make the most money off of it, surprisingly.

Fred: I also want to proudly point out that two of these senators are from the great state of Massachusetts just, for your information. Liberal Northeast nonsense. I love it. My people.

Anthony: So in this letter, it seems almost like, when they wrote it, they had in mind one specific company that’s egregious in what it does.

So one of my favorites in here is: restrict driving systems to the roads they are designed for, also known as operational design domain. Now, we’ve talked about Ford and GM and their cruise systems, where they are restricted. We’ve talked about Mercedes-Benz and their Level 3 system, where it only works in Nevada and in certain conditions. What’s the company that just says, fuck it, we’re gonna do whatever we want? We’re gonna lie.

We’re gonna say this is the magical car system that does all sorts of stuff and just lets you run anywhere. Is there anybody doing that?

Michael: Yeah, we’re all aware of who that is, and that’s one of the problems. This letter is being sent to NHTSA.

There’s a question as to whether NHTSA has any authority to regulate the statements and that confusion part of this. That’s something that traditionally would be an unfair or deceptive trade practice regulated by the Federal Trade Commission. They’re the government agency responsible for investigating any type of confusing marketing and/or advertising that might lead people to believe the wrong thing, that might contribute to fraudulent issues, and all sorts of things.

But NHTSA has continually maintained that this is something outside of their purview, and the FTC has been hesitant to act. We don’t know if that’s because there’s a massive, powerful billionaire on the other side of things that they don’t want to take on, or if it’s because they really don’t feel they have the authority. But at this point, someone needs that authority and someone needs to exercise it, because this is an issue with Tesla that’s gone on for far too long, and people are continually and still buying Teslas thinking the cars are going to drive them, as we’ll probably see in some of our other news this week.

Anthony: I’m going to get right into that as a matter of fact. From Jalopnik, Tesla driver who trusted autopilot charged with killing a motorcyclist. The Tesla driver who told police he was, quote unquote, putting the trust in the machine to drive for him, was inattentive to the road in front of him, and was only brought back to reality when he heard a bang as the car lurched forward.

That bang was him hitting somebody. And he was not impaired by drugs or alcohol. But, putting the trust in the machine to drive for him. This is an issue we’ve talked about a thousand times. Why would you do that? I don’t know. Maybe if you’re spending extra money on something called Full Self-Driving or Autopilot, you think, eh, maybe it lives up to those names.

It does not.

Michael: Yeah, there are a lot of issues in this crash that we’ve talked about before. One of them, importantly, is motorcycles. Autopilot seems to have difficulty detecting motorcycles, and frankly, that’s not something we think is going to change in the near future without some push from the federal government. The automatic emergency braking rule, for instance, left out motorcycles as one of the things that has to be detected. Right now, your automatic emergency braking is looking for the rear of other vehicles, but it seems to struggle with other types of vehicles; they’re struggling to capture a lot of the taillights.

And this is something.

Anthony: I’m sorry, the AEB rule, like, I know we’ll get to this a little later, but it exempts motorcycles from this?

Michael: It doesn’t exempt them, it just doesn’t require them. One of our big criticisms of it is that it doesn’t require manufacturers to put AEB in vehicles that’s able to detect motorcyclists, cyclists, and other types of vehicles in other situations.

It’s primarily to prevent a vehicle from crashing into the rear of another vehicle.

Anthony: That is, a two-wheeled vehicle?

Michael: Another four-wheeled vehicle. And the technology, I don’t know if it’s because the technology isn’t quite to the point where they can detect a motorcycle versus a car. Cars are obviously a bigger target, or more easily detectable.

But this is exactly the type of vehicle

Fred: Oh, sorry. This particular motorcycle was stationary on the side of the road, right?

Anthony: I don’t believe so. No, I think.

Michael: I didn’t get that. I thought I wrote that in there.

Anthony: No, he was

Michael: It rear-ended him at speed. Yeah. Which implies they were moving.

Anthony: Yeah, as far as I can tell it stayed in its lane and it just drove over him.

Fred: Yeah. Not that I would excuse it, but this actually bleeds over into something that we were talking about last week, which is that two labs, I have now learned, have been investigating the time it takes a human being to take over from the automatic system.

So even if you’re alert, even if you were alert and using a Level 3 or a Level 3-like system (Tesla’s Autopilot, for example, nominally a Level 2 system), and you saw the motorcyclist, it can take you well over 10 seconds as a normal human being to recognize the situation and take over for the vehicle when it has a lapse in its own control system.

This is simply not enough time for a typical human being to adjust and avoid whatever the problem is that the AV system failed to detect. This may be an issue here as well. It could well be that the person actually did see the motorcycle and simply didn’t have time to respond. We don’t know all those details yet.

Anthony: Well, in this case, the guy said, I think he was reading a book. Yeah, he was like, I’m just letting the machine do it. But this ties back into the egregious examples we’ve seen from Tesla, calling things really aspirational names for what they’re not. And this is what’s causing Senators Blumenthal and Markey and whatnot to come out saying, hey, we need some real strong regulations around this.

And it comes back to something I’ve asked both of you before, which is this: the legacy manufacturers, Ford, GM, Mercedes, and whatnot, are at least trying to say, hey, we’re going to disengage the system if we can sense that you’re not doing this. So they’ll actually put in systems that say, hey, this doesn’t work here.

This doesn’t work here. This doesn’t work here. You’re not paying attention. We’re off. Because they’ve been around for a hundred years, and they’ve had the regulators beat the crap out of them repeatedly for doing dangerous things.

Michael: Well, not just the regulators, the lawyers as well. Ultimately, yeah.

Anthony: Regulators plus legal action, yeah. Or is it just that Tesla doesn’t have enough experience with being dragged into court?

Michael: I don’t know. Elon Musk seems to have a fetish for being sued. But the case that they settled recently, the Huang case that we talked about a couple of weeks ago, the guy in California, yeah, who ran into the thing.

That was the first case I was aware of that they’ve ever settled involving these cars. And I know there are dozens of other cases out there right now that are progressing through the courts because Tesla is unwilling to settle. So, I don’t know. What does that tell you? In this case, the driver was charged with vehicular homicide and is out on a hundred-thousand-dollar bail. So I hope that was a good book,

Anthony: right?

So what happens next? Not quite this podcast’s forte, but what happens next with the Markey bill? Like, how does a bill become a law?

Michael: Well, it’s not a bill yet. It’s a letter to NHTSA, essentially about what NHTSA’s role should be. And if NHTSA continues to not wrap its head around some of these areas and start on good regulations, I would ask them to start on a regulation around geofencing and operational design domain yesterday, if they could. That’s a critical standard that vehicles obviously aren’t quite meeting at this point.

Fred: Well, NHTSA has the authority to do everything that has been requested, right? There’s no question about that.

Michael: Yeah. And that’s essentially what the oversight role is.

Congress is saying, Hey, get your act together on these, or we’re going to start pursuing legislation to make you get your act together in these areas.

Anthony: Got it. Okay. And what’s been NHTSA’s response so far? Hey, I’m just the current Acting Administrator, as I keep

Michael: They just sent the letter, so it’ll be a while before the response, but I imagine the response will be, hey, we’re working on this. Yeah, that’s the typical response.

Anthony: So, just wrapping this one up. They have a great line in there. It says, public roads are not a sandbox for manufacturers or operators to play in, and regulatory agencies like NHTSA should be highly cautious about providing lax pathways onto the road for dangerous vehicles. Which I think is perfect.

And clearly they’re all fans of this show, as you are at home. Go to autosafety.org and click on donate. Click it once, click it twice, but enter your credit card details, or else just clicking on it is pointless.

Michael: Yeah. And public roads right now, I hate to say it, they are that sandbox.

Anthony: All right, hey, just ’cause I think we need a little light break here, a little upbeat moment. Hey, Cybertruck! That’s all I have to say, just that. No, there’s a Jalopnik article about a guy who takes his Cybertruck to the car wash, and then the car is like, yeah, we’re done.

Like, the screens go black, it doesn’t work anymore, and he has to call up tech support, and they’re like, who is this? Are you messing with us? And I don’t think he gets anybody, so he reads the manual, good for him, which tells him some weird key combination on the steering wheel to press, like down, up, A, B, left, left, to reboot. And he does all this stuff, and then nothing happens.

Or at least he thinks nothing happens, because there’s no message on the screen, and then, I don’t remember how many hours later, five hours later, the car’s like, all right, I’m back. Like, what? Like, how?

Michael: There’s no explaining some of that.

Anthony: Yeah, don’t get your Cybertruck wet, people. I,

Michael: Yeah. I think the weirdest part of this story is that you can have your warranty voided by taking your Cybertruck into a car wash and not putting it in car wash mode.

Yes, there’s an actual car wash mode: you have to scroll through all of your endless menus on your touch screen and put your vehicle into car wash mode before you enter a car wash. I don’t know, that sounds really odd to me, having grown up in the South, where big thunderstorms, I just have to think, are going to present all sorts of water and spray challenges for vehicles that require them to be sealed.

It just seems there’s not a real good answer here for an owner. Do you know, when you buy a vehicle, that it’s got this car wash mode that you have to turn on before you go into a car wash? How do you figure that out? How do you avoid voiding your warranty because you didn’t put it in car wash mode? There are a lot of questions here that make me less likely to own a Tesla than before I read this article, and it was already a pretty low chance anyway.

Anthony: As best I can tell, the only thing car wash mode does is turn off the windshield wipers.

Michael: It looks like it

Anthony: puts the car in park, or

Michael: seals ports or something, I don’t know. Yeah, I don’t understand the warranty issue there.

Fred: I think there’s a built-in umbrella that pops up and covers the car to keep it from getting wet.

So your ego doesn’t get moist. Absolutely. And you can choose the color. I think that’s one of the options they’ve got available for a subscription fee.

Anthony: Oh boy. So this is, let’s move on to something serious. Canada. We’ve all heard about it. We’ve all seen it on maps and you’re like, what is that giant place with only four States?

This is bizarre. Okay. Maybe there’s seven, and they’re provinces, doesn’t matter. Canada not only has free healthcare, but the Canadian province of British Columbia has banned self-driving vehicles that exceed an SAE autonomy level of 2. So you can’t get your Waymos, your GM Cruise, your Mercedes-Benz EQS with Level 3.

And that’s about it right now. You can still get your Teslas, people. I’m sorry. They’re not autonomous vehicles.

Michael: But they don’t even meet level three. So yes, Tesla owners are okay. Yep. For now.

Anthony: But bro, my, no, my car drives itself. It made a left hand turn down the street and then it got me off the off ramp.

No, you gotta go out. You gotta try it. Okay. You gotta try this car. You gotta try. Yeah. Okay.

Fred: Well, there are two things to say about this from my perspective. One is that there’s no federal or national authority that verifies that a vehicle is Level 2 or Level 3 or any other level. So it can be whatever level the manufacturer wants to claim that it is.

So, with a simple declaration by any of the car companies currently putting out Level 3 that, no, it’s not really Level 3, it’s really Level 2-plus, they can avoid the penalties associated with this. The other thing is, again, something that we talked about before: a couple of laboratories have been looking at the transition from automatic controls to manual controls on these cars.

And both of the laboratories that are doing that, that I’m aware of, have determined independently that it is simply too dangerous to test this maneuver on public roads using actual live human beings and real cars. So they do it with simulators. My observation is that if it’s too dangerous to test under controlled circumstances, why in the world is it safe enough to put on the road with untrained observers and no controls?

This is inherently very hazardous, so, well, hats off to the B. C. government. Well done.

Anthony: Yeah, there’s an article

Michael: They don’t have any Level 3s there yet, either. So they’re essentially saying, don’t come here, in some respects. As far as I know, the only Level 3 that we’re aware of is just starting to go on sale in Nevada and California, the Mercedes.

Anthony: Yeah, so have you guys signed up to get your Mercedes, your EQS, or S Class sedan?

Michael: Did they make an economy model?

Fred: Ha! The Tesla Full Self-Driving is, in all respects, a Level 3, except for Tesla’s declaration that it’s a Level 2. If you go into the definitions, it’s clearly a Level 3 automobile.

They’re just using sleight of hand to say, well, no, it’s not, and we don’t have to respond to laws regulating Level 3. But it is. So I think there may be a step here where the BC government has to step up and say, hey, we’re looking at you, Tesla. I think those vehicles clearly fall under the intent of what BC is trying to ban.

Anthony: Pay attention to this show and this segment called Canada Watch. Will they build a wall or not? Yeah, so Mercedes is selling these in Nevada and California, I believe the only places these Level 3 systems are allowed. And what I like about this, this is neat: their system they call DrivePilot.

When drivers have DrivePilot enabled, these cars will illuminate turquoise lights on their side mirrors, headlights, and taillights to let other drivers and law enforcement know they’re operating autonomously, and that they have more money than they should. I think it’s great that it’s announcing to everybody, hey, this is what’s happening here.

But OK, so it’s got these lights and it’s letting people know. But, my car broke the law. Sorry, officers, my car did it. See, the turquoise lights were on. Give my car the ticket. They still haven’t figured that one out, have they?

Fred: Well, I think it’s a good idea to let people know. And be afraid, be very afraid, and try to avoid these things.

It’s also a good idea for the police to know that If these turquoise lights are on, that they have to be very wary about approaching the car because there’s no way in the world of knowing what it’s going to do. But yeah, as far as avoiding responsibility goes I’m not sure turquoise is the way to do that.

What color do you think? More of a cerulean blue? Yeah, that’d be good, I think. But there should be, and people have talked about, distinctive signals that show people when a car is being driven by a computer rather than a human being.

Anthony: Signs that pop out saying, look ma, no hands.

Fred: I don’t think there’s any standard yet that says this is what you should do.

But I think what Mercedes has done is a good step, despite the fact that the reason they’ve done it is not.

Anthony: All right. Okay. So we’ve covered Canada. We’ve covered Mercedes. So Fred, how are you feeling about doing some Tau time?

Fred: Oh, sure. Let’s do that. Let’s do that.

Anthony: I’m really looking forward to it, because it’s titled Shrooms and Cars: Death by AI Reliance. Bum...

Fred: Dun. Not the kind of shrooms people are thinking about, but that’s okay.

And this is really a discussion about the faults, or the limitations, of AI. So I’m going to do a little background here. AI, for our listeners, is not one thing. There’s no world standard of what AI is and what it’s not. There are several types and a lot of different implementations. So what’s called weak AI is what you’ve confronted in your daily life: Siri, Alexa, and AVs are all called weak AI, and strong AI would be like HAL from 2001: A Space Odyssey. I’m dating myself, because a lot of people haven’t seen that movie, who are suffering from being young, but never mind. Weak AI is what we all confront.

Weak AI is evidenced by machine learning, or it can be deep learning. Machine learning is one or two hidden layers of weighted logical nodes interconnected, blah, blah, blah. Deep learning is three or more layers, typically hundreds. Either type, though, can produce spurious or hallucinatory results. And this article actually looks at people who are using ChatGPT and similar AI capabilities to produce books that have lethal imperfections. In particular, they’re looking at poisonous mushrooms that are misidentified, and field characteristics of those mushrooms that are also misidentified, and yet the books are readily available. They’re pumped out by machines and put onto Amazon for distribution to the public, even though they’re actually very hazardous.

They misidentify a lot of the mushrooms, and it has lethal consequences, or can have lethal consequences, for people eating poisonous mushrooms based on the recommendations of the AI. So why do we care about that here? Because all of these break the human-oriented paradigm of unambiguous traceability between inputs and outputs.

So, for example, when you get into your car, one of the things you want to do is start your car, right? And so there’s a button or a key or something that you push or turn to start the car, and the car starts. And so there’s a nice one-to-one relationship. But the other side of that is that if you confront a car that’s running, you want to know that somebody turned it on purposefully.

You don’t want it to be running by itself. You want to have that bilateral relationship between the inputs and the outputs that are desired. The problem with machine learning is that you don’t have that, because the relationship between the input and the output is murky, and it’s not humanly decipherable, because it’s built into the machine logic.

So the machine is doing something in there. We talked about before that it’s acting like a murmuration of starlings; somehow there’s this group logic that mysteriously relates one thing to another. But you can’t really have the kind of relationship you’ve got between pushing the start button in your car and having the car actually start.

So this book is a great example of that. And the book, or I should say the publication by Public Citizen that describes these books and their problems, talks about how this misidentification occurs, how frequent it is, and how, despite the fact that it used AI or machine learning, you still end up with results that are very bad and potentially lethal.

It’s not to say that AI is inherently bad. In fact, the last paragraph of the article is, I think, very revealing. It says: This is not to say that AI systems directed by knowledgeable technologists can’t or won’t leverage these technologies to benefit individuals in society in ways that would not be possible without them.

It is, however, to insist upon refusing to overlook the essential human role in directing these technologies toward human goals and to suggest skepticism when the goal of maximizing profits encourages exaggerating the technology’s capabilities. This actually gets us right back to Tesla. We were talking earlier about Tesla trying to promote sales and what I’ve been reading is that they don’t have sufficient capacity right now to develop new models that they can put on the market.

They talk about doing that, but they’ve apparently abandoned the idea of getting even a lower-priced model out there. So instead they’re reverting to saying, well, we’re going to use AI at every level of the controls to make our full self-driving even better. That is something that is appealing to the public, and they’re trying to drive sales by saying, look, we’re done with all this human crap, we’re going to go AI forward and backward and everything will be great. But I think it’s important for the public to realize, and for their potential customers to realize, that all of these AI implementations are inherently fraught with ambiguities.

They’re fraught with hallucinations, and you cannot get an AI-driven car ready to handle unexpected, critical, or lethal situations by training it on data that’s produced by cars that are not going through those situations. So no matter how long you drive down a sunny, straight highway with no water on it, it’s not going to prepare the car for a child running out in front of it on a rainy day. It just isn’t going to happen. You don’t have enough data to do that.

And all of these systems, all of these weak AI systems that they’re talking about, which includes the autonomous vehicle logic that’s driving these cars, need to be trained by humans, and need lots of training. And you simply cannot get enough examples of critical situations to train these vehicles in any way that’s going to allow them to project behavior onto some situation they haven’t seen yet. It simply isn’t going to work. So as long as everything is fine, you’re having a great time having AI drive your car. The problem is that the problems occur when things are not going fine.

And when you have unusual situations, the unique capability of human beings to take what they’ve learned and project it onto a critical situation very rapidly is something that you’re not ever going to see, in our lifetimes anyway, in an AI-driven vehicle. So that’s the lesson from misrepresented mushrooms that has been brought up by Public Citizen, for the world at large and AVs in particular, as they confront the proliferation of AI systems into the control of vehicles. Ooh, that was a long sentence or two.

Anthony: That was good.

Michael: That has a lot of implications. It’s not just. It looks like people are essentially saying, Hey AI, write me a book on mushroom foraging that I can sell on Amazon and then popping it on Amazon without, the typical human curation process that would check for things like whether a dangerous mushroom and a safe one are identified properly.

Fred: Yeah, that’s right. That’s exactly what they were doing. And that’s exactly what the car companies are doing as well, by putting AV controls under the control of AI systems that have never been vetted, that have never been checked by any third parties, that are opaque to human understanding.

Anthony: This is one of the arguments that the Tesla fans always like to make: that every Tesla has been collecting all of this data, and then they run it through some magical neural network processing without those gross, smelly humans to actually review and identify these edge cases and explain what would happen in there. It’s useless. It’s not as helpful as people like to think it is.

Fred: Yeah, and when people have a crash, typically lots and lots of factors contribute to that crash.

And it probably can never be exactly replicated by any other person. So how you can generalize from one to another, from all this data that Tesla has, by the way, surreptitiously collected from your vehicle, and project it onto other vehicles, is something that cannot be easily qualified with any kind of statistics that anybody can really understand.

You’re putting your life in the hands of a statistical representation that some unknown machine, or unknowable machine, has put onto a logical process that’s buried deep within the guts of a computer somewhere that no human being will ever fully understand. Well.

Anthony: Trust the machine.

Fred: Trust the machine. Yeah. I’m still waiting for us to get these free Teslas so we can test them out ourselves.

Anthony: Well, I can’t get you a free Tesla, but I can get you a free month of full self-driving beta. That’s right. If you act now, I’ll get you full self-driving beta for one month, 30 days in a row. That’s right.

Make you, your passengers, and every other user of the road an unwitting test subject. Anyway, a friend of mine who’s blind has been experimenting with ChatGPT to have it read restaurant menus to him, and he’s like, it’s great. He’s like, I’m having it read stuff and it’s inventing things on the menu.

He’s like, this sounds incredible. It has strawberries in it, and whatnot. And he’s trying to order it and they’re like, what are you talking about? He summed it up best, I think: these AI models, the way they currently are, they’re like a five-year-old. They’re just gonna, hey, make things up to make you feel better.

This is what happened. And then there was a cape and there were strawberries involved and it’s, hallucinating at a restaurant, actually hallucinating at a restaurant, that could actually be really bad for people with some sort of food allergies.

Fred: I’ll give you an example from my life. I was in Japan in a restaurant and I was using Google Translate because I don’t speak Japanese.

And I looked at the cover of the menu, which was basically a drawing of trees in winter, interconnecting branches, and you can figure out what that looks like. And Google Translate was finding Japanese characters in that drawing of branches that were interlinked and crossing over on the cover of the menu.

It was very interesting to see those words emerge from this completely random information. But it was one example I’ve had of a hallucination from AI that was very striking.

Anthony: I was in a Japanese restaurant a long time ago and I’m pretty sure they fed me whale blubber. And they laughed at it. Ha.

It was disgusting. Don’t eat whale blubber. Go to autosafety.org and click donate. Do that, tell your friends. Five stars. Share it around. You’ve all been great with that so far. We appreciate you so much. But continue. The more the merrier. Be safe. Wear your seatbelt. And let’s continue onwards. Alright. I guess it’s time for recalls.

Let’s do some recalls. How’s that sound? Hey! Let’s start off with a little company called... ready? Wait for it. Wait. Tesla! That’s right, Tesla’s recalled 3,878 vehicles, something called a cyber cuck, Cybertruck. And what happens is the accelerator pedal cover slides off. We talked about this last week, I believe.

It jams underneath the footwell trim and you keep accelerating forever. Well, now the actual recall has hit Tesla and its database, and so that’s why we’re talking about it again. Some intrepid people on the internet realized, hey, I can just drill a rivet through this pad and put it back on top of the accelerator pedal.

But hey, for a hundred grand, get better glue. That’s awesome.

Fred: This would be a really bad occurrence in your Piggly Wiggly parking lot. And by the way, has anybody on the internet started calling these Cybersucks yet?

Anthony: No. No, I may probably. I focus on positivity when I go on the internet.

Next one: Toyota, a rare entrant to the recall roundup. 55,690 vehicles, the 2023 to 2024 Toyota Prius. The left rear door and right rear door assembly handles... what are they doing with their handles? Oh, there’s a short circuit, and that can cause the door to open unexpectedly.

Michael: Yeah, the door opens randomly. And it’s because of water intrusion in the vehicle. The example that they give is that if a large amount of water splashes on the switch for the rear door lock system, it causes a short circuit. You can go driving off onto the highway and then all of a sudden the door opens unexpectedly.

So that’s obviously a problem. But they give the example of the large amount of water splashing is in a carwash. So maybe they need a carwash mode too.

Anthony: I think they do. Everyone needs a carwash mode. Next up, hey, Toyota again, twice in one week. My word: 509 vehicles, the 2018 to 2021 Lexus LS 500. Fred, you have the LS 5,000, right? Cause you’re better.

The subject vehicles are equipped with a 10-way power front passenger seat, unique to the executive package, that contains a long seat slide rail to allow the seat to move forward and create room for rear passengers. So basically this sensor will break.

Michael: Basically they put a, um, this 10-way executive seat, I don’t know what it is, a captain’s chair, inside of these Lexuses, and the executive suite chair interferes with the occupant classification sensor. That’s the sensor that tells the car that there’s a person in the seat and that person is heavy enough to have an airbag deployed in their face, i.e. it’s not a child or someone very small, and it can tell the car how hard or how forcefully to deploy the airbag.

Anthony: I guess in this case, the executive package is not executing the airbag properly.

Michael: So that’s something that people really want to get into their dealer and have fixed. I don’t think there’s going to be a fix available until June, however. So keep an eye on that for the few hundred Lexus owners involved here.

Fred: We could drift into current politics if we really wanted to go with the executive airbag entry. Should we do that? No.

Anthony: That is a hard no. Last recall we have Jaguar. Now this makes sense. 2,409 vehicles. This is the 2021 to 2023 Jaguar E-Pace.

And this is the replacement airbag module being unable to break through fascias where the weakening cut is out of specification. So I guess they had to replace some airbags on these, and the little cut where the airbag explodes through is out of spec.

Michael: Well, the passenger airbag deployment door weakening cuts are out of specification, so it looks like the little door that opens for the airbag to deploy may not open quite as easily as it should, and the airbags can tear when they get caught on it, because the door is not opening fast enough for them, in simpler terms.

And with a torn airbag, not only do you lose the potential protection provided by having that giant pillow in front of you, you also risk the gases that are flowing into that airbag to fill it up so quickly being released onto the occupants.

Fred: So the dashboard, if you look at it, is nice and smooth, but if you looked at the backside of the dashboard, you’d find that there are channels cut into it that make it weak in certain places, so that when the airbag deploys, it can burst through easily and in a predictable manner. So that’s what they’re talking about.

This is nothing that’s visible to the passenger in the car, the owner of the car, unless you take the dashboard off. We actually dealt with this a few years ago. Remember, Michael, when we were talking about the fact that the vinyl and the plastics in dashboards are aging prematurely in the sun and getting stiffer?

We had questions about whether or not they were causing the potential for airbags to deploy incorrectly. Different issue, but similar concerns.

Anthony: If you’re affected by any of these recalls, contact your dealer, get some mushrooms, and then contact your car dealer and get it fixed.

Fred: Don’t forget, the recalls are free.

Anthony: Yes. No one will charge you for a recall. Last thing I want to touch on: this is a fun article on Streetsblog. Streetsblog.org. There you go. Wow. Oh, look, okay, and it talks about five car culture euphemisms we need to stop using. And that’s what I like in an article.

It’s something that comes right off the bat and tells me how to live my life. But it comes up with number one, traffic accidents, and I like this: planes don’t have accidents, they crash. Cranes don’t have accidents, they collapse. In traffic, this is a car crash. Very, you know, they’re good little soldiers here with this, but I agree.

They’re not an accident so much. They’re a...

Michael: It’s failure. It’s something that, it’s something that I, probably was guilty of a lot earlier in life and earlier in my career even. But, the difference between calling something a crash in an accident isn’t just a, a.

It’s not just a different word. It implies something very different. An accident suggests that no one is at fault or, no, even the vehicle. No, nothing’s at fault. It’s just an accident. It happens. Whereas a crash suggests that there’s some at least intentional act or they’re, someone really screwed up.

And it’s important. Studies that I’ve looked into found that news articles about pedestrian crashes that used the word accident were less likely to assign blame to the driver and more likely to assign blame to the pedestrian, the person, as being at fault. So words matter is a good point here.

And calling something an accident, calling a car crash a car accident, doesn’t really put the blame where it maybe should be.

Anthony: Agreed. The next one they have is transportation sector emissions. And I love this, because that is just too broad. It’s not my car, it’s the transportation sector.

But the fact is roughly 78 percent of transportation sector emissions comes from SUVs, pickups, minivans, medium and heavy duty trucks, and passenger cars, whereas the remaining 20 percent comes from everything else. So they suggest instead saying car and truck emissions, which I think would go a long way towards cleaning up the air we breathe.

Michael: Yeah, and that everything else also includes, ships and boats and pipelines, which are sending cars to us and transporting the fuel for cars. So you could probably say they’re over 80%.

Anthony: There you go. Next up they have traffic jams and congestion. It’s not that you’re in traffic; you actually are traffic.

Michael: You are traffic. I think that’s a good point. Just to, we tend to think of traffic as a thing that is outside of us that we have to face. But actually, when we’re in it, we are traffic. There was a fun video that went along with that on the blog that I would encourage people to take a quick look at.

Anthony: Yeah. So remember, you are traffic, unless you are Steve Winwood, at which point you were in Traffic. Next up: transportation engineering jargon. This is also a good one. This is what every industry and sector does: hey, we’re not going to use English anymore, we’re going to create Scientology-type words and whatnot to obfuscate what we do and make us sound much more impressive than we really are.

Transportation engineers aren’t just car infrastructure engineers. You’re just a guy. You’re just a person. Like, you’re working on some infrastructure. Okay? Like, relax a little bit. So, use layman’s terms, is the walk away there. The walk away? The take away. The take out?

Take away. Take away, lay away. Ah, and the last one they have is basically a million other things, such as ocean microplastics. Did you guys... okay. Now, I know Michael read this article. Fred, did you read this article? I did. You did, dammit. Well, I can’t ask you. Okay, for those of you playing the home game: ocean microplastics, what’s the percentage you think comes from car tires?

For those of you who guessed 78 percent, you’re correct. What? 78 percent of ocean microplastics comes from car tires? That’s mind blowing to me.

Michael: Yeah, that’s mind blowing. And I think that number will grow: if we met all of the goals of electrification by some of the timelines that have been set out, without changing tire chemistries and without lightening vehicles, then we might see that percentage increase.

Fred: Well, I think the bad news is that this percentage will go down because the reason the microplastics content from tires is so high is that tires shed their rubber as very small particles. And plastics that get dumped into the ocean take a while to turn into microplastics because they’re big gross objects.

So if you were to take all the plastics being thrown into the ocean as trash and first shred them into tiny little particles, you would see the ratio shift toward a smaller content from the tires. That would not be a good thing, by the way, but that’s part of it; there’s a little bit of a bias in there. If you were to...

Michael: Tires just have a leg up in achieving microplastic status. Yeah, they shed off in smaller pieces. They’re self-grinding.

Anthony: So the next time you’re in your

Michael: self-grinding tire and the self-driving car,

Anthony: I like that. The next time you’re in your favorite seafood restaurant, be sure to order the haddock with a little bit of Goodyear sauce.

Fred: Or you can order the radial or the bias-ply.

Anthony: I am! Hey, and with that, listeners, that’s another episode, that’s another hour and change of your life that you can never get back. But we thank you for spending time with us. Go to autosafety.org, donate, and I’ll keep saying it.

Fred: That’s it, bye bye.

Thank you for listening, bye bye. Bye everybody.

Michael: For more information visit www.autosafety.org