Coping With Havoc Is A Must For AI Autonomous Cars 

Whether AI self-driving cars will be able to contend with the havoc created by human drivers and pedestrians remains to be seen. (Credit: Getty Images) 

By Lance Eliot, the AI Trends Insider 

Havoc is a standout.   

In sports, one of the more unusual and lesser-known metrics for analyzing football teams is their havoc rating. To calculate a havoc rating, you count up the number of football plays that your defense was able to disrupt against the opposing team, such as plays when your defense intercepted the football, forced the opposing team to fumble the ball, or tackled the opposing side for a loss of yardage. Next, you divide that count of disrupted plays by the total number of plays undertaken by the opposing team.

The resulting fraction is turned into a percentage, allowing you to readily see what percentage of the time the defense was able to mess up the opposing side’s offense. For example, if there were 100 plays by the opposing team and your defense was able to undermine the offense on 25 of those plays, you would have a havoc rating of 25% (that’s 25 divided by 100).

The offense wants to keep the havoc rating as low as possible; the defense is aiming to get as high a havoc rating as it can, showcasing how often it can cause the offense to slip up. If you had a havoc rating of 100%, it would mean that on every play run by the opposing team, you managed to confound their efforts. That would be tremendous as a defense. Of course, if you had a havoc rating of zero, it would suggest that your defense is not doing its job and that the opposing side is making plays without being at all disturbed or undermined.
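
To make the arithmetic concrete, here is a minimal sketch in Python of the havoc rating calculation (the function name and inputs are my own illustration, not an official analytics library):

```python
def havoc_rating(disrupted_plays: int, total_plays: int) -> float:
    """Percentage of the opposing offense's plays that the defense disrupted."""
    if total_plays == 0:
        return 0.0  # no plays run, so no havoc to measure
    return 100.0 * disrupted_plays / total_plays

# The example from above: 25 disrupted plays out of 100 total.
print(havoc_rating(25, 100))  # 25.0
```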

Havoc ratings can be used in other endeavors too. Perhaps we ought to be using a havoc rating when it comes to the emergence of true AI-based autonomous self-driving cars. 

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/ 

For why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/ 

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/ 

Understanding Havoc And Self-Driving Cars  

True self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. These driverless cars are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are considered semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5. We don’t yet know if this will be possible or how long it will take to get there.   

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).   

Since the semi-autonomous cars require a human driver, I’m not going to try to apply a havoc rating to the efforts of a Level 2 or Level 3 car. We could do so, but it would make the havoc aspects murky because there would be a portion attributable to the human driver and another portion caused by the automation, likely a blur of the two sources.

Instead, let’s focus on the havoc aspects involving true self-driving cars, ones at Level 4 and Level 5. With the AI being the only driver, the havoc aspects can be assigned to the driving system per se.

There are two ways in which havoc can arise: 

1) By the actions of human drivers, with which the AI must contend

2) By the actions of the AI driving system, with which other drivers must contend

I think that we would all agree that human drivers often create havoc in traffic. As such, the AI driving system must be able to cope with havoc instigated by nearby human drivers. 

You might be somewhat surprised at the second way in which havoc arises, namely by the actions of the AI driving system. Many pundits claim that AI driving systems will be perfect drivers, but as you’ll see in a moment, this is a false and misleading assumption.   

Before I jump into the fray, some pundits also assert that we’ll have only self-driving cars on our roadways and therefore there isn’t a need to deal with human drivers. Only someone living in a dream world would believe that we are only going to have self-driving cars and won’t also have human drivers in other nearby cars.   

In the United States alone, there are about 250 million conventional cars. All those cars are not going to suddenly be dispatched to the scrap heap upon the introduction of self-driving cars. For a lengthy foreseeable future, there will be both human-driven cars and self-driving cars mixing together on our highways and byways. 

It stands to reason.   

Human Drivers Create Havoc   

Consider the readily apparent notion that human drivers can create havoc.

You are driving along, minding your own business, when a car that’s to your left opts to dart in front of your car and make a right turn at the corner up ahead. 

We’ve all experienced that kind of panicky, curse-invoking driving situation.

The lout that shockingly performs such a dangerous driving act is creating havoc. 

They are likely to disrupt your driving, forcing you to heavily use your brakes, maybe even causing you to swerve to avoid hitting their car. A car behind you might then need to also take radical actions, trying to avoid you, while you are trying to avoid the transgressor.   

There might be pedestrians standing at the corner that see the madcap car heading toward them, forcing them to leap away and cower on the sidewalk.   

Imagine then a human driver that throughout a driving journey might undertake some number of havoc-producing driving actions. Divide the number of havoc acts by the total number of overall driving actions, and you have a percentage that reveals their havoc rating.

The higher a havoc rating for a driver, the worse a driver they are. For a driver with a low havoc rating, it tends to suggest that they are not creating untoward driving circumstances while on the public roadways. 

Are you already thinking about a friend or colleague that you are sure must have a sky-high havoc rating? 

I’m sure you know such driving Neanderthals.   

Currently, few of the self-driving cars that are being tried out on our roadways are particularly versed in dealing with high havoc-rated human drivers. 

Most of the self-driving cars generally assume that the surrounding traffic will be relatively calm and mundane. You can think of those self-driving cars as acting a bit like a timid teenage driver that is just starting to drive a car. Those novice drivers hope and pray that no other driver will do something outlandish.   

If other drivers do crazy things, the teenage driver will resort to the simplest possible response, which might be applicable or might make the situation even worse. In the case of getting cut off by the driver to their left that is darting toward a right turn, the novice driver might jam on the brakes and come to a sudden halt. Doing so might not have been the best choice, and it could end up with the car behind them rear-ending their car.

True self-driving cars need to step up their game and be able to contend with high-havoc human drivers.

This capability can either be hand-programmed into the AI driving system or can be “learned” over time via the use of Machine Learning (ML) and Deep Learning (DL). I don’t want to suggest, though, that ML and DL are equivalent to human learning, which they most decidedly are not. There is no kind of common-sense reasoning involved in today’s ML and DL, nor do I expect to see such a capability anytime soon.

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/ 

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/

The ethical implications of AI driving systems are significant, see my indication here: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/ 

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/   

Self-Driving Cars Create Havoc 

Now that we’ve covered the obvious use case of human drivers that create havoc, let’s explore the lesser-realized aspect that self-driving cars can also generate havoc.

Suppose that a true self-driving car is coming down the street. The self-driving car is moving along at the posted speed limit and obeying all the local traffic laws.   

A pedestrian on the sidewalk is looking at their smartphone, not paying attention to the traffic, nor noticing the sidewalk activity, since their nose is pointed at their phone.

Oops, the distracted pedestrian nearly walks right into a fire hydrant. At the last moment, the pedestrian sidesteps around the fire hydrant, moving suddenly onto the curb.

The AI driving system of the self-driving car is using its cameras, radar, LIDAR, ultrasonic sensors, and other detection devices to monitor the traffic and nearby pedestrians.   

Upon detecting the pedestrian that seems to be bent on entering the street, and not realizing that the pedestrian was merely avoiding conking into a fire hydrant, the AI calculates that the pedestrian might get into harm’s way and end up in front of the car.

Wanting to be as safe as possible, the AI instructs the car to come to an immediate halt. 

Well, it turns out that the sudden stop of the self-driving car then leads the human-driven car behind the driverless car to rear-end the self-driving car.
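
To see how such a trade-off might be reasoned about, here is a purely illustrative sketch of an emergency-stop decision that weighs the pedestrian risk against the rear-end risk; the sensor fields, thresholds, and maneuver names are hypothetical stand-ins, not any automaker’s actual logic:

```python
from dataclasses import dataclass

@dataclass
class Perception:
    pedestrian_entering_road_prob: float  # estimated from camera/LIDAR tracking
    rear_gap_m: float                     # distance to the trailing vehicle
    rear_closing_speed_mps: float         # how fast that vehicle is approaching

def choose_maneuver(p: Perception) -> str:
    """Illustrative only: trade off pedestrian risk against rear-end risk."""
    if p.pedestrian_entering_road_prob < 0.3:
        return "continue"  # low risk: keep driving, keep monitoring
    # Rough time-to-collision for the trailing car if we brake hard right now.
    rear_ttc_s = p.rear_gap_m / max(p.rear_closing_speed_mps, 0.1)
    if rear_ttc_s > 2.0:
        return "hard_brake"  # the trailing car has room to react
    return "slow_and_cover_brake"  # ease off instead of stopping dead

# The fire-hydrant scenario: likely pedestrian incursion, tailgater close behind.
print(choose_maneuver(Perception(0.8, 6.0, 5.0)))  # slow_and_cover_brake
```

A driving system that always picks the hard stop, regardless of what is behind it, is exactly the sort of havoc producer described in this scenario.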

The point is that the actions of the AI driving system can be well-intended (though don’t ascribe human intention to the AI system, please, at least until someday the “singularity” happens), and yet the efforts produce havoc. 

Similar in some respects to the earlier description of a novice teenage driver, the AI system is going to be performing driving acts that generate havoc as an adverse consequence.

Thus, perhaps we ought to be measuring the havoc ratings of self-driving cars.  
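
If we did so, the bookkeeping could mirror the driver calculation described earlier. Here is a minimal sketch, with an invented event log standing in for whatever telemetry a fleet operator actually records:

```python
# Hypothetical log of driving actions taken by one self-driving car.
# Each entry: (action, caused_havoc), where caused_havoc flags actions that
# disrupted surrounding traffic (sudden halts, blocked lanes, and the like).
trip_log = [
    ("lane_keep", False),
    ("unprotected_left_turn", False),
    ("sudden_halt_for_pedestrian", True),  # the fire-hydrant scenario above
    ("merge", False),
]

havoc_acts = sum(1 for _, caused_havoc in trip_log if caused_havoc)
rating = 100.0 * havoc_acts / len(trip_log)
print(f"Havoc rating: {rating:.0f}%")  # Havoc rating: 25%
```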

Dealing With Havoc-Producing Driverless Cars

A driverless car that has a high havoc rating should either be prevented from driving around or at least shunted into specific driving areas where the havoc-producing actions won’t have serious consequences (such as when moving at very low speeds or driving in lanes devoted exclusively to self-driving cars).

I realize that some of you might be exclaiming that the havoc-producing self-driving car can readily be updated with better software by undertaking OTA (Over-The-Air) electronic communication and downloading an improved AI driving system.

Yes, that’s true, but you are also mistakenly assuming that somehow those changes are going to be immediately ready and usable.   

Not so.   

Gradually, over time, presumably, the AI driving systems will be improved.   

Meanwhile, we are going to be somewhat at the mercy of whatever havoc-producing AI driving systems are on our roadways.

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/  

On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/ 

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/  

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: https://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/ 

Conclusion 

Allow me to quote from Shakespeare (Act 3, Scene 1): “Cry ‘Havoc!’ and let slip the dogs of war.”

This famous line from the play Julius Caesar is spoken by Mark Antony and indicates that he wanted to go after the assassins that murdered Caesar. 

The dogs of war might be actual dogs that were trained for warfare, in which case he was saying that the killer dogs should be let loose to attack the assassins, though the expression is more commonly interpreted as letting loose the forces of war overall.

For the self-driving cars that are currently being let loose on our roadways, once they no longer have back-up human drivers attending to the AI system’s driving, will we be potentially incurring havoc, and will those AI systems be able to contend with the havoc created by human drivers?

Nobody knows, and especially nobody knows if we aren’t measuring the havoc producing and havoc handling capabilities of self-driving cars.   

Automakers and tech firms might be well-intended in their spirited efforts to get self-driving cars onto our roads, but let’s not allow ourselves to fall into the trap of unleashing havoc.

I think Shakespeare, if he were alive today, would likely have something to say about that.

Copyright 2020 Dr. Lance Eliot  

This content is originally posted on AI Trends. 

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/ and http://ai-selfdriving-cars.libsyn.com/website]

Parable About ‘The Upstream’ Provides Key Lessons For AI Autonomous Cars 

Balancing the upstream with the downstream might be the best approach for dealing with problems introduced by the advent of self-driving cars on the road.

By Lance Eliot, the AI Trends Insider  

There is a famous allegory called the Upstream Parable that provides numerous valuable lessons and can be gainfully applied to the advent of AI autonomous self-driving cars. 

The Upstream Parable, sometimes referred to as the Rivers Story, has been attributed to various originating sources. Some suggest it was initially brought up in the 1930s by Saul Alinsky, the political activist, and then later by Irving Zola, the medical sociologist, though it was perhaps given its greatest impetus via a 1975 paper by John McKinlay that applied the parable to the domain of healthcare.

I’ll start with a slimmed-down version of the story. 

You are walking along the bank of a rushing river when you spy a person in the water that seems to be drowning. Heroically, you leap into the water and save the person. A few minutes later, another person floats by that seems to be drowning. Once again, you jump into the river and save the person.   

This keeps happening, again and again. 

In each case, you dive in, and though you manage to save the person each time, doing so denies you the chance to go upstream and ascertain why all these people are getting into the water to begin with. If you could sort that out, you might be able to bring the matter to an overall halt and prevent anyone else from getting into the dangerous waters.

And that’s the end of the story. 

You might be thinking, what gives with this?   

Why is it such a catchy parable? 

By most interpretations, the story offers a metaphor about how we are oftentimes so busy trying to fix things that we don’t pay attention to where they originate. Our efforts and focus go toward that which we immediately see, especially when something is incessantly demanding our rapt attention right away. If you can take a breather and mull things over in such a situation, you might ultimately be able to solve the matter entirely by going upstream and making a fix there, rather than being battered over and over downstream.

In fact, it could be that one fix upstream would prevent all the rest of the downstream efforts, meaning that economically it is potentially a lot sounder to deal with the upstream than with the frenetic and costly downstream activities.

This can be applied to healthcare in a myriad of ways. For example, suppose that a populace has improper hygiene habits and lives in a manner that encourages disease to take hold. Upon arriving at such a locale, your first thought might be to build a hospital to care for the sick. After a while, the hospital may fill up, so you need to build another hospital. On and on, this merry-go-round goes, devoting more and more resources to building hospitals to aid the ill.   

It would be easy to fall into the mental trap of putting all your attention toward those hospitals. 

You might chew up your energy on dealing with:

  • Are the hospitals running efficiently? 
  • Do hospitals have sufficient medical equipment? 
  • Can you keep enough nurses and doctors on-staff to handle the workloads? 
  • Etc. 

Recalling the lesson of the Upstream Parable, maybe there ought to be attention given to how the populace is living, trying to find ways to cut down on the outbreaks of disease. That’s upstream, and it is the point at which the production of ill people is taking place. Imagine, if you did change the upstream to clean things up and prevent, or at least greatly reduce, the rampant disease, you’d no longer need such a large volume of hospitals, nor all that equipment, nor have the issues of staffing the medical teams in a large-scale way.

Notice too that everyone involved in the matter is doing what they believe best to do. 

In other words, those building all those hospitals perceive a need to heal the sick, and so they are sincerely and genuinely “doing the right thing.” Unfortunately, they are consumed mightily by that task, akin to pulling drowning people out of the rushing river, and thus they fail to consider what’s upstream and potentially better ways to “cure” the people of their ills. 

Okay, that’s the overarching gist of the upstream and downstream related fable. 

There are numerous variants of how the story is told.   

Some like to say that the persons falling into the water are children and that you are therefore saving essentially helpless children (and, to go even further, sometimes the indication is that they are babies).

I guess that might make the parable more engaging, but it doesn’t especially change the overall tenor of the lessons involved. 

Here’s one reason that some like to use children or babies in place of referring to adults.   

A bizarrely distorted reaction by some is that if it is adults that are falling into the water, why aren’t they astute enough to stop doing so, and why should anyone else be worried about saving adults that presumably should know better? Substituting children or babies makes that objection less arguable, but I must say that the somewhat cynical and bitter portrayal of adults is a bit alarming, since it could be that something beyond their power is tossing them into the drink, and in any event it fights against the spirit of the parable overall.

Another variation of the story has a second individual that comes to aid in saving the drowning subjects. 

At the end of the story, this second individual, after having helped to pull person after person out of the river, suddenly stops doing so and walks upstream. 

The first individual, still steeped in pulling people out of the water, yells frantically to the second individual, asking with grave concern where they are going.

I’m going upstream to find out what’s going on and aim to stop whoever is tossing people into the river, says the second individual. 

End of story.   

That’s a nifty variant. 

Why? 

Well, in the first version, the person saving the lives has no chance to do anything but continue to save lives (we can reasonably conclude that if the saving were to be curtailed, person after person would drown).   

In the second version, we hope or assume that the first individual can sufficiently continue to save lives, while the second person scoots upstream to try and do something about the predicament. 

Of course, life is never that clear cut. 

It could be that the second person’s leaving will lamentably lead to lives being lost at the downstream life-saving position.

In that case, we need to ponder whether it is better to keep saving lives in the immediate term, or whether one must make a death-sentence decision to essentially abandon some to their fate in order to deal with the problem by sorting out its root.

On a related topic, nearly all seasoned software developers and AI builders tend to know that whenever you have a budding system that is exhibiting problems, you seek to find the so-called root cause. 

If you spend all your time fixing the errors being generated by the root cause, you’ll perpetually be in a bind of just fixing those errors and never stopping the flow.

Anyway, the variant to the parable is quite handy since it brings up a devilish dilemma. 

While in the midst of dealing with a crisis, can you spare time and effort toward the root cause, or would that meanwhile generate such adverse consequences that you are risking greater injury by not coping with the immediate and direct issues at hand?

Keep in mind too that just because the second person opts to walk upstream, we have no way of knowing whether the upstream exploration will even be successful. 

It could be that the upstream problem is so distant that the second individual never gets there, in which case, if people were drowning in the meantime, it was quite a hefty price to pay for not having solved the root problem.

Or, maybe the second individual finds the root, but they are unable to fix it quickly (maybe it’s a troll that is too large to battle, and instead the second individual has to try and prevent people from wandering into its trap, but this only cuts the pace of people getting tossed into the river by, say, one-third).

This means that for some time, those drowning are going to keep drowning.   

Here’s an even sadder possibility. 

The second individual reaches the upstream root and tries to fix the problem, yet somehow, regrettably, makes it worse (maybe it was a bridge that people were falling off, and while attempting to fix the bridge, the second individual messed up and the bridge is even more precarious than it was before!).

It could be that up until then, the first individual was able to keep up with saving those drowning. Now, ironically, after the second individual tried to fix the problem, and in the meantime wasn’t around to help save the drowning victims, there are a slew more people falling into the water, completely overwhelming the first individual.

Yikes! 

As you can see, I like this latter version that includes the second individual, allowing us to extend the lessons that can be readily gleaned from the parable. 

Some though prefer using the simpler version. 

It all depends upon the point that you are trying to drive home by using the tale. 

For those of you feeling clever, I’m sure that you’ve already come up with other variations.

Why not make a net that is stretched across the river and catches all those people? 

There, problem solved, you proudly proclaim.   

Well, which problem? 

The problem of the people drowning at the downstream position, or the problem of the people being tossed into the river and possibly drowning (hopefully, they don’t drown before they reach your net)?

In any case, yes, it might be sensible to come up with a more effective or efficient way to save the drowning persons.   

That doesn’t necessarily negate the premise that it is the root that deserves attention, but I appreciate that you’ve tried to find a means to reduce the effort at the downstream, which maybe frees up those that are aiming to go upstream to find and fix the root cause. 

Bravo. 

One other last facet to mention, which somewhat dovetails with the notion of creating and putting in place the net: sometimes there is such a massive setup of infrastructure at the downstream position that it becomes unwieldy and takes on a life of its own to deal with.

Furthermore, and the twist upon a twist, suppose that the net catches nearly everyone, but a few happen to go underwater and aren’t saved by the net.

Imagine someone standing downstream of the (already) downstream net. 

They might end up in the same parable, and upon coming up to find you and your net, believe they have found the root cause.  

It could be that the root cause is further upstream and that there are lots of other intervening downstream solutions, all of which are (hopefully) mitigating the upstream, yet it might be difficult to figure out what’s the root versus what’s not the root. 

There could be a nearly infinite series of downstream solutions, all well-meaning, each of which makes the whole affair incredibly complex and confounding, while there might be an elegant end to the monstrosity by somehow getting to the real root.   

Well, that was quite an instructive look at the fable. 

You might be wondering, can the fable be used in other contexts, such as something AI-related (that’s why I’m here)?

Yes, indeed, here’s an interesting question to ponder: “Will the advent of AI-based true self-driving cars potentially find itself getting mired in downstream matters akin to the Upstream Parable?” 

Let’s unpack the matter and see.   

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/ 

For why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/ 

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/ 

The Levels Of Self-Driving Cars 

It is important to clarify what I mean when referring to AI-based true self-driving cars. 

True self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. 

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out). 

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable). 

For semi-autonomous cars, the public must be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3. 

Self-Driving Cars And The Parable 

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. 

All occupants will be passengers. 

The AI is doing the driving. 

Sounds pretty good. 

No need for any arcane fables or tall tales. 

But, wait, give the Upstream Parable a chance. 

Some today are arguing that more regulation is needed at the federal level to guide how self-driving cars will be designed, built, and fielded. 

Those proponents tend to say that having the states or local authorities in cities and counties come up with guidelines for the use of self-driving cars is counterproductive.

You might be surprised to know that many of the automakers and self-driving tech firms seem to generally agree with the notion that the guidelines ought to be at the federal level. 

Why? 

One reason would be the presumed simplicity of having an across-the-board set of rules, rather than having to adjust or craft the AI system and driverless car to accommodate a potential morass of thousands upon thousands of varying rules across the entire country. 

On the other hand, a cogent argument is made that having a singular federal level approach might not allow for sufficient flexibility and tailoring that befits the needs of local municipalities. 

Let’s suppose that the local approach prevails (I’m not making such a proclamation, it’s just a what-if). 

If self-driving cars have trouble coping at the local levels, we might become focused on the downstream matters. 

Meanwhile, one might contend that it was the upstream that needed to provide an overarching approach that was sufficient to abate the downstream issues. 

Back to the parable we go. 

Suppose a fleet of self-driving cars is owned by a particular automaker. 

The self-driving cars communicate with a cloud-based system via OTA (Over-The-Air) electronic capabilities, pulling down patches and updates to the on-board AI system, while the on-board system uploads collected sensory data and other info from the self-driving car.

Pretend that something goes awry in the self-driving cars of that fleet. 

Do you try to quickly deal with each individual self-driving car, which might be on the roadway and endangering passengers, pedestrians, or other human-driven cars, or do you try to ferret out the root cause and then see if you can get that patch shoved out to the fleet in time?

Some assert that this very kind of issue is why there ought to be a kill button or kill switch inside all self-driving cars, allowing presumably for a human passenger to make a decision right there in the driverless car to stop it from processing. 
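
To tie the triage question back to the parable, here is a hedged sketch of how a fleet operator might juggle the downstream mitigation (each endangered car) and the upstream fix (the root-cause patch). The classes, methods, and kill-switch hook are all hypothetical, invented only to mirror the fable:

```python
class Car:
    def __init__(self, vin: str, driving: bool):
        self.vin, self.driving = vin, driving

    def pull_over_safely(self) -> None:
        # Downstream mitigation: get this vehicle out of harm's way right now,
        # akin to pulling a drowning swimmer out of the river.
        self.driving = False
        print(f"{self.vin}: pulled over")

def handle_fleet_anomaly(fleet: list, root_cause_patch: str) -> None:
    # Downstream: deal with each affected car immediately.
    for car in fleet:
        if car.driving:
            car.pull_over_safely()  # or a passenger invokes the kill switch
    # Upstream: push the root-cause patch to the whole fleet via OTA,
    # akin to walking upriver to stop people from falling in at the source.
    print(f"Staging OTA rollout of: {root_cause_patch}")

handle_fleet_anomaly([Car("AV-001", True), Car("AV-002", False)],
                     "perception-module patch v1.0.1")
```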

In any case, you could liken this to the upstream versus downstream fable. 

Pleasingly, once again, lessons are revealed due to a handy underlying schema or template. 

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/ 

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/

The ethical implications of AI driving systems are significant, see my indication here: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/ 

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/ 

Conclusion 

Generally, the Upstream Parable is pretty handy for lots of circumstances. 

Part of the reason it is so memorable is that it innately captures what we see every day, and helps to bring to light the otherwise hidden or unrealized elements of the systems around us that we are immersed in.

While standing at the DMV and waiting endlessly to get your driver’s license renewed, you have to let your mind wander to keep your sanity, and you might wonder whether you’ve found yourself floating in the downstream waters.

Drowning in paperwork! 

If the DMV had its act together, there’d be a solution at the root that would make renewing your driver’s license a bit less arduous and frustrating.

For sanity’s sake, go ahead and use the fable to your heart’s content and keep finding ways to balance the downstream with the upstream, aiming to prevent problems before they arise and make the world a better place.

That’s a good lesson no matter how you cut it.  

Copyright 2020 Dr. Lance Eliot  

This content is originally posted on AI Trends. 

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/ and http://ai-selfdriving-cars.libsyn.com/website]