I just read a really interesting article about determining liability when there is a car accident involving an autonomous vehicle. The article presents some of the findings from a study done by Columbia Engineering and Columbia Law School that indicates that they believe that they can use game theory to determine liability in those cases. If you’re interested in that topic, I recommend the article (on Insurance Journal, of course) and I recommend the report it is based on.

NOTE: I read the article on IJ, not the whole research report (yet).

One of the key items the researchers focused on was the human driver’s moral hazard.

It is generally assumed that autonomous vehicles will operate within strict parameters, such as the speed limit of the roadway it’s on. It may also change its operation based on the amount of traffic, the weather, and other potential variables. An autonomous vehicle should operate in a very predictable manner.

Human drivers are another story. While you might normally be able to predict how your spouse will drive in a certain situation, you can’t predict how the other drivers around you might drive given the conditions that you’re driving in.

Here is an interesting quote from the article: “The team found that an optimally designed liability policy is critical to help prevent human drivers from developing moral hazard and to assist the AV (autonomous vehicle) manufacturer with a tradeoff between traffic safety and production costs.”

I’m not sure if they believe that there is a way to mitigate these risks with a liability policy, but that’s not my point today. This quote makes me feel like we need to address the difference between a moral hazard and a morale hazard, because (spoiler alert) the researchers used the right term, calling it a moral hazard.

What is a hazard?

According to Kaplan’s Glossary of Insurance Terms, a hazard is a specific situation that increases the probability of the occurrence of loss arising from a peril or that may influence the extent of the loss.

That’s great. What does that mean? A hazard is anything that makes it more likely that something bad will happen or might make it worse if it does happen.

If everyone in your county knows that there is a road that the highway patrol and sheriff’s department don’t patrol, driving there may carry an increased hazard because of the greater possibility that someone will drive too fast.

If you live in some counties in Florida, you have increased risks of certain losses, whether wind losses, water losses, or sinkhole losses, depending on the location. In other words, where you live can increase certain hazards.

What is a moral hazard?

Going back to Kaplan, a moral hazard is a condition of morals or habits that increases the probability of loss from a peril.

This indicates that the hazard comes from within a person. It’s the moral code that they operate under. According to the Oxford dictionary, morals are “a person’s standards of behavior or beliefs concerning what is and what is not acceptable for them to do.”

This tells us that a moral hazard can be created based on what a person believes is the right way to act in a given situation. Going back to the article then, the moral hazard could be the beliefs of the human drivers as they interact with the autonomous vehicles.

Kaplan also mentioned habits. A habit can create a moral hazard in part because we build habits based on what we perceive to be acceptable behavior. That tells us that the human drivers may begin to take up a different style of acceptable driving the more that they interact with autonomous vehicles.

What is a morale hazard?

Kaplan (again) tells us that a morale hazard is a “hazard arising out of an insured’s indifference to loss because of the existence of insurance.”

You have heard people say that they can drive any way that they want because, “the car’s insured.” That’s a morale hazard. It’s a lack of care created because they know that there is backup. What could possibly go wrong? I’m not going to be held responsible. Someone else will cut the check.

Which is it: a moral or morale hazard?

In this particular story, the researchers used the right term. There is the possibility of a change in moral hazard as human drivers interact with autonomous vehicles and as our human-operated vehicles gain different levels of autonomy.

Many people begin by driving very cautiously, even too cautiously at times. They stop and look both ways four or five times before entering the intersection. They drive just under the speed limit. They make sure to signal every lane change or turn.

Then they get a little experience under their belts and they decide that they can relax a little. They begin to push the speed limit. They make quicker lane changes with less space between vehicles. They perform the classic rolling stop.

The researchers were looking into the evolution of human driving habits around autonomous vehicles and they seemed to believe that the more human drivers interact with autonomous vehicles, the more comfortable they will become with them. The more comfortable they get with the autonomous vehicles, the less careful they will be.

In the end, they perfectly understood the moral hazard of drivers’ comfort with the other vehicles around them.

PS – I agree. Drivers will likely get too comfortable with autonomous vehicles and thereby increase the potential of accidents.
