
Every day seems to bring a new article about how artificial intelligence is changing the world. Change brings risk though, and I’m not convinced all of those risks will be covered by insurance.

When you think of artificial intelligence (AI), it is easy to envision the HAL 9000 computer from Arthur C. Clarke’s Space Odyssey books and the movies made from them. Although the books say that HAL was activated in 1997, 20 years later AI clearly hasn’t reached that level. Tremendous strides forward are being made, though.

What is artificial intelligence? The best definition I’ve seen is the one developed by TechEmergence. Their definition states that:

Artificial intelligence is an entity (or collective set of cooperative entities), able to receive inputs from the environment, interpret and learn from such inputs, and exhibit related and flexible behaviors and actions that help the entity achieve a particular goal or objective over a period of time.

Put more simply, an AI is a computer system that can learn from the information it receives and independently determine what actions to take to achieve the desired result.

Amazing Capabilities of Artificial Intelligence

“You may not realize it, but artificial intelligence is omnipresent in your life”

You may not realize it, but artificial intelligence is omnipresent in your life. Was your Gmail full of spam today? If not, then the AI in Google’s spam filter is working. It learns which emails are spam when you mark them as such, and based on its analysis of those emails it classifies future messages. AIs in other spam filters do the same thing.

Is Amazon remarkably good at suggesting things for you to buy and showing you relevant search results? That’s its AI at work. Have you had a payment card declined when it shouldn’t have been? You can thank your bank’s AI, which considers every card transaction and uses that information to determine whether a particular transaction is fraudulent. The algorithms aren’t perfect, though, and that’s why banks sometimes put a hold on a card when they shouldn’t.

Targeted advertising is also made possible by AIs. By analyzing things like your browsing history, purchases, and Facebook posts, an AI can determine what your interests are and how much you are likely to spend, and then serve you targeted ads it thinks you will respond to.
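To see concretely what that kind of learning looks like, here is a minimal sketch of a spam filter built with scikit-learn’s naive Bayes classifier. The four hand-written messages are illustrative assumptions; a real filter like Gmail’s trains on vastly more data with far more sophisticated models.

```python
# A toy spam filter: it learns from messages the user has already
# labeled and uses that to classify new ones. The tiny dataset is
# an illustrative assumption only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now",        # the user marked this as spam
    "claim your free vacation",    # the user marked this as spam
    "meeting moved to 3pm",        # the user kept this in the inbox
    "quarterly report attached",   # the user kept this in the inbox
]
labels = ["spam", "spam", "ham", "ham"]

# Turn each message into word counts, then learn which words predict spam.
vectorizer = CountVectorizer()
classifier = MultinomialNB().fit(vectorizer.fit_transform(messages), labels)

# Classify a new, unseen message using only what was learned above.
new_message = vectorizer.transform(["free prize vacation"])
print(classifier.predict(new_message))  # -> ['spam']
```

Note that nothing in the code says which words are spammy. The filter’s behavior comes entirely from the labeled examples it is given, which matters later in this article.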

Artificial intelligence can be eerily accurate. Algorithms used by Target to analyze shopping patterns reportedly identified a teenage girl as being pregnant, and Target then sent her coupons for baby products, much to her father’s consternation (he didn’t know the stork was airborne). In an even more amazing example, an AI analyzed over 100 million news articles and placed Osama bin Laden’s hiding place within a 200-kilometer radius in northern Pakistan. Abbottabad, where he was ultimately found, was within that radius.

Challenges Created by Artificial Intelligence

While artificial intelligence can do some incredible things, it clearly isn’t perfect. AI performance problems arise both from the way AIs are constructed and from the way they learn.

There are numerous examples of AIs not performing as expected because they weren’t built well or with enough capability. The well-publicized fatal crash of a Tesla in autopilot mode is one example: the car’s artificial intelligence failed to distinguish between the sky and a white 18-wheeler that crossed the car’s path. In 2015, Google Photos tagged pictures of black people as gorillas. And in Elite Dangerous, an online multiplayer game that incorporates an AI, an update gave the AI the ability to create weapons the game’s developers never intended, with devastating effects on players. The developers referred to the problem as a bug, meaning there was an error in the computer code.

“An AI cannot differentiate between good and bad information”

Although an artificial intelligence may fail because it wasn’t designed well enough, a fully capable AI can also be problematic. An AI needs to learn, and in that respect artificial intelligence can be pretty stupid: it cannot differentiate between good and bad information. It will learn from it all. The AI will draw conclusions and initiate actions based on the data it receives, and if the data is no good, the AI’s analysis and actions will be bad too. If legitimate email is marked as spam, the AI in your spam filter will learn from those labels and keep legitimate email from reaching you.

The “garbage in, garbage out” problem with artificial intelligence can be managed when you have control over the input: you can control which messages are designated as spam. But what happens in situations where no ability to control the input exists?
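To make the point concrete, here is the same illustrative naive Bayes filter from earlier, this time trained on bad labels. The mislabeled messages are an assumption made up for the sketch; the takeaway is that the model has no way to know the labels are wrong.

```python
# "Garbage in, garbage out" with the same toy filter: the training
# labels below are deliberately wrong (an illustrative assumption),
# and the model faithfully learns the garbage.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now",        # genuine spam, labeled spam
    "meeting moved to 3pm",        # legitimate, but mislabeled as spam
    "quarterly report attached",   # legitimate, but mislabeled as spam
    "cat pictures from grandma",   # legitimate, labeled correctly
]
labels = ["spam", "spam", "spam", "ham"]  # the filter cannot question these

vectorizer = CountVectorizer()
classifier = MultinomialNB().fit(vectorizer.fit_transform(messages), labels)

# Ordinary business email now looks like spam to the model.
test = vectorizer.transform(["quarterly meeting report"])
print(classifier.predict(test))  # -> ['spam']
```

The model here is performing exactly as designed. The garbage is in the data, not the code.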

The Microsoft Tay experiment in 2016 is a great illustration of this problem. Tay was a chatbot designed to simulate the conversation of a 19-year-old American girl and to learn from its interactions with other Twitter users. After Tay was launched, Twitter users began sending it racist and inflammatory tweets, and Tay began sending similar tweets of its own. Microsoft suspended the experiment within 16 hours of its launch.
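For illustration only, here is a hypothetical toy chatbot, ParrotBot, that shows the shape of Tay’s vulnerability. Tay’s language models were far more sophisticated than this sketch, but the underlying exposure, learning indiscriminately from whoever talks to you, is the same.

```python
# A hypothetical ParrotBot: it "learns" by remembering what users say,
# with no filter between good input and bad. An illustrative sketch,
# not how Tay actually worked.
import random

class ParrotBot:
    def __init__(self) -> None:
        self.phrases = ["Hello!", "Nice to meet you."]  # seed replies

    def chat(self, user_message: str) -> str:
        # Learn indiscriminately: every user message joins the pool
        # of things the bot may say back to someone else later.
        self.phrases.append(user_message)
        return random.choice(self.phrases)

bot = ParrotBot()
bot.chat("You are great!")            # benign users teach benign phrases
bot.chat("<an inflammatory remark>")  # hostile users teach toxic ones
print(bot.chat("Hi"))  # may now repeat the toxic phrase to anyone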

The Tay experiment teaches a good lesson about using AI to interact with the public. Chatbots are increasingly being used as a form of advertising, allowing brands to connect more directly with consumers. Those designed to learn from their conversations with consumers could be vulnerable in the same way Tay was.

Insurance for Loss Caused by Artificial Intelligence

The failure of an AI to perform as intended can obviously produce losses for a company. That is true whether the problem lies in the design or implementation of the AI, or in the AI’s functioning after learning from poor information. An AI’s actions can affect the products and services a company provides to its customers, which could conceivably result in lost business income and possibly even third-party claims. An AI could also damage a company’s reputation.

Would those losses be covered by insurance? Possibly.

If customers or other third parties bring claims alleging that an AI malfunctioned, e.g., caused legitimate email to be tagged as spam, such claims could be covered by insurance. Technology errors and omissions (E&O) policies are designed specifically to cover losses resulting from faulty software and other technology products and services. Coverage could conceivably be available under other E&O policies, though I would never assume that is the case.

The insurance coverage picture is murky in situations like the failure of a car’s AI that leads to a crash, because E&O policies typically exclude loss arising from bodily injury and property damage. While it is possible that general liability policies could cover the resulting loss, that is by no means certain. Any company that believes it has such an exposure should carefully review its policies to ensure that coverage exists. If such a loss would not be covered, cyber and technology E&O insurers may be able to offer a solution.

Coverage should be available where an AI performs as intended but produces poor results because it learned from bad data. In a situation like the Tay experiment, Microsoft arguably did nothing wrong: it created a chatbot to learn from users, and the AI did exactly that. A company in that position could face an argument from its insurer that the E&O policy isn’t triggered because no wrongful act was committed. As a practical matter, though, I think any third-party claim is very likely to allege that the company deploying the AI did something wrong that allowed the AI to learn from bad data.

The insurance coverage picture for a company’s own losses resulting from its use of artificial intelligence is less promising.

An artificial intelligence’s failure to perform well could arouse the interest of regulators and law enforcement. The company receiving that attention obviously will have to respond. E&O policies will often exclude such claims. Cyber policies can include regulatory coverage, but it generally will apply only to claims arising out of privacy or system security events.

An AI’s failure to perform could impact a company’s income. If the AI doesn’t work well, customers may not buy more from the company. A game developer selling a game that has a malfunctioning AI may not sell many more games. Very few policies will cover that loss.

The same is true for damage to a company’s reputation resulting from an AI’s failure. A game developer, an anti-spam service or software provider, or another company that sustains reputational damage is very unlikely to have insurance coverage for the resulting financial loss. It may be possible to purchase such coverage, but it will probably require a specialized policy.

Artificial intelligence is benefiting businesses and consumers. Its use is growing dramatically, as are its capabilities. While AIs can solve many problems, they also bring some of their own. The failure of an AI for any reason could produce financial losses for a company. Companies that depend on AIs need to carefully evaluate whether those losses would be covered under their existing insurance policies. If a company does that and doesn’t like what it finds, it should consider insurance that can fill the gaps.