Lovethorn. Revolutionising risk.

What can insurers learn from marketing and dragons?

When it comes to AI, what’s good for marketing isn’t necessarily good for insurers.

Gailynn Nicks, Chief Product Officer

7 mins

Several key marketing technologies – also known as martech – have been adopted and trialled in insurance value chains over the last few years. Among them are consumer targeting, dynamic pricing and sentiment analysis, all of which on the surface can bring real benefits to our industry. However, deep understanding of such technologies is crucial to their long-term value, and vitally, to the avoidance of regulatory and other detrimental potholes along the journey.

AI generated image of dragon with shampoo bottle

All imagery AI generated.

A little background. Advertising sits at the epicentre of the digital data economy; it always has. And advertising is about selling.


Programmatic advertising (the automated buying and selling of online advertising space) started with the idea that the digital ecosystem offered the most cost-effective method of real-time ad buying for targeting consumers. As soon as the target consumer’s eyeballs hit the screen, whichever ad won the real-time auction would be revealed. At the time, the idea was that media buyers would get the cheapest possible placement because all media were equal in terms of effectiveness. This commoditisation of both advertising and media content was intended to massively decrease how much advertisers needed to spend to get in front of their target audience. Havoc ensued amongst agencies and media owners alike, but, as with most seismic technology-driven changes, it didn’t take long for those with money and power to change the game.
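The auction mechanic described above can be sketched in a few lines. This is a toy second-price auction (a common programmatic mechanism, simplified here; real exchanges add floors, fees and timeouts, and the bidder names are invented):

```python
# Minimal sketch of a real-time second-price auction: the highest bidder
# wins the impression but pays the second-highest bid.
def run_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Return (winning bidder, clearing price) for one impression."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    # With a single bidder, the clearing price falls back to their own bid.
    clearing_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, clearing_price

print(run_auction({"brand_a": 2.40, "brand_b": 1.90, "brand_c": 0.75}))
```

The second-price design is what made placements feel commoditised: the winner's price is set by the competition, not by their own valuation.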


Over the next decade, platforms like Google and Facebook harvested personal data from their users whilst closing down open access and creating data monopolies. The topline impact of leveraging these data monopolies for Google, Meta and Amazon was nothing short of phenomenal. Between them in 2022 they accounted for over 50% of total global ad spend1, up from 10% in 2012. This huge growth took place in an environment where ad spend not only grew from around $550bn to $780bn, but digital’s share of that spend grew from less than 10% to more than 60%.

AI generated image of personal data

A key driver of this growth story was the narrative that individual targeting is not only vastly more effective than mass advertising, but also highly accurate. Mathematically, this was done by calculating the likelihood, based on aggregate models, that any individual set of eyeballs matched the target audience. In aggregate, this enabled more effective and efficient reach of a target audience, but it certainly did not ensure that each individual saw only correctly targeted ads. In fact, most models achieve only a modest improvement in reaching the “right” eyeballs, but in aggregate that is enough to change the effectiveness of a campaign.
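The arithmetic behind that claim is worth making explicit. In this toy calculation (all numbers hypothetical), a model that is wrong for most individual impressions still nearly doubles campaign-level reach:

```python
# Hypothetical numbers: the target audience is 10% of the population, and a
# targeting model gets an on-target hit on 18% of the impressions it selects.
base_rate = 0.10        # chance a random impression hits the target audience
model_precision = 0.18  # chance a model-selected impression hits the target

impressions = 1_000_000
untargeted_hits = impressions * base_rate
targeted_hits = impressions * model_precision

print(f"Untargeted: {untargeted_hits:,.0f} on-target impressions")
print(f"Targeted:   {targeted_hits:,.0f} on-target impressions")
# The model is still wrong for 82% of individuals it selects, yet the
# campaign-level improvement (1.8x) is commercially significant.
print(f"Per-impression miss rate: {1 - model_precision:.0%}")
```

The same asymmetry recurs throughout this article: modest individual-level accuracy, meaningful aggregate effect.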

What is the cost of this error in reaching the “right” eyeballs? From an advertiser perspective, the premium paid for targeting is wasted. From a consumer point of view, seeing ads for something irrelevant to you, like anti-dandruff shampoo or nappies, as you browse the web or scroll through social media, causes little to no detriment. There may be data privacy issues in the background, but there is no financial or substantive cost to wrongly targeted consumers.


What about dynamic pricing? In the case of dynamic pricing models, for example airline tickets, the mechanism is slightly different. Here, the difference in the price offered to any individual is primarily driven by the fungibility and cost management of the offered product or service. There may also be a loading factor based on your own online behaviour derived from your cookies, either specific to that purchase or linked to other data associated with you. In this case, it’s possible for the application and accuracy of that data to result in substantial extra costs (why does the price of this flight keep going up every time I go back to it?!?!), but the nature of demand-based pricing in such sectors is generally viewed as fair.
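A simplified sketch of how such a fare might be assembled (the structure and every coefficient here are hypothetical, not any airline's actual model): the bulk of the price movement comes from inventory and timing, with a small cookie-derived behavioural loading on top.

```python
# Hypothetical demand-based fare: mostly inventory and urgency, plus an
# optional loading derived from the shopper's own browsing behaviour.
def fare(base: float, seats_left: int, days_out: int, repeat_visits: int = 0) -> float:
    scarcity = 1 + max(0, 20 - seats_left) * 0.02   # fewer seats -> higher price
    urgency = 1 + max(0, 14 - days_out) * 0.03      # closer to departure -> higher
    behaviour = 1 + min(repeat_visits, 5) * 0.01    # cookie-based loading, capped
    return round(base * scarcity * urgency * behaviour, 2)

# Plenty of seats, booked early, first visit: the base fare.
print(fare(100.0, seats_left=20, days_out=14))
# Scarce seats, last minute, fourth time checking the price: much higher.
print(fare(100.0, seats_left=5, days_out=3, repeat_visits=4))
```

Note that in this sketch the behavioural term is small relative to the demand terms, which is why such pricing is generally viewed as fair even when it stings.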


Then there is the use of sentiment analysis and biometric data. Facial coding (measuring human emotions through facial expressions) is commonly used to predict how well advertising is likely to perform, audio data to assess the emotional state of someone speaking to a call centre operative, and text analytics to understand the underlying sentiment of a consumer. Each has its own degree of error. Facial coding can be pretty good at predicting ad performance across a large group, but it certainly wouldn’t be accurate enough to bet the farm on the assessment of any individual in that group. Voice and text are similar in that they are good for “gisting”, i.e. being able to get the gist of what people are saying or feeling, and can add value to judgements made on a range of variables, but they are far from foolproof. Again, aggregation provides a useful view but, when facing an individual customer, you are looking at modest improvements with high margins of error.
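A small simulation makes the aggregation point concrete. Here (with entirely invented numbers) each individual's measured reaction is very noisy, yet the group average lands almost exactly on the true mean, because the noise cancels across thousands of people:

```python
import random
import statistics

random.seed(42)
n = 10_000

# Hypothetical "true" emotional reactions to an ad, one per viewer.
true_scores = [random.gauss(0.6, 0.1) for _ in range(n)]
# A noisy biometric estimate of each reaction (noise std 0.4, i.e. large
# relative to the real variation between people).
measured = [s + random.gauss(0.0, 0.4) for s in true_scores]

# Typical error for one individual vs error of the group average.
individual_error = statistics.mean(abs(m - s) for m, s in zip(measured, true_scores))
group_error = abs(statistics.mean(measured) - statistics.mean(true_scores))

print(f"Typical individual error: {individual_error:.3f}")
print(f"Error of the group mean:  {group_error:.4f}")
```

The group mean is useful for judging an ad; the individual readings are far too noisy to judge a person. That is precisely the gap that matters when these tools are pointed at a single customer.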

Enough about marketing. What does this mean when you apply such techniques to insurance?


Recent forecasts have insurers spending almost 50% of their marketing and advertising budgets through digital channels2, so there is a lot at stake. Small changes in price factors, offer elements and the application process can have a significant effect on insurer profitability, so making use of data and analytics from other sectors makes sense. The challenge is how to execute them appropriately in insurance, especially where there is considerable regulation, and the impact on people can be both substantial and discriminatory.

AI generated image of facial coding

One of the main ways that analytics adapted from martech are being used in insurance sales is price optimisation. Algorithms are designed to identify those “likely to have” low price elasticity or those “susceptible” to price walking, and then to make offers accordingly. This may seem harmless at first, but the calculation of those propensities comes from aggregated, and often uninterpretable, models that are then applied at an individual level. What is presented to consumers is often opaque, with calculations underpinned by either incomplete data or by unverifiable second- and third-party sources.
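Stripped to its essentials, the mechanism looks something like this. The function below is a hypothetical sketch, not any insurer's actual model; the point is that an opaque behavioural score, estimated on aggregate data, directly moves an individual's renewal premium:

```python
# Hypothetical price-walking sketch. "inertia_score" stands in for an
# aggregate propensity model's estimate that this customer won't shop around;
# it may be wrong for any given individual.
def renewal_price(base_premium: float, inertia_score: float, max_walk: float = 0.15) -> float:
    """Load the renewal premium for customers the model thinks are inert.

    The score is clamped to [0, 1] and the loading capped at max_walk (15%).
    """
    loading = max(0.0, min(inertia_score, 1.0)) * max_walk
    return round(base_premium * (1 + loading), 2)

# Identical risk, identical base premium: the offers diverge purely on an
# unexplainable behavioural score.
print(renewal_price(500.0, 0.1))
print(renewal_price(500.0, 0.9))
```

Nothing in this calculation references the customer's risk, which is exactly why a mistaken score translates straight into individual financial detriment.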


Further, protected characteristics such as age are among the common explanatory variables behind such propensities, even when not overtly included in the models. Dig into these propensity models and you can see multiple margins of error, often made worse by poorly defined (and potentially illegal) assumptions and parameters. In insurance, the cost to an individual from the use of such models can be substantial, and insurers using them to make decisions without interrogation are asking for trouble with both regulators and consumers.


Finally, although now discouraged by regulators, methods such as sentiment or emotion analysis could be used to identify potentially fraudulent claims. This might have intuitive appeal, and could well give insurers a return in aggregate, but the application of these methods to individuals is problematic and the personal cost high, not just financially but also through the risk of exclusion from payouts and future cover. Even the suggestion that a US-based provider was using such technologies recently caused an immediate backlash from consumers, the media and AI experts – resulting in an embarrassing rollback and clarifications from the insurer.


Our entire justice system is based on the presumption of innocence, so practices that either exclude people without clear evidence of actuarially proven exemption characteristics, or that cause individuals to incur a financial penalty based on aggregate associations with protected characteristics, are asking for litigation in the future. Where any regulated characteristics are highly correlated with commercially appealing dimensions such as price insensitivity, regulators and consumer organisations are bound to intervene. We already see class actions in the US brought on the basis of provably biased outcomes from these models, contravening fairness and discrimination rules.


Insurance isn’t an anti-dandruff shampoo. Insurance isn’t a pack of nappies. Insurance isn’t an airline ticket. Understanding the benefits available from the evolution of martech is important, but as an industry, we need to be much more careful when it comes to using propensity models. The easiest way to audit is to monitor issues via real-world outcomes, but the models themselves need more careful attention. We must tread the algorithmic route from aggregation to individual application delicately. We must truly understand the types of methods and errors involved, as well as the veracity and completeness of the input data used. We must recognise the assumptions and parameters set in these models to determine the regulatory compliance they embody.


The potential consumer detriment from bad models is too high to be allowed to run unrestricted, but there will also be high costs in potential consumer backlash, fines and lost business for insurers who get it wrong.


J.R.R. Tolkien said, “It does not do to leave a live dragon out of your calculations, if you live near him.” In insurance, leaving the regulatory and legal risks from bad models out of your calculations is simply ignoring the dragon.


We’re Lovethorn. We’re revolutionising risk.

 

References

1. https://digiday.com/marketing/the-rundown-google-meta-and-amazon-are-on-track-to-absorb-more-than-50-of-all-ad-money-in-2022/

2. https://www.capgemini.com/gb-en/insights/research-library/competitive-advantage-through-digital-marketing/