What the examples I give above have in common is this:

    1. The data is clean and abundant (e.g. the record of “like” clicks on Facebook; patterns of pixel-intensity in an image; micro-adjustments to steering direction).

    2. The relationship between the data and the outcomes is simple: roughly speaking, linear relationships between the data signals and the outcomes.

    3. How to interpret the results is clear (these “likes” match the themes Donald Trump expounds; this pixel pattern matches the facial characteristics of a Uyghur; those steering adjustments keep the car in its lane).

Let’s create an imagined metric, X.  It is the ratio of the quality and abundance of the data to the difficulty of analysing it and of drawing conclusions from that analysis.

Represented directionally:

X = (DQ · DA) / (DiffA · DiffC)
where:

    • DQ is the quality / cleanness of the data

    • DA is the abundance (amount, velocity, frequency) of the data

    • DiffA is how difficult it is to analyse the data, including the amount of judgment required in choice of model and modelling approach

    • DiffC is how difficult it is to draw conclusions from the analysis, or to interpret its results
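The directional metric above can be sketched in a few lines of code.  The scores below are purely illustrative assumptions (a notional 1–10 scale), not figures from any real analysis:

```python
def x_metric(dq, da, diff_a, diff_c):
    """Directional metric X: data quality and abundance over
    analysis and interpretation difficulty."""
    return (dq * da) / (diff_a * diff_c)

# Hypothetical scores on a 1-10 scale, for illustration only.
# High-volume personal lines: clean, abundant data, simple relationships.
uk_motor = x_metric(dq=8, da=9, diff_a=3, diff_c=2)       # high X

# Commercial property: sparse data, heavy judgment in model and conclusions.
commercial_property = x_metric(dq=4, da=3, diff_a=7, diff_c=8)  # low X
```

The exact numbers do not matter; the point is the direction of travel.  Anything that degrades the data or complicates the analysis pushes X down.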

The higher X is, the more relevant ADS is.  Both ADS and traditional actuarial work follow the DATA IN → CRUNCH → CONCLUSION framework.  The less you need to intervene in those three steps, the more weight you can give ADS.  Good data with low modelling complexity is the environment where automated algorithms thrive.

In the field of actuarial or statistical analysis in insurance, there are some areas where ADS may bring great insight.  An obvious one is a single high-volume line of business (e.g. UK motor), where price optimisation needs large quantities of real-time data and crunching, and quick response times in a highly competitive market.  It’s the closest we have to algorithmic stock trading.

But this is rare.  Insurance rarely needs such lightning-fast responses, and very rarely will managers have enough confidence in the data to let a computer do their thinking.

In a nutshell, it is far easier for Amazon to predict what book I’d like to read than it is for me to tell you what your Commercial Property loss ratio is going to be next year.

The real insurance world weighs on X.  It weighs on all of its components, decreasing the top line of the fraction and increasing the bottom line.  Data is almost never abundant and immaculate.  The choices of modelling approach, model parameters and data-cleaning are laden with judgment that requires both experience and clear, transparent communication.  The conclusions must be carefully considered: discarded when not sound, and well communicated in all cases.  The quality of all findings must be drawn out and made clear to the end-user, and actions or other commercial ends suggested by the findings should be presented.

So whenever data is compromised in quality or volume (low DQ or DA), or whenever the analysis is challenging or its outcomes nuanced (high DiffA or DiffC), X is low.

Low X is very, very common.  It means actuaries are needed for their experience and judgment.  Those tough exams and years of commercial experience, for which actuaries get paid the big bucks, are enduringly valuable.

Don’t get me wrong: it is important to build processes and software tools to create control cycles of actuarial insight.  This is the essence of efficiency and consistency.  It is what Calibrant does.  And I recognise that some circumstances do lend themselves to the algorithmic approach, including as contributions to an overall process still guided by an actuary.

But anyone who thinks that actuaries can be replaced with a machine is either fooling themselves or (worried about those big bucks) guilty of wishful thinking.  The insurance world can accommodate both ADS and actuaries, but if it had to choose, it couldn’t lose the actuaries.
