Don’t big-up the TP; but don’t undermine it either

Shocking though it may seem to those who think actuaries are pointy-headed geniuses, building technical price (TP) models isn’t pricing.  The price is determined by the underwriter, or often simply by what the market will bear.  In retail insurance the rating-factor relativities do affect price, but even here great levers are deployed to shift the level (if not the relativities) of prices up and down.

Not only is TP distinct from the Actual Price (AP), but TP itself is sometimes deficient.  Problematic datasets, or too little data to inform all the complexities we must allow for, mean TP is sometimes a poor predictor of future claim outcomes.

What matters is the discipline that calculating TP brings.  And doing it consistently, year after year.  And comparing it to AP to deliver a key measure of rate adequacy: the AP/TP ratio.  And improving it as new information and data emerge.  And monitoring rate changes.

Feed TPs and AP/TP ratios into a control process that calibrates results against expectations and refines them over time.  This is the sharp end of actuarial work, where analytical insight drives profit (and it is the core of what Calibrant does).
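The arithmetic behind the AP/TP ratio is trivial; the value lies in computing it consistently across a portfolio so segments can be compared and tracked over time.  A minimal sketch of that calculation (all segment names, figures, and function names here are illustrative, not from any particular pricing system):

```python
def ap_tp_ratio(actual_premium: float, technical_premium: float) -> float:
    """Rate adequacy for a single policy: AP divided by TP.

    A ratio above 1.0 means the policy is priced above its technical
    level; below 1.0 means below it.
    """
    if technical_premium <= 0:
        raise ValueError("technical premium must be positive")
    return actual_premium / technical_premium


def segment_adequacy(policies: list[dict]) -> dict[str, float]:
    """Premium-weighted AP/TP ratio per segment.

    Sums AP and TP within each segment before dividing, so larger
    policies carry proportionally more weight than in a simple
    average of per-policy ratios.
    """
    totals: dict[str, tuple[float, float]] = {}
    for p in policies:
        ap, tp = totals.get(p["segment"], (0.0, 0.0))
        totals[p["segment"]] = (ap + p["ap"], tp + p["tp"])
    return {seg: ap / tp for seg, (ap, tp) in totals.items()}


# Illustrative mini-portfolio: two property policies and one marine.
policies = [
    {"segment": "property", "ap": 1200.0, "tp": 1000.0},
    {"segment": "property", "ap": 800.0,  "tp": 1000.0},
    {"segment": "marine",   "ap": 500.0,  "tp": 400.0},
]
print(segment_adequacy(policies))
```

Aggregating before dividing is a deliberate choice: a simple average of per-policy ratios would let many small, well-priced policies mask one large, under-priced one.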

TP is the backbone of professional pricing discipline, even if it isn’t how an insurer should actually set its prices.  For that reason, it mustn’t be undermined.  Attempts to manipulate TP from outside (e.g. by premium-hungry underwriters) can be an unedifying sight: “Remove this outlier”, “that claim won’t happen again”, “10% inflation: you’re crazy!”, “the market will never take that price”, and my favourite, “can I have an underwriter judgment factor in my pricing tool please?”

These phenomena undermine the value of TP.  Accept the weaknesses in TP without subverting it, and allow data and experience to improve it over time.  It isn’t the immaculate utterance of a divine oracle.  But provided underwriters are engaged, it will help the business monitor and then improve performance.