Fallacy 1: Cat models are right

Bureaucracy’s love of silos and task-churning is often pernicious.  Clean the data, upload it to the model, crank the handle, put the results into tables, populate the Solvency II capital model, job done; repeat next quarter.

There is an epistemic question which should be asked but often isn’t: how close to the truth are cat models’ predictions?  Some silos – notably those running cat models and broader capital models – tend to believe the results.  Others – notably those in commercial decision-making functions like underwriting – don’t.  The silos don’t compare notes.  The former does its thing, the latter shrugs its shoulders and gets on with doing business.

I have seen situations where the modelled AAL (average annual loss) for one natural peril is higher than the total premium for property insurance in that region.  This is striking: that premium must also fund attritional losses, other perils, expenses and profit, so the market’s implicit allowance for average cat losses is at most a modest fraction of it.  If the modelled AAL exceeds the whole premium, the model and the market differ in their view of average cat losses by a factor of more than three.
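To make the arithmetic behind the factor-of-three claim concrete, here is a minimal sketch.  All figures are invented for illustration, not market data, and the 30% cat loading is an assumption about how much of the premium the market implicitly sets aside for this peril.

```python
# Hypothetical illustration of the AAL-vs-premium comparison.
# All figures below are invented for the sketch, not real market data.

premium = 100.0            # total property premium written in the region
cat_loading_share = 0.30   # assumed (hypothetical) share of premium the
                           # market implicitly allocates to this peril
market_implied_aal = premium * cat_loading_share  # market view: 30.0

modelled_aal = 105.0       # modelled AAL exceeding total premium, as observed

# Ratio of the model's view to the market's implied view of average losses
factor = modelled_aal / market_implied_aal
print(f"Model vs market view of average cat losses: {factor:.1f}x")
```

Under these assumed numbers the two views differ by a factor of 3.5; any plausible cat loading below a third of premium gives a gap of more than three.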

Markets can be inefficient and wrong, but surely not that wrong.

I can think of reasons for this error, notably that most attention is paid to the tail rather than the mean, especially by stakeholders such as regulators and reinsurers, whose existence depends on not underestimating such downside scenarios.

I have been told by underwriters that some cat models for some perils are heavy (as I observe above), while others are reasonable or even light.  This is hearsay, I confess, and in the modellers’ defence we should acknowledge the enormous effort the cat modelling firms put into building and refining their models over time.

But the main point is this: do not take cat models’ AALs as gospel.  Always understand and challenge them.