P/C Industry Depends Too Much on Catastrophe Models, Says Pioneer Clark

By Andrew Simpson | April 19, 2011

Catastrophe models are a great risk management tool for property/casualty insurers, but even the person who created the first one is worried that they are being given more credit and influence than they deserve.

Karen Clark is an expert in the field of catastrophe risk assessment. She developed the first hurricane catastrophe model in 1983 and in 1987 founded the first catastrophe modeling company, known today as AIR Worldwide Corp., which she sold to Insurance Services Office (ISO) in 2002.

The need for insurers to understand catastrophe losses cannot be overstated. Clark's own research indicates that nearly 30 percent of every homeowner's insurance premium dollar goes to fund catastrophes of all types.

“[T]he catastrophe losses don’t show any sign of slowing down or lessening in any way in the near future,” says Clark, who today heads her own consulting firm, Karen Clark & Co., in Boston.

While catastrophe losses continue to grow, the catastrophe models themselves have essentially stopped advancing. Some of today's modelers claim they have new scientific knowledge, but Clark says that in many cases the changes are actually due to "scientific unknowledge," which she defines as "the things that scientists don't know."

“Companies should not be lulled into a false sense of security by all the scientific jargon which sounds so impressive because in reality… the science underlying the models is highly uncertain and it consists of a lot of research and theories, but very few facts,” says Clark.

Today Clark is telling insurance company executives that they don't need more models, but they do need more insight into their catastrophe losses. They need new approaches, as well as input from real, old-fashioned human underwriters, that can provide that insight.

“[C]atastrophe losses are so important already, we don’t really have to limit ourselves to just one approach or methodology,” says the consultant.

In the following edited version of a longer interview, Clark discusses some of the limitations of catastrophe models and recommends some new approaches. The complete interview with Clark may be heard on Insurance Journal TV.

Your concern is that insurers, rating agencies, regulators and a lot of other people may be relying too heavily on these models. Is there something in particular that has occurred that makes you want to sound this warning, or is this an ongoing concern?

Clark: Well, the concern has been ongoing. But I think you’ve probably heard about the new RMS hurricane model that has recently come out. That new model release is certainly sending shockwaves throughout the industry and has heightened interest in what we are doing here and our messages…. [T]he new RMS model is leading to loss estimate changes of over 100 and even 200 percent for many companies, even in Florida. So this has had a huge impact on confidence in the model.

So this particular model update is a very vivid reminder of just how much uncertainty there is in the science underlying the model. It clearly illustrates our messages and the problems of model over-reliance.

But don’t the models have to go where the numbers take them? If that is what is indicated, isn’t that what they should be recommending?

Clark: Well, the problem is the models have actually become oversimplified. What that means is that we are trying to model things that we can't even measure. The further problem is that the loss estimates are highly sensitive to small changes in those assumptions, so there is a huge amount of uncertainty. Even minor changes in these assumptions can lead to large swings in the loss estimates. We simply don't know what the right measures are for these assumptions. That's what I meant… when I talked about unknowledge.

There are a lot of things that scientists don’t know and they can’t even measure them. Yet we are trying to put that in the model. So that’s really what dictates a lot of the volatility in the loss estimates, versus what we actually know, which is very much less than what we don’t know.
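
To see why small assumption changes matter so much, consider a deliberately simplified sketch, not Clark's methodology or any vendor's actual model: a toy expected-loss calculation in which modest revisions to two assumed, hard-to-measure parameters (an annual landfall rate and a mean damage ratio, both hypothetical figures) move the loss estimate by more than 40 percent.

```python
# Toy illustration (not any vendor's model): small changes to assumed,
# hard-to-measure parameters can swing a modeled loss estimate sharply.

def modeled_loss(exposure, landfall_rate, damage_ratio):
    """Expected annual loss = insured value * annual event rate * mean damage ratio."""
    return exposure * landfall_rate * damage_ratio

exposure = 50e9  # insured value in the region, $50 billion (hypothetical)

base = modeled_loss(exposure, landfall_rate=0.10, damage_ratio=0.05)
bumped = modeled_loss(exposure, landfall_rate=0.12, damage_ratio=0.06)  # modest revisions

print(f"Base estimate:    ${base / 1e6:,.0f}M")
print(f"Revised estimate: ${bumped / 1e6:,.0f}M")
print(f"Change: {100 * (bumped / base - 1):.0f}%")  # roughly a 44% jump from two small tweaks
```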

Why are companies using these so heavily? Is there pressure from regulators or rating agencies?

Clark: Well, there certainly is pressure from rating agencies. While rating agencies won't acknowledge it, they do apply pressure to use the models, and particularly the models that give the highest loss estimates. I think they believe that this gives them a consistent approach. But it really doesn't, because the agencies base their ratings on information from different models and different model versions, without any independent checks on exposure data quality.

As was all too clear with the recent RMS model update, these models’ differences and exposure data quality differences typically lead to loss estimates that vary by 100 to 200 percent or more. So we believe that the rating agencies need a process that really is more consistent and transparent.

They should consider consistent, robust and transparent scenarios to represent the cat risk in each peril region. Then these same benchmark scenarios can be applied to every company’s portfolio so that the rating agencies are truly comparing apples to apples.

So because the rating agencies do ask for numbers from the models, obviously companies do feel compelled to use them.
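
The benchmark-scenario approach Clark proposes might work something like the sketch below, in which the same fixed set of scenario events is run against two company portfolios so the resulting losses can be compared apples to apples. The scenarios, regions and exposure figures here are hypothetical placeholders, not real benchmarks.

```python
# Sketch of the benchmark-scenario idea: run the same fixed set of events
# against every company's exposures so comparisons are apples to apples.
# All scenarios, regions and exposure figures are hypothetical.

# Each benchmark scenario assigns an assumed damage ratio to each affected region.
SCENARIOS = {
    "Cat-4 Gulf landfall": {"TX": 0.08, "LA": 0.05},
    "Cat-3 Southeast FL landfall": {"FL": 0.10},
    "M7 New Madrid earthquake": {"MO": 0.06, "TN": 0.04},
}

def scenario_losses(exposure_by_region):
    """Apply every benchmark scenario to one company's exposure (insured value by region)."""
    return {
        name: sum(exposure_by_region.get(region, 0.0) * ratio
                  for region, ratio in footprint.items())
        for name, footprint in SCENARIOS.items()
    }

company_a = {"FL": 2.0e9, "TX": 1.0e9}  # hypothetical insured values
company_b = {"FL": 0.5e9, "MO": 3.0e9}

for label, exposure in [("Company A", company_a), ("Company B", company_b)]:
    losses = scenario_losses(exposure)
    print(label, {name: f"${loss / 1e6:,.0f}M" for name, loss in losses.items()})
```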

How often are models updated?

Clark: Well, there are major updates and there are minor updates. I would say major updates come every several years. I don’t think RMS has had a major update for five or six years. But every year there are some changes in the loss estimates, even in Florida. For example, the Florida Commission on Hurricane Loss Projection Methodology has an annual process — actually now they have changed it to a two-year process; it used to be an annual process — whereby the modelers have to submit lots of information and their loss estimates for Florida.

If you look at that data, you see every year the model results change. They go up. They go down. They go up. They go down, even from the same modeler. That is the point we’re trying to make: that we’ve pretty much squeezed all the information we can squeeze out of the scientific data that we have. So most of these changes we are seeing are just noise, just uncertainty in the system. Because we can’t pinpoint the numbers exactly, there are wide bands of uncertainty.

Given the uncertainties, how do you advise insurance companies to use these models?

Clark: I would love to see companies keep them in context: they are tools, but they are only one small piece of the catastrophe risk assessment and management process. They give a view of the risk, but they don’t give the total view. And companies should be bringing other approaches and other credible information into their process. They need to get more insight by using all sources of credible information and not just a model …

So the point is, one, they’re one tool; they’re part of the risk assessment process. They should not be used to tell the whole story, and they should not be used as providing a number. One of the biggest problems with model usage today is taking point estimates from the model and thinking that that’s an answer. There is an enormous amount of uncertainty around those point estimates. So companies need to use other credible information to get more insight into their risk and to understand the risk better, what their potential future losses could be.

Companies should not be lulled into a false sense of security by all the scientific jargon which sounds so impressive, because in reality, as we’ve already discussed, the science underlying the models is highly uncertain and it consists of a lot of research and theories, but very few facts.

So companies need to keep this in mind. They need to be skeptical of the numbers. They need to question the numbers. And they should not even use the numbers out of a model if they don’t look right or if they have just changed by 100 percent.
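
One way to act on the point about point estimates is to report a band rather than a single number. The sketch below is a minimal illustration with made-up parameters: it re-runs the same toy loss calculation over several plausible assumption sets and quotes the resulting range.

```python
# Minimal illustration: quote a range across plausible assumption sets
# instead of a single point estimate. All parameters are made up.

exposure = 50e9  # hypothetical insured value in the region

plausible_assumptions = [
    # (annual landfall rate, mean damage ratio) -- each defensible, none "the" answer
    (0.08, 0.04),
    (0.10, 0.05),
    (0.12, 0.06),
]

estimates = [exposure * rate * damage for rate, damage in plausible_assumptions]
low, high = min(estimates), max(estimates)
print(f"Modeled annual loss: ${low / 1e6:,.0f}M to ${high / 1e6:,.0f}M "
      f"({high / low:.1f}x spread), not a single number")
```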

What types of additional information should they be using? Is there still a role for underwriters?

Clark: … One, what other information should they be looking at from a scientific point of view? They should be looking at information such as what historical events have happened in a particular geographic region. What would the losses be today if these events were to recur? What would the industry losses be? What would my losses be? What future scenarios can I imagine based on what’s actually happened, and what would the losses be from those scenarios? …

What kinds of events have happened in this area, and what do we know about those events? And then what does that tell us about what future events could be and what my losses could be? So that’s some important information they should be looking at from the scientific point of view.
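
The historical "what would it cost today" exercise Clark describes might look something like the sketch below, which reprices a few past events, with assumed damage footprints, against a hypothetical current portfolio. Event names, footprints and exposure figures are illustrative only.

```python
# Sketch of repricing historical events against today's exposures.
# Damage ratios and exposure figures are hypothetical placeholders.

HISTORICAL_EVENTS = {
    # event -> assumed damage ratio by region, based on what is known about the event
    "1926 Great Miami hurricane": {"Miami-Dade": 0.15, "Broward": 0.08},
    "1947 Fort Lauderdale hurricane": {"Broward": 0.10, "Palm Beach": 0.06},
}

todays_exposure = {  # a company's current insured values (hypothetical)
    "Miami-Dade": 4.0e9,
    "Broward": 2.5e9,
    "Palm Beach": 1.5e9,
}

for event, footprint in HISTORICAL_EVENTS.items():
    loss = sum(todays_exposure.get(region, 0.0) * ratio
               for region, ratio in footprint.items())
    print(f"{event}: ~${loss / 1e6:,.0f}M if it recurred against today's portfolio")
```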

The underwriting question is a great one, because what’s happened in the industry is that we’ve gone from 20 years ago, when there was no model and it was all underwriter judgment, to the other end of the spectrum today, which is all models and no underwriter. We think we just want to rely on the models and that underwriters don’t have anything to add.

But that’s not really the case because underwriters do have a lot of knowledge about the accounts they’re underwriting. They have information that the models will never have. The models are very general tools and, in fact, they’re very blunt tools.

I’ve likened the progress in the models to going from a handsaw to a chainsaw, but these are not surgical instruments. They don’t have a lot of detail in terms of the actual risk, but the underwriters do have that information.

And where we should be with catastrophe risk management is a happier medium, where we have a scientific approach but we also have an approach that allows underwriting knowledge to be added to that.

The catastrophe models can get you in the ballpark, but the underwriting knowledge can then give you a better, more focused view of individual risk. So we believe that the best approach combines the best of all worlds and doesn’t have one to the exclusion of the other.

Is the bottom line that cat models are a great tool but they’re not the be all and end all?

Clark: Right. How I like to express it now is that they are a great all-around tool, but they’re not the best tool for all purposes. And it’s time, given the importance of catastrophe losses, that we look at some other approaches and some other ways that we can assess and manage the risk. … We need to start thinking outside the black box and introducing some of these new approaches.

