

Aspen Opinion

Evaluating vulnerability within natural catastrophe models

March 13, 2017

Amaryllis Mouyiannou PhD, Vulnerability Specialist in Aspen Re’s Research & Development team, considers the flexibility accorded to more recent natural catastrophe models. As ever, the assumptions and methods used in arriving at loss probabilities should be given as much consideration as the results. Now, closer communication with modelling companies and open architecture enable a better understanding, and this is helping to secure more certain outcomes.

A new generation of models

We use catastrophe models to assess the probabilistic risk from potential natural disasters. These models estimate the probable loss at given return periods, provided the details of the insured assets are known (in terms of location and characteristics). Catastrophe models have made significant advances in the last 20 years, with the inclusion of more countries and the incorporation of up-to-date scientific methodologies. New features, for example liquefaction and tsunami risk, are continually introduced to enhance the risk estimation. Earlier models were said to operate within a black box, as there was little understanding of the relationship between the exposure detail input and the probabilistic loss output. However, as our Aspen Opinion “A different type of Update” highlighted, the black box detractors have been somewhat silenced as evolution has generated more detailed, transparent and modifiable versions. Nevertheless, model evolution in the last couple of years has not been as fast as many hoped.

The transparency and adaptability allow a “smarter” use of the models, enabling better representation of modelled risks. Model components can be individually evaluated and modified when necessary, given the reduced opacity and the very detailed support around modelling companies’ assumptions and best use. The greater transparency also allows for direct comparison of components between different models and their validation against external independent references. This has been beneficial to all concerned. Model users can be more confident of the relevance of a model to their particular business, while model designers receive valuable feedback on users’ needs. This “smarter” use of the models still demands specialised knowledge of vulnerabilities; an inappropriate choice of assumptions can have a negative impact on the final risk/loss calculations. Great care must always be taken in all aspects of modelling.

A number of (re)insurers, including Aspen Re, use a blended approach to catastrophe risk management, incorporating outcomes from more than one model. The available models need to be evaluated and their differences understood, so that the use of a particular model, or a combination of several, can be tailored to the assessed risk. To produce a more accurate modelling solution, all the model components should, if possible, be evaluated, compared and validated.
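As a minimal sketch of what blending might look like in practice, the snippet below combines the loss estimates of two hypothetical models, defined at the same return periods, using a credibility weight. The model names, loss values and weight are invented for illustration; actual blending approaches vary by (re)insurer and are typically more sophisticated.

```python
# Hypothetical sketch: blending exceedance losses from two catastrophe models
# with a credibility weight. All figures are illustrative only.

def blend_ep_curves(losses_a, losses_b, weight_a):
    """Blend two loss curves defined at the same return periods.

    losses_a, losses_b -- losses per return period from each model
    weight_a           -- credibility weight given to model A (0..1)
    """
    weight_b = 1.0 - weight_a
    return [weight_a * a + weight_b * b
            for a, b in zip(losses_a, losses_b)]

# Losses (in $m) at the 1-in-50, 1-in-100 and 1-in-250 year return periods
model_a = [120.0, 210.0, 380.0]
model_b = [100.0, 250.0, 420.0]

blended = blend_ep_curves(model_a, model_b, weight_a=0.6)
# e.g. at 1-in-50: 0.6 * 120 + 0.4 * 100 = 112
```

The weight itself would normally reflect a validation exercise: how well each model’s components match claims experience and independent references for the peril and region in question.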

Calculation components

In order to predict the possible losses arising from an event, the potential hazard intensities at each exposure location need to be connected to the monetary loss through a module that estimates the expected physical damage on the insured assets. For the (re)insurer, assessment of damage potential includes buildings, contents and business interruption. This assessment is covered within the vulnerability module of each catastrophe model. The key calculation components - or modules - within natural catastrophe models are shown in Figure 1.

Figure 1: Key components of natural catastrophe loss assessment

Source: Aspen Re R&D
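The module chain of Figure 1 can be sketched as three toy functions: a hazard module producing an intensity at the exposure location, a vulnerability module mapping that intensity to a damage potential, and a financial module applying it to the insured value. The attenuation rule, the linear vulnerability curve and every number below are invented for illustration and bear no relation to any vendor’s model.

```python
# Illustrative sketch of the module chain in Figure 1. All functions and
# numbers are toy assumptions, not any real model's formulae.

def hazard_module(event_magnitude, distance_km):
    # Toy attenuation: intensity decays with distance from the event
    return event_magnitude / (1.0 + 0.1 * distance_km)

def vulnerability_module(intensity):
    # Toy vulnerability function: damage ratio grows with intensity, capped at 1
    return min(1.0, 0.05 * intensity)

def financial_module(damage_ratio, insured_value):
    # Ground-up loss before policy conditions
    return damage_ratio * insured_value

intensity = hazard_module(event_magnitude=7.0, distance_km=20.0)
damage = vulnerability_module(intensity)
loss = financial_module(damage, insured_value=1_000_000)
```

In a real model each of these steps is far richer (event sets, site conditions, uncertainty distributions, full policy structures), but the flow of information between the modules follows this pattern.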

The vulnerability calculation is dependent on the exposure description (location and characteristics) and the hazard representation, which is expressed by relevant intensity measures (IMs). The vulnerability component output can then be calculated as the damage potential. This is expressed as a mean damage ratio (MDR) - i.e. the ratio of average damage loss to total replacement value of the insured asset - and is a function of the hazard intensities. The functions relating MDRs to IMs are key to the vulnerability assessment. The MDR values, together with the policy conditions, are employed in the loss calculation to assess the probabilistic losses for different return periods. Within each model, it is important to evaluate which vulnerability functions give a true reflection of claims settlements and represent the insured buildings rather than the general engineering view of structural performance.
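The MDR definition and the role of policy conditions can be illustrated with a short sketch. The deductible-and-limit structure below is a deliberately simple stand-in for real policy conditions, and all figures are invented.

```python
# Sketch: MDR is average damage loss divided by total replacement value;
# simple policy conditions (deductible and limit) then turn the ground-up
# loss into an insured loss. Figures are illustrative only.

def mean_damage_ratio(average_damage_loss, replacement_value):
    return average_damage_loss / replacement_value

def apply_policy_conditions(ground_up_loss, deductible, limit):
    # Loss net of deductible, capped at the policy limit
    return min(max(ground_up_loss - deductible, 0.0), limit)

mdr = mean_damage_ratio(average_damage_loss=150_000,
                        replacement_value=1_000_000)   # 0.15

ground_up = mdr * 1_000_000                            # ~150,000 ground-up
insured = apply_policy_conditions(ground_up,
                                  deductible=25_000,
                                  limit=100_000)       # capped at the limit
```

Real policy structures (layers, reinstatements, per-site versus per-policy terms) are considerably more complex, but the principle of converting an MDR into a conditioned loss is the same.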

Two-stage development process

The vulnerability component is derived during the development phase of each catastrophe model through a two-stage process - the fragility estimation and the vulnerability assessment. At the first stage, the performance of the insured assets (structures, contents and business interruption) during events of various intensities is expressed in terms of engineering demand parameters (EDPs). These EDPs can be calculated through a variety of methods, while fragility assessment methodologies divide into two categories - the empirical and the analytical. Empirical methodologies derive the expected performance from past damage surveys of similar events, expert judgement and so on. In contrast, analytical methodologies range from very simple models to very detailed ones that take all relevant factors (e.g. all building characteristics in the structural damage calculation) into account. Empirical and very simplified analytical methods were employed in the development of early models, but evolution has led to the incorporation of increasingly advanced and detailed analytical methods. The fragility and vulnerability estimation within seismic models is keeping pace with earthquake engineering advances through consideration of the available updates in structural and content earthquake performance.
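A common functional form for a fragility curve, in both empirical and analytical work, is the lognormal: the probability of a building reaching a given damage state rises with the intensity measure, governed by a median capacity and a dispersion. The sketch below assumes that standard form; the parameter values are illustrative, not taken from any model.

```python
# Minimal sketch of a fragility curve, assuming the common lognormal form:
# P(damage state reached | IM), with median capacity theta and dispersion
# beta. Parameter values below are illustrative assumptions.

from math import erf, log, sqrt

def fragility(im, theta, beta):
    """Probability of reaching a damage state given an intensity measure."""
    return 0.5 * (1.0 + erf(log(im / theta) / (beta * sqrt(2.0))))

# Example: a damage state with median capacity 0.4 g and dispersion 0.6
p_low = fragility(0.2, theta=0.4, beta=0.6)   # below the median: p < 0.5
p_med = fragility(0.4, theta=0.4, beta=0.6)   # at the median: p = 0.5
p_high = fragility(0.8, theta=0.4, beta=0.6)  # above the median: p > 0.5
```

Empirical development fits theta and beta to observed damage data; analytical development derives them from structural models of varying sophistication.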

In the second stage of the vulnerability module development, the EDPs are translated to MDRs that express the expected damage as a percentage of the replacement value of the assets.

Figure 2 shows the vulnerability module derivation for a seismic model. First, a relationship between the earthquake intensities and the building performance has to be established. The building resistance increases with seismic demand until the building reaches its resistance capacity. Resistance then falls as damage occurs and the building degrades until it collapses. The graph represents the fragility of the building, with resistance represented by distinct performance levels such as the initiation of cracking, the failure of some structural parts and so on. The performance levels are then translated to loss damage, i.e. the cost of restoring the building to its initial condition (repair or complete replacement) expressed as a percentage of its replacement cost (the MDR). Collapse translates to 100% MDR. Loss due to business interruption can be expressed as a percentage of total value by assigning downtime periods to each building performance level. In turn, these are expressed as a function of the initial earthquake intensities. A vulnerability function is derived for each building typology, and these may be employed in the open modelling process.

Figure 2: Assessment of building performance to damage ratios

Source: Aspen Re R&D
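The translation step around Figure 2 can be sketched as two lookup tables: one mapping performance levels to MDRs, the other mapping them to downtime for the business interruption loss. Both mappings below are invented examples, not any vendor’s actual values.

```python
# Sketch of the second development stage: performance levels -> MDR, and
# performance levels -> downtime for business interruption (BI). All values
# are illustrative assumptions.

# Performance level -> MDR (repair cost as fraction of replacement cost)
PERFORMANCE_TO_MDR = {
    "none": 0.00,
    "cracking": 0.05,          # initiation of cracking
    "partial_failure": 0.40,   # failure of some structural parts
    "collapse": 1.00,          # collapse translates to 100% MDR
}

# Performance level -> assumed downtime in days
PERFORMANCE_TO_DOWNTIME = {
    "none": 0,
    "cracking": 10,
    "partial_failure": 120,
    "collapse": 365,
}

def bi_loss_fraction(performance_level, annual_bi_days=365):
    """Business-interruption loss as a fraction of annual BI value."""
    return PERFORMANCE_TO_DOWNTIME[performance_level] / annual_bi_days

mdr = PERFORMANCE_TO_MDR["partial_failure"]   # 0.40
bi = bi_loss_fraction("partial_failure")      # 120/365 of annual BI value
```

Chaining these tables with fragility curves per building typology yields the overall vulnerability functions that model users eventually see.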

Dealing with opacity

Both the fragility estimation and the translation from EDP to MDR are carried out during the development phase of the models, and thus are still somewhat opaque. Only the overall vulnerability functions (i.e. the relationships between event intensities, expressed in terms of IMs, and MDR) can be seen when using a model, and often the documentation does not explicitly describe the details and assumptions used. A better understanding and better subsequent use of models requires comprehensive evaluation of the underlying IM and MDR methodologies.

Hazard representation

The hazard representation within the vulnerability functions can have a strong impact on the final loss estimation. A variety of IMs can be used to describe the hazard at each exposure location, given an event. Different models for the same peril and region may therefore use vulnerability functions with different IMs, and the uncertainty in the loss calculation can be reduced through adoption of the most appropriate IMs.

For example, the range of IMs for seismic hazard is very broad. They vary from those that only consider acceleration on the ground (structural-independent IMs) to those that are tied to the natural frequency of vibration of the structures (structural-dependent IMs, representing the intensity experienced through the building, which varies according to floor and building height). Structural-dependent IMs have a greater correlation to structural performance than structural-independent IMs and thus can reduce calculation uncertainty. Nevertheless, this advantage is only relevant when the structural details (e.g. a building’s height, used as a proxy for estimating the natural frequency of vibration) are known. If these exposure details are unavailable, then use of such IMs can introduce randomness into the hazard representation and consequently errors into the loss calculation.
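The choice between the two IM families can be sketched as a simple fallback rule: use a spectral (structural-dependent) measure when the building height is known, otherwise fall back to peak ground acceleration. The period-per-storey rule of thumb below is a common engineering approximation for frame buildings; the response spectrum is a toy invention for illustration.

```python
# Sketch of IM selection: spectral acceleration at the estimated fundamental
# period when storey count is known (structural-dependent IM), else peak
# ground acceleration, PGA (structural-independent IM). The spectrum shape
# is an invented illustration.

def fundamental_period(n_storeys):
    # Common rule of thumb for frame buildings: roughly 0.1 s per storey
    return 0.1 * n_storeys

def toy_response_spectrum(period, pga):
    # Invented flat-then-decaying spectrum, for illustration only
    if period <= 0.5:
        return 2.5 * pga
    return 2.5 * pga * (0.5 / period)

def select_intensity(pga, n_storeys=None):
    """Return (im_name, im_value); fall back to PGA when height is unknown."""
    if n_storeys is None:
        return ("PGA", pga)
    t = fundamental_period(n_storeys)
    return ("Sa(T=%.1fs)" % t, toy_response_spectrum(t, pga))

print(select_intensity(pga=0.3))                # unknown height -> PGA
print(select_intensity(pga=0.3, n_storeys=10))  # known height -> Sa(T)
```

The point of the fallback is exactly the one made above: guessing the storey count would inject randomness into the hazard representation, so when it is unknown a structural-independent IM is the safer choice.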

Figure 3 shows the different loss outcomes of four seismic events using structural-dependent vulnerability functions derived from different building height assumptions. Losses assuming mid-, high- and tall-rise buildings are compared with those based on a low-rise building assumption. In Event 1, the loss from a tall-rise building is 80% greater than that from a low-rise building. Such a model is not appropriate where building heights are unknown; in this instance a model with structural-independent vulnerability functions would be more appropriate.

Figure 3: Percentage loss change of modelled earthquake event losses using different building assumptions

Source: Aspen Re R&D

Devil is in the detail

Model users should be aware of the constraints and sensitivities of the various model components – especially the vulnerability component. Model evaluation should consider whether engineering advances have been incorporated, along with the assumptions and their potential sources of uncertainty. Improvement has been made through the adoption of more recent engineering developments, but exposure resolution and description quality are still likely to impose limitations. Newer, more advanced and precise models can be ‘data hungry’ and may generate uncertain (and even unstable) loss calculations if the granularity and quality of the exposure data are limited. Frequently, the (re)insurer still receives exposure data lacking in detail, or already subject to classifications or translations that are not consistent with the modelling requirements.

Yet advances in models make a fundamental difference. Through greater knowledge and transparency, users can choose the right guidelines so that the available data is used in the best way.



The above article/opinion reflects the opinion of the author and does not necessarily represent Aspen's views. The article reflects the opinion of the author at the time it was written taking into account market, regulatory and other conditions at the time of writing which may change over time. Aspen does not undertake a duty to update these articles.