Frequency and Severity: Keys to Understanding Risk

Before I write an article about the difficulty of categorizing industries into low-, medium-, and high-hazard categories, a topic worth examination and discussion in its own right, I need to write about how the concepts of frequency and severity factor into understanding risk and safety. I will return to the hazard levels of industries, but first, here is some discussion of frequency and severity.

To begin with, frequency and severity (as they relate to potential losses to people or property) are factors that help a risk manager or safety professional get a sense of the magnitude of a particular risk. Though objective measures are sought whenever possible, there is typically some element of subjectivity inherent in any examination of these concepts and their relationships. There is also the significant difference in the analysis of these factors from a basis of historic experience versus potential future outcomes.

Frequency, also sometimes referred to as “likelihood” in certain models, refers to how often a particular adverse outcome happens or is expected to happen. When looking at historic data, the frequency of actual events can be rated against exposure measures such as work hours, days, months, quarters, production units, or even revenue dollars. When looking forward without the benefit of historic data, consideration can start with the simple question, “How likely is it for X to happen?” That question, of course, can lead to a wide range of conclusions depending on what the answer is based on and what assumptions are made.
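As a sketch of rating frequency against an exposure measure, the calculation below normalizes incident counts to work hours. The 200,000-hour base follows the common OSHA convention (100 full-time workers × 40 hours/week × 50 weeks/year); the incident figures are hypothetical.

```python
# Illustrative frequency rating against an exposure base. The
# 200,000-hour base follows the common OSHA convention
# (100 full-time workers x 40 hours/week x 50 weeks/year).
def incidence_rate(incidents: int, hours_worked: float) -> float:
    """Recordable incidents per 100 full-time-equivalent workers per year."""
    return incidents * 200_000 / hours_worked

# Example: 6 recordable incidents across 480,000 work hours
print(incidence_rate(6, 480_000))  # 2.5
```

The same approach works with any exposure base (production units, revenue dollars) so long as the base is used consistently when comparing periods or business units.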

Examining Frequency
Frequency or likelihood can also be examined in a very detailed way, by looking closely at potential causal factors and how those factors are linked together in cause chains or cause trees. For very large risks and process safety environments, this sort of analysis is worth doing with a great amount of focus. Unfortunately, for lesser risks, assumptions are often made with very little analysis. There are ways to include quality analysis that is reasonable in terms of the time and effort required, and it is beneficial to always at least consider what a given frequency or likelihood ranking is based on and whether that basis is sufficient for the given risk.

Scales for Frequency and Severity Levels
Before we discuss severity levels, it makes sense to discuss what scales are used in these analyses. Because frequency and severity are very frequently depicted in a two-axis arrangement, with frequency going from low to high on one axis and severity along the other, it is common for a single scale to be used for both frequency and severity, though there is no reason that the scales must be the same. The simplest version is a binary scale, with “low” and “high” as the only options.


It is much more typical (and easier for those attempting quick categorization) to see three levels used, with the addition of “medium” or “moderate” between low and high. There are examples of much more detailed rankings as well, including rankings that use numerical factors to express a level. Though these give an appearance of precision, seemingly precise rankings may not be as scientific as they seem, particularly considering the process used to arrive at a given numerical ranking. A basic three-by-three grid, as depicted below, is a common and useful starting point for risk ranking.
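A three-by-three grid can be sketched in code as a simple lookup. The level names and color bands below are illustrative choices, not a standard; a real matrix would reflect an organization's own definitions for each cell.

```python
# Hypothetical 3x3 risk matrix; level names and color bands are
# illustrative, not a standard. Keys are (frequency, severity).
MATRIX = {
    ("low", "low"): "green",
    ("low", "medium"): "green",
    ("medium", "low"): "green",
    ("low", "high"): "yellow",
    ("medium", "medium"): "yellow",
    ("high", "low"): "yellow",
    ("medium", "high"): "orange",
    ("high", "medium"): "orange",
    ("high", "high"): "red",
}

def risk_band(frequency: str, severity: str) -> str:
    """Map a frequency/severity pair to a color band on the grid."""
    return MATRIX[(frequency, severity)]

print(risk_band("medium", "high"))  # orange
```

Making the mapping explicit like this also forces the useful question of why each cell earned its band, rather than leaving those judgments implicit.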


Axes and Relative Weighting
It’s also useful to note that frequency and severity can be depicted along either axis, but using the vertical axis for severity and the horizontal axis for frequency tends to communicate relative risk ranking more effectively, especially if severity is given more relative weight than frequency.

Understanding Severity
Severity relates to the possible outcome of a given adverse event. Unlike frequency or likelihood, there are more natural sets of groupings that may be employed to place potential outcomes into categories. An example of such a set of groupings places outcomes such as non-lost-time (medical only) injuries as “low severity,” injuries that result in lost time or indemnity claims as “medium severity,” and injuries that result in some level of permanent disability as “high severity.” There are many other sets of criteria that may be used as well, such as dollar values of claims, as well as more complex categorizations of the nature of injuries.

Multiple Individuals
When multiple individuals are subject to possible injury by a given event, even if the injuries are minor, more weight will be given on the severity scale. Exactly how to weigh multiple smaller injuries against a single larger injury does present some challenges, though, so this must be done with careful consideration.

Using an “Impact Factor”
A value can be developed, with frequency and severity relatively weighted, for any spot on the grid. My examples above use a simplified analog for that value, with the green-yellow-orange-red color progression representing more total risk. The idea of an impact factor is to take the total risk position on the chart and assign a value to that position, with the highest-risk items receiving higher values. Once again, the use of numerical values can provide some interesting analysis possibilities, but it is subject to the same illusion of precision that applies to individual factors given numerical values. If numerical values are used, make sure to consider the source of those numbers and be careful not to assign excessive importance to them just because they are expressed numerically.
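One possible sketch of such an impact factor is below. The 1–3 scales and the severity exponent are assumptions chosen to give severity more relative weight; this is not an industry-standard formula, and any real weighting should be deliberate and documented.

```python
# A sketch of a numerical "impact factor." The 1-3 scales and the
# severity exponent are assumptions chosen to give severity more
# relative weight, not an industry-standard formula.
def impact_factor(frequency: int, severity: int,
                  severity_weight: float = 1.5) -> float:
    """Combine frequency and severity into a single risk value."""
    return frequency * severity ** severity_weight

# A severe-but-rare scenario outranks a frequent-but-minor one:
print(impact_factor(1, 3) > impact_factor(3, 1))  # True
```

The design choice here is the exponent on severity: with a weight above 1, two scenarios with the same product of factors no longer tie, and the more severe one ranks higher.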

Why This Matters
There is naturally much more to explore in the realm of frequency and severity, but the concepts presented above should form at least a starting point for applying them in practice. It is also important to consider how the conceptual framework for the relationship between frequency and severity forms an underpinning of a solid understanding of risk, one that is at least as valuable as any placement of a given possible scenario on a grid or matrix.

Putting Risk and Severity Consideration to Use
If you are familiar with textbook treatments of this topic, you will likely have noticed that not every related term and definition has been covered. One of the reasons for that relates to the value of these concepts in application versus theory. Theory is important, and sound theory is necessary for any practical application to take place. But the application is the most important consideration for the risk and safety practitioner, including the explanation of these ideas to those who need to put them to use. Simply put, people need to understand that a given potential risk might have a certain level of likelihood and a certain severity of outcome, and that those considered together give a glimpse of the relative overall level of risk. It is also essential to understand that the average person typically has varying levels of clarity in their understanding of risk factors. A matrix of frequency and severity affords a better basis for discussion of the actual risks involved, and can elevate a discussion beyond uninformed assumptions.

True Total Cost of Risk

There is a term used in the risk management field, “Total Cost of Risk,” also known as “TCOR.” This is an important concept in the strategic planning process for making decisions about the purchase of insurance, the deployment of alternative risk programs, and the setting of retention levels. The traditional “Total Cost of Risk” number includes the following costs:

– Insurance premiums and fees
– Retention (losses below the threshold at which purchased coverage applies, or losses which are self-insured)
– The cost of risk management administration
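The three buckets above can be expressed as a simple sum. The figures in the example are hypothetical, for illustration only.

```python
# Traditional TCOR as a simple sum of the three cost buckets above.
def total_cost_of_risk(premiums: float, retained_losses: float,
                       admin_costs: float) -> float:
    """Premiums + retained/self-insured losses + administration."""
    return premiums + retained_losses + admin_costs

# Hypothetical figures for illustration only:
print(total_cost_of_risk(250_000, 120_000, 80_000))  # 450000
```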

TCOR is an important number to understand and use, both for managing a specific company’s program and for comparing different business units, benchmarking, setting targets, and evaluating acquisitions, reorganizations, and business process reengineering.

What I’d like to suggest, though, is that while you maintain a good understanding of the traditional sense of TCOR and your organization’s performance in that area, you also consider a broader idea related to the big picture for risk costs. Hence the idea of “True Total Cost of Risk,” which is not a term with the same universal standard meaning as TCOR, but is a very useful concept for maintaining and communicating a broader view of the implications of risk in organizations.

The idea behind “True Total Cost of Risk,” or “TTCOR” is that there are indirect, uninsurable, and peripheral costs beyond those three categories that may be hard to quantify, but have a definite and substantial cost impact on an organization.

The specifics of indirect accident costs deserve separate and detailed treatment, but they are basically costs brought on by a loss that are not included in the claim costs, such as the cost of replacement workers or supervisor accident follow-up. These indirect accident costs are a major component of the TTCOR idea, but there is more.
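As a rough sketch, the indirect-cost component can be layered on top of traditional TCOR. The 3.0 multiplier below is a hypothetical placeholder; published indirect-to-direct cost ratios vary widely and should be validated against your own organization's experience.

```python
# A rough TTCOR sketch: traditional TCOR plus an estimate of
# indirect accident costs. The 3.0 multiplier is a hypothetical
# placeholder; published indirect-to-direct ratios vary widely.
def true_total_cost_of_risk(tcor: float, direct_claim_costs: float,
                            indirect_multiplier: float = 3.0) -> float:
    """Layer estimated indirect costs on top of traditional TCOR."""
    return tcor + direct_claim_costs * indirect_multiplier

print(true_total_cost_of_risk(450_000, 120_000))  # 810000.0
```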

You also might want to consider:

– Costs outside the risk management function in an organization per se that still relate to the administration of the risk management function. An example of this might be shared information technology, software development, or systems support functions.
– Time and effort involved in contractor selection and selection of replacement contractors, both for construction and non-construction purposes
– Impact on operations by constraints imposed by insurance policy conditions or limits (this is another potentially complex idea that also deserves more in-depth treatment)
– Hindrances or facilitation of the speed that new products or offerings can be brought to market by the efforts required to manage the risk of those new offerings
– Design, engineering, and planning costs resulting from code compliance, risk control, or liability reduction efforts

… to name a few. Understanding the breadth and depth of the impact of non-speculative risk of loss can help organizations move faster toward their goals and face fewer surprises down the road.

How good are we at judging risk around us?

“I’ve never had a problem before, and I’ve been doing this for years!”

We hear it regularly, and perhaps even say it ourselves. We use our own incident-free experience as justification for the acceptability of an activity. Here’s the problem with that: we are notoriously bad at judging many categories of risk around us. The reason is simple. “Getting away” with a risky behavior does not really prove anything, because the relative probability of an incident or injury can vary widely and still produce the same result in the very limited sample that is our own experience.

Here is an example: Terry grew up in a home where his mother would cook a meal and leave the remaining food to cool on the stove, and often not refrigerate it for hours afterwards. Terry continued the practice when he began to live and cook on his own. A woman he began dating noticed the tasty batch of stew that he’d made her for dinner was left on the stove (without the burner on) not only after dinner but all the way through the rather lengthy movie that they watched. She said “Are you not keeping the leftovers?” He said, “Why would you ask that? Wasn’t it good? There’s a lot left.” She replied, “It’s just that it’s been sitting out since 7, and it’s nearly midnight now.” Terry’s answer? “It’s fine.”

In his mind it was fine. But here are some facts to consider:
– The meaty stew he cooked was clearly and demonstrably likely to have far more growth of pathogens that could cause foodborne illness when cooled that slowly and held at room temperature
– Thorough reheating of the stew, as was his common practice, could kill many types of pathogens and result in no ill effects, on two conditions: (1) that the heating is truly thorough, sustained, and to a sufficient temperature, and (2) that there are no toxins or toxic byproducts associated with the particular organisms that affected the stew
– Terry’s own childhood and youth contained relatively frequent episodes of what his family called “stomach flu” even though the actual nature of the illness and cause were never really understood.
– Terry didn’t think that the gastrointestinal issues that he’d experienced were out of the ordinary for a typical family, largely because none were particularly severe, and perhaps more importantly because he didn’t really have any sense of how often other families experienced these illnesses.

Do you see how “It’s fine” is really not true in this case?

Knowledge is Power (When Applied!)

This is the kickoff of my new site, where I will write about understanding and controlling workplace (and other forms of) risk. My main premise will be this: many business owners and managers understand the need to take properly calculated risks when it comes to business development and financial issues, but are in the dark when it comes to non-speculative risk of loss. Developing an understanding and application of that type of risk is essential to sound business. I’ll help guide you through that process.