A mandate prescribed by the Biggert-Waters Flood Insurance Reform Act poses a major challenge for a federal weather agency, a NOAA executive said at an insurance conference last week.
Edward Johnson, director of strategic planning and policy for the National Weather Service division of the National Oceanic and Atmospheric Administration, outlined the requirement that charges NOAA with sorting out wind from water damage for slab claims, and highlighted the agency’s resource limitations.
Speaking at the Extreme Weather Insurance Risk Management Conference in New York City, Johnson referred to the Consumer Option for an Alternative System to Allocate Losses (COASTAL) Act, which was signed into law on July 6, 2012, as part of the larger Biggert-Waters flood insurance reform law.
“It calls on NOAA to do something that we’re busy trying to figure out how we’re going to do—and the limits on our ability to do it,” Johnson said, noting that the Act says that for certain categories of storms, NOAA will have to produce a Named Storm Event Model and an associated Coastal Wind and Water Event Database.
“The intent there is to produce data that will differentiate between wind damage and storm [water] damage for coastal storms,” he said.
Specifically, the Act requires NOAA to produce detailed post-storm assessments following certain landfalling systems, he said, noting the Act’s goal of lowering costs to FEMA’s National Flood Insurance Program by improving the determination of wind versus storm surge damage.
NOAA is supposed to “help clarify the wind/surge timeline and help determine when the greatest damaging forces occurred at the site of the indeterminate loss,” he said. In other words, the aim “seems to be to identify at a parcel level—individual homes and commercial properties—whether that blank slab over there was blown away by the wind first or washed away by the floods first,” Johnson said.
“That level of spatial precision and temporal precision is going to push the limits of what we can do,” he added.
For one thing, Johnson said that the modeling focus for storm surge at the Weather Service is on predictive models. “There’s a really high premium on models that run quickly, that can produce storm surge estimates that adapt very quickly to changes in the storm’s position and intensity.”
“There are different models that are used that are much more precise, but also much more computationally intensive. They are probably more appropriate for post-storm analysis,” he said.
Johnson noted that the Act sets a standard of 90 percent accuracy. “We’re busy trying to figure out how close we can get,” he said. “Questions about how those words are going to be interpreted are being worked right now.”
Key challenges for NOAA relate to the “insufficient density” of the surface-observation and water-level-observation networks, he said. “It’s a challenge to find hardened observing systems that will stand up to these events,” he said. “Your observing system doesn’t do you a lot of good if the storm…washes it away.”
“There are technologies that other organizations have used to install equipment ahead of the storm. That may be one of the solutions,” he said.
Bringing data from the two networks together is another challenge. “Remember, the ask [is to] differentiate between wind damage and water damage. So they [the observation systems’ data] both have to be pulled together with comparable accuracy,” Johnson remarked.
He said NOAA also faces resource issues related to manpower and computer time.
“The Act has lots of nice things that NOAA is going to do, but none of this to do it with,” he said, rubbing his thumb and forefinger together in a gesture indicating a lack of funding for the agency, which now has 5,000 employees.
Specifically, he highlighted a lack of resources to conduct post-storm assessments within 90 days at a high level of impact detail.
“Some of the post-analysis models are computationally intensive. That’s another resource,” he said.