Scientific integrity and U.S. “Billion Dollar Disasters”

Roger Pielke Jr
npj Natural Hazards, volume 1, Article number: 12 (2024)

Abstract

For more than two decades, the U.S. National Oceanic and Atmospheric Administration (NOAA) has published a count of weather-related disasters in the United States that it estimates have exceeded one billion dollars (inflation adjusted) in each calendar year starting in 1980. The dataset is widely cited and applied in research and assessment, and is invoked to justify policy in federal agencies, in Congress, and by the U.S. President. This paper evaluates the dataset against criteria of procedure and substance defined under NOAA’s Information Quality and Scientific Integrity policies. The evaluation finds that the “billion dollar disaster” dataset falls short of meeting these criteria. Thus, public claims promoted by NOAA associated with the dataset and its significance are flawed and at times misleading. Specifically, NOAA incorrectly claims that, for some types of extreme weather, the dataset demonstrates detection and attribution of changes on climate timescales. Similarly flawed are NOAA’s claims that increasing annual counts of billion dollar disasters are in part a consequence of human-caused climate change. NOAA’s claims to have achieved detection and attribution are not supported by any scientific analysis that it has performed. Given the importance and influence of the dataset in science and policy, NOAA should act quickly to address this scientific integrity shortfall.

Introduction

In the late 1990s, the U.S.
National Oceanic and Atmospheric Administration (NOAA) began publishing a tally of weather and climate disasters that each resulted in more than $1 billion in damage, noting that the time series had become “one of our more popular web pages”1. Originally, the data was reported in current-year U.S. dollars. In 2011, following criticism that the dataset was misleading, NOAA modified its methods to adjust historical losses to constant-year dollars by accounting for inflation (https://www.washingtonpost.com/blogs/capital-weather-gang/post/2011-billion-dollar-weather-disaster-record-legit-or-bad-economics/2012/01/12/gIQADocztP_blog.html). By 2023, the billion dollar disaster time series had become a fixture in NOAA’s public outreach: it was highlighted by the U.S. government’s Global Change Research Program (USGCRP) as a “climate change indicator” (https://storymaps.arcgis.com/collections/ad628a4d3e7e4460b089d9fe96b2475d?item=1) and was cited as evidence in support of a “key message” of the Fifth U.S. National Climate Assessment that “extreme events are becoming more frequent and severe” (https://nca2023.globalchange.gov/chapter/2/). The time series is often cited in policy settings as evidence that human-caused climate change is increasing the frequency and intensity of extreme weather events and associated economic damage, including in federal agencies, in Congress, and by the U.S. President (https://www.congress.gov/bill/118th-congress/house-bill/598/text; https://www.whitehouse.gov/briefing-room/statements-releases/2023/11/14/fact-sheet-biden-harris-administration-releases-fifth-national-climate-assessment-and-announces-more-than-6-billion-to-strengthen-climate-resilience-across-the-country).
In addition to being widely cited in justifications of policy, as of March 2024 NOAA’s billion dollar dataset has been cited in almost 1000 articles according to Google Scholar (https://scholar.google.com/scholar?hl=en&as_sdt=0%2C6&q=%22billion+dollar+disasters%22&btnG=).

This paper evaluates the billion dollar disaster time series by applying criteria of NOAA’s Information Quality and Scientific Integrity policies. The evaluation finds that the billion dollar disaster time series fails to meet NOAA’s criteria for “information quality,” specifically NOAA’s criteria of traceability, transparency, presentation, and substance. Thus, the billion dollar disaster dataset is not simply an insufficient basis for claims of the detection and attribution of changes in climate variables (or of the consequences of such changes); the dataset is inappropriate for use in such research altogether. Throughout, I use the terms “detection” and “attribution” as defined by the Intergovernmental Panel on Climate Change (IPCC)2. Climate data, not economic loss data, should be the basis for claims of detection and attribution of changes in climate variables. Because of the shortfalls in scientific integrity documented in this evaluation, policy makers and the public have been misinformed about extreme events and disasters in the United States.

Results

Evaluation of policy or program performance is among the most common and influential practices in applied policy research. Policy evaluation tells us whether actions by government programs and agencies are meeting their stated goals and provides insight into the reasons for successes and failures. As such, evaluation offers important input that empowers policy makers to correct course and supports efforts by the public to hold governments democratically accountable.
A systematic evaluation includes four distinct intellectual tasks3,4: (a) identification of goals to be achieved, (b) metrics which can be used to assess progress (or lack thereof) with respect to goals, (c) data or evidence related to such metrics, and finally, if possible, (d) judgments of responsibility for observed outcomes.

NOAA’s billion dollar disaster time series is considered a “fundamental research communication” under the Public Communications order of NOAA’s parent agency, the Department of Commerce (https://www.osec.doc.gov/opog/dmp/daos/dao219_1.html). NOAA defines a “fundamental research communication” to be “official work regarding the products of basic or applied research in science and engineering, the results of which ordinarily are published and shared broadly within the scientific community” (https://www.noaa.gov/sites/default/files/legacy/document/2021/Feb/202-735-D.pdf). NOAA further identifies an important subset of “fundamental research communications” to be “influential information,” which “means information the agency reasonably can determine will have or does have a clear and substantial impact on important public policies or private sector decisions” (https://www.noaa.gov/organization/information-technology/policy-oversight/information-quality/information-quality-guidelines). The billion dollar disaster dataset is also what the Office of Management and Budget defines as “Influential Scientific Information” (https://www.govinfo.gov/content/pkg/FR-2005-01-14/pdf/05-769.pdf).

NOAA’s Information Quality and Scientific Integrity policies set forth the criteria to be used for evaluating “fundamental research communications,” including the subset of “influential information.” Specifically, NOAA’s Information Quality Guidelines identify three criteria of information quality: utility, objectivity, and integrity (https://www.noaa.gov/organization/information-technology/policy-oversight/information-quality/information-quality-guidelines).
Utility refers to “the usefulness of research to its intended users, including the public,” with an emphasis on “transparency.” NOAA’s Scientific Integrity Policy provides further guidance: “Transparency, traceability, and integrity at all levels are required” in order for the agency “to achieve” its mission (https://www.noaa.gov/sites/default/files/legacy/document/2021/Feb/202-735-D.pdf).

Traceability: “The ability to verify sources, data, information, methodology, results, assessments, research, analysis, conclusions or other evidence to establish the integrity of findings.”

Transparency: “Characterized by visibility or accessibility of information.”

Objectivity refers to presentation and substance:

Presentation: “includes whether disseminated information is presented in an accurate, clear, complete, and unbiased manner and in a proper context.”

Substance: “involves a focus on ensuring accurate, reliable, and unbiased information. In a scientific, financial, or statistical context, the original and supporting data shall be generated, and the analytic results shall be developed, using sound statistical and research methods.”

Integrity refers to “security ‑ the protection of information from unauthorized access or revision, to ensure that the information is not compromised through corruption or falsification.” Integrity will not be further considered as part of this evaluation.

NOAA’s Scientific Integrity Policy also states that it will “ensure that data and research used to support policy decisions undergo independent peer review by qualified experts” (https://sciencecouncil.noaa.gov/scientific-integrity-commons/sic-integrity-policy/). OMB requires that agencies develop “a transparent process for public disclosure of peer review planning, including a Web-accessible description of the peer review plan that the agency has developed for each of its forthcoming influential scientific disseminations” (https://www.govinfo.gov/content/pkg/FR-2005-01-14/pdf/05-769.pdf).
There is no such plan in place for the NOAA “billion dollar” dataset, and the methods, which have evolved over time, and results have not been subject to any public or transparent form of peer review. The evaluation conducted here thus focuses on traceability and transparency (as elements of utility) and presentation and substance (as elements of objectivity).

Traceability and transparency

The NOAA billion dollar disaster dataset lacks transparency in many respects, including its sources, input data, and the methodologies employed to produce results. This opacity extends to elements of event loss estimation, additions of events to and subtractions of events from the database, and adjustments made to historical loss estimates. There have been an unknown number of versions of the dataset, which have not been documented or made publicly available. Changes are made to the dataset more frequently than annually, suggesting that there have been many dozens of versions of the dataset over the past decades. Replication of the dataset, or of changes made to it, is thus not possible by any independent researcher; nor is verification or evaluation of the dataset itself. Seven examples illustrate the lack of transparency and traceability.

First, NOAA states that it utilizes more than “a dozen sources” to “help capture the total, direct costs (both insured and uninsured) of the weather and climate events” (https://www.ncei.noaa.gov/access/billions/faq). However, NOAA does not specifically identify these sources in relation to specific events, how its estimates are derived from these sources, or the estimates themselves. Almost all data sources that NOAA cites as the basis for its loss estimates are public agencies that release data to the public. Insured losses for specific events are aggregated and typically made available to the public, for instance by the Florida Office of Insurance Regulation (https://www.floir.com/home).
Aggregated data provides no information on specific businesses or individuals. NOAA also states that it includes in its loss estimates various indirect losses, such as business interruption, wildfire suppression, and others. NOAA does not provide the data or methods for its estimation of such indirect losses. Smith and Matthews5 (who have also created and maintained the dataset as NOAA employees) identify livestock feeding costs, as a function of national feedstock trends, as a variable used in compiling the dataset. Livestock feeding costs are not considered a disaster cost in conventional disaster accounting methods (such as NOAA Storm Data or SHELDUS), as they are not direct losses due to a local or regional extreme event, but rather an estimate of national market changes in commodity prices, which are influenced by many more factors than an extreme event. It is unclear what other measures of indirect costs are included in the NOAA tabulation.

Second, consider the case of Hurricane Idalia, which made landfall in the Big Bend region of Florida in late August 2023. Initial catastrophe model estimates suggested insured losses of $2.5 to $5 billion (https://www.insurancejournal.com/news/national/2023/09/05/738970.htm). The initial NOAA estimate reported on its billion dollar disaster website in the immediate aftermath of the storm was $2.5 billion. However, actual insured losses have been far less than was estimated in the storm’s aftermath, officially totaling about $310 million through mid-November 2023 (https://www.floir.com/home/idalia). The historical practice of NOAA’s National Hurricane Center (NHC) for estimating total direct hurricane damage was to double insured losses to arrive at an estimate of total direct losses6. Even accounting for some additional insurance claims yet to be made, it is unlikely that Idalia would reach $1 billion in total direct losses under the NHC methodology.
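The NHC-style doubling heuristic described above is simple enough to sketch directly. The loss figures below are those quoted in the text; the function and variable names are mine, not NOAA's or NHC's:

```python
def estimate_total_direct_loss(insured_loss: float, multiplier: float = 2.0) -> float:
    """Approximate total direct losses from insured losses, using the
    historical NHC rule of thumb of doubling insured losses."""
    return insured_loss * multiplier

# Idalia insured losses officially reported through mid-November 2023.
idalia_insured = 310e6  # ~$310 million

nhc_estimate = estimate_total_direct_loss(idalia_insured)
print(f"NHC-style total-loss estimate: ${nhc_estimate / 1e9:.2f} billion")
print(f"Below the $1 billion threshold: {nhc_estimate < 1e9}")
```

Even with a generous allowance for late-arriving claims, the doubled figure sits well under the $1 billion inclusion threshold, which is the point of the comparison in the text.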
Yet by December 2023, NOAA had increased its loss estimate for Idalia to $3.6 billion. What is the basis for NOAA’s estimate of Idalia’s total losses being ~12 times insured losses? That is unknown.

Third, similarly unknown is why historical events are periodically added to and removed from the dataset. For instance, from a version of the dataset available in December 2022 to an update published in July 2023, 10 new events were added and 3 were deleted (Fig. 1). A later comparison with yet another version of the dataset indicates that 4 additional historical events were added (not shown in Fig. 1). There is no documentation or justification for such changes; I am aware of them only through the happenstance of downloading the currently available dataset at different times.

Fig. 1: Undocumented changes to disaster counts made by NOAA between two different versions of the billion dollar disaster dataset, one downloaded in 2022 and another in 2023.

Fourth, a comparison of event loss estimates from the 2022 dataset and the 2023 version shows that each individual event has been adjusted by a different amount. According to NOAA, the only acknowledged annual adjustment is for inflation based on the Consumer Price Index (CPI). From 2022 to 2023, most of the adjustments made to individual events are between 4.5% and 6%, but nine events are adjusted by 6.6% to 145%, and one is reduced by about 75%. An annual adjustment for CPI should be constant across all events. No documentation is provided to explain these various adjustments and why they are unique to each event.

Fifth, NOAA states that it performs “key transformations” of loss data estimates by “scaling up insured loss data to account for uninsured and underinsured losses, which differs by peril, geography, and asset class.” NOAA makes no details available on the methodology or basis for such transformations, nor their impact on loss estimates, nor how these transformations may change over time.
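The consistency checks described in the third and fourth examples can be automated once two snapshots of the dataset are in hand. The sketch below assumes each snapshot has been loaded as a mapping from event name to loss estimate; the CSV column names in the loader are assumptions, as NOAA's actual export format may differ:

```python
import csv


def load_events(path: str) -> dict[str, float]:
    """Load one snapshot of the dataset into {event name: loss estimate}.
    The column names 'Name' and 'Total CPI-Adjusted Cost' are assumptions
    about NOAA's CSV export and may need to be adapted."""
    with open(path, newline="") as f:
        return {row["Name"]: float(row["Total CPI-Adjusted Cost"])
                for row in csv.DictReader(f)}


def compare_snapshots(old: dict[str, float], new: dict[str, float],
                      cpi_factor: float, tol: float = 0.01):
    """Report events added/removed between two snapshots, and flag events
    whose loss adjustment deviates from the uniform CPI factor by more
    than tol (a uniform CPI adjustment should scale every event equally)."""
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    anomalies = {name: new[name] / old[name]
                 for name in set(old) & set(new)
                 if abs(new[name] / old[name] - cpi_factor) > tol}
    return added, removed, anomalies


# Synthetic demo; real inputs would be two downloads of NOAA's CSV.
old = {"Hurricane X": 1.0e9, "Drought Y": 2.0e9}
new = {"Hurricane X": 1.05e9, "Drought Y": 2.9e9, "Flood Z": 1.2e9}
added, removed, anomalies = compare_snapshots(old, new, cpi_factor=1.05)
print(added, removed, anomalies)  # Drought Y's 45% jump is flagged
```

A researcher with access to dated snapshots could use this kind of diff to document exactly which events were added or removed and which adjustments exceed the stated CPI-only methodology, which is the comparison reported in Fig. 1 and the fourth example.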
Similarly, Smith and Matthews5 reference an overall bias correction that has been applied to the dataset, as well as an additional correction for crop insurance losses. Smith and Katz6 reference other adjustments, such as an adjustment for U.S. flood insurance participation rates, but neither the methodologies nor the results of these various adjustments are documented, nor is the baseline data to which the adjustments are applied. Table 3 of Smith and Katz7 suggests an open-ended formulaic approach to loss estimation, but none of the data that would be used in such formulas is available, nor is it clear that NOAA currently applies the formula to loss estimation. If it does, it should be straightforward to provide sources, data, and methods for each iteration of the dataset.

Sixth, the number of smaller disasters, ranging from $1 to $2 billion, was fairly constant from 1980 to 2007 and then sharply increased starting in 2008 (Fig. 2). NOAA states that “we introduce events into the time series as they “inflate” their way above $1B in costs in today’s dollars. Every year, this leads to the introduction of several new events added from earlier in the time series” (https://sciencecouncil.noaa.gov/scientific-integrity-commons/sic-integrity-policy/). However, the December 2023 dataset shows a net change of zero events in the $1–2 billion range for the period 1980–2000 and a net increase of 2 such events from 2001–2023. NOAA’s statement that it elevates disasters from
