


Irma and Harvey Are Putting Experimental New Forecasting Tech to the Test

NOAA is using supercomputers to test a hurricane forecasting algorithm that has been two decades in the making.
Hurricane Harvey. Image: NASA

Over the last two weeks, the United States has witnessed a one-two punch of catastrophic weather events: hurricanes Harvey and Irma.

Harvey was the first major hurricane to make landfall in the US since Wilma in 2005, and it has resulted in at least 65 deaths and $70 billion in damage. At the time of writing, Hurricane Irma is barreling across the Caribbean toward Florida. With sustained winds measured at over 175 miles per hour, Irma is one of the most powerful hurricanes ever recorded in the Atlantic, a sign of the intensifying impacts of climate change. It is still an open question whether and where the Category 5 storm will make landfall in Florida. The question is of existential importance, since storm forecasts determine whether communities will be evacuated and help rescuers prepare for the fallout, but it remains remarkably hard to answer.


Even though hurricane prediction science still has plenty of unknowns, a suite of improvements to the United States' hurricane forecasting infrastructure means the country may finally be on track to mitigate the worst consequences of these natural disasters. In other words, Harvey and Irma are high-stakes tests for multimillion-dollar technologies that have been years in the making.

When trying to forecast the future path of a hurricane, analysts at the National Hurricane Center in Miami draw on a number of different models, including the US Global Forecast System (GFS) and the model from the European Centre for Medium-Range Weather Forecasts (ECMWF), an intergovernmental agency supported by most nations in Europe. These models take environmental data collected by remote sensors in the ocean, hurricane-tracking aircraft, and satellites as input for algorithms that simulate weather patterns on a global scale. In the US, the data is routed through computers at the National Centers for Environmental Prediction in Maryland to produce the GFS model. The GFS output, along with the European model and others, is then passed to the National Hurricane Center, which uses the models to publish hurricane forecasts up to four times a day.

Historically speaking, the forecasts from Europe have led the pack in terms of accuracy. In the week leading up to Hurricane Sandy's landfall in 2012, for instance, only the European model managed to predict that Sandy would hit the East Coast. All of the other models, including the GFS, showed Sandy venturing back out into the Atlantic until just a few days before it made landfall.

"Statistically speaking, the European model is still the best," Shian-Jiann Lin, the lead physical scientist at the National Oceanic and Atmospheric Administration's (NOAA) Geophysical Fluid Dynamics Laboratory, told me. "But we are making very big improvements to catch up."

In the weeks after Sandy, it was clear that the US needed a drastic upgrade to its hurricane forecasting system to bring it on par with Europe's capabilities. The ECMWF's superior forecasts were largely the result of a more sophisticated algorithm run on two supercomputers that rank as the 25th and 26th most powerful in the world.



Given the number of input variables involved in building sophisticated weather models, weather agencies need access to serious computing power. So in 2013, the US Congress authorized $48 million in spending to improve weather forecasting and make sure a deadly, expensive miscalculation like the one that preceded Sandy didn't happen again. Of this, $25 million went toward a massive $44 million upgrade of NOAA's computing architecture that was completed in January of 2016.

This upgrade, which added the supercomputers Luna and Surge to NOAA's weather modeling arsenal in Virginia and Florida, respectively, represented a nearly tenfold increase in NOAA's computing power, to 5.78 petaflops, or 5,780 trillion operations per second.

But merely adding to NOAA's computing muscle wouldn't do much good without an improved algorithm for modeling the global environment. For this, NOAA has relied on the work of Lin and his colleagues at the Geophysical Fluid Dynamics Laboratory, who have spent nearly two decades developing a revolutionary new climate modeling algorithm known as the finite-volume cubed-sphere dynamical core, or FV3. This algorithm, which NOAA approved last year as the replacement for the dynamical core at the heart of the current Global Forecast System, allows for simulations of unprecedented resolution and far more localized forecasts, all while generating a global forecast four times a day.


The supercomputer powering the European model. Image: ECMWF

It does this by essentially wrapping a 3D grid around the Earth that partitions the atmosphere into smaller boxes, simulating the atmospheric conditions in each box, and then stitching those conditions back together into a comprehensive picture of global weather patterns. Moreover, these boxes can be nested inside one another, meaning that in future iterations of FV3, researchers will effectively be able to zoom in on local weather events and get accurate forecasts at that scale.
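
To make the box-by-box idea a bit more concrete, here is a minimal toy sketch of a finite-volume update, the numerical idea FV3's name refers to. It is not NOAA's code (the real FV3 is a far more sophisticated system running on a 3D cubed-sphere grid); the one-dimensional ring of boxes, the constant wind, and all parameter values below are illustrative assumptions only.

```python
# Toy sketch of the finite-volume idea behind a dynamical core like FV3.
# Illustrative only: a 1D ring of "boxes" carries a tracer (say, moisture),
# advanced in time by exchanging fluxes across box edges so that the total
# amount is conserved.

import numpy as np

def step(q, wind, dx, dt):
    """Advance the cell-averaged tracer q by one time step (upwind fluxes)."""
    # Flux through the left edge of each box, taken from the upstream side.
    upstream = np.roll(q, 1) if wind > 0 else q
    flux = wind * upstream
    # Finite-volume update: each box changes by (flux in) minus (flux out).
    return q - (dt / dx) * (np.roll(flux, -1) - flux)

# Assumed toy parameters: 100 boxes, unit spacing, constant wind.
nx, dx, dt, wind = 100, 1.0, 0.5, 1.0
q = np.exp(-0.01 * (np.arange(nx) - 50.0) ** 2)   # initial blob of tracer

total_before = q.sum()
for _ in range(40):                                # run 40 time steps
    q = step(q, wind, dx, dt)

print(f"tracer total before: {total_before:.6f}, after: {q.sum():.6f}")
```

The detail that matters is conservation: a box only gains what its neighbor loses, which is one reason finite-volume schemes are attractive both for short-range weather forecasting and for climate runs spanning decades.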

"FV3 is not a model by itself, but an engine to power the prediction model," Lin said, contrasting the engine with regional hurricane-specific models used by NOAA, such as HWRF and its experimental replacement, HMON. "FV3 describes the motion of the atmosphere, however you have to put data in to do predictions." As detailed in a Science profile of Lin published earlier this year, the strength of the simulation algorithm comes from its integration of climate science and meteorology. While this might seem like a natural union, the article points out that "for a long time, meteorologists and climate scientists operated in separate domains."

Meteorologists were all about producing fast forecasts, whereas climate scientists obsessed over the physics of weather models on a global scale and over timespans measured in decades. In this sense, the new FV3-powered NOAA model combines the best of both worlds to produce hyper-detailed seasonal forecasts that will hopefully allow meteorologists to accurately predict the path of hurricanes 5 to 10 days in advance.

"Long-range prediction is really the most useful for hurricanes," Lin told me. "If you look at Harvey for example, decisions had to be made about evacuations, and you can't do that 1 or 2 days in advance. Once the public has confidence in the long-range predictions, officials can make wise decisions about evacuating before a hurricane. But I don't think we've established that confidence yet."

In May, NOAA announced that it would deploy FV3 as an experiment running alongside its operational forecasting model, the GFS, during the 2017 hurricane season. How the new model fares this year will determine the timeline for its operational deployment, which NOAA currently expects in 2019. In this sense, Harvey and Irma are truly putting the capabilities of NOAA's new forecasting architecture to the test.

Lin and his colleagues are closely monitoring Irma to see how well FV3 predicts the hurricane's path. As for its performance during Harvey, Lin said the model forecast the hurricane's path about as accurately as the European model. But since the model is still considered experimental, FV3's forecasts weren't provided to government officials in Houston before the storm hit land. Its accuracy could only be determined after the fact.


The big improvements, however, were relative to NOAA's current Global Forecast System, as well as its regional hurricane tracking model, HWRF. "FV3 outperformed all operational models," Lin said. "It's quite a bit better than GFS and HWRF, but Harvey was just one case. The key word here is 'experimental,' which means that not every component of FV3 is ready. We're still building up the system."

As for Irma, Lin said FV3 is proving remarkably accurate at forecasting the hurricane's path. "I think we've done very well for Irma up to this point," Lin said. "The storm isn't over yet, so we have to wait another week to have a complete evaluation. From what we've seen thus far it's doing very well."

Assuming FV3 keeps up its performance this hurricane season, Lin said the next big task will be switching the system from experimental mode to operational use by NOAA and the National Weather Service, where it will provide hurricane and other forecasts to the public. However, Lin said that to really tap the full potential of the forecasting system, he is going to need even more computing power.


"The difficulty we are facing is limited computing power," Lin said. "Even though the increase two years ago [with the supercomputers] was significant, I don't think it's sufficient. To do the really groundbreaking research that we are capable of, I need a system that is ten times larger."

Given Congress's aversion to funding such projects except in the aftermath of terrible storms (a pattern referred to around NOAA as "budgeting by disaster"), Lin may have a hard time convincing legislators to cough up several million dollars for more computers.

But as the experimental successes of the new NOAA forecasting system go to show, American lives may depend on it.