Yet again, Snowmageddon failed to happen. No one is complaining, of course, but I think that what’s really interesting about the situation is the reason why most people whiffed:
In the run-up to this week’s blizzard, some serious differences emerged when it came to the New York City snowfall forecast.
On the one hand, there was the National Weather Service, armed with thousands of meteorologists, a newly upgraded forecasting supercomputer, a nationwide network of weather radars and balloons, and satellite technology. It even sent the Hurricane Hunters to fly through the storm to take additional data.
Early on, the NWS called for “historic” snowfall totals of 20 to 30 inches in New York City. It cautioned that if an intense snowfall band ended up camping out over the city (as several model forecasts suggested would happen), 3 feet wasn’t out of the question.
That obviously didn’t happen.
Only one major weather outlet got it right:
Throughout most of the day on Monday, the Weather Channel was forecasting 12 to 18 inches for New York City, while the National Weather Service insisted a record-breaker was possible. By nightfall the Weather Channel had scaled back its forecast even further, calling for 8 to 12 inches. And that’s exactly what fell. As late as 5 p.m. Monday, the National Weather Service was still talking about top-end scenarios of up to 3 feet in the Bronx.
So how did the Weather Channel manage to nail it when literally everyone else was way, way off?
These days, meteorologists rely heavily on computer weather models for everything from temperature forecasts to the tracks of hurricanes to snowstorms. Usually, they’re pretty good. But the problem is, they frequently disagree—and when that happens, you need to quickly assess what information to use and what to toss. Which is where the humans come in.
As best I can piece together, the Weather Channel’s method for forecasting storms like this is not to throw out any model information, no matter how off-base it may seem at the time. And for this storm, the spread among the models in where they placed the most intense snow band was exceptionally large for the New York City area. This is exactly the situation where probabilistic forecasts are useful. Instead of banking on one or two specific models like the NWS did (which turned out to be the wrong ones), the Weather Channel chose to blend the models and weight them more equally.
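The difference between the two approaches can be sketched in a few lines. This is a toy illustration with made-up snowfall numbers, not the actual 2015 model guidance, and the model names are placeholders:

```python
# Hypothetical snowfall forecasts (inches) for NYC from several models.
# Values are illustrative only -- not real model output from the storm.
forecasts = {
    "model_a": 30.0,  # the aggressive, headline-grabbing solution
    "model_b": 28.0,
    "model_c": 12.0,
    "model_d": 10.0,
    "model_e": 9.0,
}

values = list(forecasts.values())

# Approach 1: bank on the most dramatic model (the call that busted).
single_model = max(values)

# Approach 2: equal-weight blend -- every model gets a vote.
blend = sum(values) / len(values)

print(f"single model:       {single_model:.1f} in")
print(f"equal-weight blend: {blend:.1f} in")
```

With these made-up numbers the blend lands far below the loudest model, which is the whole point: when the models disagree this much, averaging them hedges against picking the one that happens to be wrong.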
Hm. So, by focusing on a single predictive computer model that predicted a sensational outcome, essentially all of the media got it wrong. Now where have I heard that before…?
Oh yeah, every time the subject of global warming comes up!
And this is yet another example of the deception that occurs with global warming. Remember, every bit of the so-called “evidence” of global warming or climate change or whatever other boutique politically correct phraseology is being used is based on predictive computer models. Those predictive computer models are programmed by people. That programming is based on certain assumptions. Those assumptions are heavily influenced by the programmer’s biases and beliefs about how the system should work.
The end result is a prediction that is way, way off.
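The compounding effect described above can be illustrated with a toy projection. Everything here is hypothetical: two models that agree on today’s data but differ slightly in one assumed parameter drift far apart over a long horizon:

```python
# Toy projection: identical starting value, two slightly different
# assumed annual growth rates. All numbers are hypothetical, purely to
# show how a small difference in a programmed assumption compounds.
start_value = 1.0

def project(annual_rate: float, years: int) -> float:
    """Compound the starting value forward at the assumed rate."""
    return start_value * (1 + annual_rate) ** years

low = project(0.001, 200)   # assumption A: 0.1% per year
high = project(0.005, 200)  # assumption B: 0.5% per year

print(f"after 200 years: {low:.2f} vs {high:.2f}")
```

Both runs start from the same number; only the assumed rate differs, yet after two centuries one projection is more than double the other. The longer the horizon, the more the output reflects the assumption rather than the data.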
Or, we could apply common sense: if an entire industry of professionals can get it this wrong about a storm arriving less than 24 hours later, how much stock should we put in that same industry predicting doom and gloom — thus requiring policies that would result in severe and disastrous economic and social consequences — hundreds of years in the future?
There’s my two cents.