Weather Forecasting?

Is there any reason to believe forecasting is more or less accurate in the Southern California area than in the upper Midwest?

Forecast Advisor is a website that shows how accurate various weather forecast companies are for any given city. It shows that, over the past year, the best services are getting their forecasts correct for San Diego about 90% of the time, and about 90% for Los Angeles as well. Meanwhile, the best anyone’s doing for Chicago is about 80%, and about the same for Minneapolis.

My understanding is that, generally, San Diego is a fairly “easy” area in which to forecast the weather, as it doesn’t rain often, and doesn’t often experience major temperature swings. On the other hand, as I understand it, what San Diego (and much of southern California) does have is a number of “microclimates” – small-ish areas in which the weather at a given time can be quite different from other nearby areas; this is due to the terrain, as well as the proximity of the Pacific Ocean.

By contrast, in the Upper Midwest (like here in Chicago), we do regularly get quick swings in temperature, particularly in the spring and fall. Last week, for example, it was in the 60s here in Chicago on Sunday (March 1), and back down in the 30s the next day (we also, of course, have a much bigger variation in seasonal temperature, though that doesn’t really play into the accuracy of short-term forecasting). Plus, we do get a lot more precipitation than you do in southern California, and a fair amount of it comes in relatively short periods of time (e.g., heavy snowfall in the winter, severe thunderstorms with drenching rains in the spring and summer).

In general, forecasters have a pretty good handle on what’s going to happen here in the Midwest, but apparently not at the level that they do in southern California. The big challenge to forecast accuracy here is precipitation: in the summer, a lot of our precipitation comes from thunderstorms, which can be harder to predict, especially since they can pop up (and fade away) very quickly and be very small in coverage area – they may hit one area with heavy rain, while it may not rain much at all just a few miles away. While snowstorms don’t have the “small storm popping up suddenly” nature of summer thunderstorms, the area that a particular snowstorm will hit with heavy snow can often be a very narrow band, and a variation of just a few miles in the actual path of that storm can make the difference between getting 2" of snow and getting 8".

I remember a tale from way back when about a college weather class where students were assigned to write a weather prediction program that used a set of data to predict the next day’s weather. One important rule: you couldn’t just declare tomorrow’s forecast to be today’s weather. That would beat any other program a student could write in those days.

A place like San Diego is nearly ideal for “tomorrow’s forecast is today’s weather” – not much change for good stretches of time. Chicago isn’t like that, especially in the winter.
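To make that class assignment concrete: the “persistence” rule is just predicting that tomorrow equals today. A minimal Python sketch, with a made-up run of weather for illustration:

```python
# Persistence forecasting: predict that tomorrow's weather equals today's.
# The observation sequence below is invented for illustration.
observations = ["sunny", "sunny", "sunny", "rain", "rain", "sunny", "sunny"]

# The forecast for day i+1 is simply whatever was observed on day i.
forecasts = observations[:-1]
actuals = observations[1:]

hits = sum(f == a for f, a in zip(forecasts, actuals))
accuracy = hits / len(actuals)
print(f"Persistence accuracy: {accuracy:.0%}")  # → Persistence accuracy: 67%
```

Notice that every change in conditions is a guaranteed miss for this rule – which is exactly why persistence scores well in a stable climate like San Diego’s and poorly in Chicago’s.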

It’s all about variation. The trick is to accurately predict when the weather will change.

I think this is the most important part. A few miles or just 1-2 degrees temperature change can make the difference between a major blizzard and a mild rain and sleet event. While pop-up thunderstorms do happen in the summer, they’re more of a southern phenomenon (think Florida, Texas, the deep south), and even those tend to be somewhat predictable in that there’s always a 50% or so chance of storms between say 3:00-7:00 PM or whatever it is.

Midwest storms are more often associated with weather fronts. The amount of moisture and other thermodynamic profiles can have a big impact on the formation of storm cells. Much like the winter storm situation, a slight change in the amount of sun before a cold front moves through, or a variation in the track of a wave of energy can take a high risk for severe storms and move it a few counties east, leaving areas that were bracing for storms high and dry. That just doesn’t happen much in the more calm climate of southern California.

I’m not at all sure how they determine whether a forecasting site is accurate or not; they’re all accurate within their own parameters. For example, if the NWS says there’s an 80% chance of rain, that means that under similar conditions, it rained 80% of the time. That’s it. There’s no way to be wrong on that: if it doesn’t rain, there was a 20% chance that it wouldn’t rain, so it’s still accurate. Same thing with temperatures, wind, rain amount, etc. – they’re all gauged against historical data as run through sophisticated computer models on supercomputers for very small areas: “gridded forecasts”.

So they’re not necessarily 100% accurate, in that the data/model may say that there’s a 20% chance of rain, with a low of 41 and a high of 66 with 22 mph winds for the next day, and the actual weather will be light rain with a low of 39 and a high of 65 with 20 mph winds. Is that a miss?
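For what it’s worth, there is a standard way to score a probability forecast without calling it a flat hit or miss: the Brier score, the squared gap between the stated probability and what actually happened (0 is perfect, 1 is worst). Nobody in this thread named it, but it gives exactly the kind of partial credit the question is asking about. A quick sketch:

```python
def brier(prob_rain: float, rained: bool) -> float:
    """Squared error between the forecast probability and the outcome
    (outcome is 1.0 if it rained, 0.0 if it stayed dry)."""
    outcome = 1.0 if rained else 0.0
    return (prob_rain - outcome) ** 2

# A 20% rain forecast followed by light rain isn't scored as a total miss,
# but it does score much worse than the same forecast on a dry day.
print(f"{brier(0.20, rained=True):.2f}")   # → 0.64
print(f"{brier(0.20, rained=False):.2f}")  # → 0.04
```

Averaged over many days, a lower mean Brier score means a better-calibrated, sharper forecaster – which is one defensible way a site could rank services.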

The other thing is the time window of the forecast: past about 3 days, they may as well be using tea leaves, entrails, and chicken bones to forecast the weather, except in the very broadest of strokes, like “We believe this spring will be wetter and cooler than usual.” Conversely, if they say it’s going to rain at noon today (it’s 9:15 am here), they’re very likely to be accurate.

In fact, if you look at the links I shared above, they illustrate that. Near the bottom of those pages, the least-accurate forecast service is listed as “Persistence” – that’s not an actual forecast service, but just a measure of how well today’s weather conditions forecast the weather for the next three days.

In San Diego, “persistence” is 66% accurate in predicting future weather – that is, about 2/3 of the time, the weather on a particular day pretty accurately tells you what the weather will be like for the next three days. In Chicago, on the other hand, persistence is only 39% accurate.

Sure, it’s possible to be wrong even with percentages. That’s pretty simple statistical analysis. If there were 10 days with an 80% chance of rain, and it rained on eight of those days, then that’s 100% accurate. If it rained on nine days, then the forecast was only 90% accurate. For specific temperatures and wind (actual forecasts tend to be a bit more flexible, like “highs in the middle to upper 60s with 15-25 mph winds”), deviating by a few degrees or mph isn’t a “miss”; it’s simply not 100% anymore. Now, I don’t know what the percentage would be there – is one degree one percent, or two percent, or five percent? I guess it depends on how far off it could be versus what it actually is, which is kind of a sliding scale. Still, it’s not 100% versus 0%.

Right, but AFAIK, they don’t do their forecasts for more than a day at a time. In other words, if you look at the forecast for this week (today is Monday), and there are 5 days with 80% chance of rain, it’s 5 separate daily forecasts with 80% chance of rain forecast for each day.

And tomorrow and Wednesday the forecast for Thursday may change- let’s say it was 80% today, and drops to 5% by Wednesday. Does that mean that if it doesn’t rain on Thursday that the forecaster’s forecast on Monday was inaccurate? I wouldn’t say so- they got more information as time went on and refined the forecast.

As for why the measured accuracy percentages would be higher for S. California vs. the middle of the country: it’s probably because they’re right there with a big ocean on one side and mountains on the other, and they’re farther south of where most cold air swings eastward. So not a whole lot of changing conditions. Meanwhile, somewhere like Dallas (79% accurate) has a lot of stuff in play – air coming over the Rockies from the west, with all the altitude-related instability; Gulf air from the south carrying a lot of warmth and moisture; and whatever’s affecting the weather from the north and east. Basically a LOT more variables over a much larger area in play for, say, Dallas vs. San Diego, which means the models are going to be less exact in Dallas.

I live in the Bay Area, and the forecasting is pretty accurate. You can watch the systems come in from the Pacific and there aren’t any mountains or anything to block them.

Coming from the East, we laugh when a temperature increase of 2 degrees is called a warming trend. Things are mostly stable.

Now, forecasts are made on a much more local basis than the Midwest because of our micro-climates. Some places on the other side of a pass, only 10 miles away, have very different temperatures than we do.

You can’t really analyze overall forecast accuracy in that way. You’d have to do it per time period. So you can get the forecast accuracy for day-of, 24-hours, 2-days, 3-days, 5-days, and 10-days out separately. They can’t just be lumped together to come up with a single number. So the accuracy of the 5-day and 10-day forecasts will be lower than the day-of or 24-hour forecasts.
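Bucketing by lead time, as described, is simple to do once each forecast is stored with how far ahead it was issued. A sketch, with the records invented purely for illustration:

```python
from collections import defaultdict

# Each record: (days of lead time, what was forecast, what actually happened).
# These records are made up for illustration.
records = [
    (0, "rain", "rain"),
    (0, "dry",  "dry"),
    (1, "rain", "rain"),
    (1, "rain", "dry"),
    (5, "dry",  "rain"),
    (5, "rain", "rain"),
]

hits = defaultdict(int)
totals = defaultdict(int)
for lead, forecast, actual in records:
    totals[lead] += 1
    hits[lead] += forecast == actual  # True counts as 1

for lead in sorted(totals):
    print(f"{lead}-day lead: {hits[lead] / totals[lead]:.0%} accurate")
# → 0-day lead: 100% accurate, 1-day lead: 50%, 5-day lead: 50%
```

Collapsing those buckets into a single number would mostly measure how many long-range forecasts happened to be in the sample, not how good the forecaster is.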

There’s an old Lewis Black bit where he describes, among other things, how being a weatherman in San Diego is the easiest job in the world.

Some years ago I read in a book by Vance Randolph – “Ozark Superstitions” – about a Holy Roller congregation that would pray for that weatherman in Springfield who “lies so much about the weather”.

Where I live there is no large body of water nearby. And the jet stream is the major predictor of our weather. The upper Midwest is similar.

You are not as wildly affected by the jet stream and have a large body of water to somewhat temper wild fluctuations. I think you get a lot of wind coming from the West off that water mass.

Here the jet stream is generally heading from north to south. If it is to our west it draws northern cold air to us. If it is to the east it is drawing southern air to us. Southern air usually means Pacific Ocean air. Warmer.

When it gets really cold there, the jet stream is usually doing a very deep and westward curve, bringing far-north air down to you. But that is not usual that far south.

Your weather often comes from across the Pacific Ocean, which is a moderating heat sink, so your weather is more stable. My weather is even more quickly variable than the upper Midwest’s, as I am close to the Rocky Mountains and the jet stream wanders west and east of my location, often within an hour or two. No ocean heat sink to moderate things.

That’s what I’m saying too, although maybe not so well.

I guess where I’m coming from is that there needs to be a margin of error for stuff like wind speed, temperature, etc. – and for precipitation, they already bake that in by giving you a probability. So if there’s a 50% chance of rain, how do you determine if that forecast was accurate or not? They’re literally telling you it could go either way. If they’re telling you there’s a 75% chance of rain, then one out of four times that conditions are similar, it WON’T rain – so if it doesn’t rain, they didn’t fail there either. The only place I’d say they could fail is if they give absolutes – 0% or 100% probabilities – and the opposite happens.

And what I’m saying is that you need to take all (or some sampling) of the days where the forecast was a 50% chance of rain and see whether it rained on those days. Take 20 days with a 50% chance of rain: if it rained on 10 of those days and was dry the other 10, then there’s high forecast accuracy – arguably 100% accurate. If it rained on 15 of the days and was dry for only five, then the forecast accuracy is lower – only 75% accurate.
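That check – grouping days by the stated probability and comparing against how often it actually rained – is what forecasters call a reliability (or calibration) check. The 20-day example works out like this in a quick sketch:

```python
def calibration_gap(stated_prob: float, rain_days: int, total_days: int) -> float:
    """How far the observed rain frequency sits from the stated probability."""
    observed = rain_days / total_days
    return abs(observed - stated_prob)

# 20 days forecast at 50%: rain on 10 of them is perfectly calibrated;
# rain on 15 of them means the 50% forecasts ran 25 points low.
print(calibration_gap(0.50, 10, 20))  # → 0.0
print(calibration_gap(0.50, 15, 20))  # → 0.25
```

Doing this for each probability bucket (0%, 10%, 20%, …) gives you a full reliability picture for a forecaster, rather than judging any single day.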

I guess the point is that you can’t look at forecasts with probabilities and come up with a statement of accuracy based on only one day in isolation.

You need to think of this like election predictions. There they predict every county and give a percentage for each candidate to win. If you looked at just the toss-up counties (50% D), and let’s say there were 100 of them nationwide, and 50% ended up with the Democratic candidate winning and 50% with a Republican winning, then the prediction was right. If, on the other hand, 75% of the toss-up counties went R, then the forecast was wrong by 25% (we could call it 50% wrong too; I don’t want to get into that).

They take the same approach with weather. Of the days where they predicted a 0% chance of precipitation all year, what percentage of them had rain? To get a little bit local: here in Denver they measure rainfall at the airport. We average 83 days of precipitation per year, so let’s say the weatherman predicted a 0% chance of rain 285 days of the year and some amount greater than 0% the other 80 days. Now let’s say it was a perfect year and we got precipitation exactly 83 days. That means the 0% forecast was wrong 3/285, or about 1%, of the time. You can do a similar set for any prediction amount (it’s helpful here that weathermen round to the closest 10%), so for the 50% days you could add them up and would expect it to rain on only half of them.
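The Denver arithmetic checks out; here it is spelled out, using the same assumed numbers from the example above:

```python
# Assumed numbers from the example: 285 days forecast at a 0% chance of rain,
# 80 days at something above 0%, and 83 actual precipitation days.
zero_pct_days = 285
nonzero_days = 80
actual_rain_days = 83

# Even if every nonzero-chance day got rain, the leftover rain days
# must have fallen on days forecast at 0%.
missed = actual_rain_days - nonzero_days  # 3 rain days on "0%" forecasts
error_rate = missed / zero_pct_days
print(f"0% forecasts wrong {error_rate:.1%} of the time")  # → 1.1%
```

Note this is the best case for the forecaster – if any of the 80 nonzero-chance days stayed dry, even more rain days landed on the 0% forecasts.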

My time stationed in Kansas taught me weather forecasting there is simple. It’s either too hot and too windy, or too cold and too windy. Any daytime condition can be predicted by glancing north. “Yup, that storm looks to be about 75 minutes away.” If cows in the sky grow larger, the tornado is approaching.

A story I wrote, set in San Diego, was criticized for my inserting a hot, dry day. Hey, I felt it happen once! Likewise, Portland, OR is consistently damp-ish, but we spent a week there in Palm Springs-like sun and air. Our area in the central Sierra Nevada range is chock-full of microclimates. Weather Underground reports on nearby stations; I’ll see 20°F variations within a five-mile radius at similar elevations. Forecasts can be rather general.