What does it mean when "X percent chance of rain" is predicted?

From the (U.S.) National Weather Service’s Weather Service Operations Manual (Chapter C-11, section 8.3.1, if you care):

A POP (probability of precipitation) forecast is the likelihood of occurrence of a precipitation event at any given point in the forecast area.

A precipitation event is further defined as a measurable amount of liquid precipitation or water equivalent.

So, a “30 percent chance of rain” means that there’s a 30% chance of measurable rain (at least one one-hundredth of an inch) at any given point in the forecast area for the length of time of the forecast (usually twelve hours).

It has nothing to do with pattern recognition or past history, as alluded to in the article on the web site. Naturally, such pattern recognition or past history may play some part in what number a forecaster sticks in there, but the link stops there.

Doesn’t that raise the question of how the percentage chance of a precipitation event is determined?



The PoP numbers are determined by the forecaster at the time of the forecast. The body of the forecast sorts these PoP numbers into categories:

below 30%: slight chance or less (PoPs below 30% aren’t mentioned in most offices, though some allow forecasters to go as low as 20)
30-50%: chance
60-70%: likely
80-100%: categorical (you’ll see phrases like “periods of…” and “occasional…” in this category)
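
For what it’s worth, the bucketing is simple enough to express as a lookup. A minimal Python sketch (the function name and exact return strings are mine; wording varies by office):

def pop_wording(pop_pct: int) -> str:
    """Map a PoP percentage to the wording used in the body of the
    forecast. Buckets follow the table above; exact phrasing varies
    by office, so treat this as a sketch."""
    if pop_pct < 30:
        return "slight chance or less (often not mentioned at all)"
    elif pop_pct <= 50:
        return "chance"
    elif pop_pct <= 70:
        return "likely"
    else:
        return "categorical (phrases like 'periods of...' or 'occasional...')"

for p in (20, 40, 60, 90):
    print(p, "->", pop_wording(p))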

Each office, and each forecaster within an office, is part of a verification program that encompasses elements including these PoP forecasts, which tend to be verified at major observation locations (usually medium to large airports).

Right, UncleBeer, but leftshifter says that historical pattern matching is not what’s going on, and then offers what amounts to a tautology: that a 30% chance of rain means a 30% chance of rain.

I want to know how he and the other meteorologists determine that it is 30%, and not 20% or 70% or some other number.

If he says it isn’t historical pattern matching, then what is it?

If it’s feeding observations like cloud cover, humidity, barometric pressure, and other parameters into a formula to get a number back, then it is historical pattern matching, at least if the same parameters yield the same result each time. If they don’t, the predictions have little meaning anyway.
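
To make that concrete, here is a toy Python sketch of the kind of observations-in, number-out formula being described. The weights and the formula itself are completely invented for illustration; the point is only that such a function is deterministic, so the same parameters always yield the same result:

def pop_from_observations(cloud_cover: float, humidity: float,
                          pressure_hpa: float) -> float:
    """Hypothetical formula: the weights are invented and not taken
    from any real forecasting system. Inputs: cloud cover and
    relative humidity as fractions in [0, 1], pressure in hPa."""
    score = (0.5 * cloud_cover
             + 0.4 * humidity
             + 0.1 * max(0.0, 1013.0 - pressure_hpa) / 20.0)
    return min(100.0, max(0.0, 100.0 * score))

# Determinism: identical observations always give the identical PoP.
assert pop_from_observations(0.8, 0.7, 1005.0) == pop_from_observations(0.8, 0.7, 1005.0)
print(pop_from_observations(0.8, 0.7, 1005.0))  # 72.0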


"Come on, Phonics Monkey–drum!

I was just reading that column, Pld, and then came here and found this topic. I just happened to have the link handy, so I posted it. Always trying to stay on the good side of the moderators.

It does sound to me like leftshifter is full of precipitation, though. Like you, I want to know what the forecasters use to predict probability if current measurements aren’t correlated with historical data. And what purpose does collecting the data serve if you’re not going to use it?



Feel free to correct my spelling and punctuation errors in the above post.

Damn.

The PoP forecast is usually done at the tail end of the forecast process itself, and is typically used more to convey a sense of likelihood than a specific number. As you’ve implied, there really isn’t a whole lot of difference between a 30% and 40% chance of rain, but there is a fairly significant difference between 30% and 80%. The PoP came about mainly because of questions, especially from emergency managers and other government agencies, about what “a chance of rain” really meant. As a result, those chances were quantified (in the table I posted above).

Once again, climatology does creep into the overall forecast process, but at no time do forecasters flip through old books and say, “this happened seven times out of the last ten times, so I’ve got to give it a 70% PoP”. The 70% PoP represents a belief that there is a 70% chance that any point within the forecast area will receive measurable precip during that forecast period.

Ok, you’re saying it’s not strictly a pattern-matching function. I’ll buy that. Although to be honest, I’d think a well-done pattern match couldn’t be any worse (or vaguer) than the general quality of the forecasts I seem to see.

But I think the follow-up question that Phil and UncleBeer are asking is “A belief (that there is an x% PoP) based on what?”

If you’re not comparing current conditions to historical data, or using computer programs that perform that function, then what is the probability based on?

How do you get from ‘the current conditions are these’ to ‘and so I think there is an x% PoP’?


The problem with strict pattern matching is that there are so many discrete variables that the exact pattern never recurs. Patterns are often very similar, but never exact enough to rely strictly on history.

To get back to the main question, though, which as I read it is, “How do you come up with X percent instead of some other number?”
The answer is that forecasters attempt to give a probability of any one point receiving a measurable precipitation event. Many forecasters break this down into two elements: a likelihood of precipitation, and the coverage of that precipitation. For example, you could be completely convinced that thunderstorms will occur tomorrow afternoon, but due to the nature of the system, they’ll be fairly scattered, and only affect 30% or so of the area. In this case, the PoP would be 30%, because you’d only expect 30% of the forecast area to receive measurable precipitation.

On the other hand, perhaps you’re looking at a deepening winter storm that would most assuredly affect the entire forecast area (100% coverage), but there’s only a 30% chance that the storm will be mature enough to produce precipitation before it’s east of you. Once again, your PoP would be 30%, because overall, the chances are 30% that any given point in your forecast area will receive measurable precip.
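
Both examples reduce to the same arithmetic: PoP equals the confidence that precipitation will occur somewhere in the area, times the fraction of the area expected to be affected. A minimal Python sketch of the two cases above (the function name is mine):

def pop(confidence: float, coverage: float) -> int:
    """PoP = (confidence that measurable precip will occur) times
    (fraction of the forecast area expected to be affected).
    Both inputs are fractions in [0, 1]; result is a percentage."""
    return round(100 * confidence * coverage)

# Scattered thunderstorms: certain they'll form, ~30% areal coverage.
print(pop(1.0, 0.30))  # 30

# Winter storm: full coverage if it arrives, but only a 30% chance
# it matures in time to produce precipitation here.
print(pop(0.30, 1.0))  # 30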

Model Output Statistics (MOS) PoP values are generated from multiple computer models every forecast cycle by the supercomputers in Camp Springs, MD. Those numbers are an attempt to replicate this process. Climatology plays a small part in the algorithms to produce these numbers - a much larger part is based on interpreting strict dynamic physics models, and mapping the results into probabilities.
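
For a rough feel of that last step, here is a toy MOS-style equation in Python. Real MOS equations are regressions developed separately per station and per season against years of observations; the predictors and coefficients below are invented purely for illustration:

# Toy MOS-style regression equation. All predictor names and
# coefficients are invented for illustration only.
COEFFS = {
    "model_mean_rh": 0.6,      # forecast mean relative humidity (%)
    "model_precip_mm": 4.0,    # forecast precipitation amount (mm)
    "model_uplift": 10.0,      # forecast upward-motion proxy
}
INTERCEPT = 5.0

def mos_pop(predictors: dict) -> float:
    """Linear combination of model-forecast predictors, clipped to a
    valid probability range (a raw regression can stray outside 0-100)."""
    raw = INTERCEPT + sum(COEFFS[name] * value
                          for name, value in predictors.items())
    return min(100.0, max(0.0, raw))

print(mos_pop({"model_mean_rh": 70.0,
               "model_precip_mm": 2.0,
               "model_uplift": 0.5}))  # 60.0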

Pardon me for intruding, but I think that pattern matching discussion is moot. pldennision said:

“If it’s feeding observations like cloud cover, humidity, barometric pressure, and other parameters into a formula to get a number back, then it is historical pattern matching…”

In that general sense, just about everything is pattern matching. Whenever we make any kind of prediction, we do it based on previous observations, i.e. we compare the current situation to previous situations to find out what is likely to follow. That’s what we always do, with or without formulas, computers, or whatever. If you call that pattern matching, I’m sure leftshifter will agree they perform some kind of pattern matching, because whatever climatological models they use, they will be based on past experience.

On the other hand, leftshifter said:

“The problem with strict pattern matching is that there are so many discrete variables that the exact pattern never recurs. Patterns are often very similar, but never exact enough to rely strictly on history.”

Well, the point of a good pattern matching algorithm is to take care of inexactness and to find similarities when equalities don’t exist. I’m sure you do that in some way or other.
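
That is essentially what an analog (nearest-neighbor) matcher does: instead of demanding equality, it scores how similar current conditions are to past cases. A toy Python sketch, with invented historical cases:

import math

# Invented past cases: (cloud_cover, humidity, pressure_hpa, rained?).
# A real analog scheme would normalize each variable so that, e.g.,
# pressure in hPa doesn't dominate the distance.
HISTORY = [
    (0.9, 0.85, 1004.0, True),
    (0.2, 0.40, 1022.0, False),
    (0.7, 0.80, 1008.0, True),
    (0.5, 0.55, 1015.0, False),
    (0.8, 0.90, 1002.0, True),
]

def analog_pop(today, k=3):
    """Estimate PoP as the fraction of the k most similar past
    cases in which it rained -- no exact match required."""
    def distance(case):
        return math.dist(today, case[:3])
    nearest = sorted(HISTORY, key=distance)[:k]
    return round(100 * sum(case[3] for case in nearest) / k)

print(analog_pop((0.75, 0.80, 1006.0)))  # 100 with these toy cases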

Well, when does pattern matching end and dynamic modeling begin? If you use the very broadest definition possible, then most of empirical science is merely pattern matching.