8 out of 25 in the last 15 years. And 3 of them in 2005. But if you look at the whole distribution there is no obvious pattern. The last 2 big ones are 2017 and 2010 - 7 year spread. But in the 7 years from 1992 to 1999 there were 4 big ones. Same in the 1985 to 1992 7 year spread - 4 big ones. So having 8 in 15 years is not something extraordinary. And I don’t know where you get “almost 1 per year in the last 3-4 years” with only 2 big ones from 2010 to now.
[sup][sup]Note to mods: The GQ has been answered… this is a joke![/sup][/sup]
Moderator Note
Joke or not, it’s still a political jab. Keep it out of GQ. It has nothing to do with the question in the OP.
Colibri
General Questions Moderator
Global warming is a subset of climate change.
The jury is still out on global warming effects on hurricanes.
“In Hertford, Hereford and Hampshire, hurricanes hardly ever hapoooohshhiiiii…”
The jury might still be out on what global warming would theoretically be expected to do to hurricanes, but we’re getting close to the point where that doesn’t matter, because we’re finding out empirically what it does. Global warming is not something that will happen; it’s something that is happening right now, and has been for a century.
No, that’s not how these things work. And we have records for some places going back much further than 500 years, all of which goes into the stats.
Most phenomena like that obey a power-law distribution. The OP’s question amounts to asking: have the parameters of the power law for hurricane intensity changed?
Although power laws apply to a huge range of real-world statistics, it was in fact flooding (along the Nile River) that, a century ago, led to the early understanding of such power laws.
Power-law statistics (and specifically the old Nile River papers by Harold Hurst) led Benoit Mandelbrot to some of his conclusions. The famous 1998 collapse of a hedge fund led by Nobel Prize winners might be partly blamed on their use of normal distributions(*) in their models rather than the power-law distributions which are needed to describe fluctuations and risk in financial markets.
(* - There are simplifying theorems that apply to normal distributions so these are often preferred in modeling. “The light’s better here.” :smack: )
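The practical difference between the two models is in the tails. A quick Python sketch (the Pareto parameters here are made up purely for illustration, not fitted to any financial or flood data) shows how much more probability a power law assigns to an extreme “10-sigma-scale” event than a normal distribution does:

```python
import math

# Tail probability of a standard normal beyond z: P(Z > z) = erfc(z / sqrt(2)) / 2
def normal_tail(z):
    return 0.5 * math.erfc(z / math.sqrt(2))

# Tail of a Pareto (power-law) distribution with minimum x_m and exponent alpha:
# P(X > x) = (x_m / x) ** alpha
def pareto_tail(x, x_m=1.0, alpha=2.0):
    return (x_m / x) ** alpha

# Under a normal model, a 10-sigma event is essentially impossible...
print(normal_tail(10))    # ~7.6e-24
# ...under this illustrative power law, the same-sized event has a 1% chance.
print(pareto_tail(10))    # 0.01
```

That twenty-orders-of-magnitude gap is why a risk model built on normal distributions can look fine for decades and then blow up.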
Think of it as the “frog in the pot” situation. Current fluctuation could be normal statistical variation, or it could be a sign of things to come. Like the frog in the slowly heating pot of water, we’ll only be sure when the statistics are incontrovertible. Also remember these things happen regularly. They only seem to impact our collective consciousness (i.e. become 24-hour news on CNN) when the storm takes aim at a well-populated area of the USA.
The problem with these storms, as mentioned earlier, is not sea level. Compared to a 12-foot storm surge, a few cm (if that) of sea level rise is negligible. The biggest problem is the amount of rain - that causes flooding. In this whole thread, everyone discusses wind speed, etc. Nobody has mentioned comparative rainfall statistics. Yet we can point to various news stories over the years of rising sea surface temperatures…
It’s the wind speed that produces all the cool videos …
One bad hurricane season does not make it “conclusive” that climate change is to blame, any more than having a streak of 12 years without a major hurricane hitting the U.S. disproves climate change.
Tempting as it is to assign definitive causes to isolated events, long-term trends are what count.
As for severe seasons, one of the worst on record was in 1780.
It’s a bit harder than that, even. Based on what you’re saying, the 500-year flood means there is a 0.2% chance of Houston getting hit by that kind of flood in a given year, but there is also a 0.2% chance of Miami getting hit by a 500-year flood, a 0.2% chance of Atlanta getting hit by one, a 0.2% chance of Charleston getting hit by one, etc. So depending on the average size of these floods and the overall area of the US that is susceptible to hurricanes, it may be that even under normal conditions we should expect a 500-year flood somewhere in the US as often as every decade or two.
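Under the (strong) independence assumption in that post, the arithmetic works out like this; the count of 20 flood-prone regions is a made-up number just to show the scaling:

```python
# By definition, a 500-year flood has a 0.2% chance per year at one location.
P_FLOOD = 0.002

# Probability of at least one such flood among n independent regions:
# the complement of "no region floods".
def p_at_least_one(n, p=P_FLOOD):
    return 1 - (1 - p) ** n

# With, say, 20 independent hurricane-prone regions (illustrative number):
print(p_at_least_one(20))        # ~0.039, i.e. about 3.9% per year
# Average wait until the first 500-year flood *somewhere*:
print(1 / p_at_least_one(20))    # ~25 years
```

So with enough independent regions, a “500-year” flood somewhere every couple of decades is exactly what the definition predicts, no climate trend required.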
That’s what the probabilities say …
Consider this, there’s a 0.0004% probability that Houston gets two 500-year floods in any given year … half the season’s over so there’s only a 0.0002% chance of Houston getting nailed this year again … awesome …
You are assuming that the probability of location A getting a 500-year flood is uncorrelated with location B getting a 500-year flood. This is probably true for widely separated locations, but as the locations get closer (or on the same lake/river/oceanfront) this is less and less true.
That’s a good assumption … if we use the data from location A to calculate the probability of a flood at location A … the data from location B isn’t used … any correlation between a flood at location A and location B is strictly due to the cause of the flood event … something the statistical arts don’t account for in this context … the 500 year flood probability is from all causes …
Yes, the statistical model is valid for a particular location (although in reality flood maps in areas with little or no data are generated using the statistics from hydrologically similar areas). My objection was the assertion that if the probability of a 500 year flood in location A is 0.2% (by definition) and the probability at location B is also 0.2% then the probability of a 500 year flood at either location A or B is 0.4%, and if we add up enough locations then it is likely that we would have a 500-year flood somewhere at frequent intervals. Adding probabilities like this is only valid if the events are independent (i.e., uncorrelated). Since many areas would share the same cause of the flood, the flood events are clearly correlated and adding the probabilities is incorrect.
Right … if we divide up the world’s land mass into 500 equal-area chunks (a little bigger than Arizona) … on average, one of these chunks will experience a 500-year flood somewhere in its area every year … keep in mind, if a tiny trickle of water runs every 500 years in Central Antarctica, that’s a 500 year flood event … the statistical arts are useful, but there are some drawbacks …
Buck Godot used as an example Houston and Miami … these two locations are far enough apart that just adding the probabilities is valid enough … as valid as can be had until we have 500 years of good scientifically accurate data … we’re just guessing at this point …
This is actually the opposite of what I am saying…there is no way that the weather in all of those chunks is uncorrelated with every other chunk. El Nino, for example, affects weather over an enormous area. Flood probabilities in Miami and Houston are both going to be affected by water temperatures in the Atlantic.
When a butterfly flaps its wings in Australia, a tornado is eventually spawned in Nebraska … cause-and-effect … I get that … I just think you’re diving into a level of minutiae that can’t be supported by the margins of error we’re dealing with … a 500 year flood in Miami or Houston is 0.4% give or take 0.2% … neither one of us has the data to dispute that error …
What I am saying has nothing to do with the butterfly effect - I don’t know how you can read that from my posts, I am talking about very macro effects like ocean temperature which can impact weather over a very wide area, thereby introducing a correlation in the weather between locations within that area. Once there is correlation, you can’t calculate the probability of A or B by simple addition. That’s all that I am saying.