Why isn't flash photography permitted in museums?

Whiter than Michael Jackson?

Dex, you are correct in claiming that the number that comes out of grg88’s calculations has so much margin of error as to be useless … as an estimate of the true number. But grg88 wasn’t trying to estimate the true number. As 5cents pointed out, grg88 was trying to come up with an upper bound. By calculating a number that was almost certainly larger than the true number, and demonstrating that even that number is insignificant, he would have proved his point that museum directors are wrong to use that as their reasoning.

In other words, his margin of error was NOT plus-or-minus X. It was plus X, minus Y, where X ≈ 0. If the assumptions are made conservatively enough, then X might even be negative - i.e. the value selected is ABOVE the largest reasonable value for the factor in question.

Naturally, he might not have been conservative enough in some cases. And he might have forgotten to include one or more factors. But he indicated quite clearly that it was a work in progress. Rather than hearing, “wotta waste of time”, he had hoped to hear, “you would be safe in decreasing this factor to X, but you should increase that factor to Y, and you forgot all about factor Z.” Advice like that may not have gotten him significantly closer to the true value, but it could have increased his confidence in an upper bound.

This isn’t his orals. It’s true that his sources of data are suspect and may not apply to art conservation, but they’re a reasonable starting point. And if he is consistently conservative, and if other experts chime in with fine tuning, then we could collectively come up with an upper bound with a reasonable degree of confidence.

That you were a harassed nerd in HS doesn’t mean much - sometimes the picked-upon become the most ruthless bullies when they find somebody even weaker. I’ll assume this is not the case with you.

It is fine to disagree with his numbers or his methods. But dipping into ridicule is mean. Not that I have anything against meanness - a certain amount of irony and sarcasm is TSD tradition - but ridiculing a flawed but honest attempt at rigor is not just mean, it’s anti-TSD.

True, but your point (that his calculations are useless) assumes that his chosen values are in the middle of their respective margins of error, which grg88 was specifically trying to avoid. If his values were consistently +0/-Y, then the result would have an error of +0/-Z (where Z depends on all the Ys).
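To spell that out with a toy example (all the numbers below are hypothetical, picked only to illustrate the +0/-Z mechanics):

```python
# Toy illustration of the +0/-Z argument. The values are hypothetical:
# if each chosen factor is >= its true value, then the product of the
# chosen factors is >= the true product, so the bound can only err low.
chosen = [0.5, 2e-3, 120.0]   # deliberately conservative (upper-bound) picks
actual = [0.3, 1e-3, 80.0]    # unknown "true" values, each <= its chosen pick

upper_bound = 1.0
true_product = 1.0
for c, t in zip(chosen, actual):
    upper_bound *= c
    true_product *= t

assert true_product <= upper_bound   # error on the bound is +0 / -Z
print(upper_bound, true_product)     # 0.12 vs 0.024
```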

His analysis was NOT useless, his aim was NOT pointless, and his method was NOT futile. His was a reasonable first stab at demonstrating that an upper bound was insignificant, and therefore that the true effect was insignificant. Are 5cents and I the only two for whom this was obvious, right from the start?

(I might add that his first two posts were specifically debunking the assertion that flashes heat the artwork. He wasn’t even addressing the issue of U.V. degradation of pigments.)

No.

FWIW, the method can be surprisingly useful even if the quantitative assumptions are intended to be “central” estimates. There’s another aspect of this that I think Darrell Huff Fan Club may have overlooked, which is that you’d be pretty unlucky if all your estimates turned out to be biased in the same direction. I can’t be bothered to do the maths now, but the more uncertain factors you multiply together, the less likely it is that the extreme ends of the range come into play.

Not mathematically correct. Each factor, when MULTIPLIED by another factor, EXPANDS the range of error. What do you think “multiply” means?

But there’s no guarantee that they aren’t, and assuming that they cancel each other out is unwarranted. Piling guesses upon guesses does not increase the accuracy of the computation; quite the opposite, it increases the uncertainty.

This is an impasse.

On the one hand, we have people who think that mathematical estimates are conclusive in setting bounds.

On the other hand, we have people who think that these mathematical estimates might, with equal accuracy, be taken at random from a telephone book or daily horoscope.

I add the note that it doesn’t make a damn bit of difference what the mathematics proves. The museum officials think it’s a risk. Period. And that’s why they don’t allow it. It’s a question of risk tolerance. You can prove statistically that air travel is safer than automobile travel, but if you have someone who is afraid of flying, that “proof” will be meaningless.

It’s deja vu all over again…

At least he’s being consistent!
But also…

I’ve got an idea. Why don’t I take my camera along to the National Gallery (only 15 minutes away from where I work in London) and do some empirical research? The weather forecast for the next few days is a bit dreary, and I propose to wait until we have a half-decent day so that I’m not taking measurements on an atypically dull day. Stay tuned to this channel…

PS Musicat: I may be wrong about this, but I suspect I might not have got a degree in maths from Cambridge University if I didn’t know what “multiply” means…
Here’s a simple example. Suppose you believe “X” is somewhere between 10 and 30, and “Y” is somewhere between 20 and 60. What is X times Y? One might say, rather simplistically, that it’s between 10×20 and 30×60, i.e. between 200 and 1800. But that’s ignoring the probability distributions behind the numbers. Assuming you haven’t just plucked the numbers out of thin air - i.e. they are reasonable estimates - it’s reasonable to model X and Y as coming from normal distributions: X has mean = 20 and standard deviation = 5, Y has mean = 40 and SD = 10. Crunch them together and you’ll get a range for XY of about 225 to 1370. Yes, XY could be 1800, but it’s extremely unlikely.

If you recall, I wasn’t claiming that multiplying factors together makes the result better. What I said was “The more uncertain factors you multiply together, the less likely it is that the extreme ends of the range come into play.” And that’s correct.
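If anyone wants to check that range rather than take my word for it, here’s a quick Monte Carlo sketch under exactly the same assumptions (X ~ Normal(20, 5) and Y ~ Normal(40, 10), treated as independent):

```python
# Monte Carlo check of the X*Y example: X ~ Normal(20, 5), Y ~ Normal(40, 10),
# assumed independent. The central 95% of the product sits well inside the
# naive 200-1800 range.
import random

random.seed(0)
N = 200_000
products = sorted(random.gauss(20, 5) * random.gauss(40, 10) for _ in range(N))

lo = products[int(0.025 * N)]
hi = products[int(0.975 * N)]
print(f"95% of X*Y falls roughly in [{lo:.0f}, {hi:.0f}]")   # ~[240, 1370]
```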

Anyway, this is probably getting a bit dull for the spectators. If you want to discuss it further, by all means e-mail me, and we can continue off-line.

You stole my idea. :slight_smile:

If you’re dying to go on a little adventure, why not do your empirical testing on the piece of art that is mentioned in the article, something called “The Mona Lisa”. Apparently it is in a little museum in Paris called “The Louvre”. They get a couple dozen visitors a year (or is it 6 million?), some small percentage of which (say, 99.99%?) track down this “Mona Lisa”. The museum “strongly discourages” flash photography, and apparently this has about the same effect that speed limit signs have on speeding (the French don’t speed, do they?).

The empirical test: Is the Mona Lisa still present and visible, or not?

Yeah, I think I may have heard of it somewhere. Unfortunately it’s not 15 minutes down the road. I may be devoted to fighting ignorance and furthering the cause of reason and science, but I’m not that devoted. (And my lunch-break probably isn’t long enough…)

Incidentally, if anyone out there does find themselves visiting a major gallery or museum such as the Louvre, please do have a look at your fellow visitors, and let us know how many do use flash. It’s useful data.

BTW 5cents, I’m not 100% sure that your suggested experiment is appropriate. Last time I was in the Louvre I wasn’t personally able to determine that the Mona Lisa was still present and visible. There was a big crowd of people looking at something, but I’m only 5’7" and couldn’t see what that something was.

And anyway, the Mona Lisa has been in existence for 497 years, but the first 440 or so were before the advent of flash photography for amateurs. Say (for the sake of argument) the cumulative effect of thousands or millions of visitors taking flash photographs was to double the painting’s exposure to damaging light during those remaining years. Then it would have been exposed to the equivalent of about 560 years’ worth of light. That’s more than 497 but not a whole lot more. The effect of the flashes might be to shorten the painting’s life expectancy, but you wouldn’t expect it to have disappeared entirely yet.
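(The arithmetic behind that, as a sketch using the same for-the-sake-of-argument figures:)

```python
# Back-of-envelope from the paragraph above. The "doubling" of exposure
# during the flash era is assumed purely for the sake of argument.
age_years = 497
pre_flash_years = 440
flash_era_years = age_years - pre_flash_years              # ~57 years
equivalent_years = pre_flash_years + 2 * flash_era_years
print(equivalent_years)   # 554 -- "about 560 years' worth of light"
```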

(Still, if I’m going to do the experiment, maybe I’d better hurry.:slight_smile: )

Sooo after all that typing, I still don’t see Musicat’s point.

My unspoken ground rules were:

(1) We’re much more likely to get a more accurate answer by taking actual data, backed up by hyperlinks to the sources, than by taking wild-butt guesses. Never mind that we may be off by a factor of two on each bit of data -- astronomers and other scientists, when they have to, are comfortable with estimates that may be off by even bigger factors. Any estimate, based on SOMETHING, is better than no info at all or just a wild guess. With a lot of things, all you need is a ROUGH idea of whether it will fly. For example, let’s say you wonder: if all the people in China jumped off a chair, would it shake the Earth to a noticeable extent? Here’s a quick and rough estimate: 1 billion people (Horrors! We’re probably 20 percent off!), weight of same, maybe 100 billion pounds (Horrors again! Another 30% off, easily!), weight of Earth, 6.0 sextillion tons (Horrors! Don’t know if that includes the oceans!).
Ratio of the two: roughly 100,000,000,000,000 (about 10^14). Hmmm, looks like the weight of all those folks is totally insignificant. Even if my math and data are off by several decimal places, it makes no difference.
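Here’s that back-of-envelope calc as a quick sketch, using the same crude round numbers (they’re rough guesses, not data):

```python
# Order-of-magnitude check of the "everyone in China jumps off a chair" example.
# All inputs are the deliberately rough guesses from above, not real data.
people = 1e9                  # ~1 billion people (probably 20 percent off)
avg_weight_lb = 100           # gives ~100 billion pounds total (30% off, easily)
earth_weight_tons = 6.0e21    # ~6.0 sextillion tons
earth_weight_lb = earth_weight_tons * 2000

ratio = earth_weight_lb / (people * avg_weight_lb)
print(f"{ratio:.1e}")         # ~1.2e+14 -- the jumpers are utterly insignificant
```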
Same basic idea with the flash estimates. The estimates may be off, but in the end the ratio tends to indicate the effects are small.
As I said before, I’m open to discussion, but please bring some numbers if it’s going to be a fruitful discussion.

Hey grg88, if you want to get people to take the numerical approach seriously, you might first have to convince them that you can count beyond one! :slight_smile: (Or was that a sort of reference to the Spanish Inquisition sketch from Monty Python?)

Seriously though, I’m totally with you as far as the principle goes. However, we don’t seem to have as clear-cut a case as the example you posted about the Chinese jumping up and down. What I think we need is:
[a] a clear set of numerical assumptions, including an understanding as to whether or not they are deliberately conservative (and if not, what the likely range of inaccuracy is); and
[b] a clear methodology for combining the numbers to calculate the magnitude of the effect we’re after.
The way this thread has rambled, we don’t have either of those. (And unfortunately the nearest effort, your lengthy post back in September, was immediately challenged for a logical error.)

I don’t think we should give up yet though.

Incidentally, yesterday whilst I was musing on the practicality of experimenting in the National Gallery, I thought of another complicating factor. The metering on cameras is generally calibrated for 18% grey. If you’re photographing anything that’s significantly darker or lighter than that, you have to adjust for it, or else the picture will be wrongly exposed. But I expect most people don’t know that - it’s the sort of thing you only learn if you’re something of an enthusiast. So if you’re trying to take a picture of a painting that’s significantly darker than 18% grey, it will be significantly over-exposed. And pictures like the Mona Lisa are very dark…
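To put a rough number on the size of that effect (the reflectance figure for a dark painting below is an assumed value, for illustration only, not a measurement):

```python
# How far off a standard meter would be on a dark subject, in stops.
# 18% is the usual meter calibration; the subject reflectance below is
# an assumed value, chosen only for illustration.
import math

meter_reflectance = 0.18
subject_reflectance = 0.045    # hypothetical very dark painting

overexposure_stops = math.log2(meter_reflectance / subject_reflectance)
print(f"~{overexposure_stops:.1f} stops over-exposed")   # about 2 stops
```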

You’ve mentioned that before. I hope you’re not offering that as an argument that such things should not be analyzed or even discussed in TSD. Questioning Authority is a TSD tradition. Sure, we may not convince very many museum officials any time soon, but “fighting ignorance” IS taking longer than we thought it should. If TSD Q&A were restricted to only the most potentially productive areas of inquiry, then most of Cecil’s books should just be tossed in the trash.

I’ll assume that you weren’t trying to stifle discussion and were instead simply warning experimenters not to get their hopes up too high.

I think you’re thinking of averaging multiple measurements of the same value, not multiplying single measurements of several independent values.

Having a +/- error factor is a simplified form. Most measurements have an error “bell” curve that peaks at the most likely value. Sometimes these curves are not symmetrical, so the most likely value is often not in the center of the range. And the two extremes often do not correspond to zero-crossings, but rather to some cutoff probability. When the measurements are multiplied together, I believe these bell curves tend to get fatter, meaning that the uncertainty does increase.
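For what it’s worth, the usual rule of thumb for independent factors with modest relative errors is that the relative uncertainties add roughly in quadrature when you multiply. A quick sketch (the means and standard deviations are made up, purely for illustration):

```python
# Sketch: for independent factors with modest relative errors, the relative
# uncertainty of the product is roughly the quadrature sum of the factors'
# relative uncertainties -- the bell curve does get fatter. Numbers are made up.
import math
import random

random.seed(1)
N = 100_000
x = [random.gauss(20, 2) for _ in range(N)]    # ~10% relative error
y = [random.gauss(50, 5) for _ in range(N)]    # ~10% relative error
xy = [a * b for a, b in zip(x, y)]

mean = sum(xy) / N
sd = math.sqrt(sum((v - mean) ** 2 for v in xy) / N)
print(sd / mean)                        # ~0.14 observed
print(math.sqrt(0.10**2 + 0.10**2))     # ~0.14 predicted by the quadrature rule
```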

In grg88’s case, since he’s trying to pick values that are consistently at the upper limit of the error range, the sizes of the error ranges are irrelevant.