I understand the benefits of placing the musicians where they are in the orchestra (e.g., all the strings together, all the percussion together, etc.): having similar musicians in the same place so they can all take direction from the conductor, the aesthetics of watching all the violinists’ bows moving in perfect unison, and so on.
What I don’t get is this: strictly from the standpoint of how the music reaches the listeners’ ears, does the placement of the musicians matter? If the musicians were seated at random, would a blindfolded listener in the audience be able to tell that something was amiss?
Keeping in time and in tune would be much more difficult for the musicians. It takes a lot of careful listening to stay in sync with the other players in your section, and if the two or three flautists are scattered around the stage, they’re more likely to drift slightly out of sync with each other. I can think of a few licks in a few pieces (the third movement of Rimsky-Korsakov’s “Scheherazade” springs to mind) that would be extremely difficult for the woodwinds if they weren’t near each other.
I don’t have a citation for this, but I believe (from my years in concert band) that the smaller instruments are in front because they don’t project volume nearly as well as the big ones sitting in back. A flautist standing in front of you playing at max volume is unlikely to melt your face off (figuratively of course), but a tuba definitely could. And the percussion section would need to be together to 1) keep time effectively and 2) facilitate the way percussionists have to switch instruments so frequently.
I don’t know exactly how the sound waves work, but I think if you put the tubas and drums in front and the flutes in the back, you wouldn’t be able to hear the flutes nearly as well. Interference maybe? Or just lack of volume projection.
I think the threshold for humans locating sounds in a space (left-to-right locality) is a timing difference of about 1 millisecond. I don’t know the exact number that would be accepted by the scientific community (maybe it’s 0.5 ms), but I’ll use 1 so it’s a simpler calculation.
Assuming an orchestra width of 50 feet (first violins on the far left and cellos on the far right), a back-of-the-envelope calculation (sketched below) puts the seating distance where the 50 ft width would collapse to 1 ft (thereby losing spatial locality) at about 2,500 feet.
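Here’s a rough sketch of one way to run the numbers, using the textbook far-field approximation ITD ≈ (ear spacing / c) · sin θ for a listener on the centre line. The 0.6 ft ear spacing is an assumed round figure, and psychoacoustics references usually put the just-noticeable ITD nearer 10–20 µs than 1 ms, which happens to land the collapse distance in the same ballpark:

```python
import math

SPEED_OF_SOUND = 1125.0   # ft/s, roughly, at room temperature
EAR_SPACING = 0.6         # ft, assumed round figure for ear-to-ear distance
HALF_WIDTH = 25.0         # ft, half of the 50 ft orchestra width

def itd_us(lateral_offset_ft, distance_ft):
    """Interaural time difference in microseconds for a source at the
    given lateral offset, heard by a listener on the centre line,
    using the far-field (spacing / c) * sin(theta) approximation."""
    theta = math.atan2(lateral_offset_ft, distance_ft)
    return (EAR_SPACING / SPEED_OF_SOUND) * math.sin(theta) * 1e6

for distance in (50, 100, 300, 1000, 2500):
    # timing spread between the far-left and far-right players
    spread = 2 * itd_us(HALF_WIDTH, distance)
    print(f"{distance:>5} ft: left-to-right ITD spread {spread:6.1f} us")
```

At 2,500 ft the left-to-right spread is down to about 11 µs, right around the commonly quoted just-noticeable difference, so the conclusion below holds either way.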
Since most concert halls are much shorter than 2,500 ft, you would be able to detect if musicians were seated randomly, unless you were deaf in one ear or simply watching the orchestra on an old TV with one mono speaker. Even if you were deaf in one ear, the skin on the left and right sides of your body might perceive sound vibrations arriving at different times, though I’m not sure it has anything like the 1 ms resolution of your ears.
My calculation only accounts for direct sound. Extra factors, such as the sound bouncing around the walls (reverberation), could alter the conclusions.
And I have read somewhere that, from an acoustics point of view, it should be the other way around: the not-so-loud instruments should have a wall behind them to help project the sound.
It does have to do with the dynamics of the instruments. Note that you have 16 first violinists, playing in unison, but only three trumpets. You have eight double basses, but only one tuba.
The brass and percussion can play far louder than strings. Woodwinds are in the middle. In order to adjust the dynamics, the orchestra has more strings than anything else, and they are placed near the front.
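To put rough numbers on that: like instruments playing together act as incoherent sources, so their levels add as 10·log10(N) dB, and doubling a section only buys about 3 dB. A quick sketch (the single-instrument levels here are assumed for illustration, not measured data):

```python
import math

def combined_spl(single_spl_db, count):
    """Level of `count` like instruments playing together, assuming
    incoherent sources: intensities add, so +10*log10(N) dB."""
    return single_spl_db + 10 * math.log10(count)

# Single-instrument levels below are assumed for illustration only.
print(f"16 violins  at 82 dB each: {combined_spl(82, 16):.1f} dB")
print(f" 3 trumpets at 92 dB each: {combined_spl(92, 3):.1f} dB")
print(f" 8 basses   at 80 dB each: {combined_spl(80, 8):.1f} dB")
print(f" 1 tuba     at 94 dB     : {combined_spl(94, 1):.1f} dB")
```

Sixteen violins together still only roughly match three trumpets, which is exactly why the section counts are so lopsided.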
The setup of the orchestra has been developed over the centuries and is the optimal way to get the correct dynamics (not to mention that the music is composed assuming this setup).
Localisation of sound sources uses a mix of effects. Phase is one; a simple difference in relative level between the ears (i.e. like the balance control on a stereo system) is another. When you listen to a pair of speakers in a stereo system, these two are what are used to encode position.
If you are seated in front of an orchestra you get to use much better localisation cues. The Head-Related Transfer Function (HRTF) is the sum of all the effects your ears and head have on the sound arriving from each direction. Our brains learn to decode this to achieve quite good localisation of sound, beyond what you would reasonably imagine possible given the infinitude of mathematically possible solutions: for instance, discerning the height of a source, or disambiguating front from rear.
In a good venue the reverberant sound field (i.e. the sound that has reflected off at least one surface, versus the sound that travelled directly to you) accounts for about 90% of the energy you hear, not the direct sound. However, the Haas effect is also critical. (This is where the figure of 1 ms comes from.) It is the first arrival of the sound that determines location. Subsequent reflections merely add to the perceived level (if they are, say, 5 to 20 ms late), or, if they are earlier, they add a sense of space, but still don’t ruin the localisation.
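For a rough feel of why reflections land in that 5 to 20 ms window, the mirror-image construction gives the delay of a side-wall reflection directly. The geometry below is assumed purely for illustration:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

# Assumed geometry: player at the origin, listener 15 m away,
# a side wall 5 m off the line between them.
source, listener = (0.0, 0.0), (15.0, 0.0)
wall_y = 5.0

# Mirror-image trick: a wall reflection behaves like a phantom
# source mirrored across the wall.
image = (source[0], 2 * wall_y - source[1])

direct = math.dist(source, listener)
reflected = math.dist(image, listener)

print(f"direct sound arrives after {direct / SPEED_OF_SOUND * 1e3:.1f} ms")
print(f"first reflection lags by   "
      f"{(reflected - direct) / SPEED_OF_SOUND * 1e3:.1f} ms")
```

That reflection lags the direct sound by about 9 ms: squarely inside the window where it adds loudness without pulling the perceived location away from the first arrival.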
Localisation of a source in an orchestra is going to depend a great deal on where you sit, and on the quality of the venue. But my experience is that once you get a reasonable way back, you lose pretty much all localisation. For reasons I can’t quite explain, I usually get seats either very close to the front or at the front of the first balcony. The latter position yields a well-integrated, spacious sound and zero localisation. Front row seats yield a ridiculous separation of sources, as the orchestra subtends about 120 degrees across my view. Perfect seats, middle and about row F, yield a nice spread and good localisation, but nothing like the pinpoint imaging so beloved by HiFi freaks (who really need to get out more and listen to live music). I need to wait for the season ticket holders who get those seats to die before I get them.
Higher frequencies tend to reflect more off reflective surfaces, and become absorbed more in absorbent surfaces, compared to lower frequencies. That’s why if you have a sound system with a subwoofer, it really doesn’t matter where you put it. You don’t have as much of a sense of what direction bass frequencies are coming from, and they will tend to penetrate through the orchestra in front of them when placed in the back. Piccolos, not so much.
One of the critical things about localisation is the harmonics of an instrument or speaker. We have essentially zero ability to localise low frequencies - the wavelength is so long that our head makes no difference to the amplitude, and the period so long that phase differences are useless. But the harmonics of the sound are higher frequency, and due to the quite severe non-linearity of frequency response our ears have in the low frequencies, these harmonics are significantly emphasised versus the fundamental. A subwoofer should be impossible to localise; however, a low-quality one isn’t too hard to find, because it has quite a bit of distortion, and harmonics that creep into the range we can localise are present. Vent noise (turbulence of the air chuffing in and out of a ported enclosure) is a good one.
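The wavelength argument is easy to check: a sound becomes effectively impossible to shadow or phase-compare once its wavelength dwarfs the head. A quick sketch (the 0.18 m head diameter is an assumed round figure):

```python
SPEED_OF_SOUND = 343.0  # m/s
HEAD_DIAMETER = 0.18    # m, assumed round figure

for freq in (40, 100, 500, 2000, 8000):
    wavelength = SPEED_OF_SOUND / freq
    print(f"{freq:>5} Hz: wavelength {wavelength:6.2f} m "
          f"= {wavelength / HEAD_DIAMETER:5.1f} head widths")
```

At 40 Hz the wavelength is nearly fifty head widths, so the head is acoustically invisible; around 2 kHz the wavelength is about one head width, the head casts a real shadow, and localisation gets easy.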
Orchestral instruments have quite a lot of harmonic content - that is part of what gives them their character - and interestingly, those instruments that boast very low frequency notes may have more output in the harmonics than in the fundamental. The extreme example is organ pipes, where the very low stops may be 10 dB down in the fundamental versus the harmonics. That is part of the reason you can enjoy organ music at all on something less than an insane HiFi system. Horns - even a tuba - have a remarkable amount of energy in the harmonics; it’s what gives a horn its sound. Bowed instruments produce essentially a sawtooth wave, so even a double bass has a solid helping of harmonics.
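For the sawtooth claim specifically: an ideal sawtooth’s nth harmonic has amplitude 1/n, i.e. only 20·log10(n) dB below the fundamental, so plenty of a double bass’s energy sits well up in the localisable range. A minimal sketch (41.2 Hz is the standard low E of a double bass):

```python
import math

FUNDAMENTAL = 41.2  # Hz, low E on a double bass (standard tuning)

# Ideal sawtooth: harmonic n has amplitude 1/n,
# i.e. 20*log10(1/n) dB relative to the fundamental.
for n in range(1, 9):
    level_db = 20 * math.log10(1.0 / n)
    print(f"harmonic {n}: {FUNDAMENTAL * n:6.1f} Hz, {level_db:6.1f} dB")
```

Even the eighth harmonic, around 330 Hz and comfortably localisable, is only 18 dB down, and the ear’s insensitivity at low frequencies narrows that gap further.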
Choruses sometimes mix all of the parts together - it’s more difficult to sing that way, so it forces the individual singers to really learn their parts solidly. This is more often done as a practice technique, but I have seen it done in concert.
And that’s where my knowledge of the difference in sound comes from. There is a lot of work in putting people together whose voices (through harmonics) will not clash. It’s not something I ever got good at, but my director was a genius. We could sound out of tune until he swapped two people.
The difference between sections and mixed placement is striking. The sound seems to be coming from the entire choir. This was actually the preferred arrangement of my college director. We learned our parts in sections, but once we’d moved past that stage, we often were grouped into quartets, and stayed that way in concert.
BTW, I think you got the reason for the mixed placement wrong. Sure, in a low-level choir it might just be to force you to learn your part, but in most it’s to force blending and balance. You hear the other parts better, and can focus on making your part fit with the other parts. Choral singing requires a lot of on-the-spot tuning and dynamic adjustment.
While this is also true in a band, the discrete fingerings of most instruments make this less of a problem. Oddly enough, though, any string instrument would be an exception. I bet they are placed together for the amplification effect, as they are quite quiet instruments.