In science, there are two main ways of doing things: the scientific method and optimization. Strictly speaking, the scientific method almost always requires a control. In optimization, you already did the control 6 months back, or there are simply too many variables involved to properly control for everything. You are looking for a local minimum, or, if lucky, a global one. Rather than rigor, which could take forever, the goal is “good enough for government work”. That loosens your control constraints to the point where you have a chance of actually finding workable answers. Perhaps not the BEST, but with, say, 1000 variables, you have to find someplace to start. Some folks will argue that optimization is not science. I think they are wrong. Sometimes it is the only feasible way to attack really complex problems in finite time.
Sengold, skillz? 
Since no one has linked to the show so far, here’s what it found: the dummy hits the ground at 120 mph without a ball around it, and at 56 mph with it. Either way it’s too fast, so any human would be killed with or without the ball:
As an active (brain) scientist, I would say that half the papers I read (or write) don’t involve any notion of a control, nor do they require one. To name a simple example, place cells are neurons that track where a rat is in a room, and demonstrating that neurons are place cells doesn’t require a control.
At some point, the notion of a control is subjective. Imagine that you study some fertilizer. You grow some crop with zero drops of fertilizer in the water, or one drop, or two drops, etc. You see that crops get bigger when you add more fertilizer, and fit some linear (or non-linear) regression between the quantity of fertilizer and crop size. Fine. Now, you will generally call the ‘zero drops’ condition a ‘control’. However, from a mathematical point of view (when you fit your linear regression), there is nothing special about the ‘zero drops’ data point. It is really just a data point like any other.
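To make that concrete, here’s a minimal sketch of the fertilizer regression (all numbers made up for illustration). The ‘zero drops’ point passes through the least-squares formulas exactly like every other (x, y) pair; nothing in the math singles it out:

```python
# Hypothetical data: drops of fertilizer vs. crop mass in grams (made-up numbers)
drops = [0, 1, 2, 3, 4]
crop_g = [100, 112, 119, 131, 140]

n = len(drops)
mean_x = sum(drops) / n
mean_y = sum(crop_g) / n

# Ordinary least squares: every (x, y) pair, including (0, 100),
# contributes to these sums in exactly the same way.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(drops, crop_g)) \
        / sum((x - mean_x) ** 2 for x in drops)
intercept = mean_y - slope * mean_x

print(f"crop ≈ {slope:.1f} g per drop + {intercept:.1f} g baseline")
```

The fitted intercept is an *estimate* of the no-fertilizer outcome informed by all five points; the ‘control’ point has no privileged role in producing it.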
Similarly, many experiments (in brain science or other fields) consist of manipulating one (or more) experimental variables and seeing how they affect an outcome. Defining what a control is requires defining what a ‘zero’ or ‘baseline’ condition is, which is not always possible, may be quite arbitrary, and is often ultimately irrelevant.
So really, ‘control’ is a good notion when you want to highlight the effect of manipulating one variable, but it is only meaningful in some contexts. There can also be designs where experiment A is the control for experiment B, and experiment B is the control for experiment C: in some fields, you just say that you ‘contrast’ A versus B, or B versus C, etc.
It’s a bit like asking: Does every medical trial require a cohort of a million+ people?
The answer is: ideally, yes. But practically, no.
We want our experiments to be as rigorous as possible, so we would always like some kind of control. That includes cases where the outcome of the control is obvious: finding errors in our common knowledge or assumptions is part of science’s bag.
But in the real world, it is not always possible to have a control. And even when it is possible, there’s a trade-off when the control’s outcome looks self-evident and including it would be very expensive in time or resources.
The prevalence of controls depends on the field, too. If you are looking for a particle peak at 547 TeV in your particle smasher, it’s a far different game than looking for the optimum amounts of HCl, dichloromethane, and AgNO3 to add to your reactants to maximize D-isomer formation in chlorination across the #7-8 double bond in a steroid molecule.
Also, sounds like the wrong question is being asked in the OP.
The question isn’t “could a human being survive if …”, it is “could a human being die if …”.
After all, if, by some miracle, a person did survive, you haven’t learned much except that survival is possible, which opens up a lot of new questions.
But if you thought the baseline was survival and the outcome is death, you’ve settled the question pretty firmly with just a single experiment.
Even in experiments with controls, the proper null hypothesis is important, and a lot of people don’t get that part right (various threads on these forums show this clearly).
Seems to me a bigger ball would work because it would lower the terminal velocity and lengthen the deceleration distance (thus lowering the force).
Brian
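Brian’s two effects can be roughly separated with a back-of-the-envelope sketch. Every number below is an assumption for illustration, not data from the show: drag sets terminal velocity via v = sqrt(2mg / (ρ·Cd·A)), and a longer stopping distance d lowers the average deceleration a = v²/(2d).

```python
import math

RHO = 1.225   # air density at sea level, kg/m^3
G = 9.81      # gravity, m/s^2
MASS = 80.0   # falling mass, kg (assumed the same in both cases)

def terminal_velocity(drag_coeff, frontal_area_m2):
    """Speed where drag balances weight: m*g = 0.5*rho*Cd*A*v^2."""
    return math.sqrt(2 * MASS * G / (RHO * drag_coeff * frontal_area_m2))

def avg_decel_gs(v, stop_distance_m):
    """Constant-deceleration approximation a = v^2 / (2d), in g's."""
    return v ** 2 / (2 * stop_distance_m) / G

# Sprawled person: Cd ~ 1.0, A ~ 0.7 m^2 (typical skydiving figures)
v_body = terminal_velocity(1.0, 0.7)
# Hypothetical 3 m ball: Cd ~ 0.5 for a sphere, A = pi * r^2
v_ball = terminal_velocity(0.5, math.pi * 1.5 ** 2)

# The ball also crushes over ~1 m instead of ~2 cm of flesh.
print(f"body: {v_body:.1f} m/s, ~{avg_decel_gs(v_body, 0.02):.0f} g on impact")
print(f"ball: {v_ball:.1f} m/s, ~{avg_decel_gs(v_ball, 1.0):.0f} g on impact")
```

The two effects multiply: the bigger frontal area roughly halves the speed (quartering the kinetic energy per unit mass), and the crushable shell stretches the stop over a distance dozens of times longer, so peak force drops by far more than the speed alone suggests.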
If the test is silly to begin with, there is no reason to do it. Yes, if you drop a watermelon off the top of a 20-story building to the street, the expected result is that it breaks apart on impact. Unless this is being done for comic effect and curiosity (à la David Letterman), it isn’t a good test, so you don’t need a control, or to do the test/experiment in the first place.
Seven stories onto concrete is enough to do in a reasonably ripe watermelon*, 8 pounds or so.
The result is mostly not worth eating.
*The control, not dropped, was delicious.
That article appears in the Christmas issue of BMJ, so calling it “completely serious” is an overstatement!
Often overlooked, but it’s also important to run experiments with multiple simultaneous differences, because maybe you have two treatments which each do little separately but work amazingly well when combined, or which each work well individually but don’t work at all when combined, or the like. Of course it’s easy to see why this isn’t usually done: with a large number of possible variables, the number of combinations to test grows exponentially.
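The blow-up is easy to see: with n treatments that can each be given or withheld, a full factorial design needs 2^n runs. A quick sketch (treatment names are hypothetical):

```python
from itertools import product

treatments = ["fertilizer", "extra_light", "extra_water"]  # hypothetical

# Every on/off combination of the treatments -- a full factorial design.
runs = list(product((0, 1), repeat=len(treatments)))
print(len(runs))  # 8 runs for just 3 treatments

# Doubling with every added treatment:
for n in (10, 20, 30):
    print(f"{n} treatments -> {2 ** n:,} runs")
```

Thirty two-level treatments already demand over a billion runs, which is why interaction effects mostly go untested outside small designed experiments.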
Richard Feynman on the importance of controls (http://calteches.library.caltech.edu/51/2/CargoCult.pdf)
"Other kinds of errors are more characteristic of poor science. When I was at Cornell, I often talked to the people in the psychology department. One of the students told me she wanted to do an experiment that went something like this—I don’t remember it in detail, but it had been found by others that under certain circumstances, X, rats did something, A. She was curious as to whether, if she changed the circumstances to Y, they would still do A. So her proposal was to do the experiment under circumstances Y and see if they still did A.
I explained to her that it was necessary first to repeat in her laboratory the experiment of the other person—to do it under condition X to see if she could also get result A—and then change to Y and see if A changed. Then she would know that the real difference was the thing she thought she had under control."
"For example, there have been many experiments running rats through all kinds of mazes, and so on—with little clear result. But in 1937 a man named Young did a very interesting one. He had a long corridor with doors all along one side where the rats came in, and doors along the other side where the food was. He wanted to see if he could train the rats to go in at the third door down from wherever he started them off. No. The rats went immediately to the door where the food had been the time before.
The question was, how did the rats know, because the corridor was so beautifully built and so uniform, that this was the same door as before? Obviously there was something about the door that was different from the other doors. So he painted the doors very carefully, arranging the textures on the faces of the doors exactly the same. Still the rats could tell. Then he thought maybe the rats were smelling the food, so he used chemicals to change the smell after each run. Still the rats could tell. Then he realized the rats might be able to tell by seeing the lights and the arrangement in the laboratory like any commonsense person. So he covered the corridor, and, still the rats could tell.
He finally found that they could tell by the way the floor sounded when they ran over it. And he could only fix that by putting his corridor in sand. So he covered one after another of all possible clues and finally was able to fool the rats so that they had to learn to go in the third door. If he relaxed any of his conditions, the rats could tell.
Now, from a scientific standpoint, that is an A‑Number‑1 experiment. That is the experiment that makes rat‑running experiments sensible, because it uncovers the clues that the rat is really using—not what you think it’s using. And that is the experiment that tells exactly what conditions you have to use in order to be careful and control everything in an experiment with rat‑running.
I looked into the subsequent history of this research. The subsequent experiment, and the one after that, never referred to Mr. Young. They never used any of his criteria of putting the corridor on sand, or being very careful. They just went right on running rats in the same old way, and paid no attention to the great discoveries of Mr. Young, and his papers are not referred to, because he didn’t discover anything about the rats. In fact, he discovered all the things you have to do to discover something about rats."