Spacecraft landing questions

Inspired by the questions raised about the success of the Philae landing and about the abilities of its software. During the Apollo 11 descent, Armstrong famously had to take over manual control, after the autopilot was doing the space equivalent of driving into a ditch, and guide the lander into a clearing that he had observed. How smart is modern (and Apollo-era) guidance software? Would a probe computer of today have recognised the dangers that Armstrong identified and acted accordingly?

How are landing zones decided anyway for spacecraft? Is it a case of “well, it seems that this place fulfills the needed parameters, and hope there are no complications”? Because, remember, the Apollo 11 landing site seemed OK from orbit but was actually full of boulders.

A computer autopilot is basically an idiot savant; it is brilliant at doing exactly what it is told, and oblivious to any reality that extends beyond its instructions or the response data from the inertial measurement unit (IMU). Autopilots, which comprise both the hardware in the flight computer and the software, are far more complex and capable today than the Apollo Primary Guidance, Navigation and Control System (PGNCS, pronounced “pings”…yes, that’s not how the acronym phonetically sounds out, deal with it), but they are built on the same fundamental principles. One particular technique of note is the Kálmán filter, a time-weighted averaging algorithm that “filters” (i.e. performs signal-processing operations on) incoming data to reduce noise and produce an accurate estimate of position and of trajectory parameters (velocity and rotation vectors, et cetera). Kálmán filtering is critical to the reliable operation of guidance systems, without which errors would rapidly compound and put the system off course; but such filters are also very complex to develop and code, and can easily give seemingly good but erroneous results if not well validated.
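To illustrate the time-weighted averaging described above, here is a minimal one-dimensional Kálmán filter sketch. The noise variances and the measurement list are made-up toy numbers, nothing like actual flight parameters:

```python
# Minimal 1-D Kalman filter: smooth noisy altimeter-style readings of a
# roughly constant quantity. Toy values throughout; real guidance filters
# track multi-dimensional state (position, velocity, attitude) and are
# far more elaborate.

def kalman_1d(measurements, q=0.01, r=4.0):
    """q: process-noise variance, r: measurement-noise variance."""
    x, p = measurements[0], r      # initialize from the first reading
    estimates = [x]
    for z in measurements[1:]:
        p += q                     # predict: uncertainty grows
        k = p / (p + r)            # Kalman gain: 1 trusts data, 0 trusts model
        x += k * (z - x)           # update: blend prediction and measurement
        p *= (1 - k)
        estimates.append(x)
    return estimates

noisy = [10.2, 9.7, 10.4, 9.9, 10.1, 10.3, 9.8]   # true value is about 10
est = kalman_1d(noisy)
print(round(est[-1], 2))   # settles near 10 despite the scatter
```

The “seemingly good but erroneous results” failure mode corresponds to picking q and r badly: the filter still produces smooth, plausible-looking output, it is just smoothly wrong.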

Note that while landing on the Moon or a comet is challenging, it is nothing compared to landing on a body with a significant atmosphere, in which the spacecraft has to go from purely inertial loads, to hypersonic drag, to supersonic shock waves, to subsonic form drag. This is especially challenging when those transitions are not well defined, e.g. in very thin but still significant atmospheres. This makes Mars just about the most difficult solid body in the Solar System to land a spacecraft on, and it will represent a substantial challenge for any large (i.e. crewed) vehicle that attempts to land on the surface.

Stranger

So, even with all the advancements of today, no computer programme can do what Neil did 45 years ago? Would image recognition software be able to say “boulder field, avoid”? What about the terrain-mapping software of cruise missiles?

(Sorry if I sound idiotic, I am the most ignorant of laymen)

Stranger, I found a book on Amazon that describes the actual digital systems used on the lander. What I read was that even when Armstrong flipped over to “manual”, computer assistance was still involved somehow. I bring this up because in the game “Kerbal Space Program”, which simulates vacuum moon landings reasonably realistically, making a manual landing on engine thrust is extremely difficult. It’s an inherently unstable system you are trying to control - the relationship between the angle of your craft to the ground and the lateral velocity components means that a slight mistake will give you a huge lateral velocity and make it very difficult to recover.
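The tilt-to-lateral-velocity coupling is easy to see with a toy hover calculation: if a lander holds altitude while tilted by a small angle, the horizontal thrust component is unopposed and drift builds steadily. The numbers below are illustrative, not Apollo LM values:

```python
import math

# Toy hover sim: a lander holding altitude with a small, uncorrected tilt.
# Hovering means T*cos(tilt) = m*g, so the unopposed lateral acceleration
# is g*tan(tilt). Illustrative numbers only.

g_moon = 1.62                 # lunar surface gravity, m/s^2
tilt_deg = 5.0                # a "slight" attitude error
dt, steps = 0.1, 300          # 30 seconds of hover

a_lat = g_moon * math.tan(math.radians(tilt_deg))
v_lat = x_lat = 0.0
for _ in range(steps):        # simple Euler integration
    v_lat += a_lat * dt
    x_lat += v_lat * dt

print(f"after 30 s: lateral velocity {v_lat:.1f} m/s, drift {x_lat:.0f} m")
```

A mere five degrees of uncorrected tilt leaves you translating at several metres per second after half a minute, which is exactly the runaway described above.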

Did the Apollo avionics have protection for this? If Armstrong had accidentally canted the lander too many degrees over, would the avionics have cut engine thrust to prevent him flying out of control?
Regarding Mars landings: what I’m hearing here is that manned landings would first require building some very large and expensive crew-scale aeroshell landers, and testing them out unmanned over Mars to validate the design. This would have to be done multiple times in order to feel even tepidly confident that the lander used on the manned mission would work.

I was under the impression that the LEM was still tracking the command module on radar, and could not cope with trying to track the landing site too.

I believe that it was simply not possible to build in any kind of guidance system on Philae because it has no spare capacity. The tiny jet on top that was supposed to stop it from bouncing failed to work so that was that.

That’s both very simple and very hard to do.

It’s easy to identify very specific things. For example, most modern cameras have face detection. They can usually identify whether something is a face. Even so, they fail a good portion of the time and create false positives and errors, as when something vaguely looks like a face or somebody’s face is partly obscured.

But getting a computer to generally identify “things” in a scene is very hard. We’ve been working on that one for 50 years.

And that’s the problem you run into with autonomous machines. It would be relatively easy to give it a “detect 1 meter granite boulder” program. It would be hard to give it a “detect rocks of any size, composition, and reflectivity” program. And that’s not including things like gullies, ditches, water (puddle or river?), etc.

We avoided those problems with the Mars exploration probes. The probes were cocooned in airbag shells which could land even on rocks, not necessarily flat, and still open up with the probe upright. Guaranteeing a flat and gentle landing on smooth ground would have been a much harder engineering exercise.

Also, it’s important to remember that the human brain is one of the most advanced image processors out there. It was developed over millions of years and still takes us years and years as children to learn to use properly. Even so, we often screw up the relatively simple act of seeing things and responding accordingly.

Kind of a different problem. Terrain-mapping software used on drones and cruise missiles generally only cares about how high off the ground you are, not what’s on the ground itself.

And cruise missiles don’t want to land gently. Generally, they want to crash or explode on impact.

What you are describing is linearization of the system response for proportional control; in other words, the system is designed such that the input is scaled in a fashion that makes the response predictable by a human pilot. Conceptually it is no different from car steering which adjusts to make small motions at high speed and large ones at low speed, making it more difficult to accidentally overshoot and put the car into an out-of-control condition. I’m not familiar enough with the PGNCS on the LM to know the limits it had for input, but most control systems will incorporate both a deadband zone to prevent uncontrolled oscillation, and range limits that prevent response outside the defined linear region.
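A sketch of the deadband-plus-shaped-and-limited response described above might look like this. The deadband width, maximum rate, and exponent are invented illustrative values, not the actual Apollo PGNCS scaling:

```python
def stick_to_rate(stick, deadband=0.05, max_rate=20.0, expo=2.0):
    """Map a pilot stick deflection in [-1, 1] to a commanded attitude
    rate in deg/s. All constants are illustrative, not flight values."""
    if abs(stick) < deadband:
        return 0.0                                  # deadband: ignore tremor
    s = (abs(stick) - deadband) / (1.0 - deadband)  # renormalize to [0, 1]
    rate = max_rate * s ** expo                     # gentle near center,
    return rate if stick > 0 else -rate             # full authority at stops

print(stick_to_rate(0.02))   # inside deadband -> 0.0
print(stick_to_rate(0.5))    # moderate deflection, modest rate
print(stick_to_rate(1.0))    # full deflection -> 20.0
```

The squared shaping plays the same role as the speed-sensitive car steering in the analogy: small inputs produce deliberately small responses, while the range limit caps the worst a ham-fisted input can command.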

This would be true for essentially any payload much larger than MSL. The AIAA Journal of Spacecraft and Rockets ran a series of papers on this in its May-June 2014 issue (Volume 51, Number 3), especially looking at the retropropulsion systems that will probably be necessary. Note that this isn’t physically impossible or anything like that; it will just require the development of technologies and techniques that are outside of current experience. The challenge in this case is one of engineering, not of developing any radical new physics; but it is very significant and will take considerable effort and cost.

We tend to think of things we do as “easy” even though we were a billion years of evolution in the making, and we’re still not as fast or precise at repetitive or clearly defined tasks as even a fairly crude robot. On the other hand, we’re painfully delicate, prone to just keeling over and dying at the first exposure to subfreezing or broiling temperatures, a little bit of vacuum, or a caustic atmosphere. The ideal spacefaring astronaut would have all of our strengths, i.e. intellect and tactile manipulation capability, but none of our weaknesses. Hence, remotely operated and semi-autonomous probes will continue to be the most efficient means of space exploration and exploitation for the foreseeable future, or at least until we can start modifying ourselves into less delicate forms.

Stranger

I’m not sure what you mean by that sentence, but that landing was almost a total failure. The harpoons never fired and the lander bounced three times, taking something like two hours to finally come to a rest. Considering the comet has an escape velocity of 1.1 mph, it was sheer luck that it happened to land back on the surface instead of just taking off again.

I should probably put this in the other thread, but the lander has two harpoons, and the fact that neither of them deployed makes me assume they’re both tied together at some point. I’m surprised they didn’t make them totally, 100% independent. That is, separate power sources, separate sensors to tell them when to fire, separate everything. That way, if neither fired, it would be a coincidence and not a common failure.

The problem was probably at the software or sensor level. The system probably did not correctly detect the conditions for triggering the harpoons.

The Curiosity rover didn’t have those things. It could not have been set down on more than a fairly small rock and hope to survive. And yet the speed of light meant that a human could not intervene in time to select a better site. It had to work to very high precision totally autonomously (and succeeded, of course).

Close. The computer was working the radar in both landing and rendezvous mode. That caused the 1201 and 1202 alarms.

Armstrong landed long and had to go to manual because he screwed up undocking from the CM. They were supposed to completely depressurize the docking tunnel between the LM and CM before they separated, but Armstrong thought he’d take advantage of a boost by undocking from a pressurized tunnel. So he started the Powered Descent going a bit faster than he should have been and the onboard computer didn’t know it. And maybe he was being a little bit of a hotshot pilot by not letting the computer do all it could have done.

For Apollo 12, Conrad used the auto system, where you line up crosshairs projected on the window with the point where you want to land, push a button, and then the computer takes the LM there. And he ended up landing exactly where he wanted to be, just a couple of hundred yards away from a Surveyor lander, from which he was supposed to (and did) remove a couple of parts and take them back to Earth.

ref: “A Man On The Moon”, A. Chaikin.

The Red Dragon proposal looks more interesting, IMHO. There’s a somewhat long talk about it here.

Previous Mars missions basically used some form of ballistic reentry coupled with parachutes. However, this does not make optimal use of Mars’ limited atmosphere. Better is to actually use negative lift to dive into the lower atmosphere and bleed off velocity there. Once you get down there, you give yourself just enough lift (using the same technique of an offset center of mass as current capsules do) to stay at that low altitude. Only once you totally run out of lift do you complete your descent on rockets. Fortunately, you are at a low altitude, so you do not lose much to gravity drag.

In the end you can complete your EDL with a high-ballistic-coefficient lander (450-600 kg/m^2) without crazy amounts of delta-V for your landing rockets. However, I suspect the maneuver is tricky enough that it would have to be totally automated regardless of whether a human was onboard.
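The ballistic-coefficient problem can be sanity-checked with the standard terminal-velocity relation v_t = sqrt(2*beta*g/rho). The near-surface density and the sample coefficients below are rough round numbers, not mission figures:

```python
import math

# Terminal velocity on Mars as a function of ballistic coefficient
# beta = m / (Cd * A). Round-number inputs for illustration only.

g_mars = 3.71     # m/s^2
rho = 0.015       # kg/m^3, rough near-surface atmospheric density

vts = {}
for beta in (60, 150, 500):               # kg/m^2
    vts[beta] = math.sqrt(2 * beta * g_mars / rho)
    print(f"beta {beta:3d} kg/m^2 -> terminal velocity ~{vts[beta]:.0f} m/s")
```

With the speed of sound on Mars around 240 m/s, a 450-600 kg/m^2 lander is still supersonic at the ground on drag alone, which is why the descent has to end on rockets.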

Incidentally, although this isn’t entirely germane to the current discussion, people may want to keep an eye on SpaceX’s next cargo mission (“no earlier than” Dec 9). They will again attempt to land their first stage over the ocean–but this time on a 300x170 foot barge. The first stage legs are ~60 feet across, so this means they need a CEP of well under 50 feet.

The first stage goes through hypersonic reentry, then a nearly ballistic descent (though using lifting-body techniques for fine control), and finally a “suicide burn” landing on rockets that can’t throttle anywhere close down to a thrust/weight of 1. They give it a 50/50 shot at success, but if they get even close it will be spectacular.
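This is why the burn has to be timed rather than flown by feel: with minimum thrust above the stage’s weight, the engine must light at the one altitude where velocity reaches zero exactly at the surface. The mass, thrust, and speed below are assumed round numbers, not actual Falcon 9 figures:

```python
# Back-of-envelope "suicide burn" timing. All values are assumed
# round numbers for illustration, not actual Falcon 9 data.

g = 9.81                 # m/s^2
mass = 25_000.0          # kg, near-empty first stage (assumed)
thrust = 800_000.0       # N, one engine near full throttle (assumed)
v = 250.0                # m/s, descent speed at ignition (assumed)

a_net = thrust / mass - g         # net deceleration
h_ignite = v**2 / (2 * a_net)     # from v^2 = 2*a*h
t_burn = v / a_net

print(f"net decel {a_net:.1f} m/s^2; ignite at {h_ignite:.0f} m; "
      f"burn lasts {t_burn:.1f} s")
```

Light a second too late and the stage hits the barge fast; a second too early and it stops above the deck and then falls, since it cannot hover.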

So while it’s not quite the same problem as landing on a comet or Mars, it does illustrate how far along landing systems have come, and to my eyes looks easily as difficult overall (though in different ways) as some of these other scenarios.

I note that Stranger has described the difficulties of landing on Mars in some detail, but the difficulties of landing on a comet are also enormous. The surface is incredibly rough, as you can see from the images of where Philae ended up; it resembles an abandoned slate quarry. As for the local gravity, it is minuscule, but not minuscule enough to ignore altogether - the lander was attracted by the gravity of this irregular object, and hit with enough force to bounce a kilometer off into space - a bounce that took nearly an hour.

Even though the lander has an inertial mass of 100 kilograms, it has an effective weight of about 15 grams, so it is only very weakly attracted to the surface. Firing the harpoons into the surface could simply propel the lander away from the comet, so the lander had a thruster to counteract this recoil. The thruster failed, so they tried to deploy the harpoons without counterthrust. In the event, the harpoons seem to have failed too. All in all, a slightly less-than-optimal landing.

One analogy would be if you were suspended by ropes against a cliff, and trying to obtain a secure foothold by firing a harpoon into the surface. This would be a difficult manoeuvre, even if there wasn’t thirty minutes of light speed delay involved.
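For a rough sense of scale, a comet’s surface gravity can be estimated from its escape velocity via g = v_esc^2 / (2R). The 1 m/s and 2 km inputs below are round numbers for a 67P-sized nucleus; the weight figures quoted in this thread vary because gravity differs across the irregular body:

```python
# Rough comet surface gravity from escape velocity: g = v_esc^2 / (2*R).
# Round-number inputs for a 67P-sized nucleus, for illustration only.

v_esc = 1.0            # m/s, assumed round escape velocity
radius = 2000.0        # m, assumed effective radius
g_comet = v_esc**2 / (2 * radius)

lander_mass = 100.0    # kg, Philae's mass
weight_n = lander_mass * g_comet
grams_force = weight_n / 9.81 * 1000.0

print(f"g ~ {g_comet:.1e} m/s^2; the lander 'weighs' ~{grams_force:.1f} gram-force")
```

Either way the conclusion is the same: the hold-down force available from gravity is tiny compared with any recoil from harpoons or landing-gear rebound.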

From what I’ve seen from the mission reports so far, the harpoon sequence was triggered, and the motor retraction which occurs after the harpoons fire did happen as planned. The harpoons simply never fired. Each harpoon has an independent gas generator to propel it, so either both of those failed or some other possibly common component in the triggering circuits failed. They also had failure of all four of the pyro triggered pins that were supposed to puncture diaphragms to pressurize the gas thrusters, and a failure of the shutter mechanism of the APXS after landing.

RE: “How are landing zones decided anyway for spacecraft?”

Wednesday night, after the landing, the Discovery channel had a special about it.

One section covered how they selected the landing site. There was a tradeoff between the places that were safest to land on and those that would yield the most scientific information. IIRC there were originally 10 different sites, whittled down to 5, and then one was selected. There was a bit of campaigning among the scientists who had developed experiments for the lander, with each wanting a landing site that would best suit his experiment.

Wikipedia, fundamental source of knowledge that it is, cites the harpoon gas-generator material (nitrocellulose) as a candidate failure element. They cite an obscure (to me) Danish aerospace non-profit as having shown in 2013 that nitrocellulose is unreliable in a vacuum.

The wiki claim has a citation footnote that chases to some stuff, including a Youtube video, apparently in Danish.

Still, propellant as common failure mode could make sense.

I dug out my copy of AMotM, and didn’t see any reference to the pressurized docking tunnel. And, in any case, Collins in the CM controlled the procedure.
Plus, the Moon’s uneven gravitational field always affected the orbits and descent profiles.

The landing point designator was also used on Apollo 11.
Apollo 12 had an advantage: a new technique using Doppler shift gave a much better fix on the LM’s location relative to the planned descent path.