What’s the difference? By the 2030s they’ll be one and the same anyway.
NASA discovered that Mark was alive around Sol 54 if I’m remembering correctly, and the rest of the crew had left at around Sol 20. So it’s true that in terms of distance, the crew wasn’t super far away.
But even if they could just turn the ship around and head back to Mars (which I don’t think is easy, or maybe even possible, in terms of spaceflight), they didn’t have a way to pick up Mark. I’m pretty sure they didn’t have any capsules or landers they could send down to the surface. The vehicle Mark used to get into orbit was 8000 km away, much further than anyone had ever traveled in a rover; he mentions in the movie that the rover can only go about 32 km before needing a recharge. And at the time they had no way to communicate with Mark. It was only after he went and got Pathfinder that NASA and the crew were able to talk to him.
Unfortunately I don’t have the book at hand but I’ll do a little plausibility calc using the numbers I do have.
I know that the engine was supposed to give 2 mm/s[sup]2[/sup] of acceleration. That’s high compared to typical ion thrusters but not obviously crazy.
The first question is how much delta-V the ship has available. A typical Mars transfer needs something in the ballpark of 4300 m/s, but continuous-thrust transfers always need more, which I’ll ballpark at roughly 5 km/s. However, the Hermes obviously had plenty of extra delta-V available; enough to return from Mars orbit and do another swingby. They didn’t stop at Mars on the second pass, but they still needed something for the modified orbit. Let’s go with a total of 10 km/s (again, very rough ballparking).
With a 5000 s Isp ion thruster, the Hermes would need a 20% propellant mass fraction for 10 km/s delta-V. Seems pretty reasonable given the way it looked. There were some rather large tanks at the rear, but still just a fraction of the total.
By my math, to accelerate 1 kg at 0.002 m/s[sup]2[/sup] requires a mass flow of 4.1x10[sup]-8[/sup] kg/s. At a 49 km/s exit velocity, that requires 49 watts of input power.
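If anyone wants to poke at the arithmetic, here it is in a few lines of Python. Every input is one of my ballpark guesses from above, not a number from the book:

[code]
# Back-of-envelope check of the ion-drive numbers above (all inputs are guesses).
import math

g0 = 9.81         # m/s^2, standard gravity
isp = 5000.0      # s, assumed specific impulse
dv = 10_000.0     # m/s, guessed total delta-V budget
accel = 0.002     # m/s^2, the stated Hermes acceleration

ve = isp * g0                                   # exhaust velocity, ~49 km/s
prop_frac = 1 - math.exp(-dv / ve)              # rocket equation -> ~18-20%
mdot_per_kg = accel / ve                        # propellant flow per kg of ship
jet_power_per_kg = 0.5 * mdot_per_kg * ve**2    # ~49 W of jet power per kg of ship

print(f"exhaust velocity:          {ve/1000:.1f} km/s")
print(f"propellant mass fraction:  {prop_frac:.2f}")
print(f"mass flow per kg of ship:  {mdot_per_kg:.1e} kg/s")
print(f"jet power per kg of ship:  {jet_power_per_kg:.0f} W")
[/code]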
If we can dedicate 25% of the ship mass to the reactor and the electrical efficiency is 50%, then we need about 400 watts/kg of reactor performance.
Looking around a bit, I see this paper, which seems (after a very brief skim) to indicate that 100 W/kg might be achievable. So 400 W/kg seems high, though perhaps more advanced technologies (gas cooled, etc.) would improve matters. If you could dedicate 50% of the ship mass to the reactor and get somewhat better than 50% efficiency, then you only need 150 W/kg performance. Still high but better.
However, better than spending more ship mass on the reactor is to turn down your exhaust velocity: the energy you put into each kilogram of propellant goes up with the square of exhaust velocity, and the power needed for a given thrust goes up linearly with it. If we dedicate 40% of ship mass to propellant, we can turn the thrusters down to an Isp of 2000 s (either at the design stage or with a variable-Isp engine like VASIMR) and still get our 10 km/s. That cuts the required reactor power by a factor of about 2.5, down to roughly 160 W/kg in the 25%-reactor case, which makes the reactor far more plausible.
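Same deal for the reactor sizing, using the 25% reactor mass fraction and 50% conversion efficiency I assumed above; the two Isp settings show the trade:

[code]
# Reactor sizing at two assumed Isp settings (same guessed mass fractions as above).
import math

g0, accel = 9.81, 0.002    # m/s^2
eff = 0.5                  # assumed reactor-to-thruster electrical efficiency
reactor_frac = 0.25        # assumed fraction of ship mass devoted to the reactor

for isp, prop_frac in ((5000.0, 0.20), (2000.0, 0.40)):
    ve = isp * g0
    dv = ve * math.log(1 / (1 - prop_frac))     # delta-V from the rocket equation
    jet_power_per_kg = 0.5 * accel * ve         # W per kg of ship (P = F*ve/2)
    specific_power = jet_power_per_kg / eff / reactor_frac
    print(f"Isp {isp:.0f} s: delta-V {dv/1000:.1f} km/s, "
          f"need ~{specific_power:.0f} W per kg of reactor")
[/code]

Both cases come out at roughly the 10 km/s budget; the low-Isp case just needs a much smaller reactor per kilogram.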
All in all it looks ok. Optimistic, sure, but not fantastic.
I did laugh at that, though in this case it’s more like “firing solution found”, after a search that might look at millions of possible trajectories before finding one that meets the constraints. Still, my code is more liable to print a single “done.” at the console than to pop up a fancy graphical window.
Maybe in 2035, when every client OS is something akin to Android and has easy-to-use GUI libraries baked in, it’s just as easy to pop up a fancy window as it is to print a simple line of text. Or it’s just sci-fi.
Seriously, conceptually, showing a window is not much more difficult than printing a line of text (the information you need to feed the library stack is no more complex, even though the actual code to draw a window is far more complex). The issue is that graphical tools like Visual Basic have dogshit programming languages behind them, while gcc doesn’t have an easy-to-use GUI editor built in that lets you draw up a GUI in seconds and hook it to events in your code.
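To make that concrete, here’s a toy Python comparison (tkinter only as an example library; no claim that this is what NASA’s tools use). The window version hands the library stack barely more information than the print version does:

[code]
# Console version: one line of information out.
print("FIRING SOLUTION FOUND")

# Window version: the same payload of information, handed to a GUI library
# that does all of the hard drawing work behind the scenes.
import tkinter as tk
from tkinter import messagebox

root = tk.Tk()
root.withdraw()   # no main window, just the popup
messagebox.showinfo("Trajectory search", "FIRING SOLUTION FOUND")
root.destroy()
[/code]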
Yeah, the real silliness there was that Purnell was doing all that while physically sitting in the middle of the server farm.
Well, that, and he was aiming all of that processing power at the wrong target. It’s really easy for a computer to take a rough orbital plan and polish it up for maximum efficiency, speed, economy, or whatever, or to verify that it can meet some set of constraints. Purnell’s laptop by itself could do that easily. Heck, those tablets the astronauts had on their wrists could do it, if they had the right app (and why wouldn’t they?). The really hard part, which might justify the use of the massive data center, is in coming up with the basic plan to begin with… but that all came out of Purnell’s own steely-eyed head.
Huh. Coulda sworn I saw an EU flag.
Not strictly necessary, but I can’t say that I’ve never done something along those lines. Usually to debug something that I need to see in person for whatever reason. I can imagine that Rich knows that he’ll be stuck at the back of the job queue if he submits his job remotely, and being so junior has no way to bump the priority. But if he shows up and plugs in locally, he can disconnect the network between the farm and the rest of the facility, guaranteeing that his jobs will be executed first :).
There’s sort of a middle ground in this particular case. Although it’s straightforward to compute the behavior of a continuous-thrust engine given a set of commands, the space of possible commands is immense. This is in contrast to impulse-thrust engines (chemical rockets) where there is some small finite number of thrust events, and they almost all occur at a periapsis or apoapsis.
So Rich might have come up with the basic swingby maneuver, using experience and intuition to predict that it’s possible given the constraints, but still only have a vague idea of the engine commands needed to achieve that goal. That’s where the heavy work comes in, to try zillions of possibilities (running an optimizer on each one) to find the best.
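As a toy illustration of why that blows up, here’s a crude sketch; it is nothing like the real tools (Euler integration, a made-up steering parameterization, and random search instead of a proper optimizer), but even a coarse 8-segment thrust schedule leaves an enormous space of candidates to grind through:

[code]
# Toy sketch: random search over piecewise-constant thrust directions for a
# 2D point mass around the Sun, scoring how close each candidate ends up to
# Mars' orbital radius. Purely illustrative; units and dynamics are crude.
import math, random

MU_SUN = 1.327e20          # m^3/s^2
AU = 1.496e11              # m
ACCEL = 0.002              # m/s^2 thrust acceleration (the stated Hermes value)
SEGMENTS = 8               # piecewise-constant steering segments
TOF = 200 * 86400.0        # 200-day time of flight (a guess)
DT = 6 * 3600.0            # 6-hour integration step
R_TARGET = 1.52 * AU       # score against Mars' mean orbital radius

def miss(angles):
    # Start in a circular 1-AU orbit; thrust along velocity rotated by the segment angle.
    x, y = AU, 0.0
    vx, vy = 0.0, math.sqrt(MU_SUN / AU)
    t, seg_len = 0.0, TOF / SEGMENTS
    while t < TOF:
        r = math.hypot(x, y)
        ax, ay = -MU_SUN * x / r**3, -MU_SUN * y / r**3     # point-mass gravity
        v = math.hypot(vx, vy)
        a = angles[min(int(t // seg_len), SEGMENTS - 1)]
        ca, sa = math.cos(a), math.sin(a)
        ax += ACCEL * (vx * ca - vy * sa) / v
        ay += ACCEL * (vx * sa + vy * ca) / v
        x, y, vx, vy = x + vx * DT, y + vy * DT, vx + ax * DT, vy + ay * DT
        t += DT
    return abs(math.hypot(x, y) - R_TARGET)

best, best_angles = float("inf"), None
for _ in range(2000):      # "zillions", scaled way down
    cand = [random.uniform(-math.pi, math.pi) for _ in range(SEGMENTS)]
    score = miss(cand)
    if score < best:
        best, best_angles = score, cand

print(f"best miss distance after 2000 candidates: {best/1000:.0f} km")
[/code]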
That said, Weir did his calcs on a ~2010 era PC, so technically we have proof that the server farm wasn’t necessary :).
I’m a flag guy and always tend to notice them. Never saw an EU flag in the movie myself.
Andy Weir says he wrote a program to calculate the course of Hermes, including the various rates of acceleration; he assumed the ship used a VASIMR drive, which requires a very efficient nuclear reactor on-board, but is otherwise reasonable.
You would think that NASA would have something a lot more detailed than simulating a point under acceleration traveling through space for the Hermes. The simulator would be a full up discrete model of the whole spacecraft, modeling the fuel flows, radiation and temperature fluxes, component failures, with the data the simulation uses coming from information extracted from actual telemetry and maintenance records for the Hermes.
If you think about it, to find out if it’s even possible, you need to check against a model of the ship breaking and factor in occasional engine failures, etc, during the flight. A model like this would be extremely memory hungry and there are many possible permutations to try. Not to mention, the model would do N-body gravity the way NASA does it, where each planet isn’t just a point mass, it’s modeled by an equation that determines the gravitational field at a particular position in orbit around the planet.
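For a flavor of what “not just a point mass” means, here’s a minimal sketch comparing point-mass gravity to the first correction term (J2, for the planet’s oblateness) of a spherical-harmonic field model. Real mission work uses models with far more terms than this:

[code]
# Gravitational acceleration near Mars: point mass vs. point mass plus the J2
# oblateness term, the leading correction in a spherical-harmonic gravity model.
import math

MU = 4.2828e13       # m^3/s^2, Mars GM
R_REF = 3396.2e3     # m, Mars equatorial reference radius
J2 = 1.96e-3         # Mars J2 (approximate)

def accel_point_mass(x, y, z):
    r = math.sqrt(x*x + y*y + z*z)
    k = -MU / r**3
    return (k*x, k*y, k*z)

def accel_with_j2(x, y, z):
    r = math.sqrt(x*x + y*y + z*z)
    ax, ay, az = accel_point_mass(x, y, z)
    f = 1.5 * J2 * MU * R_REF**2 / r**5     # standard J2 perturbation (z = spin axis)
    zr2 = (z / r)**2
    ax += f * x * (5*zr2 - 1)
    ay += f * y * (5*zr2 - 1)
    az += f * z * (5*zr2 - 3)
    return (ax, ay, az)

# Compare the two models at a point a few hundred km up, off the equatorial plane.
p = (R_REF + 400e3, 0.0, 1000e3)
a0, a1 = accel_point_mass(*p), accel_with_j2(*p)
delta = math.sqrt(sum((u - v)**2 for u, v in zip(a0, a1)))
print(f"point-mass acceleration: {math.sqrt(sum(c*c for c in a0)):.4f} m/s^2")
print(f"J2 correction:           {delta:.2e} m/s^2")
[/code]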
He might be executed for that.
Virtually all modern large clusters are run by a control (head) node, or even a series of head nodes, and it isn’t possible to just set up jobs to run locally without entering the job into the queueing system unless you literally detach the child nodes.
The analyses implicitly referenced here are separate simulations using different tools; there is no global unitary model to simulate all spacecraft systems and performance simultaneously. The specific models referred to are engine balance and propellant line pressure/flow models (“fuel flows”), thermal radiation and internal heating models (“radiation and temperature fluxes”), functional reliability analysis and electrical/hydraulic circuit simulation (“component failures”), flight software testing and hardware-in-the-loop testing (“data the simulation uses coming from … actual telemetry”), structural modal response and structure/joint margins (“check against a model of the ship breaking”), control bending modes, control authority, flight anomaly recovery (“factor in occasional engine failures”), and swing-by trajectory simulation.
Although there are interactions between many of these phenomena, most of the analyses, such as control bending modes or thermal radiation models, are just run for the most stressing conditions, and results feed back into other analyses to provide bounding cases that ensure control and structural/thermal margins aren’t exceeded. The vast majority of trajectory analysis is done assuming point masses for planets with high-precision ephemerides (you can access them yourself [URL="http://ssd.jpl.nasa.gov/horizons.cgi"]directly from JPL[/URL]) and a [URL="http://geodesy.curtin.edu.au/research/models/mgm2011/"]gravity reference model[/URL] to account for local deviations from mass concentrations when calculating the orbital insertion or swing-by maneuver.
Those are detailed trajectory analyses for particular target box or intercept solutions, though; the work needed to calculate a return trajectory for the Hermes sufficient for planning purposes could be done on a mid-quality laptop computer.
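If anyone wants to pull those planetary ephemerides and play along at home, the astroquery Python package wraps the Horizons service linked above. A minimal sketch, assuming astroquery is installed; ‘499’ is the Horizons ID for Mars and ‘500@0’ centers the output on the solar system barycenter:

[code]
# Minimal sketch: fetch Mars state vectors from JPL Horizons via astroquery.
from astroquery.jplhorizons import Horizons

mars = Horizons(id='499', location='500@0',
                epochs={'start': '2035-01-01', 'stop': '2035-02-01', 'step': '1d'})
print(mars.vectors())   # astropy Table of position/velocity columns over the span
[/code]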
Stranger
How much would the last-minute course adjustments needed to grab Watney have affected the calculations? Is a difference of ~70 km and 30 m/s trivial, or would Purnell be stuck back down in the server farm re-running everything?
Not a computer scientist at all, but I interpreted that scene as Purnell basically hiding out from interruptions while he tinkered with the problem, because he wasn’t yet ready to tell others about it. “You do know I’m your boss, don’t you?”
You wouldn’t rerun a simulation for last-minute adjustments like this. Instead, as part of the guidance calculations you’d create a linearized parametric model that can solve for local intercept parameters, and frankly, you wouldn’t even need a person in the loop except as a contingency. Circa 1960 a human pilot was the best guidance system available to perform a rendezvous, but with modern guidance systems there is no need for human intervention.
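To give a feel for what a linearized model looks like, here’s a small sketch using the Clohessy-Wiltshire equations, the textbook linearization for rendezvous near a circular reference orbit. The Hermes intercept is a hyperbolic flyby, not a circular-orbit rendezvous, and every number below is made up, so treat this as the shape of the math rather than the actual guidance law:

[code]
# Illustrative Clohessy-Wiltshire targeting: given a relative position/velocity
# and a time-to-go, solve the linearized equations for the small burn that puts
# the chaser at the target. All numbers are invented for the example.
import math
import numpy as np

def cw_blocks(n, t):
    """Position rows of the Clohessy-Wiltshire state-transition matrix."""
    c, s = math.cos(n * t), math.sin(n * t)
    phi_rr = np.array([[4 - 3*c,      0.0, 0.0],
                       [6*(s - n*t),  1.0, 0.0],
                       [0.0,          0.0, c  ]])
    phi_rv = np.array([[s/n,           2*(1 - c)/n,     0.0],
                       [-2*(1 - c)/n,  (4*s - 3*n*t)/n, 0.0],
                       [0.0,           0.0,             s/n]])
    return phi_rr, phi_rv

n = 2 * math.pi / (2 * 3600.0)      # mean motion of an assumed 2-hour reference orbit
r0 = np.array([50e3, 50e3, 0.0])    # m: ~70 km relative offset
v0 = np.array([0.0, -30.0, 0.0])    # m/s: ~30 m/s relative velocity error
t_go = 1800.0                       # s until the planned intercept

phi_rr, phi_rv = cw_blocks(n, t_go)
v_req = np.linalg.solve(phi_rv, -phi_rr @ r0)   # relative velocity that reaches r=0 at t_go
dv = v_req - v0
print(f"required correction burn: {np.linalg.norm(dv):.1f} m/s")
[/code]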
That’s what Salt Lounge is for.
Stranger
I assume that that’s some recreational facility at JPL, and not a movie theater in Arizona, as Google would suggest?
It’s a place down on Miller’s Alley in Old Town. It seems to have replaced the 35’er as the local JPL hangout, at least among the people I know. I still miss Barney’s… we did a lot of proposal work there.
Stranger
Liked the movie. One issue/question.
Why would they drop the ascent vehicle for Ares 4 a year-plus early, as part of the Ares 3 mission? The movie opens with the threat of the ascent vehicle being knocked over and wrecked by a storm, so keeping one on the surface for that long is risky. It seems implausible that they’d deliver it years ahead as part of the previous mission rather than as part of the supply package leading up to Ares 4.
Mainly it’s a plot thing, but you launch for Mars when the planets are in the best position relative to each other (or will be when you get there), and NASA wants everything ready and waiting for the next astronauts so they can return.
The story also involved the MAVs creating their own fuel from local elements, to save lift mass. That takes time, so the MAV had to be prepositioned. The time lag also allowed the crew launch to be aborted if something went really wrong there. The possibility of a storm strong enough to knock it over would not have been a consideration, because in reality that could not happen (the Martian atmosphere is far too thin).
NASA’s newly-announced Mars plan involves doing pretty much that.