On pathfinding algorithms, and hills.

I’ve been working on a project to find optimum bike routes around hills, and I’d like some advice on what sort of pathfinding algorithm I should be using. I’ve pretty much got my data in place (I’m also hoping to eventually start using bike lane data as well), and I need to get an algorithm going. Unfortunately, I’ve had fairly limited formal education in programming, and I know pretty much nothing about pathfinding algorithms. I’ve done some poking around online, and I’ve come across recommendations of Dijkstra’s algorithm and A*, but I’ve run up against my lack of knowledge of what makes a good algorithm.

So, what I’ve got so far is that Dijkstra’s finds optimum solutions, but may be slower than A*. From the wiki, my understanding is that it’s starting with lowest-cost segments, and working its way out until it hits the target. So, it seems like it’s looking in all directions at the same time, which seems a little inefficient to me. Am I wrong about how it works? Or is that inefficiency a result of it finding optimum solutions, that is, it’s considering all possible cases?

Given that I’m working with a fairly regular system of lines, it seems like I ought to be able to do some optimization to find a good route quickly (i.e., don’t start off by going backwards), but I don’t really know where to start. So, can anyone suggest some algorithms I might want to use?

Perhaps, but you need to specify your problem in much more detail.

What format is your hill data in? What constitutes an “optimum” path (shortest, or flattest, or what)? Are you restricted to bike paths, or can you take an arbitrary path through the hills? Is human interaction allowed, or should the program run completely automatically? Do you want a global optimum, or is a local one good enough?

Why do you care about efficiency?
Even the cheapest computer available today has more processing power than a Cray 1.
Are you really concerned that your program is going to take .01 seconds to run vs. .001 seconds?

Generally I agree, but depending on the number of routes and the number of segments per route, it could become a problem. (I recently did this with picking routes in a distribution center, and even an efficient algorithm would run for a loooong time; but I would guess the number of routes and segments in that case was probably greater than what the OP is working on.)

This is the textbook[sup]1[/sup] example of the wrong way to think about algorithms. In this case you can get away with it, but such thinking is not a good habit to get into.

[sup]1[/sup]Cormen et al., Introduction to Algorithms.

It’s very important to think about efficiency when considering routing algorithms to be used on anything more than a small network. Because the real world is so vast and complicated, for a long route (e.g. transcontinental) if you’re not careful a bad algorithm can easily take an hour to run, vs. a millisecond for a good one.

But if you are only considering paths within a small neighborhood, it’s not so important.

I implement these algorithms for a living.

Specifications:

I’m finding bike routes through city streets. I’d like to weight for bike lanes, but not restrict to them. Same for elevation - weight for avoiding unnecessary hills (my height data is in the form of elevation for each node). I was planning on playing around with my weighting somewhat, or possibly having a shortest-path vs. least-hills slider. Or drop-down, like how you can sometimes choose shortest-walking-distance or shortest-time for public transit routefinders (at least 511 does that). I’ll definitely need to play with that part.
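To make that slider idea concrete, here’s a rough sketch of how a tunable edge weight might look (hypothetical Python, not Flash; the parameter names `flatness`, `climb_penalty`, and `lane_discount` are all made up for illustration). The idea is just to fold distance, climbing, and bike-lane preference into a single number the pathfinder minimizes:

```python
def edge_cost(length_m, elev_from_m, elev_to_m, flatness=0.5,
              climb_penalty=10.0, bike_lane=False, lane_discount=0.7):
    """Blend distance, climbing, and bike-lane preference into one weight.

    `flatness` plays the role of the shortest-path vs. least-hills slider:
    0.0 means pure distance, 1.0 means heavily penalize climbing.
    """
    climb = max(0.0, elev_to_m - elev_from_m)   # only uphill costs extra
    cost = length_m + flatness * climb_penalty * climb
    if bike_lane:
        cost *= lane_discount                   # prefer, but don't require, lanes
    return cost

# A flat 100 m segment costs just its length...
print(edge_cost(100.0, 10.0, 10.0, flatness=1.0))   # 100.0
# ...while a 5 m climb adds a penalty scaled by the slider.
print(edge_cost(100.0, 10.0, 15.0, flatness=1.0))   # 150.0
print(edge_cost(100.0, 10.0, 15.0, flatness=0.0))   # 100.0
```

The nice part is the pathfinding algorithm itself never has to change when you tweak the weighting; you just feed it different edge costs and re-run.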

Basically, I still have a lot of tweaking that I’d like to do, but I work best when I have something to play with and see results, so I’d like to at least get the algorithm working in order to have something to see.

Oh, and I’m concerned about efficiency because, well, I’m working in Flash, and it ain’t exactly optimized for speed, ya’ know? Also, because the last vaguely complicated algorithm I worked with was John Conway’s game of life, and it killed my computer. Well, until I discovered the joys of convolution filters. But I’m assuming I can’t use a convolution filter for pathfinding.

Seems like A* would be a good fit. You can work all your preferences for hills and bike lanes into the heuristic. It’s also straightforward to implement (I did it in a couple of hours in Java).
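For what it’s worth, the whole thing fits in a page or so. This is a generic sketch (Python rather than Java or Flash, and the graph representation is hypothetical): you supply a `neighbors` function, an edge `cost` function, and a `heuristic`; as long as the heuristic never overestimates the true remaining cost, A* still returns the optimal route:

```python
import heapq

def a_star(neighbors, cost, heuristic, start, goal):
    """Generic A*. `neighbors(n)` yields adjacent nodes, `cost(a, b)` is the
    edge weight, and `heuristic(n)` estimates remaining cost to the goal."""
    # Heap entries are (estimated total, cost so far, node, parent).
    frontier = [(heuristic(start), 0.0, start, None)]
    came_from, best_g = {}, {start: 0.0}
    while frontier:
        _, g, node, parent = heapq.heappop(frontier)
        if node in came_from:
            continue                      # already expanded via a cheaper path
        came_from[node] = parent
        if node == goal:                  # reconstruct the route
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return list(reversed(path))
        for nxt in neighbors(node):
            new_g = g + cost(node, nxt)
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + heuristic(nxt), new_g, nxt, node))
    return None                           # no route exists

# Toy example: nodes 0..5 in a line; heuristic = distance remaining to goal.
path = a_star(lambda n: [n - 1, n + 1] if 0 < n < 5 else ([1] if n == 0 else [4]),
              lambda a, b: 1.0, lambda n: abs(5 - n), 0, 5)
print(path)   # [0, 1, 2, 3, 4, 5]
```

Notice that if you make `heuristic` return 0 everywhere, this degenerates into plain Dijkstra, which is handy for testing.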

It depends on whether you are an academic or a professional.
I’m all for efficient algorithms - I write code for very small devices, and efficiency is important. But it’s often a waste of time to thrash around trying to optimize code that already runs very fast, especially if it’s only going to be called infrequently.

As a former professional and a current academic, I don’t agree with this at all.

This I do agree with, and I expect we also both agree that ease of coding is something developers should consider when selecting an algorithm. I thought you were saying something along the lines of “who cares about the algorithm, let’s just throw more CPU at it,” and that’s what I (very strongly) disagree with.

If your problem can be expressed as a Single Source Shortest Paths problem, then go with Dijkstra’s (actually Moore’s) algorithm. You can find implementations all over the Net, which will save you time and debugging headaches; there is absolutely no better choice for a DIYer. Inventing something yourself is going to take an immensely greater amount of time, with no assurance that the code will be correct.

Note that it isn’t really a single algorithm. It implicitly uses a heap as a data structure, and by using a fancy implementation of the heap, the running time can be quite low. But since even the standard version is quadratic, that would almost certainly be fast enough for your purposes. There is no likely reason to consider anything else.
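To illustrate, here’s a minimal heap-based version in Python (the adjacency-list graph format is just one common convention, not something specific to the OP’s data). The heap is the “fancy” part: it picks the next-cheapest node in logarithmic time instead of scanning every node the way the quadratic version does:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths. `graph` maps each node to a list of
    (neighbor, weight) pairs; returns a dict of shortest distances."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale entry; a shorter path won
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return dist

# Tiny example graph; the weights could be blended hill/lane costs.
g = {"a": [("b", 1.0), ("c", 4.0)],
     "b": [("c", 2.0)],
     "c": []}
print(dijkstra(g, "a"))   # {'a': 0.0, 'b': 1.0, 'c': 3.0}
```

Note the a-to-c result: the direct edge costs 4, but the route through b costs 3, and the algorithm finds the cheaper one.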

You might want to read up on Dynamic Programming while you are at it.

I have no dog in this fight, but I’ll throw another one in.

There’s something to be said for a “simple” routine, one where you REALLY understand how it works and can tell when it’s working, over a fancier kind of routine.