# Is there an algorithm to generate all possible shortest paths between two points?

I’m writing a graph package, and for testing shortest path algorithms, I thought it may be useful to have an algorithm that generates all possible shortest paths between two vertices in a weighted, directed graph. That way I would only have to hand-solve toy cases to test this algorithm, then I could use it to generate sets of possible answers for all the other cases. Multiple optimal paths frequently exist, and most shortest path algorithms find only one; which one they find depends on the successor-function ordering, the priority-queue sorting, the phase of the moon, the current disposition of Fenris, and all sorts of other hard-to-control stuff.

It looks like Matlab may have a function in a standard library that does this, which of course doesn’t have any open source code I can look at, but I haven’t found any standard algorithms or pseudocode. I could probably devise one (probably by modifying Uniform Cost Search), but there’s no point reinventing the wheel if I don’t have to.

I understand that the running time for such an algorithm is almost necessarily going to be bonkers, but it’s not a big deal since it probably won’t be used on very large graphs, and is unlikely to be used except in testing, which doesn’t have to be blazing fast. At worst, I can run it once, save the output to disk, and then compare against that instead of running it every time. (I mean, in cases that take a while but not “multiple months to find the solution” a while).

Well, I suppose for starters you could make a step in the right direction by just leaving Fenris out of your algorithm?

You’re writing this all yourself, from scratch? So you have, like, full, complete, total, unobstructed control over your own algorithm?

Typically, as I understand it, you would search all possible paths, possibly with some pruning algorithm, and each time you find your end point, you would:
[ul]
[li]Look at the distance of that path;[/li]
[li]Compare that distance with the best distance you found and saved so far;[/li]
[li]If the new path is better than the best saved so far, your new path replaces it as the best candidate.[/li]
[/ul]

One obvious pruning rule would be: each time a partial path’s distance exceeds that of your best candidate so far, abandon any further exploration of that path.

Now, if you want to find ALL optimal paths (for whatever your definition of an optimal path), this means you have to search ALL paths and have a mechanism for saving a list of paths. Each time you find a path whose distance equals your best candidate’s, just add it to your list of optimal paths. (As opposed to some kinds of search algorithms, where you are only looking for any one solution, whereupon you can stop searching as soon as you find one. I wrote a program once for solving Skeleton Puzzles (a.k.a. Kriss-Kross puzzles) this way.)

As I’m seeing your problem statement, the only thing you need to do is to be able to record a whole bunch of paths (namely, all the optimal ones you’ve found so far), rather than only recording one BEST-so-far candidate path.
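In Python, a minimal sketch of that bookkeeping might look like this (the graph representation and all names are just illustrative, not from any particular library): a depth-first search over all simple paths, pruning any partial path already costlier than the best complete one, and keeping a list of every path that ties the best cost.

```python
import math

def all_shortest_paths_bruteforce(graph, start, goal):
    # graph: dict mapping node -> {successor: edge weight}
    # Assumes nonnegative edge weights.
    best = math.inf     # cost of the best complete path found so far
    optimal = []        # every path whose cost equals best

    def dfs(node, path, cost):
        nonlocal best
        if cost > best:             # pruning: already worse than best
            return
        if node == goal:
            if cost < best:         # strictly better: restart the list
                best = cost
                optimal.clear()
            optimal.append(list(path))
            return
        for succ, weight in graph.get(node, {}).items():
            if succ in path:        # skip cycles; they never help here
                continue
            path.append(succ)
            dfs(succ, path, cost + weight)
            path.pop()

    dfs(start, [start], 0)
    return best, optimal
```

On a diamond-shaped graph such as `{'a': {'b': 1, 'c': 1}, 'b': {'d': 1}, 'c': {'d': 1}}`, this returns cost 2 together with both tied paths from `a` to `d`.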

Somehow, that advice sounds too simple. I don’t know a whole lot about these kinds of algorithms (although I’ve done some years ago), so there’s some serious possibility that I’m really not understanding the problem.

WWFD? (What would Fenris do?)

(Bold added.) Of course, all those kinds of factors make the algorithm non-deterministic. Those are a bitch to test and debug, or to build a suite of test cases for. GFL with that!

I managed to write something on top of Floyd-Warshall.

It computes shortest-path costs between all pairs of nodes, and can be modified to keep track of the shortest paths. I altered this further.

The algorithm to keep track of the shortest path seems to be: if some new successor is cheaper, supersede the currently known successor with the new one. I altered this to: if a new successor is roughly (“roughly” because costs are floats) the same price as the known cost, add it to a list. If a new successor is cheaper, supersede the whole list and start a new one containing only that node.

Then it was a matter of altering the path reconstruction function they mention. The gist of that algorithm seemed to be: if we know two points i and j, and we know some intermediate point between them (from above), reconstruct the path between i and the intermediate, and between the intermediate and j, and join them. Do this recursively.

I altered this to say that for EACH intermediate, there’s a set of shortest paths from i to that intermediate, and a set of shortest paths from that intermediate to j. (Recursively) generate all of these paths, then join each predecessor path to each successor path at the junction. Do this for each intermediate.
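A sketch of that idea in Python, with one simplification: rather than maintaining the tie lists during relaxation (which is easy to get subtly wrong given Floyd-Warshall’s update order), this version runs plain Floyd-Warshall first and then reads the tied intermediates off the final distance matrix, since k lies on some shortest i→j path exactly when dist(i,k) + dist(k,j) equals dist(i,j) within a float tolerance. All names are illustrative; positive edge weights are assumed.

```python
import math
from functools import lru_cache

def all_shortest_paths_fw(nodes, edges, tol=1e-9):
    # edges: dict mapping (i, j) -> weight; assumes positive weights
    dist = {(i, j): (0.0 if i == j else edges.get((i, j), math.inf))
            for i in nodes for j in nodes}
    for k in nodes:                         # plain Floyd-Warshall
        for i in nodes:
            for j in nodes:
                d = dist[i, k] + dist[k, j]
                if d < dist[i, j]:
                    dist[i, j] = d

    @lru_cache(maxsize=None)
    def paths(i, j):
        found = set()
        if i == j:
            return frozenset({(i,)})
        if abs(edges.get((i, j), math.inf) - dist[i, j]) <= tol:
            found.add((i, j))               # the direct edge is itself optimal
        for k in nodes:
            if k in (i, j):
                continue
            if abs(dist[i, k] + dist[k, j] - dist[i, j]) <= tol:
                # k is a junction: join every shortest i->k path to
                # every shortest k->j path (the set removes duplicates)
                for p in paths(i, k):
                    for q in paths(k, j):
                        found.add(p + q[1:])
        return frozenset(found)

    return dist, paths
```

The set-based join matters: a path with several internal vertices is generated once per choice of junction, and the set collapses those duplicates.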

I think this falls under the umbrella of algorithms that “aren’t fast, but are about as fast as can be reasonably expected.”

It is straightforward to modify an algorithm like Floyd-Warshall to associate with every (i,j) the length l(i,j) of the minimum path between the pair. (You don’t even need every j, just your goal vertex).

Once you have that, from the start vertex it is clear which connections do and which do not lead to a tied-for-minimal path. You can print them all with a simple recursion.
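A sketch of that recursion in Python (names illustrative): once the minimum distance l(v) from every vertex to the goal is known (computed here with a simple Bellman-Ford-style relaxation, though any shortest-path algorithm would do), an edge (u, v) leads to a tied-for-minimal path exactly when w(u, v) + l(v) equals l(u), and a plain recursion from the start enumerates every such path.

```python
import math

def dist_to_goal(nodes, edges, goal):
    # edges: dict mapping (u, v) -> weight; nonnegative weights assumed
    l = {v: math.inf for v in nodes}
    l[goal] = 0.0
    for _ in range(len(nodes) - 1):        # Bellman-Ford-style relaxation
        for (u, v), w in edges.items():
            if w + l[v] < l[u]:
                l[u] = w + l[v]
    return l

def all_tied_paths(nodes, edges, start, goal):
    l = dist_to_goal(nodes, edges, goal)
    found = []

    def recurse(u, path):
        if u == goal:
            found.append(path)
            return
        for (a, v), w in edges.items():
            # This edge is on a tied-for-minimal path iff taking it
            # leaves the remaining distance exactly on budget.
            if a == u and math.isclose(w + l[v], l[u]):
                recurse(v, path + [v])

    if not math.isinf(l[start]):           # goal unreachable: nothing to do
        recurse(start, [start])
    return found
```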

Dijkstra’s shortest path first algorithm, slightly extended to keep all equal-cost shortest paths. This is what internet routers use (OSPF or IS-IS) for intradomain routing. Frequently, up to 8 equal shortest paths are kept, to allow “equal cost multi-path” forwarding. I used to be a bit of an expert on this, but it’s been a while.
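A sketch of that extension in Python, assuming positive edge weights and illustrative names: ordinary Dijkstra with a binary heap, except that every predecessor achieving the current best distance is kept (the list could be capped at 8 entries to mimic ECMP routers), after which the predecessor DAG is unwound into explicit paths.

```python
import heapq
import math

def dijkstra_all(graph, start):
    # graph: dict mapping u -> {v: weight}; assumes positive weights
    dist = {start: 0.0}
    preds = {start: []}       # all equal-cost predecessors of each node
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue                      # stale heap entry, skip
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd              # strictly better: restart list
                preds[v] = [u]
                heapq.heappush(heap, (nd, v))
            elif nd == dist[v]:
                preds[v].append(u)        # tie: one more equal-cost predecessor
    return dist, preds

def unwind(preds, start, goal):
    # Expand the predecessor DAG into explicit start->goal paths.
    if goal == start:
        return [[start]]
    return [p + [goal] for u in preds.get(goal, []) for p in unwind(preds, start, u)]
```

(An exact `==` tie test is used here, as a router would; with float costs you would want a tolerance, as noted upthread.)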

It runs O(N log N), but can be optimized to nearly O(N) if you constrain the total distance to a small enough number to keep a table of all possible distances. The IS-IS specification covers (and unfortunately, IIRC, requires) that optimization. Unfortunate because the limit on total path cost isn’t ideal. Look for something like “MAX_PATH” in the IS-IS spec, if you can find a copy. The official standard is an ISO doc, which unfortunately isn’t free, but there was an IETF RFC version that was publicly available. The RFC has been retired but should still be available, and should have the optimization in it.

Oops, that’s O(N^2), optimizable to O(N log N).