52.219 All Pairs Shortest Paths (APSP)

In the All Pairs Shortest Paths problem we are given a directed graph G = (V, A) (where V is the set of vertices, and A is the set of directed arcs). Each arc (i,j) is labelled with a non-negative weight C[i,j]. We are to find, for each ordered pair of vertices (v,w), the smallest cost of any path from v to w in G.

An algorithm that does this is Floyd's algorithm. We start by copying the cost matrix C to an array A. It should be noted that if there is no directed arc (i,j) then C[i,j] will be infinity (in our case *max*), and that when we copy the cost array C into array A, we set A[i,i] to zero, for all i. That is, the diagonal (top left to bottom right) is all zeros, implying that the cost of going from vertex i to vertex i is zero.

The algorithm proceeds in a manner that is very similar to matrix multiplication. For example, to multiply together array X and array Y to give array Z, we assume array X has n rows and m columns, array Y has m rows and p columns, and the resulting array Z has n rows and p columns. The procedure to do matrix multiplication is as follows:


   for i = 1 to n 
    for j = 1 to p
     for k = 1 to m
      Z[i,j] <- Z[i,j] + X[i,k] * Y[k,j]

where initially Z[i,j] is set to zero, for all i and all j.

NOTE ON TERMINOLOGY:
I have used a pseudo-code above that I hope is not too difficult to understand. Z is a two-dimensional array, and Z[i,j] is the ith/jth element of Z. The operator <- assigns the value of the right hand side to the variable on the left hand side. The "for i = 1 to n" iterates over i, and can be translated into Scheme as


     (do ((i 1 (+ i 1)))
         ((> i n))
          .....
          )
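
Putting these pieces together, here is a sketch of the matrix multiplication procedure in Scheme. The representation (a matrix as a vector of row vectors, indexed from 0 rather than 1) and the helper procedures mref, mset! and make-matrix are choices made here purely for illustration:


     ; access and update element [i,j] of a vector-of-vectors matrix
     (define (mref m i j) (vector-ref (vector-ref m i) j))
     (define (mset! m i j x) (vector-set! (vector-ref m i) j x))

     ; build a rows-by-cols matrix with every element set to init
     (define (make-matrix rows cols init)
       (let ((m (make-vector rows)))
         (do ((i 0 (+ i 1)))
             ((= i rows) m)
           (vector-set! m i (make-vector cols init)))))

     ; multiply the n-by-m matrix X by the m-by-p matrix Y, giving Z
     (define (matrix-multiply X Y n m p)
       (let ((Z (make-matrix n p 0)))      ; Z[i,j] initially zero
         (do ((i 0 (+ i 1)))
             ((= i n) Z)
           (do ((j 0 (+ j 1)))
               ((= j p))
             (do ((k 0 (+ k 1)))
                 ((= k m))
               (mset! Z i j (+ (mref Z i j)
                               (* (mref X i k) (mref Y k j)))))))))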

Floyd's algorithm works as follows (I'll present it, then explain it):

          1   for i = 1 to n
          2    for j = 1 to n
          3      A[i,j] <- C[i,j]
          4   for i = 1 to n
          5    A[i,i] <- 0
          6   for k = 1 to n
          7    for i = 1 to n
          8     for j = 1 to n
          9      A[i,j] <- min(A[i,j], A[i,k] + A[k,j])

In lines 1, 2 and 3 we copy the cost matrix C into the matrix A, and in lines 4 and 5 we set the diagonal of A to all zeros. Lines 6 to 9 are the heart of the algorithm. Line 6 is the outer loop, the k loop. Each time k changes, i changes n times (line 7), and each time i changes, j changes n times (line 8). Consequently line 9 is performed n*n*n times ... a cubic algorithm.

In line 9 we set the cost of going from vertex i to vertex j to be the minimum of the current cost of going from i to j and the cost of going from i to vertex k and then from vertex k to vertex j. That is, is there a shorter route to j going through k?

Note that after the kth iteration (ie. we've been round the loop that starts at line 6 k times) we will have in A[i,j] the cost of the shortest path that goes from vertex i to vertex j, but does not pass through an intermediate vertex numbered higher than k.
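
The whole algorithm, as a Scheme sketch using the same matrix representation and helpers as above. Here inf stands in for the *max* of the notes; any value larger than the cost of every possible path will do:


     (define inf 1000000)    ; stands in for *max*; must exceed any path cost

     (define (floyd C n)
       (let ((A (make-matrix n n inf)))
         ; lines 1-3: copy the cost matrix C into A
         (do ((i 0 (+ i 1)))
             ((= i n))
           (do ((j 0 (+ j 1)))
               ((= j n))
             (mset! A i j (mref C i j))))
         ; lines 4-5: set the diagonal of A to zero
         (do ((i 0 (+ i 1)))
             ((= i n))
           (mset! A i i 0))
         ; lines 6-9: try each vertex k in turn as an intermediate vertex
         (do ((k 0 (+ k 1)))
             ((= k n) A)
           (do ((i 0 (+ i 1)))
               ((= i n))
             (do ((j 0 (+ j 1)))
                 ((= j n))
               (let ((via-k (+ (mref A i k) (mref A k j))))
                 (if (< via-k (mref A i j))
                     (mset! A i j via-k))))))))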

Getting the Paths
The above algorithm gets the costs of the shortest paths, but it doesn't actually get the paths! We can use a technique similar to that used in Dijkstra's algorithm, namely an array P (here a two-dimensional array, where in Dijkstra we used a vector), where P[i,j] is the vertex we visit between vertex i and vertex j. Initially P[i,j] is set to zero, and we replace line 9 above as follows


          9      if A[i,k] + A[k,j] < A[i,j]
          9.1    then A[i,j] <- A[i,k] + A[k,j]
          9.2         P[i,j] <- k

This will then capture the paths, and to retrieve a path from vertex i to vertex j we do as follows

                 (path i j P)
                  k <- P[i,j]
                  if k # 0
                  then (path i k P)
                       (print k)
                       (path k j P)

where # means "not equal to".
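
In Scheme, again under the representation used earlier, line 9 of floyd becomes an update of both A and P, and the path procedure is then a direct transcription. With 0-based vertices, -1 plays the role of the 0 above as the "no intermediate vertex" marker that P is initialised with:


     ; inside the innermost loop of floyd, line 9 becomes:
     ;   (let ((via-k (+ (mref A i k) (mref A k j))))
     ;     (if (< via-k (mref A i j))
     ;         (begin (mset! A i j via-k)
     ;                (mset! P i j k))))

     ; print the vertices strictly between i and j on a shortest path
     (define (path i j P)
       (let ((k (mref P i j)))
         (if (not (= k -1))
             (begin (path i k P)
                    (display k)
                    (display " ")
                    (path k j P)))))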

Transitive Closure
Floyd's algorithm can be modified so that it works on an unlabelled graph (or unlabelled digraph) to give us the transitive closure. That is, we start with an adjacency matrix A, such that if there is an arc from i to j then A[i,j] is true (ie. #t), otherwise A[i,j] is false (ie. #f). By replacing line 9 in Floyd above as follows


          9      A[i,j] <- A[i,j] or (A[i,k] and A[k,j])

we say that there is a path from i to j if the path already exists or there is a path from i to k and a path from k to j. Consequently, at the end of the algorithm A[i,j] is true (ie. #t) if there is a path from i to j in G.
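
As a sketch, the boolean version in Scheme, with A a matrix of #t/#f values in the same representation as before (this variant is usually credited to Warshall):


     ; destructively close A under reachability
     (define (transitive-closure! A n)
       (do ((k 0 (+ k 1)))
           ((= k n) A)
         (do ((i 0 (+ i 1)))
             ((= i n))
           (do ((j 0 (+ j 1)))
               ((= j n))
             (mset! A i j (or (mref A i j)
                              (and (mref A i k) (mref A k j))))))))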

Centre of a Graph
In many cases we have a graph that represents services, such as the supply of electricity, a telecommunications network, etc, and what we want to do is to position some facility such that it is in a "good" position. For example, we might want to position something such that the maximum cost of going from that node to any other is a minimum. Put another way, we can compute, using Floyd's algorithm, the APSP matrix. We then examine vertex i, and get the worst case cost for vertex i (ie. the largest cost in the ith row of array A). We do this for all i, and take the vertex that has the smallest worst case cost. This vertex is then the "centre" of G. That is, the maximum cost of going from the centre to any other vertex is a minimum.
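
As a sketch in Scheme, assuming A is the APSP matrix already produced by floyd above; row-max is a helper introduced just for this example:


     ; the largest cost in row i of A: the worst case cost for vertex i
     (define (row-max A i n)
       (do ((j 0 (+ j 1))
            (worst 0 (max worst (mref A i j))))
           ((= j n) worst)))

     ; the vertex whose worst case cost is smallest: the centre of G
     (define (centre A n)
       (let loop ((i 0) (best -1) (best-cost inf))
         (if (= i n)
             best
             (let ((worst (row-max A i n)))
               (if (< worst best-cost)
                   (loop (+ i 1) i worst)
                   (loop (+ i 1) best best-cost))))))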

Why not use Dijkstra's Algorithm?
Dijkstra's algorithm finds the SSSP, and is O(n*n) at worst; a good implementation, using a heap, can be made O((n+m)log n). We could then apply Dijkstra's algorithm n times, using each vertex in turn as the source. At worst this would then (again) be O(n*n*n) (ie. cubic). Consequently there is no good reason why we should not use this technique. However, if we have an O(n*n) implementation of Dijkstra, it might be the case (as we have already discussed) that due to factors outwith our control, such as time to develop, what's already there, speed of machines, languages, etc, we might just plump for Floyd.