DAGCH-Editorial

PROBLEM LINK:

Practice
Contest

Author: Shiplu
Tester: Hiroto Sekido, Gerald Agapov
Editorialist: Ajay K. Verma

DIFFICULTY:

Hard

PREREQUISITES:

Semi-dominators, path compression

PROBLEM:

We are given a directed graph with N nodes and M edges. The nodes of the graph are numbered in preorder according to a depth first traversal of the graph. The task is to compute the semi-dominator of all nodes of the graph.

QUICK EXPLANATION:

The algorithm to compute semi-dominators is described in detail in the Lengauer-Tarjan paper ("A Fast Algorithm for Finding Dominators in a Flowgraph", ACM TOPLAS, 1979), referred to below simply as "the paper". If you want to read the detailed theory behind semi-dominators, you can stop reading now and refer to the paper instead, as the following section merely summarizes the results mentioned there.

EXPLANATION:

The problem defines a vertex x to be a supreme vertex of y if there exists a directed path x = v0, v1, v2, …, vk = y, where x < y < vi for all 0 < i < k. It then defines the superior vertex of w as the minimum numbered supreme vertex of w. The rest of the problem deals only with superior vertices.

Note that we can safely remove the constraint (x < y) from the definition of a supreme vertex without changing the definition of the superior vertex. This is because the parent of a node w in the depth first search tree is a supreme vertex of w regardless of whether the (x < y) condition is included, and its number is lower than that of w. Since the superior vertex of w is the lowest numbered supreme vertex of w, we have superior (w) <= parent (w) < w. In other words, the superior vertex of w satisfies the constraint (x < y) even if that constraint is not included in the definition of a supreme vertex. This means:

superior (w) = min {v | there is a path v = v0, v1, v2, …, vk = w, such that w < vi for all 0 < i < k}

Note that this is exactly the definition of the semi-dominator of w, written sdom (w) in the paper (page 124). Hence, the task is to compute sdom (w) for all nodes w of the graph. Once we have computed the sdom() values, we can create an array cnt[] such that cnt[x] is the number of nodes whose semi-dominator is x; the array cnt[] can then be used to answer the queries.
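
As a quick sketch of this counting step (my own illustration; it assumes 1-based node numbers, an sdom[] array already filled in, and INF left in the entries that have no semi-dominator, such as node 1):

vector<int> cnt(N + 1, 0);
for (int w = 2; w <= N; ++w)
    if (sdom[w] != INF)    // node 1, and any node without a semi-dominator, is skipped
        ++cnt[sdom[w]];
// Each query about x is now answered in O(1) by reading cnt[x].

Next, we show how to compute sdom (w) for all nodes.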

Computation of semi-dominators:

The part of the paper most relevant to the computation of sdom (w) is Theorem 4 on page 126, which states:

sdom (w) = min ({v | (v, w) is an edge and v < w} ∪ {sdom (u) | u > w, and there is an edge (v, w) such that u --> v})

Here, the notation u --> v means that u is an ancestor of v in the depth first search tree (including the case u = v). Based on this theorem, one can compute the semi-dominators of all nodes by traversing them in decreasing order of their preorder numbers.

Let us say that the semi-dominators of all nodes n + 1, n + 2, …, N have already been computed, and now we want to compute sdom (n). This can be done using the following snippet (note that the walk over ancestors starts at v itself rather than at parent[v], since sdom (v) is also a candidate when v > n):

sdom[n] = INF;
for (v : predecessor(n)) {  // Consider each node v such that (v, n) is an edge.
    sdom[n] = min(sdom[n], v);  // The direct edge (v, n) makes v itself a candidate.
    
    u = v;  // Start at v itself: when v > n, sdom[v] is also a candidate (u --> v allows u = v).
    while (u > n) {
        // Since u > n, sdom[u] is already computed.
        sdom[n] = min(sdom[n], sdom[u]);
        u = parent[u];  // Move to the parent of u in the depth first search tree.
    }
}

This gives an algorithm that computes sdom (w) for all nodes in O (Mh) time, where M is the number of edges in the graph, and h is the height of the depth first search tree. However, using an efficient data structure the time can be reduced to O (M lg N).

Data structure for reducing the complexity:

In the above snippet the most expensive part is traversing the path from u to the root and computing the minimum of the sdom() values of the nodes encountered, while ensuring that we consider only those nodes whose sdom() has already been computed.
If we initialize the sdom() value of all nodes to INF, then we can drop the check (u > n): all nodes u smaller than n still have their sdom() set to INF, so they cannot affect the computed minimum.

This means we need to design a data structure that supports the following operations on the tree in O (lg N) amortized time:

  1. Set the sdom() value of a node w.
  2. For a given u, compute the minimum of the sdom() values of all nodes on the path from u to the root.

This can be achieved easily by combining heavy-light decomposition with a segment tree. However, heavy-light decomposition is probably overkill here, since we know that the nodes are traversed in a particular order. Instead, one can use the path compression technique to maintain the minimum of the sdom() values of the nodes on the path from a given node to the root.

The main idea is to construct the DFS tree by starting with an empty forest and adding the nodes and edges iteratively. The nodes are added in decreasing order as described above, each along with the edge to its parent. Hence, at any point of time we have a forest (i.e., a collection of trees), which finally merges into a single tree. During the construction, for each node v we maintain three values:

  1. parent[v],
  2. ancestor[v],
  3. label[v] = the minimum of the sdom() values of all nodes on the path from v up to (but excluding) ancestor[v].

If a node is the root of its tree, then its parent and ancestor are set to nil. For any other node v, ancestor[v] can be any node that has been visited so far and is an ancestor of v; however, we want it to be as close to the root as possible. So whenever we traverse a path from some node u to its root r, we set ancestor[v] to r for every node v on the path and update label[v] accordingly. Moreover, the traversal from u to the root does not need to follow parent links: we can follow ancestor links instead, since label() already stores the minimum of the sdom() values between a node and its ancestor.

This means the traversal from a node to the root no longer visits O (h) nodes. It is shown in the paper that the amortized complexity of traversing the path from a node to the root via ancestor links is O (lg N). Hence, we can compute all sdom() values in O (M lg N) time.
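
For concreteness, here is a minimal self-contained sketch of this link/eval structure and the resulting main loop (my own code, not the author's or tester's solution; it assumes 1-based preorder numbers, preds[w] listing all v with an edge (v, w), parent_[] taken from the DFS tree, and sdom values initialized to INF as described above; a node is the root of its tree while anc[v] == v):

#include <bits/stdc++.h>
using namespace std;

const int INF = INT_MAX;
int N;                     // number of nodes, numbered 1..N in DFS preorder
vector<vector<int>> preds; // preds[w] = all v such that (v, w) is an edge
vector<int> parent_;       // parent in the DFS tree (parent_[1] unused)
vector<int> anc;           // compressed ancestor link; anc[v] == v means v is a tree root
vector<int> lbl;           // node of minimum sdom on the path from (excluding) anc[v] down to v
vector<int> sdomv;         // sdom values, INF until computed

// Compress the path from v to the root of its tree, pulling better labels
// downwards; afterwards anc[v] is the root, which is returned.
int compress(int v) {
    if (anc[anc[v]] != anc[v]) {              // anc[v] is not yet the root
        int root = compress(anc[v]);
        if (sdomv[lbl[anc[v]]] < sdomv[lbl[v]])
            lbl[v] = lbl[anc[v]];             // inherit a better label from above
        anc[v] = root;                        // path compression
    }
    return anc[v];
}

// Node with minimum sdom() on the path from v's current root down to v;
// returns v itself if v has not been linked below its parent yet.
int eval(int v) {
    if (anc[v] == v) return v;
    compress(v);
    return lbl[v];
}

void computeSdoms() {
    anc.assign(N + 1, 0); lbl.assign(N + 1, 0); sdomv.assign(N + 1, INF);
    for (int v = 1; v <= N; ++v) { anc[v] = v; lbl[v] = v; }

    for (int n = N; n >= 2; --n) {            // node 1 has no semi-dominator
        for (int v : preds[n])                // each edge (v, n)
            sdomv[n] = min(sdomv[n], min(v, sdomv[eval(v)]));
        anc[n] = parent_[n];                  // link n below its DFS parent
    }
}

Note that when v < n it has not been linked yet, so eval(v) returns v and sdomv[v] is still INF; the min(v, …) term then covers the direct edge, exactly as in the earlier snippet.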

TIME COMPLEXITY:

O (M lg N)

AUTHOR’S AND TESTER’S SOLUTIONS:

Author’s solution will be put up soon.
Tester’s solution will be put up soon.


Here’s another solution that I find interesting and simple.

  1. Let's create an array mins[] and initialize it with INF.
  2. We will find the superior vertex of each vertex, processing them from n down to 1.
  3. Let's call the currently processed vertex u. For each vertex k from u+1 to n, mins[k] will contain the smallest vertex number that has a path to k in which every vertex except the first is > u.
  4. Now we find the value mins[u], which is the superior vertex of u. For each edge v -> u, there are two cases:
    • if v < u, then v is a supreme vertex of u, so we minimize mins[u] with v
    • if v > u, then we minimize mins[u] with mins[v]; this means we can extend the path mins[v] -> … -> v -> u
  5. Now the value of mins[u] is the superior vertex of u.
  6. After processing vertex u, there is a path from u to every vertex in its subtree, so we minimize the mins[] value of every vertex in u's subtree with mins[u]; in preorder numbering the subtree forms a contiguous range.

The array mins[] can be implemented using a segment tree, since the algorithm requires range updates and single-point queries.
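
For reference, here is a minimal sketch of such a segment tree (my own illustration, not @zeulb's actual code; it supports "minimize every position in [l, r] with a value" and "read the value at one position", both in O(log N)):

#include <bits/stdc++.h>
using namespace std;

struct MinTree {
    int n;
    vector<int> t;                 // t[node]: best value pushed onto that node's whole interval
    MinTree(int n) : n(n), t(4 * n, INT_MAX) {}

    // Minimize every position in [l, r] (1-based, inclusive) with val.
    void update(int node, int lo, int hi, int l, int r, int val) {
        if (r < lo || hi < l) return;
        if (l <= lo && hi <= r) { t[node] = min(t[node], val); return; }
        int mid = (lo + hi) / 2;
        update(2 * node, lo, mid, l, r, val);
        update(2 * node + 1, mid + 1, hi, l, r, val);
    }
    void update(int l, int r, int val) { if (l <= r) update(1, 1, n, l, r, val); }

    // Read the value at position pos: the minimum over all updates covering it.
    int query(int node, int lo, int hi, int pos) {
        if (lo == hi) return t[node];
        int mid = (lo + hi) / 2;
        int sub = (pos <= mid) ? query(2 * node, lo, mid, pos)
                               : query(2 * node + 1, mid + 1, hi, pos);
        return min(t[node], sub);
    }
    int query(int pos) { return query(1, 1, n, pos); }
};

Step 6 then becomes a single update over the preorder range of u's subtree, and step 4 reads mins[v] with a point query.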

The complexity of this algorithm is O(M log N).

Here’s my code 3436327


Can anyone tell me where my code is wrong? My approach is very similar to the one stated by @zeulb. I used a stack to find the minimum number efficiently.
Solution link: http://www.codechef.com/viewsolution/3436001
Thank you…

Here’s where your solution fails:
1
5 10 5
1 2
1 4
2 1
2 3
2 5
3 2
3 4
4 2
4 5
5 3
1 2 3 4 5

The output should be 3 1 0 0 0.


@Zeulb: Can you give a more detailed explanation if possible? Your approach sounds very interesting.
Specifically, I could not understand the part "for all edge that point to u lets call it v -> u, where v > u, find the minimum of their mins[] value call it x".
A more detailed explanation would be very helpful! :slight_smile:

I updated my explanation; I hope you will be able to understand it more clearly now :slight_smile: