# FBCHEF - Editorial

Author: A Surya Kiran
Tester: Shiplu Hawlader and Mahbubul Hasan
Editorialist: Lalit Kundu

### DIFFICULTY:

MEDIUM-HARD

### PREREQUISITES:

Segment trees, BIT, DFS, BFS, Graphs

### PROBLEM:

Given a rooted tree, each of whose nodes is associated with an integer “wealth”, you have to perform two kinds of operations:
* A, X i.e. decrease the “wealth” of every node B by X/2^d, where d is the distance between node B and node A.
* Report the number of descendants of a given node with non-positive “wealth” (referred to as “dead” in this editorial).

### EXPLANATION:

Someone said in a previous editorial:

> Trees are hard to think about. In many tree problems involving data structures, the first step is to form one (or several) linear structures, and phrase the problem in terms of these linear structures.

Here, if we do a BFS on the tree, all the immediate children of a node are numbered consecutively. We can use this to our advantage, but a bomb dropped on a node A affects all nodes in the tree according to their distance from A.
The intuition here is to define a new kind of bomb, say NBomb, which when dropped on node A affects all of A’s descendants according to their distance from A. It should also affect A’s parent by X/2. But now, since A is an immediate child of its parent, A would end up taking more than X damage. To normalise that, we drop an NBomb of power -(X/4) on A.
So, we drop two NBombs on A, one whose power is X and the other whose power is -(X/4), and an NBomb of power X/2 on the parent of A.
To drop an NBomb on A, we go down the tree level by level and damage the cities in every level until 2^d > X (beyond that depth the integer damage X/2^d is zero). We do this by keeping, for each node, its leftmost and rightmost children in the BFS order. So, if the current level occupies the interval [i,j] and we update it by T, the next level occupies [leftmost[i], rightmost[j]] and we update it by T/2.
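To make the level-by-level descent concrete, here is a small Python sketch for a positive power X. The names (`left`, `right`, `diff`) are mine, and a plain difference array stands in for the BIT/segment tree the real solution would use:

```python
def drop_nbomb(a, X, left, right, diff):
    """Drop an NBomb of positive power X on the node at BFS index a.

    left[i] / right[i]: BFS indices of the leftmost / rightmost child of
    the node at BFS index i, or None for a leaf.
    diff: difference array of length n+1; diff[l] += p, diff[r+1] -= p
    performs a range update, recovered later by prefix sums.
    """
    lo = hi = a
    power = X
    while power > 0:
        diff[lo] += power                 # damage the whole current level
        diff[hi + 1] -= power
        # Children of the nodes in [lo, hi] occupy one contiguous BFS range.
        # (A real solution stores that range directly; we scan for brevity.)
        lows = [left[i] for i in range(lo, hi + 1) if left[i] is not None]
        highs = [right[i] for i in range(lo, hi + 1) if right[i] is not None]
        if not lows:
            break                         # no nodes on the next level
        lo, hi = min(lows), max(highs)
        power //= 2                       # damage halves per level
```

For example, on the 4-node tree with BFS order 0, 1, 2, 3 (root 0 with children 1 and 2, and 3 a child of 1), an NBomb of power 4 on the root deals damage 4, 2, 2, 1 respectively.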

So each query of type one is broken down into O(log X) subarray updates (on the array we build by performing a BFS on the tree), each of which is handled in O(log N) by the range-update structure.

Now, we can have two approaches to solve this problem:
i.) Offline solution
ii.) Online solution

## Offline solution:

Let’s assign each query a time t, according to the order it arrived in the input.
The approach here is to find, for each node, its time of death. Once we have the time of death for each node, answering a query of type two is very simple; let’s just say we have these death times for now.
A query of type two asks for the number of descendants of a node which are dead at time t. Again, we map the tree to an array using a DFS, so that all descendants of a node lie in one subarray.
We now sweep over time and keep marking in a BIT the indices of the nodes which have died by the current time. For a query of type two over the subarray [L, R], we just print the number of marked elements in L to R inclusive.

``````
set BIT <- 0
for t = 1 to Q:
    for each node n whose death time = t:
        update(BIT, position[n], 1)        // mark n as dead at its DFS index
    for each query of type two (on node v) at time t:
        print count of marked indices in [L[v], R[v]]
``````
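The BIT above only needs point updates and prefix sums. A minimal Python sketch (class and method names are mine):

```python
class BIT:
    """Fenwick tree over positions 1..n: point add, prefix sum."""

    def __init__(self, n):
        self.n = n
        self.t = [0] * (n + 1)

    def update(self, i, delta):
        """Add delta at position i."""
        while i <= self.n:
            self.t[i] += delta
            i += i & -i

    def query(self, i):
        """Sum of positions 1..i."""
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & -i
        return s

    def range_count(self, l, r):
        """Sum of positions l..r inclusive."""
        return self.query(r) - self.query(l - 1)
```

Marking a dead node is `update(position, 1)`, and a type-two query over the DFS subarray [L, R] is `range_count(L, R)`.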

Now, we come to the part of finding the death times. A very classic technique, binary search, comes into play here :).
For each node, suppose it is dead at time T. Then it will be dead at time T+1 too, since wealth only ever decreases. So “dead at time t” is a monotone predicate, and we can binary search for the earliest time it becomes true.
So:

``````
def get_death_time(node n):
    if sum of all decrement queries up to time Q that affect n < wealth[n]:
        return INF              // didn't die till the end
    low = 1
    high = Q
    while low < high:
        t1 = (low + high) / 2
        if wealth[n] <= getDecSum(t1):   // sum of all decrement queries up to time t1 that affect n
            high = t1           // node is dead at time t1
        else:
            low = t1 + 1        // node is not dead at time t1
    return low
``````
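A concrete Python version of this pseudocode, assuming the per-node decrement sums are available as a non-decreasing prefix array `dec_prefix` (my stand-in for `getDecSum`):

```python
INF = float('inf')

def death_time(wealth, dec_prefix):
    """Earliest time t with dec_prefix[t] >= wealth, or INF if never.

    dec_prefix[t] = total decrement affecting this node up to time t
    (non-decreasing in t); dec_prefix[0] = 0, valid times are 1..Q.
    """
    Q = len(dec_prefix) - 1
    if dec_prefix[Q] < wealth:
        return INF                  # still alive after the last query
    lo, hi = 1, Q
    while lo < hi:
        mid = (lo + hi) // 2
        if dec_prefix[mid] >= wealth:
            hi = mid                # dead at mid: answer is <= mid
        else:
            lo = mid + 1            # alive at mid: answer is > mid
    return lo
```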

However, we still have to get hold of all operations that affect node n, and use them to construct our BIT. To do this efficiently, we can use the fact that every operation affects some contiguous subarray of our array. If chef A comes immediately after chef B in the array, the operations that affect chef A are almost the same as those that affect chef B: only the operations whose range either ends at B or begins at A have to be taken care of; the rest are common to both. Here is pseudocode showing how to accomplish this:

``````
set BIT <- 0
for i = 1 to N:
    for each query of type one (dec, l, r) at time t with l = i:
        update(BIT, t, dec)
    death[i] = get_death_time(i)
    // This part can also be calculated in O(1) by reading
    // the actual frequency at a position in the BIT. See the TopCoder tutorial.
    for each query of type one (dec, l, r) at time t with r = i:
        update(BIT, t, -dec)
``````
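The sweep can be sketched as follows in Python. For clarity, a plain array indexed by time replaces the time-indexed BIT, and the binary search is replaced by a linear scan; all names are mine:

```python
def death_times(wealth, queries, Q):
    """Compute each position's death time by sweeping positions left to right.

    wealth: list of healths, positions 1..n (0-indexed list).
    queries: list of (t, l, r, dec) range-decrement operations.
    Returns a 1-indexed list: death[i] = first time position i dies, or None.
    """
    n = len(wealth)
    start = [[] for _ in range(n + 2)]   # queries whose range starts at l
    end = [[] for _ in range(n + 2)]     # queries whose range ends at r
    for t, l, r, dec in queries:
        start[l].append((t, dec))
        end[r].append((t, dec))
    active = [0] * (Q + 1)               # active[t] = decrement applied at time t
    death = [None] * (n + 1)
    for i in range(1, n + 1):
        for t, dec in start[i]:          # ranges beginning at i become active
            active[t] += dec
        # Prefix sums over time give this position's decrement history;
        # the real solution keeps them in a BIT and binary-searches instead.
        total = 0
        for t in range(1, Q + 1):
            total += active[t]
            if total >= wealth[i - 1]:
                death[i] = t
                break
        for t, dec in end[i]:            # ranges ending at i become inactive
            active[t] -= dec
    return death
```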

Hence, we get death times of all nodes.

## Online solution:

In the online solution, we update all the subarrays that a query of type one requires, and then figure out all the nodes in the tree that died because of it. So the only difference here is that while applying the damage, we find the nodes which have died and mark them in our BIT. And, as in the offline solution, we can use the BIT to print the number of dead nodes in a range at any time.

I don’t think I can do any better than this editorial by Utkarsh Lath, so I am taking the segment tree lazy propagation part from it.

To efficiently decrease all values in a subarray, we need lazy propagation. Also, to efficiently determine the nodes that have died due to an update, each node (of our segment tree) should store the health of its least-wealthy descendant leaf (this leaf will be the first to die if all descendants are poisoned). The overall structure of a node of our segment tree is:

``````
struct node {
    int health_of_least_wealthy_node, decrement, alive_count;
    // The actual health of a leaf (in the segtree) is obtained by subtracting
    // the "decrement" value of all its ancestors (in the segtree).
    // This is called lazy propagation.
};
``````

A node can be updated by simply updating its decrement and health-of-least-wealthy-node fields; the decrement needs to be propagated down before updating either child of a node.
After performing a normal update, the following function can be used to remove dead chefs:

``````
remove-dead-nodes(root):
    if tree[root].health-of-least-wealthy-node > 0:
        return                         // no dead node below me
    if is-leaf(root):
        set alive-count of root to 0, health-of-least-wealthy-node to infinity
    else:
        propagate the value of decrement to both children, set decrement of root to 0
        remove-dead-nodes(root * 2)    // some dead chef exists and needs to be removed,
        remove-dead-nodes(root * 2+1)  // so check both children
        update health-of-least-wealthy-node and alive-count of root from its children
``````
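Here is one possible Python rendering of this segment tree, combining lazy range decrements with the dead-node removal above. It is a sketch under my own naming, not the author’s code:

```python
import math

class SegTree:
    """Leaves hold chef healths. Each node stores the minimum health in its
    range (mn), a pending lazy decrement (lazy), and its alive count."""

    def __init__(self, healths):
        self.n = len(healths)
        self.mn = [math.inf] * (4 * self.n)   # health of least wealthy alive leaf
        self.lazy = [0] * (4 * self.n)        # pending decrement
        self.alive = [0] * (4 * self.n)       # alive leaves in range
        self._build(1, 0, self.n - 1, healths)

    def _build(self, v, lo, hi, h):
        if lo == hi:
            self.mn[v], self.alive[v] = h[lo], 1
            return
        mid = (lo + hi) // 2
        self._build(2 * v, lo, mid, h)
        self._build(2 * v + 1, mid + 1, hi, h)
        self._pull(v)

    def _pull(self, v):
        self.mn[v] = min(self.mn[2 * v], self.mn[2 * v + 1])
        self.alive[v] = self.alive[2 * v] + self.alive[2 * v + 1]

    def _push(self, v):
        for c in (2 * v, 2 * v + 1):          # propagate decrement to children
            self.mn[c] -= self.lazy[v]
            self.lazy[c] += self.lazy[v]
        self.lazy[v] = 0

    def decrement(self, l, r, x, v=1, lo=0, hi=None):
        """Decrease every health in [l, r] by x, then remove the dead."""
        if hi is None:
            hi = self.n - 1
        if r < lo or hi < l:
            return
        if l <= lo and hi <= r:
            self.mn[v] -= x
            self.lazy[v] += x
            self._remove_dead(v, lo, hi)
            return
        self._push(v)
        mid = (lo + hi) // 2
        self.decrement(l, r, x, 2 * v, lo, mid)
        self.decrement(l, r, x, 2 * v + 1, mid + 1, hi)
        self._pull(v)

    def _remove_dead(self, v, lo, hi):
        if self.mn[v] > 0:
            return                            # no dead leaf below me
        if lo == hi:
            self.alive[v] = 0
            self.mn[v] = math.inf             # dead leaves never die again
            return
        self._push(v)
        mid = (lo + hi) // 2
        self._remove_dead(2 * v, lo, mid)     # some dead chef exists,
        self._remove_dead(2 * v + 1, mid + 1, hi)  # so check both children
        self._pull(v)

    def alive_count(self, l, r, v=1, lo=0, hi=None):
        """Number of alive leaves in [l, r]."""
        if hi is None:
            hi = self.n - 1
        if r < lo or hi < l:
            return 0
        if l <= lo and hi <= r:
            return self.alive[v]
        self._push(v)
        mid = (lo + hi) // 2
        return (self.alive_count(l, r, 2 * v, lo, mid) +
                self.alive_count(l, r, 2 * v + 1, mid + 1, hi))
```

Setting a dead leaf’s health to infinity is what lets later decrements skip it: its subtree minimum stays positive, so `_remove_dead` returns immediately.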

Query and update operations are very standard.

### AUTHOR’S AND TESTER’S SOLUTIONS:

To be updated soon.

### RELATED PROBLEMS:


I suppose the tests were rather weak; my solution spent a lot of time (about 3 seconds) and memory (600 MB) on a test generated inside the code. I used the first idea and thought my solution was too slow, but it passed the tests in time. It seems there wasn’t a test without any queries, or something like that. My code.

I had the same idea as the Online Solution here, but I found it to be wrong because the damage gets halved as we proceed down the subtree, and hence the least-healthy descendant may not be affected by the same value of damage as its ancestors (credits to @darkstar1122 for pointing this out). Thus the information about just the least-healthy descendant is not of much use. Can you please clarify this?

@notimesea:
I decided not to be too strict on the constant factor of a solution. Basically, I thought of allowing solutions with recursive implementations of some functions. But brute-force solutions are nowhere near possible within the given time limit.
And the cube cluster which CodeChef uses is faster than many personal computers, possibly including yours.

DP wasn’t even used in this? How can that be, brother…

Dp, maxflow, fourier transform…

you might wanna add van emde boas trees as well…cool shit that.

Plus… no problem on CodeChef is ever hard… at most it’s medium-hard… and the ones that are hard are actually NP-hard :D


@nitinj What is the point of posting such a useless comment on such a well-written editorial? If you don’t understand cool shit then don’t comment on it.


haha… dude… I was just kidding… you took it too seriously

PS: editorial is great. No doubt in that


@tinchy_stryder: I think you are confusing the least-healthy descendant in the segment tree with the least-healthy descendant in the original tree.

Is the segment tree built on the BFS or the DFS order of the tree?

It is built on the BFS order. We get log N * log N subarrays to update in the BFS array, as the children of any node lie in a contiguous subarray of the BFS order. So we build a segment tree on the BFS array and reduce each subarray update to log N steps.
