GERALD08 - Editorial

PROBLEM LINK:

Practice
Contest

Author: Gerald Agapov
Tester: Shiplu Hawlader and Mahbubul Hasan
Editorialist: Lalit Kundu

DIFFICULTY:

MEDIUM-HARD

PREREQUISITES:

Blue-Red Hackenbush
DFS, Graphs

PROBLEM:

Given a rooted tree of N <= 10^5 nodes where each edge is either black or white, and two players who play optimally using these rules, find who will win:
* The players make moves in the game alternately.
* On his move Chef deletes a single undeleted black edge. On her move Ksen deletes a single undeleted white edge.
* When some edge has been deleted, all the edges that are no longer connected (directly or indirectly) to the root are deleted too.
* The one who cannot move loses the game.

EXPLANATION:

Blue-Red Hackenbush is a standard game on which a lot of good research has already been done. Here is a very good paper which explains the optimal winning strategy well. If you don't have much time, read page 9 of that PDF.

We evaluate a function F(node); the two words printed are the winner when Chef moves first and the winner when Ksen moves first:

if F(root) == 0: print "Ksen Chef"   (whoever moves first loses)
else if F(root) > 0: print "Chef Chef"
else: print "Ksen Ksen"

For each node, we calculate F recursively:

def F(n):
    ret = 0
    for each vertex v which is a child of n:
        ret1 = F(v)
        if the edge connecting n and v is black:
            find the smallest p such that ret1 + p > 1
            ret += (ret1 + p) / (1 << (p - 1))
        else:
            find the smallest p such that ret1 - p < -1
            ret += (ret1 - p) / (1 << (p - 1))
    return ret
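As a sanity check, the recursion above can be written with exact rationals using Python's `fractions.Fraction`. This is a sketch only: the tree representation and names (`game_value`, `children`) are ours, and exact rationals are far too slow for the full constraints, as discussed below.

```python
from fractions import Fraction

def game_value(children, root=0):
    """Blue-Red Hackenbush value of a rooted tree.

    children[n] is a list of (child, colour) pairs,
    colour 0 = black (Chef's edge), 1 = white (Ksen's edge).
    Exact but slow; for illustration on small trees only.
    """
    def F(n):
        ret = Fraction(0)
        for v, colour in children[n]:
            val = F(v)
            if colour == 0:
                # smallest p with val + p > 1, then scale down
                p = 1
                while val + p <= 1:
                    p += 1
                ret += (val + p) / (1 << (p - 1))
            else:
                # smallest p with val - p < -1, then scale down
                p = 1
                while val - p >= -1:
                    p += 1
                ret += (val - p) / (1 << (p - 1))
        return ret
    return F(root)
```

For example, a single black edge has value 1, a single white edge has value −1, and a black edge with a white edge on top has value 1/2.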

Of course, our problem is much harder: the large constraints do not allow us to use double or long double, and naive calculations using BigInteger or BigRational will not pass either.

So, we have to use some optimisations. One of them: since we know our answer is always of the form a/2^k, we can store the pair [a, k] and perform the calculations cleverly.

Addition of [a1, k1] and [a2, k2] is simple. Assuming k1 <= k2, we get [a1*2^(k2-k1) + a2, k2].
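A minimal sketch of this addition in Python (the pair representation is from the editorial; the function name is ours):

```python
def add(x, y):
    """Add two numbers stored as (a, k), meaning a / 2**k.

    Brings both to the larger denominator exponent, as described above.
    """
    (a1, k1), (a2, k2) = x, y
    if k1 > k2:                      # ensure k1 <= k2
        (a1, k1), (a2, k2) = (a2, k2), (a1, k1)
    return (a1 * (1 << (k2 - k1)) + a2, k2)
```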

Coming to finding the smallest p such that [a, k] + p > 1:
* If a is positive, we have to add just 1, i.e. a ← a + 2^k and k stays the same.
* If a is negative and abs(a) < 2^k, we have to add 2, i.e. a ← 2^(k+1) + a and k stays the same.
* If a is negative and abs(a) ≥ 2^k, then p = abs(a) div 2^k + 2, which reduces to adding 2 to [−(abs(a) mod 2^k), k]. We can quickly find abs(a) mod 2^k by using the fact that x mod 2^k = x & (2^k − 1) for non-negative x.

Similarly, we can find the smallest p such that ret1 − p < −1.
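The case analysis for both directions can be sketched in Python as follows (function names are ours; note the downward search is just the upward one applied to the negated value):

```python
def smallest_p_up(a, k):
    """Smallest integer p with a/2**k + p > 1.

    Returns (p, new_a) so that (new_a, k) stores the old value plus p.
    """
    if a > 0:
        p = 1                        # value in (0, 1]: adding 1 suffices
    elif -a < (1 << k):
        p = 2                        # value in (-1, 0]: need to add 2
    else:
        p = ((-a) >> k) + 2          # value <= -1: strip whole units first
    return p, a + (p << k)

def smallest_p_down(a, k):
    """Smallest integer p with a/2**k - p < -1, by symmetry."""
    p, na = smallest_p_up(-a, k)
    return p, -na
```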

AUTHOR’S AND TESTER’S SOLUTIONS:

To be updated soon.

Did you mean this research paper? http://www.geometer.org/mathcircles/hackenbush.pdf

Yeah, thanks for pointing out the error.

First, you probably mean a/2^k where a,k are integers, not a/2.

There are many unanswered questions here.

How would you store a1, a2? Computers don’t work with formulas, they work with numbers - beyond a ‘simple’ formula like a1*2^(k2-k1)+a2, millions of calculations could be hidden if not handled properly.

How large can a1 and a2 get (in number of 1-bits, number of bits without leading zeroes in total etc.)? The tighter the bounds, the better.

Is ANDing numbers really a good idea? It’s O(k). When k gets large (e.g. for a long stalk with alternating colours), we get O(N^2). If you just do 1<<(p-1) as k +=p-1, then this solution’s complexity is catastrophic.

Ok, enough flame for now. What I would’ve done (and did) better:

You might be familiar with the trick ‘BIT over HLD’, using which you can answer queries on trees online quickly. What I used is: bigint over HLD.

My bigint separately stores the integral and positive fractional part. The advantage of positive fractions is that adding them is much simpler than if you had to think about signs. And what I really need for deciding the number p by which I shift is just the integer in front of it. The fractional part is stored as a map<> of (bit,value) and an integer exp, which means that the fraction’s a sum of terms ‘value*2^(bit+exp)’. Let’s denote a number for which all ‘value’-s in the map are 1-s as standard.

Adding number B into number A (with both fractional parts in standard forms) works this way: add the integral part of B to that of A (duh), then add all entries (bit,value) of B into A as (bit+B.exp-A.exp,value). When you add anything into the map<>, convert it into standard form immediately - start with the entry with the key b=bit+B.exp-A.exp, and while for (b,value) we have value > 1, add (b+1,value/2), set value := value%2; in case value is 0, delete it from the map<>; move to b+1. This works because there’s always at most 1 key whose value is non-1, and that’s the key b.
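A sketch of inserting one entry and restoring the standard form, with a Python dict standing in for the map<> (we assume the key has already been rebased by B.exp − A.exp; the function name is ours):

```python
def add_bit(frac, b, value):
    """Add value * 2**b into the fractional part `frac`, a dict
    bit -> 1 in standard form, then restore the standard form.

    Carries propagate upward exactly as described: at any moment,
    at most one key (the current b) holds a value other than 1.
    """
    frac[b] = frac.get(b, 0) + value
    while frac.get(b, 0) > 1:
        v = frac[b]
        frac[b + 1] = frac.get(b + 1, 0) + v // 2   # carry upward
        if v % 2:
            frac[b] = 1
        else:
            del frac[b]                             # value 0: drop the key
        b += 1
    return frac
```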

It’s still possible for the fractional part to carry over into the integral part - and if we have the standard form, then we know that for every (bit, 1) with bit + exp >= 0, we can remove it from the map<> and add 2^(bit+exp) to the integer.

Notice that adding integers is trivial.

What about shifting by a? It’s clear that for the fractional part, it’s just exp -=a. The integral part needs a bit (no pun intended) more care. We can afford to simply shift by 1, a times; each such shift is about subtracting INT%2 (integral part mod 2, non-negative modulo) from INT and if INT was odd, adding it to the fractional part in the same way as described before; then INT /=2. Note that this way, we can ensure that the result is really the number we wanted to get, with the fraction still in standard form.
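One such shift step can be sketched in Python; the triple (integral, frac, exp) mirrors the representation described above (frac is a dict bit -> 1 in standard form, the value being integral + sum of 2^(bit+exp)), and the names are ours:

```python
def shift_right_once(integral, frac, exp):
    """Divide the stored number by 2; repeat a times for a shift by a.

    Returns the new (integral, frac, exp), with frac still in
    standard form.
    """
    frac = dict(frac)
    exp -= 1                       # the fractional part is just rescaled
    if integral % 2:               # the lost low bit becomes 1/2
        b = -1 - exp               # key whose weight 2**(b+exp) equals 2**-1
        frac[b] = frac.get(b, 0) + 1
        while frac.get(b, 0) > 1:  # restore the standard form
            frac[b + 1] = frac.get(b + 1, 0) + 1
            del frac[b]
            b += 1
    integral //= 2                 # floor division = non-negative modulo
    return integral, frac, exp
```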

We’re now able to do both required operations. Another useful one is checking whether the current number is integer, but that’s just checking if the map<>'s empty. The really powerful trick is in not copying all numbers from sons of vertex R into R when computing the comb. game value of R’s subtree, but copying all but one of them into the value of that one son and remembering just where the result for R really is.

If we pick the son which had the largest subtree, we can ensure that any bit is added at most O(log N) times this way. Plus, any bit can only be subtracted at most once, there are at most O(N) bits in total (they come from the numbers p mentioned in this editorial, but p can only be larger than 1 by about as much as the number of times p was equal to 1 when processing the subtree) and therefore there are at most O(N) right shifts, and the integral parts can (for the same reason) only be up to O(N), so they only have O(N log N) bits in total. The resulting complexity is then O(N log^2 N); the other logarithm comes from the complexity of array operations.
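The "copy all but one son" trick is ordinary small-to-large merging. A generic sketch, with plain dicts standing in for the bigint maps (merging logic simplified to illustrate the complexity argument, not the contest implementation):

```python
def merge_small_to_large(dicts):
    """Merge several dicts, always inserting the smaller into the
    larger.  Over a whole tree this bounds the number of times any
    single key is moved by O(log N).
    """
    acc = {}
    for d in dicts:
        if len(d) > len(acc):
            acc, d = d, acc        # keep the bigger one as the base
        acc.update(d)
    return acc
```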

Hope this was useful.


> Hope this was useful.

It was, thanks. :)

Even these so-called ‘intelligent’ optimizations were not enough to get accepted. I was getting continuous TLEs after applying many different optimizations (including the ones mentioned above). I spent 2-3 days trying to get my BigFraction class to work. I expected more from this editorial.


More helpful than the editorial itself. Thanks!
