I tried to solve this problem using a merge sort tree and an Euler tour. The overall complexity works out to O(n*log(n)*log(n)). My solution passes all but one test case (the second last one), and even to pass the last test case I had to use fast I/O. After the contest was over I asked my friend, and it turns out he also used the same algorithm (with a different implementation) and got AC. I think the time limit was too strict and this is an example of bad problem setting.

I would like to hear others’ views on such problems.

Same here. I used the same approach but could not pass the first test case of the third subtask. I think the intended solution was O(m*log(n)), which is why they set such a strict time limit.

If the intended solution was O(m*log(n)), then the time limit should have been even stricter, because I know a lot of people who solved it in O(n*logn*logn), and I shouldn’t have been able to pass the last task.

The last task was HELL. I passed all test cases except the last one.

I even tried to build the segment tree iteratively, but it was still giving TLE. Still, I think you should try building the segment tree iteratively, because it is somewhat faster and maybe it’ll give you AC. Check out this blog for the iterative segment tree - http://codeforces.com/blog/entry/18051

I implemented HLD and also have a time complexity of O(m*logn*logn). The first logn is for going up the tree using chains of HLD and the second is for querying the segment tree for each chain.

I passed the hardest case in 1.48 s :). Some optimizations I did:

- fast input/output
- use inline functions everywhere
- pass by reference, not by value, wherever I can
- take a random root, not always 0
- sort queries (by value K) and node values (by value C) SEPARATELY. This had the largest impact; I think it sped up my code by a factor of 2. If I combine the queries and nodes and sort them together, the complexity is O((n+m)*log(n+m)); if I sort them separately, it is O(n*logn + m*logm).

Assuming n = 100,000 and m = 100,000, we get roughly O(200,000 * 18) vs O(2 * 100,000 * 17), which in the case of such tight limits turned out to be a game changer :).
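For the fast input/output point above, a common trick is a hand-rolled integer reader. Here is a minimal sketch (my own, not the commenter's code), parsing from an in-memory buffer so it is self-contained; a real submission would fill the buffer from stdin, e.g. with fread:

```cpp
// Minimal fast-input sketch: parse non-negative integers straight from
// a character buffer instead of going through scanf/cin. In a real
// submission the buffer would be filled from stdin (e.g. with fread).
inline int readInt(const char*& p) {
    while (*p && (*p < '0' || *p > '9')) ++p;      // skip non-digit separators
    int x = 0;
    while (*p >= '0' && *p <= '9') x = x * 10 + (*p++ - '0');
    return x;
}
```

The advance-by-reference pointer lets you call it repeatedly to consume a whole token stream.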

I read somewhere about an iterative (non-recursive) segment tree query function, and that it is much faster, but I don’t know how to implement it.
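For reference, here is a minimal sketch of the bottom-up (iterative) segment tree described in the Codeforces blog linked above, specialized to XOR to match this problem; the struct and names are my own:

```cpp
#include <vector>

// Bottom-up (iterative) segment tree over XOR. Leaves live at
// t[n..2n-1]; internal node i combines children 2i and 2i+1.
struct IterSegTree {
    int n;
    std::vector<unsigned> t;
    explicit IterSegTree(const std::vector<unsigned>& a)
        : n((int)a.size()), t(2 * a.size(), 0) {
        for (int i = 0; i < n; ++i) t[n + i] = a[i];
        for (int i = n - 1; i > 0; --i) t[i] = t[2 * i] ^ t[2 * i + 1];
    }
    // Point update: set position p to value v, then fix all ancestors.
    void update(int p, unsigned v) {
        for (t[p += n] = v; p > 1; p >>= 1) t[p >> 1] = t[p] ^ t[p ^ 1];
    }
    // XOR of a[l..r-1] (half-open range), no recursion.
    unsigned query(int l, int r) const {
        unsigned res = 0;
        for (l += n, r += n; l < r; l >>= 1, r >>= 1) {
            if (l & 1) res ^= t[l++];
            if (r & 1) res ^= t[--r];
        }
        return res;
    }
};
```

Both operations walk at most one root-to-leaf path, so they are O(logn) with very small constants compared to a recursive version.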

I used a persistent segment tree. Every node holds the XOR of itself and its ancestors. The persistent segment tree is built over the parameter K (we do coordinate compression on K first, so the maximum K is N).

Now, for a query (U, V, K), we find the XOR as ( query(U) xor query(lca(U,V)) ) xor ( query(V) xor query(lca(U,V)) ).

Total complexity: O(N*logN + M*logN).
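I haven't seen the actual submission, but the persistent-tree idea above can be sketched roughly like this: each node's version extends its parent's version by one XOR update at the node's compressed-K index, so querying any version gives the prefix XOR along that root-to-node path. All names here are illustrative:

```cpp
#include <vector>

// Persistent segment tree over value indices [0, n); each insert
// returns a new version sharing unchanged subtrees with the old one.
// Node 0 is the shared empty node.
struct PersistentXorTree {
    std::vector<int> lc, rc;
    std::vector<unsigned> val;  // XOR of everything inserted in this subtree
    int n;
    explicit PersistentXorTree(int n_) : lc(1, 0), rc(1, 0), val(1, 0), n(n_) {}
    int newNode(int l, int r, unsigned v) {
        lc.push_back(l); rc.push_back(r); val.push_back(v);
        return (int)val.size() - 1;
    }
    // New version of `prev` with x XOR-ed in at index pos.
    int insert(int prev, int pos, unsigned x) { return ins(prev, 0, n - 1, pos, x); }
    int ins(int node, int lo, int hi, int pos, unsigned x) {
        if (lo == hi) return newNode(0, 0, val[node] ^ x);
        int mid = (lo + hi) / 2;
        if (pos <= mid)
            return newNode(ins(lc[node], lo, mid, pos, x), rc[node], val[node] ^ x);
        return newNode(lc[node], ins(rc[node], mid + 1, hi, pos, x), val[node] ^ x);
    }
    // XOR of all values inserted at indices <= k in this version.
    unsigned query(int root, int k) const { return qr(root, 0, n - 1, k); }
    unsigned qr(int node, int lo, int hi, int k) const {
        if (node == 0) return 0;          // empty subtree
        if (hi <= k) return val[node];    // fully below the threshold
        int mid = (lo + hi) / 2;
        unsigned res = qr(lc[node], lo, mid, k);
        if (k > mid) res ^= qr(rc[node], mid + 1, hi, k);
        return res;
    }
};
```

Building one version per tree node during a DFS gives the O(N*logN) preprocessing, and each query touches O(logN) nodes of one version.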

Isn’t the complexity of query(lca(U,V)) O(logn*logn)? Is your LCA function constant time? If so, how?

Same here, except there is no need to find the LCA. Because XOR is its own inverse, the two LCA XOR queries cancel each other. Just query(U) XOR query(V) will give you the right answer.
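A toy check of the cancellation, illustrated here with made-up edge weights (where it is exact): define pre[u] as the XOR of weights on the root-to-u path; the shared root-to-lca prefix appears in both pre[u] and pre[v], and x ^ x = 0, so it drops out:

```cpp
#include <utility>
#include <vector>

// pre[u] = XOR of edge weights on the root->u path. Then the XOR
// along any u-v path is pre[u] ^ pre[v]: the root->lca prefix occurs
// in both terms and cancels, so no LCA computation is needed.
struct XorDemo {
    std::vector<std::vector<std::pair<int, unsigned>>> adj;
    std::vector<unsigned> pre;
    explicit XorDemo(int n) : adj(n), pre(n, 0) {}
    void addEdge(int u, int v, unsigned w) {
        adj[u].push_back({v, w});
        adj[v].push_back({u, w});
    }
    void dfs(int u, int parent) {          // fills pre[] from the root
        for (auto [v, w] : adj[u])
            if (v != parent) { pre[v] = pre[u] ^ w; dfs(v, u); }
    }
    unsigned pathXor(int u, int v) const { return pre[u] ^ pre[v]; }
};
```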

Okay, I got AC now, and the only thing I changed was sorting the queries by their K values. I guess it had a big impact on the overall running time. I still don’t understand how sorting the queries benefits us, though.
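One way to see the benefit: sorting by K enables a fully offline sweep with a single Fenwick tree instead of a merge sort tree. A rough sketch of that idea (my own reconstruction, not anyone's actual submission): sort nodes by value C and queries by K separately, sweep K upward, and "activate" each node v by XOR-ing its value over its Euler-tour interval [tin[v], tout[v]), so a point query at tin[u] yields the XOR of the active ancestors of u:

```cpp
#include <algorithm>
#include <functional>
#include <utility>
#include <vector>

// XOR Fenwick tree: point XOR updates, prefix-XOR queries.
struct XorBit {
    std::vector<unsigned> t;
    explicit XorBit(int n) : t(n + 1, 0) {}
    void update(int i, unsigned w) {
        for (++i; i < (int)t.size(); i += i & -i) t[i] ^= w;
    }
    unsigned prefix(int i) const {  // XOR over positions 0..i
        unsigned r = 0;
        for (++i; i > 0; i -= i & -i) r ^= t[i];
        return r;
    }
};

// For each query (u, k): XOR of values c[v] <= k over all v on the
// root(0)->u path. Offline: nodes sorted by c, queries sorted by k.
std::vector<unsigned> answerQueries(const std::vector<std::vector<int>>& adj,
                                    const std::vector<unsigned>& c,
                                    const std::vector<std::pair<int, unsigned>>& qs) {
    int n = (int)adj.size();
    std::vector<int> tin(n), tout(n);
    int timer = 0;
    std::function<void(int, int)> dfs = [&](int u, int p) {
        tin[u] = timer++;
        for (int v : adj[u]) if (v != p) dfs(v, u);
        tout[u] = timer;  // exclusive end of u's subtree interval
    };
    dfs(0, -1);

    std::vector<int> byC(n), qOrder(qs.size());
    for (int i = 0; i < n; ++i) byC[i] = i;
    std::sort(byC.begin(), byC.end(), [&](int a, int b) { return c[a] < c[b]; });
    for (int i = 0; i < (int)qs.size(); ++i) qOrder[i] = i;
    std::sort(qOrder.begin(), qOrder.end(),
              [&](int a, int b) { return qs[a].second < qs[b].second; });

    XorBit bit(n + 1);
    std::vector<unsigned> ans(qs.size());
    int next = 0;
    for (int qi : qOrder) {
        // Activate every node whose value fits under this query's k.
        while (next < n && c[byC[next]] <= qs[qi].second) {
            int v = byC[next++];
            bit.update(tin[v], c[v]);   // turn on at subtree start...
            bit.update(tout[v], c[v]);  // ...and cancel past its end
        }
        ans[qi] = bit.prefix(tin[qs[qi].first]);  // XOR of active ancestors of u
    }
    return ans;
}
```

Each node is activated once and each query is one prefix lookup, so the whole sweep is O((n+m)*logn) after the sorts.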

Mine is an online approach. I didn’t sort the queries.

Firstly, I did heavy-light decomposition.

Secondly, for each heavy path I built a segment tree, and for each segment I sorted the values in it and calculated the prefix XOR sums.

Thirdly, for each query, say I want the XOR of the path weights less than K from a node to the root: for each heavy path, I binary search for the position of K, and the answer for that segment is the prefix XOR up to that position.
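The per-segment structure in the steps above can be sketched like this (my own illustrative version, not the commenter's actual code): each segment keeps its values sorted alongside their prefix XORs, so a "XOR of everything <= K" query is one binary search plus one array lookup:

```cpp
#include <algorithm>
#include <vector>

// One merge-sort-tree-style segment: values sorted ascending, with
// pref[i] = XOR of the i smallest values. A query is upper_bound +
// a single prefix lookup, i.e. O(log(segment size)).
struct SortedSegment {
    std::vector<unsigned> vals;
    std::vector<unsigned> pref;
    explicit SortedSegment(std::vector<unsigned> v) : vals(std::move(v)) {
        std::sort(vals.begin(), vals.end());
        pref.assign(vals.size() + 1, 0);
        for (std::size_t i = 0; i < vals.size(); ++i)
            pref[i + 1] = pref[i] ^ vals[i];
    }
    // XOR of all values <= k in this segment.
    unsigned xorUpTo(unsigned k) const {
        std::size_t idx = std::upper_bound(vals.begin(), vals.end(), k) - vals.begin();
        return pref[idx];
    }
};
```

A full query then XORs xorUpTo(K) over the O(logn) segments covering the chains on the path, giving the O((logn)^2) per-query bound.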

Overall complexity is O(n*logn + Q*(logn)^2).

For details, you can check my code.

@meooow: Oh yes! Thanks !

@vasja: It’s O(logn). You can check out this article: https://www.topcoder.com/community/data-science/data-science-tutorials/range-minimum-query-and-lowest-common-ancestor/#Another%20easy%20solution%20in%20O(N%20logN,%20O(logN)

This part: “Another easy solution in <O(N logN), O(logN)>”.