Can PHSTREE be solved using the HLD technique?

During the contest I thought about an HLD solution for the problem PHSTREE. I understand the decomposition of the tree, so that we traverse at most log n chains to get the result. But when it comes to answering the query within a single chain, we need a Fenwick tree over the compressed weights (locating a weight takes log n by binary search), and then the Fenwick query itself takes another log n, so the solution should be O(N log² N).

But I got stuck when forming the Fenwick tree over a single chain. I have seen people easily build a segment tree on a single chain of the tree, but I do not understand how to do the same with a Fenwick tree. Would you please help me clear up my doubts about my approach? I would be very thankful to you.


Do you know how to construct a Fenwick tree over a simple array? If not, learn it from this extremely good article, then apply it first to a single chain and then to all the chains.

Yes, of course I know how to build a Fenwick tree over an array. But in HLD I have seen the segment tree constructed over a single array, while in this particular question the Fenwick tree would seemingly need to be constructed separately per chain, and that's where I got stuck.

You can refer to this article, which discusses range queries over a single chain. And in the COT code example you can see that only one segment tree is built.

Yes, I’ve solved it using HLD.

I used Heavy/Light Decomposition, and if a branch was not very small, built a BST (merge tree) on it (for very short branches, just brute-force at query time).

To be more precise: I built BSTs with 30 merge vectors on them, one for each bit of the result.

BST[x] contained a number only if the corresponding x-th bit is set in that number.

The combine function was interesting: when merging two intervals, if a number is present in both parts, don't insert it into the parent array, since an even count cancels itself out. As a result, the merge lists at each level of the BST were very sparse, since each contained only numbers with a specific bit set, and only if there was an odd quantity of such numbers. E.g. the lists [1,3] and [3,9] would be merged into [1,9], as 3 is present in both parts.

After that, search along the path and get the count of numbers that affect each specific bit. If the count is odd, set the bit (i.e. XOR it into the result).

To be more precise: for each segment in the HLD, run a query [l, r, M] corresponding to that part of the branch, which returns the exact number that needs to be XORed into the result. It first runs [l, r] on the BST, but then, on every BST segment that needs to be included, it runs lower_bound(M) on the 30 arrays to count whether each corresponding bit should be set, and XORs that into the result.

Got it just under the time limit; I could optimise more, but why bother?


Yes, it can be solved using HLD, as @lboris mentioned. But in this post I gave an alternate method as well, just in case you want one.

I doubt you can solve this problem using a Fenwick tree on values. HLD produces O(N) chains, so building a separate Fenwick tree per chain would result in a costly O(N^2) memory requirement, like you said.

The reason some people were able to do it with a segment tree is that they do the inserts lazily. That style saves the space you really need, since not all N compressed numbers are used per chain. But in my opinion that's not enough; you'll need some sort of persistence to maintain the order of elements. Going up the chain while doing a bounded query is hard for a rather "pure" Fenwick tree, even one over compressed values.

If you really, really want to proceed with a Fenwick tree, you'll have to do something similar to what @lboris did: build a tree for each of the 30 bits. That will make your algorithm run in O(Qlog^3N), which luckily passes the tight time limits. But in the end, I still suggest solving it using path-to-root XOR with a persistent tree, because that one is more straightforward, with only O(QlogN) complexity, and hence more practical than having 30 tries embedded in your HLD. If you have time, try that solution as well. I hope I was helpful in clearing your understanding :slight_smile:


Would you please explain your approach of using persistence?

You can use HLD + BIT + Offline queries. Here’s how:
Sort the queries in increasing order of k, and sort the edges in increasing order of weight. Go through the queries in sorted order, and for each one update the BIT with every edge whose weight is at most the current query's k and which hasn't been added yet. Then use the BIT to answer the current query.
Complexity: O(q\log q + n\log^2n)


Thanks, sdssudhu!