Hi
I’m curious about one thing. What if the queries were permanent? As in the swaps were permanent and the subsequent queries should be answered accordingly. Can the same technique be used or some modifications are required?
Thanks
I think my method isn’t clear to you. Basically, at each node of the merge sort tree I stored three arrays. The first stores the elements of the range represented by that node, in sorted order; let us call this array ‘a’. The second array, named ‘lp’, stores the left pointers for cascading: lp[i] stores the index of the first element in the left child that is greater than or equal to a[i] of the current node. Similarly, the array ‘rp’ points into the right child. A query on a merge sort tree visits at most log n nodes, and while moving through the tree I recursively updated my pointers using lp and rp in O(1).
They are certainly easy to mug up since they have shorter code, but I wouldn’t agree with the claim that they are easier to understand than segment trees.
I don’t think the same would work. The problem you are talking about is this: http://www.spoj.com/problems/SWAPS/
I used a BIT with sqrt(n)-sized buckets to solve the SPOJ one in O(q·sqrt(n)·log(n)). The method was nothing like what I did to solve CHEFINV.
What queries did you use on the tree to answer the main query? I was doing 4 queries in the range (i+1, j-1) (with i < j):
elements greater than a[i], elements less than a[i], elements greater than a[j], and elements less than a[j].
I guess one query to the tree takes 2 log n time (one binary search plus one segment tree traversal), hence my total was 8 log n.
Yes, I thought the same but still had lingering doubts. I was thinking of something similar to your approach. I think sqrt decomposition is essential here along with the BIT, because we can’t really rebuild the complete BIT again and again for each query. That would certainly result in a timeout, since the complexity then becomes q·n·log(n), which is too slow.
The complexity for SPOJ SWAPS is m·(log(sqrt(n))·log(a_max) + sqrt(n)).
Yes, precisely. Thanks to the sqrt buckets for speeding things up. I’ll still delve into it more to understand it properly and be completely clear on the idea, because I don’t feel confident about coding it yet.
Here is my code if you need help:
Thanks, man. You’ve been a great help. I’ll make sure to refer to your solution whenever I get stuck.
@pushkarmishra
Okay, so then you basically don’t need the array ‘a’ for any node other than the root: you perform one binary search at the root and then just keep following the pointers.
@rohangarg
I believe you can reduce the running time by doing just one query for the number of elements between a[i] and a[j] in the range (i+1, j-1). Then, depending on whether a[i] > a[j] or vice versa, you subtract or add twice that number (plus one for the pair (i, j) itself) to the inversion count to get the final answer.
That would take 3 log n time (two binary searches plus one segment tree traversal). (Although I’m not sure if you would need some extra processing to handle duplicates.)
What is meant by the line:
By add we mean adding the point’s Y-coordinate in the Fenwick tree.
Also, how is this link related to counting the number of points inside an axis-parallel rectangle?