Hello guys, I’m back once again with editorials, hoping you like them.
This is part 2 of three posts of unofficial editorials for the December Long Challenge.
For editorials of problems GIT01, CPLAY, and VK18, click here.
For editorials of problem REDBLUE, click here.
This post has editorials for problems CHEFHAM and CHEFEXQ.
### Problem CHEFHAM
### Problem difficulty: Easy-Medium
#### Problem Explanation
Given an array, rearrange it so as to maximize the Hamming distance between the original array and the rearranged array. Hamming distance is defined as the number of indices i for which A[i] != B[i].
Important thing to note: the maximum frequency of any value in the array is 2.
#### Solution
Well, I’m going to share the most basic solution that comes to mind naturally, with O(N^2) time complexity.
Loop over every position and run a nested loop that checks whether the elements at the two positions can be swapped, and swaps them if so.
Two elements at positions i and j can be swapped if A[i] != B[j] && A[j] != B[i]. (B is another array with the same elements in the same order; swaps are made only in the B array.)
After that, calculate the Hamming distance using a single loop and print it along with the B array.
Simple, right? Well, this is my solution, except for one observation.
Observation: the above solution easily fits the time limit. (My solution ran in 0.47 seconds out of the 6-second time limit.)
We run the inner loop only at positions i for which A[i] == B[i], i.e. positions that still collide.
Since the maximum frequency of any element is 2, only a few values of j can fail the swap condition (those with B[j] == A[i] or A[j] == B[i]), so the inner loop succeeds within a constant number of iterations, making the solution roughly O(3*N), which is the same as O(N).
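Here is a minimal sketch of this approach in C++; the I/O handling and variable names are mine, and I've left out any multiple-test-case loop the actual problem may have:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    scanf("%d", &n);
    vector<int> A(n);
    for (int i = 0; i < n; i++) scanf("%d", &A[i]);

    vector<int> B = A;  // B starts as a copy of A; swaps happen only inside B
    for (int i = 0; i < n; i++) {
        if (A[i] != B[i]) continue;           // observation: only fix positions that still collide
        for (int j = 0; j < n; j++) {
            // i and j can be swapped only if neither position collides after the swap
            if (i != j && A[i] != B[j] && A[j] != B[i]) {
                swap(B[i], B[j]);
                break;                        // with max frequency 2, this succeeds within a few tries
            }
        }
    }

    int hamming = 0;
    for (int i = 0; i < n; i++) hamming += (A[i] != B[i]);

    printf("%d\n", hamming);
    for (int i = 0; i < n; i++) printf("%d ", B[i]);
    printf("\n");
    return 0;
}
```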
That’s all.
Here’s a link to my [solution][4].
### Problem [CHEFEXQ][5]
### Problem difficulty: Medium
### Prerequisites: Square Root Decomposition
#### Problem Explanation
Given an array with N elements, you have to perform two types of queries:
1. Update the ith value to x.
2. Count the number of magical sub-arrays ending at index i or earlier whose xor is k.
A magical sub-array is defined as a sub-array starting at index 0 of the original array (0-based indexing in my solution).
#### Solution
**Naive Solution**:
For every update query, update A[i] to x, and for every query of the second type, run a loop from 0 to i, taking the prefix xor; whenever the prefix xor equals k, increment count by 1, and finally print count.
The time complexity of an update is O(1) and of a count operation is O(N), making the overall complexity O(N*Q), which will give TLE.
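A minimal sketch of this naive solution, assuming updates arrive as `1 i x` and count queries as `2 i k` with 1-based indices (adjust to the actual input format):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, q;
    scanf("%d %d", &n, &q);
    vector<int> A(n);
    for (int i = 0; i < n; i++) scanf("%d", &A[i]);

    while (q--) {
        int type, i;
        scanf("%d %d", &type, &i);
        if (type == 1) {                      // update query: set A[i] = x in O(1)
            int x;
            scanf("%d", &x);
            A[i - 1] = x;
        } else {                              // count query: O(N) scan over the prefix
            int k;
            scanf("%d", &k);
            int pre = 0, count = 0;
            for (int j = 0; j < i; j++) {
                pre ^= A[j];
                if (pre == k) count++;        // magical sub-array ending at j has xor k
            }
            printf("%d\n", count);
        }
    }
    return 0;
}
```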
Surprisingly, this solution gave 50 points instead of 20, even though it should give TLE for N and Q up to 1e4; the test cases for those files are probably weak.
Anyway, we are now aiming for 100 points, and that's where square root decomposition comes into action.
**Square root decomposition, or the buckets method**, means dividing the given range into blocks (usually of size sqrt(N)), pre-computing the answer for every block as a whole, and combining the results of different blocks to answer a count query over a given range. When a query starts or ends partway through a block, we handle those partial blocks naively, element by element, up to the next block boundary (and similarly from the end of the last complete block to r). With about sqrt(N) blocks, this gives O(sqrt(N)) time per count query.
For an update operation, we just need to rebuild one block, which can be done in O(sqrt(N)) time, assuming the block size is sqrt(N).
Well, that was a general overview of the square root decomposition technique. Now we need to decide what to store in each block so that we can answer a query for a whole block in O(1).
I create an array xor, where xor[b] holds the xor of all values present in block b, and a map (unordered_map in C++ / HashMap in Java) for every block, holding key => value pairs where the key is the prefix xor of the block up to some position and the value is the number of positions at which that prefix xor occurs. A sketch of this precomputation follows the example below.
Take as an example one block containing the values 1 3 5 4 1.
The prefix xors within the block would be 1 2 7 3 2.
Now, the map for this block will contain the following four key => value pairs:
1. 1 => 1
2. 2 => 2
3. 3 => 1
4. 7 => 1
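Here is a sketch of this precomputation, assuming a block size of roughly sqrt(N) (names like `blockXor`, `blockMap`, and `buildBlock` are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

int n, blockSize, numBlocks;
vector<int> A;                                // the array
vector<int> blockXor;                         // blockXor[b] = xor of all values in block b
vector<unordered_map<int, int>> blockMap;     // blockMap[b][p] = number of positions in block b
                                              // whose prefix xor (within the block) equals p

void buildBlock(int b) {
    blockMap[b].clear();
    int pre = 0;
    int lo = b * blockSize, hi = min(n, lo + blockSize);
    for (int i = lo; i < hi; i++) {
        pre ^= A[i];                          // prefix xor *within* the block
        blockMap[b][pre]++;
    }
    blockXor[b] = pre;                        // xor of the whole block
}

void buildAll() {
    blockSize = max(1, (int)sqrt((double)n));
    numBlocks = (n + blockSize - 1) / blockSize;
    blockXor.assign(numBlocks, 0);
    blockMap.assign(numBlocks, unordered_map<int, int>());
    for (int b = 0; b < numBlocks; b++) buildBlock(b);
}
```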
Coming to the update operation: for each update, we rebuild the whole map of the affected block the same way we initially built it above, and also update xor[b] with the new block xor, where b is the block index.
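Continuing the sketch above (reusing `A`, `blockSize`, `blockXor`, `blockMap`, and `buildBlock`), an update could look like this:

```cpp
// Rewrite the value and rebuild only the affected block: O(sqrt(N)) per update.
void update(int i, int x) {                   // i is a 0-based array index
    A[i] = x;
    buildBlock(i / blockSize);                // recomputes blockMap[b] and blockXor[b]
}
```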
The query operation is simple.
Start with prev = 0 and count = 0.
If i is the given index and k is the given value, then for every block b that ends before i:
if map[b] contains the key prev^k, do count += map[b].get(prev^k),
then set prev = prev^xor[b].
For the remaining portion (the partial block containing i), use the simple solution mentioned above.
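Again continuing the same sketch, the count query could look like this, with prev and count playing exactly the roles described above:

```cpp
// Count prefixes of A[0..i] whose xor equals k: complete blocks via their maps,
// then the partial block containing i handled naively.
int query(int i, int k) {                     // i is a 0-based array index
    int b = i / blockSize;
    int prev = 0, count = 0;                  // prev = xor of everything before the current block
    for (int bl = 0; bl < b; bl++) {
        auto it = blockMap[bl].find(prev ^ k);
        if (it != blockMap[bl].end()) count += it->second;
        prev ^= blockXor[bl];
    }
    for (int j = b * blockSize; j <= i; j++) {  // remaining positions inside block b
        prev ^= A[j];
        if (prev == k) count++;
    }
    return count;
}
```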
The variable prev holds the prefix xor of all the complete blocks processed so far. Querying for prev^k works for the following reason.
**Reason:** we need to find the number of positions for which the **prefix xor of the whole array** equals k, but every block stores the prefix xors of its own block only. We know that if xor(i, j) denotes the xor of the range i to j, then for any i <= m < j:
xor(i, j) = xor(i, m) ^ xor(m+1, j)
If m is the index of the last element of the last completed block, then prev = xor(0, m) when we reach block b.
We need the whole-array prefix xor to be k, and the prefix xor up to position m is prev. So an entry in block b's map whose key is prev^k corresponds to a position whose whole-array prefix xor is prev ^ (prev^k) = k, which is exactly what we want.
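For example, if prev = 5 and the query value k = 3, we look up the key 5 ^ 3 = 6 in block b's map: any position inside that block whose within-block prefix xor is 6 has a whole-array prefix xor of 5 ^ 6 = 3 = k.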
Link to my [solution][6].
Just like editorial part 1, I again hope you enjoy my editorials as much as you all like coffee in winter (though I don't).
Enjoy coding.