PROBLEM LINK:
Author: Sunny Aggarwal
Tester: Kevin Atienza
Editorialist: Ajay K. Verma
Russian Translator: Sergey Kulik
Mandarin Translator: Gedi Zheng
DIFFICULTY:
Medium-Hard
PREREQUISITES:
sqrt decomposition
PROBLEM:
Given a string S, handle queries of the following form: for given L, R and k, consider all substrings of S[L, R] in which each character occurs an even number of times, and return the sum of the k-th powers of the lengths of these substrings.
QUICK EXPLANATION:
Create an array P, where P[i] corresponds to an encoding of the parity of frequency of characters in the prefix S[1, i]. This means that a substring S[L, R] is balanced iff P[R] = P[L - 1]. We can then use a sqrt decomposition to precompute the answer for O (N^{1.5}) subarrays such that queries can be answered in O (N^{0.5}) time.
EXPLANATION:
We are given a string S of length N, and we want to determine whether its substring S[L, R] contains each character an even number of times. The easiest way is to count the number of occurrences of each character in the substring, and then check whether all of them are even. This takes O (nk) time, where n is the length of the substring and k is the size of the alphabet. If we have to answer several queries of this form, we can do some preprocessing to speed up the queries. For each character x, create an array P_x[1..N], where P_x[i] represents the number of occurrences of x in the prefix S[1, i]. Using this array, we can compute the number of occurrences of x in S[L, R] in constant time. More specifically,
\text{count} (x, S[L, R]) = P_x[R] - P_x[L - 1]
Note that we do not need to know the exact number of occurrences of x in S[L, R], but only whether it is even, i.e.,
P_x[R] - P_x[L - 1] \equiv 0 \pmod 2
P_x[R] \equiv P_x[L - 1] \pmod 2
Hence, we do not need to store the exact number of occurrences of x in P_x, but only a single bit representing the parity of the count, and we only need to check whether P_x[R] = P_x[L - 1]. For each position i \in [1, N], we have k parity bits, one for each character. We call this k-bit vector the signature of that position. A substring S[L, R] has an even number of occurrences of each character if and only if the signatures of positions R and (L - 1) are the same. Since k is bounded by 26, we can pack the signature into a 26-bit integer, so that comparing two signatures takes a single integer comparison, i.e., constant time. From now on, we will deal with this signature array.
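As a quick sketch, the signature array can be built in a single pass by XOR-ing in one bit per character (function and variable names here are illustrative, not from the original solution):

```cpp
#include <string>
#include <vector>

// Build the signature array: sig[i] encodes, as a 26-bit integer, the
// parity of each character's frequency in the prefix S[1, i].
// sig[0] = 0 corresponds to the empty prefix.
std::vector<int> BuildSignatures(const std::string &S) {
    std::vector<int> sig(S.size() + 1, 0);
    for (size_t i = 0; i < S.size(); ++i) {
        // Flip the parity bit of the current character.
        sig[i + 1] = sig[i] ^ (1 << (S[i] - 'a'));
    }
    return sig;
}
```

For example, for S = "abba" we get sig[4] = sig[0], so the whole string has every character an even number of times, while sig[2] != sig[1] tells us that "b" alone does not.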
More formally, we are given an array P[1..N], and our task is to handle queries of the following form: given L \leq R and k, consider all pairs (x, y) such that L \leq x < y \leq R and P[x] = P[y], and compute the sum of (y - x)^k over all such pairs.
Note that the numbers in the array P can be as large as 2^{26}. However, since the size N of the array is bounded by 10^5, there can be at most 10^5 distinct numbers in the array. This means we can normalize P so that each of its values is bounded by 10^5.
Normalize(int[] P) {
    // Stores the mapping from the original values of array P
    // to the normalized values.
    map mapping;
    int cnt = 0;
    for (int i = 1; i <= N; ++i) {
        // This is the first time we are seeing this number,
        // so create a mapping for it.
        if (mapping.find(P[i]) == mapping.end())
            mapping[P[i]] = cnt++;
        // Apply the mapping.
        P[i] = mapping[P[i]];
    }
}
A Linear time solution:
Let us first develop a slow approach, which we will improve later. Also, let us first consider the case k = 0, i.e., we only want to count the number of such (x, y) pairs.
This can be done by making a traversal of the array from position L to R, while maintaining the frequency of each number encountered as shown below:
int CountPairs(int[] P, int L, int R) {
    // map which stores the frequency of the elements seen so far.
    map freq;
    int count = 0;
    for (int i = L; i <= R; ++i) {
        // Each element with the same value as P[i] makes a pair
        // with the current element.
        count += freq[P[i]];
        ++freq[P[i]];
    }
    return count;
}
The complexity of the above approach is O (n \log n), n being the length of the subarray; the \log n factor comes from the use of a map. In practice, using a hash map reduces the expected complexity to O (n). One can also achieve worst-case O (n) complexity by using an array to store frequencies: create an array Q[1..N], such that Q[x] stores the frequency of element x in the subarray. At the beginning of each query, we need to initialize this array; however, we do not need to reset the frequency of all elements, only of those which occur in the subarray, which takes only O (n) time as shown below.
void init(int L, int R) {
    for (int i = L; i <= R; ++i) {
        Q[P[i]] = 0;
    }
}
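Putting the lazy reset and the counting loop together, a worst-case O(n) version of CountPairs might look like the following sketch (the struct name and 0-indexed bounds are my own conventions; the editorial's code is 1-indexed):

```cpp
#include <vector>

// Worst-case O(n) pair counting: Q[x] holds the frequency of value x
// inside the current query range. Only the values actually occurring
// in [L, R] are reset, so each query costs O(R - L + 1) overall.
struct PairCounter {
    std::vector<long long> Q;
    // maxValue bounds the (normalized) values appearing in P.
    explicit PairCounter(int maxValue) : Q(maxValue + 1, 0) {}

    // Counts pairs (x, y), L <= x < y <= R, with P[x] == P[y].
    // Here [L, R] are 0-indexed inclusive bounds.
    long long CountPairs(const std::vector<int> &P, int L, int R) {
        for (int i = L; i <= R; ++i) Q[P[i]] = 0;  // lazy reset
        long long count = 0;
        for (int i = L; i <= R; ++i) {
            count += Q[P[i]];  // pairs formed with earlier equal values
            ++Q[P[i]];
        }
        return count;
    }
};
```

For P = {0, 1, 0, 1, 0} the full range contains three equal-value pairs among the zeros and one among the ones, four in total.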
Now, let us consider the case when k = 1. Similar to the k = 0 case, we traverse the subarray. Suppose we are at element P[i], and we have already seen m elements with the same value as P[i], at indices i_1, i_2, \cdots, i_m; then we are adding m new pairs with total length:
(i - i_1 + 1) + (i - i_2 + 1) + \cdots + (i - i_m + 1)
= m (i + 1) - (i_1 + i_2 + \cdots + i_m)
We already know the value of m from the map “freq”. We also need a separate map, which stores the sum of indices of all elements with the value x.
int CountLen(int[] P, int L, int R) {
    // map which stores the frequency of the elements seen so far.
    map freq;
    // map which stores the sum of indices.
    map indexSum;
    int sum = 0;
    for (int i = L; i <= R; ++i) {
        // Each element with the same value as P[i] makes a pair
        // with the current element.
        sum += freq[P[i]] * (i + 1) - indexSum[P[i]];
        ++freq[P[i]];
        indexSum[P[i]] += i;
    }
    return sum;
}
The case k = 2 can be handled in a similar way, in which case the amount added upon encountering P[i] is:
(i - i_1 + 1)^2 + (i - i_2 + 1)^2 + \cdots + (i - i_m + 1)^2
= m (i + 1)^2 - 2 (i + 1) (i_1 + i_2 + \cdots + i_m) + (i_1^2 + i_2^2 + \cdots + i_m^2)
Hence, we also need a third map, which stores the sum of squares of indices.
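Mirroring CountLen above, the k = 2 traversal might be sketched as follows (again with 0-indexed bounds; the function name is mine):

```cpp
#include <map>
#include <vector>

// Sum of (i - i_j + 1)^2 over all pairs of equal values in P[L..R],
// i.e. the k = 2 case, using the three maps described in the text.
long long CountLenSq(const std::vector<int> &P, int L, int R) {
    std::map<int, long long> freq, indexSum, indexSqSum;
    long long sum = 0;
    for (int i = L; i <= R; ++i) {
        long long m = freq[P[i]];
        // m*(i+1)^2 - 2*(i+1)*(i_1+...+i_m) + (i_1^2+...+i_m^2)
        sum += m * (i + 1LL) * (i + 1)
               - 2LL * (i + 1) * indexSum[P[i]]
               + indexSqSum[P[i]];
        ++freq[P[i]];
        indexSum[P[i]] += i;
        indexSqSum[P[i]] += (long long)i * i;
    }
    return sum;
}
```

For P = {0, 0, 0}, the three pairs contribute 2^2 + 3^2 + 2^2 = 17.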
Using this approach, we can answer a query in O (n) time, which is still too slow for the large subtasks. Next, we show how to pre-compute some values to speed up the queries.
Improve the Query Time Using Pre-Computation:
The main idea to improve the query time is to pre-compute the answer for some subarrays and reuse them for answering general queries.
Let us denote the answer for the subarray P[L..R] by F_k (L, R). We pick a block size B, and then compute the answer for all subarrays [L, R] for which either L or R is divisible by B. This means we store the answers for O (N^2/B) subarrays, which can be done in O (N^2/B) time as shown below:
// F[idx][i] contains the answer for the subarray P[idx * B..i].
int F0[N/B][N];
int F1[N/B][N];
int F2[N/B][N];

void PreCompute(int[] P) {
    for (int idx = 1; idx <= N/B; ++idx) {
        map freq;
        map indexSum;
        map indexSqSum;
        int powerZeroSum = 0;
        int powerOneSum = 0;
        int powerTwoSum = 0;
        for (int i = idx * B; i <= N; ++i) {
            // Update sum values.
            powerZeroSum += freq[P[i]];
            powerOneSum += freq[P[i]] * (i + 1) - indexSum[P[i]];
            powerTwoSum += freq[P[i]] * (i + 1) * (i + 1)
                           - 2 * (i + 1) * indexSum[P[i]]
                           + indexSqSum[P[i]];
            F0[idx][i] = powerZeroSum;
            F1[idx][i] = powerOneSum;
            F2[idx][i] = powerTwoSum;
            // Update maps.
            ++freq[P[i]];
            indexSum[P[i]] += i;
            indexSqSum[P[i]] += i * i;
        }
    }
}
The above snippet computes the answer for all subarrays whose left endpoint is a multiple of B. In a similar way we can compute the answer for subarrays whose right endpoint is a multiple of B.
Now suppose we want to compute the answer for the subarray P[L, R]. If both L and R lie in the same block, we use the method described in the previous section, which computes the answer in O (B) time. Now assume that L and R lie in different blocks, say L lies in block i and R lies in block j.
F(L, R) = F(L, j B) + F((i + 1) B, R) - F((i + 1) B, j B) + G(L, (i + 1) B - 1, j B + 1, R)
The first term on the RHS counts the pairs in the subarray P[L, jB], and the second term counts all pairs in the subarray P[(i + 1) B, R]. Since both terms contain the pairs lying entirely inside the subarray P[(i + 1) B, j B], which are thus counted twice, we subtract them in the third term. Note that these three terms have already been pre-computed, as each has one endpoint divisible by B. We still need to count the pairs whose left endpoint lies in [L, (i + 1) B - 1] and whose right endpoint lies in [j B + 1, R]. These pairs are represented by the fourth term G on the RHS. Since both intervals [L, (i + 1) B - 1] and [j B + 1, R] have length smaller than B, these cross pairs can be counted as in the previous section in O (B) time.
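For the k = 0 case, the cross term G can be computed with one pass over each short interval, as in this sketch (0-indexed inclusive bounds; the function name is illustrative):

```cpp
#include <map>
#include <vector>

// Count pairs (x, y) with x in [l1, r1], y in [l2, r2], P[x] == P[y].
// Assumes r1 < l2; in the sqrt decomposition, both intervals have
// length at most B, so this runs in O(B log B) with a map.
long long CrossPairs(const std::vector<int> &P, int l1, int r1,
                     int l2, int r2) {
    // Frequency of each value in the left interval.
    std::map<int, long long> freq;
    for (int x = l1; x <= r1; ++x) ++freq[P[x]];
    // Each element of the right interval pairs with every
    // equal-valued element of the left interval.
    long long count = 0;
    for (int y = l2; y <= r2; ++y) {
        auto it = freq.find(P[y]);
        if (it != freq.end()) count += it->second;
    }
    return count;
}
```

The k = 1 and k = 2 variants replace the frequency map by the same three running sums (frequency, index sum, index-square sum) used earlier.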
Hence, we have an approach with pre-computation complexity O (N^2/B) and query time O (B), for a total complexity of O (N^2/B + QB). Choosing B = N^{0.5} gives the complexity O ((N + Q) N^{0.5}).
Time Complexity:
O ((N + Q) * N^{0.5})