DCGAME - Editorial

PROBLEM LINK:

Contest
Practice

Author: Sunny Agarwal
Tester: Hiroto Sekido
Editorialist: Kevin Atienza

DIFFICULTY:

Medium

PREREQUISITES:

stock span problem, stack, binary search, value compression, combinatorics

PROBLEM:

Given an array [A_1, \ldots, A_N], let B be the list of maximums of all subarrays of A (B has length \frac{N(N+1)}{2}). M games will be played, each with a single constraint of the form "C K", where C is one of <, = or >, and K is an integer. In a game, players alternate turns; in a move, a player marks off an unmarked integer in B satisfying the constraint "C K", and the first player with no valid move loses.

You have to determine the winner of each game.

QUICK EXPLANATION:

Each game depends only on the parity of the number of integers satisfying the constraint. Thus, we only need to find the number of values in B satisfying the constraint.

Let L_i be the largest index L_i < i such that A_{L_i} \ge A_i (set L_i = 0 if no such value exists)

Let R_i be the smallest index R_i > i such that A_{R_i} > A_i (set R_i = N+1 if no such value exists)

All $L_i$s and $R_i$s can be computed in O(N) using a stack.

There can be at most N distinct values in A, say [V_1, \ldots, V_M] with M \le N (in sorted order). Let f(j) be the number of subarrays in which V_j is the maximum. Then f(j) = \sum_{i \,:\, A_i = V_j} (R_i - i)(i - L_i).

There are three kinds of constraints: “< K”, “= K” and “> K”. We need the sum of f(j) over all j for which V_j satisfies the constraint. We introduce a fourth kind of constraint, “\le K”; the other three kinds can be reduced to it.

To answer a “\le K” constraint, first find the largest index j such that V_j \le K (with binary search). Then the result we want is f(1) + \cdots + f(j). This number can quickly be obtained if we precompute all prefix sums beforehand.

EXPLANATION:

The solution has multiple parts. We start with the most obvious questions first, so that we know what to precompute, etc.

A “game”

Let’s first discuss how a game plays out. Suppose we have the list B, which contains the \frac{N(N+1)}{2} maximums among all subarrays (of course, we can’t construct this array in the harder subtasks because of its size). The first thing to notice is that the order of B's elements doesn’t matter. Thus, we can assume that B is sorted.

Now fix a constraint. The key thing to notice is that the game is really simple: the players simply alternate turns marking off numbers in B satisfying the constraint. In fact, the winner depends solely on the number of values in B satisfying the constraint, i.e., it doesn’t matter what strategies the two players use! To be specific:

  • If there is an even number of such values, the second player makes the last move (or the first player cannot move at all), so the first player loses.
  • If there is an odd number of such values, the first player makes the last move and wins.

Therefore, we only need to count those elements of B satisfying the constraint! Because B is sorted and because of the nature of the constraints, this number can be computed using one or two binary searches!

To illustrate, let I(K) be the number of elements of B that are \le K (equivalently, the largest index i such that B_i \le K, which can be computed with a binary search). Then:

  • The number of elements of B that are < K is I(K-1).
  • The number of elements of B that are = K is I(K) - I(K-1).
  • The number of elements of B that are > K is \frac{N(N+1)}{2} - I(K).

Thus, if we already have the array B, then we can easily compute the result of any game in O\left(\log \left(\frac{N(N+1)}{2}\right)\right) = O(\log N) time.

Run-length encoding

Unfortunately, the previous approach uses up O(N^2) memory because of the array B. But we can reduce this significantly by noticing that there are only at most N distinct values in B. (Why?) So let [V_1, \ldots, V_M] be the distinct values in B, and let f(i) be the number of occurrences of V_i in B.

We can then compute I(K) by doing a binary search in V instead for the largest i such that V_i \le K, and then I(K) is simply f(1) + f(2) + \cdots + f(i). If we precompute all prefix sums of the f(i), then I(K) can still be computed in O(\log N) time, but the memory requirements decrease to just O(N).

Computing V and f(i)

To complete the algorithm, we need to compute the array [V_1, \ldots, V_M] and the values f(1), \ldots, f(M). Note that the sequence V can be computed in O(N \log N) time from the array A (with a simple sort + uniquify operation). Thus, all that remains is to compute the $f(i)$s.

First, let’s suppose that all elements of A are distinct. Consider the value A_i: how many subarrays have A_i as their maximum? Let L_i and R_i be the positions of the nearest larger elements to the left and right of A_i, respectively. Such a subarray cannot contain A_{L_i} or A_{R_i}, so it must contain A_i and lie within [A_{L_i+1}, \ldots, A_{R_i-1}]; conversely, by the way L_i and R_i are defined, every subarray satisfying these two conditions has A_i as its maximum. Thus, there are (R_i - i)(i - L_i) such subarrays (there are (R_i - i) and (i - L_i) ways to choose the right and left endpoints, respectively). Let us denote this quantity by m(i).

As an example, consider the following image (A_i is the height of the bar over the number i):

#             
#             #
#       #     #
#   #   #     #
#   #   # #   #
# # #   # #   #
# # #   # # # #
# # # # # # # #
1 2 3 4 5 6 7 8

In this case, suppose i = 5. Then L_i = 1 and R_i = 8. Thus, there are (R_i - i) = 3 and (i - L_i) = 4 choices for the right and left endpoints, respectively, for a total of 3\times 4 = 12 subarrays, as shown below:

1 2 3 4 5 6 7 8
 (2 3 4 5)
   (3 4 5)
     (4 5)
       (5)
 (2 3 4 5 6)
   (3 4 5 6)
     (4 5 6)
       (5 6)
 (2 3 4 5 6 7)
   (3 4 5 6 7)
     (4 5 6 7)
       (5 6 7)

As a consequence, we have f(j) = m(i) for the (unique) i such that A_i = V_j.

In case either L_i or R_i doesn’t exist, we can use the values 0 or N+1, respectively.

This formula, and the expression m(i) = (R_i - i)(i - L_i) is nice; unfortunately it doesn’t extend straightforwardly if the $A_i$s are not all distinct. To see why, consider the following example:

#         #     #
#   #   # #   # #
# # #   # #   # #
# # # # # # # # #
1 2 3 4 5 6 7 8 9

The natural extension of f(j) = m(i) would be f(j) = \sum_{A_i = V_j} m(i). However, in the above case, it doesn’t quite work. Consider j = 3. There are three indices i with A_i = 3, namely i = 3, i = 5 and i = 8. But we have m(3) = 6, m(5) = 4 and m(8) = 2, so this formula gives f(3) = 6 + 4 + 2 = 12. However, the correct value of f(3) is 10\,! (The subarrays [A_2, \ldots, A_5] and [A_3, \ldots, A_5] contain two maximums equal to 3 and are counted twice: once at i = 3 and once at i = 5.) Clearly we need to fix this formula.

We can do that by redefining m(i) so that it counts the subarrays whose leftmost maximum is A_i. This uses the fact that each subarray has a unique leftmost maximum, so no subarray is double-counted. To implement this, redefine L_i by weakening it so that A_{L_i} can equal A_i (i.e. A_{L_i} \ge A_i). Then m(i) stays the same, i.e. (R_i - i)(i - L_i). In the example above, we get the new values m(3) = 6, m(5) = 2 and m(8) = 2, so f(3) is correctly computed as 6 + 2 + 2 = 10.

But how do we compute the $L_i$s and $R_i$s? Actually, these values can be computed easily in O(N) time by means of a stack: it’s a standard problem called the stock span problem. (Recognizing this as the stock span problem requires experience, but if you didn’t know about it, note that the $L_i$s and $R_i$s can also be computed in O(N \log N) time with a segment tree.)

Now, all that remains is to compute the $f(i)$s given the $m(i)$s. But this can be done easily in linear time also:

  • Create an array for f, all initialized to zero.
  • Create an inverse map of V, say \phi, where \phi(V_i) = i.
  • For every 1 \le i \le N, add the value m(i) to f(\phi(A_i)).

That’s it! We now have all the parts needed for the whole solution. The preprocessing takes O(N \log N) time, and each query can be answered in O(\log N) time! The following is an example implementation in C++:

#include <stdio.h>
#include <stdlib.h>
#include <algorithm>
using namespace std;
#define ll long long
#define LIM 1001111
#define INF (1<<30)

int A[LIM];
int L[LIM];
int R[LIM];
int s[LIM]; // stack

typedef pair<int,int> ct;
#define value first
#define count second

ct cts[LIM];
char typ[11];
char ans[LIM]; // will contain the answer string
int n, m;
int find(int k) {
    // binary search: returns the (parity-compressed) prefix count of
    // subarray maximums that are <= k
    int L = 0, R = n + 1;
    while (R - L > 1) {
        int M = (L + R) >> 1;
        (cts[M].value <= k ? L : R) = M;
    }
    return cts[L].count;
}

int main() {
    scanf("%d%d", &n, &m);
    A[0] = A[n+1] = INF;
    for (int i = 1; i <= n; i++) scanf("%d", A + i);

    // compute L from left to right
    s[0] = 0;
    for (int q = 0, i = 1; i <= n; i++) {
        while (A[s[q]] <  A[i]) q--;
        L[i] = s[q];
        s[++q] = i;
    }

    // compute R from right to left
    s[0] = n+1;
    for (int q = 0, i = n; i; i--) {
        while (A[s[q]] <= A[i]) q--;
        R[i] = s[q];
        s[++q] = i;
    }

    // compute the frequencies of maximums of subarrays in sorted order.
    cts[0].value = -INF;
    cts[0].count = 0;
    for (int i = 1; i <= n; i++) {
        cts[i].value = A[i];
        // only the parity of the count is ever used, so the possible
        // int overflow of this product is harmless
        cts[i].count = (R[i] - i) * (i - L[i]);
    }
    sort(cts, cts + n + 1);

    // compute cumulative sums. Since we only need the parity, we can use '^' instead of '+'
    for (int i = 1; i <= n; i++) {
        cts[i].count ^= cts[i-1].count;
    }

    // answer queries: each is "C K P", where the last token P is the name
    // of the player who moves first (a single character)
    for (int i = 0; i < m; i++) {
        int k;
        scanf("%s%d%s", typ, &k, ans + i);
        // if the number of values satisfying the constraint is even, the
        // player who moves first loses; flip the name ('C' ^ 7 == 'D')
        if (!((*typ == '>' ? n*(n+1LL)/2 - find(k) : *typ == '<' ? find(k-1) : find(k) - find(k-1)) & 1)) ans[i] ^= 7;
    }
    }
    ans[m] = 0;
    printf("%s\n", ans);
}

As always, if you have any questions, feel free to ask :)

Time Complexity:

O((N + M) \log N)

AUTHOR’S AND TESTER’S SOLUTIONS:

setter
tester


I think the test data was weak. Here is my O(N^2) solution that got 100 points: https://www.codechef.com/viewsolution/7729815


@author I used the same approach, but my code failed to pass the last test cases. Here is the solution link:
https://www.codechef.com/viewsolution/7733279

I am getting TLE in DCGAME… My code is here: https://www.codechef.com/viewsolution/7848594. Please help. Thanks in advance :)

This is just because there was no test data containing all the elements in either ascending or descending order. This should not have happened.


@antim_patel, this is the same problem that I and many of my friends were facing. Since your solution is quite similar to mine, except for the way you calculated the frequencies using a stack, I suggest you refer to my solutions https://www.codechef.com/viewsolution/7801472 (50 points) and https://www.codechef.com/viewsolution/7804811 (100 points).

Since I needed to reduce the constant factor of the O(N log N) solution, I changed the queries part a bit. Inside each query, the binary search takes O(log N) time and the map lookup takes another O(log N), which increases the constant factor. Instead of the map, the queries can be answered in O(1) using arrays: just store the map values into an array outside the loop. I also removed the sort, because it was not needed: the map already stores the elements in sorted order, and the sort has a large constant factor as well.

Hope it helps.

setter and tester solution Access Denied. Please fix this

https://www.codechef.com/viewsolution/7843568
Help me, my code didn’t even pass subtask 1 for some reason.

Your code doesn’t calculate the frequencies correctly. See editorial for help.

Same solution, but I got TLE because of unnecessary computation… :(

The stock span approach for computing the m(i) array was amazing: it does the job in O(N) with the help of a stack. Thanks for the editorial, I learned a new thing today.
I also solved it with the same algorithm, but without the stock span approach: I used a BST (binary search tree) instead, for O(N log N). That also got accepted, but the stock span approach is much faster.


Would love some feedback on my code.

https://www.codechef.com/viewsolution/7861684

I was able to pass all tasks except #7 (TLE). Using randomly generated test data with n, m = 1,000,000, my code spends the majority of its time in the binary searches. I am using bisect_right and bisect_left from the bisect library. I assume that is the fastest method? Is it just a problem of Python being slow? Thanks so much for any feedback.

Hi,
I solved the problem using divide and conquer with binary search and was able to get 50 points. Is it possible to optimize my code at any point to get a better time complexity?

https://www.codechef.com/viewsolution/7777733

Here is [My Solution][1]. It passes some subtasks but fails others… Can anyone fix it?
[1]: https://www.codechef.com/viewsolution/7868670

@jeffa I also solved it using the divide and conquer technique and didn’t get TLE. I used a segment tree to get the index of the maximum element in a range. You can have a look at my solution if you want. Link

How are we supposed to identify that this can be solved as the stock span problem, and that we can use the formula (R_i - i)(i - L_i) to find the frequencies? Is there some logic to it, or is the formula known by experience from similar problems?

My Java implementation of the editorial is getting RE. Can anyone find the problem?

https://www.codechef.com/viewsolution/7918742

@apptica:

Thank you, I will look at your solution.

To identify it as the stock span problem you either need to be familiar with it or simply be able to rediscover the solution on your own (it’s not too hard, really). But if you didn’t know it, you can still solve the problem using segment trees / binary search trees, albeit in O(N \log N) time.

Please use fast IO to get the last / second-last test case accepted.
See the link for fast IO.