# RRJAM - Editorial

Practice

Contest

Author: Roman Rubanenko

Tester: Tuan Anh

Editorialist: Florin Chirica

### DIFFICULTY:

Hard

### PREREQUISITES:

sqrt decomposition

### PROBLEM:

You’re given two arrays. On the first one, we add arithmetic progressions on some of its intervals. We need to output, for each position, the minimal number of operations (performed in the order given in the input) needed to make the element of the first array greater than or equal to the corresponding element of the second array.

### QUICK EXPLANATION:

One possible solution is sqrt decomposition. At each step we apply sqrt(n) progressions at a time. Using clever preprocessing, we can compute the resulting array in O(n). We then check which positions satisfy the wanted condition. For each such position, we “undo” the sqrt(n) operations just applied and brute force them one more time, one by one. This way, for each position, we find the exact number of operations after which the element of the first array became greater than or equal to the one from the second.

### EXPLANATION:

An easy simplification

Let’s begin with an easy simplification. Instead of using two arrays A and B, we can solve the problem with a single array X, where X[i] = A[i] - B[i]. Now, when performing an operation, instead of adding the amount to the array A, we add it to the array X. After some operations, some terms of X become non-negative. The first moment when X[i] becomes non-negative is exactly the first operation when A[i] >= B[i]. Why? Suppose delta is the amount added until X[i] becomes greater than or equal to 0. Then

X[i] + delta >= 0

A[i] - B[i] + delta >= 0

A[i] + delta >= B[i]

which is exactly what we were interested in.

Now the problem is: given an array X, find for each i the minimal number of operations such that X[i] >= 0. From here, multiple approaches are possible. I’ll describe my favorite one:

SQRT Decomposition

The idea is to process sqrt(n) operations at a time. We perform sqrt(n) operations, then see what happens, then perform sqrt(n) more operations, again see what happens, and so on.

Let’s find a quick way to see how our array X will look after sqrt(n) operations. In order for the solution to fit in the time limit, we need to do this in O(n), giving O(n * sqrt(n)) time overall.

We need a trick: suppose we have operations of the form “increase all elements from the x-th to the y-th by a value val”. We can get the final form of the array in O(n + numberOfOperations).
Let’s use an auxiliary array A. When we are given an operation defined by x, y and val, we do:

A[x] += val;

A[y + 1] -= val;

Then, the value at position i, after all operations, is A[1] + A[2] + … + A[i]. Draw it on paper and you’ll see why it works.
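The trick above can be sketched as follows (a short Python sketch; the function name `apply_range_adds` is mine, and indexing is 1-based to match the editorial):

```python
def apply_range_adds(n, operations):
    """operations: list of (x, y, val) meaning 'add val to positions x..y'."""
    A = [0] * (n + 2)               # difference array, with room for index y + 1
    for x, y, val in operations:
        A[x] += val                 # the addition starts at position x
        A[y + 1] -= val             # and is cancelled one past position y
    # prefix sums turn the difference array into the final values
    result = []
    running = 0
    for i in range(1, n + 1):
        running += A[i]
        result.append(running)
    return result                   # values at positions 1..n
```

For example, `apply_range_adds(5, [(1, 3, 2), (2, 5, 1)])` yields `[2, 3, 3, 1, 1]`: the first operation adds 2 to positions 1..3, the second adds 1 to positions 2..5.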

Now we add an arithmetic progression defined by l, r, x and y (as in the statement): position l is increased by x, position l + 1 by x + y, and in general position i by x + (i - l) * y. The constant part is easy: all elements from l to r get x, which can be done as above. One subtlety: when the operation ends, the accumulated slope must be cancelled too, so at position r + 1 we subtract x + (r - l) * y rather than just x.
Now, the element at position l + 1 needs an extra y, the element at position l + 2 an extra 2 * y, and so on. Let V be the array holding only the contributions involving y (adding y, 2 * y, 3 * y and so on).

For an operation l, r, x, y, one can easily notice that V[i] = V[i - 1] + y, for each i between l + 1 and r. Then the answer for position i is A[1] + A[2] + … + A[i] + V[i].
What happens if two numbers y and y2 both need to be added to V at position i? We simply do V[i] = V[i - 1] + y + y2. Hence, the problem reduces to: add the value y to all elements from l + 1 to r. How to do this? Yes, one more array (call it Y):

Y[l + 1] += y;

Y[r + 1] -= y;

Now, V[i] = V[i - 1] + Y[1] + Y[2] + … + Y[i], and the amount added at position i of X is V[i] + A[1] + A[2] + … + A[i]. This way, we need only O(n) to rebuild the array after sqrt(n) steps.
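Putting the two difference arrays together, the O(n + k) batch evaluation can be sketched like this (names are mine; note the term x + (r - l) * y subtracted at r + 1, which cancels both the constant part and the slope the operation has accumulated by the end of its range):

```python
def batch_delta(n, operations):
    """operations: list of (l, r, x, y); each adds x, x+y, x+2y, ... to l..r.
    Returns the total amount added to positions 1..n."""
    A = [0] * (n + 2)                 # difference array for the constant part
    Y = [0] * (n + 2)                 # difference array for the slope part
    for l, r, x, y in operations:
        A[l] += x
        A[r + 1] -= x + (r - l) * y   # cancel the constant AND the slope
        Y[l + 1] += y                 # the slope starts acting at l + 1
        Y[r + 1] -= y                 # and stops growing after r
    delta = []
    pref_a = pref_y = v = 0
    for i in range(1, n + 1):
        pref_y += Y[i]                # Y[1] + ... + Y[i]
        v += pref_y                   # V[i] = V[i - 1] + (Y[1] + ... + Y[i])
        pref_a += A[i]                # A[1] + ... + A[i]
        delta.append(pref_a + v)
    return delta
```

For instance, a single operation (2, 4, 5, 3) on n = 6 produces `[0, 5, 8, 11, 0, 0]`: positions 2..4 receive 5, 5 + 3, 5 + 2 * 3, and the cancellation at position 5 brings the tail back to zero.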

What now? Some elements become >= 0 after these sqrt(n) operations. This means that somewhere within these sqrt(n) operations we found the answer for those elements. But how do we find the exact moment when this happened?
Suppose X[i] becomes >= 0 after the sqrt(n) operations. We can “undo” those sqrt(n) operations. Then, we simply simulate them again, one by one, only for the element at position i.

What’s the complexity of this step? Each element can become >= 0 for the first time exactly once (after this, we’re no longer interested in it). So we get at most n elements; for each, we simulate at most sqrt(n) operations. Hence, this step is also O(n * sqrt(n)), and the whole solution is O(n * sqrt(n)).
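The whole approach can be sketched end to end as follows (a hedged Python sketch, not the author’s code; the batch evaluation from above is inlined so the snippet runs standalone, and operations are 1-based (l, r, x, y) tuples):

```python
import math

def solve_sqrt(a, b, operations):
    """Returns, for every position, the number of the first operation after
    which a[i] >= b[i], or -1 if that never happens."""
    n, m = len(a), len(operations)
    x_arr = [a[i] - b[i] for i in range(n)]          # the single array X
    ans = [0 if v >= 0 else -1 for v in x_arr]
    block = max(1, math.isqrt(m))                    # batch size ~ sqrt(m)
    for start in range(0, m, block):
        batch = operations[start:start + block]
        # O(n) evaluation of the batch via the two difference arrays
        A = [0] * (n + 2)
        Y = [0] * (n + 2)
        for l, r, xv, yv in batch:
            A[l] += xv
            A[r + 1] -= xv + (r - l) * yv            # cancel constant + slope
            Y[l + 1] += yv
            Y[r + 1] -= yv
        delta = [0] * n
        pref_a = pref_y = v = 0
        for i in range(1, n + 1):
            pref_y += Y[i]
            v += pref_y                              # V[i] = V[i-1] + prefix(Y)
            pref_a += A[i]
            delta[i - 1] = pref_a + v
        for i in range(n):
            if ans[i] == -1 and x_arr[i] + delta[i] >= 0:
                # "undo" the batch, then replay it only for position i
                cur = x_arr[i]
                for k, (l, r, xv, yv) in enumerate(batch, 1):
                    if l <= i + 1 <= r:
                        cur += xv + (i + 1 - l) * yv
                    if cur >= 0:
                        ans[i] = start + k
                        break
            x_arr[i] += delta[i]                     # keep X current for all i
    return ans
```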

If you have another approach, please share it in the comments section!

### AUTHOR’S AND TESTER’S SOLUTIONS:

Author’s solution to be updated soon
Tester’s solution


My approach used persistent segment trees. Basically, after each operation, one creates a copy of all the updated nodes of the segment tree. After all the operations we have M segment tree roots, but only O(N + M log N) nodes overall. Then, for each position i, I binary searched the operation which made its value reach or exceed B[i]. For a given operation x it is easy to find out the value of position i after the first x operations: simply descend down the persistent segment tree starting from root(x) all the way to the leaf corresponding to position i and adding up the appropriate values. The overall time complexity is O(M log N + N log M log N).
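A rough sketch of this idea (all names are mine, not the commenter’s code). Each node carries a pair (c, s) meaning “every position i under this node gains c + i * s”, which encodes a progression x, x + y, … as c = x - l * y, s = y; a point query sums these tags along the root-to-leaf path. It assumes the added amounts are non-negative, so per-position values are non-decreasing over versions, which the binary search requires:

```python
class Node:
    __slots__ = ("left", "right", "c", "s")
    def __init__(self, left=None, right=None, c=0, s=0):
        self.left, self.right, self.c, self.s = left, right, c, s

def update(node, lo, hi, l, r, c, s):
    """Path-copying range update: positions i in [l, r] gain c + i * s."""
    node = Node(*((node.left, node.right, node.c, node.s) if node
                  else (None, None, 0, 0)))
    if l <= lo and hi <= r:
        node.c += c                   # lazy tag on a fully covered node
        node.s += s
        return node
    mid = (lo + hi) // 2
    if l <= mid:
        node.left = update(node.left, lo, mid, l, r, c, s)
    if r > mid:
        node.right = update(node.right, mid + 1, hi, l, r, c, s)
    return node

def point_value(node, lo, hi, i):
    """Total amount added at position i: sum tags along the path to leaf i."""
    total = 0
    while node is not None:
        total += node.c + i * node.s
        if lo == hi:
            break
        mid = (lo + hi) // 2
        if i <= mid:
            node, hi = node.left, mid
        else:
            node, lo = node.right, mid + 1
    return total

def solve_persistent(a, b, operations):
    n, m = len(a), len(operations)
    roots = [None]                    # roots[t]: tree after the first t ops
    for l, r, x, y in operations:
        roots.append(update(roots[-1], 1, n, l, r, x - l * y, y))
    ans = []
    for i in range(1, n + 1):
        need = b[i - 1] - a[i - 1]
        if need <= 0:
            ans.append(0)
        elif point_value(roots[m], 1, n, i) < need:
            ans.append(-1)
        else:
            lo_t, hi_t = 1, m         # smallest version reaching `need`
            while lo_t < hi_t:
                mid_t = (lo_t + hi_t) // 2
                if point_value(roots[mid_t], 1, n, i) >= need:
                    hi_t = mid_t
                else:
                    lo_t = mid_t + 1
            ans.append(lo_t)
    return ans
```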


My approach:

• transform the queries’ x-s into “add (x-ly)+iy to Ai in the corresponding range”
• sort the queries, traverse the N numbers from left to right and store all queries covering the current number
• for each query, binsearch the answer for it - the smallest k such that all queries with id-s between 1 and k that cover that number sum up to at least Bi-Ai
• if we fix k, that sum of queries is clearly just the sum of the queries’ (transformed) x-s + i * the sum of their y-s; these values can be computed with 2 Fenwick trees and updating them is completely straightforward
• ???
• Profit!

Complexity: O((N log N + M) log M) with a great constant.
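The steps above can be sketched roughly like this (names are mine, not the commenter’s code; it assumes x, y >= 0 so each covering query’s contribution is non-negative and the prefix sums are monotone in the operation id, which the binary search requires; for brevity it probes with two Fenwick prefix queries instead of descending inside the trees):

```python
class Fenwick:
    def __init__(self, n):
        self.n = n
        self.t = [0] * (n + 1)
    def add(self, i, v):              # point update at index i (1-based)
        while i <= self.n:
            self.t[i] += v
            i += i & (-i)
    def prefix(self, i):              # sum of indices 1..i
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & (-i)
        return s

def solve_sweep(a, b, operations):
    n, m = len(a), len(operations)
    start_at = [[] for _ in range(n + 2)]   # query ids to activate at i
    end_at = [[] for _ in range(n + 2)]     # query ids to deactivate at i
    for j, (l, r, x, y) in enumerate(operations, 1):
        start_at[l].append(j)
        end_at[r + 1].append(j)
    bit_c = Fenwick(m)                      # sums of (x - l*y) of active queries
    bit_s = Fenwick(m)                      # sums of y of active queries
    ans = []
    for i in range(1, n + 1):
        for j in start_at[i]:
            l, r, x, y = operations[j - 1]
            bit_c.add(j, x - l * y)         # query j adds (x - l*y) + i*y at i
            bit_s.add(j, y)
        for j in end_at[i]:
            l, r, x, y = operations[j - 1]
            bit_c.add(j, -(x - l * y))
            bit_s.add(j, -y)
        need = b[i - 1] - a[i - 1]
        if need <= 0:
            ans.append(0)
        elif bit_c.prefix(m) + i * bit_s.prefix(m) < need:
            ans.append(-1)
        else:
            lo, hi = 1, m                   # smallest k whose prefix reaches need
            while lo < hi:
                mid = (lo + hi) // 2
                if bit_c.prefix(mid) + i * bit_s.prefix(mid) >= need:
                    hi = mid
                else:
                    lo = mid + 1
            ans.append(lo)
    return ans
```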


I did not understand the part which says
“What happens if two numbers y and y2 need to be added at V at position i? Well, we simply do V[i] = V[i - 1] + y + y2. Hence, the problem reduced to: add value y from all elements from l+1 to r. How to do this?”

V[i] = V[i-1] + y … from the initial statement.
If two numbers y and y2 need to be added to V at i, then it should be V[i] + y + y2, right?
How does the problem reduce to adding the value y to all elements from l+1 to r?
Please try to clarify with an example.

Also, the method A[x] += val and A[y + 1] -= val. Does this have a name? I would like to read more about it.

Thanks

My solution was a lot simpler and has a better complexity, only O(M + N log M); take a look here:

http://www.codechef.com/viewsolution/5630433

Basically, I created a simple segment tree over the array of intervals. I also used std::vectors
to find, for each x, which queries have one end in x. I processed all the positions k one by one;
when I finish processing position k-1, I ‘deactivate’ all intervals ending at k-1 and ‘activate’
all intervals beginning at k. Then, I just do a search through the segment tree to find the time
after which the sum of the contributing (active) intervals gets larger than X[k].

I’m aware that this explanation is not really great; don’t hesitate to ask me to explain some more.
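A rough sketch of such an activate/deactivate sweep (names are mine, in Python rather than C++): a segment tree over operation ids stores, per node, the sums of the transformed constant parts and of the y parts of the active queries, and a single root-to-leaf descent finds the first prefix reaching the target, avoiding the extra binary-search logarithm. As before it assumes non-negative contributions:

```python
class OpTree:
    """Segment tree over operation ids; leaf j holds (c_j, y_j) of an active
    operation, or (0, 0) when inactive."""
    def __init__(self, m):
        size = 1
        while size < m:
            size *= 2
        self.size = size
        self.c = [0] * (2 * size)
        self.s = [0] * (2 * size)
    def set(self, j, c, s):                  # point assign at leaf j (1-based)
        p = j - 1 + self.size
        self.c[p], self.s[p] = c, s
        p //= 2
        while p:
            self.c[p] = self.c[2 * p] + self.c[2 * p + 1]
            self.s[p] = self.s[2 * p] + self.s[2 * p + 1]
            p //= 2
    def first_reach(self, i, need):
        """Smallest k with sum over leaves 1..k of (c + i*s) >= need, or -1."""
        if self.c[1] + i * self.s[1] < need:
            return -1
        p = 1
        while p < self.size:                 # descend, preferring the left child
            left = 2 * p
            val = self.c[left] + i * self.s[left]
            if val >= need:
                p = left
            else:
                need -= val                  # skip the whole left subtree
                p = left + 1
        return p - self.size + 1

def solve_descent(a, b, operations):
    n, m = len(a), len(operations)
    start_at = [[] for _ in range(n + 2)]
    end_at = [[] for _ in range(n + 2)]
    for j, (l, r, x, y) in enumerate(operations, 1):
        start_at[l].append(j)
        end_at[r + 1].append(j)
    tree = OpTree(m)
    ans = []
    for i in range(1, n + 1):
        for j in start_at[i]:
            l, r, x, y = operations[j - 1]
            tree.set(j, x - l * y, y)        # query j adds (x - l*y) + i*y at i
        for j in end_at[i]:
            tree.set(j, 0, 0)                # deactivate
        need = b[i - 1] - a[i - 1]
        ans.append(0 if need <= 0 else tree.first_reach(i, need))
    return ans
```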

Yeah, it’s basically the same thing as what I (and probably others) did, but it uses segtrees and the additional info stored in them to avoid binsearching and take a logarithm away from the complexity. That’s actually the improvement that makes it not so simple to see.

I remember solving 1-2 problems in long contests this way.

Also, I proudly point out that (see practice) my analogous solution runs in almost the same time as yours, probably due to the segtree’s large constant. BIT FTW!

I have implemented the approach described by the author in the editorial (code). Maybe it will help someone.
