 # SLIS - Editorial

Author: Jingbo Shang
Tester: Xiaoxu Guo
Editorialist: Pushkar Mishra

### DIFFICULTY:

Medium

### PREREQUISITES:

Segment trees

### PROBLEM:

Given an array A[1..N] with N \leq 10^5 and A[i] \leq 10^9, find the number of second-longest increasing subsequences of A.

### EXPLANATION:

There are many ways to solve the Longest Increasing Subsequence (LIS) problem: a quadratic DP, patience sorting, and a segment-tree approach. We will use the last one to formulate our algorithm.

Since A[i] \leq 10^9, we first compress the values into a smaller range. Because the length of the array is at most 10^5, we can map the values, preserving their relative order, into the range 1..10^5. We then need to find the number of second-longest increasing subsequences in this compressed array.

Let us look at how we find the LIS in the compressed array B[1..N] using a segment tree. We process the elements of the array from left to right. We keep a segment tree built over the values 1..N, whose nodes store the maximum of the lengths of all increasing subsequences ending on a value in their respective ranges. When we are processing B[i], we first find the maximum in the range 1..B[i]-1 using the segment tree. Let's say this maximum is equal to L. We then update the leaf associated with the value B[i] to L+1, and update its ancestors as well so that each reflects the new maximum of its range. Once we have processed all the N elements, the length of the longest increasing subsequence lies in the root of the segment tree.

How can we get the number of LISes? For this, we store two variables per node, max and nmax. The variable max stores the maximum of the lengths of increasing subsequences ending on a value in the range that the node covers. The variable nmax stores the number of increasing subsequences which have the length max. Now, when we are processing B[i], we update the leaf associated with B[i] by putting max = L+1 and nmax = prevnmax, where prevnmax is the number of increasing subsequences of the maximal length L in the range 1..B[i]-1 (taken as 1 when L = 0, since B[i] on its own forms one subsequence of length 1).

This gives us a way to calculate the number of LISes in an array. How do we find the number of second-LISes? Our previous algorithm gives us an idea. This time, we will store two more variables per node: smax and nsmax. The variable smax stores the second-maximum of the lengths of increasing subsequences ending on a value in the range that the node covers. The variable nsmax stores the number of increasing subsequences which have the length smax.

Now, during update and query operations on the segment tree, when we are combining two nodes x and y, we can have three cases.

• Case 1: x.max > y.max
The combined max and nmax come from x; only the second-maximum fields may change. If y.max > x.smax, then y.max becomes the new second-maximum: set x.smax = y.max and x.nsmax = y.nmax. If y.max = x.smax, add y.nmax to x.nsmax. If y.max < x.smax, nothing changes. (Since y.smax < y.max, y's second-maximum can never affect the result in this case.) We then return x as the result of this combining operation.

• Case 2: x.max < y.max
This case is symmetric to Case 1 with the roles of x and y swapped. We then return y as the result of this combining operation.

• Case 3: x.max = y.max
In this case, we add y.nmax to x.nmax and then merge the second-maximums: if y.smax > x.smax, set x.smax = y.smax and x.nsmax = y.nsmax; if y.smax = x.smax, add y.nsmax to x.nsmax. We then return x as the result of this combining operation.

Once all the elements have been processed, our final answer lies in the nsmax variable of the root.

### COMPLEXITY:

\mathcal{O}(N\log N) per test case.

### SAMPLE SOLUTIONS:


In Case 3, x should be returned instead of y as mentioned in the editorial.

Can the tester please explain his solution?


Yes, thank you. Which part of it is unclear to you?

I used BITs only, no segment trees.

I started by compressing the input and computing the length L[i] of the LIS ending at each A[i]. Then, while we could count the number of increasing sequences of each length k ending at each A[i] using a BIT for each k and the same approach as for L[i] (with counting instead of max-ing), we only need k \ge L[i]-1, which gives only O(N) queries on BITs.

The next problem is space: one BIT costs O(N) space, which is too much. But we already know on which numbers we’ll have updates/queries for each k - they’re given just by L[i] and A[i] - so we’ll compress them for each k (using map<>s) and then build the BITs of optimal size for each k. The sum of those sizes is the same as the number of updates/queries, O(N).

Each time before an operation on those BITs, we use one O(\log{N}) operation on a map<>, so the time complexity is still O(N\log{N}). See my code for more details.
