CHSIGN Editorial (Unofficial)

For CHSIGN I used dynamic programming. The key observation is that it suffices to ensure that every contiguous subsequence of length two or three has a strictly positive sum. Any longer contiguous subsequence can then be decomposed into contiguous subsequences of lengths two and three, and if each part has a strictly positive sum, so does the whole.
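Concretely, an even length splits into all twos, and an odd length into one three plus twos. A throwaway helper (the name is my own) to illustrate:

```python
def decompose(length):
    """Split a window length >= 2 into parts of size 2 and 3."""
    if length % 2 == 0:
        return [2] * (length // 2)          # even: all 2s
    return [3] + [2] * ((length - 3) // 2)  # odd: one 3, rest 2s
```

For example, a window of length 7 splits as 3 + 2 + 2.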

We can then derive a dynamic programming solution that only considers the two most recent values, which gives a system with three states. We define the following functions to evaluate the optimal sum in each state:

PP(i) will be the optimal value using the terms A_1, A_2,\dots, A_i, changing the sign of neither A_{i-1} nor A_i.

NP(i) will be the optimal value using the terms A_1, A_2,\dots, A_i, changing the sign of A_{i-1} but not of A_i.

PN(i) will be the optimal value using the terms A_1, A_2,\dots, A_i, not changing the sign of A_{i-1} but changing that of A_i.

Here the letters P and N encode the signs of the last two terms. If the sum of the last two terms with the given signs is not strictly positive, we assign the function the value \infty. We note that NN is not possible, since making the last two terms negative would produce a contiguous subsequence of length two whose sum is not strictly positive.

The sum of the optimal array will be the minimum of \{PP(N),NP(N),PN(N)\}. Those values can be computed iteratively using the following update rules for i>2:

PP(i)=\min\big\{PP(i-1),NP(i-1)\big\}+A_i
NP(i)=\begin{cases}PN(i-1)+A_i,\qquad &\text{if }A_i>A_{i-1}\\\infty,\qquad&\text{otherwise.}\end{cases}
PN(i)=\begin{cases}\min\big\{PP(i-1),NP(i-1)\big\}-A_i,\qquad &\text{if }A_{i-1}>A_i+A_{i-2},\\ PP(i-1)-A_i,\qquad &\text{if }A_i+A_{i-2}\geq A_{i-1}>A_i,\\\infty,\qquad&\text{otherwise.}\end{cases}

The PP transition is always feasible: the last two terms are positive, so their pairwise sum is positive, and whenever NP(i-1) is finite we already have A_{i-1}>A_{i-2}, so the length-three sum -A_{i-2}+A_{i-1}+A_i is positive as well.

For the base cases with i = 2 we have PP(2)=A_1+A_2, which is always feasible since both terms keep their positive signs, together with:

NP(2)=\begin{cases}A_2-A_1,\qquad&\text{if }A_2 > A_1,\\\infty,\qquad & \text{otherwise.}\end{cases}
PN(2)=\begin{cases}A_1-A_2,\qquad&\text{if }A_1 > A_2,\\\infty,\qquad & \text{otherwise.}\end{cases}
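As a quick sanity check, take the (arbitrary) array A=(3,1,2). Keeping both signs gives PP(2)=A_1+A_2=4, we have NP(2)=\infty (since A_2\le A_1) and PN(2)=A_1-A_2=2. Then NP(3)=PN(2)+A_3=4 (since A_3>A_2), PN(3)=\infty (since A_2\le A_3), and keeping every sign gives PP(3)=6. The minimum is NP(3)=4, attained by (3,-1,2): every window sum (2, 1 and 4) is strictly positive.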

Using those equations we can compute the minimum sum of the resulting sequence in \mathcal O(N) time. The method can then be augmented to recover an optimal sequence by keeping track of the state transitions: for each i we record the best way to reach each state, and afterwards we backtrack to reconstruct an optimal sequence. This does not change the time complexity, and the additional storage is only \mathcal O(N).
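The whole procedure can be sketched as follows (a minimal sketch of the recurrences above, with names of my own choosing; 0-indexed, assuming N >= 2 and all A_i > 0, with parent pointers for the backtracking step):

```python
import math

def min_sum_array(A):
    """DP over the sign pattern of the last two elements.

    State 0 = PP, 1 = NP, 2 = PN (signs of A[i-1], A[i]); NN is
    infeasible since two consecutive negatives sum to <= 0.
    Returns (minimum sum, one optimal signed array).
    """
    n = len(A)
    INF = math.inf
    dp = [[INF] * 3 for _ in range(n)]
    par = [[None] * 3 for _ in range(n)]
    # Base cases for the pair (A[0], A[1]).
    dp[1][0] = A[0] + A[1]          # PP: always feasible
    if A[1] > A[0]:
        dp[1][1] = A[1] - A[0]      # NP: flip A[0]
    if A[0] > A[1]:
        dp[1][2] = A[0] - A[1]      # PN: flip A[1]
    for i in range(2, n):
        # PP: predecessor PP or NP.  (A finite NP(i-1) already implies
        # A[i-1] > A[i-2], so the length-3 window stays positive.)
        prev = 0 if dp[i-1][0] <= dp[i-1][1] else 1
        dp[i][0], par[i][0] = dp[i-1][prev] + A[i], prev
        # NP: predecessor must be PN, and we need A[i] > A[i-1].
        if A[i] > A[i-1] and dp[i-1][2] < INF:
            dp[i][1], par[i][1] = dp[i-1][2] + A[i], 2
        # PN: needs A[i-1] > A[i]; predecessor NP additionally needs
        # A[i-1] > A[i] + A[i-2] for the length-3 window.
        if A[i-1] > A[i]:
            best, p = dp[i-1][0], 0
            if A[i-1] > A[i] + A[i-2] and dp[i-1][1] < best:
                best, p = dp[i-1][1], 1
            if best < INF:
                dp[i][2], par[i][2] = best - A[i], p
    s = min(range(3), key=lambda t: dp[n-1][t])
    total = dp[n-1][s]
    # Backtrack: state s at position i fixes the signs of A[i-1], A[i].
    signs = [1] * n
    for i in range(n - 1, 0, -1):
        if s == 2:          # second letter N -> A[i] flipped
            signs[i] = -1
        if s == 1:          # first letter N -> A[i-1] flipped
            signs[i-1] = -1
        s = par[i][s] if par[i][s] is not None else 0
    return total, [a * sg for a, sg in zip(A, signs)]
```

Consecutive states agree on the sign of their shared element by construction (e.g. NP at i can only follow PN at i-1), so the backtracking pass is consistent.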

Here is my Python code:


Hey Robert, I used the exact same approach for my solution but it failed.
Can you share some of the trickier test cases that you encountered?


I copied your code from here:

It looks like it fails the test case:



5 5

Using your code gives me the answer:

10 10

One of the best editorials ever.

Iā€™m glad you liked it!