# SEAPERM - Editorial

Author: Sergey Nagin
Tester: Shiplu Hawlader and Mahbubul Hasan
Editorialist: Lalit Kundu

CHALLENGE

### PREREQUISITES:

Heuristic, Greedy, Mixture of Methods, Test Generation Plans

### PROBLEM:

Given an array A’ of N integers, applying a permutation P to A’ yields A.
f(A, i) = S - A[i] - A[i + 1] - … - A[j], where j is the largest index such that A[i] + A[i + 1] + … + A[j] <= S; if A[i] > S, then f(A, i) = S.
Find a permutation P such that (f(A, 1) + f(A, 2) + … + f(A, k)) / k is as small as possible.
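The cost function can be sketched directly from the definition above (a hedged reading: the name `f_val`, 0-based indexing, and the sample values of `S` are assumptions for illustration, since the full statement is only summarised here):

```python
def f_val(A, i, S):
    """f(A, i) per the definition above (0-based index i)."""
    # If the first element alone exceeds S, f is defined to be S.
    if A[i] > S:
        return S
    total = 0
    j = i
    # Extend j as far as possible while the running sum stays <= S.
    while j < len(A) and total + A[j] <= S:
        total += A[j]
        j += 1
    # f(A, i) is the leftover gap below S.
    return S - total

# Example: with S = 6, starting at index 0 we can take 3 + 2 = 5.
print(f_val([3, 2, 5, 1], 0, 6))  # → 1
print(f_val([7, 1], 0, 6))        # → 6 (first element exceeds S)
```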

### EXPLANATION:

This is a challenge problem. The most important thing in solving a challenge problem is to find approximation/partial methods using a combination of search, greedy, dynamic programming, etc., because this challenge problem is NP-hard. It is also important to write a local test system to help you find the best strategy (almost all of the top-ranked contestants wrote their own test code).

Since there were only 20 test files, you could, by making assertions and similar probes, find out the exact values of N and K and write specialised approaches for different K. That is what both of the top two solvers did.
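One way to carry out such probing (a hypothetical sketch; the helper name and thresholds are assumptions, not anyone's actual submission) is to assert a guess about the hidden size: a wrong guess produces a runtime-error verdict, a correct one does not, so a handful of submissions pins down N for each file:

```python
def check_guess(n, bound):
    # Submitted inside a real solution, this raises AssertionError
    # (a runtime-error verdict) whenever the hidden n exceeds the
    # guessed bound; otherwise the program runs normally. Varying
    # `bound` across submissions binary-searches the hidden n.
    assert n <= bound
    return True
```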

The two most obvious greedy approaches were to sort the array, and to greedily assign integers to the new array so that f(A, i) is minimised for each i in turn.
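A minimal sketch of these two baselines (self-contained; the helper `f_val`, the sample values, and scoring by the average of f over every starting index are illustrative assumptions standing in for the statement's (f(A,1) + … + f(A,k)) / k objective):

```python
def f_val(A, i, S):
    if A[i] > S:
        return S
    total, j = 0, i
    while j < len(A) and total + A[j] <= S:
        total += A[j]
        j += 1
    return S - total

def avg_cost(A, S):
    # Average of f over every starting index (an assumed stand-in
    # for the statement's averaged objective).
    return sum(f_val(A, i, S) for i in range(len(A))) / len(A)

def greedy_per_index(A, S):
    # Greedy baseline: build the new array left to right, at each
    # step taking the unused value that best fills the current
    # segment up to S; start a new segment when nothing fits.
    pool = sorted(A)
    out, seg = [], 0
    while pool:
        fit = [x for x in pool if seg + x <= S]
        pick = max(fit) if fit else min(pool)
        if seg + pick > S:
            seg = 0  # nothing fits: begin a fresh segment
        seg += pick
        pool.remove(pick)
        out.append(pick)
    return out

A, S = [4, 9, 3, 7, 1, 6, 2, 8], 10
candidates = [sorted(A), sorted(A, reverse=True), greedy_per_index(A, S)]
best = min(candidates, key=lambda B: avg_cost(B, S))
```

Keeping both sorted orders alongside the per-index greedy and taking the cheapest gives a cheap portfolio baseline before any local search.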

Another approach, used in the 4th-best solution by @manish05, was simulated annealing on top of a greedy start; it is a general local-search technique that can be applied to almost any challenge problem.

Since I was not able to understand the top rankers’ code, I would like to invite @kutengine and @aawisong, as well as others, to explain their approaches.

### AUTHOR’S AND TESTER’S SOLUTIONS:

To be updated soon.


Can anyone kindly provide some approach to solve this particular problem? I am not able to find a way.


Could you please elaborate on how “simulated annealing” can be applied to this problem?
It is very difficult to understand anything from the given link.

@jaggi1234

Hope this helps:

Simulated annealing is a technique inspired by metallurgy. If you watch the making of swords, ornaments, etc., you will see that the metal is cooled slowly. Initially the molecules in the metal have a lot of energy: they move about quickly and explore many configurations. As the metal slowly cools, they settle into comfortable positions of minimum potential energy.

So what’s happening is they are using their initial kinetic energy to reach a stable state.

In computers, we ‘simulate’ this behaviour by treating the solution parameters as the molecules. The fitness of any given state can be calculated from these parameters. Initially, the parameter values change drastically and randomly (driven by random variables), with the size of a change inversely proportional to the current fitness. Later, the changes become small, which helps the search stay close to a global optimum.

Finally, after a number of iterations (i.e., time), we take the final parameter values. There are many optimisations of this approach, but for now this is good enough.

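To make this concrete for the permutation setting, here is a minimal simulated-annealing sketch (not any contestant's actual solution; the cost function, the sample `S`, the random-swap neighbourhood, and the geometric cooling schedule are all illustrative assumptions):

```python
import math
import random

def f_val(A, i, S):
    if A[i] > S:
        return S
    total, j = 0, i
    while j < len(A) and total + A[j] <= S:
        total += A[j]
        j += 1
    return S - total

def cost(A, S):
    # Average of f over all starting indices (an assumption for
    # illustration; the real objective uses the problem's k).
    return sum(f_val(A, i, S) for i in range(len(A))) / len(A)

def anneal(A, S, iters=5000, t0=1.0, t_end=1e-3, seed=0):
    rng = random.Random(seed)
    cur = list(A)
    cur_cost = cost(cur, S)
    best_arr, best_cost = list(cur), cur_cost
    for step in range(iters):
        # Geometric cooling: temperature decays from t0 to t_end.
        t = t0 * (t_end / t0) ** (step / iters)
        i, j = rng.randrange(len(cur)), rng.randrange(len(cur))
        cur[i], cur[j] = cur[j], cur[i]  # random swap neighbour
        delta = cost(cur, S) - cur_cost
        # Always accept improvements; accept worse moves with
        # probability exp(-delta / t), so early search explores
        # broadly and late search settles down.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur_cost += delta
            if cur_cost < best_cost:
                best_arr, best_cost = list(cur), cur_cost
        else:
            cur[i], cur[j] = cur[j], cur[i]  # undo the swap
    return best_arr, best_cost

best, bc = anneal([4, 9, 3, 7, 1, 6, 2, 8], S=10)
```

The annealed ordering is never worse than the starting one, since the best state seen is retained separately from the current state.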