Initially, when I began reading papers on this problem, I found some that mentioned algorithms by name but gave no other details. The names included base 3, base 5, sphere methods, a block algorithm, and a KD algorithm. The only one I could understand was the base-3 algorithm, which gave me 0.41 points. I didn't understand the others at all and therefore couldn't implement them. I later improved my score using an exhaustive, bad-time-complexity solution, but I couldn't combine it with the base-3 solution. I would like to know what other ideas people implemented, especially the toppers of this problem, and also the details if anyone implemented the algorithms mentioned above, and how they handled the time limit when exhaustive algorithms were involved.

Can you provide links for the algorithms you mentioned? I only wrote worst-case programs for this question… Due to my exams I was not able to solve many problems this time… but the links would surely help…

I used the base-3 property and brute-forced the remaining numbers. This gave me double the score of base-3 numbers alone.
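The base-3 property referred to here: numbers whose base-3 digits are all 0 or 1 can never contain a 3-term AP, because if x + z = 2y for such numbers, the digitwise additions have no carries and force x = z. A minimal sketch (the helper name is my own):

```go
package main

import "fmt"

// noTwoInBase3 reports whether n's base-3 representation avoids the digit 2.
// The set of all such numbers contains no 3-term arithmetic progression.
func noTwoInBase3(n int) bool {
	for n > 0 {
		if n%3 == 2 {
			return false
		}
		n /= 3
	}
	return true
}

func main() {
	// Select the digit-2-free numbers among 1..20.
	var picked []int
	for n := 1; n <= 20; n++ {
		if noTwoInBase3(n) {
			picked = append(picked, n)
		}
	}
	fmt.Println(picked) // → [1 3 4 9 10 12 13]
}
```

Only 7 of the first 20 naturals qualify, which is exactly why brute-forcing additional numbers on top of the base-3 set pays off.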

@devanshug : www.cs.umd.edu/~gasarch/ramseycourse/3aplarge.pdf

This is probably the best I could manage.

jatin_69 is right. I too expanded my selected subset by brute force after using base 3. As an example, the base-3 method selects only 7 of the first 20 natural numbers, whereas 9 can be selected. Apart from this, many people have used the Stanley sequence. I was also able to jump a few ranks by processing the given array in a different order: a[1], a[n], a[2], a[n-1], a[3], and so on.

The simplest approach, which gave me 0.69 points (beyond which I couldn't think :P):

1) Sort the numbers.

2) Assume the first two numbers are in your answer.

3) To check whether the next number belongs in your answer: find its AM (arithmetic mean) with every previously selected number. If any of those means is already selected, reject the number and proceed to the next input number. If none of the means are present, select the number.

I hope this helps…

And thanks, kavishrox, for mentioning those methods…

My approach was to use the base-3 method as well. First I removed all the numbers congruent to 0, 1, 3, or 4 modulo 6; this discards roughly 4/6 (about 66%) of the numbers, the ones with a good probability of being in an AP. Then I kept taking numbers as long as they did not form an AP with the numbers already taken. This is where I could have optimized my program, but I could not :(. One more observation: most of the numbers in your series will come from the first and last parts of the sorted sequence; very few will come from the middle.

Another idea I had in mind was to convert the problem into the standard maximum independent set problem, but I could not code this well.

There is a similar question in the previous contests of Codechef. Its worth reading… http://www.codechef.com/COOK15/problems/GENARSEQ

but this approach gave me only 0.69 points. I am waiting for a better algo…

I got 0.75 points with a simple method that I didn't optimize much (run time = 6 seconds with Go). I think it could potentially reach 0.85 points with some fine-tuning.

http://www.codechef.com/viewsolution/2243028

I had two helper methods:

```
// hasTwo reports whether num's base-3 representation avoids the digit 2
// (the name notwithstanding). Any set of such numbers is AP-free.
func hasTwo(num int) bool {
	for num > 0 {
		if num%3 == 2 {
			return false
		}
		num = num / 3
	}
	return true
}

// pValue returns the density of the input (count divided by the maximum
// value) together with the maximum value itself.
func pValue(nums []int) (float64, int) {
	max := 0
	for _, v := range nums {
		if v > max {
			max = v
		}
	}
	p := float64(len(nums)) / float64(max)
	return p, max
}
```

Here pValue is P, the probability of taking a number (i.e. the density of the numbers given).

I assumed the numbers were already sorted. First I count how many numbers fall into each of 1500 buckets, so numbers close together get put into the same bucket. Then each bucket is given a score based on collisions with other buckets. If three buckets form an arithmetic progression, their scores increase depending on how many numbers are in the three buckets. Eventually, numbers in buckets with low collision scores are given priority.

```
pV, max := pValue(ints)
num_buckets := 1500
buckets := make([]int, num_buckets) // make gives a zeroed int slice of length 1500
bucket_scores := make([]int, num_buckets)
d := max/num_buckets + 1
for _, v := range ints {
	buckets[v/d]++ // each bucket counts its values
}
for i := range buckets {
	for j := range buckets {
		diff := i - j // renamed from d to avoid shadowing the bucket width above
		k := i + diff
		if j >= i || k >= num_buckets {
			break
		}
		// should have a better scoring function
		score := (buckets[i] + buckets[j] + buckets[k]) / 3
		bucket_scores[i] += score
		bucket_scores[j] += score // increase score of bucket based on collision
		bucket_scores[k] += score
	}
}
```

now value v has score bucket_scores[v/d].

Also, if hasTwo(v) and the density p > 0.74, then we increase the score (and hence the priority) of value v. (Note that no three numbers satisfying hasTwo() can collide.)

Note that some buckets that collide with each other will have the same score, so I go through the values in *rough* order of priority rather than exact order of priority.

Basically, each time I add a number x to the solution, I loop through all previously added numbers y. Then I ensure (x+y)/2 is marked as taken, and y + (y-x) and x - (y-x) are marked as well (or thereabouts), so that no number can go in those spots. I go through numbers in rough order of priority; if a number can still go somewhere, I add it and mark the related spots as taken.

```
for _, v := range bucket_scores {
	if v < min_score {
		min_score = v
	}
}
min_score += 2
for cap := min_score; cap < min_score+100; cap += 10 {
	for _, rej := range rejected {
		if !taken[rej] && bucket_scores[rej/d] < cap {
			ok := true
			for _, v := range added {
				pos1 := 2*rej - v
				pos2 := -rej + 2*v
				if pos1 >= 0 && pos1 <= 100000 && taken[pos1] {
					ok = false
					break
				}
				if pos2 >= 0 && pos2 <= 100000 && taken[pos2] {
					ok = false
					break
				}
				if (rej-v)%2 == 0 {
					pos3 := (rej + v) / 2
					if taken[pos3] {
						ok = false
						break
					}
				}
			}
			if ok {
				added = append(added, rej)
				solution = append(solution, rej)
				//fmt.Println(rej)
				taken[rej] = true
			}
		}
	}
}
```

That was it. Other things to try are fiddling with how much cap increases by; e.g. once cap > min_score + 100, I might start increasing cap by 200 after that.

I used only a brute-force approach, which was good enough for a score of 4.04.

Anyway, I am really waiting for the editorials to come out so we get to know the approach of the problem setter, i.e. @ACRush.

It might even be superior to that of @kutengine!! Who knows!!

As far as I’ve seen, none of the top-scoring competitors have explained their approach yet. So I'll give mine, which uses some simple insights.

Scoring: **5.183839 [0.89pts]** (which is according to the editorial pretty close to that of ACRush)

My approach was iterative, trying to improve at each step. Some parts look a bit like @robertking's, if I read his post correctly.

First I wrote a simple algorithm that works in all cases: you can add the numbers in any order, and a number is only added to the list if it won't create a new sequence/collision. I think this is pretty straightforward, so I'll skip the details.

But I use this routine all the time, so it is important to see how I made it work:

- Start by finding a solution that is correct (I will come back to this later).
- At each step, check for each pair of numbers in the solution whether there exists a third number among all the given numbers that would form an arithmetic sequence with it. *(Note that this third number is not in the solution, of course; otherwise it wouldn't be a solution.)*
- Keep, at each step, a counter per number with the count of (what I called) collisions, and at the end sort the numbers by fewest collisions first.
- Try to add them to the new solution in that order.
- Keep track of the best solution over all iterations (in the ideal case, the last one).

This improves the solution in most cases, since a number with many collisions is likely, once dropped, to make space for multiple new numbers from outside the solution.
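The collision counting described above can be sketched roughly as follows (all names are mine, and the real scoring surely differs in detail):

```go
package main

import (
	"fmt"
	"sort"
)

// collisionCounts counts, for each member of the current solution, how many
// values in the full input would complete a 3-term AP with it and another
// member. Members with few collisions block few outside numbers, so the
// next iteration tries to re-add them first.
func collisionCounts(solution, all []int) map[int]int {
	inAll := map[int]bool{}
	for _, v := range all {
		inAll[v] = true
	}
	inSol := map[int]bool{}
	for _, v := range solution {
		inSol[v] = true
	}
	counts := map[int]int{}
	for i, x := range solution {
		for _, y := range solution[i+1:] {
			// candidate third terms: beyond y, before x, and the midpoint
			cands := []int{2*y - x, 2*x - y}
			if (x+y)%2 == 0 {
				cands = append(cands, (x+y)/2)
			}
			for _, z := range cands {
				if inAll[z] && !inSol[z] {
					counts[x]++
					counts[y]++
				}
			}
		}
	}
	return counts
}

func main() {
	sol := []int{1, 2, 6}
	counts := collisionCounts(sol, []int{1, 2, 3, 4, 5, 6})
	// order members by fewest collisions first
	sort.SliceStable(sol, func(a, b int) bool { return counts[sol[a]] < counts[sol[b]] })
	fmt.Println(sol) // → [1 6 2] (2 blocks more outside numbers than 1 or 6)
}
```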

Also, **complexity** remains relatively low: I think no solution grows beyond about 3000 numbers, even for N = 100000 (sorry for not rechecking this). So at O(S² + N log N) per step, where S is the length of the current solution and N the total number of values, we can iterate multiple times within the time limit.

**Improving:**

*1)* Since the algorithm improves at each step, it is also important to start well.

Therefore the following starting approach seemed the winner:

- Start with left = 0, right = N-1, and add both numbers if they don't create a collision together.
- At each step, move inward toward the middle.
- Repeat while left <= right.

(Again, the add method described earlier is used here.)

(By the way, this starting algorithm alone already gives a reasonable result on its own.)

*2)* Taking the complexity section into account, we can improve much more when N is small.

I took advantage of that: for the small sets I could iterate more than 60 times, compared to fewer than 10 for the biggest.

*3)* Using the entire sorted list at each step won't give very good results. According to the algorithm, the numbers in the current solution are more likely to collide, which may be misleading, so we have to give them higher priority. I configured my algorithm to take many numbers from the entire list at the start; but the further we go, the better our solution becomes, so the more we want to keep from the current solution at each step.

*4)* Finally, I improved performance so I could do more iterations. One thing I did was to limit the work at each step: we don't have to check every number in the list, since the chance that one of the last numbers in the list fits in the new solution is smaller than for one with fewer collisions. So once 60% of the list had been checked (i.e. tried against the new solution), I stopped and went to the next iteration. We may sometimes miss numbers that would have fit, but doing more iterations gave better results.

I hope this approach is reasonably clear; any feedback is very welcome.