Let’s consider Game 1, in which only horizontal and vertical movements are allowed. One can easily see that Yoda wins only when both N and M are odd.
In the other game, Chef wins only when both N and M are even.
This reduces the problem to the following:

- If Chef is winning in both games, output 1.
- If Chef is losing in both games and P \neq 0, output 0. If P = 0, output 1.
- If Chef is losing in one game and winning in the other, we need to calculate the probability of winning at least P games out of K.
This probability is given by the binomial distribution as \frac{1}{2^K} \sum_{i=P}^{K}\binom{K}{i}
Now the task is to calculate \binom{K}{i} for i = P to K. You can do this using BigDecimal in Java or Decimal in Python. This works for the small test cases and gives you 40 points, but on the large test cases it results in TLE, even if you set the precision to 6 decimal places.
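For reference, here is a minimal sketch of that slow exact approach in Python (the function name is my own; math.comb supplies the exact binomial coefficients and Decimal keeps the final quotient at a fixed precision):

```python
from decimal import Decimal, getcontext
from math import comb

getcontext().prec = 10  # a few guard digits beyond the 6 we need

def prob_at_least_decimal(K, P):
    # Exact big-int sum of C(K, i) for i = P..K, then one Decimal division by 2^K
    total = sum(comb(K, i) for i in range(P, K + 1))
    return Decimal(total) / (Decimal(2) ** K)
```

For K near 10^6 the huge integers make this far too slow, which is exactly the TLE described above.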
Now, since \binom{K}{i} = \binom{K}{K-i}, the sum of the last K - P + 1 terms equals the sum of the first K - P + 1 terms:

\sum_{i=P}^{K}\binom{K}{i} = \sum_{i=0}^{K - P}\binom{K}{i}
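As a quick sanity check of this identity (my own snippet, using math.comb):

```python
from math import comb

K, P = 12, 8
left = sum(comb(K, i) for i in range(P, K + 1))       # last K - P + 1 terms
right = sum(comb(K, i) for i in range(0, K - P + 1))  # first K - P + 1 terms
print(left == right)  # True, because C(K, i) = C(K, K - i)
```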
Now, for small values of P the probability is so close to 1 that we can simply answer 1 instead of computing it. Similarly, for large values of P the probability is close to 0, and we can answer 0.
The first difference in the output occurs when the probability reaches 10^{-6} (or, symmetrically, 1 - 10^{-6}).
Substituting K = 10^5 gives P = 49162 and 50821.
Substituting K = 10^6 gives P = 497370 and 502614.
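Such cutoffs can be located with a short script. Here is a sketch of the tail probability (my own code, using math.lgamma so the huge factorials never leave log space); scanning P around K/2 for the points where this crosses 10^{-6} and 1 - 10^{-6} reproduces values like the ones above:

```python
import math

def tail_prob(K, P):
    # P(X >= P) for X ~ Binomial(K, 1/2), summed in log space
    kln2 = K * math.log(2)
    lg = math.lgamma
    return sum(math.exp(lg(K + 1) - lg(i + 1) - lg(K - i + 1) - kln2)
               for i in range(P, K + 1))
```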
Observe that these values are pretty close to \frac{K}{2}. We know that the sum of all terms is 2^K and that the series is symmetric; therefore, the sum of the first \frac{K}{2} terms is 2^{K - 1}. Now we only need to add or subtract at most 3000 terms from 2^{K - 1} to get the probability.
This can be done within the given time limit; here is my [solution](https://www.codechef.com/viewsolution/12795962).
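To make the idea concrete, here is a small sketch of that adjustment in Python (my own code, not the linked solution; for simplicity it assumes K is even, so that \sum_{i > K/2}\binom{K}{i} = \frac{2^K - \binom{K}{K/2}}{2}):

```python
from math import comb

def prob_at_least(K, P, digits=6):
    # P(X >= P) over K fair games, exact big-int arithmetic,
    # rounded to `digits` decimal places. Assumes K is even.
    half = K // 2
    upper = (2**K - comb(K, half)) // 2  # sum of C(K, i) for i > K/2
    if P <= half:
        S = upper + sum(comb(K, i) for i in range(P, half + 1))
    else:
        S = upper - sum(comb(K, i) for i in range(half + 1, P))
    # fixed-point rounding avoids ever forming 2^K as a float
    return (S * 10**digits + 2**(K - 1)) // 2**K / 10**digits
```

Since P stays within about 3000 of K/2 (otherwise we just answer 0 or 1), only a few thousand binomial coefficients are ever summed.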
You can also do it using Python libraries.
The working out of when Chef will win is given nicely in the post above. It is easy to observe the pattern for when Chef wins and when Yoda wins. The problem is quite easy when Chef’s probability of winning is either 1 or 0. (Though you must take care of the fact that when the number of games played is K = 0, it doesn’t matter what the probability is, because of the nature of the problem.) The real problem is when the probability is 1/2.
We don’t really need to use BigDecimal in Java or implement our own. I didn’t even perform any sort of precomputation.
All I did was make sure the variable I was using to compute the answer didn’t overflow, and this can be handled easily by making use of the large power of 1/2 we are dealing with!
Isn’t this unnecessarily complicated? We can use logarithms to deal with the large numbers. If we cache the logs of all factorials up to 10^6, we can calculate the probability of getting i wins out of k as P_i=\exp(\log(k!) - \log(i!) - \log((k-i)!) - k \log 2) instead of P_i=\frac{k!}{i! \, (k-i)! \, 2^k}. Summing P_i for all i in [p, k] yields the answer. Complexity is \mathcal{O}(k).
```python
import math

# precompute log-factorials once, up to the largest k in the input
lnfact = [0.0] * (10**6 + 1)
for i in range(2, len(lnfact)):
    lnfact[i] = lnfact[i - 1] + math.log(i)

def prob_at_least(p, k):
    P = 0.0
    kln2 = k * math.log(2)
    for i in range(p, k + 1):
        lnPi = lnfact[k] - lnfact[i] - lnfact[k - i] - kln2
        P += math.exp(lnPi)
    return P
```
@utkarsh1997 you are right, it was pretty close. I checked, and exactly 6 decimal digits match the actual value (as calculated by scipy) in the worst case. However, if we run the iterations from 0 to p-1 or from p to k, whichever range is shorter, the value remains accurate up to 7 decimal places.
This approach is, in my opinion, the simplest one, because no external library or complex code is required. But if it had failed I would have had to think of another way; that’s why programming is fun!
The question, as explained above, is one of simple binomial trials.
For example, with K = 10 and P = 6: \Pr(X \geq 6) = \Pr(X = 6) + \Pr(X = 7) + \Pr(X = 8) + \Pr(X = 9) + \Pr(X = 10).
We can use a technique called “Normal Approximation to Binomial Trials”.
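A sketch of that approximation for the fair-coin case (my own code; the 0.5 continuity correction is standard): X ~ Binomial(K, 1/2) is roughly N(K/2, K/4).

```python
import math

def prob_at_least_normal(K, P):
    # Normal approximation to Binomial(K, 1/2) with continuity correction
    mu = K / 2
    sigma = math.sqrt(K) / 2
    z = (P - 0.5 - mu) / sigma
    # P(Z >= z) for standard normal Z, via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2))
```

For K = 10, P = 6 this gives about 0.376 against the exact 386/1024 \approx 0.376953, and the approximation only improves as K grows.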
SciPy is a Python library for scientific computation. Two functions are relevant: binom.pmf() is for an exact number of successes, and binom.cdf() is for a cumulative count of successes.
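For example (my own sketch; note that SciPy’s discrete distributions expose pmf rather than pdf, and the survival function sf gives the upper tail directly):

```python
from scipy.stats import binom

K, P = 10, 6
exactly_P = binom.pmf(P, K, 0.5)      # probability of exactly P wins
at_least_P = binom.sf(P - 1, K, 0.5)  # P(X >= P) = 1 - cdf(P - 1)
```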
For more reference: http://joemath.com/binomial/
@ssaxena36, binom.cdf(x, n, p) is the cumulative distribution function, which gives the probability of at most (not at least) x successes out of n trials, where the probability of a single success is p. @bansal1232, the documentation has further details.
@meooow, can you please elaborate on when we can use this log approximation and when we cannot? And what would you have done if it had failed? How did you find that trick?
As @utkarsh1997 said, we should keep precision in mind when using an approach like this. But because this is a long-challenge problem, we can afford to launch some loosely calculated arrows without penalty.
And no, I didn’t think further about it after solving, but I think using scipy was a pretty cool way, which I might have come across (because I was already going through the MATLAB documentation).