Weird behaviour in C++

@konfused (a codechef member) wrote two C++ programs that logically do the same thing but are implemented in two different ways, and the two implementations give different answers, which should not happen. Check this [link][1]. The two programs are: [http://ideone.com/ObyWf7][2] and [http://ideone.com/wiv9ya][3]. I wrote an additional [program][4] to verify the value of $2^{58}$ in both implementations, and it is the same. Can somebody explain this weird behaviour?


  [1]: https://discuss.codechef.com/questions/75203/sprnmbrs-editorial/75343
  [2]: http://ideone.com/ObyWf7
  [3]: http://ideone.com/wiv9ya
  [4]: http://ideone.com/VYcyeo

Nasty, but there is an explanation. The compiler does not stupidly translate each instruction into machine language; it optimizes. One of the optimization techniques is constant propagation. So the value of ratio is easily computed by the compiler. Normally constant propagation does not reach into library functions, but the mathematical functions are an exception: gcc can evaluate them at compile time via its built-ins. So the compiler is actually able to compute log(3)/log(3) = 1, and the translated program looks more like the machine-language equivalent of

    #include <iostream>

    int main()
    {
        std::cout << 1;
        return 0;
    }

In the second case the compiler is probably not able to evaluate the for-loop completely, so that is done at runtime, but the constant term log(3) can still be computed at compile time. This is where the problem starts. The first log is evaluated at runtime using the (probably) dynamically linked math library, while the second log is evaluated using the built-in function in gcc. And they seem to produce different results. If you print the difference of the two logs you get a double value on the order of 10^{-17}, well below double precision, but the quotient is then slightly less than one and gets rounded down to zero by the integer conversion. Even if the two functions produce exactly the same results, there is still the possibility that x/y is not 1 even when x == y in double precision, since the FPU of some processors computes intermediate results at a higher precision (like 80-bit) than the storage registers (which hold 64-bit doubles).

I don’t know if this is actually considered bad behaviour, gcc-4.8 on my machine doesn’t produce different results for the two programs.

From a general point of view, the problem with the program is the conversion from double to int, which is numerically unstable: for a tiny difference in the input (10^{-17}) you can get a relatively big difference in the output (1). And rounding doesn't help in general; it only shifts the problem by 0.5.

So the consequence is that you have to avoid the integer conversion altogether if you want exact results, or compare with some EPS buffer if you want to check which integer the value is.


So, what you want to say is: in the first program, both logs are evaluated using the built-in function in gcc, but in the second program the two logs are evaluated using two different log functions (whose implementations might differ), so they happen to produce different results. Correct me if I am wrong.

yup, that was my first explanation, but now I think it's even worse. I'm going to write something more today
