Sometimes I put a lot of thought into coming up with an algorithm, then code it carefully, only to find out that it gives a TLE.

How can we estimate, before starting to code, whether an algorithm with a given complexity fits the input constraints?

For example:

if the input is of length 100000 or 10000,

I just want to know whether an O(n^3) solution gives TLE or not.

Generally, the problem states the time limit to be 1 or 2 seconds.

So, with this kind of information, can we estimate how many seconds the solution will take on a large input, and get a sense of whether it will TLE, without coding it and testing it against the judge? Please help…

With a time limit of 1 sec, an O(n^3) solution with n <= 10^3 will pass within the limit, but for n <= 10^4 it will give TLE. (n^3 for n = 10^4 is 10^12, but the judging servers can only do about 10^8 to 10^9 computations per second.)
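If you want to get a feel for these numbers on your own machine, here is a minimal sketch (the triple loop is just a stand-in for whatever work a real O(n^3) solution would do):

#include <cstdio>
#include <ctime>

int main()
{
    const int n = 1000;    // try n = 1000 first; n = 10000 would take hours
    long long sum = 0;
    clock_t start = clock();
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            for (int k = 0; k < n; ++k)
                sum += i ^ j ^ k;  // one cheap operation per iteration
    double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;
    // printing sum stops the compiler from optimizing the loop away
    printf("n = %d: %.2f sec (checksum %lld)\n", n, elapsed, sum);
    return 0;
}

If n = 1000 already takes close to a second locally, an O(n^3) solution for n = 10^4 has no chance on a slower judge.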

Agree with chalubhalu that the judge only allows a limited number of computations per second; the classic conservative rule of thumb is about 10^6.
But if you want to practically measure the execution time of your program in C++, you can try this:

#include <iostream>
#include <fstream>
#include <ctime>
using namespace std;

int main()
{
    ofstream fout("test.out");
    ifstream fin("test.in");

    //
    // your code
    //

    // cast to double - plain clock() / CLOCKS_PER_SEC is integer division
    // and truncates to whole seconds
    cout << "Done in " << (double)clock() / CLOCKS_PER_SEC << " sec" << endl;
}

For SPOJ: most of the problems get judged on a 733 MHz CPU. So, to match the slowness of the judging system, you should replace CLOCKS_PER_SEC with 7.33e8 (733 MHz).
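One caveat before trying that: on most systems clock() does not actually tick once per CPU cycle, so the literal replacement can be misleading. A hedged sketch of the same idea is to scale the locally measured time by the ratio of clock speeds (the 3 GHz local figure below is an assumption; substitute your own CPU's speed):

#include <iostream>
#include <ctime>
using namespace std;

int main()
{
    const double JUDGE_HZ = 7.33e8;  // 733 MHz, the figure quoted above
    const double LOCAL_HZ = 3.0e9;   // ASSUMPTION: a 3 GHz local machine

    // ... your code ...

    double localSec = (double)clock() / CLOCKS_PER_SEC;
    cout << "Estimated judge time: " << localSec * (LOCAL_HZ / JUDGE_HZ)
         << " sec" << endl;
}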

You can then check: if the estimated time is more than 1 or 2 sec, you are going to get TLE!
I hope this will help you.
Any further queries are welcome!

This is not very practical, in fact: you do not know how slow the judging machine is, and the average user here won't test his code with worst-case input…

This is a tricky question. The most basic estimate is to just plug the input size into the O-notated complexity and calculate the resulting number: if it's up to 10^7, then it'll probably run in 1 second (the old 10^6 figure is outdated, since modern machines are faster). That doesn't work all the time, though, because there's also a constant factor.
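For example, take n = 10^5 with a 1-second limit: O(n log n) is roughly 10^5 * 17 ≈ 1.7 * 10^6 operations (comfortable), O(n sqrt(n)) is roughly 3 * 10^7 (usually fine), and O(n^2) is 10^10 (hopeless, regardless of the constant factor).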

Different algorithms have different constants - in general, dynamic programming is really fast. Some operations are slow - modulo (which slows down the mentioned DP significantly), if-else conditions, standard minimum/maximum etc. - while others, like bitwise operations, are really fast and a good way to optimize a program (they can be used for faster min/max functions, for example). Some data structures have a large constant - if we limit ourselves just to trees, the constant decreases as you go from red-black trees (STL set<>/map<>), down to interval trees - first lazy-propagated, then simple ones - with BITs (Fenwick trees) having the best constant of all. The difference is so great that you can sometimes replace a map<> with a BIT, take on an additional log factor in complexity, and still improve the actual running time!
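To make the bitwise min/max remark concrete, here is one classic branchless trick (a sketch only; modern compilers often turn a plain comparison into a conditional move anyway, so always measure before relying on it):

#include <cstdio>

// Branchless min: when a < b, -(a < b) is all one-bits, so the XOR mask
// selects a; otherwise the mask is zero and b is kept unchanged.
inline int fastMin(int a, int b)
{
    return b ^ ((a ^ b) & -(a < b));
}

int main()
{
    printf("%d\n", fastMin(3, 7));   // prints 3
    printf("%d\n", fastMin(9, -2));  // prints -2
    return 0;
}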

Then, there are small tricks like adding a condition which, for most test data, makes the code skip some redundant operations, or noticing that it's hard to construct test data for which some assumption doesn't hold. But that's another topic… the point is that sometimes these tricks (they can be in your code even if you don't know about it :D) can noticeably improve your constant factor, or even your complexity.
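As a toy illustration of such a shortcut condition (the problem here is made up purely for the example): when searching for the smallest pair sum in an ascending array of positive numbers, one extra comparison lets us abandon the outer loop early, without changing the worst-case complexity:

#include <cstdio>
#include <algorithm>

int main()
{
    int a[] = {1, 2, 5, 9, 14, 30};  // sorted ascending, all positive
    int n = 6, best = 1 << 30;
    for (int i = 0; i < n; ++i) {
        if (a[i] >= best) break;     // prune: no later pair can beat 'best'
        for (int j = i + 1; j < n; ++j)
            best = std::min(best, a[i] + a[j]);
    }
    printf("%d\n", best);            // prints 3
    return 0;
}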

In short, it all depends on experience. When you've coded more algorithms and noted how fast they ran, you'll be able to better estimate what could pass, even before coding it.
