How to estimate if an algorithm performs within the time limits?

Sometimes I put a lot of thought into coming up with an algorithm and then code it carefully, only to find out that it gives a TLE.

How can we estimate whether an algorithm of a given complexity fits within the input constraints, before we start coding?

For example:

if the input is of length 100000 or 10000,

and I just want to know whether an O(n^3) solution gives TLE or not.

Generally, the problem states the time limit to be 1 or 2 seconds.

So, with this kind of information, can we estimate the number of seconds the program will take on a large input, and get a sense of whether it will TLE, without coding it and then testing it on the judge? Please help…


Approximately, you can perform about 10^8 to 10^9 simple operations in one second… so with that you can estimate whether your algorithm will run in time or not.


With a time limit of 1 sec, O(n^3) with n <= 10^3 will pass within the limit, but n <= 10^4 will give TLE. (n^3 for n = 10^4 is 10^12, but the judging servers can do only 10^8 to 10^9 computations per second.)


Agree with chalubhalu that you can perform about 10^8 computations per second.
But if you want to practically measure the execution time of your program in C++, you can try this:

  1. Generate your input in a file, say test.in (the name is just an example), with worst-case size.
  2. Then read from that file and time your solution:


    #include <iostream>
    #include <fstream>
    #include <ctime>
    using namespace std;

    int main() {
        ofstream fout("test.out");
        ifstream fin("test.in");  // the generated worst-case input

        // your code

        cout << "Done in " << (double)clock() / CLOCKS_PER_SEC << " sec" << endl;
        return 0;
    }


For SPOJ: most of the problems get judged on a 733 MHz CPU. So, to match the slowness of the judging system, you can replace CLOCKS_PER_SEC with 7.33e8 (733 MHz).

Now you can check: if the time taken is more than 1 or 2 seconds, you are going to get TLE!
I hope this will help you.
Any further queries are welcome!


This is not very practical, in fact: you do not know how slow the judging machine is, and the average user here won't test their code with worst-case input…


For SPOJ they use a 733 MHz clock,
so in place of CLOCKS_PER_SEC you can put their clock rate!

This is a tricky question. The most basic estimate is to just calculate the number inside the O-notation: if it's up to about 10^7, then it will probably run in 1 second (the old 10^6 rule of thumb is outdated, since modern machines are faster). That doesn't always work, though, because there's also a constant factor.

Different algorithms have different constants - in general, dynamic programming is really fast. Some operations are slow - modulo (which slows down the mentioned DP significantly), if-else conditions, standard minimum/maximum etc. - while others, like bitwise operations, are really fast and a good way to optimize a program (they can be used for faster min/max functions, for example). Some data structures have a large constant - if we just limit ourselves to trees, then it goes from red-black trees (STL set<>/map<>), down to interval trees - lazy-loaded, then simple - and BITs (Fenwick trees), which have the best constant. The difference is so great that you can sometimes replace a map<> with a BIT, take on an additional log-factor in complexity, and still improve the actual running time!

Then, there are small tricks like adding a condition which for most test data makes the code skip some redundant operations, or noticing that it’s hard to make test data for which some assumption doesn’t hold. But that’s another topic… the point is that sometimes, these tricks (they can be in your code even if you don’t know about it :D) can improve your constant factor, or even complexity, noticeably.

In short, it all depends on experience. As you code more algorithms and note how fast they run, you'll get better at estimating what could pass, even before coding it.


@xellos0, thank you very much… :)



What would be the highest feasible time complexity for n = `10^15` when the time limit per test case is 4 seconds?