C++11, just a pretty face?

Okay, so I have been revisiting my old submissions, looking for new ways to optimize them, and I came across an unusual thing. While I generally use C++11 for my programming, it seems there are a few real drawbacks to it.

While many C++11 programs would also compile under C++ 4.3.2 (those that don't use C++11-exclusive features), there are some performance differences.

I ran a simple test by submitting my code for the problem TEST.

The very same code gives different results when compiled under the two tags.


C++ 4.3.2: 4098035

C++11: 4097980

Interestingly enough, the execution times for both were:

C++11: 0.02 sec
C++ 4.3.2: 0.00 sec

And not only that, the memory usage for them was:

C++11: 3.4M

C++ 4.3.2: 2.8M

So I wonder: what exactly happens when a program is compiled under C++11? Does this mean C++11 is somewhat BAD for an environment with limited resources? If so, why bother developing a new standard that is worse than the previous one?

Also, does using compiler optimization settings (Code::Blocks -> Settings -> Compiler -> Compiler Flags -> Optimize for speed) help improve the speed? If so, shouldn't there be a standard set of such optimizations applied at compilation?

Now, who would compromise performance for a better library? …

Here is a similar question I asked on SO: http://stackoverflow.com/questions/24255622/why-c11-programs-are-slower-than-c4-3-2




This falls a bit outside the ambit of CodeChef and of algorithmic problems, but you've raised some interesting questions.

C++11 introduces a LOT of improvements over the previous C++ standard, adding new functionality to the language that can help you in your programming. More sophisticated iterators, lambda expressions (anonymous functions), and a huge plethora of other things were added in C++11 (the most notable being the keyword auto, which performs automatic type deduction. So, you don't remember, or you don't know, what type a specific expression has? Use auto and the compiler will infer the type from context; it's amazing!!).

Also, recently and, I believe, independently, a new version of GCC has been released (GCC 4.8.1). This is a whole new release of what is known as the GNU Compiler Collection, the de facto tool that Linux users use when compiling C/C++ code. Code::Blocks can also be set up to use MinGW, GCC, Visual C++, and lots of more obscure and strange compilers which are only useful when developing specific or platform-dependent applications (who cares about Windows or Microsoft anyway?).

In this new release, lots of features were added to the compiler; you can read about everything new [here][1].

Answering your specific question:

> If so, shouldn't there be a standard set of such optimizations applied at compilation?
>
> Now, who would compromise performance for a better library? …


It's not like the guys who wrote the compiler wanted to compromise performance, but I'm also quite positive that they didn't write it with SPOJ front-end users (like we all are) in mind, so your question ends up being a little "ungrateful" to some very talented engineers.

Speaking of optimizations, they already exist, and they are usually enabled when we submit our code here.

The flag that denotes optimizations in a generic way is -O.

"Under" this flag you have different layers or "levels" of optimization:

-O (same as -O1), -O2, -O3

which all tend to optimize code for both speed and memory by internally enabling a HUGE set of flags (think of -O as an alias for a huge set of lots of other options which all get enabled). As an example, some of the options enabled by -O2 are:

-fthread-jumps, -fgcse, -finline-small-functions, -fschedule-insns2

which are all flags that operate at a very low, near-assembly level (obviously!).

Not sure if this was the answer you were looking for…

[1]: http://gcc.gnu.org/onlinedocs/gcc-4.8.1/gcc/


See, they are providing you with lots of new tools to make your life easier while programming. You can reduce memory by wisely choosing header files, initialising arrays, and so on. If you did the same things you do in C/C++ in Java, you might end up hitting 1000M of memory and time > 1 sec (that's why you might not have seen a Java submission at the top position on any of the problems). Does this mean Java is poorer than C or C++? It's the amount of flexibility it provides that takes a toll on resources. If one day you decide to write a long program of some thousands of lines, then you will understand the importance of those tools.

I am sorry, but I think you missed the point of the question. I am not talking about different languages with the same logic; I am talking about the same language with different standards.

Well, thanks for the info on optimization flags. But still: the same code, when compiled under different flags by the same compiler (say MinGW), differs in performance and memory. Is this true for all compilers in general, or is it just the way SPOJ works? And doesn't this discourage the use of C++11 in environments with limited resources?

How can extra C++11 features (which are not even used) affect the performance of the program?

PS - I highly appreciate the work of the people behind C++11, but I am unable to understand how this compilation process really works…

See, I gave you an example: versioning of languages increases their functionality. That's why I cited an example in terms of Java, which has high functionality…

Yes, but the difference in performance between C++ and Java can be explained on many grounds, one being the compilation process itself.
Read this:

However, such differences should not exist between C++ and C++11, because they are the same language with just extra libraries and standard features.

Also, can anyone specify the optimization flags used by SPOJ while compiling? @admin?