std::vector or C-style array?

I recently overheard a rather interesting statement concerning programming and thought I'd share it with the world via a Tweet and a small counter example. This started an interesting discussion.

What I believed would go by unnoticed spawned a series of interesting views on this topic – some of them missing my original point. Therefore, I figured it'd be a good idea to expand my thought into something longer than 140 characters. Note, however, that this is not a full dissection of the pros and cons of one solution over the other, but a general opinion!

Original statement and code example

So the original statement claims that you should always use std::vector instead of a C-style array for data containment. The keyword "always" – the claim that it's the desired and better solution in every instance – is what triggered my Tweet. To see how this works out, here's an example: a short program that creates a static container of 10 integers, assigns them values and returns. First, let's use the "dogmatic" approach:

// approach 1: using std::vector
#include <vector>

int main() {
  std::vector<int> foo(10); // sized construction; indexing an empty vector is undefined behavior
  for (int i = 0; i < 10; i++)
    foo[i] = i;
  return 0;
}
Alternatively, let’s do the same thing with a classic C-style array.

// approach 2: using C array
int main() {
  int foo[10];
  for (int i = 0; i < 10; i++)
    foo[i] = i;
  return 0;
}

Running both pieces of code through a compiler (unoptimized – I'll explain why shortly) produces interesting results when looking at the assembly. Approach 1 generates output too long to include in this post, but approach 2 gives us this neat code:

        push    rbp
        mov     rbp, rsp
        mov     DWORD PTR [rbp-4], 0
.L3:
        cmp     DWORD PTR [rbp-4], 9
        jg      .L2
        mov     eax, DWORD PTR [rbp-4]
        mov     edx, DWORD PTR [rbp-4]
        mov     DWORD PTR [rbp-48+rax*4], edx
        add     DWORD PTR [rbp-4], 1
        jmp     .L3
.L2:
        mov     eax, 0
        pop     rbp
        ret

“You forgot the -O flag!”

It's true that an optimization flag can be passed to the compiler and, in all likelihood, that is what you'll be working with most of the time. There are, however, a few specific reasons I deliberately chose not to demonstrate code with optimizations enabled:

1. Most developers work primarily with debug builds during the development process. While your mileage may vary, chances are that your debug sessions will be done with debug symbols and optimizations off most of the time.
2. Optimizations (even for debug builds) cost you build time.

Why should you care?

The reason I put emphasis on the last point is that many modern-day programmers are taught to delegate all the heavy work to the compiler, assuming it will always turn out fine. In the best case, this belief leads to crawling debug code. Take, for example, the first approach compiled with -O3 and the counter example, also with -O3. My personal belief is that you should rely on the -O flag only as a last resort (and even then take it with a grain of salt). To that extent, complex structures such as std::vector and other STL containers should only be used when it's justified and doesn't incur unnecessary performance penalties on the executed code. It is not, in fact, the high count of assembly lines that's the main problem, but rather the heap allocation that takes place when using a container like std::vector. In many scenarios this costs more than anything else, so unless a dynamically sized array is really needed, it's not worth following established dogmas (whether the STL is good for game programming is a completely different story).

What should I do?

Don't get too obsessive about the topic! I think it's good practice to make your code as fast as possible without leaning on excessive compiler optimizations, but on the other hand – the optimizer is, after all, a tool handed to the programmer. And always be a bit cautious and sceptical when somebody tells you that solution A is "always" better than solution B – chances are, there's a solution C that bests them all! 😉
