Talking Code: Get to Know Your Compiler

Mario Parra | March 31, 2017 | Mobile App Development

Get to know your compiler! Given a non-trivial task with many solution paths, a sensible Computer Scientist will choose the path that offers an optimal balance of time and space complexity. The seasoned Software Engineer also understands that getting the task done is only half the job; the final code must be maintainable, and one way to accomplish that is to make the source accessible to others. Thanks to the diligent work of the Computer Scientist, the Software Engineer can spend her time designing a readable framework without having to worry (at least not too much) about macro-optimizing her programs.

Ideally, your compiler provides a full range of optimizations that others never even need to be aware of. Just as the introduction of high-level languages, defined by context-free grammars and compiled down to machine code, made programming accessible to the average user, so too do today's modern compilers: they optimize code to be more efficient without requiring a programmer to sift through unintelligible machine code in the hopes of improving performance.

For instance, consider a task to be accomplished programmatically in which the output size is unimportant; the major concern is performance down to the tick of the processor. One might assume that, in order to save ticks on stack operations, the task would be best achieved within one very long processing unit (i.e., a single function). But this would go against the adage that a function should perform one task and one task only. Modularizing code into functions makes it more maintainable, even if those functions are called just once. Depending on the specifics, if the task were performed in C++, for instance, we could take advantage of the GNU C++ compiler's options for heuristically inlining functions: -finline-functions, -finline-small-functions, and -finline-functions-called-once.
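To make the idea concrete, here is a minimal sketch (the helper function and its numbers are invented for illustration, not taken from the post). The helper stays its own function for readability, and g++, with -O2 typically enabling -finline-small-functions and -finline-functions-called-once, is free to fold its body into the caller:

```cpp
#include <cstdio>

// A small helper kept as its own function for readability. Under
// -finline-small-functions / -finline-functions-called-once, g++ may
// substitute its body directly into main(), removing the call overhead.
static int scale_and_offset(int value)
{
    return value * 3 + 7;
}

int main()
{
    int total = 0;
    for (int i = 0; i < 100; ++i)
    {
        total += scale_and_offset(i);
    }
    std::printf("total = %d\n", total);
    return 0;
}
```

Comparing the assembly from g++ -O2 -S with a build that adds -fno-inline shows whether the call to the helper survives in the generated code.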

The debate over whether to pre-fetch a collection's size before looping through it serves as another example of when it's wise to mind the compiler. Compare the following snippets of code:
[Image: two C# snippets, one caching args.Length in a local variable before the loop, the other reading args.Length in the loop condition]
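Since the original screenshot only survives as a placeholder here, the two variants being compared would look roughly like this; the class name and loop bodies are illustrative rather than the post's exact code:

```csharp
using System;

class LoopComparison
{
    static void Main(string[] args)
    {
        // Variant 1: cache the array length in a local before the loop.
        int length = args.Length;
        for (int i = 0; i < length; i++)
        {
            Console.WriteLine(args[i]);
        }

        // Variant 2: read args.Length in the loop condition on every iteration.
        for (int i = 0; i < args.Length; i++)
        {
            Console.WriteLine(args[i]);
        }
    }
}
```

The only difference is whether args.Length is read once into a local or re-read in the loop condition on every pass.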
Without knowing too much about the language or the compiler, one might assume the first snippet would generate more efficient code. After all, it appears the second snippet will evaluate args.Length at every iteration. But in C# this is not the case. Take a look at the optimized code generated after the Just-In-Time (JIT) compiler has had a pass at it (I've omitted the interrupt instructions that allowed me to see the code after JIT):

[Image: the JIT-optimized assembly generated for each of the two snippets]

Even with no experience in assembly, one notices that the generated code is similar in length and form and, most importantly, so is the portion that gets looped over. One might wonder why the mov instructions on line 14 of the two listings differ so greatly in their operands, or why not every CISC instruction executes in a constant number of cycles, but suffice it to say that this does not impact the speed of execution of the end product. The way assemblers generate machine code is outside this post's scope.
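The post doesn't spell out the exact setup used to capture that output, but one common approach, sketched below with its assumptions called out, is to break into an attached debugger from an optimized Release build and read the loop in the disassembly window; the Debugger.Break() call stands in for the "interrupt instructions" the author says he omitted:

```csharp
using System;
using System.Diagnostics;

class JitInspection
{
    static void Main(string[] args)
    {
        // Halts execution when a debugger is attached, so the optimized
        // machine code for the loop below can be read in the debugger's
        // disassembly window (build in Release so the JIT optimizes).
        Debugger.Break();

        for (int i = 0; i < args.Length; i++)
        {
            Console.WriteLine(args[i]);
        }
    }
}
```

In Visual Studio, the "Suppress JIT optimization on module load" debugging option typically also has to be turned off, otherwise the debugger asks the JIT for unoptimized code.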

At a glance, these examples may seem trivial, but they are significant in that they encourage the reader to take a closer look at what their compiler of choice does behind the scenes, freeing the engineer to focus more of her efforts on making the code readable overall.

Got a question about your organization’s code-optimization strategies? Or any other question around mobile strategy for that matter? Then give us a call! We’d love to hear from you.

Mario Parra

Mario is the Backend Software Engineer for Propelics Products. His realm of expertise encompasses low-level programming (C and Assembly), Microsoft technologies (e.g., Win32, .NET), and object-oriented framework design, skills he honed during his time on National Instruments' Data Acquisition Drivers team. Mario thrives on exploring computer programming at its core; he even loves getting his hands dirty with manual memory management, language theory, and mathematical complexity.
