Optimizations - Programmer vs. Compiler? 1422
Saravana Kannan asks: "I have been coding in C for a while (10 yrs or so) and tend to use short code snippets. As a simple example, take 'if (!ptr)' instead of 'if (ptr==NULL)'. The reason someone might use the former snippet is that they believe it will result in smaller machine code if the compiler does not do optimizations or is not smart enough to optimize that particular snippet. IMHO the latter snippet is clearer than the former, and I would use it in my code if I knew for sure that the compiler would optimize it and produce machine code equivalent to the former. The previous example was easy. What about code that is more complex? Now that compilers have matured over the years and seen many improvements, I ask the Slashdot crowd: what do you believe the compiler can be trusted to optimize, and what must be hand optimized?"
"How would your answer differ (in terms of the level of trust in the compiler) if I'm talking about compilers for desktops vs. embedded systems? Compilers for which of the following platforms do you think are more optimized at present: desktops (because they are more commonly used) or embedded systems (because of the need for maximum optimization)? It would be better if you could stick to free (as in beer) and Open Source compilers. Give examples of code optimizations that you think the compiler can/can't be trusted to do."
Ask the compiler... (Score:5, Funny)
Compiler: Optimizing? Optimizing? Don't talk to me about optimizing. Here I am, brain the size of a planet, and they've got me optimizing inane snippets of code. Just when you think code couldn't possibly get any worse, it suddenly does. Oh look, a null pointer. I suppose you'll want to see the assembly now. Do you want me to go into an infinite loop or throw an exception right where I'm standing?
Programmer: Yeah, just show me the stack trace, won't you compiler?
Re:Ask the compiler... (Score:5, Funny)
Optimization rules... (Score:5, Funny)
Plus I got extra credit for implementing Phong shading. I didn't even try to do Phong shading.
Re:Optimization rules... (Score:4, Funny)
Re:Ask the compiler... (Score:3, Insightful)
Clear Code (Score:5, Insightful)
Re:Clear Code (Score:5, Insightful)
Re:Clear Code (Score:5, Interesting)
I wish I could put my hands on an article I read a couple years back on the code in the Space Shuttle. They go at that code base with an attitude that makes the average paranoid look happy-go-lucky. In fact, they approach software engineering kind of like other engineers do -- as if lives depended on it. It's old, it's slow, and it works. (Oh, wait, here it is. [fastcompany.com]) That's how code is maintained.
Re:Clear Code (Score:4, Interesting)
Must be nice (Score:5, Insightful)
one customer
420,000 lines
260 staff
no competition
no trade shows
no salespeople selling new features that have never been discussed
It's interesting to talk about their attention to detail, but to hold it up as a model for all software development neglects to consider that they are working under an entirely different set of constraints from most everyone else.
Re:Clear Code (Score:5, Interesting)
There is a whole area of study involved in correctness checking, which is related to the SAT (satisfiability) problem.
The operating system choice is also interesting. Linux doesn't even come close to what they need. Having device drivers in the kernel is just not a good idea. It needs to have a separation kernel, at least that is the goal. I presently think they use the INTEGRITY operating system by Green Hills, but I could be wrong.
Re:Clear Code - Boeing (Score:4, Funny)
Re:Clear Code - Boeing (Score:4, Funny)
Re:Clear Code - Boeing (Score:5, Funny)
HAL! Open the bathroom door!
I'm sorry Dave, you shouldn't have had that last burrito.
Re:Not always. (Score:4, Insightful)
Re:Not always. (Score:5, Insightful)
I comment things that are non intuitive. I comment things that I *think* may be non intuitive. I comment things that I think someone else might have some difficulty understanding, because I happened to be deep into a code burn and consequently wrote something pretty tight, pretty sweet, but also pretty obfuscated. Finally, I comment things that I think *I* may not understand when I go back and look at the code again 3 months from now.
I don't comment every single line... I don't comment simple data structures, loops ("/* this is a for loop using the integer variable I */"), etc., which would be stupid. I do, however, dissect the complex portions of my code, describe how I'm dispatching events and, best of all, *why* I decided to do things a certain way instead of a different way.
I have, however, been handed 30k lines of code with zero documentation and not a single comment anywhere in it, with absolutely no clue how it worked and no access to the original programmer, and been told "We need such and such fixed|updated|added by Friday." I had to spend the entire week basically tracing every single line of code, only to conclude that the original programmer must have been smoking crack, with NO indication of why he wrote things the way he did and NO help when he decided to be exceedingly "clever" in his code. That time was wasted.
Would it have killed him to simply put a comment block explaining his event dispatch model? Or to tell me what his functions and methods did and best of all why they did it?
There *is* a middle ground, believe it or not.
-- Gary F.
Re:Clear Code (Score:5, Insightful)
Re:Clear Code (Score:4, Funny)
If you are caught thinking out of the box again, you will get no dessert!
Re:Clear Code (Score:5, Insightful)
Another thing to remember is this: the compiler isn't stupid; don't pretend that it is. I had senior developers at an earlier job mad at me because I wasn't creating temporary variables for the limits of my loop indices (on unprofiled code, no less!). It took actually digging up an article on the net to show that all modern compilers automatically evaluate such const loop limits (be they arrays, linked lists, const member functions, etc.) once, before starting the loop.
Another example: function calls. I've heard some people be insistent that the way to speed up an inner loop is to pull the code out of function calls so that you don't have function-call overhead. No! Again, compilers will do this for you. As compilers evolved, they added the "inline" keyword, which does this for you. Eventually, compilers got smart enough that they started inlining code on their own when not asked to, and declining to inline code that was marked inline when doing so would be inefficient. Due to coder pressure, at least one compiler that I read about had an "inlinedamnit" (or something to that effect) keyword to force inlining when you're positive that you know better than the compiler.
Once again, the compiler isn't stupid. If an optimization seems "obvious" to you, odds are pretty good that the compiler will take care of it. Go for the non-obvious optimizations. Can you remove a loop from a nested set of loops by changing how you're representing your data? Can you replace a hack that you made with standard library code (which tends to be optimized like crazy)? Etc. Don't start dereferencing variables, removing the code from function calls, or things like this. The compiler will do this for you.
If possible, work with the compiler to help it. Use "restrict". Use "const". Give it whatever clues you can.
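In C, those hints look something like this; a minimal sketch (the function and its name are mine, not from the thread):

```c
#include <stddef.h>

/* "Help the compiler": const promises src won't change through this
 * pointer, and restrict (C99) promises dst and src never alias, so the
 * loop can be vectorized without runtime overlap checks. */
void saxpy(size_t n, float *restrict dst,
           const float *restrict src, float k)
{
    for (size_t i = 0; i < n; i++)
        dst[i] += k * src[i];
}

/* tiny self-check: dst[3] becomes 1 + 2*4 = 9 */
float saxpy_demo(void)
{
    float d[4] = {1, 1, 1, 1};
    float s[4] = {1, 2, 3, 4};
    saxpy(4, d, s, 2.0f);
    return d[3];
}
```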
illegal optimization there, for C at least (Score:4, Informative)
C++ reference, OK. If you mean a pointer, though, and you use C, the compiler is prohibited from performing this optimization unless you also use the "restrict" keyword.
Since you did mention "restrict", it appears that you are working with C. "restrict" is not a C++ keyword.
BTW, inlinedamnit is __attribute__((always_inline)) for gcc and __forceinline for Microsoft.
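A sketch of those spellings wrapped in a portability macro (the name FORCE_INLINE is my own, not standard):

```c
/* "inlinedamnit", portably: pick the compiler-specific force-inline
 * spelling, falling back to plain inline elsewhere. */
#if defined(_MSC_VER)
#  define FORCE_INLINE __forceinline
#elif defined(__GNUC__)
#  define FORCE_INLINE inline __attribute__((always_inline))
#else
#  define FORCE_INLINE inline
#endif

static FORCE_INLINE int square(int x) { return x * x; }
```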
Re:Clear Code (Score:5, Insightful)
Re:Clear Code (Score:3, Interesting)
In fact, working at Research In Motion has shown me how to write better code... sometimes, the most efficient or clever solution is the worst. I would rather see longer variable names, descriptive control structures and a total lack of goto statements. If you write sensible code, and use an optimization setting of 4 or 5, then you will have better prog
Re:Clear Code (Score:3, Insightful)
In particular, unless you have very specific efficiency needs, modern CPUs are more than up
Screw comments (Score:4, Insightful)
Bah. Comments lie. Code never lies.
Re:Clear Code (Score:3, Insightful)
"if (!ptr)" translates perfectly clearly into English as "if no (valid) pointer", while "if (ptr==NULL)" involves some spurious special-case value that I have to spend extra time tinkering with.
It's like comparing booleans with "if (foo==true)" instead of "if (foo)". If that's better why not go all the way and write "if (((...((foo==true)==true)==true)...==true)==true) " ? For extra clarity you s
C and "flexibility" of expression operators (Score:5, Insightful)
! means "not" or "inverse of"; it is a boolean function. The variable ptr is a pointer; it is a reference to data, which means it isn't really data itself. !ptr shouldn't compute; a boolean operator should only work on boolean data. But C logical comparators are designed to work on everything. You are just supposed to know that 0 == NULL == false. This supposition is totally arbitrary and doesn't hold up in any language with strong typing.
This is what makes C difficult for beginners. Bad code compiles even though it has logical flaws, and ends up failing in mysterious ways.
The second case makes more sense. Equality is an operator that should work on all types of data. NULL is necessary if you are going to abstract data through the use of pointers or objects. Doing away with NULL would be equivalent to eliminating true and false and using 1 and 0 instead. Or eliminating strings and using sequences of ASCII codes. These substitutions are technically correct but in reality they make code unreadable.
Re:Clear Code (Score:5, Insightful)
Let's say you're traversing a 2D array of data (e.g., an image).
for(x=0; x < width; x++)
{
for(y=0; y < height; y++)
{
}
}
versus
for(y=0; y < height; y++)
{
for(x=0; x < width; x++)
{
}
}
The latter piece of code is just as clear as the first; however, it will likely run about 50 times faster than the first, due to caching issues.
Will the compiler optimize the first piece of code to look like the second? Probably not (tell me if I'm wrong), as there may be a reason to process things in a particular order.
In addition, the latter piece of code may actually be less clear, as in some cases, it may not read well to do height before width in the for loop.
As a result, you'll still need to write code thinking about optimization.
Re:Clear Code (Score:5, Informative)
123
456
789
data[y][x];
This, in memory, is:
123456789
Similarly, the access order for the second loop is:
123456789
But for the first one, it is:
147258369
The second one hits memory sequentially, which is good for caching, as each cache line stores a large chunk of sequential memory.
Considering that hitting the cache, as opposed to hitting main memory, is at least 100 times faster, you'll be lucky if the first loop is only 50 times slower.
This still presumes data stored in the specified order in memory (which is common for image formats, but not the only way things are done).
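The layout claim can be checked in a small C sketch (the sizes and names here are illustrative):

```c
/* Why the y-outer/x-inner loop wins in C: data[y][x] is stored
 * row-major, so the inner x loop touches consecutive addresses,
 * while stepping y jumps a whole row of W elements. */
enum { H = 64, W = 64 };
int grid[H][W];

/* elements between grid[0][0] and grid[1][0]: one full row */
int row_stride(void)
{
    return (int)(&grid[1][0] - &grid[0][0]);
}

/* cache-friendly traversal: rows outer, consecutive addresses inner */
long fill_and_sum(void)
{
    long sum = 0;
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            grid[y][x] = 1;
            sum += grid[y][x];
        }
    return sum;
}
```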
Re:The question was full of bad examples too (Score:4, Insightful)
You should always... (Score:5, Funny)
Re:You should always... (Score:5, Funny)
Re:You should always... (Score:5, Funny)
Re:You should always... (Score:5, Funny)
That's not as funny as you think (Score:3, Insightful)
> source code is self documenting...
And it's true too. Although comments are indeed a good thing, writing code that does not require them is a much better one. If your code needs comments, it's probably too complex for continued maintenance.
Re:You should always... (Score:3, Funny)
Re:You should always... (Score:5, Informative)
There's nothing wrong with short variable names.
Thus quoth Linus in the Linux Coding Style guide.
Re:You should always... (Score:5, Informative)
some random integer loop counter, it should probably be called "i". Calling it "loop_counter" is non-productive, if there is no chance of it being misunderstood.
That last clause is an important one that often gets neglected. In fact, you should never, ever, call a variable loop_counter. That's as bad as pure reverse Hungarian - it tells you how it's used, not what it means.
I suggest that, for all non-trivial cases (and I'd prefer to see people err verbosely than compactly), you should use descriptive names. Not loop_counter, but maybe something like curRow? It doesn't have to be long, but at least then, as the loop grows over time, someone can understand a piece of code more easily than having to scroll back up to check that you are indeed in the "i" loop. It's even more critical when someone comes along and adds a nested (or containing) loop. Or whatever.
Same with "tmp". If it's truly temporary, such as:
int tmp = getFooCount();
doSomething(tmp);
then it should be removed and rewritten as:
doSomething(getFooCount());
If it's not that temporary, give it a real name. If you insist that it is temporary, then you may have a scoping issue - having variables that are useful only in part of your function could indicate that your function is doing too much work. If you insist it's truly temporary, scope it down:
someRandomCode();
{
int foo = getFoo();
doSomething(foo);
doSomethingElse(foo);
}
moreCode();
At least now you've guaranteed that it is temporary. Better yet, just name it usefully.
Re:You should always... (Score:5, Insightful)
With the greatest respect to Linus, writing a kernel does not make you the authority on programming. It does make you the authority on what particular style you allow in your CVS tree, but that's it.
I certainly agree that loop_counter is a bad name, though. But rather than use i, I prefer to at least make a note of what sort of objects I'm looping through.
For instance:
Code can never be 100% self-documenting, but that's no reason to settle for 0%. Whether you use CamelCase or words_broken_with_underscores is a matter of style, and you should stick with the style of the code base you're working on.
Anyone who can't or won't work with multiple languages or adopt the necessary style for an existing project is a poor programmer. When you create a project, you create the rules. When you work on someone else's project, you follow the rules.
Re:Indeed (Score:3, Informative)
For example, if you just saw "FIELD_COUNT" by itself, without knowing anything else, would you have a good idea what sort of object that was? (A static variable). If you saw "_count"? (Instance variable). "getCount"? (Accessor
i, j, k, ... (Score:5, Informative)
I think that most people forget that the reason i, j, k, etc. are used for loop counters is that, unless otherwise declared, variables beginning with I through N default to INTEGER in FORTRAN. This convention just carried over as programmers migrated from FORTRAN to other languages and has been passed down through the ages.
Time to post the famous Knuth quote... (Score:5, Informative)
That's a Tony Hoare quote, not Donald Knuth (Score:5, Informative)
use a good profiler!! (Score:3, Informative)
Using a profiler and your own brain you can often significantly improve over what a compiler can do.
Algorithms, Not Stupid Processor Tricks (Score:5, Insightful)
The sad truth is that, as far as optimization goes, this isn't where attention is most needed.
Before we start worrying about things like saving two cycles here and there, we need to start teaching people how to select the proper algorithm for the task at hand.
There are too many programmers who spend hours turning their code into unreadable mush for the sake of squeezing a few milliseconds out of a loop inside an algorithm that runs in O(n!) or O(2^n) time.
For 99% of the coders out there, all that needs to be known about code optimization is: pick the right algorithms! Couple this with readable code, and you'll have a program that runs several thousand times faster than it'll ever need to and is easy to maintain--and that's probably all you'll ever need.
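A minimal C illustration of the point: same answer, but the algorithm choice, not micro-tweaks, sets the running time. The duplicate-counting task and all names here are invented for illustration:

```c
#include <stdlib.h>
#include <string.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* O(n^2): compare every pair */
int count_dup_pairs_slow(const int *v, size_t n)
{
    int pairs = 0;
    for (size_t i = 0; i < n; i++)
        for (size_t j = i + 1; j < n; j++)
            if (v[i] == v[j]) pairs++;
    return pairs;
}

/* O(n log n): sort, then count runs of equal values */
int count_dup_pairs_fast(const int *v, size_t n)
{
    int *s = malloc(n * sizeof *s);
    memcpy(s, v, n * sizeof *s);
    qsort(s, n, sizeof *s, cmp_int);
    int pairs = 0;
    size_t run = 1;
    for (size_t i = 1; i <= n; i++) {
        if (i < n && s[i] == s[i - 1]) run++;
        else { pairs += (int)(run * (run - 1) / 2); run = 1; }
    }
    free(s);
    return pairs;
}

/* both algorithms agree: three 1s give 3 pairs, two 2s give 1 pair */
int demo_agreement(void)
{
    int v[] = {1, 2, 1, 3, 2, 1};
    return count_dup_pairs_slow(v, 6) == 4 &&
           count_dup_pairs_fast(v, 6) == 4;
}
```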
Re:Algorithms, Not Stupid Processor Tricks (Score:4, Insightful)
Re:Algorithms, Not Stupid Processor Tricks (Score:3, Insightful)
Re:Algorithms, Not Stupid Processor Tricks (Score:5, Informative)
And while we're at it - n^2 vs 2^n *is* a big deal when working on 1GHz+ systems... If we have 1000 objects, an operation that takes 10 cycles and an n^2 algorithm, then we get a runtime of 0.01 seconds (10 x 1000 x 1000 cycles). If we have a 2^n algorithm, the runtime is beyond astronomical (2^1000 cycles dwarfs the age of the universe), and no amount of optimizing the code in the loop (even down to one instruction) is going to get us anywhere near the n^2 algorithm.
Re:Algorithms, Not Stupid Processor Tricks (Score:5, Interesting)
One of the finest moments in my programming career was when my boss asked me to see if I could gain a speed improvement in a program that surveyed a huge datastore and generated volumes of text from it. This program had to run once a month, and deliver its result in the same month. The program as originally written, unfortunately, took three months to run (it started out OK, but the datastore had grown considerably). They had asked one of our "best programmers" to create a faster version of the program. He did that by reprogramming the entire thing in assembly (you may now understand why managers thought he was one of the best programmers). It took him six whole months to finish the new version. The resulting program completed the task in just about one month. However, my boss was afraid that when the datastore grew a bit more, we would again be in trouble. That's when he asked me to look it over. I started by investigating the problem, which at first glance looked like a network-traversing problem. I soon realised it could be solved by a nested matrix multiplication (which is, of course, a standard way to discover paths in a network). It was a matrix with about a million rows and columns, but since it contained only zeroes and ones (with a couple of thousand times more zeroes than ones), the multiplication was easy to implement in a fast way. Within half a day, I had built a prototype program in a high-level language which did the whole job in a few hours.
While I am still pleased with this result, I really think it came off so well not because I was so smart, but because the assembly programmer was not really worthy of the name. Still, I often use this as an illustration for students who are writing illegible code and argue that it is so very fast.
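The poster's approach might be sketched like this in C; the tiny graph, names, and dense representation are illustrative only (the real problem used a huge sparse matrix):

```c
/* Reachability via boolean matrix "multiplication": squaring the
 * adjacency matrix while keeping existing entries doubles the
 * discoverable path length each time. */
enum { N = 4 };

void bool_mat_square(const unsigned char a[N][N], unsigned char out[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            unsigned char hit = a[i][j];      /* keep existing paths */
            for (int k = 0; k < N && !hit; k++)
                hit = a[i][k] && a[k][j];     /* boolean multiply-add */
            out[i][j] = hit;
        }
}

/* does node 0 reach node 3 in the chain 0->1->2->3? */
int chain_reachable(void)
{
    unsigned char a[N][N] = {{0,1,0,0},{0,0,1,0},{0,0,0,1},{0,0,0,0}};
    unsigned char b[N][N], c[N][N];
    bool_mat_square(a, b);  /* paths of length <= 2 */
    bool_mat_square(b, c);  /* paths of length <= 4 */
    return c[0][3];
}
```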
The algorithm that must not be named! (Score:5, Funny)
The most frustrating thing is that, if you must use the algorithm that must not be named, the bidirectional form of the algorithm is much faster (in practice) than the unidirectional form yet really no more complex to code than the latter if you have any potential as a software developer.
Re:The algorithm that must not be named! (Score:4, Funny)
The many-worlds version of bogosort is the fastest possible sorting algorithm, though. It's O(C). For those that don't know, the many-worlds version just does one random permutation, with a new universe being created for each possible outcome of the permutation. You then destroy all the universes where the dataset isn't sorted.
Clear & Concise Code (Score:5, Interesting)
Also the clear code will result in fewer misinterpretations, which will mean fewer bugs (especially when the original author is not the one doing maintenance years later), further reducing costs in dollars, man hours, and frustration.
NULL not always 0 (Score:3, Insightful)
Aren't there machines out there where the C compiler specifically defines NULL as a value that is not equal to 0? I recall reading that somewhere, and that was my reason for using ==NULL instead of !. My C days are long gone, though...
Re:NULL not always 0 (Score:4, Informative)
Re:NULL not always 0 (Score:3, Informative)
5.1.1 Zero
Zero (0) is an int. Because of standard conversions, 0 can be used as a constant of an integral, floating-point, pointer, or pointer-to-member type. The type of zero will be determined by context. Zero will typically (but not necessarily) be represented by the bit pattern all-zeros of the appropriate size.
No object is allocated with the address 0. Consequently, 0 acts as a pointer literal
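The quoted passage can be checked directly in C; a small sketch (the function name is mine):

```c
#include <stddef.h>

int probe_obj;  /* any real object, for the non-null case */

/* The constant 0 in pointer context converts to the null pointer,
 * whatever bit pattern the machine uses for it, so all three
 * spellings below always agree. */
int null_tests_agree(const int *p)
{
    int a = (p == NULL);
    int b = (p == 0);   /* 0 as a pointer literal */
    int c = !p;         /* !p is defined as (p == 0) */
    return (a == b) && (b == c);
}
```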
Code Twiddling (Score:3, Informative)
There really is no one answer to this, as it depends on the compiler itself, and the target architecture. The only real way to be sure is to profile the code, and to study assembler output. Even then, modern CPUs are really complicated due to pipelining, multilevel cache, multiple execution units, etc. I try not to worry about micro-twiddling, and work on optimizations at a higher-level.
From the "Patenting Fire" department (Score:4, Funny)
Prepare to be sued.
Tradeoffs (Score:5, Insightful)
Hard to measure, but what is the tradeoff between increased speed and increased readability (which is a prerequisite for correctness and maintainability)? And if you can estimate that tradeoff, which is more important to the goals of your application?
As a side note, it is far more important to make sure you are using efficient algorithms and data structures than to make minor local optimizations. I've seen programmers use bizarre local optimization tricks in a module that ran in exponential time rather than log time.
Optimize at the interpreter/compiler level... (Score:3, Interesting)
Most people should not bother (Score:5, Insightful)
What about code that is more complex? Now that compilers have matured over years and have had many improvements, I ask the Slashdot crowd, what they believe the compiler can be trusted to optimize and what must be hand optimized?
Programmers cost lots more per hour than computer time. Let the compiler optimize and let the programmers concentrate on developing solid, maintainable code.
If you make code too clever in an effort to pre-optimize, you end up with code that other people have difficulty understanding. This leads to lower quality code as it evolves, if the people who follow you are not as savvy.
Not only that, but the vast majority of code written today is UI-centric or I/O bound. If you want real optimization, design a harddrive/controller combo that gets you 1 GBps off the physical platter (and at a price that consumers can afford).
The most important optimization... (Score:3, Insightful)
The only exception I can think of is when you're doing standard stuff where the best (general) solution is well-known, like sorting; however, in those cases, you shouldn't reinvent the wheel, anyway, but instead use a (presumably already highly-optimized) library.
Beware of habits. (Score:5, Interesting)
Ended up writing my programs in assembler...
Huh (Score:5, Informative)
As a simple example, take 'if (!ptr)' instead of 'if (ptr==NULL)'.
Both forms resolve to the same opcode. Even under my 6502 compiler.
CMP register,val
JNE
Enjoy,
Re:Huh (Score:5, Insightful)
However, any compiler worth anything should find that and optimize it very easily in the case where you're comparing to a constant that evaluates to zero.
Re:Huh (Score:3, Informative)
$.02 (Score:5, Insightful)
2) Profile your code
3) Optimize the bottlenecks
That said, (!ptr) should be just as maintainable as (ptr == NULL), simply because it is a frequently used 'dialect'. As long as these 'shortcuts' are used throughout the entire codebase, they should be familiar enough that they don't get in the way of maintainability.
micro optimization (Score:5, Insightful)
Compilers are pretty good at that, and you should let them do their job.
Programmers should optimize at a higher level: by their choice of algorithms, organizing the program so that memory access is cache-friendly, making sure various objects don't get destroyed and re-created unnecessarily, that sort of thing.
Those who forget Tony Hoare... (Score:5, Insightful)
"Premature optimization is the root of all evil"
If there's only one rule in computer programming a person ever learns, "Hoare's dictum" is the one I would choose.
Almost all modern languages have extensive libraries available to handle common programming tasks and can handle the vast majority of the optimizations you speak of automatically. This means that 99.99% of the time you shouldn't be thinking about optimizations at all. Unless you're John Carmack, or you're writing a new compiler from scratch (and perhaps you are), or involved in a handful of other activities, you're making a big, big mistake if you're spending any time worrying about these things. There are far more important things to worry about, such as writing code that can be understood by others, can easily be unit tested, etc.
A few years ago I used to write C/C++/asm code extensively and used to be obsessed with performance and optimization. Then, one day, I had an epiphany and started writing code that is about 10 times slower than my old code (different in computer language and style) and infinitely easier to understand and expand. The only time I optimize now is at the very, very end of development, when I have solid profiler results from the final product that show noticeable delays for the end user, and this happens only rarely.
Of course, this is just my own personal experience and others may see things differently.
Re:Those who forget Tony Hoare... (Score:4, Insightful)
It is important to be aware that there are different types of optimizing. Optimizing code where the compiler probably does a good job is just stupid unless the code turns out to be a major bottleneck.
However, not thinking about optimization/speed early can IMHO be very dangerous. If the project is a bit large and complex, a nice design on the whiteboard may very well turn out to be dead slow, with no chance in hell of making it run significantly faster without redesigning/rewriting the entire thing (this doesn't really have anything to do with compiler optimization, though).
I've been working on a project (I wasn't in it at the beginning) where the design probably looked good to some people in the design document (although I don't really agree on that either), but the performance aspect was neglected until the application turned out to be quite slow. Adding mechanisms to make it run faster has been quite "challenging". (My personal opinion on this piece of software is that performance issues were ignored even in early design because the wrong people were making the decisions. Basically, they didn't focus on where performance really was needed.)
So after a few years my experience boils down to: "If you have a performance requirement, make sure your code keeps meeting it the entire time" and "You can't get both high performance and general-purpose flexibility in the same piece of code".
Write C for C programmers (Score:5, Insightful)
With regard to your example, I can't imagine any modern compiler wouldn't treat the two as equivalent.
However, in your example, I actually prefer "if (!ptr)" to "if (ptr == NULL)", for two reasons. First the latter is more error-prone, because you can accidentally end up with "if (ptr = NULL)". One common solution to avoid that problem is to write "if (NULL == ptr)", but that just doesn't read well to me. Another is to turn on warnings, and let your compiler point out code like that -- but that assumes a decent compiler.
The second, and more important, reason is that to anyone who's been writing C for a while, the compact representation is actually clearer because it's an instantly-recognizable idiom. To me, parsing the "ptr == NULL" format requires a few microseconds of thought to figure out what you're doing. "!ptr" requires none. There are a number of common idioms in C that are strange-looking at first, but soon become just another part of your programming vocabulary. IMO, if you're writing code in a given language, you should write it in the style that is most comfortable to other programmers in that language. I think proper use of idiomatic expressions *enhances* maintainability. Don't try to write Pascal in C, or Java in C++, or COBOL in, well, anything, but that's a separate issue :-)
Oh, and my answer to your more general question about whether or not you should try to write code that is easy for the compiler... no. Don't do that. Write code that is clear and readable to programmers and let the compiler do what it does. If profiling shows that a particular piece of code is too slow, then figure out how to optimize it, whether by tailoring the code, dropping down to assembler, or whatever. But not before.
Re:Write C for C programmers (Score:4, Interesting)
But please, oh, please, don't write this:
if (!strcmp(x,y))
Intuitively, that looks exactly backwards from what it's testing (equality).
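One common workaround, sketched in C (the STREQ name is my own, not a standard macro):

```c
#include <string.h>

/* Keep strcmp's backwards-reading result out of sight by naming
 * the idiom once, so call sites read the right way around. */
#define STREQ(a, b) (strcmp((a), (b)) == 0)

int streq(const char *a, const char *b) { return STREQ(a, b); }
```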
On the "optimize for the compiler" issue, I think what's been said already is right: don't do it unless it's in a critical spot, write code for readability (and big-picture efficiency) first, then worry about local optimizations only if there's a problem.
code should be written for people to read (Score:5, Insightful)
- Structure and Interpretation of Computer Programs [tinyurl.com]
Not a question that can be asked generally (Score:5, Informative)
In general, however, systems are now fast enough that, when in doubt, write the clearest code possible. I mean, for most apps speed is not critical; however, for all apps stability and lack of bugs are important, and obscure code leads to problems.
Also, for things that are time-critical, it's generally just one or two little parts that make all the difference. You only need to worry about optimizing those inner loops where all the time is spent. Use a profiler, since programmers generally suck at identifying what needs optimizing.
Keep it easy to read and maintain, unless speed is critical in a certain part. Then you can go nuts on hand optimization, but document it well.
Check out the LLVM demo page (Score:5, Interesting)
One of the nice things about this is that the code is printed in a simple abstract assembly language that is easy to read and understand.
The compiler itself is very cool too btw, check it out.
-Chris
Language paradigms (Score:3, Insightful)
> As a simple example, take 'if (!ptr)' instead of 'if (ptr==NULL)'.
> The reason someone might use the former code snippet is because they believe it would result
> in smaller machine code if the compiler does not do optimizations or is not smart enough
> to optimize the particular code snippet.
No programmer believes that.
In C, NULL is #defined to 0, and the "!" operator also compares against zero, so every compiler should generate exactly the same code for both.
> IMHO the latter code snippet is clearer than the former, and I would use it in my code
Actually I prefer to write (and read) the former and I do find it clearer, mostly because it is idiomatic in C et al.
Another good reason is that the former works better in C++, because it enables you to substitute "smart" objects for plain pointers and use them in a more natural way (especially in templates).
(Aside: most platforms that have C compilers also have decent C++ compilers.)
> if I know for sure that the compiler will optimize it and produce machine code equivalent to the former code snippet.
See above. There is nothing to optimize.
If you're not willing to TIME it... (Score:5, Insightful)
Never try to optimize anything unless you have measured the speed of the code before optimizing and have measured it again after optimizing.
Optimized code is almost always harder to understand, contains more possible code paths, and more likely to contain bugs than the most straightforward code. It's only worth it if it's really faster...
And you simply cannot tell whether it's faster unless you actually time it. It's absolutely mindboggling how often a change you are certain will speed up the code has no effect, or a truly negligible effect, or slows it down.
This has always been true. In these days of heavily optimized compilers and complex CPUs that are doing branch prediction and God knows what all, it is truer than ever. You cannot tell whether code is fast just by glancing at it. Well, maybe there are processor gurus who can accurately visualize the exact flow of all the bits through the pipeline, but I'm certainly not one of them.
A corollary is that since the optimized code is almost always trickier, harder to understand, and often contains more logic paths than the most straightforward code, you shouldn't optimize unless you are committed to spending the time to write a careful unit-test fixture that exercises everything tricky you've done, and write good comments in the code.
My experience (Score:4, Insightful)
The compiler will perform strength reduction in all reasonable instances.
The compiler will raise invariant computations from inner loops in almost all cases that do not involve pointers.
The compiler knows how to optimize integer division in ways I wouldn't have even thought of.
The compiler sometimes "forgets" about a register and produces sub-optimal code for inner loops.
The compiler can't always tell what variable is most important to keep in a register in an inner loop.
Other stuff:
x^=y; y^=x; x^=y; optimizes to an XCHG instruction with gcc on x86. I was amazed that it could do that. (Yes, that piece of code exchanges x and y). On the other hand, tmp=x; x=y; y=tmp; doesn't get optimized to an XCHG. Obviously, the compiler is using a Boolean simplifier or identity-prover.
The compiler always assumes a branch will be taken (unless you use certain compiler switches to change this behavior). Thus you should always arrange your conditional tests so that the less-often executed code is within the braces.
Don't be afraid to write complex expressions. Subexpression elimination is almost foolproof in all instances where pointers are NOT involved. It's better to leave your code clear, and let the compiler optimize it.
And ABOVE ALL:
No matter how much the compiler optimizes your code, you can throw it all down the toilet with bad design by screwing the cache utilization. This is EXTREMELY important especially in graphical applications which process huge raster buffers. Row-wise processing is always more efficient than column-wise. Random access will kill your performance. Do not trust the memory allocator to keep your allocations together. Write your own allocator if you are dealing with thousands or millions of small, related chunks of information.
I could go on... But I must also second what others have said, which is to perform algorithmic optimizations FIRST and do not bother with constant-factor optimizations until you are CERTAIN that you are using the best algorithm. If you ignore this advice you might waste a week optimizing a three-line inner loop and then come up with a better algorithm the next week which makes all your hard work redundant.
Premature Optimization (Score:5, Insightful)
The biggest mistake I see in my professional (and unprofessional) life is programmers who try to optimize their code in all sorts of "733+" ways, trying to "trick" the compiler into removing 1 or 2 lines of assembly, yet completely disregarding that they are using a map instead of a hash_map, or doing a linear search when they could do a binary search, or doing the same lookup multiple times when they could do it just once. It's just silly, and goes to show that lots of programmers don't know how to optimize effectively.
Compilers are good. They optimize code well. Don't try to help them out unless you know your code has a definite bottleneck in a tight loop that needs hand tuning. Focus on using correct algorithms and designing your code from a high level to process data efficiently. Write your code in a clear and easy to read manner, so that you or some other programmer can easily figure out what's going on a few months down the line when you need to add fixes or new functionality. These are the ways to build efficient and maintainable systems, not by writing stuff that you could enter in an obfuscated code contest.
valgrind (Score:4, Informative)
Algorithm (Score:4, Insightful)
Until it came time to dispose of the memory. The STL slowly crawled through tons of our objects, and the C++ dispose pattern was just too inefficient for all the stack hits. So we pointed the library at a custom heap and never disposed of the dictionary - we just disposed of the heap in bulk.
All written without hesitation in "longhand" syntax. (And btw, it's "if ( NULL == var )" to those that care.) The code optimized fine, with just a few choice inlines we got to stick. No reg vars, no assembly piles littering the code.
But this was an in-house business app, and the lifecycles / requirements are different than other products. However, because of the nice algorithms, optimization wasn't difficult, and didn't rely on code tricks. If you're squabbling over code tricks for optimization, you're choosing the wrong algorithm, to me.
Rules for writing fast code (aka optimization) (Score:4, Insightful)
Second: Do it later. There are thousands of situations where you can postpone the actual computations. Imagine writing a Matrix class with an invert() method. You can actually postpone calculating the inverse of the matrix until there is a call to access one of the fields in the inverted matrix. Also, you can calculate only the field being accessed. Or at some sensible threshold you may assume that the user code will read the entire inverted matrix and you can just calculate the remaining inverted fields... the options are endless.
Most string class implementations already make good use of this rule by copying their buffers only when the "copied" buffer changes.
Third: Apply minimum algorithmic complexity. If you can use a hashmap instead of a treemap, use the hash version; it's O(1) vs O(log n). Use quicksort for just about any kind of sorting you need to do.
Fourth: Cache your data. Download or buy a good caching class or use some facilities your language provides (eg. Java SoftReference class) for basic caching. There are some enormous performance gains that can be realized with smart caching strategies.
Fifth: Optimize using your language constructs. Use the register keyword, use language idioms that you know compile into faster code, etc... Scratch this rule! If you're applying rules one to four you can forget about this one and still have fast AND readable code.
The never overused example that I have (Score:4, Informative)
So, the client decided not to pay the last million dollars because the performance was total shit. On a weblogic cluster of 2 Sun E45s they could only achieve 12 concurrent transactions per second. So the client decided they really did not want to pay and asked us to make it at least 200 concurrent transactions per second on the same hardware. If I may make a wild guess, I would say the client really did not want to pay the last million no matter what, so they upped the numbers a bit from what they needed. But anyway.
Myself and another developer (hi, Paul) spent 1.5 months - removing unnecessary db calls (the app was incremental, every page would ask you more questions that needed to be stored, but the app would store all questions from all pages every time,) cached XML DOM trees instead of reparsing them on every request, removed most of the session object, reduced it from 1Mb to about 8Kb, removed some totally unnecessary and bizarre code (the app still worked,) desynchronized some of the calls with a message queue etc.
At the end the app was doing 320 and over concurrent transactions per second. The company got their last million.
The lesson? Build software that is really unoptimized first and then save everyone's ass by optimizing this piece of shit and earn total admiration of the management - you are a miracle worker now.
The reality? Don't bother trying to optimize code when the business requirements are constantly changing, the management has no idea how to manage an IT dep't, the coders are so nube there is a scent of freshness in the air, and there is a crazy deadline right in front of you. Don't optimize; if performance becomes an issue, optimize then.
Just optimize for the "big picture" (Score:4, Interesting)
As for simple code optimizations, here's what a modern compiler (Microsoft Visual C++
Dear Lord (Score:5, Insightful)
1) Don't know when two things are obviously equivalent to any non-brain dead compiler.
2) Think something other than readability matters.
3) Think the non-idiomatic way of doing something is more readable.
But I'm sure I'm just repeating the comments I can't be bothered reading.
Get the answer for yourself!! (Score:4, Insightful)
Most compilers today will get all the simple stuff like the if (!ptr) vs if (NULL == ptr) optimization. It's the more complex things, which the compiler cannot "prove," that give it trouble. For example:
void h(int x, int y) {
    int i;                  /* N, f() and g() declared elsewhere */
    for (i = 0; i < N; i++) {
        if (0 != (x & (1 << y))) {
            f(i);
        } else {
            g(i);
        }
    }
}
Very few compilers will dare simplify this to:
void h(int x, int y) {
    int i;
    if (0 != (x & (1 << y))) {
        for (i = 0; i < N; i++) f(i);
    } else {
        for (i = 0; i < N; i++) g(i);
    }
}
Because compilers have a hard time realizing that the conditional is loop-invariant and can be hoisted outside the for loop. The compiler also has the opportunity to perform loop unrolling in the second form that it might not attempt in the first.
You can learn these things from experience, or you can simply figure them out for yourself with the aforementioned decompilation tools.
Once you decide to optimize... (Score:4, Insightful)
Here's my "guide to optimizing":
1) Are you disk I/O bound? You might need to switch to memory mapped files, or you might need to tweak the settings on the ones you have. You might need to use a lower level library to do your I/O. Many C++ iostreams implementations are slow, and many similar libraries involve lots of copying.
2) Are you socket I/O (or similar) bound? If so, you may need to rewrite with asynchronous I/O. This can be a PITA. Suck it up.
3) Are your threads spending all their time sitting in locks waiting for other threads? One, make sure the number of worker threads is appropriate for the number of CPUs the host has. If you've already got the right number of threads, this can be a really tough problem. Presumably, the threads are helping your program's readability, and trying to rework things into fewer threads is often a *bad idea*.
4) Are you spending all your time in malloc/new/constructor free/delete/deconstructor? Maybe you need to keep things on the stack, use a garbage collector, use reference counted objects, use pooled memory techniques, etc. In the right places, switching from some "string" library to const char* and stack buffers can give a huge benefit. Make sure, of course, that you use the "n" version of all standard string functions (the ones that take the size of the buffer as an argument) to avoid buffer overruns.
5) Are you spending all of your time in some system call? Like maybe some kind of WriteTextToScreen or FillRectangleWithPattern type of thing? For drawing code in general, try buffering things that are algorithmically generated in bitmaps, and only regenerate the parts that change. Then just blit together the pieces for your final output. Perhaps you need to rely on hardware transparency support for fast layer compositing. You might need fewer system level windows so you draw more in one function. Maybe you need to reduce your frame rate.
6) Are you using memcpy as appropriate?
If any of the previous items are true, you have no business worrying about the compiler. However, once you've gotten this far, you can start worrying about optimizing your code line by line.
7) Since you've gotten this far, the line(s) of code you're worried about are all inside some loop that gets run. A lot. They may be inside a function that's called from a loop too, of course. So, a few things to consider. A) You may need to use templates to get code that is optimized for the appropriate data type. B) You may need to split off a more focused version of the function from the general-purpose function if it's also used in non-critical areas. This has negative maintenance ramifications. C) Do the bonehead-obvious stuff like moving everything out of the loop that you can. D) Look at the assembly actually generated by your compiler. If you're not comfortable with this, you have no business doing further optimization.
After looking at the assembler, then you'll know if the following are important. In my experience, they are.
1) Change array indexing logic to pointer logic. Code like:
    for (i = 0; i < n; i++) stuff[i] = val;
can change to:
    for (p = stuff; p < stuff + n; p++) *p = val;
This eliminates lots of redundant addition. All of those stuff[i] = val type of statements tend to generate a mov plus an index computation (scale and add against the array base) on every iteration, where the pointer form is a single increment.
Finally, a story that my signature works well with (Score:4, Interesting)
Optimization (Score:4, Insightful)
1. Choose the right algorithm. For example, in an embedded project I worked on, an engineer used a linked list to store thousands of fields that must be added and deleted. While adding is fast, it didn't scale for deleting. Changing it to a hash table sped things up significantly.
2. Know your data and how it is used. Knowing how to organize your data and access it can make a huge difference. As a previous poster pointed out, sequential memory accesses are much faster than random accesses. I had to do some 90 degree image rotation code. The simple solution just used a couple of for loops when copying the pixels from one buffer to another. In another version, I took into account the processor cache and how memory is accessed and broke the image down into tiles. The first algorithm, while simple and elegant, ran at 30 frames per second. The other ran at over 200 frames per second. Looking at the code, the first algorithm should be faster since the code is simpler. Both algorithms operate in O(N) time, where N = width * height.
Further optimization attempts to hint to the CPU cache about memory made no difference (Athlon XP 1700+). The only possible way I see to speed it up further would be to write it in hand-coded assembler.
3. Reduce the number of system calls if possible. Some operating systems can be very painful when calling the kernel. Group reads and writes together so fewer calls are made.
4. Profile your code to find bottlenecks.
5. Try and keep a tradeoff between memory usage and performance. A smaller tightly packed data set will execute faster with CPU caches and will reduce page faults when loading and starting up.
6. Try debugging your code at the assembler level, stepping through it. It will help you better understand your compiler.
7. Don't bother trying to squeeze every ounce of performance out of code when the next function you call will be very slow. For example, in one section of MS-DOS's source code, which was hand-coded assembly language, the code was calculating the cluster or sector of the disk to access. First it checked whether it was running on a 16-bit or 32-bit CPU. Next it took the 16-bit or 32-bit path for the multiplication, then it read from the disk. Why the hell write all this code to check whether the CPU is 16- or 32-bit for the multiply when the frigging disk is going to be slow? They should have just stuck with the 16-bit multiply rather than being clever.
In general applications with GCC, I rarely see much difference between -O2 and -O3. For that matter, I often don't see a noticeable difference between -O0 and -O3 for a lot of code.
I only see improvements in some very CPU intensive multimedia code. I also saw a significant improvement in some multimedia code when I told the compiler to generate code for an Ultrasparc rather than the default, but that's because the pre-ultrasparc code didn't use a multiply instruction.
-Aaron
/. posters (Score:5, Insightful)
Somehow 99% of the readers took this to mean "What is the difference between NULL and the zero bit pattern and do you think it is a good idea to write clear code and do the profile/algorithm change cycle until there is nothing left to optimize or should I write low level optimized code from the start?"
sigh.. I've only found two comments with code so far after going through hundreds of posts. This is possibly the worst signal to noise ratio I've witnessed on Slashdot.
HLSL (Score:4, Informative)
It's not really a coding trick like an XOR swap, but most compilers don't yet seem to fully unroll parallel loops into good SIMD instructions or multiple threads.
The only time I've needed to bother to look at assembly output in recent years (other than debugging a release-mode program) is when writing HLSL shaders. HLSL is the high-level shading language (C-like) for shaders that is part of Direct3D 9. HLSL can be compiled to SM1, SM2, or SM3 assembly.
With pixel shader 2.0 you've only got 64 instruction slots, and some important instructions like POW (power), NRM (normalize), and LRP (interpolate) take multiple slots. 64 slots is not enough for a modern shader. I curse ATI for setting the bar so low.
There are two flaws I've found with the Dec 04 HLSL compiler in the DirectX. Sometimes it will not automatically detect a dot product opportunity. I had some colour code to convert to black and white in a shader and wrote it as y = colour.r*0.299 + colour.g*0.587 + colour.b*0.114; as I thought that was the most clear way to write it. Under certain circumstances the compiler didn't want to convert to a single dot instruction so I had to write as y = dot( colour.rgb, half3(.299h,
Another is that often a value in the range of -1 to +1 is passed in as a colour, which means it must be packed into the 0-1 range. To get it back you've got to double it and subtract 1.
a = p*2-1; gets converted into a single MAD instruction, which takes one slot.
a = (p-0.5)*2; gets converted into an ADD and then a MUL.
Also conventional wisdom says you've got to write assembly to get maximum performance out of pixel shader 1.1 as it is basically just eight instruction slots. I don't have any snippets to verify this though.
I think this thread demonstrates that either compilers are mature enough you don't need any code tricks to help them do their job or
Re:Bad example (Score:3, Insightful)
Read the C standard about the definition of a null pointer constant.
Wrong, wrong, wrong (Score:5, Informative)
"ptr == 0" must give the same result as "ptr == NULL", always.
Re:Wrong, wrong, wrong (Score:4, Informative)
An integral constant expression with the value 0, or such an expression cast to type void *, is called a null pointer constant. If a null pointer constant is assigned to or compared for equality to a pointer, the constant is converted to a pointer of that type. Such a pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function. The macro NULL expands to an implementation-defined null pointer constant.
Once again, it should be said:
Don't give advice not to give advice to not give advice when you don't know C when you don't know C when you don't know C.
Re:Wrong, wrong, wrong (Score:4, Informative)
Although the comparison (0 == ptr) might for some twisted reason come out as a non-zero value, you are not guaranteed that (!ptr) will have the same value.
Wrong [eskimo.com]
Don't give advice... ah, screw it. You get the idea.
Re:write clearer code first (Score:3, Informative)
Anyone with any familiarity with C will consider the latter form unnecessarily more verbose, and therefore less clear.
There is one exception to that, and that is when "ptr" is in fact a complex expression that isn't obviously a pointer expression at a glance. In that case, == NULL or != NULL spells out to the reader of the code "oh, and by the way, it's a pointer." That is the ONLY reason to write this for clarity.
Re:write clearer code first (Score:3, Insightful)
The problem comes from the fact that some functions that return integers do so in a way that has the inverse of the intuitive boolean interpretation (zero means true). One example is strcmp(), which returns zero when the two strings are equal.
Re:Neither are correct. (Score:3, Insightful)
So does using -W -Wall -Werror, and that lets you write your statements naturally and protects you when comparing two variables.
Relying on NULL #define-ition to be compatible with all pointer types is risky.
Relying on behavior that is explicitly specified in the ISO/ANSI standard is never risky.
Re:Neither are correct. (Score:3, Interesting)
Also, your statement that "relying on NULL to be compatible with all pointer types is risky" is just plain wrong.
Re:Neither are correct. (Score:3, Insightful)
IMHO, a good programming language is an extension of your thought processes. No current language is particularly great, but some are better than others because they work more like the way we think. This is a huge part of Object-Oriented programming, its purpose is mostly grouping and categorizing th
Re:You're asking the wrong crowd (Score:3, Insightful)
~Lake
Re:Not Machine Performance but Programmer Performa (Score:3, Insightful)
1. Run top and look at the amount of CPU usage your app has during different parts of its operation. It should not, for example, run at 99% CPU usage while idle.
2. Run QuartzDebug to make sure you aren't doing gratuitous amounts of extra drawing. Examples: redrawing more often than necessary, redrawing more area than necessary.
And yes, fo