Path: utzoo!utgpu!watmath!clyde!att!rutgers!mailrus!ulowell!m2c!applix!scott
From: scott@applix.UUCP (Scott Evernden)
Newsgroups: comp.sys.amiga.tech
Subject: Re: Dividing by a power of two (Re: Aztec compiler ineffeciencies)
Message-ID: <869@applix.UUCP>
Date: 29 Nov 88 16:56:31 GMT
References: <8811281919.AA17977@postgres.Berkeley.EDU>
Reply-To: scott@applix.UUCP (Scott Evernden)
Organization: APPLiX Inc., Westboro MA
Lines: 42

In article <8811281919.AA17977@postgres.Berkeley.EDU> dillon@POSTGRES.BERKELEY.EDU (Matt Dillon) writes:

	( re:  "/ 2" vs ">> 1" )

>	This is also a very good example of what is known as "programmer's
>experience".  No *good* programmer would do constant division by powers of 2
>using '/', but would use a shift instead.  The programmer, after all, knows
>exactly what range a particular variable will fall in whether it is 
>unsigned or not, and thus can make optimizations and assumptions the
>compiler could never match.

I used to believe this.  At one time, my code was riddled with <<'s
and >>'s for powers-of-2 arithmetic.  I still fight the temptation to do
this and other confusing tricks.  I now believe that it is the compiler's
duty to recognize these cases (at least) and do the optimization for me.
Mostly because "foo = bar / 16" is vastly more readable, and the idea more
immediate, than "foo = bar >> 4".  I maintain that someone else reading the
code will be more confused by the shift than by the simple division.
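
For what it's worth, the two forms aren't even interchangeable once signed
values get involved, which is exactly why the compiler has to be careful.
A small sketch (variables are illustrative; this assumes a two's-complement
machine like the 68000, where ">>" on a signed int is an arithmetic shift
and division truncates toward zero; C of this vintage leaves both
implementation-defined):

	int foo, n = -7;

	foo = n / 2;	/* truncates toward zero: foo == -3 */
	foo = n >> 1;	/* arithmetic shift rounds down: foo == -4 */

The compiler can only substitute the shift when it knows the operand is
non-negative; the programmer may know that, but the type alone doesn't
say so.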

Do you multiply by 12 like this? 

	foo = (bar << 3) + (bar << 2);

I hope not.  Yet, J. Toebes and Lattice will figure out a "* 12" for you.
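
The readable alternative is to write the multiplication straight and let
the optimizer do the strength reduction.  A sketch of the contrast (the
equivalence holds because 8*bar + 4*bar == 12*bar):

	foo = bar * 12;		/* a compiler like Lattice can emit
				   (bar << 3) + (bar << 2) itself */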

Another problem with excessive use of << and >> is that they bind so loosely
that you've got to be real careful with parens, which tends to further
confuse the semantics.
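
For example, "+" binds tighter than ">>", so dropping the parens silently
changes the result:

	foo = bar >> 4 + 1;	/* parses as bar >> (4 + 1) */
	foo = (bar >> 4) + 1;	/* what was probably meant  */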

>	You might say "It is the compiler's job to optimize!", to which I would
>respond: "Which do you think the compiler can optimize better, a shift, or
>a divide?".  Got it?  No matter how good compilers get, certain constructs
>are more apt to optimization than others due to fewer assumptions having to
>be made about them, so the 'good' programmer *always* optimizes his code.

My feeling is that the compiler should optimize whatever it can, allowing
me to select degrees of optimization, etc., as most modern compilers do.
After trying to understand what some of my oldest C code was doing (full of
tricks, and even with comments), I'm now convinced that readability and
understandability are vastly more important than "good" programmer tricks
which attempt to out-think the compiler.

-scott