Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.1 6/24/83; site mmintl.UUCP
Path: utzoo!linus!philabs!pwa-b!mmintl!franka
From: franka@mmintl.UUCP (Frank Adams)
Newsgroups: net.lang.c
Subject: Re: C programming style
Message-ID: <480@mmintl.UUCP>
Date: Fri, 12-Jul-85 13:46:01 EDT
Article-I.D.: mmintl.480
Posted: Fri Jul 12 13:46:01 1985
Date-Received: Mon, 15-Jul-85 00:40:01 EDT
References: <11434@brl-tgr.ARPA>
Reply-To: franka@mmintl.UUCP (Frank Adams)
Organization: Multimate International, E. Hartford, CT
Lines: 27
Summary: 


In article <11434@brl-tgr.ARPA> DHowell.ES@Xerox.ARPA writes:
>Let me start out by saying that while I am not an experienced C
>programmer, I am an experienced programmer.  These replies are based on
>my limited knowledge of C, but I believe they are applicable to all
>programming languages in one way or another.

I am an experienced programmer.  I have been writing C for about four
months now.

>Maybe "i++" is clearer to you, but do you only write programs for
>yourself?  To me "i++" is the kind of statement typical of an APL-type
>language, not a language that is supposed to be structured and easy to
>understand.  "i++" could mean something else in another language.  But
>almost all high level languages (even APL) use some form of "i = i + 1"
>to increment a variable.  If I want to distinguish between incrementing
>and adding, then I would define a procedure such as "increment(i)",
>which can be immediately understood.

I agree that "i++" is an abomination.  (I do use it, however, to be
consistent with the rest of the code I work with.)  Actually, C has
a third way to represent this operation: "i += 1".  Personally, I
think this is the superior notation.  It is concise, yet easy enough
for a person unfamiliar with the language to interpret.
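For what it's worth, here is a minimal sketch (just a hypothetical counter
'i') showing that all three notations do the same thing in C:

	int i = 0;

	i = i + 1;	/* assignment form: add 1, store result back in i */
	i += 1;		/* compound assignment: same effect */
	i++;		/* post-increment: same effect again */

All three leave 'i' larger by one; they differ only in notation.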

Incidentally, "i = i + 1" is not all that obvious, either.  Many people's
response, on seeing such an expression for the first time, is "Impossible;
'i' can't equal 'i + 1'."
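The confusion comes from reading "=" as mathematical equality rather than
assignment.  A small sketch of the distinction (C spells the equality test
"=="):

	int i = 5;

	i = i + 1;	/* assignment: evaluate i + 1, store it in i; i is now 6 */

	if (i == 6)	/* equality test: true at this point */
		;

In "i = i + 1" the right-hand side is evaluated first and the result is
stored in 'i', so there is no contradiction involved.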