Path: utzoo!utgpu!water!watmath!clyde!cbosgd!ihnp4!homxb!whuts!mtune!petsd!pedsga!chip
From: chip@pedsga.UUCP
Newsgroups: comp.software-eng
Subject: Re: program complexity metrics
Message-ID: <242@pedsga.UUCP>
Date: 14 Dec 87 17:15:02 GMT
References: <561@ndsuvax.UUCP> <3850002@wdl1.UUCP>
Reply-To: chip@pedsga.UUCP (Chip Maurer,7361)
Organization: Concurrent Computer Corp., Tinton Falls, N.J.
Lines: 59

In article <3850002@wdl1.UUCP> rion@wdl1.UUCP writes:
>>     If you do not use program complexity metrics, what sort of proof, evidence
>>or demonstrations would your company need to cause you to consider using
>>them?  What kinds of payoffs do they have to have and how would those payoffs
>>be established to your satisfaction?
>>     I am particularly interested in answers to the second paragraph of 
>>questions.
>
>I would like to answer this question since I don't use program
>complexity metrics and am very interested in software engineering.
>However, except for what the name implies, I really don't know what
>program complexity metrics are and therefore can't answer.  How about
>a brief overview?  Anybody?

According to my textbook (Software Engineering, by Shooman, McGraw-Hill),
complexity metrics serve the following purposes:

1. As a ranking of the difficulty of various software modules, to be used
along with other factors in the assignment of personnel.
2. In the case of module complexity, as a guide to judge whether subdivision
of a complex module is warranted.
3. As a direct measure of progress and an indirect measure of quality during
the various phases of development.
4. As an aid in normalizing data used in retrospective studies of past
development projects.

Personally, I feel that the second use is of most value.  

The general idea is to be able to apply formulas to software (either as a
whole or individually to modules) to determine the "quality" of the software.

One of the more famous complexity metrics papers was written by Thomas McCabe
(IEEE Transactions on Software Engineering, 12/76 pp 308-320) in which he
describes his graph-theoretic complexity measure.  His premise is that
the more independent paths there are through a given module, the more
complex it is (simple enough).  The formula produces a complexity value
for each module.  His conclusion is that modules with complexity values
over 10 tend to have more errors (presumably because the programmer can
no longer keep track of all the possible paths).  By breaking such a
module up into smaller modules, you reduce the chance of inherent code
errors.  Put simply, the formula is as follows:

v = sum(branches, loops, case, conditions,...ANDs, ORs, NOTs) + 1
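
To make the formula concrete, here is a toy function of my own (not from
McCabe's paper):

	int classify(int x)
	{
	    if (x < 0 || x > 100)   /* "if" plus "||": 2 decision points */
	        return -1;
	    while (x > 9)           /* "while": 1 decision point */
	        x /= 10;
	    return x;
	}

Three decision points in all, so v = 3 + 1 = 4, comfortably under
McCabe's threshold of 10.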

His studies showed that, across several large projects, the 23% of the
routines with a complexity value greater than 10 accounted for 53% of
the bugs.
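
If you want to experiment, a crude counter is easy to write.  Below is a
back-of-the-envelope sketch in C (my own, not from the paper): it scans
a source file for decision keywords and short-circuit operators and
prints the count plus one.  It ignores comments, strings, and
preprocessor lines, so treat its output as a rough guide rather than a
faithful implementation of McCabe's graph measure.

	/* ccm.c -- count decision points in a C source file and print
	 * the count plus one (McCabe's simplified formula).  Comments,
	 * strings, and preprocessor lines are not handled.
	 */
	#include <stdio.h>
	#include <ctype.h>
	#include <string.h>

	static const char *decisions[] = { "if", "while", "for", "case", NULL };

	int main(int argc, char **argv)
	{
	    FILE *fp;
	    char tok[64];
	    int c, prev = 0, len = 0, i;
	    long v = 1;                       /* start at 1 per the formula */

	    if (argc != 2 || (fp = fopen(argv[1], "r")) == NULL) {
	        fprintf(stderr, "usage: ccm file.c\n");
	        return 1;
	    }
	    while ((c = getc(fp)) != EOF) {
	        if (isalnum(c) || c == '_') { /* accumulate an identifier */
	            if (len < (int)sizeof tok - 1)
	                tok[len++] = (char)c;
	        } else {
	            tok[len] = '\0';
	            for (i = 0; len > 0 && decisions[i] != NULL; i++)
	                if (strcmp(tok, decisions[i]) == 0)
	                    v++;              /* branch, loop, or case label */
	            len = 0;
	            if (c == '?')             /* ternary conditional */
	                v++;
	            if ((c == '&' || c == '|') && prev == c)
	                v++;                  /* "&&" or "||" */
	        }
	        prev = c;
	    }
	    fclose(fp);
	    printf("approximate v = %ld\n", v);
	    return 0;
	}

Run it on the classify() example above and it reports v = 4, matching
the hand count.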

Metrics can provide a lot of insight into software quality.  Unfortunately,
they are not viewed as a means of bringing software development out of
its "primitive state" (said in jest of course, see previous articles in 
this newsgroup :-)

Hope this provides some insight into a relatively unknown topic.

-- 
         Chip ("My grandmother called me Charles once. ONCE!!") Maurer
     Concurrent Computer Corporation, Tinton Falls, NJ 07724 (201)758-7361
        uucp: {mtune|purdue|rutgers|princeton|encore}!petsd!pedsga!chip
                       arpa: pedsga!chip@UXC.CSO.UIUC.EDU