Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Path: utzoo!mnetor!uunet!seismo!rutgers!labrea!aurora!jbm
From: jbm@aurora.UUCP (Jeffrey Mulligan)
Newsgroups: comp.graphics
Subject: Re: Thinning out a bitmap image.
Message-ID: <823@aurora.UUCP>
Date: Wed, 22-Jul-87 14:18:54 EDT
Article-I.D.: aurora.823
Posted: Wed Jul 22 14:18:54 1987
Date-Received: Fri, 24-Jul-87 05:45:05 EDT
References: <11244@clyde.ATT.COM>
Organization: NASA Ames Research Center, Mt. View, Ca.
Lines: 48

in article <11244@clyde.ATT.COM>, spf@moss.ATT.COM says:
+ 
+ In article <275@uvicctr.UUCP> sbanner1@uvicctr.UUCP (S. John Banner) writes:
+>
+>    I have recently been called upon to write a program that takes
+>a bitmap image, and converts it to a second bitmap image, such that
+>no two points within a given radius of each other are left on.
+>... an algorithm where you read in a bunch of lines, then scan through
+>the image, point by point until you get to an on pixel, then blank
+>out everything within the given radius, and continue on.
+>... the program should not have to read in the entire bitmap (only a
+>small fraction of the map will fit...)
+ 
+ The central problem with this approach is that you will get a
+ different result depending upon where in the image you start and in
+ which direction you traverse it.  I don't know what you intend to do with
+ the product, but I would think this is undesirable.  If you can get
+ enough of the image in memory to use a statistical, rather than
+ a "first-encountered" approach, your result would be invariant under
+ changes in starting point.  Essentially, you would calculate the central
+ pixel (suitably defined for your purposes) and then turn off any on pixels
+ in its neighborhood.
+ 
+ Steve Frysinger
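
For concreteness, a rough sketch of the radius-blanking approach
described in the original question might look like the following.
The in-core bitmap, its dimensions, and the exclusion radius are
assumptions made purely for illustration (a real program would work
on a band of scanlines at a time), and the result does depend on
where the scan starts, as noted above:

/*
 * Sketch of the first-encountered approach: scan the bitmap, and
 * whenever an "on" pixel is found, keep it and clear every other
 * "on" pixel within radius R of it.
 * W, H, and R are illustrative values, not from the original post.
 */
#define W 512
#define H 512
#define R 4

static unsigned char img[H][W];     /* 1 = on, 0 = off */

void thin(void)
{
    int x, y, dx, dy;

    for (y = 0; y < H; y++)
        for (x = 0; x < W; x++)
            if (img[y][x]) {
                /* keep (x,y); clear all other on pixels within R */
                for (dy = -R; dy <= R; dy++)
                    for (dx = -R; dx <= R; dx++) {
                        if (dx == 0 && dy == 0)
                            continue;
                        if (dx*dx + dy*dy > R*R)
                            continue;
                        if (y+dy >= 0 && y+dy < H &&
                            x+dx >= 0 && x+dx < W)
                            img[y+dy][x+dx] = 0;
                    }
            }
}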


Both of these algorithms clip the gray scale range instead of remapping
it.  Let's say that you want to map white (all bits on) to
25% gray (1/4 of the bits on).  Naturally black->black (no bits on).
Then the problem is to reduce the "on" pixel density by a factor of
four everywhere, not just where it is greater than 1/4.  Doing it this way will
preserve the contrast of the original image, while reducing the
luminance by a factor of four.
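
To make the distinction concrete, here is a toy comparison (the
sample densities below are made up for illustration) of clipping the
local "on" density at 25% versus remapping every density by 1/4;
clipping flattens everything brighter than 25% to the same value,
while remapping keeps the ratios between regions intact:

/*
 * Toy illustration: clipping vs. remapping of local on-pixel density.
 * The density values are invented examples, not measured data.
 */
#include <stdio.h>

int main(void)
{
    double d[4] = { 1.00, 0.50, 0.25, 0.10 };   /* local on-pixel fractions */
    int i;

    for (i = 0; i < 4; i++) {
        double clipped  = d[i] > 0.25 ? 0.25 : d[i];   /* clip at 25% */
        double remapped = d[i] / 4.0;                  /* remap by 1/4 */
        printf("%.2f -> clipped %.2f  remapped %.3f\n",
               d[i], clipped, remapped);
    }
    return 0;
}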

One way to do this would be to simply scan the original image, turning
"on" a pixel in the output image after encountering four "on" pixels
in the input image.  You could get fancy and place the new pixel at
the centroid of the four original pixels, and scan the image in
a way that is isotropic with respect to x and y, such as that
proposed by Koenderink and Van Doorn (Proc IEEE v67 n10 p1465 1979).
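
As a rough sketch (the array sizes and the plain row-by-row raster
scan are my own assumptions for illustration; an isotropic scan in
the style of Koenderink and Van Doorn would replace the simple loop),
the counting version might look like:

/*
 * Sketch of the counting approach: turn on one output pixel for
 * every four "on" input pixels, placed at the rounded centroid of
 * those four.  W and H are illustrative values only.
 */
#define W 512
#define H 512

static unsigned char in[H][W], out[H][W];

void reduce_density(void)
{
    int x, y;
    int count = 0;
    long sx = 0, sy = 0;        /* coordinate sums for the centroid */

    for (y = 0; y < H; y++)
        for (x = 0; x < W; x++)
            if (in[y][x]) {
                sx += x;
                sy += y;
                if (++count == 4) {
                    /* centroid of the last four on pixels, rounded */
                    out[(sy + 2) / 4][(sx + 2) / 4] = 1;
                    count = 0;
                    sx = sy = 0;
                }
            }
}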



-- 

	Jeff Mulligan (jbm@ames-aurora.arpa)
	NASA/Ames Research Ctr., Mail Stop 239-3, Moffett Field CA, 94035
	(415) 694-5150