Path: utzoo!attcan!uunet!husc6!purdue!decwrl!jumbo!sclafani
From: sclafani@jumbo.dec.com (Michael Sclafani)
Newsgroups: comp.graphics
Subject: Re: Floyd-Steinberg Errors:  What do I do with them?
Summary: How to handle Floyd-Steinberg error terms
Message-ID: <13172@jumbo.dec.com>
Date: 12 Jul 88 21:38:08 GMT
References: <6506@well.UUCP>
Organization: DEC Systems Research Center, Palo Alto
Lines: 64

In article <6506@well.UUCP>, ewhac@well.UUCP (Leo 'Bols Ewhac' Schwab) writes:
> 	However, I've run into a weird snag.  Occasionally, depending on the
> Phase of The Moon, it translates black pixels into grey ones.

This sounds like a common implementation problem: if you don't handle the
error terms properly, they accumulate and grow, whiting out the image.
Make sure that the sum of the computed error terms equals the actual
error.  It's easy to get this wrong if you use bit-shifts for division
or aren't careful about rounding.

> After considering a number of possible
> problems, it struck me that perhaps I'm not dealing with overflow and
> underflow correctly.

[ example of negative pixel value deleted ]

> 	Now clearly, -.16 is out of range for the value of a pixel; it's too
> black.
> 
> 	So my question is this:  What do I do with this pixel when it comes
> time to process it?  Do I pretend like there's no problem, and treat it like
> all the other pixels (doesn't seem right, since this negative pixel will
> suck the brightness out of the neighboring pixel when I process it)?

But it _is_ right.  You've taken a pixel which is "gray", and set it to
"white".  To compensate, other pixels must be made darker.  Since the
adjacent pixel would already have been "black", the negative error term
will propagate through the image.  Consider a one-dimensional image (all
of the error propagates to the right):

 	+-----+-----+-----+
 	| .51 | .02 | .51 |		(.51 - 1.0) + .02  ==  -.47
 	+-----+-----+-----+

 	+-----+-----+-----+
 	| 1.0 | 0.0 | .51 |		(-.47 - 0.0) + .51  ==  .04
 	+-----+-----+-----+

 	+-----+-----+-----+
 	| 1.0 | 0.0 | 0.0 | +.04
 	+-----+-----+-----+
 
The average intensity of the region has been maintained _because_ of the
propagation of negative values.  The range of legal pixel values doubles
from [ 0.0 , 1.0 ] to [ -0.5 , 1.5 ].

In the example you gave, you used 3/8 to determine the horizontal error
term.  I believe that is different from the technique that Floyd and
Steinberg actually used, with the error split in four directions, not just
three:
        +-----+-----+-----+
        |     |  x  | 7/16|
        +-----+-----+-----+
        | 3/16| 5/16| 1/16|
        +-----+-----+-----+
         
This method introduces fewer artifacts than the (3/8 3/8 2/8) technique.
I think Steinberg posted on this newsgroup, stating that alternating
directions on scan lines (left to right on odd, right to left on even)
will produce even better results.  I've seen an implementation which does
this, and it's true.

Michael Sclafani     \\\  Digital Equipment Corporation
sclafani@src.dec.com \\\  Systems Research Center, Palo Alto, CA