Path: utzoo!utgpu!jarvis.csri.toronto.edu!mailrus!tut.cis.ohio-state.edu!gem.mps.ohio-state.edu!uakari.primate.wisc.edu!uwm.edu!csd4.csd.uwm.edu!trantow
From: trantow@csd4.csd.uwm.edu (Jerry J Trantow)
Newsgroups: comp.sys.amiga.tech
Subject: Re: huffman encoding
Message-ID: <328@uwm.edu>
Date: 3 Oct 89 23:58:08 GMT
References: <467@crash.cts.com>
Sender: news@uwm.edu
Reply-To: trantow@csd4.csd.uwm.edu (Jerry J Trantow)
Organization: University of Wisconsin-Milwaukee
Lines: 17

In article <467@crash.cts.com> uzun@pnet01.cts.com (Roger Uzun) writes:
>Of course i realize that the huffman tree for a file that has
>256 unique elements is not optimal for files that have less than that,
>but I am trying to simplify things a bit.
>So I should have asked for the huffman bit patterns and bit counts
>for the constant encoding tree that would be used on files that
>have 256 unique elements.
>-Roger uzun
 
Roger, I'm sure you realize that the bit count equals the depth of the
node (depth = the number of edges on the path from the root to the
node).  If you have 256 unique elements and a "constant encoding"
tree, I assume they all have equal probabilities?  If they have equal
probabilities they all sit on the same level: the tree is a complete
binary tree of depth log2(256) = 8, so every code is exactly 8 bits
long and you get no compression at all.  But I'm not really sure if
this is what you mean.
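To make that concrete, in the equal-probability case the bit pattern
for each symbol can simply be taken as the byte value itself.  A
minimal sketch in C (the array names are mine, just for
illustration):

#include <stdio.h>

int main(void)
{
    unsigned int code[256];   /* bit pattern for each symbol   */
    int bits[256];            /* bit count = depth of the leaf */
    int i;

    for (i = 0; i < 256; i++) {
        code[i] = i;          /* the 8-bit pattern is the byte */
        bits[i] = 8;          /* every leaf sits at depth 8    */
    }

    printf("symbol 0x41 -> code 0x%02X, %d bits\n",
           code[0x41], bits[0x41]);
    return 0;
}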
I'm currently doing Huffman compression on 8SVX files.  It is not that
tough to build the tree.  My only concern is getting the decoding to
run fast.
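One standard way to speed up decoding is to trade memory for time
with a lookup table: peek at the next MAXBITS bits of the stream,
index the table, and you get both the symbol and the number of bits
to consume, with no tree walk at all.  Here is a sketch under the
assumption that no code is longer than MAXBITS bits; the bit reader
and the toy three-symbol code are my own invention, not anything from
the 8SVX work:

#include <stdio.h>

#define MAXBITS 8   /* longest allowed code length */

struct entry {
    unsigned char symbol;   /* decoded symbol               */
    unsigned char length;   /* bits consumed by this symbol */
};

static struct entry table[1 << MAXBITS];

/* Fill the table: every MAXBITS-bit index whose leading bits match a
 * code maps to that code's symbol, so one array lookup replaces a
 * walk down the tree. */
static void build_table(const unsigned int code[], const int bits[],
                        int nsyms)
{
    int s, pad, fill;
    for (s = 0; s < nsyms; s++) {
        pad = MAXBITS - bits[s];       /* low bits left unused      */
        for (fill = 0; fill < (1 << pad); fill++) {
            table[(code[s] << pad) | fill].symbol = (unsigned char)s;
            table[(code[s] << pad) | fill].length = (unsigned char)bits[s];
        }
    }
}

/* Simple MSB-first bit reader over a byte buffer. */
struct reader { const unsigned char *buf; long bitpos; };

static unsigned int peek(struct reader *r)
{
    unsigned int v = 0;
    long p = r->bitpos;
    int i;
    for (i = 0; i < MAXBITS; i++, p++)
        v = (v << 1) | ((r->buf[p >> 3] >> (7 - (p & 7))) & 1);
    return v;
}

int main(void)
{
    /* Toy 3-symbol code: A = 0, B = 10, C = 11 */
    unsigned int code[3] = { 0, 2, 3 };
    int bits[3] = { 1, 2, 2 };
    unsigned char data[] = { 0x9C, 0x00 };  /* 10 0 11 10 0... = BACB */
    struct reader r = { data, 0 };
    int n;

    build_table(code, bits, 3);
    for (n = 0; n < 4; n++) {
        struct entry e = table[peek(&r)];
        printf("%c", 'A' + e.symbol);
        r.bitpos += e.length;   /* consume only the bits used */
    }
    printf("\n");               /* prints BACB */
    return 0;
}

The table has 2^MAXBITS entries, so this stays cheap as long as your
longest code is short; for longer codes people usually cap the code
lengths or fall back to the tree for the rare long ones.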