Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.2 9/18/84; site utastro.UUCP
Path: utzoo!watmath!clyde!bonnie!akgua!whuxlm!whuxl!houxm!ihnp4!qantel!dual!mordor!ut-sally!utastro!nather
From: nather@utastro.UUCP (Ed Nather)
Newsgroups: net.ai
Subject: Re: Computer Vision, Pattern Recognition
Message-ID: <358@utastro.UUCP>
Date: Mon, 15-Jul-85 11:18:47 EDT
Article-I.D.: utastro.358
Posted: Mon Jul 15 11:18:47 1985
Date-Received: Wed, 17-Jul-85 21:02:41 EDT
References: <10571@rochester.UUCP>
Organization: U. Texas, Astronomy, Austin, TX
Lines: 57

> Reconstruction vs Recognition Based systems:
> 
> Many people (especially people at MIT) believe that a fundamental step
> in computer vision is to reconstruct some set of intrinsic parameters
> such as surface orientation, texture, illumination, reflectivity.

I'm not sure where it fits into the theory, but we have an operational
"image re-recognition" system that works fine for our (very restricted)
astronomical image fields.  We construct (from the original image) a set
of r-theta tables representing the distance and angle of each "nearby"
star image to our target position, as well as the distance and angle of
the "neighbors" of every star image in the original field.  The number of
neighbors is an adjustable parameter, depending on the "richness" -- the
density of star images -- of the field.

When another image of this field is
presented (at a later time, and usually offset in X and Y) we can identify
the target location in the (offset) field by comparing the r-theta values
from the new image with the stored tables, by simple table look-up.  Cross
correlation is not needed.  We can then locate the target position and
center on it.
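
For concreteness, here is a rough sketch of the idea in C.  The star
positions, tolerances, and routine names below are invented for
illustration -- this is not the code we actually run:

#include <stdio.h>
#include <math.h>

#define NSTARS  5
#define NNEIGH  4               /* adjustable "richness" parameter */

struct entry { double r, theta; };

/* Fill tab[] with the distance and angle from star k to each of its
   NNEIGH nearest neighbors (simple selection; fine for small fields). */
void signature(int k, double x[], double y[], int n, struct entry tab[])
{
    int used[NSTARS] = {0};
    int i, j, best;

    used[k] = 1;
    for (i = 0; i < NNEIGH; i++) {
        best = -1;
        for (j = 0; j < n; j++) {
            if (used[j])
                continue;
            if (best < 0 ||
                hypot(x[j] - x[k], y[j] - y[k]) <
                hypot(x[best] - x[k], y[best] - y[k]))
                best = j;
        }
        used[best] = 1;
        tab[i].r     = hypot(x[best] - x[k], y[best] - y[k]);
        tab[i].theta = atan2(y[best] - y[k], x[best] - x[k]);
    }
}

/* Crude table look-up: every entry of a must have a close entry in b.
   (Angle wrap-around at +/- pi is ignored to keep the sketch short.) */
int match(struct entry a[], struct entry b[])
{
    int i, j, hit;

    for (i = 0; i < NNEIGH; i++) {
        hit = 0;
        for (j = 0; j < NNEIGH; j++)
            if (fabs(a[i].r - b[j].r) < 1.0 &&
                fabs(a[i].theta - b[j].theta) < 0.02)
                hit = 1;
        if (!hit)
            return 0;
    }
    return 1;
}

int main(void)
{
    /* reference field (made-up pixel positions); the target is star 1 */
    double x[NSTARS] = { 10.0, 40.0, 55.0, 72.0, 90.0 };
    double y[NSTARS] = { 12.0, 35.0, 80.0, 20.0, 60.0 };
    struct entry stored[NNEIGH], trial[NNEIGH];
    int i;

    signature(1, x, y, NSTARS, stored);

    /* a later image of the same field, offset in X and Y */
    for (i = 0; i < NSTARS; i++) {
        x[i] += 7.0;
        y[i] -= 3.0;
    }

    /* identify the target by comparing r-theta tables, not pixels */
    for (i = 0; i < NSTARS; i++) {
        signature(i, x, y, NSTARS, trial);
        if (match(trial, stored))
            printf("target found at (%.1f, %.1f)\n", x[i], y[i]);
    }
    return 0;
}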

I realize this is a very limited application -- it only works on images
composed of point sources of light -- but the idea of transforming the
original image into "symbolic" form for comparison and recognition may have
some wider use.  The trick would be to find a transformation that retains
most of the information needed for recognition, and discards most of the
rest.  In this example, the chosen algorithm is very efficient.  For a star
field of average richness, only a few hundred bytes suffice to hold all of
the transformed information.  A 100 megabyte disk could hold all of the
"electronic finding charts" ever used in astronomy on this planet.

> Generalized Image Storage Format?
> 
> I can tell every university stores images differently.  As far as
> other generalized images then every program stores them differently.
> This I believe acts as a gigantic brake on vision research.  

Astronomers faced a similar problem, and seem to have solved it.  We can
trade images of star fields with other observatories if we just write them
onto mag tape in FITS tape format -- a generalized bit-mapped image transfer
system.  I can point you to a technical description of FITS if you're
interested.
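
To give the flavor of it: a FITS header is a series of 2880-byte logical
records packed with 80-character keyword cards, ending with an END card,
after which the binary image data follows.  A toy header dump in C (the
file name is invented) looks about like this:

#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *fp = fopen("starfield.fits", "rb");
    char block[2880], card[81];
    int i, done = 0;

    if (fp == NULL) {
        fprintf(stderr, "can't open file\n");
        return 1;
    }
    while (!done && fread(block, 1, 2880, fp) == 2880) {
        for (i = 0; i < 2880; i += 80) {
            memcpy(card, block + i, 80);
            card[80] = '\0';
            printf("%s\n", card);       /* e.g. SIMPLE, BITPIX, NAXIS... */
            if (strncmp(card, "END", 3) == 0 && card[3] == ' ') {
                done = 1;
                break;
            }
        }
    }
    fclose(fp);
    return 0;
}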

> This seems enough to spark some discussion (though I've been wrong
> before).  Any more and people won't read it anyway.  

Probably true.  I'm aware of three "automated telescope" projects in
astronomy that required image recognition to work, and all were total
failures.  A little coaxing would bring out details of this past history,
in hopes we won't be compelled to repeat it.

-- 
Ed Nather
Astronomy Dept, U of Texas @ Austin
{allegra,ihnp4}!{noao,ut-sally}!utastro!nather
nather%utastro.UTEXAS@ut-sally.ARPA