Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.1 6/24/83; site duke.UUCP
Path: utzoo!watmath!clyde!bonnie!akgua!mcnc!duke!crm
From: crm@duke.UUCP (Charlie Martin)
Newsgroups: net.cse
Subject: Re: students editing output (more)
Message-ID: <6293@duke.UUCP>
Date: Sun, 15-Sep-85 11:22:55 EDT
Article-I.D.: duke.6293
Posted: Sun Sep 15 11:22:55 1985
Date-Received: Tue, 17-Sep-85 04:50:47 EDT
References: <433@uvm-cs.UUCP> <2889@ut-sally.UUCP>
Reply-To: crm@duke.UUCP (Charlie Martin)
Organization: Duke University
Lines: 27
Summary: Can't just look at the output!

In article <2889@ut-sally.UUCP> brian@ut-sally.UUCP (Brian H. Powell) writes:
> ...  AI hasn't progressed to the point that a program can
>judge program style as well as I can.  To me, that's an important part of
>teaching CS.  You don't just teach them how to program; you teach them how
>to program well....

I think that Brian has an essential point here:  one should not grade only
the output of a program!

The other solutions offered may be technically manageable (I like the
``special script'' command idea; it seems it could be done easily enough
under 4.2 anyway), but they don't replace the direct feedback that I think
is essential to teaching programming.

I've been teaching intro labs this semester and thought about this --
what I finally concluded was that building a rigged demo for those labs
would take at least as much programming skill and effort as simply doing
the labs.  That made the question much simpler for me (but it doesn't
really apply to the original question).

Another thought: how about a shell script that runs the programs and
then diffs the actual output against the sample each student turned in?
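
For what it's worth, here's a rough sketch of the sort of thing I have
in mind, assuming each student hands in a source file plus the output
they claim it produces, and that the program reads a single fixed test
input.  The names (students/*, prog.c, sample.out, test.in) are only
placeholders for the example:

    #! /bin/sh
    # Sketch: compile and run each student's program on a fixed input,
    # then diff the real output against the output they turned in.
    # All file names here are made up for illustration.
    for dir in students/*
    do
        cc -o $dir/a.out $dir/prog.c
        $dir/a.out < test.in > /tmp/real$$
        if diff $dir/sample.out /tmp/real$$ > /dev/null
        then
            echo "$dir: output matches the turned-in sample"
        else
            echo "$dir: output differs, look closer"
        fi
    done
    rm -f /tmp/real$$

A mismatch doesn't prove the output was doctored, of course, but it
flags the submission for a closer look; and a match still tells you
nothing about style, which is Brian's point.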

-- 

			Charlie Martin
			(...mcnc!duke!crm)