Path: utzoo!attcan!uunet!lll-winken!lll-tis!ames!pacbell!att!occrsh!uokmax!rmtodd
From: rmtodd@uokmax.UUCP (Richard Michael Todd)
Newsgroups: comp.os.minix
Subject: Re: Minix C compiler (again)
Message-ID: <1358@uokmax.UUCP>
Date: 3 Jun 88 21:56:08 GMT
References: <2855@louie.udel.EDU>
Reply-To: rmtodd@uokmax.UUCP (Richard Michael Todd)
Organization: University of Oklahoma, Norman, OK
Lines: 39

In article <2855@louie.udel.EDU> Leisner.Henr@xerox.com (Marty) writes:
>I have a copy of Minix I purchased right after the textbook came out.  In order
I assume that means v1.1.
>to get it to run on my hard disk (I use genuwine PC-ATs) I had to start
>recompiling Minix out of the box.  The performance of the C compiler was
>horrendous.  Making the kernel took, as I recall, something on the
>order of 1/2 hour.  While the system was running on floppy disks, it didn't
It's even worse on my system, a PC/XT clone.  The first time I recompiled the
kernel I sat down to watch Blake's 7.  The TV show was over before the
compile was (~55 min.).
  Thing is, both Marty and I still have the v1.1 compiler.  Is v1.2 much
faster? Inquiring minds want to know.
>compiler.  Which is why I'm (somewhat) anxiously looking for other solutions to
>native compiling on Minix (gcc? small-c?).  
  Not likely.  I can claim a good deal of experience with Small-C, having
ported it to my TRS-80 many years ago.  If you don't mind a compiler
that only supports the int, char, int *, char *, int [], and char [] types,
Small-C shouldn't be too difficult to port, especially if you start with
a version already hacked to produce 8088 code (e.g. the one announced in
this month's Byte).  Frankly, I don't think it's worth it.  And from what I've
heard, porting GCC to the 8088 or 80286 would be decidedly non-trivial;
Richard Stallman didn't design his compiler to handle braindead architectures....
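
  For anyone who hasn't seen Small-C, that type restriction means all your
code has to look roughly like the fragment below.  This is purely an
illustration I made up, not code from any particular Small-C release:

    /* Illustration only: a fragment written with nothing but the types
     * Small-C understands (int, char, int *, char *, int [], char []).
     * No structs, no long, no float, no typedefs; K&R-style definitions.
     */
    char line[80];              /* a global char array is fine */

    copystr(dst, src)           /* functions default to returning int */
    char *dst, *src;
    {
        int n;

        n = 0;
        while (*src) {          /* copy up to the terminating NUL */
            *dst++ = *src++;
            ++n;
        }
        *dst = 0;               /* terminate the copy */
        return n;               /* number of characters copied */
    }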

>may start timing each pass and see what's going on my system.  HELP!!
As I recall, the majority of the time is spent in cg and asld.  (This is from
much experience pushing F1 to see what the machine was up to, and it's not
terribly scientific.)
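
  If anyone wants real numbers instead of F1-watching, a quick-and-dirty
wrapper like the one below would at least give wall-clock times per pass.
This is just a sketch; you hand it each pass's command line as a single
argument, whatever your cc driver actually runs (cem, opt, cg, asld, ...):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(int argc, char **argv)
    {
        time_t start, stop;

        if (argc != 2) {
            fprintf(stderr, "usage: %s \"pass command\"\n", argv[0]);
            return 1;
        }
        time(&start);
        if (system(argv[1]) != 0)      /* run the pass via the shell */
            fprintf(stderr, "warning: %s failed\n", argv[1]);
        time(&stop);
        fprintf(stderr, "%s: %ld seconds\n", argv[1], (long)(stop - start));
        return 0;
    }
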
>bit pointers and map in user space before operating on it.  This will lead to
>the ability to run more complicated memory models (specifically 1 64K code
>segment/1 64K initialized data/1 64K stack/N 64K heap segments for moderate N).
>Anyone else doing this?
Hmm... sounds interesting.  What would be even more interesting is some sort
of overlay scheme with multiple code segments, so we could get effective code
sizes >64K.  I don't really know enough about the 286 architecture to say how
feasible that would be...
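
  For what it's worth, the 64K ceiling everyone keeps bumping into is just the
8086 segment scheme: offsets are 16 bits, so no single segment can span more
than 64K, and a bigger program has to spread itself over several segments.
The little program below is purely an illustration (the segment values are
made up, and it shows real-mode addressing; the 286's protected mode swaps
the *16 rule for descriptor tables but keeps the 64K-per-segment limit):

    #include <stdio.h>

    /* Real-mode 8086 rule: physical address = segment * 16 + offset,
     * with the offset limited to 16 bits (0..0xFFFF, i.e. 64K).
     */
    unsigned long physical(unsigned seg, unsigned off)
    {
        return ((unsigned long)seg << 4) + (unsigned long)off;
    }

    int main()
    {
        /* hypothetical layout along the lines Marty describes: one code,
         * one data, one stack, and some heap segments, each its own
         * separate 64K window
         */
        unsigned code = 0x1000, data = 0x2000, heap1 = 0x3000;

        printf("code byte 0:        %05lx\n", physical(code, 0));
        printf("data byte 0:        %05lx\n", physical(data, 0));
        printf("last byte of heap1: %05lx\n", physical(heap1, 0xFFFF));
        return 0;
    }
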
-- 
Richard Todd		Dubious Domain: rmtodd@uokmax.ecn.uoknor.edu
USSnail:820 Annie Court,Norman OK 73069 	Fido:1:147/1
UUCP: {many AT&T sites}!occrsh!uokmax!rmtodd, but don't be surprised if I
don't answer--our mailer is *very* *very* hungry :-<