Path: utzoo!attcan!utgpu!jarvis.csri.toronto.edu!mailrus!csd4.milw.wisc.edu!cs.utexas.edu!uunet!crdgw1!sungod!davidsen
From: davidsen@sungod.crd.ge.com (ody)
Newsgroups: comp.lang.c
Subject: Re: Memory Models
Keywords: Memory models,C
Message-ID: <1633@crdgw1.crd.ge.com>
Date: 11 Aug 89 15:04:55 GMT
References: <562@dcscg1.UUCP>
Sender: news@crdgw1.crd.ge.com
Reply-To: davidsen@crdos1.UUCP (bill davidsen)
Distribution: usa
Organization: General Electric Corp. R&D, Schenectady, NY
Lines: 33

In article <562@dcscg1.UUCP> drezac@dcscg1.UUCP (Duane L. Rezac) writes:

| I am just getting into C and have a question on Memory Models. I have not
| seen a clear explanation on just what they are and how to determine which 
| one to use. Does anyone have a short, clear explanation of these for  
| someone just starting out in C?  

  I'll provide some information, but bear in mind that memory models are
a characteristic of the compiler and linker rather than of the C
language itself. Segmented machines can support the models in all
languages, including assembler.

The question is whether the code and/or data space is limited to 64k.
Here's a table of the common models:

		       code
	       64k             >64k
	 _________________________________
	|                |                |
d  64k	|     small      |     medium    |
a	|________________|________________|
t	|                |                |
a >64k	|    compact     |     large     |
	|________________|________________|

  Two other models are tiny (code and data share the same 64k segment)
and huge, in which a single array or aggregate object may be larger
than 64k.

  The reason for using the smaller models is performance. In small or
medium model every data pointer is a 16-bit offset into a single data
segment, so data access avoids segment register loads and is faster.
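Whatever the model, DOS compilers of the day (Microsoft, Borland and
others) let you override the default pointer size per declaration. A
fragment with the usual vendor keywords; these are extensions, not
standard C, so it won't build on other compilers:

```c
/* Not standard C: `near` and `far` were DOS-compiler extensions. */
char near *np;  /* 16-bit offset; must point into the default data segment */
char far  *fp;  /* 32-bit segment:offset; reaches anywhere, slower to use */
```

This is why model choice mattered less for small utilities: you could
stay in small model and declare the occasional far pointer by hand.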
	bill davidsen		(davidsen@crdos1.crd.GE.COM)
  {uunet | philabs}!crdgw1!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me