Path: utzoo!utgpu!jarvis.csri.toronto.edu!mailrus!csd4.csd.uwm.edu!cs.utexas.edu!uunet!attcan!ncrcan!ziebmef!mdfreed
From: mdfreed@ziebmef.uucp (Mark Freedman)
Newsgroups: comp.lang.c
Subject: Re: Memory Models
Keywords: Memory models, C
Message-ID: <1989Aug18.210404.13183@ziebmef.uucp>
Date: 19 Aug 89 01:03:59 GMT
References: <562@dcscg1.UUCP> <10703@smoke.BRL.MIL>
Reply-To: mdfreed@ziebmef.UUCP (Mark Freedman)
Organization: Ziebmef Public Access Unix, Toronto, Ontario
Lines: 24

In article <10703@smoke.BRL.MIL> gwyn@brl.arpa (Doug Gwyn) writes:
>In article <562@dcscg1.UUCP> drezac@dcscg1.UUCP (Duane L. Rezac) writes:
>>I am just getting into C and have a question on Memory Models.
>
>That is not a C language issue.  It's kludgery introduced specifically
>in the IBM PC environment.  Unless you have a strong reason not to,
>just always use the large memory model.  (A strong reason would be
>compatibility with an existing object library, for example.)

   And remember that objects larger than 64K can run into problems because
of the segmented architecture.  In Turbo C 2.0, malloc() and calloc() don't
work for objects larger than 64K (use farmalloc() and farcalloc() instead),
and far pointer arithmetic wraps around (the segment register is unchanged
... only the offset has been changed to protect the innocent :-)).  Huge
pointers are normalized (all arithmetic on them goes through function calls
that perform the normalization), but the pointers must be explicitly
declared as huge; even the huge memory model uses far pointers by default
(because of the overhead, I would imagine).

   I haven't used Microsoft or other MS-DOS implementations, but I suspect
they involve similar design compromises.

   (Apologies for the Intel-specific followup, but it might save someone
some aggravation.)
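
   For what it's worth, here's a rough sketch of how the above plays out in
practice under Turbo C 2.0 (large model, farmalloc()/farfree() from
<alloc.h>).  I'm going from memory, so check your own headers for the exact
declarations; other MS-DOS compilers spell these differently.

/* Allocate and touch an object larger than 64K under Turbo C 2.0.
   farmalloc() takes an unsigned long size and returns a far pointer;
   the cast to huge is what buys normalized, segment-crossing arithmetic. */
#include <stdio.h>
#include <alloc.h>

int main(void)
{
    unsigned long n = 100000L;                 /* > 64K, so malloc() is out */
    char huge *p = (char huge *) farmalloc(n);

    if (p == NULL) {
        fprintf(stderr, "farmalloc failed\n");
        return 1;
    }

    /* The huge pointer crosses the 64K boundary correctly here; a plain
       far pointer would wrap the offset and leave the segment untouched. */
    p[70000L] = 'x';

    farfree((void far *) p);
    return 0;
}

   Note the explicit huge in the declaration of p: without it you get a far
pointer even in the huge model, and the subscript above would silently wrap.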