Path: utzoo!utgpu!attcan!uunet!tank!ncar!mailrus!uflorida!novavax!proxftl!bill
From: bill@proxftl.UUCP (T. William Wells)
Newsgroups: comp.lang.c
Subject: Re: memory allocation
Message-ID: <806@proxftl.UUCP>
Date: 22 Sep 88 01:12:12 GMT
References: <1262@micomvax.UUCP> <733@proxftl.UUCP> <33184@cca.CCA.COM> <778@proxftl.UUCP> <33422@cca.CCA.COM> <784@proxftl.UUCP> <33451@cca.CCA.COM>
Reply-To: bill@proxftl.UUCP (T. William Wells)
Distribution: comp.lang.c,comp.os.misc
Organization: Proximity Technology, Ft. Lauderdale
Lines: 52
Summary:
Expires:
Sender:
Followup-To:
Keywords:

In article <33451@cca.CCA.COM> g-rh@XAIT.Xerox.COM (Richard Harter) writes:
: In article <784@proxftl.UUCP> bill@proxftl.UUCP (T. William Wells) writes:
:
: ] [comments about an allocator I wrote for a kernel.]
:
:       Good point.  My concern has been with allocators for large
: systems rather than for kernels.  I don't begrudge the time or code
: spent on making an allocator hot; in many applications allocators get
: a heavy workout and the payback for improving them is large.  A slow
: allocator can make you look very bad.

Yeah.  This was a message-passing kernel; you couldn't do *anything*
without passing a message.  And guess what the first step in sending a
message was!

:       I can't say I'm much enthused about the notion of bending the
: code structure to fit the allocator characteristics.

Depends on how and when you do it.  An example of where this worked
nicely is where I was building a directed graph whose nodes pointed to
another directed graph.  I had a choice of putting a field in the
pointing nodes of the first directed graph or in the pointed-to nodes
of the second.

My reasoning went as follows: it doesn't matter much for the algorithm
where I put the field, so there is nothing wrong with putting it in
either place.  If the underlying allocator is malloc, then it doesn't
matter where.
If the underlying allocator is my special allocator, then I can choose
one or the other depending on which fits the allocator better.

Of course, I also had to contend with the matter of differing
processors, so I then asked: on what kinds of processors would this
optimization matter?  The answer to that gave me the sizes of the
fields, and the nature of the allocator let me then decide where to
put the field.  And, BTW, it went in the first graph.

: ] [comment on using a buddy system with paged systems]
:
:       Now this is a very interesting point, and one which I hadn't
: thought about.  There is a clear profit to be gained in taking page
: sizes into account -- page faults are a lovely way to take a
: performance hit.  In a best-fit (or EF+WF) allocator one should have
: criteria for not straddling page boundaries.  I'm going to have to
: think about that one.

Studying the performance of a buddy-system allocator in a paged
system might be an interesting research topic.  I know that it would
beat hell out of some of the theses I reviewed when I was doing
interviews.  Any takers?  :-)

---
Bill
novavax!proxftl!bill