Path: utzoo!attcan!uunet!hsi!wright
From: wright@hsi.UUCP (Gary Wright)
Newsgroups: comp.sw.components
Subject: Re: Garbage Collection & ADTs
Message-ID: <606@hsi86.hsi.UUCP>
Date: 27 Sep 89 14:44:13 GMT
References: <604@hsi86.hsi.UUCP> <6591@hubcap.clemson.edu>
Reply-To: wright@hsi.com (Gary Wright)
Organization: Health Systems Intl., New Haven, CT.
Lines: 134

In article <6591@hubcap.clemson.edu> billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu writes:
>From wright@hsi.UUCP (Gary Wright):
>> Bertrand Meyer claims that programmer-controlled deallocation "is
>> unacceptable for two reasons: security and complication of program
>> writing."  By security, Meyer means that programmers [...] will
>> make mistakes and will dispose of objects that still have active
>> references.
>
> Unless they leave the task to an ADT instead.

We all agree that writing an ADT that correctly manages its own
storage is a difficult task.  One of the benefits gained by using
ADTs is reuse: the user of an ADT does not have to re-create
solutions to problems that have been solved once and for all by the
ADT provider.  Why not use the same reasoning to state that GC solves
the problem once and for all, so that each ADT designer does not have
to re-create solutions for memory management?

>> Complication refers to the fact that simply disposing
>> of an object is not sufficient; all internal objects must also
>> be disposed.  Meyer calls this the "recursive dispose problem":
>>
>>     This means that a specific release procedure must be
>>     written for any type describing objects that may refer
>>     to other objects.  The result will be a set of mutually
>>     recursive procedures of great complication.
>
> Not true.  The user supplies a Destroy procedure for whatever
> is being stored.  The ADT, in the course of destroying itself,
> will call upon the user's Destroy procedure to handle the user's
> data type.
> The writer of the Destroy procedure need only consider
> destroying his ADT, since the user's Destroy procedure can be relied
> upon to destroy all lower levels.  It's really rather simple.

So you agree that there is a recursive dispose problem, but you
believe that it isn't that complicated?  Writing these dispose
procedures is straightforward but tedious and *boring*.  This is
exactly the type of situation that leads to errors.

>> Instead of the applications programmer worrying about storage, the ADT
>> designer must worry about it.
>
> Precisely.  As an ADT designer, I consider the task of storage
> management to be among the least of my problems.  The difficult
> part is providing all the concurrency-related hard guarantees,
> which languages like Eiffel manage to avoid by not providing
> multitasking in the first place.

Let's stick to GC and leave concurrency for another day.  Granted,
this is an important issue and will become more important, but we
should keep this discussion focused.  Also, I will admit to not being
as versed in this area as I would like to be.

>> My hunch is that the distinction between an ADT designer and an
>> applications programmer is clear for objects like linked lists,
>> stacks, etc., but that it is not so clear the farther away from the
>> basic data structures you get.
>
> The distinction is clear: the ADT specification separates the
> application programmer from the ADT implementor, regardless of
> whether you perceive the structure as "basic" or not.

You seem to be implying that there are only two layers to any given
program: the application layer and the ADTs used by the application.
In general, there can be any number of layers.  If you look at a
language like Eiffel or Smalltalk, all components of a program are
classes (ADTs).  There is no language distinction between how a stack
is constructed and how any other component of the program is
constructed.
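To make the scheme under discussion concrete, here is a minimal C
sketch of a container that destroys itself by calling a user-supplied
Destroy procedure on each element.  All names here are hypothetical
illustrations, not taken from any of the languages being debated;
error checking is omitted for brevity.

```c
#include <stdlib.h>

typedef struct Node {
    void *data;
    struct Node *next;
} Node;

typedef struct {
    Node *head;
    void (*destroy_elem)(void *);  /* the user-supplied Destroy procedure */
} List;

List *list_create(void (*destroy_elem)(void *))
{
    List *l = malloc(sizeof *l);
    l->head = NULL;
    l->destroy_elem = destroy_elem;
    return l;
}

void list_push(List *l, void *data)
{
    Node *n = malloc(sizeof *n);
    n->data = data;
    n->next = l->head;
    l->head = n;
}

/* The list destroys its own nodes; each element is handed back to the
   user's Destroy procedure.  Taking a void* lets list_destroy itself
   serve as the Destroy procedure of an enclosing list, which is
   exactly the mutually recursive dispose chain Meyer describes. */
void list_destroy(void *lp)
{
    List *l = lp;
    Node *n = l->head;
    while (n) {
        Node *next = n->next;
        l->destroy_elem(n->data);
        free(n);
        n = next;
    }
    free(l);
}
```

A list of malloc'd strings would be created with `list_create(free)`;
a list of such lists with `list_create(list_destroy)`.  Destroying
the outer list then disposes of every level below it, which is the
simplicity being claimed -- and each new type still needs its own
correct Destroy procedure written, which is the tedium being
objected to.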
The problems that an ADT designer has to deal with are not unique to
the ADTs at the bottom of the hierarchy.

> The original point was that not all users can tolerate GC
> (real-time users in particular);

I have stated that GC may not be appropriate in certain cases
(real-time).  Others have said that there is work being done on
real-time GC.  Are you saying that some users can "tolerate" GC?  If
so, in support of what type of users have your arguments against GC
been made?  If you have been arguing against GC for real-time users
exclusively, that has not been clear.  If you have in mind another
set of users who cannot tolerate GC, please be specific.

> on the other hand, all GC users
> can use managed components with no problems.

This is your opinion.  I have described some of the problems related
to managed components regarding the complexity of constructing ADTs,
which means the complexity of the entire program if you consider a
program to simply be a collection of related ADTs.  Others have
described problems with general graph structures.

An area that hasn't been discussed here is reusing components via
inheritance.  I really haven't come to any conclusions regarding the
interaction of memory management and inheritance.  Anybody else have
some ideas?

Another area that concerns me that hasn't been discussed is the
interaction between exceptions and memory management.  My
understanding is that when an exception is raised, execution may
resume after any number of "scopes" have been exited.  Will the
objects in those scopes be guaranteed to be destroyed?  If exceptions
need to be handled in the scope in which they were raised simply due
in part to this memory leakage, has not the lack of GC caused the
program to become more complex?  And if an exception is always
handled in the scope in which it is raised, why bother with using the
exception mechanism?  (I am referring to programmer-defined
exceptions such as stack underflow or overflow, as opposed to
hardware exceptions, etc.,
which should probably be handled differently.)

> Therefore, if
> a component is to be designed for the widest possible audience,
> that component must manage its own storage.  If components are
> used which meet the highest possible standards, then we don't
> have to worry about whether our components will stop working
> when we do maintenance (which might introduce multitasking,
> real-time operation, etc.) on our application; using GC introduces
> such weaknesses, in addition to giving an *avoidable* run-time cost.

Perhaps GC does introduce problems with reuse in regard to
multitasking and real-time operations, perhaps not; I'm not sure.  I
am anxious to see how multitasking will be handled in Eiffel.  ISE
has indicated that they are working on incorporating it into the OOP
framework.

The run-time cost of GC *may* be avoided in certain cases by using
ADT-controlled memory management.  The point is that there *are*
tradeoffs.  Not using GC "might introduce" reuse problems also.  What
is the cost associated with "avoiding" GC?
-- 
Gary Wright                                     ...!uunet!hsi!wright
Health Systems International                    wright@hsi.com