Path: utzoo!attcan!uunet!ubvax!ames!lamaster
From: lamaster@ames.arc.nasa.gov (Hugh LaMaster)
Newsgroups: comp.arch
Subject: Re: Software Distribution
Message-ID: <15529@ames.arc.nasa.gov>
Date: 26 Sep 88 16:42:58 GMT
References: <5655@june.cs.washington.edu> <340@istop.ist.CO.UK> <15440@ames.arc.nasa.gov> <944@l.cc.purdue.edu>
Reply-To: lamaster@ames.arc.nasa.gov.UUCP (Hugh LaMaster)
Organization: NASA Ames Research Center, Moffett Field, Calif.
Lines: 74

In article <944@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
>An intermediate language should exist, which should include everything that
>these machines and others can do.  But we should realize that many, if not
>most, machine operations do not exist on many machines.
>
>I agree that vector microprocessors will be fairly cheap.  But which type of
>architecture?  I am familiar with several of them.  I have used the CYBER 205,
>and it has useful instructions which are not vectorizable at all, or vectoriz-
>able only with difficulty and at considerable cost, on vector register
>machines.  Or will we be using massive parallelism?  Try procedures which :
>machines.  An IL should be highly expressible and with an easy-to-use (from
>the human standpoint) syntax.  But if it is good, many of its features will
>be directly usable only on few machines.  There seems to be more useful
>constructs, Hugh, than are in your philosophy.

I fear that I may have been misunderstood.  I do not think that a portable
IL (PIL) can be developed which can efficiently use all the features of a
given architecture, especially new, poorly understood architectures that
involve massive parallelism with limited communication between processors.
My point is that portable ILs are already in use, both explicitly and
implicitly, that "vectors" could simply be included in a new IL, and that
it would be worth doing.

No current IL can optimally mediate between the source language and a
particular architecture, and yet they are useful, because they do a good
enough job in many circumstances and because they make it easier to port
compilers to new architectures - especially lesser-used compilers that
might otherwise never become available at all.  Many people are using gcc
not because it produces optimal code for the VAX, but because the code it
produces is good enough, and through it some compilers have become
available that would not be available otherwise.

To carry the question about vectors further: it should not be necessary
to know whether the machine has vector registers or a memory-to-memory
architecture.  The IL would simply represent vector operations as
memory-to-memory operations, leaving register assignment to the code
generator.  It is true that some architectures would not be used well by
such a scheme, but my guess is that you could get 30% of the performance
of a machine-specific compiler this way, and that would be good enough in
many cases - and a significant improvement over the current situation,
where portable compilers get a 0% improvement over scalar code.  This is
not an idealistic pursuit of the ideal IL, but a practical approach to
solving the time/time tradeoff (how much programmer time can I afford to
spend to get how much speedup of my program?) in the near-term world of
vector-capable microprocessors.
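To make the memory-to-memory representation concrete, here is a small C
sketch of one way a back end for a vector-register machine might consume
such an IL operation, strip-mining it into register-length pieces.
Everything in it (the node layout, the names, the 64-element register
length) is invented for illustration; it is not any existing IL, and the
"registers" are simulated with arrays so the sketch actually runs.

/* Hypothetical memory-to-memory vector IL node and a strip-mining
 * "code generator" for a vector-register machine.  All names and the
 * 64-element register length are illustrative assumptions only. */
#include <stdio.h>

#define VLEN 64                 /* assumed hardware vector register length */

typedef enum { VOP_ADD, VOP_MUL } vopcode;

typedef struct {                /* IL: dst[i] = src1[i] op src2[i], 0 <= i < n */
    vopcode op;
    double *dst, *src1, *src2;
    long    n;                  /* vector length, unrelated to register size */
} vop;

/* Back end for a vector-register machine: strip-mine the memory-to-memory
 * op into VLEN-sized pieces, "loading" each piece into simulated vector
 * registers and "storing" the result.  A memory-to-memory machine could
 * instead issue the whole operation at once. */
static void lower_vop(const vop *v)
{
    double v1[VLEN], v2[VLEN], vr[VLEN];
    for (long base = 0; base < v->n; base += VLEN) {
        long len = (v->n - base < VLEN) ? v->n - base : VLEN;
        for (long i = 0; i < len; i++) {           /* vector loads */
            v1[i] = v->src1[base + i];
            v2[i] = v->src2[base + i];
        }
        for (long i = 0; i < len; i++)             /* vector op */
            vr[i] = (v->op == VOP_ADD) ? v1[i] + v2[i] : v1[i] * v2[i];
        for (long i = 0; i < len; i++)             /* vector store */
            v->dst[base + i] = vr[i];
    }
}

int main(void)
{
    double a[200], b[200], c[200];
    for (int i = 0; i < 200; i++) { a[i] = i; b[i] = 2 * i; }
    vop add = { VOP_ADD, c, a, b, 200 };
    lower_vop(&add);
    printf("c[199] = %g\n", c[199]);   /* 199 + 398 = 597 */
    return 0;
}

The point of the sketch is that the IL node itself never mentions
registers; only the back end knows whether the target has them.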
BTW, the old CDC/ETA compiler did not detect that finding the maximal
element of a vector and returning its index could be done as one
operation (a common operation - the instruction is there to support it);
it used two vector operations instead, taking twice as long.  It is not
trivial to make optimal use of an architecture even when you don't have
a PIL to worry about; this is, of course, why the "RISC" word appeared
somewhere in this discussion.  Since one of the tenets of the RISC
philosophy is to exclude instructions which a compiler cannot easily
generate, RISC architectures tend to make PILs more practicable.

--
Hugh LaMaster, m/s 233-9,   UUCP ames!lamaster
NASA Ames Research Center   ARPA lamaster@ames.arc.nasa.gov
Moffett Field, CA 94035     Phone: (415)694-6117