Path: utzoo!utgpu!watmath!clyde!att!rutgers!mit-eddie!uw-beaver!tektronix!sequent!mntgfx!dclemans.falcon
From: dclemans.falcon@mntgfx.mentor.com (Dave Clemans)
Newsgroups: comp.sys.apollo
Subject: Re: more SR10 questions
Message-ID: <1988Dec2.134845.19607@mntgfx.mentor.com>
Date: 2 Dec 88 21:48:44 GMT
References: <152@nrl-cmf.UUCP>
Organization: Mentor Graphics Corporation, Beaverton Oregon
Lines: 31

From article <152@nrl-cmf.UUCP>, by wicinski@nrl-cmf.UUCP (Tim Wicinski):
> Will they ever fix their compiler (re NFS) or will we be forced to abandon
> them for other vendors?
> 

Presumably you are talking about the ability to run Apollo binaries stored
on non-Apollo disks via NFS (or something similar).

The problem is not the compiler; the "problem" is the high degree of intelligence
in the program loader.  Unlike a typical Unix system, the Apollo loader
demand-pages the program directly from the disk on the remote node.  Since the
system is unable to do virtual memory paging over NFS, the program can't be loaded.
Other problems involve file typing (something that doesn't exist in NFS).

The only way I can think of for this ability to be implemented is:

    if, while going through directories looking for a program to execute,
    you cross an NFS boundary:

        don't check the Apollo file type; just assume that it is a COFF format file

        create a temporary file on the local node; copy the file from the remote
        node to the local temporary file

        then execute the program from the temporary file, arranging to delete
        the file when the program exits.

This would let you execute programs from NFS disks, though at a performance cost
proportional to the size of the program.
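
For what it's worth, here is a rough C sketch of that scheme.  It is only
an illustration, not Apollo's actual loader code: the helper name, the
temporary file location, and the use of POSIX-style calls like mkstemp()
and execv() are my assumptions.  The caller is assumed to have already
decided that remote_path crosses an NFS boundary.

    /* Sketch only: copy a binary on an NFS disk to a local temporary
     * file, execute the copy, and delete it when the program exits.
     * Error handling is kept minimal. */
    #include <stdlib.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <sys/wait.h>

    int run_via_local_copy(const char *remote_path, char *const argv[])
    {
        char tmpl[] = "/tmp/nfsexecXXXXXX";
        char buf[8192];
        ssize_t n;
        int in, out, status;
        pid_t pid;

        if ((in = open(remote_path, O_RDONLY)) < 0)
            return -1;
        if ((out = mkstemp(tmpl)) < 0) {    /* local temporary file */
            close(in);
            return -1;
        }

        /* Copy the whole image; this is the cost proportional to
         * program size mentioned above. */
        while ((n = read(in, buf, sizeof buf)) > 0)
            (void) write(out, buf, n);
        close(in);
        fchmod(out, 0700);                  /* make the copy executable */
        close(out);

        if ((pid = fork()) == 0) {
            execv(tmpl, argv);              /* run the local copy */
            _exit(127);                     /* exec failed */
        }
        waitpid(pid, &status, 0);
        unlink(tmpl);                       /* delete on exit */
        return status;
    }

Note that the unlink() has to wait until the execv() has happened (here,
until the child exits), since the path must still resolve when the
program is started.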

dgc