Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Path: utzoo!utgpu!water!watmath!clyde!rutgers!cmcl2!nrl-cmf!ames!sdcsvax!ucsdhub!udiego!stokes
From: stokes@udiego.UUCP
Newsgroups: comp.sys.dec,sdnet.general
Subject: DEC e-net products
Message-ID: <707@udiego.UUCP>
Date: Thu, 3-Dec-87 19:54:31 EST
Article-I.D.: udiego.707
Posted: Thu Dec  3 19:54:31 1987
Date-Received: Sun, 6-Dec-87 20:37:39 EST
Organization: Univ. of San Diego, San Diego CA
Lines: 35
Keywords: Anybody have DELNIs and/or DEMPRs
Xref: utgpu comp.sys.dec:394 junk:6615


USD is going to be Ethernetting our VMS and UNIX
systems. We're planning on using DELNIs and DEMPRs
in various places. What I need is the following
information (to see how deep a hole I'm in):

	1 Physical size: DEC says these devices are rack sized
	  (we would place them on shelves in a rack), but the
	  local people have no idea just how big they are. Any
	  clues?

	2 Heat: These devices are supposed to have their own cooling
	  fans, but just how bad is the heat buildup? Will placement
	  in a rack exacerbate the heat?

	3 Any reliability problems? Should I leave plenty of room
	  for 'future maintenance'?

	4 Any gotchas? I.e., six months down the line, am I going
	  to have to shell out extra bucks for 'maintenance kits',
	  etc.?

Please forgive my paranoia, bad spelling, and dumb questions. I'm
just a poor system manager trying to figure out how to fit a VAX 8550, a
Pyramid 9805, and comm gear into one tiny room while keeping an old
780 running until everything is up and debugged.




-- 
David Stokes                   "USD where the future is tomorrow
Academic Computing Department   and today is slightly behind schedule"
University of San Diego
(619) 260-4810 or {sdcsvax, sdsu, ucsdhub}!udiego!stokes