Megalextoria
Retro computing and gaming, sci-fi books, tv and movies and other geeky stuff.

Re: The ICL 2900 [message #338880 is a reply to message #336265] Tue, 07 March 2017 16:18
Alan Bowler
On 2017-01-24 5:19 PM, Bob Eager wrote:
> On Tue, 24 Jan 2017 14:45:25 -0500, Rich Alderson wrote:
>
>> Anne & Lynn Wheeler <lynn@garlic.com> writes:
>>
>>> Bob Eager <news0006@eager.cx> writes:
>>
>>>> Yes, we got a VAXcluster (the base CPU power of an 8800, running SMP,
>>>> was a bit under the IBM offering, so they chucked in a couple of
>>>> 8200s). In fact, since it was on VMS 4.6, it couldn't do SMP anyway,
>>>> and didn't for years.
>>
>>> old reference to announce of "real" VMS SMP ("DEC Stalks Big Game with
>>> Symmetrical VMS")
>>> http://www.garlic.com/~lynn/2007.html#email880329
>>
>> Which was greeted with hoots of laughter from the (sadly moribund)
>> 36-bit DEC customers, who had SMP on the PDP-10 architecture nearly a
>> decade earlier. :-(
>
> We did too. Our 2900 had it on our homebrew operating system. It worked
> well, once I'd worked out how to do the inter-CPU control (I had to
> reverse engineer the microcode to get a register spec). I didn't write
> the code, but there were model-specific issues.

Honeywell GCOS had SMP support (up to 4 processors) in the early '70s,
and by '73 had cluster support (max 4 systems, each with 4 CPUs).
It may have had cluster support before then, but I'm only familiar
with the file system from '73 on. File system changes a few years ago
took away cluster support in favour of more CPUs and processes on the
same system.
Re: The ICL 2900 [message #338883 is a reply to message #338880] Tue, 07 March 2017 18:28
Anne & Lynn Wheeler
Alan Bowler <atbowler@thinkage.ca> writes:
> Honeywell GCOS had SMP support (up to 4 processors) in the early '70s,
> and by '73 had cluster support (max 4 systems, each with 4 CPUs).
> It may have had cluster support before then, but I'm only familiar
> with the file system from '73 on. File system changes a few years ago
> took away cluster support in favour of more CPUs and processes on the
> same system.

re:
http://www.garlic.com/~lynn/2017.html#58 The ICL 2900

360/67 original design supported 4-way SMP in the 60s
http://bitsavers.trailing-edge.com/pdf/ibm/360/funcChar/A27-2719-0_360-67_funcChar.pdf
http://bitsavers.trailing-edge.com/pdf/ibm/360/funcChar/GA27-2719-2_360-67_funcChar.pdf

however, I think that all shipped to customers were only 2-way except
for one 3-way that was shipped for the USAF MOL project ... with
enhancements so that the configuration could be changed under software
control. Standard SMP had a configuration box and all the switches could
be "sensed" from the control registers (on the 3-way shipped for MOL,
the configuration could also be changed by changing values in the
control registers).

As I've mentioned before, when Charlie was at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

working on multiprocessor fine-grain locking for CP67, he invented the
compare-and-swap instruction. The initial effort to get it included in
370 architecture was rebuffed because the POK favorite-son operating
system people said that the test-and-set instruction (the
multiprocessor locking primitive from 360) was sufficient for
multiprocessor support. They were talking about 360/65MP MVT, which had
a single kernel spin-lock ... so hardware features had little effect.
https://en.wikipedia.org/wiki/IBM_System/360_Model_65

360/67 shared memory and every processor could address all channels.
360/65 was only 2-way and only shared memory. The dedicated channels
for each processor had to be configured with "multi-tail" control units
so both processors could do I/O to the same controller/device.

The 370 architecture owners then said that to get compare&swap included
in 370 architecture, uses other than kernel locking were needed. Thus
the uses for multithreaded applications (like large DBMS) were invented
.... examples still appear in the IBM mainframe Principles of
Operation. past posts
http://www.garlic.com/~lynn/subtopic.html#smp
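A rough modern illustration of that non-kernel-lock use (a sketch in
C11 atomics with my own naming, not the actual Principles of Operation
appendix code): compare-and-swap lets a multithreaded application
update a shared word directly, retrying on interference, with no kernel
lock involved.

    #include <stdatomic.h>

    static _Atomic long counter;

    void add_to_counter(long n)
    {
        long old = atomic_load(&counter);
        /* retry until no other processor updated the word in between;
           on failure 'old' is refreshed with the current value */
        while (!atomic_compare_exchange_weak(&counter, &old, old + n))
            ;
    }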

In the 2nd half of the 70s some of us were working on 16-way 370 ...
initially everybody thought it was great ... and we even co-opted some
of the 3033 processor engineers ... it was a lot more interesting than
remapping 168-3 logic to 20% faster chips. Then somebody told the head
of POK that it might be decades before the POK favorite-son operating
system people had effective 16-way support. 3033smp 2-way 1978
https://www-03.ibm.com/ibm/history/exhibits/3033/3033_CH01.html

1982 3081 2-way
1983 3084 4-way
1985 3090 6-way
1990 es9000 6-way
2000 z900 16-way
2003 z990 32-way
2006 z9 54-way
2008 z10 64-way
2010 z196 80-way
2012 ec12 101-way
2015 z13 140-way

The head of POK then invited some of us to never visit POK again ...
and told the 3033 processor engineers to stop being distracted (with
more interesting projects). It was then almost 25 years before a 16-way
shipped (2000).

one of the things done to MVS for the support of 3084 ... was a
reorganization of storage management for cache-line alignment and
multiples of cache-line size .... there started to be a lot of cache
thrashing (in a 2-way there are cache invalidation signals from one
other processor; in a 4-way, cache invalidation signals from three
other processors). It turns out there were a huge number of kernel
storage working areas that would be process specific ... but shared a
cache line with working storage for a process running on another
processor. The claim was the storage management cache-line restructure
for 3084 got something between five and ten percent total throughput
improvement.
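In modern terms that is "false sharing", and the fix is the same one
MVS applied: align and pad per-processor data to cache-line boundaries.
A minimal sketch (assuming 64-byte lines and GCC-style attributes, my
own names):

    #define CACHE_LINE 64

    /* may share a line with its neighbor: cross-cache invalidation
       traffic on every update from either processor */
    struct per_cpu_bad  { long count; };

    /* padded and aligned so each processor's work area has a cache
       line to itself: no false sharing */
    struct per_cpu_good {
        long count;
        char pad[CACHE_LINE - sizeof(long)];
    } __attribute__((aligned(CACHE_LINE)));

    struct per_cpu_good work_area[4];   /* one per processor */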

footnote: z196 claims 50BIPS (625MIPS/processor across its 80
processors); it also says over half the per-processor speedup compared
to z10 is the introduction of out-of-order execution and branch
prediction features (that have been in other platforms for decades);
EC12 claims 75BIPS (743MIPS/processor); and z13 only claims 30% faster
than EC12 (with 40% more processors, implying about 100BIPS and
714MIPS/processor).

when my wife must have been in kindergarten, she was in the gburg JES
group and was con'ed into going to POK to be in charge of
loosely-coupled (mainframe for cluster) architecture. While there she
did peer-coupled architecture
http://www.garlic.com/~lynn/submain.html#shareddata

but didn't remain long because of little uptake (except for IMS
hot-standby) until SYSPLEX and Parallel SYSPLEX (long after she was
gone), and she was in constant battles with the communication group
trying to force her into using SNA/VTAM for loosely-coupled operation.
https://en.wikipedia.org/wiki/IBM_Parallel_Sysplex

Note that starting with 3090, the company introduced a "hardware"
subset version of virtual machines ... PR/SM & LPAR ... basically
partitioning the real machine into multiple (potentially
loosely-coupled) subset machines. That is the way nearly all mainframes
run today .... even with multiple real machines in a loosely-coupled
(parallel sysplex) configuration, each of the real machines may be
further subdivided into multiple "logical machines" (or LPARs, logical
partitions).
https://en.wikipedia.org/wiki/Logical_partition

last project we did at IBM was HA/CMP product
http://www.garlic.com/~lynn/subtopic.html#hacmp

and was working on cluster scaleup for technical/scientific (with
national labs), filesystems, commercial (RDBMS with open RDBMS
vendors), etc. .... old post referencing the meeting in Ellison's
conference room Jan1992 on RDBMS cluster scaleup
http://www.garlic.com/~lynn/95.html#13

I've mentioned doing a global lock manager supporting VAX/Cluster
semantics to make porting easier for RDBMS vendors that had both
VAX/Cluster and Unix in the same RDBMS source base (got input from the
RDBMS vendors about how VAX/Cluster could have done it better)
http://www.garlic.com/~lynn/2017.html#58 The ICL 2900

--
virtualization experience starting Jan1968, online at home since Mar1970
Re: The ICL 2900 [message #339083 is a reply to message #338883] Sat, 11 March 2017 12:59
Jon Elson
Anne & Lynn Wheeler wrote:


>
> 360/67 original design supported 4-way SMP in the 60s
> http://bitsavers.trailing-edge.com/pdf/ibm/360/funcChar/A27-2719-0_360-67_funcChar.pdf
> http://bitsavers.trailing-edge.com/pdf/ibm/360/funcChar/GA27-2719-2_360-67_funcChar.pdf
>
> however, I think that all shipped to customers were only 2-way except
> for one 3-way that was shipped for the USAF MOL project ... with
> enhancements that configuration could be changed under software
> control. Standard SMP had configuration box and all the switches could
> be "sensed" from the control registers (the 3-way shipped for MOL, the
> configuration could be changed by changing values in the control
> registers).
>
The problem with the 360/65 and /67 multiprocessor systems was that the
memory didn't have enough bandwidth, so adding a CPU did not give you a
2X boost: somewhere between 1.5x and 1.75x. I can only imagine it had
to get worse as you added more than a 2nd processor.

Jon
Re: The ICL 2900 [message #339095 is a reply to message #339083] Sat, 11 March 2017 18:44
Anne & Lynn Wheeler
Jon Elson <elson@pico-systems.com> writes:
> The problem with the 360/65 and /67 multiprocessor systems was that the
> memory didn't have enough bandwidth, so adding a CPU did not give you a
> 2X boost: somewhere between 1.5x and 1.75x. I can only imagine it had
> to get worse as you added more than a 2nd processor.

re:
http://www.garlic.com/~lynn/2017c.html#3 The ICL 2900
http://www.garlic.com/~lynn/2017c.html#30 The ICL 2900

The 360/67 multiprocessor partially mitigated it ... allowing channel
i/o and processors to have independent paths to memory.

On the 360/67 (single processor), 360/65, and 360/65MP, the processors,
i/o, etc. all shared a common path to all memory.

you can see some of that in the 360/67 functional specifications giving
instruction timings ... every instruction in a multiprocessor
configuration having slightly higher memory access time.

a 360/67 "half-duplex" (multiprocessor memory, channel i/o etc ... but
only one processor) would have slightly slower raw MIP rate compared to
a straight simplex 360/67 (or 360/65) .... but under heavy i/o load
could have higher effective throughput .... because both I/O and
processor have paths to each memory storage units.

the actual timing gets more complex, as the distance from a specific
processor to a specific storage unit also contributes to latency

much longer discussion on pg28: in a two-processor system, each storage
unit can have four independent paths (one for each processor and one
for each processor's i/o controller)
http://bitsavers.trailing-edge.com/pdf/ibm/360/funcChar/A27-2719-0_360-67_funcChar.pdf

there is a little bit longer discussion, pg29-pg39, in
http://bitsavers.trailing-edge.com/pdf/ibm/360/funcChar/GA27-2719-2_360-67_funcChar.pdf

if all concurrent instruction & I/O references were to the address
range of a single storage unit .... then throughput would be limited by
access to that single unit .... however there could be concurrent
access to four different storage units (a multiprocessor configuration
could have up to eight 2365-12 storage units for 2mbytes total) and an
effective concurrent memory transfer rate of up to nearly four times
that of a "simplex" 360/67 (or 360/65).

--
virtualization experience starting Jan1968, online at home since Mar1970
Re: The ICL 2900 [message #339127 is a reply to message #339083] Sun, 12 March 2017 10:04
jmfbahciv
Jon Elson wrote:
> Anne & Lynn Wheeler wrote:
>
>
>>
>> 360/67 original design supported 4-way SMP in the 60s
>> http://bitsavers.trailing-
> edge.com/pdf/ibm/360/funcChar/A27-2719-0_360-67_funcChar.pdf
>> http://bitsavers.trailing-
> edge.com/pdf/ibm/360/funcChar/GA27-2719-2_360-67_funcChar.pd f
>>
>> however, I think that all shipped to customers were only 2-way except
>> for one 3-way that was shipped for the USAF MOL project ... with
>> enhancements that configuration could be changed under software
>> control. Standard SMP had configuration box and all the switches could
>> be "sensed" from the control registers (the 3-way shipped for MOL, the
>> configuration could be changed by changing values in the control
>> registers).
>>
> The problem with the 360/65 and /67 multiprocessor systems was that the
> memory didn't have enough bandwidth, so adding a CPU did not give you a
> 2X boost: somewhere between 1.5x and 1.75x. I can only imagine it had to
> get worse as you added more than a 2nd processor.

That didn't happen with TOPS-10. A second processor only added .8.
All processors after that added 100%.

/BAH
Re: The ICL 2900 [message #339131 is a reply to message #339127] Sun, 12 March 2017 12:55
Quadibloc
On Sunday, March 12, 2017 at 8:05:20 AM UTC-6, jmfbahciv wrote:
> Jon Elson wrote:

>> The problem with the 360/65 and /67 multiprocessor systems was that the
>> memory didn't have enough bandwidth, so adding a CPU did not give you a
>> 2X boost: somewhere between 1.5x and 1.75x. I can only imagine it had to
>> get worse as you added more than a 2nd processor.

> That didn't happen with TOPS-10. A second processor only added .8.
> All processors after that added 100%.

That implies that each processor had its own memory. There would be a strict
limit to how many processors one could add if all of them are connected to
the same memory bus - and performance degradation would be expected, not an
indication of bad design.

Shared memory for a multiple-processor complex makes sense when a high
level of inter-process communication is required, or when the memory
has unused bandwidth in a single-processor configuration. And then
there's the situation that has led to today's multi-core processors.

John Savard
Re: The ICL 2900 [message #339135 is a reply to message #339083] Sun, 12 March 2017 14:37
Anne & Lynn Wheeler
Jon Elson <elson@pico-systems.com> writes:
> The problem with the 360/65 and /67 multiprocessor systems was that the
> memory didn't have enough bandwidth, so adding a CPU did not give you
> a 2X boost: somewhere between 1.5x and 1.75x. I can only imagine it had
> to get worse as you added more than a 2nd processor.

re:
http://www.garlic.com/~lynn/2017c.html#3 The ICL 2900
http://www.garlic.com/~lynn/2017c.html#30 The ICL 2900
http://www.garlic.com/~lynn/2017c.html#44 The ICL 2900

at the univ. with a 360/67 (single processor) with 768kbytes (three
2365-2 memory banks ... only one memory interface shared by processor &
i/o) .... I played with (virtual machine) cp67 and the IBM SE played
with tss/360. We created a synthetic benchmark that simulated fortran
program edit, compile and execute. I ran it with cp67, and 35 simulated
users had better interactive response and higher throughput than
tss/360 had with 4 simulated users. Note this was before I did
significant performance enhancements to cp67: greatly cut kernel
pathlengths, implemented ordered seek and rotational position ordering
for I/O (rather than pure FIFO), and redid the page replacement and
scheduling algorithms.

old posts about SHARE presentation on some of the (later) pathlength
changes I made to CP67 at the univ.
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
http://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?
http://www.garlic.com/~lynn/97.html#28 IA64 Self Virtualizable?
http://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
http://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)

Later tss/360 would claim that it was the only operating system that
would get 3.8 times more throughput on a 2-processor than on a single
processor. With a little obfuscation and misdirection this could almost
imply it was because the 2-processor had effectively four times the
memory bandwidth of "simplex" (i.e. each memory bank had four separate
paths, one for each processor and one for each set of channel
controllers). However, it really was that tss/360 was so huge and
bloated ... that on a single processor (with only 1mbyte memory),
TSS/360 was still heavily page thrashing. Going to 2-processor also
went to 2mbytes of real memory ... which reduced the page thrashing, so
tss/360 got higher effective throughput (than single processor), but
still neither processor ran at 100% cpu utilization.

Part of the reason that cp67 ran with so much more actual throughput
than tss/360 ... is that it regularly ran at 100% cpu utilization ...
even with all my enhancements ... significantly reducing kernel cpu
pathlength and utilization .... helped by improving I/O efficiency and
the paging and scheduling algorithms. This is also why a "half-duplex"
360/67 could outperform a simplex ... since it could get nearly twice
the memory throughput (with each memory bank having an independent path
for processor and I/O) and was running at 100% processor busy.

Other trivia: the 360/67 multiprocessor had all channels addressable by
all processors (in addition to all memory being addressable). 360/65MP
had all memory addressable by both processors (but didn't have an
independent memory bus for each processor and set of channels to each
memory bank). However, it only simulated multiprocessor i/o. Each
360/65MP processor still had dedicated I/O channels ... and simulating
a multiprocessor i/o configuration required multi-channel controllers
... which could have two channel connections ... each connected to a
dedicated channel for each processor.

some recent related posts in thread over in comp.arch
http://www.garlic.com/~lynn/2017c.html#26 Multitasking, together with OS operations
http://www.garlic.com/~lynn/2017c.html#29 Multitasking, together with OS operations

other trivia: in the move to 370 ... multiprocessor was the 360/65MP
subset (not the 360/67 flavor) ... but the machines had caches.
Standard 370 multiprocessor slowed the processor cycle down by 10% ...
to provide extra cycles for the caches to help handle cross-cache
invalidation signals ... so a basic 370 multiprocessor started out at
1.8 times a single processor.

I've mentioned before that in the initial morph of cp67 into vm370,
they simplified and dropped a bunch of cp67 (including a bunch of the
stuff that I had done as an undergraduate and included in cp67 ... and
multiprocessor support). some old email about migrating my stuff to
VM370
http://www.garlic.com/~lynn/2006v.html#email731212
http://www.garlic.com/~lynn/2006w.html#email750102
http://www.garlic.com/~lynn/2006w.html#email750430

posts about scheduling work
http://www.garlic.com/~lynn/subtopic.html#fairshare
posts about paging algorithms
http://www.garlic.com/~lynn/subtopic.html#clock

I then got sucked into doing multiprocessor support. I've mentioned
before that one of my hobbies was distributing & supporting enhanced
operating systems for internal datacenters, and one of my long-time
customers was (online, world-wide online sales&marketing support) HONE
... some past posts http://www.garlic.com/~lynn/subtopic.html#hone

In the mid-70s all the US HONE datacenters were consolidated in Palo
Alto and they developed a large disk farm with single-system cluster
support ... each disk bank having eight channel connections to eight
different systems ... a front end load-balancing logons across the
systems (possibly the largest single-system-image cluster in the world)
.... and all processors running saturated. They wanted to add a 2nd
processor to each system ... to (theoretically) double processor
capacity. I did multiprocessor support with some cache affinity
features and i/o interrupt batching .... that improved cache hit rates
... so even tho each processor was only running at .9 of a single
processor .... the improved cache hit rate more than offset the machine
cycle slowdown ... so each processor in the two-processor system was
getting a higher MIP rate (than the single-processor version, because
of the higher cache hit rate). all sorts of SMP processor posts
http://www.garlic.com/~lynn/subtopic.html#smp
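A rough modern analogue of that cache-affinity idea (a sketch using the
Linux sched_setaffinity call, not the VM370 code; helper name is my
own): pinning a worker to one CPU keeps its working set in that CPU's
cache, raising the hit rate.

    #define _GNU_SOURCE
    #include <sched.h>

    /* pin the calling thread to a single CPU so its working set
       stays warm in that CPU's cache */
    int pin_to_cpu(int cpu)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        return sched_setaffinity(0, sizeof(set), &set); /* 0 = self */
    }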

trivia: when facebook first moved into silicon valley (before buying
the old SUN campus) ... it was into a new bldg built next door to the
old HONE datacenter.

more trivia: while I could show a 2-processor 370 getting better than
twice a single processor ... at the time, MVS documentation claimed
1.3-1.5 times the throughput of a single processor. MVS was still
pretty much the OS/360 360/65MP single global kernel spin-lock, plus a
huge amount of constant SIGP overhead ... one processor constantly
signalling the other processor.

Later, for the 3081, they changed VM/SP multiprocessor support to add a
huge number of SIGPs (one processor constantly signalling the other for
trivial, unimportant reasons, drastically driving up multiprocessor
overhead, with the interrupts killing the cache hit rate, etc).
Customers moving from the previous release to the new (SIGP-intensive)
release saw at least a 10% throughput decline.

They had attempted to obfuscate the SIGP-change degradation with some
optimization of how 3270 terminal I/O was done. However, there were
some customers that were all ascii glass-teletypes ... including a very
large 3-letter gov. agency (SHARE installation code "CAD") ... where
the release transition was especially noticeable. old email
http://www.garlic.com/~lynn/2001f.html#email830420
in this post
http://www.garlic.com/~lynn/2001f.html#57 any 70's era supercomputers that ran as slow as today's supercomputers?

some old posts mentioning SHARE installation code "CAD"
http://www.garlic.com/~lynn/2012j.html#20 Operating System, what is it?
http://www.garlic.com/~lynn/2013h.html#31 I/O Optimization
http://www.garlic.com/~lynn/2013h.html#51 Search for first Web page takes detour into US
http://www.garlic.com/~lynn/2013i.html#10 EBCDIC and the P-Bit
http://www.garlic.com/~lynn/2013l.html#19 A Brief History of Cloud Computing
http://www.garlic.com/~lynn/2013m.html#69 PDCA vs. OODA
http://www.garlic.com/~lynn/2014d.html#58 The CIA's new "family jewels": Going back to Church?
http://www.garlic.com/~lynn/2014e.html#36 Semi-OT: Government snooping was Re: Is there any MF shop using AWS service?
http://www.garlic.com/~lynn/2014j.html#78 Firefox 32 supports Public Key Pinning
http://www.garlic.com/~lynn/2015c.html#39 Virtual Memory Management
http://www.garlic.com/~lynn/2015e.html#5 Remember 3277?

--
virtualization experience starting Jan1968, online at home since Mar1970
Re: The ICL 2900 [message #339188 is a reply to message #339131] Mon, 13 March 2017 09:32
jmfbahciv
Quadibloc wrote:
> On Sunday, March 12, 2017 at 8:05:20 AM UTC-6, jmfbahciv wrote:
>> Jon Elson wrote:
>
>>> The problem with the 360/65 and /67 multiprocessor systems was that the
>>> memory didn't have enough bandwidth, so adding a CPU did not give you a
>>> 2X boost: somewhere between 1.5x and 1.75x. I can only imagine it had to
>>> get worse as you added more than a 2nd processor.
>
>> That didn't happen with TOPS-10. A second processor only added .8.
>> All processors after that added 100%.
>
> That implies that each processor had its own memory. There would be a strict
> limit to how many processors one could add if all of them are connected to
> the same memory bus - and performance degradation would be expected, not an
> indication of bad design.

The memories were interleaved. One of the cancelled hardware projects
for the PDP-10 product line was multi-ported internal memory. The SMP
systems had to use the external memory boxes, leaving the internal 2060
memory useless.

>
> Shared memory for a multiple-processor complex makes sense when a high
> level of inter-process communication is required, or when the memory
> has unused bandwidth in a single-processor configuration. And then
> there's the situation that has led to today's multi-core processors.

Which don't seem to run in a true SMP manner but that's just the feeling
I get when using a multi-core system.

/BAH
Re: The ICL 2900 [message #339206 is a reply to message #339188] Mon, 13 March 2017 13:08
Anne & Lynn Wheeler
jmfbahciv <See.above@aol.com> writes:
> Which don't seem to run in a true SMP manner but that's just the feeling
> I get when using a multi-core system.

re:
http://www.garlic.com/~lynn/2017.html#59 The ICL 2900
http://www.garlic.com/~lynn/2017.html#61 The ICL 2900
http://www.garlic.com/~lynn/2017.html#74 The ICL 2900
http://www.garlic.com/~lynn/2017.html#75 The ICL 2900
http://www.garlic.com/~lynn/2017c.html#30 The ICL 2900
http://www.garlic.com/~lynn/2017c.html#44 The ICL 2900
http://www.garlic.com/~lynn/2017c.html#45 The ICL 2900

as previously mentioned, OS/360 on the 360/65MP had a global kernel
spin-lock ... so only one processor executed in the kernel at a time
(this simplified all the serialization changes to the kernel but
seriously limited the amount of parallelization). That accounted for
the push back from the POK favorite-son operating system (OS/360 MVT)
people on needing compare&swap for SMP (test&set was more than
sufficient) ... and carried forward into 370 SMP MVS ... where
throughput could be 1.2 to 1.5 times simplex (i.e. the 370 hardware
clock slowed down 10% listening for cross-cache invalidation, so
hardware was only 1.8 times to start, plus a large amount of software
SIGP signalling/interrupt overhead, and very poor parallelization from
the lack of fine-grain locking).
http://www.garlic.com/~lynn/subtopic.html#smp
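A minimal sketch of that global kernel lock in modern C11 (my own
naming; the actual OS/360 code used the test-and-set instruction
directly): one flag guards the entire kernel, and every other processor
just spins.

    #include <stdatomic.h>

    static atomic_flag kernel_lock = ATOMIC_FLAG_INIT;

    void kernel_enter(void)
    {
        /* test-and-set style: spin while another processor
           holds the (single, global) kernel lock */
        while (atomic_flag_test_and_set(&kernel_lock))
            ;
    }

    void kernel_exit(void)
    {
        atomic_flag_clear(&kernel_lock);
    }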

One of the things limiting further scaleup on cache machines was all
the cache consistency protocol chatter ... two processors mean
signalling from one other processor, four processors mean signalling
from three other processors, N-way means signalling from N-1
processors. SEQUENT had done a lot of cache protocol optimization for
its 32-way SMP DYNIX.

I got pulled into some of the SLAC meetings for SCI ... which used
serial fiber for the memory bus and cache protocol ... and a
directory-based cache consistency protocol. And then had quite a few
dealings with SEQUENT .... when they were working on their 256-way SCI
exemplar. One of the things they claimed was that they had done nearly
all the work on (windows) NT fine-grain locking to get NT scaleup to
8-way (which they offered as an alternative to DYNIX on their SMP
systems) ... this is server-based operation that already has highly
parallelized applications (like DBMS, aka m'soft had bought their
sql-server from SYBASE). upthread post mentioning SCI

A decade ago there was the story about the head of microsoft
complaining to an Intel SVP that they had to stop all this multi-core
stuff and return to increasingly faster single-processor systems ...
because application parallel programming was "too hard" (desktop,
client, etc). past quotes from Intel SVP Gelsinger about the multi-core
conversation with the head of Microsoft saying parallel programming was
"too hard"
http://www.garlic.com/~lynn/2007i.html#78 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2008f.html#42 Panic in Multicore Land
http://www.garlic.com/~lynn/2012e.html#15 Why do people say "the soda loop is often depicted as a simple loop"?
http://www.garlic.com/~lynn/2012j.html#44 Monopoly/ Cartons of Punch Cards
http://www.garlic.com/~lynn/2013.html#48 New HD
http://www.garlic.com/~lynn/2014d.html#85 Parallel programming may not be so daunting
http://www.garlic.com/~lynn/2014m.html#118 By the time we get to 'O' in OODA

in a recent upthread post ... I had mentioned doing vm370 2-way SMP for
HONE with some cache affinity ... getting better than 2-times (because
of the improved cache hit rate)
http://www.garlic.com/~lynn/2017c.html#45 The ICL 2900

there is a recent thread over in comp.arch about various cache affinity
strategies getting significant throughput improvements.

related to modern strategies is the periodic comment that current
memory latency (on a cache miss) .... when measured in processor cycles
.... is comparable to 60s disk latency, measured in 60s processor
cycles. Some current processors face throughput issues similar to those
60s systems faced waiting for disk i/o ... recent references:
http://www.garlic.com/~lynn/2017.html#13 follow up to dense code definition
http://www.garlic.com/~lynn/2017b.html#29 Virtualization's Past Helps Explain Its Current Importance

--
virtualization experience starting Jan1968, online at home since Mar1970
Re: The ICL 2900 [message #339293 is a reply to message #339206] Tue, 14 March 2017 09:00
jmfbahciv
Anne & Lynn Wheeler wrote:
> jmfbahciv <See.above@aol.com> writes:
>> Which don't seem to run in a true SMP manner but that's just the feeling
>> I get when using a multi-core system.
>
> re:
> http://www.garlic.com/~lynn/2017.html#59 The ICL 2900
> http://www.garlic.com/~lynn/2017.html#61 The ICL 2900
> http://www.garlic.com/~lynn/2017.html#74 The ICL 2900
> http://www.garlic.com/~lynn/2017.html#75 The ICL 2900
> http://www.garlic.com/~lynn/2017c.html#30 The ICL 2900
> http://www.garlic.com/~lynn/2017c.html#44 The ICL 2900
> http://www.garlic.com/~lynn/2017c.html#45 The ICL 2900
>
> as previously mentioned, OS/360 on the 360/65MP had a global kernel
> spin-lock ... so only one processor executed in the kernel at a time
> (this simplified all the serialization changes to the kernel but
> seriously limited the amount of parallelization). That accounted for
> the push back from the POK favorite-son operating system (OS/360 MVT)
> people on needing compare&swap for SMP (test&set was more than
> sufficient) ... and carried forward into 370 SMP MVS ... where
> throughput could be 1.2 to 1.5 times simplex (i.e. the 370 hardware
> clock slowed down 10% listening for cross-cache invalidation, so
> hardware was only 1.8 times to start, plus a large amount of software
> SIGP signalling/interrupt overhead, and very poor parallelization from
> the lack of fine-grain locking).
> http://www.garlic.com/~lynn/subtopic.html#smp
>
> One of the things limiting further scaleup on cache machines was all the
> cache consistency protocol chatter ... two-processors mean signaling
> from one other processor, four processors mean signaling from three
> other processors, N-way means signaling from N-1 processors. SEQUENT had
> done a lot of cache protocol optimization for its 32-way SMP DYNIX.
>
> I got pulled into some of the SLAC meetings for SCI ... which used
> serial fiber for the memory bus and cache protocol ... and a
> directory-based cache consistency protocol. And then had quite a few
> dealings with SEQUENT .... when they were working on their 256-way SCI
> exemplar. One of the things they claimed was that they had done nearly
> all the work on (windows) NT fine-grain locking to get NT scaleup to
> 8-way (which they offered as an alternative to DYNIX on their SMP
> systems) ... this is server-based operation that already has highly
> parallelized applications (like DBMS, aka m'soft had bought their
> sql-server from SYBASE). upthread post mentioning SCI
> http://www.garlic.com/~lynn/2017b.html#73 The ICL 2900
>
> A decade ago there was the story about the head of microsoft
> complaining to an Intel SVP that they had to stop all this multi-core
> stuff and return to increasingly faster single-processor systems ...
> because application parallel programming was "too hard" (desktop,
> client, etc). past quotes from Intel SVP Gelsinger about the
> multi-core conversation with the head of Microsoft saying parallel
> programming was "too hard"

Well, then they shouldn't have pulled the funding for JMF's project. He
was the expert at DEC.


> http://www.garlic.com/~lynn/2007i.html#78 John W. Backus, 82, Fortran developer, dies
> http://www.garlic.com/~lynn/2008f.html#42 Panic in Multicore Land
> http://www.garlic.com/~lynn/2012e.html#15 Why do people say "the soda loop is often depicted as a simple loop"?
> http://www.garlic.com/~lynn/2012j.html#44 Monopoly/ Cartons of Punch Cards
> http://www.garlic.com/~lynn/2013.html#48 New HD
> http://www.garlic.com/~lynn/2014d.html#85 Parallel programming may not be so daunting
> http://www.garlic.com/~lynn/2014m.html#118 By the time we get to 'O' in OODA
>
> in a recent upthread post ... I had mentioned doing vm370 2-way SMP for
> HONE with some cache affinity ... getting better than 2-times (because
> of the improved cache hit rate)
> http://www.garlic.com/~lynn/2017c.html#45 The ICL 2900
>
> there is recent thread over in comp.arch about doing various cache
> affinity strategies getting significant throughput improvements.
>
> related to modern strategies is the periodic comment that current
> memory latency (on a cache miss) .... when measured in processor
> cycles .... is comparable to 60s disk latency, measured in 60s
> processor cycles.

WHOA!!!!

> Some current processors face throughput issues similar to those 60s
> systems faced waiting for disk i/o ... recent references:
> http://www.garlic.com/~lynn/2017.html#13 follow up to dense code definition
> http://www.garlic.com/~lynn/2017b.html#29 Virtualization's Past Helps
> Explain Its Current Importance
>

They're going to have to start thinking outside the box. There is no
reason all caches have to speak to every other CPU's cache. That was
the reason for the spin lock in the first place. In olden days there
were only a few events which required [what we called] the boot CPU
to provide interrupt/data to all other CPUs. It sounds like today's
hard/software need to address that. However, the issue does need
tight hardware and software development human communication. I wonder
what other side effects have happened because the two are no
longer tied together under a common management.


/BAH
Re: The ICL 2900 [message #339295 is a reply to message #339293] Tue, 14 March 2017 09:44
Scott Lurndal
jmfbahciv <See.above@aol.com> writes:
> Anne & Lynn Wheeler wrote:

>> Some current processors face throughput issues similar to those 60s
>> systems faced waiting for disk i/o ... recent references:
>> http://www.garlic.com/~lynn/2017.html#13 follow up to dense code definition
>> http://www.garlic.com/~lynn/2017b.html#29 Virtualization's Past Helps
>> Explain Its Current Importance
>>
>
> They're going to have to start thinking outside the box.

"They've" been thinking outside the box for decades. Starting
with Sequent and SGI's NUMA systems.


> There is no
> reason all caches have to speak to every other CPU's cache.

So long as data is shared between CPUs, your statement is
so very wrong and evinces a lack of understanding of caching
and cache protocols. Feel free to research the acronym
"MESI" (or the alternate MOESI). https://en.wikipedia.org/wiki/Cache_memory#Cache_coherency

> That was
> the reason for the spin lock in the first place.

No, the reason for synchronization primitives (e.g. C.A.R. Hoare's
_Communicating Sequential Processes_, semaphores, mutexes, and the
degenerate case of the spin-lock) is to protect shared data.
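A minimal sketch of that shared-data protection with a pthread mutex
(my own example, not from the thread): the lock serializes access so
concurrent updates cannot corrupt the shared value.

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static long shared_balance;

    void deposit(long amount)
    {
        pthread_mutex_lock(&lock);    /* only one thread at a time */
        shared_balance += amount;
        pthread_mutex_unlock(&lock);
    }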

> In olden days there
> were only a few events which required [what we called] the boot CPU
> to provide interrupt/data to all other CPUs.

There have been forty years of progress since the "olden days" and
highly-threaded workloads are standard (for example, memcached or
NGINX).

Today, the _applications_ share data between processes and within
processes between threads. This is common and required for adequate
performance since single-core performance has hit the physics wall
(power, leakage, thermal limits).


> It sounds like today's
> hard/software need to address that.

Today's software makes great strides in handling high levels of
parallelism - particularly in the HPC space, where single applications
with tens of thousands of threads (and the corresponding hardware
cores) sharing memory are not uncommon.

> However the issue does need
> tight hardware and software development human communication.

If you think there is no communication between hardware and software
developers, you've not been paying attention. Just two weeks
ago I participated in a technical advisory board meeting sponsored
by one of the major processor vendors - this is the group that
helps guide the future development of the processor architecture, and
the group consisted of senior engineers from operating
system vendors, many of the major hardware vendors, several of the
major data center operators, and the two major search engines.

See also http://opencompute.org/ for another vehicle of
"human communication" between hardware and software and
end-users (Facebook is a major sponsor).

> I wonder
> what other side effects have happened because the two are no
> longer tied together under a common management.

Decoupling software from hardware has provided far more benefit
than any single manufacturer could have by holding all the cards.
Re: The ICL 2900 [message #339304 is a reply to message #339295] Tue, 14 March 2017 12:32
Anne & Lynn Wheeler
scott@slp53.sl.home (Scott Lurndal) writes:
> "They've" been thinking outside the box for decades. Starting
> with Sequent and SGI's NUMA systems.

as well as data general and convex ... all implemented
with SCI ... recent refs:
http://www.garlic.com/~lynn/2017b.html#73 The ICL 2900
http://www.garlic.com/~lynn/2017c.html#48 The ICL 2900

wiki
https://en.wikipedia.org/wiki/Scalable_Coherent_Interface

the memory bus was 64-port .... sequent and data general did
4-processor boards with shared cache, and 64 boards (for 256 processors
total) interfaced to SCI.

futurebus contributes to SCI for I/O
https://en.wikipedia.org/wiki/Futurebus

Futurebus effort did act as a catalyst for simpler serial
technologies. A group then organized to create a system aimed directly
at this need, which eventually led to Scalable Coherent Interface
(SCI). Meanwhile, another member decided to simply re-create the entire
concept on a much simpler basis, which resulted in QuickRing. Due to the
simplicity of these standards, both standards were completed before
Futurebus+. Futurebus+ was ahead of its time in the 1980s. VME and other
parallel bus standards are still trying to adapt concepts that are
implemented in the Futurebus, especially in high performance
applications.

.... snip ...

contributing to Next Generation I/O and eventually InfiniBand
https://en.wikipedia.org/wiki/InfiniBand

In the late 80s, I got dragged into LANL standardization for (parallel)
HIPPI, LLNL standardization for (serial) FCS, and SLAC standardization
for SCI ... as well as working on cluster scaleup for our HA/CMP product
(both technical/scientific with national labs and commercial with RDBMS
vendors).

old reference to Jan1992 meeting in Ellison's conference room on
commercial cluster scaleup
http://www.garlic.com/~lynn/95.html#13

we had also been working with (IBM) Hursley's 80mbit serial copper I/O
and wanted it to evolve into fractional-speed interoperability with FCS
(but it evolved into the incompatible SSA instead) ... recent references:
http://www.garlic.com/~lynn/2017b.html#73 The ICL 2900
http://www.garlic.com/~lynn/2017b.html#75 The ICL 2900

within weeks of the Ellison meeting, the cluster scaleup is transferred
to Kingston, announced as IBM supercomputer for technical/scientific
*ONLY* and we are told we can't work on anything with more than four
processors. some old email
http://www.garlic.com/~lynn/lhwemail.html#medusa

press from 17Feb1992, *scientific and technical* only
http://www.garlic.com/~lynn/2001n.html#6000clusters1
and 11May1992 "IBM" *caught by surprise* by national lab interest in
cluster supercomputers
http://www.garlic.com/~lynn/2001n.html#6000clusters2

I've mentioned before getting con'ed into doing 4341 benchmarks for
LLNL, which was interested in getting 70 4341s for a compute farm ...
sort of the leading edge of the cluster supercomputer tsunami
http://www.garlic.com/~lynn/2006y.html#email790220
other old 4341 email
http://www.garlic.com/~lynn/lhwemail.html#4341

background leading up to the transfer ... there was an operation in the
Kingston lab that had responsibility for supercomputers ... they were
working on a design ... but they were also providing financing to Chen
supercomputing. At the end of Oct1991, the senior corporate VP
supporting the Kingston supercomputer operation retires. There are then
audits and reviews of all the projects supported by the retired VP.
After that they start scouring the corporation for technology that
could be used for a supercomputer.

In any case, we leave IBM later in 1992. Later in the 90s, Steve Chen
is CTO at Sequent and we are brought in as consultants (all before IBM
buys and shuts down Sequent). misc. past posts mentioning Steve Chen
http://www.garlic.com/~lynn/2001n.html#70 CM-5 Thinking Machines, Supercomputers
http://www.garlic.com/~lynn/2004b.html#19 Worst case scenario?
http://www.garlic.com/~lynn/2006v.html#12 Steve Chen Making China's Supercomputer Grid
http://www.garlic.com/~lynn/2006y.html#38 Wanted: info on old Unisys boxen
http://www.garlic.com/~lynn/2009.html#5 Is SUN going to become x86'ed ??
http://www.garlic.com/~lynn/2009e.html#7 IBM in Talks to Buy Sun
http://www.garlic.com/~lynn/2009s.html#5 While watching Biography about Bill Gates on CNBC last Night
http://www.garlic.com/~lynn/2009s.html#42 Larrabee delayed: anyone know what's happening?
http://www.garlic.com/~lynn/2009s.html#59 Problem with XP scheduler?
http://www.garlic.com/~lynn/2010b.html#71 Happy DEC-10 Day
http://www.garlic.com/~lynn/2010e.html#68 Entry point for a Mainframe?
http://www.garlic.com/~lynn/2010e.html#70 Entry point for a Mainframe?
http://www.garlic.com/~lynn/2010f.html#47 Nonlinear systems and nonlocal supercomputing
http://www.garlic.com/~lynn/2010f.html#48 Nonlinear systems and nonlocal supercomputing
http://www.garlic.com/~lynn/2010i.html#61 IBM to announce new MF's this year
http://www.garlic.com/~lynn/2011c.html#24 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011d.html#7 IBM Watson's Ancestors: A Look at Supercomputers of the Past
http://www.garlic.com/~lynn/2011o.html#79 Why are organizations sticking with mainframes?
http://www.garlic.com/~lynn/2012p.html#13 AMC proposes 1980s computer TV series Halt & Catch Fire
http://www.garlic.com/~lynn/2013c.html#65 What Makes an Architecture Bizarre?
http://www.garlic.com/~lynn/2013f.html#73 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013h.html#6 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2014.html#71 the suckage of MS-DOS, was Re: 'Free Unix!
http://www.garlic.com/~lynn/2015g.html#74 100 boxes of computer books on the wall
http://www.garlic.com/~lynn/2015h.html#10 the legacy of Seymour Cray

--
virtualization experience starting Jan1968, online at home since Mar1970
Re: The ICL 2900 [message #339362 is a reply to message #339295] Wed, 15 March 2017 09:29
jmfbahciv
Scott Lurndal wrote:
> jmfbahciv <See.above@aol.com> writes:
>> Anne & Lynn Wheeler wrote:
>
>> Some current processors face throughput issues similar to those 60s
>> systems faced waiting for disk i/o ... recent references:
>>> http://www.garlic.com/~lynn/2017.html#13 follow up to dense code definition
>>> http://www.garlic.com/~lynn/2017b.html#29 Virtualization's Past Helps
>>> Explain Its Current Importance
>>>
>>
>> They're going to have to start thinking outside the box.
>
> "They've" been thinking outside the box for decades. Starting
> with Sequent and SGI's NUMA systems.
>
>
>> There is no
>> reason all caches have to speak to every other CPU's cache.
>
> So long as data is shared between CPU's,

That premise is what needs to be addressed.

> your statement is
> so very wrong and evinces a lack of understanding of caching
> and cache protocols. Feel free to research the acronym
> "MESI" (or the alternate MOESI).
https://en.wikipedia.org/wiki/Cache_memory#Cache_coherency

Of course it's wrong because of the premise.
>
>> That was
>> the reason for the spin lock in the first place.
>
> No, the reason for synchronization primitives (e.g. C.A.R. Hoare's
> _Communicating Sequential Processes_, semaphores, mutexes, and the
> degenerate case of the spin-lock) is to protect shared data.
>
>>> In olden days there
>> were only a few events which required [what we called] the boot CPU
>> to provide interrupt/data to all other CPUs.
>
> There have been forty years of progress since the "olden days" and
> highly-threaded workloads are standard (for example, memcached or
> NGINX).
>
> Today, the _applications_ share data between processes and within
> processes between threads. This is common and required for adequate
> performance since single-core performance has hit the physics wall
> (power, leakage, thermal limits).

That sharing is the problem. Instead of making code reentrant
like we had to do, today's systems are going to have to figure
out how to make data "reentrant". Does 100% of all data
have to be shared?

>
>
>> It sounds like today's
>> hard/software need to address that.
>
> Today's software makes great strides in handling high-levels of
> parallelism - particularly in the HPC space, where single applications
> with 10's of thousands of threads (and the corresponding hardware
> cores) sharing memory are not uncommon.
>
>> However the issue does need
>> tight hardware and software development human communication.
>
> If you think there is no communication between hardware and software
> developers, you've not been paying attention. Just two weeks
> ago I participated in a technical advisory board meeting sponsored
> by one of the major processor vendors - this is the group that
> helps guide the future development of the processor architecture and
> the group consisted of senior engineers from operating
> system vendors, many of the major hardware vendors, several of the
> major data center operators and the two major search engines.
>
> See also http://opencompute.org/ for another vehicle of
> "human communication" between hardware and software and
> end-users (Facebook is a major sponsor).

I said "tight hard and software human communication". The meetings
above are organized and sparse. I'm talking about being able to
shout over the office wall or walking down the hall or bullshitting
over a beer at the bar. We sorted out lots of things with that
type of communication process so that most of our productive time
could be spent on the useful ideas, designs, and development.
>
>> I wonder
>> what other side effects have happened because the two are no
>> longer tied together under a common management.
>
> Decoupling software from hardware has provided far more benefit
> than any single manufacturer could have by holding all the cards.

I know its usefulness. I'm wondering about what the tradeoffs were
to achieve that. I tried to promote plug'n play at DEC and hit
a diamond brick wall.

/BAH
Re: The ICL 2900 [message #339367 is a reply to message #339362] Wed, 15 March 2017 10:41
Morten Reistad
In article <PM00054AC50DE6BC65@aca40287.ipt.aol.com>,
jmfbahciv <See.above@aol.com> wrote:
> Scott Lurndal wrote:
>> jmfbahciv <See.above@aol.com> writes:
>>> Anne & Lynn Wheeler wrote:
>>
>>>> Some current processors face throughput issues similar to those 60s
>>>> systems faced waiting for disk i/o ... recent references:
>>>> http://www.garlic.com/~lynn/2017.html#13 follow up to dense code definition
>>>> http://www.garlic.com/~lynn/2017b.html#29 Virtualization's Past Helps
>>>> Explain Its Current Importance
>>>>
>>>
>>> They're going to have to start thinking outside the box.
>>
>> "They've" been thinking outside the box for decades. Starting
>> with Sequent and SGI's NUMA systems.
>>
>>
>>> There is no
>>> reason all caches have to speak to every other CPU's cache.
>>
>> So long as data is shared between CPU's,
>
> That premise is what needs to be addressed.
>
>> your statement is
>> so very wrong and evinces a lack of understanding of caching
>> and cache protocols. Feel free to research the acronym
>> "MESI" (or the alternate MOESI).
> https://en.wikipedia.org/wiki/Cache_memory#Cache_coherency
>
> Of course it's wrong because of the premise.
>>
>>> That was
>>> the reason for the spin lock in the first place.
>>
>> No, the reason for synchronization primitives (e.g. C.A.R. Hoare's
>> _Communicating Sequential Processes_, semaphores, mutexes, and the
>> degenerate case of the spin-lock) is to protect shared data.
>>
>>> In olden days there
>>> were only a few events which required [what we called] the boot CPU
>>> to provide interrupt/data to all other CPUs.
>>
>> There have been forty years of progress since the "olden days" and
>> highly-threaded workloads are standard (for example, memcached or
>> NGINX).
>>
>> Today, the _applications_ share data between processes and within
>> processes between threads. This is common and required for adequate
>> performance since single-core performance has hit the physics wall
>> (power, leakage, thermal limits).
>
> That sharing is the problem. Instead of making code reentrant
> like we had to do, today's systems are going to have to figure
> out how to make data "reentrant". Does 100% of all data
> have to be shared?

Mostly. But not at the same speeds. Go look up "NUMA" when
you are at the library next time. That happened in the eighties,
so you should be aware of it.

The basic premise is that when you have hundreds of cpus they don't
(mostly) all need the fastest interconnect: there can be local
clusters of a few that have direct access to the same memory/buses
and where the caches are coordinated; but then there are the
rest of the 100+ cpus, which have a slower interconnect to the
other clusters.

SGI and others pioneered this.

We did run a quite big (for the time) SGI R10K server for news,
14 processors at the most (in 1996). We had THAT much access and
batching. We ranked among the top 50 among news servers worldwide
then.

>>> It sounds like today's
>>> hard/software need to address that.
>>
>> Today's software makes great strides in handling high-levels of
>> parallelism - particularly in the HPC space, where single applications
>> with 10's of thousands of threads (and the corresponding hardware
>> cores) sharing memory are not uncommon.
>>
>>> However the issue does need
>>> tight hardware and software development human communication.
>>
>> If you think there is no communication between hardware and software
>> developers, you've not been paying attention. Just two weeks
>> ago I participated in a technical advisory board meeting sponsored
>> by one of the major processor vendors - this is the group that
>> helps guide the future development of the processor architecture and
>> the group consisted of senior engineers from operating
>> system vendors, many of the major hardware vendors, several of the
>> major data center operators and the two major search engines.
>>
>> See also http://opencompute.org/ for another vehicle of
>> "human communication" between hardware and software and
>> end-users (Facebook is a major sponsor).
>
> I said "tight hard and software human communication". The meetings
> above are organized and sparse. I'm talking about being able to
> shout over the office wall or walking down the hall or bullshitting
> over a beer at the bar. We sorted out lots of things with that
> type of communication process so that most of our productive time
> could be spent on the useful ideas, designs, and development.

This informality is active in the Valley, and also in local places
like here.

>>> I wonder
>>> what other side effects have happened because the two are no
>>> longer tied together under a common management.
>>
>> Decoupling software from hardware has provided far more benefit
>> than any single manufacturer could have by holding all the cards.
>
> I know its usefulness. I'm wondering about what the tradeoffs were
> to achieve that. I tried to promote plug'n play at DEC and hit
> a diamond brick wall.

-- mrr
Re: The ICL 2900 [message #339368 is a reply to message #339367] Wed, 15 March 2017 11:11
Scott Lurndal
Morten Reistad <first@last.name.invalid> writes:
> In article <PM00054AC50DE6BC65@aca40287.ipt.aol.com>,
> jmfbahciv <See.above@aol.com> wrote:
>> Scott Lurndal wrote:
>>> jmfbahciv <See.above@aol.com> writes:

>>>>
>>>> They're going to have to start thinking outside the box.
>>>
>>> "They've" been thinking outside the box for decades. Starting
>>> with Sequent and SGI's NUMA systems.
>>>

>>>
>>> Today, the _applications_ share data between processes and within
>>> processes between threads. This is common and required for adequate
>>> performance since single-core performance has hit the physics wall
>>> (power, leakage, thermal limits).
>>
>> That sharing is the problem. Instead of making code reentrant
>> like we had to do, today's systems are going to have to figure
>> out how to make data "reentrant". Does 100% of all data
>> have to be shared?
>
> Mostly. But not at the same speeds. Go look up "NUMA" when
> you are at the library next time. That happened in the eighties,
> so you should be aware of it.

The proper term of art is ccNUMA (cache-coherent NUMA). Cache-to-cache
communication is still necessary in NUMA systems for correctness.

The primary software-visible aspect of NUMA (non-uniform memory
architecture) is that software (typically the kernel) needs to make
intelligent decisions during memory allocation and thread scheduling,
taking into account the differences in latency between accessing local
and remote memory.
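A minimal sketch of that allocation decision using Linux's libnuma (an
assumption that the libnuma API is available; the helper name is my
own): allocate a thread's buffer on the NUMA node it is currently
running on, so it gets local, lower-latency memory.

    #define _GNU_SOURCE
    #include <numa.h>      /* link with -lnuma */
    #include <sched.h>
    #include <stdlib.h>

    void *alloc_near_me(size_t size)
    {
        if (numa_available() < 0)
            return malloc(size);   /* not a NUMA system: plain malloc */
        /* find the node this thread is executing on, allocate there */
        int node = numa_node_of_cpu(sched_getcpu());
        return numa_alloc_onnode(size, node);
    }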

The startup 3Leaf Systems built a large computer from commodity
AMD/Intel systems using an ASIC that extended the coherency domain
over InfiniBand (or 10GbE) to create a ccNUMA system that scaled
to 16 nodes (AMD); the next design, for Intel, scaled to
1024 nodes. We ran out of funding in 2010 and sold the IP, and most
of the development team transferred to the buyer, until CFIUS put
the kibosh on the deal.

>
> The basic premise is when you have hundreds of cpus they don't
> (mostly) all need the fastest interconnect, there can be local
> clusters of a few that have direct access to the same memory/buses
> and where the caches are coordinated; but then there are the
> rest of the 100+ cpus which have a slower interconnect to the
> other clusters.
>
> SGI and others pioneered this.

Sequent was first, IIRC.

>
> We did run a quite big (for the time) SGI R10K server for news,
> 14 processors at the most (in 1996). We had THAT much access and
> batching. We ranked among the top 50 among news servers worldwide
> then.

I worked on the successor OS for the SGI Origin systems (code named teak) for
a couple of years at SGI - it was a distributed version of IRIX
that could present a single system image over a collection of
interconnected (but not necessarily fully cache-coherent)
nodes.

At Unisys, we built the OPUS systems, which were also MPP
(non-cache-coherent) boxes with a single-system-image version of Unix
(SVR4.2ES/MP) on top of the Chorus Systemes microkernel: 64
two-processor pentium pro (p6) nodes, each with ethernet and scsi, that
looked to software and operations personnel like a single computer
system (this used the Intel Paragon wormhole-routing backplane for
messaging between nodes).

The problem with both teak and OPUS is that performance suffers for
shared memory applications: the granule of exclusion was the
4096-byte page, rather than the 64-byte cache line, so they suffered
from false sharing.
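
For readers who haven't met false sharing: a toy demonstration (mine,
with invented layout and iteration counts, not from the post). Two
threads bump separate counters; when both counters sit in one 64-byte
line, every store bounces the line between caches, and padding them
apart removes the ping-pong. With a 4096-byte page as the granule of
exclusion, as in teak and OPUS, the same effect plays out at page scale.

    /* false-sharing toy - compile with: cc -O2 -pthread fs.c
       (the aligned attribute is gcc/clang syntax) */
    #include <pthread.h>
    #include <stdio.h>

    #define ITERS 50000000UL

    /* very likely in the same 64-byte cache line */
    static volatile unsigned long near_a, near_b;

    /* forced onto separate cache lines */
    static volatile unsigned long far_a __attribute__((aligned(64)));
    static volatile unsigned long far_b __attribute__((aligned(64)));

    static void *bump(void *arg)
    {
        volatile unsigned long *ctr = arg;
        for (unsigned long i = 0; i < ITERS; i++)
            (*ctr)++;
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        /* time this run, then swap in &far_a / &far_b: the padded
           pair typically runs several times faster on multicore */
        pthread_create(&t1, NULL, bump, (void *)&near_a);
        pthread_create(&t2, NULL, bump, (void *)&near_b);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("%lu %lu\n", near_a, near_b);
        return 0;
    }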


>>
>> I said "tight hard and software human communication". The meetings
>> above are organized and sparse. I'm talking about being able to
>> shout over the office wall or walking down the hall or bullshitting
>> over a beer at the bar. We sorted out lots of things with that
>> type of communication process so that most of our productive time
>> could be spent on the useful ideas, designs, and development.

I heard what you said, and disagree with it completely. There is
a difference between a closed shop such as DEC, IBM and Burroughs
and the modern decentralized hardware industry.

>
> This informality is active in the Valley, and also in local places
> like here.

And to a lesser extent in the Boston metro area.
Re: The ICL 2900 [message #339369 is a reply to message #339367] Wed, 15 March 2017 11:46 Go to previous messageGo to next message
Anne &amp; Lynn Wheel is currently offline  Anne &amp; Lynn Wheel
Messages: 3156
Registered: January 2012
Karma: 0
Senior Member
Morten Reistad <first@last.name.invalid> writes:
> We did run a quite big (for the time) SGI R10K server for news,
> 14 processors at the most (in 1996). We had THAT much access and
> batching. We ranked among the top 50 among news servers worldwide
> then.

re:
http://www.garlic.com/~lynn/2017b.html#73 The ICL 2900
http://www.garlic.com/~lynn/2017c.html#48 The ICL 2900
http://www.garlic.com/~lynn/2017c.html#49 The ICL 2900

trivia: when we were doing ha/cmp ... the executive we first report to
then transfers over to head up somerset (AIM: apple, ibm, motorola;
power/pc). we leave in 92 after cluster scaleup is transferred to
kingston and announced as IBM supercomputer. He is later hired away from
somerset/aim by SGI to be president of MIPS, doing the MIPS R10K
(i.e. SGI had bought MIPS). We would drop in periodically and shoot the
breeze ... in fact he lets me have his executive Indy to take home.

some past posts
http://www.garlic.com/~lynn/subtopic.html#hacmp
AIM
https://en.wikipedia.org/wiki/AIM_alliance
MIPS R10K
https://en.wikipedia.org/wiki/R10000

we had been brought in as consultants by a small client/server startup
that wanted to do payment transactions on their server. Two of the
people that were in the Jan1992 Ellison meeting ... mentioned here
http://www.garlic.com/~lynn/95.html#13

have left oracle and are responsible for the "commerce server" at the
startup. The startup had also invented this technology they called "SSL"
that they wanted to use. The result is now frequently called "electronic
commerce".

As public load downloading client & server software from their web
servers increased ... they kept adding servers (and directed
customers to manually spread downloads across the growing number of
servers) ... but CPU capacity couldn't keep up with the increase in
download use. Turns out their client/server protocol used TCP for these
very short HTTP & HTTPS operations. TCP had never been implemented for
such behavior ... part of it was reliable session close with FINWAIT
list to catch dangling packets. FINWAIT operation had basically been
designed assuming there would never be more than a few sessions in
close ... so it would do a linear search of the list to see if incoming
packets were part of a session in the process of closing. As nominal
webserver use started to climb, webserver processors quickly shot to
100%, nearly all of it spent searching the FINWAIT list. There was a
period in the mid-90s where webservers hit this brick wall.
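
The failure mode is easy to model. A sketch (mine; the structure and
field names are invented, not any actual stack's code) of the linear
scan being described:

    /* Every arriving packet is matched against the list of sessions
       in the close sequence -- fine for "a few" entries, O(n) per
       packet once short HTTP connections pile up by the tens of
       thousands. A lookup hashed on the 4-tuple would be ~O(1);
       the posts don't say exactly what the vendors shipped, but
       the linear walk itself is the point here. */
    #include <stddef.h>

    struct closing {
        unsigned saddr, daddr;          /* connection 4-tuple */
        unsigned short sport, dport;
        struct closing *next;
    };

    static struct closing *finwait_head;    /* unsorted linked list */

    struct closing *find_closing(unsigned saddr, unsigned short sport,
                                 unsigned daddr, unsigned short dport)
    {
        for (struct closing *c = finwait_head; c != NULL; c = c->next)
            if (c->saddr == saddr && c->sport == sport &&
                c->daddr == daddr && c->dport == dport)
                return c;
        return NULL;
    }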

It turns out that Sequent had previously encountered this problem with
customers that were running large commercial operations with 20,000
TELNET (TCP) sessions. While the TELNET sessions were much longer
.... with 20,000 ... there was still relative high frequency of sessions
being (created &) shutdown ... and Sequent had already addressed
efficient FINWAIT list operation. The load problem at the small
client/server operation was "solved" when they installed a Sequent for
handling their webserver load.

After another six months, other vendors started shipping updates that
addressed the FINWAIT problem.

past posts mentioning FINWAIT
http://www.garlic.com/~lynn/99.html#1 Early tcp development?
http://www.garlic.com/~lynn/99.html#164 Uptime (was Re: Q: S/390 on PowerPC?)
http://www.garlic.com/~lynn/2002.html#3 The demise of compaq
http://www.garlic.com/~lynn/2002.html#14 index searching
http://www.garlic.com/~lynn/2004m.html#46 Shipwrecks
http://www.garlic.com/~lynn/2005g.html#42 TCP channel half closed
http://www.garlic.com/~lynn/2006f.html#33 X.509 and ssh
http://www.garlic.com/~lynn/2006k.html#2 Hey! Keep Your Hands Out Of My Abstraction Layer!
http://www.garlic.com/~lynn/2006m.html#37 Curiosity
http://www.garlic.com/~lynn/2006p.html#11 What part of z/OS is the OS?
http://www.garlic.com/~lynn/2007j.html#38 Problem with TCP connection close
http://www.garlic.com/~lynn/2008m.html#28 Yet another squirrel question - Results (very very long post)
http://www.garlic.com/~lynn/2008p.html#36 Making tea
http://www.garlic.com/~lynn/2009e.html#7 IBM in Talks to Buy Sun
http://www.garlic.com/~lynn/2009i.html#76 Tiny-traffic DoS attack spotlights Apache flaw
http://www.garlic.com/~lynn/2009n.html#44 Follow up
http://www.garlic.com/~lynn/2010b.html#62 Happy DEC-10 Day
http://www.garlic.com/~lynn/2010m.html#51 Has there been a change in US banking regulations recently?
http://www.garlic.com/~lynn/2010p.html#9 The IETF is probably the single element in the global equation of technology competition than has resulted in the INTERNET
http://www.garlic.com/~lynn/2011g.html#11 Is the magic and romance killed by Windows (and Linux)?
http://www.garlic.com/~lynn/2011n.html#6 Founders of SSL Call Game Over?
http://www.garlic.com/~lynn/2012d.html#20 Writing article on telework/telecommuting
http://www.garlic.com/~lynn/2012e.html#89 False Start's sad demise: Google abandons noble attempt to make SSL less painful
http://www.garlic.com/~lynn/2012i.html#15 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
http://www.garlic.com/~lynn/2013h.html#8 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013i.html#46 OT: "Highway Patrol" back on TV
http://www.garlic.com/~lynn/2013i.html#48 Google takes on Internet Standards with TCP Proposals, SPDY standardization
http://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
http://www.garlic.com/~lynn/2014g.html#13 Is it time for a revolution to replace TLS?
http://www.garlic.com/~lynn/2014h.html#26 There Is Still Hope
http://www.garlic.com/~lynn/2014j.html#76 No Internet. No Microsoft Windows. No iPods. This Is What Tech Was Like In 1984
http://www.garlic.com/~lynn/2015d.html#2 Knowledge Center Outage May 3rd
http://www.garlic.com/~lynn/2015d.html#50 Western Union envisioned internet functionality
http://www.garlic.com/~lynn/2015e.html#25 The real story of how the Internet became so vulnerable
http://www.garlic.com/~lynn/2015f.html#71 1973--TI 8 digit electric calculator--$99.95
http://www.garlic.com/~lynn/2015g.html#96 TCP joke
http://www.garlic.com/~lynn/2015h.html#113 Is there a source for detailed, instruction-level performance info?
http://www.garlic.com/~lynn/2016e.html#43 How the internet was invented
http://www.garlic.com/~lynn/2016e.html#127 Early Networking

--
virtualization experience starting Jan1968, online at home since Mar1970
Re: The ICL 2900 [message #339397 is a reply to message #339362] Wed, 15 March 2017 20:15 Go to previous messageGo to next message
pechter is currently offline  pechter
Messages: 452
Registered: July 2012
Karma: 0
Senior Member
In article <PM00054AC50DE6BC65@aca40287.ipt.aol.com>,
jmfbahciv <See.above@aol.com> wrote:
> Scott Lurndal wrote:
>> jmfbahciv <See.above@aol.com> writes:
>>> Anne & Lynn Wheeler wrote:
>>
>>>> Some current processor throughput face similar issues
>>>> that 60s systems faced when waiting for disk i/o ... recent references:
>>>> http://www.garlic.com/~lynn/2017.html#13 follow up to dense code
> definition
>>>> http://www.garlic.com/~lynn/2017b.html#29 Virtualization's Past Helps
>>>> Explain Its Current Importance
>>>>
>>>

Lots of snip....

>>> They're going to have to start thinking outside the box.
>>
>> "They've" been thinking outside the box for decades. Starting
>> with Sequent and SGI's NUMA systems.
>>
>>
>>> There is no
>>> reason all caches have to speak to every other CPU's cache.
>> Decoupling software from hardware has provided far more benefit
>> than any single manufacturer could have by holding all the cards.
Lots of snip....
>
> I know its usefulness. I'm wondering about what the tradeoffs were
> to achieve that. I tried to promote plug'n play at DEC and hit
> a diamond brick wall.

OK... a computer topic that's interesting. How were you trying to implement
plug and play at DEC... They were getting there on VAXes if you had
things like UDA50s, where the hardware had the interrupt
vector programmed into it by the CPU when it checked the bus.

A little bit better than setting the switches on each board to match
the PDP11 address and vector list.

Bill

>
> /BAH
Re: The ICL 2900 [message #339407 is a reply to message #339368] Wed, 15 March 2017 21:02 Go to previous messageGo to next message
Anne &amp; Lynn Wheel is currently offline  Anne &amp; Lynn Wheel
Messages: 3156
Registered: January 2012
Karma: 0
Senior Member
scott@slp53.sl.home (Scott Lurndal) writes:
> Sequent was first, IIRC.

re:
http://www.garlic.com/~lynn/2017b.html#73 The ICL 2900
http://www.garlic.com/~lynn/2017c.html#48 The ICL 2900
http://www.garlic.com/~lynn/2017c.html#49 The ICL 2900
http://www.garlic.com/~lynn/2017c.html#52 The ICL 2900

Sequent implements "snoopy" cache for Balance
http://www.icsa.inf.ed.ac.uk/cgi-bin/hase/coherence-m.pl?wtu-model-t.html,wtu-model-f.html,menu1.html
https://en.wikipedia.org/wiki/Sequent_Computer_Systems#Balance

Then did Symmetry ... started with i386, which was what was installed at
Mosaic/Netscape (trivia: when NCSA complained about using "Mosaic", what
company donated "Netscape") that addressed FINWAIT problem
https://en.wikipedia.org/wiki/Sequent_Computer_Systems#Symmetry

In 1994 Sequent introduced the Symmetry 5000 series models SE20, SE60
and SE90, which used 66 MHz Pentium CPUs in systems from 2 to 30
processors. The next year they expanded that with the SE30/70/100 lineup
using 100 MHz Pentiums, and then in 1996 with the SE40/80/120 with 166
MHz Pentiums. A variant of the Symmetry 5000, the WinServer 5000 series,
ran Windows NT instead of DYNIX/ptx.[10]

..... snip ...

Sequent claimed that they did the work on NT, restructuring kernel for
SMP scaleup (for servers). However, upthread, I reference that still
doesn't get consumer/desktop application threading for increasingly
multi-core processors
http://www.garlic.com/~lynn/2017c.html#48 The ICL 2900

Then for NUMA-Q, Sequent used SCI (but data general, sgi, convex, and
others, did also)
https://en.wikipedia.org/wiki/Sequent_Computer_Systems#NUMA

SCI
https://en.wikipedia.org/wiki/Scalable_Coherent_Interconnect
NUMA
https://en.wikipedia.org/wiki/Non-uniform_memory_access

I've mentioned before 370 2-way SMP slowed down base processor cycle to
allow cross-cache invalidation signals ... that was just the start, any
processing of actual invalidation overhead would be in addition to the
base processor cycle slowdown. That is with just one other processor
sending invalidation ... going to 4-way SMP then would mean three other
processors broadcasting cross-cache invalidation signals. past
SMP posts
http://www.garlic.com/~lynn/subtopic.html#smp
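
A back-of-envelope model of that scaling (the 5% figure below is
invented purely for illustration; the post gives no numbers): if each
potential peer costs a fixed stretch of the base cycle just to listen
for invalidation signals, per-CPU throughput falls as processors are
added, before any actual invalidation processing is charged.

    #include <stdio.h>

    int main(void)
    {
        const double per_peer = 0.05;   /* hypothetical cycle stretch per peer */
        for (int n = 1; n <= 8; n *= 2) {
            double per_cpu = 1.0 / (1.0 + per_peer * (n - 1));
            printf("%d-way: %.2f of base per CPU, %.2f aggregate\n",
                   n, per_cpu, n * per_cpu);
        }
        return 0;
    }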

I've periodically claimed that John Cocke's 801/risc
https://en.wikipedia.org/wiki/IBM_801
and
http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/risc/

was done to be the opposite of the enormously complex (failed)
Future System effort
http://www.garlic.com/~lynn/submain.html#futuresys

but another part of 801/risc was no cache consistency ... not even
between the i-cache and d-cache (in the same processor), ... along with
store-into cache ... loader needed special instruction to invalidate
address range in the i-cache and force corresponding changes in the
d-cache to memory (i.e. loader may have altered loaded program
instruction sequence as part of load, which would be in the d-cache,
which would have to be forced to memory and any stale information in the
i-cache removed ... so latest copy could be loaded to i-cache) ... aka
not fall into the strong memory consistency overhead of 370 SMP.
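
In modern terms the loader sequence being described looks something
like this sketch (mine, using the gcc/clang cache-clearing builtin as a
stand-in for the 801's special instruction, and assuming dst already
sits in executable memory):

    /* After writing or relocating instructions through the data
       side, force the d-cache lines out and invalidate the stale
       i-cache range before executing the freshly loaded code. */
    #include <string.h>

    typedef int (*entry_fn)(void);

    entry_fn load_code(void *dst, const void *image, size_t len)
    {
        memcpy(dst, image, len);    /* stores land in the d-cache */
        __builtin___clear_cache((char *)dst, (char *)dst + len);
        return (entry_fn)dst;       /* now safe to fetch and run */
    }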

part of somerset (AIM, referenced upthread) for power/pc was to support
cache consistency protocol ... i somewhat characterize it as adding
motorola 88k cache consistency to 801/risc.

past posts mention 801/risc, romp, rios, fort knox, pc/rt, power,
somerset, AIM, power/pc
http://www.garlic.com/~lynn/subtopic.html#801

IBM purchase of sequent
https://en.wikipedia.org/wiki/Sequent_Computer_Systems#IBM_purchase_and_disappearance

An alternative view of IBM's actions, born out of the belief[13] that
corporations maintain consistent strategies over the short and medium
term despite executive changes, is that IBM acquired Sequent not to
nurture it but simply to keep it out of Sun's clutches. Through its
acquisition of what became the Enterprise 10000 server line from Cray,
Sun had done so much financial damage to IBM's server market share, that
IBM was very reluctant to see this disaster repeated.[citation needed]
Even if it generated zero revenue for IBM, the net present value of
Sequent from IBM's viewpoint was higher inside IBM than inside Sun.[13]

.... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970
Re: The ICL 2900 [message #339431 is a reply to message #339367] Thu, 16 March 2017 08:49 Go to previous messageGo to next message
jmfbahciv is currently offline  jmfbahciv
Messages: 6173
Registered: March 2012
Karma: 0
Senior Member
Morten Reistad wrote:
> In article <PM00054AC50DE6BC65@aca40287.ipt.aol.com>,
> jmfbahciv <See.above@aol.com> wrote:
>> Scott Lurndal wrote:
>>> jmfbahciv <See.above@aol.com> writes:

<snip>

>>>> However the issue does need
>>>> tight hardware and software development human communication.
>>>
>>> If you think there is no communication between hardware and software
>>> developers, you've not been paying attention. Just two weeks
>>> ago I participated in a technical advisory board meeting sponsored
>>> by one of the major processor vendors - this is the group that
>>> helps guide the future development of the processor architecture and
>>> the group consisted of senior engineers from operating
>>> system vendors, many of the major hardware vendors, several of the
>>> major data center operators and the two major search engines.
>>>
>>> See also http://opencompute.org/ for another vehicle of
>>> "human communication" between hardware and software and
>>> end-users (Facebook is a major sponsor).
>>
>> I said "tight hard and software human communication". The meetings
>> above are organized and sparse. I'm talking about being able to
>> shout over the office wall or walking down the hall or bullshitting
>> over a beer at the bar. We sorted out lots of things with that
>> type of communication process so that most of our productive time
>> could be spent on the useful ideas, designs, and development.
>
> This informality is active in the Valley, and also in local places
> like here.

On a daily basis? Having to call a meeting is not the kind
of interaction I'm talking about. This interaction occurs
before any meeting is called and the meeting is to fine tune
the results of the interactions.


/BAH
Re: The ICL 2900 [message #339432 is a reply to message #339407] Thu, 16 March 2017 08:49 Go to previous messageGo to next message
jmfbahciv is currently offline  jmfbahciv
Messages: 6173
Registered: March 2012
Karma: 0
Senior Member
Anne & Lynn Wheeler wrote:
> scott@slp53.sl.home (Scott Lurndal) writes:
>> Sequent was first, IIRC.
>
> re:
> http://www.garlic.com/~lynn/2017b.html#73 The ICL 2900
> http://www.garlic.com/~lynn/2017c.html#48 The ICL 2900
> http://www.garlic.com/~lynn/2017c.html#49 The ICL 2900
> http://www.garlic.com/~lynn/2017c.html#52 The ICL 2900
>
> Sequent implements "snoopy" cache for Balance
> http://www.icsa.inf.ed.ac.uk/cgi-bin/hase/coherence-m.pl?wtu-model-t.html,wtu-model-f.html,menu1.html
> https://en.wikipedia.org/wiki/Sequent_Computer_Systems#Balance
>
> Then did Symmetry ... started with i386, which was what was installed at
> Mosaic/Netscape (trivia: when NCSA complained about using "Mosaic", what
> company donated "Netscape") that addressed FINWAIT problem
> https://en.wikipedia.org/wiki/Sequent_Computer_Systems#Symmetry
>
> In 1994 Sequent introduced the Symmetry 5000 series models SE20, SE60
> and SE90, which used 66 MHz Pentium CPUs in systems from 2 to 30
> processors. The next year they expanded that with the SE30/70/100 lineup
> using 100 MHz Pentiums, and then in 1996 with the SE40/80/120 with 166
> MHz Pentiums. A variant of the Symmetry 5000, the WinServer 5000 series,
> ran Windows NT instead of DYNIX/ptx.[10]
>
> .... snip ...
>
> Sequent claimed that they did the work on NT, restructuring kernel for
> SMP scaleup (for servers). However, upthread, I reference that still
> doesn't get consumer/desktop application threading for increasingly
> multi-core processors
> http://www.garlic.com/~lynn/2017c.html#48 The ICL 2900
>
> Then for NUMA-Q, Sequent used SCI (but data general, sgi, convex, and
> others, did also)
> https://en.wikipedia.org/wiki/Sequent_Computer_Systems#NUMA
>
> SCI
> https://en.wikipedia.org/wiki/Scalable_Coherent_Interconnect
> NUMA
> https://en.wikipedia.org/wiki/Non-uniform_memory_access
>
> I've mentioned before 370 2-way SMP slowed down base processor cycle to
> allow cross-cache invalidation signals ... that was just the start, any
> processing of actual invalidation overhead would be in addition to the
> base processor cycle slowdown. That is with just one other processor
> sending invalidation ... going to 4-way SMP then would mean three other
> processors broadcasting cross-cache invalidation signals. past
> SMP posts
> http://www.garlic.com/~lynn/subtopic.html#smp
>
> I've periodically claimed that John Cocke's 801/risc
> https://en.wikipedia.org/wiki/IBM_801
> and
> http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/risc/
>
> was done to be the opposite of the enormously complex (failed)
> Future System effort
> http://www.garlic.com/~lynn/submain.html#futuresys
>
> but another part of 801/risc was no cache consistency ... not even
> between the i-cache and d-cache (in the same processor), ... along with
> store-into cache ... loader needed special instruction to invalidate
> address range in the i-cache and force corresponding changes in the
> d-cache to memory (i.e. loader may have altered loaded program
> instruction sequence as part of load, which would be in the d-cache,
> which would have to be forced to memory and any stale information in the
> i-cache removed ... so latest copy could be loaded to i-cache) ... aka
> not fall into the strong memory consistency overhead of 370 SMP.

Imagine the surprise of JMF and TW when they discovered that the KL
did not support write-thru cache. It almost caused the TOPS-10 SMP
project to be cancelled.


>
> part of somerset (AIM, referenced upthread) for power/pc was to support
> cache consistency protocol ... i somewhat characterize it as adding
> motorola 88k cache consistency to 801/risc.
>
> past posts mention 801/risc, romp, rios, fort knox, pc/rt, power,
> somerset, AIM, power/pc
> http://www.garlic.com/~lynn/subtopic.html#801
>
> IBM purchase of sequent
> https://en.wikipedia.org/wiki/Sequent_Computer_Systems#IBM_purchase_and_disappearance
>
> An alternative view of IBM's actions, born out of the belief[13] that
> corporations maintain consistent strategies over the short and medium
> term despite executive changes, is that IBM acquired Sequent not to
> nurture it but simply to keep it out of Sun's clutches. Through its
> acquisition of what became the Enterprise 10000 server line from Cray,
> Sun had done so much financial damage to IBM's server market share, that
> IBM was very reluctant to see this disaster repeated.[citation needed]
> Even if it generated zero revenue for IBM, the net present value of
> Sequent from IBM's viewpoint was higher inside IBM than inside Sun.[13]

IBM seems to have never gotten rid of their determination to not
cooperate with other manufacturers' hardware. JMF's first DEC project
was to get DEC computers and IBM computers to communicate. IBM
believed that homogeneously manufactured hardware was the only possibility;
it took DEC hard/software engineers to break that self-imposed rule.
This was in 1970, 1971. DEC was willing to talk to any hardware,
including others'. 1.5 decades later it became Digital and just
as snooty as IBM. I blame this on all those mid-level managers
who got hired from IBM.


/BAH
Re: The ICL 2900 [message #339433 is a reply to message #339368] Thu, 16 March 2017 08:49 Go to previous messageGo to next message
jmfbahciv is currently offline  jmfbahciv
Messages: 6173
Registered: March 2012
Karma: 0
Senior Member
Scott Lurndal wrote:
> Morten Reistad <first@last.name.invalid> writes:
>> In article <PM00054AC50DE6BC65@aca40287.ipt.aol.com>,
>> jmfbahciv <See.above@aol.com> wrote:
>>> Scott Lurndal wrote:
>>>> jmfbahciv <See.above@aol.com> writes:
>
>>>> >
>>>> >They're going to have to start thinking outside the box.
>>>>
>>>> "They've" been thinking outside the box for decades. Starting
>>>> with Sequent and SGI's NUMA systems.
>>>>
>
>>>>
>>>> Today, the _applications_ share data between processes and within
>>>> processes between threads. This is common and required for adequate
>>>> performance since single-core performance has hit the physics wall
>>>> (power, leakage, thermal limits).
>>>
>>> That sharing is the problem. Instead of making code reentrant
>>> like we had to do, today's systems are going to have to figure
>>> out how to make data "reentrant". Does 100% of all data
>>> have to be shared?
>>
>> Mostly. But not at the same speeds. Go look up "NUMA" when
>> you are at the library next time. That happened in the eighties,
>> so you should be aware of it.
>
> The proper term of art is ccNUMA (cache-coherent NUMA). Cache-to-cache
> communication is still necessary in NUMA systems for correctness.
>
> The primary software-visible aspect of NUMA (non-uniform memory architecture)
> is that software (typically the kernel) needs to make intelligent
> decisions during memory allocation and thread scheduling taking into
> account the differences in latency between accessing local and
> remote memory.

That sounds similar to the old swapping decisions.
>
> The startup 3Leaf Systems built a large computer from commodity
> AMD/Intel systems using an ASIC that extended the coherency domain
> over InfiniBand (or 10GbE) to create a ccNUMA system that scaled
> to 16 nodes (AMD); the next design, for Intel, scaled to
> 1024 nodes. We ran out of funding in 2010 and sold the IP; most
> of the development team transferred to the buyer, until CFIUS put
> the kibosh on the deal.

A lot of development has not been done because of that.

>
>>
>> The basic premise is when you have hundreds of cpus they don't
>> (mostly) all need the fastest interconnect, there can be local
>> clusters of a few that have direct access to the same memory/buses
>> and where the caches are coordinated; but then there are the
>> rest of the 100+ cpus which have a slower interconnect to the
>> other clusters.
>>
>> SGI and others pioneered this.
>
> Sequent was first, IIRC.
>
>>
>> We did run a quite big (for the time) SGI R10K server for news,
>> 14 processors at the most (in 1996). We had THAT much access and
>> batching. We ranked among the top 50 among news servers worldwide
>> then.
>
> I worked on the successor OS for the SGI Origin systems (code named teak) for
> a couple of years at SGI - it was a distributed version of IRIX
> that could present a single system image over a collection of
> interconnected (but not necessarily fully cache-coherent)
> nodes.

Kewl.
>
> At Unisys, we built the OPUS systems, which were also MPP (non-cache-coherent)
> boxes with a single-system image version of Unix (SVR4.2ES/MP)
> on top of the Chorus Systemes microkernel. 64 two-processor pentium pro
> (p6) nodes each with ethernet and scsi that looked to software and
> operations personnel as a single computer system (this used the
> Intel Paragon wormhole-routing backplane for messaging between nodes).
>
> The problem with both teak and OPUS is that performance suffers for
> shared memory applications: the granule of exclusion was the
> 4096-byte page, rather than the 64-byte cache line, so they suffered
> from false sharing.

Did the scheduler migrate the apps which shared memory to one
[I'll use the word] cluster? (I'm assuming that each node was
either one cluster or there could be a group of nodes which acted
like a cluster.)

>
>
>>>
>>> I said "tight hard and software human communication". The meetings
>>> above are organized and sparse. I'm talking about being able to
>>> shout over the office wall or walking down the hall or bullshitting
>>> over a beer at the bar. We sorted out lots of things with that
>>> type of communication process so that most of our productive time
>>> could be spent on the useful ideas, designs, and development.
>
> I heard what you said, and disagree with it completely. There is
> a difference between a closed shop such as DEC, IBM and Burroughs
> and the modern decentralized hardware industry.

I understand the differences. I was merely wondering what else
changed w.r.t. the effort to develop hard/software long term.
I suspect schedules would be longer; this was resolved by some
by shipping code every month instead of having a major software
development cycle of two years.
>
>>
>> This informality is active in the Valley, and also in local places
>> like here.
>
> And to a lesser extent in the Boston metro area.

We couldn't have gotten anything done with the same efficiency
without the possibility of daily informal gatherings. e.g., For one
of my tasks, I needed lots of details resolved; 2 or 3 details
required talking to a different person. Setting up meetings for
this would have taken me a week and I needed the questions answered
yesterday. So I sat by the coffee machine and, as each person
came to get his/her coffee, I would ask them the few questions.
Each interaction took about two minutes and I had everything
resolved within 2 hours (it took that long because people's
coffee habits spanned two hours). Another guy watched me
get the job done and marveled. I told him it was a waste
of my time, the people and their secretaries' time to
do a formal meeting. I also didn't want to interrupt
each person by going to their office door and asking the
questions because it would interfere with their work thinking.

We also did a lot of pre-development designing at the bar.
A two-hour session with 3 or 4 people cut enough of the rough
edges off a new product design so that design, functional,
and project specs could be written in the next week.
No time was wasted during these sessions; having a formal
meeting to do similar work would take 2 or 3 times as long.
Formal meetings are rife with territorial imperative
and time wasting.

/BAH
Re: The ICL 2900 [message #339443 is a reply to message #339397] Thu, 16 March 2017 10:10 Go to previous messageGo to next message
scott is currently offline  scott
Messages: 4237
Registered: February 2012
Karma: 0
Senior Member
pechter@lakewoodmicro-fbsd-tor1-01.lakewoodmicro.com (William Pechter) writes:
> In article <PM00054AC50DE6BC65@aca40287.ipt.aol.com>,
> jmfbahciv <See.above@aol.com> wrote:

>> I know its usefulness. I'm wondering about what the tradeoffs were
>> to achieve that. I tried to promote plug'n play at DEC and hit
>> a diamond brick wall.
>
> OK... a computer topic that's interesting. How were you trying to implement
> plug and play at DEC... They were getting there on VAXes if you had
> things like UDA50s, where the hardware had the interrupt
> vector programmed into it by the CPU when it checked the bus.

The Burroughs systems were plug-n-play. A subset of the I/O command set
was standardized across all host controllers (Controls pre B4800, DLPs post B4800)
and each would implement the TEST/ID command which returned a unique 8-bit
identifier for each type of DLP. This allowed the MCP to scan the entire
channel space on boot (HALT/LOAD in burroughsese) and associate the tailored
I/O code (aka Driver) to each channel. Likewise for non-unit-record DLPs,
a READ-UNIT-STATUS op was supported by tape and disk drives that would report
the type of controller and the type of drive, from which capacity and other
characteristics were also derived.
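
The scan-and-bind idea is simple enough to sketch (illustration only;
the identifiers, names, and channel count below are invented, not
Burroughs'):

    /* At HALT/LOAD: probe each channel with TEST/ID and bind the
       driver whose table entry matches the returned identifier. */
    #include <stdio.h>

    #define NCHANNELS 32

    struct driver { unsigned char id; const char *name; };

    static const struct driver drivers[] = {
        { 0x11, "card reader" },    /* hypothetical IDs */
        { 0x23, "line printer" },
        { 0x47, "disk DLP" },
        { 0x51, "tape DLP" },
    };

    /* stub standing in for issuing TEST/ID on a channel; a real
       system does I/O here. Returns 0 for an empty channel. */
    static unsigned char test_id(int channel)
    {
        return channel == 3 ? 0x47 : 0;
    }

    int main(void)
    {
        for (int ch = 0; ch < NCHANNELS; ch++) {
            unsigned char id = test_id(ch);
            if (id == 0)
                continue;
            for (size_t i = 0; i < sizeof drivers / sizeof drivers[0]; i++)
                if (drivers[i].id == id)
                    printf("channel %d: bind %s driver\n",
                           ch, drivers[i].name);
        }
        return 0;
    }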

There were operator commands to tell the MCP to rescan a particular channel
if a change was made - so the MCP didn't even need to be rebooted to add
or remove a channel and associated devices; it certainly didn't require the
operating system to be rebuilt (SYSGEN) like our major competitive systems :-)

The MCP also fully supported AVREC (Automatic Volume Recognition), so if a
particular job was waiting for a tape with a specific label, the MCP would
automatically assign the unit to the job when the TEST/UNIT-READY op
completed on a unit and the MCP would read the volume label and take
the appropriate action. So unlike competitors' systems, one never needed
to pre-assign hardware resources when the job was started.
Re: The ICL 2900 [message #339454 is a reply to message #339432] Thu, 16 March 2017 13:03 Go to previous messageGo to next message
Anne &amp; Lynn Wheel is currently offline  Anne &amp; Lynn Wheel
Messages: 3156
Registered: January 2012
Karma: 0
Senior Member
jmfbahciv <See.above@aol.com> writes:
> IBM seems to have never gotten rid of their determination to not
> cooperate with other manufacturers' hardware. JMF's first DEC project
> was to get DEC computers and IBM computers to communicate. IBM
> believed that homogeneously manufactured hardware was the only possibility;
> it took DEC hard/software engineers to break that self-imposed rule.
> This was in 1970, 1971. DEC was willing to talk to any hardware,
> including others'. 1.5 decades later it became Digital and just
> as snooty as IBM. I blame this on all those mid-level managers
> who got hired from IBM.


The claim is that the major motivation for the Future System project in the
early 70s was clone controllers ... making such a tight integration
between processor systems and I/O controllers that it would
significantly raise the bar for clone controllers (however the lack of
370 products during the Future System period is credited with giving
clone processors a market foothold)
http://www.garlic.com/~lynn/submain.html#futuresys

As an undergraduate I was involved in a project to create a clone controller,
first using an interdata/3 ... which then evolved into an interdata/4 (for the
channel interface) and a cluster of interdata/3s handling
ports/lines. Interdata markets this, and later marketed it under the P/E
logo after Perkin/Elmer acquires Interdata. Four of us get written up as
responsible for (some part of) the clone controller market.
http://www.garlic.com/~lynn/subtopic.html#360pcm

There have been some claims that the tight integration between SNA/VTAM
(mainframe) and SNA/NCP (37x5 controller) is the closest survivor of
Future System objectives.

I've mentioned periodically that in the late 80s, a senior disk engineer
gets a talk scheduled at the annual world-wide, internal communication
group conference, supposedly on 3174 performance, but opens the talk
saying that the communication group was going to be responsible for the demise
of the disk division. The issue was that the communication group had
strategic corporate responsibility for everything that crossed the
datacenter walls and was fiercely fighting off distributed computing and
client/server, trying to protect its dumb (emulated) terminal paradigm
and install base. The disk division was seeing data fleeing the
datacenters to more distributed-computing-friendly platforms, with a drop
in disk sales. The disk division had come up with a number of solutions to
reverse the problems, but they were constantly being vetoed by the
communication group
http://www.garlic.com/~lynn/subnetwork.html#terminal

In the early 80s, I had a project I called high-speed data transport
(HSDT) and was working with the director of NSF on interconnecting the
NSF supercomputer centers. We were supposed to get $20M, but then
congress cuts the budget, some other things happen, and then NSF
releases an RFP. Internal politics prevent us from bidding. The director
of NSF tries to help by writing the company a letter (with support from
other agencies), but that just makes the internal politics worse (as
do references that what we already have running is at least 5yrs ahead
of all RFP responses). As regional networks connect into the centers,
it grows into the NSFNET backbone (precursor to modern internet). some
old email
http://www.garlic.com/~lynn/lhwemail.html#nsfnet
and past posts
http://www.garlic.com/~lynn/subnetwork.html#nsfnet

The communication group was even spreading a lot of misinformation
internally, claiming that SNA/VTAM could be used for the NSF RFP
.... somebody collects a lot of that misinformation email and forwards it
to us ... heavily snipped and redacted to protect the guilty.
http://www.garlic.com/~lynn/2006w.html#email870109

other communication group misinformation related to pressuring the conversion
of the internal network to SNA/VTAM
http://www.garlic.com/~lynn/2006x.html#email870302
http://www.garlic.com/~lynn/2011.html#email870306

past internal network (the internal network was larger than
arpanet/internet from just about the beginning until sometime mid-80s)
http://www.garlic.com/~lynn/subnetwork.html#internalnet

the internal network technology had also been used for the corporate
sponsored university network (also for a time larger than
arpanet/internet):
http://www.garlic.com/~lynn/subnetwork.html#bitnet

In the early 70s, my wife was co-author for AWP39, peer-to-peer
networking architecture ... at the same time that SNA was being
formulated. She was then in the gburg JES group when she was con'ed into
going to POK to be responsible for loosely-coupled architecture
(mainframe for cluster) where she did peer-coupled shared data
architecture
http://www.garlic.com/~lynn/submain.html#shareddata

She didn't remain long, in part because little was updated (except for
IMS hotstandby) until SYSPLEX and Parallel SYSPLEX, and in part because
of constant battles with the communication group trying to force her into
using SNA/VTAM for loosely-coupled operation. Nearly a decade later she
was co-author for a response to gov. request for highly secure campus
distributed computing operation where she introduces the concept of
3-layer/middle network. We are then out doing customer executive
presentations on 3-layer network (including ethernet adapters) and
taking lots of arrows in the back from the communication group (and the
SAA and token ring people). past posts
http://www.garlic.com/~lynn/subnetwork.html#3tier

part of that 3-tier customer executive presentation
http://www.garlic.com/~lynn/2002q.html#40 ibm time machine in new york times?
http://www.garlic.com/~lynn/2013m.html#7 Voyager 1 just left the solar system using less computing power than your iP

One of the jokes was SNA was not a "System", not a "Network", and not an
"Architecture". Other IBM groups claim that they tried to build products
that would interoperate with communication group SNA/VTAM and found that
even "internal only" documents weren't sufficient; they basically had to
reverse engineer the actual interface with lots of trial&error.

recent posts in this thread
http://www.garlic.com/~lynn/2017.html#58 The ICL 2900
http://www.garlic.com/~lynn/2017.html#59 The ICL 2900
http://www.garlic.com/~lynn/2017.html#61 The ICL 2900
http://www.garlic.com/~lynn/2017.html#74 The ICL 2900
http://www.garlic.com/~lynn/2017.html#75 The ICL 2900
http://www.garlic.com/~lynn/2017b.html#69 The ICL 2900
http://www.garlic.com/~lynn/2017b.html#73 The ICL 2900
http://www.garlic.com/~lynn/2017c.html#30 The ICL 2900
http://www.garlic.com/~lynn/2017c.html#44 The ICL 2900
http://www.garlic.com/~lynn/2017c.html#45 The ICL 2900
http://www.garlic.com/~lynn/2017c.html#48 The ICL 2900
http://www.garlic.com/~lynn/2017c.html#54 The ICL 2900

--
virtualization experience starting Jan1968, online at home since Mar1970
Re: The ICL 2900 [message #339455 is a reply to message #339432] Thu, 16 March 2017 13:10 Go to previous messageGo to next message
scott is currently offline  scott
Messages: 4237
Registered: February 2012
Karma: 0
Senior Member
jmfbahciv <See.above@aol.com> writes:
>
> IBM seems to have never gotten rid of their determination to not
> cooperate with other manufacturers' hardware. JMF's first DEC project
> was to get DEC computers and IBM computers to communicate. IBM
> believed that homogeneously manufactured hardware was the only possibility;
> it took DEC hard/software engineers to break that self-imposed rule.
> This was in 1970, 1971.

Burroughs systems could communicate with IBM systems in the late
60's, so "it didn't take DEC ...". Burroughs chose EBCDIC
in 1963 specifically for compatibility with IBM peripherals, and
supported tape interchange formats early on.
Re: The ICL 2900 [message #339458 is a reply to message #339443] Thu, 16 March 2017 13:54 Go to previous messageGo to next message
Charlie Gibbs is currently offline  Charlie Gibbs
Messages: 5313
Registered: January 2012
Karma: 0
Senior Member
On 2017-03-16, Scott Lurndal <scott@slp53.sl.home> wrote:

> The burroughs systems were plug-n-play. A subset of the I/O command set
> was standardized across all host controllers (Controls pre B4800, DLPs post
> B4800) and each would implement the TEST/ID command which returned a unique
> 8-bit identifier for each type of DLP. This allowed the MCP to scan the
> entire channel space on boot (HALT/LOAD in burroughsese) and associate
> the tailored I/O code (aka Driver) to each channel. Likewise for
> non-unit-record DLPs, a READ-UNIT-STATUS op was supported by tape and
> disk drives that would report the type of controller and the type of
> drive, from which capacity and other characteristics were also derived.

A friend worked in a Burroughs 1700 shop. I remember seeing the machine
detect attached hardware. I was jealous.

> There were operator commands to tell the MCP to rescan a particular channel
> if a change was made - so the MCP didn't even need to be rebooted to add
> or remove a channel and associated devices; it certainly didn't require the
> operating system to be rebuilt (SYSGEN) like our major competitive systems :-)

Sweet.

> The MCP also fully supported AVREC (Automatic Volume Recognition), so if a
> particular job was waiting for a tape with a specific label, the MCP would
> automatically assign the unit to the job when the TEST/UNIT-READY op
> completed on a unit and the MCP would read the volume label and take
> the appropriate action. So unlike competitors systems, one never needed
> to pre-assign hardware resources when the job was started.

Univac's OS/3 supported at least part of this (as well as part of the
term: AVR). Although you could specify specific device addresses in
the JCL, this was usually only needed by CEs wanting to exercise a
particular disc or tape drive. When a job needed a disk or tape,
the console message would specify the address of a free device, or
just say "anywhere". Whenever you mounted a tape or disk, the OS
would automatically read the volume label and update its volume table.
You could see what was mounted with the MI VI console command
(which stood for MIx Volume Information, but since it only looked
at the first two characters, MILDLY VICIOUS worked just as well).
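
The first-N-significant-characters trick is trivial to implement, which
is presumably why more than one console did it; a sketch (mine, with an
invented command table):

    /* Match console input on only the first "sig" characters, so
       MILDLY VICIOUS satisfies MI just as well as MI VI does. */
    #include <stdio.h>
    #include <string.h>

    struct cmd { const char *name; size_t sig; const char *desc; };

    static const struct cmd cmds[] = {
        { "MI", 2, "mix volume information" },
        { "ST", 2, "status display (invented example)" },
    };

    static const struct cmd *lookup(const char *input)
    {
        for (size_t i = 0; i < sizeof cmds / sizeof cmds[0]; i++)
            if (strncmp(input, cmds[i].name, cmds[i].sig) == 0)
                return &cmds[i];
        return NULL;
    }

    int main(void)
    {
        const struct cmd *c = lookup("MILDLY VICIOUS");
        if (c != NULL)
            printf("matched %s: %s\n", c->name, c->desc);
        return 0;
    }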

--
/~\ cgibbs@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!
Re: The ICL 2900 [message #339461 is a reply to message #339458] Thu, 16 March 2017 14:38 Go to previous messageGo to next message
scott is currently offline  scott
Messages: 4237
Registered: February 2012
Karma: 0
Senior Member
Charlie Gibbs <cgibbs@kltpzyxm.invalid> writes:
> On 2017-03-16, Scott Lurndal <scott@slp53.sl.home> wrote:

> You could see what was mounted with the MI VI console command
> (which stood for MIx Volume Information, but since it only looked
> at the first two characters, MILDLY VICIOUS worked just as well).

The MCP also only cared about the first two characters, so the
black-out (BO) command, which was used on printing consoles to
create a blacked-out region over which a password would be
entered, could instead be typed as BOOBS, to which the
response was a series of overwrites using XXXXXXXX, WWWWWWWW,
and MMMMMMM.

When issued on a CRT terminal, only the MMMMMMMM would show,
which was an appropriate response (if accidental) to the
BOOBS command.
Re: The ICL 2900 [message #339474 is a reply to message #339461] Thu, 16 March 2017 17:45 Go to previous messageGo to next message
Charlie Gibbs is currently offline  Charlie Gibbs
Messages: 5313
Registered: January 2012
Karma: 0
Senior Member
On 2017-03-16, Scott Lurndal <scott@slp53.sl.home> wrote:

> Charlie Gibbs <cgibbs@kltpzyxm.invalid> writes:
>
>> On 2017-03-16, Scott Lurndal <scott@slp53.sl.home> wrote:
>
>> You could see what was mounted with the MI VI console command
>> (which stood for MIx Volume Information, but since it only looked
>> at the first two characters, MILDLY VICIOUS worked just as well).
>
> The MCP also only cared about the first two characters, so the
> black-out (BO) command, that was used on printing consoles to
> create a blacked-out region over which a password would be
> entered, could instead be typed as BOOBS, to which the
> response was a series of overwrites using XXXXXXXX, WWWWWWWW,
> and MMMMMMM.
>
> When issued on a CRT terminal, only the MMMMMMMM would show,
> which was an appropriate response (if accidental) to the
> BOOBS command.

By rolling up the IBM 2741's carriage during a similar response
to an MTS login, we found that the overstruck characters were
W, M, B, and I.

Speaking of significant characters in commands, the WATFOR
compiler was invoked under the student monitor with a
$COMPILE card. A lot of people thought it was $COMPUTE,
which worked just as well since only the first four characters
were significant. A buddy preferred $COMPOST.

--
/~\ cgibbs@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!
Re: The ICL 2900 [message #339518 is a reply to message #339431] Fri, 17 March 2017 08:03 Go to previous messageGo to next message
Morten Reistad is currently offline  Morten Reistad
Messages: 2108
Registered: December 2011
Karma: 0
Senior Member
In article <PM00054AD82F93C0EC@aca42926.ipt.aol.com>,
jmfbahciv <See.above@aol.com> wrote:
> Morten Reistad wrote:
>> In article <PM00054AC50DE6BC65@aca40287.ipt.aol.com>,
>> jmfbahciv <See.above@aol.com> wrote:
>>> Scott Lurndal wrote:
>>>> jmfbahciv <See.above@aol.com> writes:
>
> <snip>
>
>>>> > However the issue does need
>>>> >tight hardware and software development human communication.
>>>>
>>>> If you think there is no communication between hardware and software
>>>> developers, you've not been paying attention. Just two weeks
>>>> ago I participated in a technical advisory board meeting sponsored
>>>> by one of the major processor vendors - this is the group that
>>>> helps guide the future development of the processor architecture and
>>>> the group consisted of senior engineers from operating
>>>> system vendors, many of the major hardware vendors, several of the
>>>> major data center operators and the two major search engines.
>>>>
>>>> See also http://opencompute.org/ for another vehicle of
>>>> "human communication" between hardware and software and
>>>> end-users (Facebook is a major sponsor).
>>>
>>> I said "tight hard and software human communication". The meetings
>>> above are organized and sparse. I'm talking about being able to
>>> shout over the office wall or walking down the hall or bullshitting
>>> over a beer at the bar. We sorted out lots of things with that
>>> type of communication process so that most of our productive time
>>> could be spent on the useful ideas, designs, and development.
>>
>> This informality is active in the Valley, and also in local places
>> like here.
>
> On a daily basis? Having to call a meeting is not the kind
> of interaction I'm talking about. This interaction occurs
> before any meeting is called and the meeting is to fine tune
> the results of the interactions.

There are regularly scheduled meals and events, around 6 per month
in various fora. Around half are just meetings in some budget eatery
or bar (not the cheapest, but not far off it).

This is what it is like here, quite at the outskirts of the
civilised world; so I gather there are lots of similar, informal
events all around the world.

We do get the commodore announcements here. They are usually on the
west coast, but they seem quite active.

-- mrr
Re: The ICL 2900 [message #339521 is a reply to message #339454] Fri, 17 March 2017 09:15 Go to previous messageGo to next message
jmfbahciv is currently offline  jmfbahciv
Messages: 6173
Registered: March 2012
Karma: 0
Senior Member
Anne & Lynn Wheeler wrote:
> jmfbahciv <See.above@aol.com> writes:
>> IBM seems to have never gotten rid of their determination to not
>> cooperate with other manufacturers' hardware. JMF's first DEC project
>> was to get DEC computers and IBM computers to communicate. IBM
>> believed that homogeneously manufactured hardware was the only possibility;
>> it took DEC hard/software engineers to break that self-imposed rule.
>> This was in 1970, 1971. DEC was willing to talk to any hardware,
>> including others'. 1.5 decades later it became Digital and just
>> as snooty as IBM. I blame this on all those mid-level managers
>> who got hired from IBM.
>
>
> The claim is that the major motivation for Future System project in the
> early 70s was clone controllers ... making such a tight integration
> between processor systems and I/O controllers that it would
> significantly raise the bar for clone controllers (however the lack of
> 370 products during the Future System period is credited with giving
> clone processors a market foothold)
> http://www.garlic.com/~lynn/submain.html#futuresys
>
> As an undergraduate I was involved in project to create clone controller
> first using interdata/3 ... then evolved into interdata/4 (for the
> channel interface) and cluster of interdata/3s handling
> ports/lines. Interdata markets this, and later marketed under the P/E
> logo after Perkin/Elmer acquires Interdata. Four of is get written up as
> responsible (for some part of) clone controller market.
> http://www.garlic.com/~lynn/subtopic.html#360pcm
>
> There have been some claims that the tight integration between SNA/VTAM
> (mainframe) and SNA/NCP (37x5 controller ) are the closest survivor of
> Future System objectives.
>
> I've mentioned periodically that in the late 80s, a senior disk engineer
> gets a talk scheduled at the annual world-wide, internal communication
> group conference, supposedly on 3174 performance, but opens the talk
> saying that the communication group was going to be responsible for the demise
> of the disk division. The issue was that the communication group had
> strategic corporate responsibility for everything that crossed the
> datacenter walls and was fiercely fighting off distributed computing and
> client/server, trying to protect its dumb (emulated) terminal paradigm
> and install base. The disk division was seeing data fleeing the
> datacenters to more distributed-computing-friendly platforms, with a drop
> in disk sales. The disk division had come up with a number of solutions to
> reverse the problems, but they were constantly being vetoed by the
> communication group
> http://www.garlic.com/~lynn/subnetwork.html#terminal

We managed to avoid most of that kind of anti-production politics by
having each product line implement their own products. DECnet started
to trend towards your IBM comm group; their origin was VAX-based.
However, they never got to having the equivalent power IBM's comm
group had because of DEC's breakup and demise. I suppose having
multiple product line installations at a lot of customers sites also
kept that kind of losing attitude to a minimum.
>
> In the early 80s, I had a project I called high-speed data transport
> (HSDT) and was working with the director of NSF on interconnecting the
> NSF supercomputer centers. We were supposed to get $20M, but then
> congress cuts the budget, some other things happen, and then NSF
> releases an RFP. Internal politics prevent us from bidding. The director
> of NSF tries to help by writing the company a letter (with support from
> other agencies), but that just makes the internal politics worse (as
> do references that what we already have running is at least 5yrs ahead
> of all RFP responses). As regional networks connect into the centers,
> it grows into the NSFNET backbone (precursor to modern internet). some
> old email
> http://www.garlic.com/~lynn/lhwemail.html#nsfnet
> and past posts
> http://www.garlic.com/~lynn/subnetwork.html#nsfnet
>
> The communication group was even spreading a lot of misinformation
> internally, claiming that SNA/VTAM could be used for the NSF RFP
> ... somebody collects a lot of that misinformation email and forwards it
> to us ... heavily snipped and redacted to protect the guilty.
> http://www.garlic.com/~lynn/2006w.html#email870109
>
> other communication group misinformation related to pressuring the conversion
> of the internal network to SNA/VTAM
> http://www.garlic.com/~lynn/2006x.html#email870302
> http://www.garlic.com/~lynn/2011.html#email870306
>
> past internal network (the internal network was larger than
> arpanet/internet from just about the beginning until sometime mid-80s)
> http://www.garlic.com/~lynn/subnetwork.html#internalnet
>
> the internal network technology had also been used for the corporate
> sponsored university network (also for a time larger than
> arpanet/internet):
> http://www.garlic.com/~lynn/subnetwork.html#bitnet

How in the world did IBM stay in business? First customer ships
had to be delayed by years since no work could get done. There
had to have been subversive groups which managed to get the work
done despite the management.
>
> In the early 70s, my wife was co-author for AWP39, peer-to-peer
> networking architecture ... at the same time that SNA was being
> formulated. She was then in the gburg JES group when she was con'ed into
> going to POK to be responsible for loosely-coupled architecture
> (mainframe for cluster) where she did peer-coupled shared data
> architecture
> http://www.garlic.com/~lynn/submain.html#shareddata
>
> She didn't remain long, in part because little was updated (except for
> IMS hotstandby) until SYSPLEX and Parallel SYSPLEX, and in part because
> of constant battles with the communication group trying to force her into
> using SNA/VTAM for loosely-coupled operation. Nearly a decade later she
> was co-author for a response to gov. request for highly secure campus
> distributed computing operation where she introduces the concept of
> 3-layer/middle network. We are then out doing customer executive
> presentations on 3-layer network (including ethernet adapters) and
> taking lots of arrows in the back from the communication group (and the
> SAA and token ring people). past posts
> http://www.garlic.com/~lynn/subnetwork.html#3tier

I admire her. She had quite a few wars; I'd never have survived.

>
> part of that 3-tier customer executive presentation
> http://www.garlic.com/~lynn/2002q.html#40 ibm time machine in new york times?
> http://www.garlic.com/~lynn/2013m.html#7 Voyager 1 just left the solar system using less computing power than your iP
>
> One of the jokes was SNA was not a "System", not a "Network", and not an
> "Architecture". Other IBM groups claim that they tried to build products
> that would interoperate with communication group SNA/VTAM and found that
> even "internal only" documents weren't sufficient; they basically had to
> reverse engineer the actual interface with lots of trial&error.
>
> recent posts in this thread
> http://www.garlic.com/~lynn/2017.html#58 The ICL 2900
> http://www.garlic.com/~lynn/2017.html#59 The ICL 2900
> http://www.garlic.com/~lynn/2017.html#61 The ICL 2900
> http://www.garlic.com/~lynn/2017.html#74 The ICL 2900
> http://www.garlic.com/~lynn/2017.html#75 The ICL 2900
> http://www.garlic.com/~lynn/2017b.html#69 The ICL 2900
> http://www.garlic.com/~lynn/2017b.html#73 The ICL 2900
> http://www.garlic.com/~lynn/2017c.html#30 The ICL 2900
> http://www.garlic.com/~lynn/2017c.html#44 The ICL 2900
> http://www.garlic.com/~lynn/2017c.html#45 The ICL 2900
> http://www.garlic.com/~lynn/2017c.html#48 The ICL 2900
> http://www.garlic.com/~lynn/2017c.html#54 The ICL 2900
>

Did [who was it?] Gershner(sp?) manage to stop that kind of insanity?

/BAH
Re: The ICL 2900 [message #339523 is a reply to message #339518] Fri, 17 March 2017 10:12 Go to previous messageGo to next message
scott is currently offline  scott
Messages: 4237
Registered: February 2012
Karma: 0
Senior Member
Morten Reistad <first@last.name.invalid> writes:

> This is what it is like here, quite at the outskirts of the
> civilised world; so I gather there are lots of similar, informal
> events all around the world.
>
> We do get the commodore announcements here. They are usually on the
> west coast, but they seem quite active.

Some consider Fresno to be "the outskirts of the civilized world",
but it's relatively (circa 100 miles) close to the west coast :-).

I gave my Amiga 1000 to that club, so I did attend one of their
meetings. Enthusiastic commodore supporters, with a larger
percentage of younger members than I had expected. They were
quite interested in folklore as well :-)
Re: The ICL 2900 [message #339541 is a reply to message #339521] Fri, 17 March 2017 13:57 Go to previous messageGo to next message
Anne &amp; Lynn Wheel is currently offline  Anne &amp; Lynn Wheel
Messages: 3156
Registered: January 2012
Karma: 0
Senior Member
jmfbahciv <See.above@aol.com> writes:
> Did [who was it?] Gershner(sp?) manage to stop that kind of insanity?

re:
http://www.garlic.com/~lynn/2017c.html#55 The ICL 2900
demise of disk division
http://www.garlic.com/~lynn/subtopic.html#terminal

President of AMEX is in competition to be next CEO and wins. The loser
leaves, taking their protegee, and goes to Baltimore to take over what is
called a loan sharking business. They make some number of other
acquisitions, eventually acquiring CITI in violation of Glass-Steagall.
Greenspan gives them an exemption while they lobby congress for
Glass-Steagall repeal, including enlisting the SECTREAS (and former head
of Goldman-Sachs), who resigns and joins CITI as soon as the repeal is
added to GLBA (enabling "too big to fail"). The protegee then leaves
CITI and becomes CEO of CHASE.

pecora hearings &/or glass-steagall posts
http://www.garlic.com/~lynn/submisc.html#Pecora&/orGlass-Steagall
"too big to fail" posts
http://www.garlic.com/~lynn/submisc.html#too-big-to-fail

AMEX is in competition with KKR for private-equity take-over of RJR. KKR
wins, but runs into some trouble with RJR and hires away president of
AMEX to help turn it around.
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco

A few years after the talk by the senior disk engineer (that the
communication group will be responsible for the demise of the disk
division; the communication group stranglehold on datacenters with
corporate strategic ownership of everything that crosses datacenter
walls), IBM had gone into the red and was being reorganized into the 13
"baby blues" in preparation for breaking up the company. The board then
hires away the former president of AMEX to reverse the breakup and
resurrect the company ... using some of the same techniques used at RJR.
http://www.ibmemployee.com/RetirementHeist.shtml

The former AMEX president then leaves IBM and becomes the head of
another large private-equity company ... one of the take-overs is the
beltway bandit that will employ Snowden. Private-equity take-over of
beltway bandits contributed to the enormous increase in outsourcing last
decade ... and companies in the private-equity mills are under intense
pressure to cut corners to provide profit to their owners. Intelligence
has 70% of the budget and over half the people outsourced ... past
article on Snowden's employer and its private-equity owner
http://www.investingdaily.com/17693/spies-like-us

it also contributes to the rapidly spreading "Success of Failure"
culture
http://www.govexec.com/excellence/management-matters/2007/04/the-success-of-failure/24107/

past posts
http://www.garlic.com/~lynn/submisc.html#gerstner
and
http://www.garlic.com/~lynn/submisc.html#private.equity
and
http://www.garlic.com/~lynn/submisc.html#success.of.failure

There was IBM employee legal action over what was being done to their
retirement ... including changing the pension obligation to be listed as
an asset rather than a liability (a corporate asset is up for grabs if
the company ever declares bankruptcy) ... the change boosts the value of
the stock, boosting price/share and increasing executive bonuses.

There are also claims that the "stock buyback" culture was introduced
then and has since dominated a lot of IBM's financials; Stockman, "The
Great Deformation: The Corruption of Capitalism in America",
pg464/loc9995-10000:

IBM was not the born-again growth machine trumpeted by the mob of Wall
Street momo traders. It was actually a stock buyback contraption on
steroids. During the five years ending in fiscal 2011, the company spent
a staggering $67 billion repurchasing its own shares, a figure that was
equal to 100 percent of its net income.

pg465/10014-17:

Total shareholder distributions, including dividends, amounted to $82
billion, or 122 percent, of net income over this five-year period.
Likewise, during the last five years IBM spent less on capital
investment than its depreciation and amortization charges, and also
shrank its constant dollar spending for research and development by
nearly 2 percent annually.

.... snip ...

posts
http://www.garlic.com/~lynn/submisc.html#stock.buyback

Lots of properties were also being sold off to raise cash. I've
mentioned before that in the 80s, Nestle sold its new, almost-finished
corporate hdqtrs bldg to IBM for ten cents on the dollar. After the new
CEO comes in, the bldg is (re)sold to Mastercard for its new hdqtrs
bldg. Shortly after Mastercard moves in, we are at an executive direction
meeting with them ... and Mastercard says that they paid more to have
all the internal door handles changed than they paid IBM for the bldg.

About the time IBM first goes into the red, AMEX spins off a lot of its
(mostly IBM mainframe) dataprocessing and outsourcing as FDC in the
largest IPO up until that time. Around 2000 they are handling a little
over half of all US credit card and debit card processing ... as well as
having introduced the original magstripe merchant & gift card
stored-value products. I've mentioned before that about that time they
have something over 40 max-configured IBM mainframes (at ~$30M each,
constantly being updated on an 18-month cycle) configured for doing
overnight batch settlement, and I look at improving the performance of
the 450+K lines-of-code cobol application doing settlement. 15yrs after
FDC is spun off in the largest IPO (up until that time), KKR (referenced
in the RJR private-equity take-over) does a private-equity take-over of
FDC in the largest reverse-IPO up until that time.

past posts mentioning doing performance improvement on 450+K LOC
cobol application
http://www.garlic.com/~lynn/2006s.html#24 Curiosity: CPU % for COBOL program
http://www.garlic.com/~lynn/2006u.html#50 Where can you get a Minor in Mainframe?
http://www.garlic.com/~lynn/2007l.html#20 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007u.html#21 Distributed Computing
http://www.garlic.com/~lynn/2008c.html#24 Job ad for z/OS systems programmer trainee
http://www.garlic.com/~lynn/2008d.html#73 Price of CPU seconds
http://www.garlic.com/~lynn/2008l.html#81 Intel: an expensive many-core future is ahead of us
http://www.garlic.com/~lynn/2009d.html#5 Why do IBMers think disks are 'Direct Access'?
http://www.garlic.com/~lynn/2009d.html#14 Legacy clearing threat to OTC derivatives warns State Street
http://www.garlic.com/~lynn/2009e.html#76 Architectural Diversity
http://www.garlic.com/~lynn/2009f.html#55 Cobol hits 50 and keeps counting
http://www.garlic.com/~lynn/2009g.html#20 IBM forecasts 'new world order' for financial services
http://www.garlic.com/~lynn/2011c.html#35 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011f.html#32 At least two decades back, some gurus predicted that mainframes would disappear
http://www.garlic.com/~lynn/2012i.html#25 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
http://www.garlic.com/~lynn/2012n.html#18 System/360--50 years--the future?
http://www.garlic.com/~lynn/2012n.html#24 System/360--50 years--the future?
http://www.garlic.com/~lynn/2012n.html#56 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
http://www.garlic.com/~lynn/2013b.html#45 Article for the boss: COBOL will outlive us all
http://www.garlic.com/~lynn/2014b.html#83 CPU time
http://www.garlic.com/~lynn/2014f.html#69 Is end of mainframe near ?
http://www.garlic.com/~lynn/2014f.html#78 Over in the Mainframe Experts Network LinkedIn group
http://www.garlic.com/~lynn/2015c.html#65 A New Performance Model ?
http://www.garlic.com/~lynn/2015h.html#112 Is there a source for detailed, instruction-level performance info?
http://www.garlic.com/~lynn/2017b.html#15 Trump to sign cyber security order

--
virtualization experience starting Jan1968, online at home since Mar1970
Re: The ICL 2900 [message #339546 is a reply to message #339432] Fri, 17 March 2017 14:13 Go to previous messageGo to next message
Alfred Falk is currently offline  Alfred Falk
Messages: 195
Registered: June 2012
Karma: 0
Senior Member
jmfbahciv <See.above@aol.com> wrote in
news:PM00054AD89C50621B@aca42926.ipt.aol.com:

> Anne & Lynn Wheeler wrote:
>> scott@slp53.sl.home (Scott Lurndal) writes:
>>> Sequent was first, IIRC.
>>
>> re:
>> http://www.garlic.com/~lynn/2017b.html#73 The ICL 2900
>> http://www.garlic.com/~lynn/2017c.html#48 The ICL 2900
>> http://www.garlic.com/~lynn/2017c.html#49 The ICL 2900
>> http://www.garlic.com/~lynn/2017c.html#52 The ICL 2900
>>
>> Sequent implements "snoopy" cache for Balance
>> http://www.icsa.inf.ed.ac.uk/cgi-bin/hase/coherence-m.pl?wtu-model-t.html,wtu-model-f.html,menu1.html
>> https://en.wikipedia.org/wiki/Sequent_Computer_Systems#Balance
>>
>> Then did Symmetry ... started with i386, which was what was installed
>> at Mosaic/Netscape (trivia: when NCSA complained about using "Mosaic",
>> what company donated "Netscape") that addressed FINWAIT problem
>> https://en.wikipedia.org/wiki/Sequent_Computer_Systems#Symmetry
>>
>> In 1994 Sequent introduced the Symmetry 5000 series models SE20, SE60
>> and SE90, which used 66 MHz Pentium CPUs in systems from 2 to 30
>> processors. The next year they expanded that with the SE30/70/100
>> lineup using 100 MHz Pentiums, and then in 1996 with the SE40/80/120
>> with 166 MHz Pentiums. A variant of the Symmetry 5000, the WinServer
>> 5000 series, ran Windows NT instead of DYNIX/ptx.[10]
>>
>> .... snip ...
>>
>> Sequent claimed that they did the work on NT, restructuring kernel for
>> SMP scaleup (for servers). However, upthread, I reference that still
>> doesn't get consumer/desktop application threading for increasingly
>> multi-core processors
>> http://www.garlic.com/~lynn/2017c.html#48 The ICL 2900
>>
>> Then for NUMA-Q, Sequent used SCI (but Data General, SGI, Convex
>> with Exemplar, and others did also)
>> https://en.wikipedia.org/wiki/Sequent_Computer_Systems#NUMA
>>
>> SCI
>> https://en.wikipedia.org/wiki/Scalable_Coherent_Interconnect
>> NUMA
>> https://en.wikipedia.org/wiki/Non-uniform_memory_access
>>
>> I've mentioned before 370 2-way SMP slowed down base processor cycle
>> to allow cross-cache invalidation signals ... that was just the start,
>> any processing of actual invalidation overhead would be in addition to
>> the base processor cycle slowdown. That is with just one other
>> processor sending invalidation ... going to 4-way SMP then would mean
>> three other processors broadcasting cross-cache invalidation signals.
>> past SMP posts
>> http://www.garlic.com/~lynn/subtopic.html#smp
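
A back-of-the-envelope model of the cost being described: each
additional processor both slows the base cycle (to leave room for
cross-cache invalidation signalling) and adds invalidation-processing
overhead on top, so n-way throughput falls well short of n times
one-way. The coefficients below are made up purely for illustration,
not measured 370 figures:

#include <stdio.h>

/* Toy model of the SMP cross-cache-invalidation cost described above:
 * each CPU's base cycle is slowed to leave room for invalidation
 * signals from every other CPU, and actually processing invalidations
 * costs extra on top of that. The 10%-per-peer cycle slowdown and
 * 2%-per-peer invalidation cost are made-up illustrative figures. */

static double effective_throughput(int ncpus)
{
    const double cycle_slowdown_per_peer = 0.10;   /* assumption */
    const double invalidate_cost_per_peer = 0.02;  /* assumption */
    double per_cpu = 1.0 - (ncpus - 1) *
        (cycle_slowdown_per_peer + invalidate_cost_per_peer);
    if (per_cpu < 0.0)
        per_cpu = 0.0;
    return ncpus * per_cpu;
}

int main(void)
{
    for (int n = 1; n <= 4; n++)
        printf("%d-way: ~%.2f times one-way throughput\n",
               n, effective_throughput(n));
    return 0;
}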
>>
>> I've periodically claimed that John Cocke 801/risc
>> https://en.wikipedia.org/wiki/IBM_801 and
>> http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/risc/
>>
>> was done to be the opposite of the enormously complex (failed)
>> Future System effort
>> http://www.garlic.com/~lynn/submain.html#futuresys
>>
>> but another part of 801/risc was no cache consistency ... not even
>> between the i-cache and d-cache (in the same processor) ... along
>> with store-into cache ... the loader needed a special instruction to
>> invalidate an address range in the i-cache and force corresponding
>> changes in the d-cache to memory (i.e. the loader may have altered
>> the loaded program's instruction sequence as part of the load; those
>> changes would be in the d-cache, and would have to be forced to
>> memory and any stale information in the i-cache removed ... so the
>> latest copy could be loaded to the i-cache) ... aka not fall into
>> the strong memory consistency overhead of 370 SMP.
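
A minimal sketch of that loader flush sequence, using the
cache-management instructions of the PowerPC (an 801 descendant)
rather than the original 801 instruction; the helper name, the fixed
line size, and the GCC inline-asm framing are assumptions for
illustration:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: make code just stored through the d-cache
 * visible to the i-cache on a core where the two caches are not kept
 * coherent by hardware. Sequence per the PowerPC architecture:
 * dcbst pushes each dirty d-cache block to memory, sync waits for the
 * stores, icbi invalidates the stale i-cache blocks, isync discards
 * any already-fetched instructions. A 32-byte line is an assumption;
 * real code would query the actual cache line size. */
#define CACHE_LINE 32

static void flush_icache_range(void *start, size_t len)
{
    uintptr_t base = (uintptr_t)start & ~(uintptr_t)(CACHE_LINE - 1);
    uintptr_t end = (uintptr_t)start + len;
    uintptr_t p;

    for (p = base; p < end; p += CACHE_LINE)
        __asm__ volatile("dcbst 0,%0" : : "r"(p) : "memory");
    __asm__ volatile("sync" : : : "memory");
    for (p = base; p < end; p += CACHE_LINE)
        __asm__ volatile("icbi 0,%0" : : "r"(p) : "memory");
    __asm__ volatile("isync" : : : "memory");
}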
>
> Imagine the surprise of JMF and TW when they discovered that the KL
> did not support write-thru cache. It almost caused the TOPS-10 SMP
> project to be cancelled.
>
>
>>
>> part of somerset (AIM, referenced upthread) for power/pc was to
>> support cache consistency protocol ... i somewhat characterize it as
>> adding motorola 88k cache consistency to 801/risc.
>>
>> past posts mention 801/risc, romp, rios, fort knox, pc/rt, power,
>> somerset, AIM, power/pc http://www.garlic.com/~lynn/subtopic.html#801
>>
>> IBM purchase of sequent
>> https://en.wikipedia.org/wiki/Sequent_Computer_Systems#IBM_purchase_and_disappearance
>>
>> An alternative view of IBM's actions, born out of the belief[13] that
>> corporations maintain consistent strategies over the short and medium
>> term despite executive changes, is that IBM acquired Sequent not to
>> nurture it but simply to keep it out of Sun's clutches. Through its
>> acquisition of what became the Enterprise 10000 server line from Cray,
>> Sun had done so much financial damage to IBM's server market share,
>> that IBM was very reluctant to see this disaster repeated.[citation
>> needed] Even if it generated zero revenue for IBM, the net present
>> value of Sequent from IBM's viewpoint was higher inside IBM than
>> inside Sun.[13]
>
> IBM seems to have never gotten rid of their determination to not
> cooperate with other manufacturers' hardware. JMF's first DEC project
> was to get DEC computers and IBM computers to communicate. IBM
> believed that homogeneous manufactured hardware was the only
> possibility; it took DEC hard/software engineers to break that
> self-imposed rule. This was in 1970, 1971. DEC was willing to talk to

DEC built a link between a PDP-9 and a 360/65 (initially a 360/50) with a
1000' cable between them in 1967-8. The nuclear physics experiment this was
developed for never quite worked out. The PDP-9 was to collect data from
instrumentation on a cyclotron and the 360 was to process it in real time,
returning reduced data for display by the 9. By the time the bugs (software
and hardware) were worked out, they decided they didn't really need the
real-time link. For a few years it was used as a kind of RJE by people
working on the 9, but not much else. IBM was never co-operative.

> any hardware, including others'. 1.5 decades later it became Digital
> and just as snooty as IBM.

Yup.

> I blame this on all those mid-level
> managers who got hired from IBM.
>
>
> /BAH
>
Re: The ICL 2900 [message #339550 is a reply to message #339546] Fri, 17 March 2017 14:36 Go to previous messageGo to next message
Charlie Gibbs is currently offline  Charlie Gibbs
Messages: 5313
Registered: January 2012
Karma: 0
Senior Member
On 2017-03-17, Alfred Falk <falk@arc.ab.ca> wrote:

> jmfbahciv <See.above@aol.com> wrote in
> news:PM00054AD89C50621B@aca42926.ipt.aol.com:
>
>> IBM seems to have never gotten rid of their determination to not
>> cooperate with other manufacturers' hardware. JMF's first DEC project
>> was to get DEC computers and IBM computers to communicate. IBM
>> believed that homogeneous manufactured hardware was the only
>> possibility; it took DEC hard/software engineers to break that
>> self-imposed rule. This was in 1970, 1971. DEC was willing to talk to
>
> DEC built a link between a PDP-9 and a 360/65 (initially a 360/50) with
> a 1000' cable between them in 1967-8. The nuclear physics experiment this
> was developed for never quite worked out. The PDP-9 was to collect data
> from instrumentation on a cyclotron and the 360 was to process it in real
> time, returning reduced data for display by the 9. By the time the bugs
> (software and hardware) were worked out, they decided they didn't really
> need the real-time link. For a few years it was used as a kind of RJE by
> people working on the 9, but not much else. IBM was never co-operative.

A friend assembled a number of IMSAIs to act as data concentrators for
his employer's Burroughs 17xx. When they switched to an IBM System/3
(a flagship machine, being the first model 15D in town), my friend was
having trouble getting his IMSAIs to talk to it. After checking the
lines, he found that the signal levels weren't standard. His conversation
with IBM went something like this:

Friend: Is this an RS-232 port?
IBM: Yes.
Friend: The signal levels aren't within the spec. RS-232 says that...
IBM: Our specification says... <conflicting specification>
Friend: Do you specify your port as being RS-232 compliant?
IBM: Yes.
Friend: Well, RS-232 specifies that...

He eventually got IBM to back down and modify their port so things worked.
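
For reference on the levels at issue: EIA RS-232 has a receiver treat
roughly +3 V to +15 V as space (logic 0) and -3 V to -15 V as mark
(logic 1), with the band in between undefined. A small sketch of a
level check against those thresholds (the function is illustrative,
not from the post):

/* Classify an RS-232 receiver input level per the EIA spec:
 * +3..+15 V is space (logic 0), -3..-15 V is mark (logic 1), and
 * anything in the -3..+3 V dead band is undefined -- the kind of
 * out-of-spec level the System/3 port was apparently producing. */
enum rs232_state { RS232_SPACE, RS232_MARK, RS232_INVALID };

static enum rs232_state rs232_classify(double volts)
{
    if (volts >= 3.0 && volts <= 15.0)
        return RS232_SPACE;
    if (volts <= -3.0 && volts >= -15.0)
        return RS232_MARK;
    return RS232_INVALID;   /* outside the defined ranges */
}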

--
/~\ cgibbs@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!
Re: The ICL 2900 [message #339556 is a reply to message #339546] Fri, 17 March 2017 15:22 Go to previous messageGo to next message
Anne &amp; Lynn Wheel is currently offline  Anne &amp; Lynn Wheel
Messages: 3156
Registered: January 2012
Karma: 0
Senior Member
Alfred Falk <falk@arc.ab.ca> writes:
> DEC built a link between a PDP-9 and a 360/65 (initially a 360/50) with
> a 1000' cable between them in 1967-8. The nuclear physics experiment this
> was developed for never quite worked out. The PDP-9 was to collect data
> from instrumentation on a cyclotron and the 360 was to process it in real
> time, returning reduced data for display by the 9. By the time the bugs
> (software and hardware) were worked out, they decided they didn't really
> need the real-time link. For a few years it was used as a kind of RJE by
> people working on the 9, but not much else. IBM was never co-operative.

re:
http://www.garlic.com/~lynn/2017c.html#55 The ICL 2900


I mentioned at university using an Interdata (first an interdata/3,
then upgraded to an interdata/4 with a cluster of interdata/3s) to do
a clone controller for the 360/67 ... which Interdata (and later
Perkin/Elmer) sold to lots of customers. Later I ran into a former PE
salesman who said he sold lots of the boxes to the government,
especially NASA (including some real-time stuff)
http://www.garlic.com/~lynn/subtopic.html#360pcm

upthread
http://www.garlic.com/~lynn/2017c.html#3 The ICL 2900

is a reference to Univ. of Michigan doing something similar using a
PDP-8 attached to their 360/67 (later "upgraded" to a pdp11)
https://www.eecis.udel.edu/~mills/gallery/gallery7.html
https://www.eecis.udel.edu/~mills/gallery/gallery8.html

Part of the issue wasn't building something to a published interface,
but building a board that interfaced to the internal channel interface
and some other conventions.

one of the "bugs" was that the 360/67 had a high-speed timer that had
to update storage location 80 with a "tic" every 13+ microseconds. The
(simplex) 360/67 (and all 360/65s) had a single memory interface bus
.... turns out that if the timer tics again before the previous memory
tic update has been done, the machine "red lights" (hardware failure)
and stops. Turns out that channels have to periodically give up the
memory bus in order for the location 80 timer update to be done ...
which in turn requires a handshake with controllers to signal the
channel to give up the memory bus.
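
A toy model of that failure mode (not the actual hardware or
microcode; the period and the bus-hold figures are illustrative
assumptions): the pending location-80 update can only complete once
the channel releases the single memory bus, and if the next tic
arrives first, the machine red-lights:

#include <stdio.h>

/* Toy model of the 360/67 location-80 timer hazard described above.
 * The timer tics every ~13 microseconds and each tic queues one
 * update to storage location 80 over the single memory bus. If a
 * channel holds the bus across an entire timer period, the next tic
 * arrives while the previous update is still pending -> "red light".
 * All numbers are illustrative assumptions. */

#define TIMER_PERIOD_US 13.0

int main(void)
{
    /* made-up channel bus-hold intervals, in microseconds */
    double holds[] = { 4.0, 9.5, 14.2 };

    for (int i = 0; i < 3; i++) {
        /* the pending location-80 update can't complete until the
         * channel releases the bus */
        if (holds[i] >= TIMER_PERIOD_US)
            printf("hold %.1f us: RED LIGHT (tic overran pending update)\n",
                   holds[i]);
        else
            printf("hold %.1f us: update completes in time\n", holds[i]);
    }
    return 0;
}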

another "bug" was that initial tests had terminal data from the
interdata appearing in memory as all garbage. We had overlooked that
the official ibm terminal controller (that the interdata was
emulating) had line/port scanners that put the leading incoming bits
in the low-order byte position (the process was reversed for outgoing
bits) ... as a result, "official" terminal bytes appeared in memory
bit-reversed. The terminal translate tables (to/from ebcdic) therefore
had to account for the bit-reversed bytes.
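
A sketch of the kind of fix-up this implies: fold the bit reversal
into the inbound translate table so each arriving byte maps straight
to EBCDIC. The function names and the table are stand-ins; the real
translate tables depend on the terminal's line code:

#include <stdint.h>

/* Reverse the bit order within one byte: the line scanner on the
 * official IBM terminal controller stored the first (leading) line
 * bit in the low-order bit position, so bytes land in memory
 * bit-reversed relative to the usual ordering. */
static uint8_t bit_reverse(uint8_t b)
{
    b = (uint8_t)(((b & 0xF0) >> 4) | ((b & 0x0F) << 4));
    b = (uint8_t)(((b & 0xCC) >> 2) | ((b & 0x33) << 2));
    b = (uint8_t)(((b & 0xAA) >> 1) | ((b & 0x55) << 1));
    return b;
}

/* Build an inbound translate table that maps the bit-reversed bytes
 * straight to EBCDIC, folding the reversal into the table lookup
 * instead of reversing every byte at runtime. to_ebcdic[] is a
 * stand-in for the terminal line code's normal translate table. */
static void build_inbound_table(const uint8_t to_ebcdic[256],
                                uint8_t table[256])
{
    for (int c = 0; c < 256; c++)
        table[bit_reverse((uint8_t)c)] = to_ebcdic[c];
}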

as I previously mentioned, a major motivation for the future system
effort was to make an extremely complex & integrated interface between
processor, channel and controllers as a countermeasure to clone
controllers
http://www.garlic.com/~lynn/submain.html#futuresys

the folklore then is that the extremely complex interface between
SNA/VTAM (mainframe) and 37x5/NCP (controller) is one of the few
pieces of that really complex integration that survived the FS failure

in the early/mid 80s, I was sucked into an effort to turn a clone 37x5
implementation, done by one of the baby bells on the (IBM) Series/1,
into a "TYPE-1" IBM product. It actually did emulation of both SNA/VTAM
and 37x5/NCP, initiating sessions with mainframe SNA/VTAM as
"cross-domain" .... i.e. the actual resource was "owned" by some other
SNA/VTAM. Having "ownership" of the resource out in the Series/1
contributed to being able to do a lot of things not possible with real
SNA/VTAM.

The communication group was well known for lots of corporate dirty
tricks ... and so we did a lot of stuff to insulate the effort from
anything the communication group was able to do. I then gave a
presentation at the Oct1986 SNA review board meeting in Raleigh on the
effort. What the communication group then did (to torpedo the project)
can only be described as truth being stranger than fiction. Old post
with part of the Oct1986 presentation:
http://www.garlic.com/~lynn/99.html#67 System/1 ?

part of a presentation given by one of the baby bell people at the
COMMON (s/1) user group meeting
http://www.garlic.com/~lynn/99.html#70 Series/1 as NCP (was: Re: System/1 ?)

--
virtualization experience starting Jan1968, online at home since Mar1970
Re: The ICL 2900 [message #339563 is a reply to message #339541] Fri, 17 March 2017 16:22 Go to previous messageGo to next message
Ahem A Rivet's Shot is currently offline  Ahem A Rivet's Shot
Messages: 4843
Registered: January 2012
Karma: 0
Senior Member
On Fri, 17 Mar 2017 10:57:55 -0700
Anne & Lynn Wheeler <lynn@garlic.com> wrote:

> President of AMEX is in competition to be next CEO and wins. The looser
> leaves taking their protegee and goes to Baltimore and take over what is

... and the tighter ?

--
Steve O'Hara-Smith | Directable Mirror Arrays
C:>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
Re: The ICL 2900 [message #339564 is a reply to message #339546] Fri, 17 March 2017 16:32 Go to previous messageGo to next message
Rich Alderson is currently offline  Rich Alderson
Messages: 489
Registered: August 2012
Karma: 0
Senior Member
Alfred Falk <falk@arc.ab.ca> writes:

> DEC built a link between a PDP-9 and a 360/65 (initially a 360/50) with
> a 1000' cable between them in 1967-8. The nuclear physics experiment this
> was developed for never quite worked out. The PDP-9 was to collect data
> from instrumentation on a cyclotron and the 360 was to process it in real
> time, returning reduced data for display by the 9. By the time the bugs
> (software and hardware) were worked out, they decided they didn't really
> need the real-time link. For a few years it was used as a kind of RJE by
> people working on the 9, but not much else. IBM was never co-operative.

Hmm. Where was this? It sounds like the connection between the UOregon 360/50
and the PDP-7 (not PDP-9) in the Physics Lab in the Volcanology Building, from
what my friend Harlan Lefevre has shared (professor emeritus in Physics, and
responsible for the purchase of the PDP-7 in 1966). That -7 is running in the
2nd Floor exhibit hall at the museum. (LCM+L, Seattle)

--
Rich Alderson news@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen
Re: The ICL 2900 [message #339582 is a reply to message #339564] Fri, 17 March 2017 19:00 Go to previous messageGo to next message
Alfred Falk is currently offline  Alfred Falk
Messages: 195
Registered: June 2012
Karma: 0
Senior Member
Rich Alderson <news@alderson.users.panix.com> wrote in
news:mdd4lyr8x1b.fsf@panix5.panix.com:

> Alfred Falk <falk@arc.ab.ca> writes:
>
>> DEC built a link between a PDP-9 and a 360/65 (initially a
>> 360/50) with a 1000' cable between them in 1967-8. The nuclear
>> physics experiment this was developed for never quite worked out.
>> The PDP-9 was to collect data from instrumentation on a cyclotron
>> and the 360 was to process it in real time, returning reduced data
>> for display by the 9. By the time the bugs (software and hardware)
>> were worked out, they decided they didn't really need the real-time
>> link. For a few years it was used as a kind of RJE by people working
>> on the 9, but not much else. IBM was never co-operative.
>
> Hmm. Where was this?

University of Manitoba. KA9-17 was purchased specifically for this
particular project, but saw use generally in collecting data from cyclotron
experiments, mostly nuclear spectrometry. Replaced in 1971 or 1972 by a
PDP-15, but that was after I had moved on to grad school elsewhere.

> It sounds like the connection between the
> UOregon 360/50 and the PDP-7 (not PDP-9) in the Physics Lab in the
> Volcanology Building, from what my friend Harlan Lefevre has shared
> (professor emeritus in Physics, and responsible for the purchase of the
> PDP-7 in 1966). That -7 is running in the 2nd Floor exhibit hall at
> the museum. (LCM+L, Seattle)
>
Re: The ICL 2900 [message #339624 is a reply to message #339546] Sat, 18 March 2017 08:47 Go to previous messageGo to next message
jmfbahciv is currently offline  jmfbahciv
Messages: 6173
Registered: March 2012
Karma: 0
Senior Member
Alfred Falk wrote:
> jmfbahciv <See.above@aol.com> wrote in
> news:PM00054AD89C50621B@aca42926.ipt.aol.com:
>
>> Anne & Lynn Wheeler wrote:
>>> scott@slp53.sl.home (Scott Lurndal) writes:
>>>> Sequent was first, IIRC.
>>>
>>> re:
>>> http://www.garlic.com/~lynn/2017b.html#73 The ICL 2900
>>> http://www.garlic.com/~lynn/2017c.html#48 The ICL 2900
>>> http://www.garlic.com/~lynn/2017c.html#49 The ICL 2900
>>> http://www.garlic.com/~lynn/2017c.html#52 The ICL 2900
>>>
>>> Sequent implements "snoopy" cache for Balance
>>> http://www.icsa.inf.ed.ac.uk/cgi-bin/hase/coherence-m.pl?wtu-model-t.html,wtu-model-f.html,menu1.html
>>> https://en.wikipedia.org/wiki/Sequent_Computer_Systems#Balance
>>>
>>> Then did Symmetry ... started with i386, which was what was installed
>>> at Mosaic/Netscape (trivia: when NCSA complained about using "Mosaic",
>>> what company donated "Netscape") that addressed FINWAIT problem
>>> https://en.wikipedia.org/wiki/Sequent_Computer_Systems#Symmetry
>>>
>>> In 1994 Sequent introduced the Symmetry 5000 series models SE20, SE60
>>> and SE90, which used 66 MHz Pentium CPUs in systems from 2 to 30
>>> processors. The next year they expanded that with the SE30/70/100
>>> lineup using 100 MHz Pentiums, and then in 1996 with the SE40/80/120
>>> with 166 MHz Pentiums. A variant of the Symmetry 5000, the WinServer
>>> 5000 series, ran Windows NT instead of DYNIX/ptx.[10]
>>>
>>> .... snip ...
>>>
>>> Sequent claimed that they did the work on NT, restructuring kernel for
>>> SMP scaleup (for servers). However, upthread, I reference that still
>>> doesn't get consumer/desktop application threading for increasingly
>>> multi-core processors
>>> http://www.garlic.com/~lynn/2017c.html#48 The ICL 2900
>>>
>>> Then for NUMA-Q, Sequent used SCI (but Data General, SGI, Convex
>>> with Exemplar, and others did also)
>>> https://en.wikipedia.org/wiki/Sequent_Computer_Systems#NUMA
>>>
>>> SCI
>>> https://en.wikipedia.org/wiki/Scalable_Coherent_Interconnect
>>> NUMA
>>> https://en.wikipedia.org/wiki/Non-uniform_memory_access
>>>
>>> I've mentioned before 370 2-way SMP slowed down base processor cycle
>>> to allow cross-cache invalidation signals ... that was just the start,
>>> any processing of actual invalidation overhead would be in addition to
>>> the base processor cycle slowdown. That is with just one other
>>> processor sending invalidation ... going to 4-way SMP then would mean
>>> three other processors broadcasting cross-cache invalidation signals.
>>> past SMP posts
>>> http://www.garlic.com/~lynn/subtopic.html#smp
>>>
>>> I've periodically claimed that John Cocke 801/risc
>>> https://en.wikipedia.org/wiki/IBM_801 and
>>> http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/risc/
>>>
>>> was done to be the opposite of the enormously complex (failed)
>>> Future System effort
>>> http://www.garlic.com/~lynn/submain.html#futuresys
>>>
>>> but another part of 801/risc was no cache consistency ... not even
>>> between the i-cache and d-cache (in the same processor) ... along
>>> with store-into cache ... the loader needed a special instruction to
>>> invalidate an address range in the i-cache and force corresponding
>>> changes in the d-cache to memory (i.e. the loader may have altered
>>> the loaded program's instruction sequence as part of the load; those
>>> changes would be in the d-cache, and would have to be forced to
>>> memory and any stale information in the i-cache removed ... so the
>>> latest copy could be loaded to the i-cache) ... aka not fall into
>>> the strong memory consistency overhead of 370 SMP.
>>
>> Imagine the surprise of JMF and TW when they discovered that the KL
>> did not support write-thru cache. It almost caused the TOPS-10 SMP
>> project to be cancelled.
>>
>>
>>>
>>> part of somerset (AIM, referenced upthread) for power/pc was to
>>> support cache consistency protocol ... i somewhat characterize it as
>>> adding motorola 88k cache consistency to 801/risc.
>>>
>>> past posts mention 801/risc, romp, rios, fort knox, pc/rt, power,
>>> somerset, AIM, power/pc http://www.garlic.com/~lynn/subtopic.html#801
>>>
>>> IBM purchase of sequent
>>> https://en.wikipedia.org/wiki/Sequent_Computer_Systems#IBM_purchase_and_disappearance
>>>
>>> An alternative view of IBM's actions, born out of the belief[13] that
>>> corporations maintain consistent strategies over the short and medium
>>> term despite executive changes, is that IBM acquired Sequent not to
>>> nurture it but simply to keep it out of Sun's clutches. Through its
>>> acquisition of what became the Enterprise 10000 server line from Cray,
>>> Sun had done so much financial damage to IBM's server market share,
>>> that IBM was very reluctant to see this disaster repeated.[citation
>>> needed] Even if it generated zero revenue for IBM, the net present
>>> value of Sequent from IBM's viewpoint was higher inside IBM than
>>> inside Sun.[13]
>>
>> IBM seems to have never gotten rid of their determination to not
>> cooperate with other manufacturers' hardware. JMF's first DEC project
>> was to get DEC computers and IBM computers to communicate. IBM
>> believed that homogeneous manufactured hardware was the only
>> possibility; it took DEC hard/software engineers to break that
>> self-imposed rule. This was in 1970, 1971. DEC was willing to talk to
>
> DEC built a link between a PDP-9 and a 360/65 (initially a 360/50) with
> a 1000' cable between them in 1967-8. The nuclear physics experiment this
> was developed for never quite worked out. The PDP-9 was to collect data
> from instrumentation on a cyclotron and the 360 was to process it in real
> time, returning reduced data for display by the 9. By the time the bugs
> (software and hardware) were worked out, they decided they didn't really
> need the real-time link. For a few years it was used as a kind of RJE by
> people working on the 9, but not much else. IBM was never co-operative.

JMF's project, which was successful, used a PDP-12. It was the site
which later ran a 5-CPU TOPS-10 SMP system.

>
>> any hardware, including others'. 1.5 decades later it became Digital
>> and just as snooty as IBM.
>
> Yup.

I wept.

/BAH
Re: The ICL 2900 [message #339625 is a reply to message #339541] Sat, 18 March 2017 08:47 Go to previous messageGo to next message
jmfbahciv is currently offline  jmfbahciv
Messages: 6173
Registered: March 2012
Karma: 0
Senior Member
Anne & Lynn Wheeler wrote:
>
> jmfbahciv <See.above@aol.com> writes:
>> Did [who was it?] Gershner(sp?) manage to stop that kind of insanity?
>
> re:
> http://www.garlic.com/~lynn/2017c.html#55 The ICL 2900
> demise of disk division
> http://www.garlic.com/~lynn/subtopic.html#terminal
>
> President of AMEX is in competition to be next CEO and wins. The looser
> leaves, taking their protégé, and goes to Baltimore to take over what is
> called a loan sharking business. They make some number of other
> acquisitions, eventually acquiring CITI in violation of Glass-Steagall.
> Greenspan gives them an exemption while they lobby congress for
> Glass-Steagall repeal, including enlisting the SECTREAS (and former head
> of Goldman-Sachs), who resigns and joins CITI as soon as the repeal is
> added to GLBA (enabling "too big to fail"). The protégé then leaves
> CITI and becomes CEO of CHASE.
>
> pecora hearings &/or glass-steagall posts
> http://www.garlic.com/~lynn/submisc.html#Pecora&/orGlass-Steagall
> "too big to fail" posts
> http://www.garlic.com/~lynn/submisc.html#too-big-to-fail
>
> AMEX is in competition with KKR for private-equity take-over of RJR. KKR
> wins, but runs into some trouble with RJR and hires away president of
> AMEX to help turn it around.
>
> https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
>
> A few years after the talk by the senior disk engineer (that the
> communication group would be responsible for the demise of the disk
> division; the communication group had a stranglehold on datacenters with
> corporate strategic ownership of everything that crossed datacenter
> walls), IBM had gone into the red and was being reorganized into the 13
> "baby blues" in preparation for breaking up the company. The board then
> hires away the former president of AMEX to reverse the breakup and
> resurrect the company ... using some of the same techniques used at RJR.
> http://www.ibmemployee.com/RetirementHeist.shtml

So the answer might be yes, as a side effect of the reorganization and
of higher management losing the habit of biasing business decisions
towards the comm group.
>
> The former AMEX president then leaves IBM and becomes the head of
> another large private-equity company ... one of the take-overs is the
> beltway bandit that will employ Snowden. Private-equity take-over of
> beltway bandits contributed to the enormous increase in outsourcing last
> decade ... and companies in the private-equity mills are under intense
> pressure to cut corners to provide profit to their owners. Intelligence
> has 70% of the budget and over half the people outsourced ... past
> article on Snowden's employer and its private-equity owner
> http://www.investingdaily.com/17693/spies-like-us
>
> it also contributes to the rapidly spreading "Success of Failure"
> culture
> http://www.govexec.com/excellence/management-matters/2007/04/the-success-of-failure/24107/
>
> past posts
> http://www.garlic.com/~lynn/submisc.html#gerstner
> and
> http://www.garlic.com/~lynn/submisc.html#private.equity
> and
> http://www.garlic.com/~lynn/submisc.html#success.of.failure
>
> There was IBM employee legal action over what was being done to their
> retirement ... including changing the pension obligation to be listed as
> an asset rather than a liability (a corporate asset is up for grabs if
> the company ever declares bankruptcy) ... the change boosts the value of
> the stock, boosting price/share and increasing executive bonuses.
>
> There are also claims that the "stock buyback" culture was introduced
> then and has since dominated a lot of IBM's financials; Stockman, "The
> Great Deformation: The Corruption of Capitalism in America",
> pg464/loc9995-10000:
>
> IBM was not the born-again growth machine trumpeted by the mob of Wall
> Street momo traders. It was actually a stock buyback contraption on
> steroids. During the five years ending in fiscal 2011, the company spent
> a staggering $67 billion repurchasing its own shares, a figure that was
> equal to 100 percent of its net income.
>
> pg465/10014-17:
>
> Total shareholder distributions, including dividends, amounted to $82
> billion, or 122 percent, of net income over this five-year period.
> Likewise, during the last five years IBM spent less on capital
> investment than its depreciation and amortization charges, and also
> shrank its constant dollar spending for research and development by
> nearly 2 percent annually.
>
> ... snip ...
>
> posts
> http://www.garlic.com/~lynn/submisc.html#stock.buyback
>
> Lots of properties were also being sold off to raise cash. I've
> mentioned before that in the 80s, Nestle sold its new, almost-finished
> corporate hdqtrs bldg to IBM for ten cents on the dollar. After the new
> CEO comes in, the bldg is (re)sold to Mastercard for its new hdqtrs
> bldg. Shortly after Mastercard moves in, we are at an executive direction
> meeting with them ... and Mastercard says that they paid more to have
> all the internal door handles changed than they paid IBM for the bldg.
>
> About the time IBM first goes into the red, AMEX spins off a lot of its
> (mostly IBM mainframe) dataprocessing and outsourcing as FDC in the
> largest IPO up until that time. Around 2000 they are handling a little
> over half of all US credit card and debit card processing ... as well as
> having introduced the original magstripe merchant & gift card
> stored-value products. I've mentioned before that about that time they
> have something over 40 max-configured IBM mainframes (at ~$30M each,
> constantly being updated on an 18-month cycle) configured for doing
> overnight batch settlement, and I look at improving the performance of
> the 450+K lines-of-code cobol application doing settlement. 15yrs after
> FDC is spun off in the largest IPO (up until that time), KKR (referenced
> in the RJR private-equity take-over) does a private-equity take-over of
> FDC in the largest reverse-IPO up until that time.

I sold my IBM stock last year. The BoD consists of > 50% ex-CitiBank.
The same mindset which destroyed US banking is now deciding IBM's
fate. There is a lot of that (corporations having ex-bank crooks
on their BoDs) going on.

>
> past posts mentioning doing performance improvement on 450+K LOC
> cobol application
> http://www.garlic.com/~lynn/2006s.html#24 Curiosity: CPU % for COBOL program
> http://www.garlic.com/~lynn/2006u.html#50 Where can you get a Minor in Mainframe?
> http://www.garlic.com/~lynn/2007l.html#20 John W. Backus, 82, Fortran developer, dies
> http://www.garlic.com/~lynn/2007u.html#21 Distributed Computing
> http://www.garlic.com/~lynn/2008c.html#24 Job ad for z/OS systems programmer trainee
> http://www.garlic.com/~lynn/2008d.html#73 Price of CPU seconds
> http://www.garlic.com/~lynn/2008l.html#81 Intel: an expensive many-core future is ahead of us
> http://www.garlic.com/~lynn/2009d.html#5 Why do IBMers think disks are 'Direct Access'?
> http://www.garlic.com/~lynn/2009d.html#14 Legacy clearing threat to OTC derivatives warns State Street
> http://www.garlic.com/~lynn/2009e.html#76 Architectural Diversity
> http://www.garlic.com/~lynn/2009f.html#55 Cobol hits 50 and keeps counting
> http://www.garlic.com/~lynn/2009g.html#20 IBM forecasts 'new world order' for financial services
> http://www.garlic.com/~lynn/2011c.html#35 If IBM Hadn't Bet the Company
> http://www.garlic.com/~lynn/2011f.html#32 At least two decades back, some gurus predicted that mainframes would disappear
> http://www.garlic.com/~lynn/2012i.html#25 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
> http://www.garlic.com/~lynn/2012n.html#18 System/360--50 years--the future?
> http://www.garlic.com/~lynn/2012n.html#24 System/360--50 years--the future?
> http://www.garlic.com/~lynn/2012n.html#56 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
> http://www.garlic.com/~lynn/2013b.html#45 Article for the boss: COBOL will outlive us all
> http://www.garlic.com/~lynn/2014b.html#83 CPU time
> http://www.garlic.com/~lynn/2014f.html#69 Is end of mainframe near ?
> http://www.garlic.com/~lynn/2014f.html#78 Over in the Mainframe Experts Network LinkedIn group
> http://www.garlic.com/~lynn/2015c.html#65 A New Performance Model ?
> http://www.garlic.com/~lynn/2015h.html#112 Is there a source for detailed, instruction-level performance info?
> http://www.garlic.com/~lynn/2017b.html#15 Trump to sign cyber security order
>

/BAH
Re: The ICL 2900 [message #339643 is a reply to message #338207] Sat, 18 March 2017 13:19 Go to previous messageGo to previous message
Jan van den Broek is currently offline  Jan van den Broek
Messages: 70
Registered: April 2012
Karma: 0
Member
Thu, 23 Feb 2017 09:41:52 -0800 (PST)
Quadibloc <jsavard@ecn.ab.ca> wrote:

[Snip]

> However, while it's certainly true you can delete critical system files in
> MS-DOS or whatever, it _is_ a bit easier to cause a disaster in Unix. The short
^^^^^^
Not MS-DOS, but I remember a Novell/Win3.1 environment and someone with
admin rights, not used to Explorer, moving whole directory trees, more
than once.

(One day he didn't show up any more; everyone was relieved.)

--
Jan van den Broek balglaas@xs4all.nl

I have a great .sig, but it won't fit at the end of this post.
-Fermat