Megalextoria
Retro computing and gaming, sci-fi books, tv and movies and other geeky stuff.

Re: CR or LF? [message #395150 is a reply to message #395146] Sat, 30 May 2020 15:07
scott
John Levine <johnl@taugh.com> writes:
> In article <slrnrd4sk2.1r7q.grahn+nntp@frailea.sa.invalid>,
> Jorgen Grahn <grahn+nntp@snipabacken.se> wrote:
>> I somehow (in the 1990s) got the impression this was a later development,
>> and not intrinsically Unix. Specifically, that people were horrified when
>> vi came, with a design that meant every keystroke during text editing
>> would use up computing resources.
>
> No doubt some people were, but Unix always had tty raw mode that let
> you read a character at a time. In practice, raw mode wasn't very
> useful until most people had switched from Teletypes to video
> terminals. As I recall it was mostly used at the login prompt
> to try and guess whether you had an upper/lower or upper case only
> terminal.

I'm not sure raw mode is necessary for determining case; I would have
expected 'login.c' to check the username that was entered and, if it
was all uppercase, set the appropriate tty driver ioctl to map upper
case to lower case.

However, looking at v6/s1/login.c, I don't see any such check; nor
does such a check exist in v7/usr/src/cmd/login.c.

V7 does have this interesting code at the beginning of main:

alarm(60);			/* give up if no login within 60 seconds */
signal(SIGQUIT, SIG_IGN);	/* ignore quit and interrupt while logging in */
signal(SIGINT, SIG_IGN);
nice(-100);			/* reset priority from any inherited state */
nice(20);
nice(0);
gtty(0, &ttyb);			/* set the default erase and kill characters */
ttyb.sg_erase = '#';
ttyb.sg_kill = '@';
stty(0, &ttyb);

The triple nice call is noted in the nice man page:

For a privileged process to return to normal priority from an unknown
state, nice should be called successively with arguments -40 (goes to
priority -20 because of truncation), 20 (to get to 0), then 0 (to
maintain compatibility with previous versions of this call).

UW2.01 reverted to a single nice(0);
Re: CR or LF? [message #395151 is a reply to message #395150] Sat, 30 May 2020 15:32
John Levine
In article <s9yAG.62006$1y4.35288@fx23.iad> you write:
>> terminals. As I recall it was mostly used at the login prompt
>> to try and guess whether you had an upper/lower or upper case only
>> terminal.
>
> I'm not sure raw mode is necessary for determining case, I would have
> expected 'login.c' to check the username that was entered, and if it
> was all uppercase, set the appropriate tty driver ioctl to map upper
> case to lower case.
>
> However, in looking at v6/s1/login.c, I don't see any such check; nor
> does such a check exist in v7/usr/src/cmd/login.c

It wasn't login.c, it was getty.c which cycled through a bunch of raw
mode tty settings. On modem ports, you could hit break until it
switched to the right speed and you got a legible prompt.

https://www.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/getty.c

--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Re: CR or LF? [message #395152 is a reply to message #395144] Sat, 30 May 2020 15:50
Dan Espen
Jorgen Grahn <grahn+nntp@snipabacken.se> writes:

> On Thu, 2020-05-28, Bob Eager wrote:
>> On Thu, 28 May 2020 11:50:46 +0000, Thomas Koenig wrote:
>>
>>> So, each time somebody pressed a key in an interactive session,
>>> a data block was sent to the CPU, and one was sent back. I thought that
>>> _strange_. I also wrote a rant to UNIX-haters about this, but I cannot
>>> find that any more.
>>
>> It didn't make it into the book - I just checked!
>
> I somehow (in the 1990s) got the impression this was a later development,
> and not intrinsically Unix. Specifically, that people were horrified when
> vi came, with a design that meant every keystroke during text editing
> would use up computing resources.
>
> That may be a garbled account, but at least the idea survived: that at
> some point this level of interactivity was seen as an absurd waste.

I don't think so.
I was working at Bell Labs when Emacs and VI were being born.
They had a whole bunch of PDP machines; if you overloaded one,
there were plenty of others to use.

I started out using ed; it really wasn't any fun, so I hunted down
an Emacs version. It ran fine. Later on I heard Bill Joy had invented
something called VI; I saw no reason to change.

In the same time frame, IBM had released TSO (which used block mode
terminals). A few of us got to experiment with TSO but there was no
way to release that for general use, it used way too much mainframe
resources.

So, for at least a couple of years, we could do mainframe interactive
development, but only by using a bunch of Unix systems with Emacs.

Oh, yeah, we did have an IMS/DC full screen editor we could use, but
it was so clunky that few of us suffered through it.

So, perhaps character at a time was resource intensive, but
UNIX cycles were abundant and cheap compared to mainframe cycles
even with block mode terminals.

> Much later, I was horrified when I learned you need a gigabyte of
> RAM to run the Eclipse IDE.

Fortunately, never had to go that way. I prefer the cleaner code that
comes from hand crafting.


--
Dan Espen
Re: CR or LF? [message #395155 is a reply to message #395146] Sat, 30 May 2020 16:10
Quadibloc
On Saturday, May 30, 2020 at 11:39:48 AM UTC-6, John Levine wrote:
> The bitmap terminals were driven by a terminal emulator
> on an 11/05 and I think at some point I added insert mode to the
> emulator.

If you've got an 11/05 driving a terminal - vector, though, not bitmap - you could
play Lunar Lander on it... so I'm not surprised insert mode is possible.

John Savard
Re: CR or LF? [message #395156 is a reply to message #395152] Sat, 30 May 2020 16:30
Originally posted by: J. Clarke

On Sat, 30 May 2020 15:50:07 -0400, Dan Espen <dan1espen@gmail.com>
wrote:

> Jorgen Grahn <grahn+nntp@snipabacken.se> writes:
>
>> On Thu, 2020-05-28, Bob Eager wrote:
>>> On Thu, 28 May 2020 11:50:46 +0000, Thomas Koenig wrote:
>>>
>>>> So, each time somebody pressed a key in an interactive session,
>>>> a data block was sent to the CPU, and one was sent back. I thought that
>>>> _strange_. I also wrote a rant to UNIX-haters about this, but I cannot
>>>> find that any more.
>>>
>>> It didn't make it into the book - I just checked!
>>
>> I somehow (in the 1990s) got the impression this was a later development,
>> and not intrinsically Unix. Specifically, that people were horrified when
>> vi came, with a design that meant every keystroke during text editing
>> would use up computing resources.
>>
>> That may be a garbled account, but at least the idea survived: that at
>> some point this level of interactivity was seen as an absurd waste.
>
> I don't think so.
> I was working at Bell Labs when Emacs and VI were being born.
> They had a whole bunch of PDP machines, if you overloaded one,
> there were plenty of others to use.

I thought EMACS was developed at MIT and vi at Berkeley.

> I started out using ed, it really wasn't any fun so I hunted down
> an Emacs version. It ran fine. Later on I heard Bill Joy had invented
> something called VI, I saw no reason to change.
>
> In the same time frame, IBM had released TSO (which used block mode
> terminals). A few of us got to experiment with TSO but there was no
> way to release that for general use, it used way too much mainframe
> resources.
>
> So, for at least a couple of years, we could do mainframe interactive
> development, but only by using a bunch of Unix systems with Emacs.
>
> Oh, yeah, we did have an IMS/DC full screen editor we could use, but
> it was so clunky that few of us suffered through it .
>
> So, perhaps character at a time was resource intensive, but
> UNIX cycles were abundant and cheap compared to mainframe cycles
> even with block mode terminals.
>
>> Much later, I was horrified when I learned you need a gigabyte of
>> RAM to run the Eclipse IDE.
>
> Fortunately, never had to go that way. I prefer the cleaner code that
> comes from hand crafting.

How is using Eclipse not "hand crafting"? If it can generate code for
me I'd like to know how.
Re: CR or LF? [message #395157 is a reply to message #395155] Sat, 30 May 2020 16:43
John Levine
In article <9e1b2179-367a-47b1-84b8-ccc860644e24@googlegroups.com>,
Quadibloc <jsavard@ecn.ab.ca> wrote:
> On Saturday, May 30, 2020 at 11:39:48 AM UTC-6, John Levine wrote:
>> The bitmap terminals were driven by a terminal emulator
>> on an 11/05 and I think at some point I added insert mode to the
>> emulator.
>
> If you've got an 11/05 driving a terminal - vector, though, not bitmap - you could
> play Lunar Lander on it... so I'm not surprised insert mode is possible.

This was 40 years ago, it was definitely bitmap, and the 11/05 was
driving 16 screens running out of bitmap memory. Doing insert mode
wasn't hard: since our characters were 8 bits wide, it just rippled
down the rest of the line, moving the image one byte ahead.

Read all about it here:

https://www.academia.edu/5519074/An_Overview_of_the_Yale_Gem_System

--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Re: CR or LF? [message #395164 is a reply to message #395156] Sat, 30 May 2020 19:42
Dan Espen
J. Clarke <jclarke.873638@gmail.com> writes:

> On Sat, 30 May 2020 15:50:07 -0400, Dan Espen <dan1espen@gmail.com>
> wrote:
>
>> Jorgen Grahn <grahn+nntp@snipabacken.se> writes:
>>
>>> On Thu, 2020-05-28, Bob Eager wrote:
>>>> On Thu, 28 May 2020 11:50:46 +0000, Thomas Koenig wrote:
>>>>
>>>> > So, each time somebody pressed a key in an interactive session,
>>>> > a data block was sent to the CPU, and one was sent back. I thought that
>>>> > _strange_. I also wrote a rant to UNIX-haters about this, but I cannot
>>>> > find that any more.
>>>>
>>>> It didn't make it into the book - I just checked!
>>>
>>> I somehow (in the 1990s) got the impression this was a later development,
>>> and not intrinsically Unix. Specifically, that people were horrified when
>>> vi came, with a design that meant every keystroke during text editing
>>> would use up computing resources.
>>>
>>> That may be a garbled account, but at least the idea survived: that at
>>> some point this level of interactivity was seen as an absurd waste.
>>
>> I don't think so.
>> I was working at Bell Labs when Emacs and VI were being born.
>> They had a whole bunch of PDP machines, if you overloaded one,
>> there were plenty of others to use.
>
> I thought EMACS was developed at MIT and vi at Berkeley.

I used something called "Montgomery" Emacs, which I got from Warren
Montgomery, who was somewhere in the Bell Labs organization.

Here is some evidence:

https://tech-insider.org/unix/research/1983/0119.html

>> I started out using ed, it really wasn't any fun so I hunted down
>> an Emacs version. It ran fine. Later on I heard Bill Joy had invented
>> something called VI, I saw no reason to change.
>>
>> In the same time frame, IBM had released TSO (which used block mode
>> terminals). A few of us got to experiment with TSO but there was no
>> way to release that for general use, it used way too much mainframe
>> resources.
>>
>> So, for at least a couple of years, we could do mainframe interactive
>> development, but only by using a bunch of Unix systems with Emacs.
>>
>> Oh, yeah, we did have an IMS/DC full screen editor we could use, but
>> it was so clunky that few of us suffered through it .
>>
>> So, perhaps character at a time was resource intensive, but
>> UNIX cycles were abundant and cheap compared to mainframe cycles
>> even with block mode terminals.
>>
>>> Much later, I was horrified when I learned you need a gigabyte of
>>> RAM to run the Eclipse IDE.
>>
>> Fortunately, never had to go that way. I prefer the cleaner code that
>> comes from hand crafting.
>
> How is using Eclipse not "hand crafting"? If it can generate code for
> me I'd like to know how.

I don't have a lot of experience with any IDE but I thought Eclipse
allows you to paint a screen then generate the code to draw the screen.
I seem to recall doing that once and I was not that impressed with the
code it generated.

--
Dan Espen
Re: mainframe I/O, was CR or LF? [message #395166 is a reply to message #394946] Sat, 30 May 2020 20:21
Originally posted by: antispam

John Levine <johnl@taugh.com> wrote:
>
> I suppose that hypothetically someone could have built a TTY channel
> interface that interrupted on every character but I would be surprised
> if anyone did. More likely they'd sell you an 1130 or later a Sys/7 to
> be an overpriced front end processor.

Around 1992 I was using an Internet connection (email) that did it.
The connection was from a PC via modem to an IBM mainframe. AFAIU
doing it the "proper" way would have been complicated/expensive, so
the guys operating the mainframe hacked the hardware to take an
interrupt on each incoming character. They said that "IBM does not
like interrupts", but this was for a single 9600 bps line, so the
mainframe could handle the interrupts without serious degradation of
response.

Doing this on multiple lines probably would have overloaded the
mainframe with interrupt processing. This mainframe probably had
about 20 discs and 100 block mode terminals, and my guesstimate is
that the single character-mode line generated a comparable number of
interrupts to all the other peripherals taken together...

--
Waldek Hebisch
Re: mainframe I/O, was CR or LF? [message #395169 is a reply to message #395166] Sat, 30 May 2020 21:59
Originally posted by: J. Clarke

On Sun, 31 May 2020 00:21:51 +0000 (UTC), antispam@math.uni.wroc.pl
wrote:

> John Levine <johnl@taugh.com> wrote:
>>
>> I suppose that hypothetically someone could have built a TTY channel
>> interface that interrupted on every character but I would be surprised
>> if anyone did. More likely they'd sell you an 1130 or later a Sys/7 to
>> be an overpriced front end processor.
>
> Around 1992 I was using Internet connection (email) that did it.
> Connection was from a PC via modem to IBM mainframe. AFAIU doing
> it "proper" way would be complicated/expensive, do guys operating
> mainframe hacked hardware to take interrupt on each incoming
> character. They said that "IBM does not like interrupts", but
> this was for single 9600 bps line, so mainframe could handle
> interrupts without serious degradation of response.
>
> Doing this on multiple lines probably would overload mainframe
> with interrput processing. This mainframe probably had about 20
> discs and 100 block mode terminals and my guesstimate is that
> the single character mode line generated comparable number
> of interrupts to all other periferials taken together...

Nahh, odds are that what they "hacked" was a channel controller.
Re: CR or LF? [message #395170 is a reply to message #395164] Sat, 30 May 2020 22:17
Originally posted by: J. Clarke

On Sat, 30 May 2020 19:42:38 -0400, Dan Espen <dan1espen@gmail.com>
wrote:

> J. Clarke <jclarke.873638@gmail.com> writes:
>
>> On Sat, 30 May 2020 15:50:07 -0400, Dan Espen <dan1espen@gmail.com>
>> wrote:
>>
>>> Jorgen Grahn <grahn+nntp@snipabacken.se> writes:
>>>
>>>> On Thu, 2020-05-28, Bob Eager wrote:
>>>> > On Thu, 28 May 2020 11:50:46 +0000, Thomas Koenig wrote:
>>>> >
>>>> >> So, each time somebody pressed a key in an interactive session,
>>>> >> a data block was sent to the CPU, and one was sent back. I thought that
>>>> >> _strange_. I also wrote a rant to UNIX-haters about this, but I cannot
>>>> >> find that any more.
>>>> >
>>>> > It didn't make it into the book - I just checked!
>>>>
>>>> I somehow (in the 1990s) got the impression this was a later development,
>>>> and not intrinsically Unix. Specifically, that people were horrified when
>>>> vi came, with a design that meant every keystroke during text editing
>>>> would use up computing resources.
>>>>
>>>> That may be a garbled account, but at least the idea survived: that at
>>>> some point this level of interactivity was seen as an absurd waste.
>>>
>>> I don't think so.
>>> I was working at Bell Labs when Emacs and VI were being born.
>>> They had a whole bunch of PDP machines, if you overloaded one,
>>> there were plenty of others to use.
>>
>> I thought EMACS was developed at MIT and vi at Berkeley.
>
> I used something called "Montgomery" Emacs,
> got it from Warren Montgomery who was somewhere in the Bell Labs
> organization.
>
> Here is some evidence:
>
> https://tech-insider.org/unix/research/1983/0119.html

That was developed in 1979, when EMACS had already been around for 3
years. I don't know if it had any real part in the development of the
EMACS that is known today; Montgomery recognizes that it is
proprietary, where the original EMACS was not restricted.
>
>>> I started out using ed, it really wasn't any fun so I hunted down
>>> an Emacs version. It ran fine. Later on I heard Bill Joy had invented
>>> something called VI, I saw no reason to change.
>>>
>>> In the same time frame, IBM had released TSO (which used block mode
>>> terminals). A few of us got to experiment with TSO but there was no
>>> way to release that for general use, it used way too much mainframe
>>> resources.
>>>
>>> So, for at least a couple of years, we could do mainframe interactive
>>> development, but only by using a bunch of Unix systems with Emacs.
>>>
>>> Oh, yeah, we did have an IMS/DC full screen editor we could use, but
>>> it was so clunky that few of us suffered through it .
>>>
>>> So, perhaps character at a time was resource intensive, but
>>> UNIX cycles were abundant and cheap compared to mainframe cycles
>>> even with block mode terminals.
>>>
>>>> Much later, I was horrified when I learned you need a gigabyte of
>>>> RAM to run the Eclipse IDE.
>>>
>>> Fortunately, never had to go that way. I prefer the cleaner code that
>>> comes from hand crafting.
>>
>> How is using Eclipse not "hand crafting"? If it can generate code for
>> me I'd like to know how.
>
> I don't have a lot of experience with any IDE but I thought Eclipse
> allows you to paint a screen then generate the code to draw the screen.
>
> I seem to recall doing that once and I was not that impressed with the
> code it generated.

There are several window builders available for Eclipse but that is
not part of its basic functionality. And I'm pretty sure they
wouldn't be any help to me--my main use for it is as an alternative to
ISPF and the 3270.
Re: mainframe I/O, was CR or LF? [message #395185 is a reply to message #395166] Sun, 31 May 2020 04:30
Originally posted by: David Wade

On 31/05/2020 01:21, antispam@math.uni.wroc.pl wrote:
> John Levine <johnl@taugh.com> wrote:
>>
>> I suppose that hypothetically someone could have built a TTY channel
>> interface that interrupted on every character but I would be surprised
>> if anyone did. More likely they'd sell you an 1130 or later a Sys/7 to
>> be an overpriced front end processor.
>

You can't use an IBM 1130 as a terminal concentrator; it doesn't have
terminal lines. IBM of course wanted you to use 3270 VDUs, where
character editing was done in a local terminal controller.
If you can offload some processing to a less expensive front-end box
rather than buy mainframe MIPS, that's a win...


> Around 1992 I was using Internet connection (email) that did it.
> Connection was from a PC via modem to IBM mainframe. AFAIU doing
> it "proper" way would be complicated/expensive, do guys operating
> mainframe hacked hardware to take interrupt on each incoming
> character. They said that "IBM does not like interrupts", but
> this was for single 9600 bps line, so mainframe could handle
> interrupts without serious degradation of response.
>

Are you sure it really passed each character over the channel? Could
you run EMACS? It's not really the interrupts that slow down
mainframes, it's the VTAM layers in between.

No computer likes interrupts. I remember a friend who worked for a big
UK bank was keen to roll out All-in-One to all bank staff, but he said
it needed so many VAXes compared to PROFS on the IBM that he couldn't
cost-justify it.

I believe senior managers and up got All-in-One; the rest got PROFS.


> Doing this on multiple lines probably would overload mainframe
> with interrput processing. This mainframe probably had about 20
> discs and 100 block mode terminals and my guesstimate is that
> the single character mode line generated comparable number
> of interrupts to all other periferials taken together...
>

Well, the VAX suffers the same. Let's see:

1. User types a character.
2. VAX receives the character.
3. VMS looks and sees it's for a running application which is swapped out.
4. It pages in the interrupt handler.
5. It passes the character to the application.
6. The program looks at the character. Sees it's a letter "F".
7. The program decides it needs to echo the character, so it asks the OS to
send a "P" to the terminal.
8. The program goes to sleep.
9. VMS pages it out to make room for another program that needs to
process a letter "U" .....

So on a heavily loaded system that's two disk I/Os for each character
typed. I am sure there must be a better way....

Dave
Re: CR or LF? [message #395187 is a reply to message #395156] Sun, 31 May 2020 04:45
Ahem A Rivet's Shot
On Sat, 30 May 2020 16:30:26 -0400
J. Clarke <jclarke.873638@gmail.com> wrote:

> How is using Eclipse not "hand crafting"? If it can generate code for
> me I'd like to know how.

Java programmers seem to swear by it for automatically flushing out
objects into beans and even doing some semi-automated refactoring.

--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
Re: mainframe I/O, was CR or LF? [message #395194 is a reply to message #395185] Sun, 31 May 2020 11:48
Originally posted by: antispam

David Wade <g4ugm@dave.invalid> wrote:
> On 31/05/2020 01:21, antispam@math.uni.wroc.pl wrote:
>> John Levine <johnl@taugh.com> wrote:
>>>
>>> I suppose that hypothetically someone could have built a TTY channel
>>> interface that interrupted on every character but I would be surprised
>>> if anyone did. More likely they'd sell you an 1130 or later a Sys/7 to
>>> be an overpriced front end processor.
>>
>
> You can't use an IBM1130 as a terminal concentrator. It doesn't have
> terminal lines. IBM of course wanted you to use 3270 VDU's where
> character editing was done in a local terminal controller.
> If you can offload some processing to a less expensive front end box
> rather than buy Mainframe MIPS thats a win...
>
>
>> Around 1992 I was using Internet connection (email) that did it.
>> Connection was from a PC via modem to IBM mainframe. AFAIU doing
>> it "proper" way would be complicated/expensive, do guys operating
>> mainframe hacked hardware to take interrupt on each incoming
>> character. They said that "IBM does not like interrupts", but
>> this was for single 9600 bps line, so mainframe could handle
>> interrupts without serious degradation of response.
>>
>
> Are you sure it really passed each character over the channel, could you
> run EMACS?

The mainframe ran VM and users interacted with CMS. I do not know
if software on the mainframe side took any advantage of the character
nature of the interface.

How do I know? I had a short chat with the mainframe guys. I remember
this well as I was interested in the technical details. There were
specific constraints: the mainframe with its equipment was there, but
for extras there was only a tiny budget. The mainframe guys said that
the equipment they had did not support async lines. I do not know if
there were alternatives, but for some reason forcing the channel to
char-by-char transmission was deemed the cheapest solution.

> Its not really the interupts that slow down mainframes its
> the VTAM layers in between.

I must say that I would like to know better what made things
efficient or inefficient. As you wrote, once processing goes to user
level there is a risk of swapping. In the case above the important
use was for file transfers. In Unix I would expect data to go to a
kernel buffer, so that interrupts would be handled at kernel level.
IIUC the mainframe logically had a single interrupt line, so a bunch
of instructions was needed to determine the interrupt source and
dispatch to the proper handler. OTOH, thanks to channels, the handler
could be relatively simple. Still, I would expect between 100 and a
few hundred instructions per interrupt. At a few hundred characters
per second during file transfers, that gives on the order of a
hundred thousand instructions per second. I think that this was a
relatively small load. But having the same load on 100 lines would
be much more serious.

But I do not know how smart VM was at buffering. If character
handling went all the way to user code, then there was considerable
overhead due to context switching.

> No computer likes interrupts. I remember a friend who worked for a big
> UK bank was keen to roll out "all-in-one" to all bank staff, but he said
> it needed so many VAXs compared PROFS on the IBM he couldn't cost
> justify it.
>
> I believe senior managers up got all-in-one the rest got PROFS.
>
>
>> Doing this on multiple lines probably would overload mainframe
>> with interrput processing. This mainframe probably had about 20
>> discs and 100 block mode terminals and my guesstimate is that
>> the single character mode line generated comparable number
>> of interrupts to all other periferials taken together...
>>
>
> Well the VAX suffers the same. Lets see:-
>
> 1. User Types Character
> 2. VAX receives character.
> 3. VMS looks and sees its for a running application which is swapped.
> 4. It pages in the interupt handler.
> 5. It passes the character to application
> 6. The program looks at the character. Sees its a letter "F"
> 7. The program decides it needs to echo the character so asks the OS to
> send a "P" to the terminal
> 8. The program goes to sleep
> 9. VMS pages it out to make room for another program that needs to
> process a leetter "U" .....
>
> So on a heavily loaded system thats two disk IO's for each character
> typed. I am sure there must be a better way....

Well, there is cooked mode: echo and buffering are done by the kernel
driver. telnet has a linemode; theoretically, when the user is
connected via the network, one gets one interrupt per line, which is
much more efficient than an interrupt per character. OTOH almost all
software I know immediately switches to raw mode and does character
processing at user level.

IIUC Javascript was intended to solve such problems in the Web era:
Javascript in the browser can do per-character processing, so that the
web server effectively works in block mode.

--
Waldek Hebisch
Re: mainframe I/O, was CR or LF? [message #395202 is a reply to message #395194] Sun, 31 May 2020 16:49
Peter Flass
<antispam@math.uni.wroc.pl> wrote:
> David Wade <g4ugm@dave.invalid> wrote:
>> On 31/05/2020 01:21, antispam@math.uni.wroc.pl wrote:
>>> John Levine <johnl@taugh.com> wrote:
>>>>
>>>> I suppose that hypothetically someone could have built a TTY channel
>>>> interface that interrupted on every character but I would be surprised
>>>> if anyone did. More likely they'd sell you an 1130 or later a Sys/7 to
>>>> be an overpriced front end processor.
>>>
>>
>> You can't use an IBM1130 as a terminal concentrator. It doesn't have
>> terminal lines. IBM of course wanted you to use 3270 VDU's where
>> character editing was done in a local terminal controller.
>> If you can offload some processing to a less expensive front end box
>> rather than buy Mainframe MIPS thats a win...
>>
>>
>>> Around 1992 I was using Internet connection (email) that did it.
>>> Connection was from a PC via modem to IBM mainframe. AFAIU doing
>>> it "proper" way would be complicated/expensive, do guys operating
>>> mainframe hacked hardware to take interrupt on each incoming
>>> character. They said that "IBM does not like interrupts", but
>>> this was for single 9600 bps line, so mainframe could handle
>>> interrupts without serious degradation of response.
>>>
>>
>> Are you sure it really passed each character over the channel, could you
>> run EMACS?
>
> Mainframe run VM and user interacted with CMS. I do not know
> if software on mainframe side took any advantage of character
> nature of the interface.
>
> How do I know? I had a short chat with mainframe guys. I remember
> this well as I was interesed in technical details. There were
> specific contraints: mainframe with equipement was there but
> for extras there was only tiny budget. Mainframe guys said
> that equipement thay had did not support async lines. I do
> not know if there were alternatives, but for some reason forcing
> chanel to char-by-char transmission was deemed cheapest
> solution.
>
>> Its not really the interupts that slow down mainframes its
>> the VTAM layers in between.
>
> I must say that I would like to know better what made things
> efficient or inefficient. As you wrote, once processing goes
> to user level there is risk of swapping. In the case above
> important use was for file transfers. In Unix I would expect
> data going to kernel buffer so that interrupts would be
> at kernel level. IIUC mainframe logicaly had single interrupt
> line so bunch of instructions was needed to determine interrupt
> source and dispatch to proper handler. OTOH thanks to chanels
> handler could be relatively simple. Still, I would expect
> between 100 and few hundred instructions per interrupt.
> At few hundred character per second during file transfers
> that give of order of hundred thousneds instructions per
> second. I think that this was relatively small load. But
> having the same load on 100 lines would be much more serious.
>
> But I do not know how smart VM were at buffering. If character
> handling went all to user code, then there was considerable
> overhead due to context switching.
>
>> No computer likes interrupts. I remember a friend who worked for a big
>> UK bank was keen to roll out "all-in-one" to all bank staff, but he said
>> it needed so many VAXs compared PROFS on the IBM he couldn't cost
>> justify it.
>>
>> I believe senior managers up got all-in-one the rest got PROFS.
>>
>>
>>> Doing this on multiple lines probably would overload the mainframe
>>> with interrupt processing. This mainframe probably had about 20
>>> discs and 100 block mode terminals, and my guesstimate is that
>>> the single character-mode line generated a comparable number
>>> of interrupts to all the other peripherals taken together...
>>>
>>
>> Well the VAX suffers the same. Let's see:
>>
>> 1. User Types Character
>> 2. VAX receives character.
>> 3. VMS looks and sees it's for a running application which is swapped.
>> 4. It pages in the interrupt handler.
>> 5. It passes the character to application
>> 6. The program looks at the character. Sees it's a letter "F"
>> 7. The program decides it needs to echo the character so asks the OS to
>> send a "P" to the terminal
>> 8. The program goes to sleep
>> 9. VMS pages it out to make room for another program that needs to
>> process a letter "U" .....
>>
>> So on a heavily loaded system that's two disk I/Os for each character
>> typed. I am sure there must be a better way....
>
> Well, there is cooked mode: echo and buffering are done by the kernel
> driver. telnet has line mode; theoretically, when the user is connected
> via the network one gets one interrupt per line, which is much more
> efficient than an interrupt per character. OTOH, almost all software I
> know immediately switches to raw mode and does character processing at
> user level.

Stupid.

>
> IIUC JavaScript was intended to solve such problems in the Web era:
> JavaScript in the browser can do per-char processing so that the web
> server effectively works in block mode.
>
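The cooked-versus-raw distinction described above maps directly onto the POSIX termios interface. A minimal sketch of the switch (the function names are my own, and error handling is trimmed):

```c
#include <termios.h>
#include <unistd.h>

/* Save the current (cooked) settings and switch fd to raw mode:
 * no echo, no line buffering, no signal keys, one byte at a time. */
int tty_make_raw(int fd, struct termios *saved)
{
    struct termios t;
    if (tcgetattr(fd, saved) < 0)
        return -1;
    t = *saved;
    t.c_lflag &= ~(tcflag_t)(ICANON | ECHO | ISIG); /* raw, no echo   */
    t.c_iflag &= ~(tcflag_t)(IXON | ICRNL | INLCR | ISTRIP);
    t.c_oflag &= ~(tcflag_t)OPOST;                  /* no output maps */
    t.c_cc[VMIN]  = 1;  /* read() returns after one byte...  */
    t.c_cc[VTIME] = 0;  /* ...with no inter-byte timeout     */
    return tcsetattr(fd, TCSAFLUSH, &t);
}

/* Put the terminal back the way we found it (cooked mode). */
int tty_restore(int fd, const struct termios *saved)
{
    return tcsetattr(fd, TCSAFLUSH, saved);
}
```

In cooked (canonical) mode the kernel driver does the echoing and line editing and wakes the program once per line; a screen editor clears ICANON and ECHO as above and eats every keystroke itself.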



--
Pete
Re: mainframe I/O, was CR or LF? [message #395219 is a reply to message #395185] Mon, 01 June 2020 12:27
scott
David Wade <g4ugm@dave.invalid> writes:
> On 31/05/2020 01:21, antispam@math.uni.wroc.pl wrote:
>> John Levine <johnl@taugh.com> wrote:
>>>
>>> I suppose that hypothetically someone could have built a TTY channel
>>> interface that interrupted on every character but I would be surprised
>>> if anyone did. More likely they'd sell you an 1130 or later a Sys/7 to
>>> be an overpriced front end processor.
>>
>
> You can't use an IBM 1130 as a terminal concentrator. It doesn't have
> terminal lines. IBM of course wanted you to use 3270 VDUs, where
> character editing was done in a local terminal controller.
> If you can offload some processing to a less expensive front-end box
> rather than buy mainframe MIPS, that's a win...
>
>
>> Around 1992 I was using an Internet connection (email) that did it.
>> The connection was from a PC via modem to an IBM mainframe. AFAIU doing
>> it the "proper" way would have been complicated/expensive, so the guys
>> operating the mainframe hacked the hardware to take an interrupt on each
>> incoming character. They said that "IBM does not like interrupts", but
>> this was for a single 9600 bps line, so the mainframe could handle the
>> interrupts without serious degradation of response.
>>
>
> Are you sure it really passed each character over the channel? Could you
> run EMACS? It's not really the interrupts that slow down mainframes, it's
> the VTAM layers in between.
>
> No computer likes interrupts. I remember a friend who worked for a big
> UK bank was keen to roll out "all-in-one" to all bank staff, but he said
> it needed so many VAXes compared to PROFS on the IBM that he couldn't
> cost-justify it.
>
> I believe senior managers and up got all-in-one; the rest got PROFS.
>
>
>> Doing this on multiple lines probably would overload the mainframe
>> with interrupt processing. This mainframe probably had about 20
>> discs and 100 block mode terminals, and my guesstimate is that
>> the single character-mode line generated a comparable number
>> of interrupts to all the other peripherals taken together...
>>
>
> Well the VAX suffers the same. Let's see:

Can suffer the same, perhaps. We used a PDP-11/44 as a terminal
concentrator feeding four vaxen to handle hundreds of terminals
(the concentrator could also feed Wylbur on the Itel AS/6 running MVS),
kinda like a Gandalf setup.
Re: mainframe I/O, was CR or LF? [message #395220 is a reply to message #395194] Mon, 01 June 2020 12:28
Charlie Gibbs
On 2020-05-31, antispam@math.uni.wroc.pl <antispam@math.uni.wroc.pl> wrote:

> David Wade <g4ugm@dave.invalid> wrote:
>
>> It's not really the interrupts that slow down mainframes, it's
>> the VTAM layers in between.
>
> I must say that I would like to know better what made things
> efficient or inefficient.

One thing nobody's yet mentioned is the overhead in the transmission
protocol itself. Mainframe terminals communicated using protocols
designed to send files across the continent, not messages across the
room. On the Univac systems I worked on, a message looked like this:

<SYN><SYN><SYN><SOH><rid><sid><did><STX><data><ETX><BCC>

where <rid><sid><did> addresses the individual terminal (and device,
e.g. attached printer) on the multi-dropped line. Also, transmissions
were typically half-duplex - you had to turn around your Bell 201 modem
when changing direction (e.g. to acknowledge the above message), and
that took time. Remember all those signals that RS-232 specifies?
There's a reason they took up most of the pins on a DB-25 connector,
and a half-duplex synchronous link needed most of them.

Then there was the overhead of polling, queueing and prioritizing
the multiple terminals on the line, etc. (most systems polled
once a second, which put a lower limit on the response time.)
Now imagine applying that overhead to every byte of a transmission,
rather than the entire message - it's no wonder that byte-by-byte
transmission never caught on.

You could do a lot of disk swapping in the time it took to
negotiate such a protocol - especially at 2400 bps.
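The <BCC> on the end of such a frame was a block check character. A small illustrative sketch, assuming an XOR-style longitudinal redundancy check over everything after <SOH> up to and including <ETX> (that is the BSC-with-ASCII convention; whether Univac's exact algorithm matched is an assumption):

```c
#include <stddef.h>

enum { SOH = 0x01, STX = 0x02, ETX = 0x03 };

/* Longitudinal redundancy check: XOR of every byte after <SOH>
 * up to and including <ETX>.  Returns the BCC value, or -1 if
 * the frame is not of the form <SOH>...<ETX>. */
int frame_bcc(const unsigned char *frame, size_t len)
{
    if (len < 2 || frame[0] != SOH || frame[len - 1] != ETX)
        return -1;
    unsigned char bcc = 0;
    for (size_t i = 1; i < len; i++)    /* skip <SOH> itself */
        bcc ^= frame[i];
    return bcc;
}
```

The receiver computes the same check over the incoming frame and compares it with the transmitted <BCC>, answering <ACK> on a match and <NAK> otherwise.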

--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <cgibbs@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.
Re: mainframe I/O, was CR or LF? [message #395221 is a reply to message #395166] Mon, 01 June 2020 12:32
scott
antispam@math.uni.wroc.pl writes:
> John Levine <johnl@taugh.com> wrote:
>>
>> I suppose that hypothetically someone could have built a TTY channel
>> interface that interrupted on every character but I would be surprised
>> if anyone did. More likely they'd sell you an 1130 or later a Sys/7 to
>> be an overpriced front end processor.
>
> Around 1992 I was using an Internet connection (email) that did it.
> The connection was from a PC via modem to an IBM mainframe. AFAIU doing
> it the "proper" way would have been complicated/expensive, so the guys
> operating the mainframe hacked the hardware to take an interrupt on each
> incoming character. They said that "IBM does not like interrupts", but
> this was for a single 9600 bps line, so the mainframe could handle the
> interrupts without serious degradation of response.
>
> Doing this on multiple lines probably would overload the mainframe
> with interrupt processing. This mainframe probably had about 20
> discs and 100 block mode terminals, and my guesstimate is that
> the single character-mode line generated a comparable number
> of interrupts to all the other peripherals taken together...

The Burroughs medium systems had real-time interrupts. They were used
to handle the check sorters, where you needed to fully handle the interrupt
while the document (at 2500 DPM) was in transit between the MICR read station
and the pocket-select station (if it took too long to process, the item
would be placed in the too-late-to-pocket-select slot and would need to
be resorted, which made the customer unhappy).

The B4900 could handle 10 sorters at full speed while processing
batch workloads.

A few years later, the pocket-select criteria were "offloaded" to
the sorter and the real-time interrupts were no longer required.

I/O on the Burroughs systems was far more efficient than on IBM; no
channel programs, no separate disk seek ops, no CPU interactions;
the CPU fired off an 8-digit I/O descriptor and the I/O hardware
handled everything else.
Re: mainframe I/O, was CR or LF? [message #395225 is a reply to message #395220] Mon, 01 June 2020 14:48
scott
Charlie Gibbs <cgibbs@kltpzyxm.invalid> writes:
> On 2020-05-31, antispam@math.uni.wroc.pl <antispam@math.uni.wroc.pl> wrote:
>
>> David Wade <g4ugm@dave.invalid> wrote:
>>
>>> It's not really the interrupts that slow down mainframes, it's
>>> the VTAM layers in between.
>>
>> I must say that I would like to know better what made things
>> efficient or inefficient.
>
> One thing nobody's yet mentioned is the overhead in the transmission
> protocol itself. Mainframe terminals communicated using protocols
> designed to send files across the continent, not messages across the
> room. On the Univac systems I worked on, a message looked like this:
>
> <SYN><SYN><SYN><SOH><rid><sid><did><STX><data><ETX><BCC>
>
> where <rid><sid><did> addresses the individual terminal (and device,
> e.g. attached printer) on the multi-dropped line. Also, transmissions
> were typically half-duplex - you had to turn around your Bell 201 modem
> when changing direction (e.g. to acknowledge the above message), and
> that took time. Remember all those signals that RS-232 specifies?
> There's a reason they took up most of the pins on a DB-25 connector,
> and a half-duplex synchronous link needed most of them.
>
> Then there was the overhead of polling, queueing and prioritizing
> the multiple terminals on the line, etc. (most systems polled
> once a second, which put a lower limit on the response time.)
> Now imagine applying that overhead to every byte of a transmission,
> rather than the entire message - it's no wonder that byte-by-byte
> transmission never caught on.

Burroughs supported a similar sequence (generally on asynchronous
serial lines, so no leading <SYN> required, though sometimes used for
padding) and most stations and DCPs/line adapters supported contention
mode as well.

In poll select, the poll sequence is:

<EOT><ad1><ad2>p<ENQ>

The addressed station (ad1:ad2) would respond with <ACK> if it had
a message to transmit, or <NAK> if it had nothing to transmit (in
which case the DCP would poll the next station).

The select sequence is:

<EOT><ad1><ad2>q<ENQ>

When the station responded with ACK, the DCP would transmit the message; NAK
would cause the DCP to retry on the next cycle through the poll loop.

In both contention and poll-select mode, the message itself was:

<SOH><ad1><ad2><STX><data><ETX><bcc>

And it was <ACK> or <NAK>'d by the receiver as necessary.

<EOT> would end the poll select sequence.
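As an illustrative sketch of the two sequences just described (control-character values from ASCII; the 'p'/'q' function codes are as given above, and the helper itself is hypothetical):

```c
#include <stddef.h>

enum { EOT = 0x04, ENQ = 0x05 };

/* Build a Burroughs-style poll ('p') or select ('q') sequence,
 * <EOT><ad1><ad2><p|q><ENQ>, into out[5].  Returns the length. */
size_t build_poll_select(unsigned char *out, char ad1, char ad2, int select)
{
    out[0] = EOT;                   /* reset all stations on the line */
    out[1] = (unsigned char)ad1;    /* two-character station address  */
    out[2] = (unsigned char)ad2;
    out[3] = select ? 'q' : 'p';    /* function code: poll or select  */
    out[4] = ENQ;                   /* invite the station to respond  */
    return 5;
}
```

The DCP would cycle through its poll list emitting one such sequence per station, acting on the <ACK>/<NAK> reply as described above.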

For contention mode:

/**
* Supports dedicated (Contention mode) point-to-point communications using
* ASCII control sequences to contend for master mode on a point to
* point connection.
*
* This procedure applies when there are two stations on a dedicated point to
* point link, with neither station designated as the master station. Both
* stations contend for master status and may seize it under the condition that
* the other station is not seizing it. Staggered re-attempts to achieve master
* status in the event of an initial simultaneous attempt are based on the
* variation in the contending terminal operator's action and response times. A
* contention function determines the master/slave relationship of the two
* stations. A terminate function returns the system to the contention
* condition.
*
* The V_IDLE condition on the communications link is that which
* follows the terminate function of the previous transmission. In this
* condition, neither station has master status, but both stations may
* bid for master status.
*
* A master station wishing to transmit a message bids for master
* status by sending the enquiry character (ASCII ENQ); after which,
* it begins the time-out function which is dependent upon the operator.
* To resolve simultaneous bids by both stations, the station which takes
* the longest time-out interval after having bid for master status
* will react to the received ENQ character as though it had not bid
* for master status. Conversely, after having bid for master status,
* the station which takes the shortest time-out interval will not respond
* to a received ENQ character. Each station will reinitiate its bids
* when the designated time-out interval has expired if the master/slave
* relationship hasn't been established.
*
* Upon receipt of the affirmative acknowledge response (ACK), the
* station bidding for master status assumes master status and proceeds
* with message transfer.
*
* Upon receipt of the negative acknowledge response (NAK), the station bidding
* for master status may reinitiate a bid for master status by sending the ENQ
* character again. The station may reinitiate its bid for master status as
* often as the operator selects.
*
* In the case of an invalid or no reply to the initial ENQ character,
* the station bidding for master status reinitiates the bid by sending
* the ENQ character again. The station reinitiates its bid for master
* status as often as the operator selects.
*
* Assuming station A bids for master status, and station B replies with
* the ACK characters, station A will assume master status and proceed
* with message transmission. If station B is not ready to receive,
* it sends a NAK character. Station A, detecting the NAK character
* may again contend for master status by operator action.
*
* After station A transmits the message to station B as the master and receives
* a positive acknowledgement (ACK) response character from station B, station A
* will terminate the transmission by sending the EOT character. If station B
* negative acknowledges the transmission with the NAK character, station A will
* retransmit the message. If station A receives an invalid, or no reply to the
* transmission, station A will send an ENQ to station B. Upon receipt of a NAK
* from station B, station A will resend the transmission; upon receipt of
* an ACK from station B, station A will terminate the transmission. Note that
* this may result in either the loss of, or duplication of the transmission.
* If after sending the ENQ character 'n' times ('n' may equal zero) a valid
* acknowledgment is not received, the master station may terminate the
* transmission with the EOT character.
*
* Failure of station A to achieve master status or to receive a valid response
* may result in transmission of an EOT character and a return to the IDLE line
* state.
*/
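The simultaneous-bid rule in that comment block can be reduced to a toy model: whichever station is configured with the shorter time-out ignores the ENQ it received and keeps its bid, while the longer time-out station behaves as though it had not bid. The tie handling below is my reading of the re-bid rule, not anything from the spec:

```c
/* Toy model of simultaneous-bid resolution in contention mode.
 * Both stations have sent ENQ at the same time.  The station with
 * the SHORTER time-out ignores the received ENQ and wins master
 * status; the other reacts as though it had not bid and becomes
 * the slave.  Returns 0 if station A wins, 1 if station B wins,
 * or -1 for equal time-outs (both simply re-bid after expiry). */
int resolve_simultaneous_bid(int timeout_a, int timeout_b)
{
    if (timeout_a == timeout_b)
        return -1;                          /* unresolved: re-bid  */
    return (timeout_a < timeout_b) ? 0 : 1; /* shorter wait wins   */
}
```

As the comment block notes, the staggering ultimately comes from variation in operator action and response times, so repeated exact ties were unlikely in practice.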
Re: mainframe I/O, was CR or LF? [message #395260 is a reply to message #395185] Tue, 02 June 2020 15:12
usenet
On Sun, 31 May 2020 09:30:37 +0100, David Wade <g4ugm@dave.invalid> wrote:
> On 31/05/2020 01:21, antispam@math.uni.wroc.pl wrote:
>> John Levine <johnl@taugh.com> wrote:
>>>
>>> I suppose that hypothetically someone could have built a TTY channel
>>> interface that interrupted on every character but I would be surprised
>>> if anyone did. More likely they'd sell you an 1130 or later a Sys/7 to
>>> be an overpriced front end processor.
>
> You can't use an IBM 1130 as a terminal concentrator. It doesn't have
> terminal lines. IBM of course wanted you to use 3270 VDUs, where
> character editing was done in a local terminal controller.
> If you can offload some processing to a less expensive front-end box
> rather than buy mainframe MIPS, that's a win...
>
>> Around 1992 I was using an Internet connection (email) that did it.
>> The connection was from a PC via modem to an IBM mainframe. AFAIU doing
>> it the "proper" way would have been complicated/expensive, so the guys
>> operating the mainframe hacked the hardware to take an interrupt on each
>> incoming character. They said that "IBM does not like interrupts", but
>> this was for a single 9600 bps line, so the mainframe could handle the
>> interrupts without serious degradation of response.
>
> Are you sure it really passed each character over the channel? Could you
> run EMACS? It's not really the interrupts that slow down mainframes, it's
> the VTAM layers in between.
>
> No computer likes interrupts. I remember a friend who worked for a big
> UK bank was keen to roll out "all-in-one" to all bank staff, but he said
> it needed so many VAXes compared to PROFS on the IBM that he couldn't
> cost-justify it.
>
> I believe senior managers and up got all-in-one; the rest got PROFS.

Depending on when this decision was made, and consequently what VAX models were
available, it may have hinged more on the relative processing power of the two
machines rather than interrupts and differences in character processing. The
initial VAX-11/780 was an approximately one-MIPS machine and could handle perhaps
thirty or forty general timesharing users. I don't know what kind of load
All-In-One presented, but I would expect it might be similar, assuming active
users.


>> Doing this on multiple lines probably would overload the mainframe
>> with interrupt processing. This mainframe probably had about 20
>> discs and 100 block mode terminals, and my guesstimate is that
>> the single character-mode line generated a comparable number
>> of interrupts to all the other peripherals taken together...

I will interject here that DEC's PDP-10 mainframes had a front-end processor --
in most cases a PDP-11 minicomputer -- that handled all the terminal lines, so
character interrupts and echoing were not an issue. And most programs used
"line mode" input -- they go into an I/O wait state while the user types and
characters get stashed in a buffer. When the user types one of the end-of-line
characters (CR, LF, VT, FF, or ESC), the program wakes up and can retrieve
them. It's been too many years, and it's a bit outside my bailiwick, so I can't
recall what happens if the user simply types and types without an end-of-line.
When the buffer is full, either the characters are thrown away or an end-of-line
is forced; I don't remember which.


> Well the VAX suffers the same. Let's see:
>
> 1. User Types Character
> 2. VAX receives character.
> 3. VMS looks and sees it's for a running application which is swapped.
> 4. It pages in the interrupt handler.
> 5. It passes the character to application
> 6. The program looks at the character. Sees it's a letter "F"
> 7. The program decides it needs to echo the character so asks the OS to
> send a "P" to the terminal
> 8. The program goes to sleep
> 9. VMS pages it out to make room for another program that needs to
> process a letter "U" .....
>
> So on a heavily loaded system that's two disk I/Os for each character
> typed. I am sure there must be a better way....

I will confess ignorance on how VAX/VMS processed characters, but I can't
imagine it was this bad. I strongly suspect it had a "line mode" input model
similar to the PDP-10 that was used by most programs. I very much doubt that
the processor was interrupted on every character typed by every user. I could
be wrong though; VMS certainly made some other big mistakes. There was a strong
"NIH" current in Spit Brook and they often ignored the lessons learned by
the software engineers working on other DEC operating systems.
Re: mainframe I/O, was CR or LF? [message #395261 is a reply to message #395260] Tue, 02 June 2020 16:28
scott
usenet@only.tnx (Questor) writes:
> On Sun, 31 May 2020 09:30:37 +0100, David Wade <g4ugm@dave.invalid> wrote:

>
>> Well the VAX suffers the same. Let's see:
>>
>> 1. User Types Character
>> 2. VAX receives character.
>> 3. VMS looks and sees it's for a running application which is swapped.
>> 4. It pages in the interrupt handler.
>> 5. It passes the character to application
>> 6. The program looks at the character. Sees it's a letter "F"
>> 7. The program decides it needs to echo the character so asks the OS to
>> send a "P" to the terminal
>> 8. The program goes to sleep
>> 9. VMS pages it out to make room for another program that needs to
>> process a letter "U" .....
>>
>> So on a heavily loaded system that's two disk I/Os for each character
>> typed. I am sure there must be a better way....
>
> I will confess ignorance on how VAX/VMS processed characters, but I can't
> imagine it was this bad.

No, it's not that bad.

Inbound characters were stored in a typeahead buffer[*]
in the kernel by the hardware driver.

When an application requested terminal input, it would be
retrieved from the typeahead buffer.

To be fair, it's been forty years and I don't recall
exactly if VMS had the equivalent of unix raw mode;
but they must have because, well, EDT on a VT-100
(complete with GOLD key).
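The typeahead scheme described here amounts to a ring buffer filled at interrupt time and drained when a program asks for input. A toy sketch (the names and size are illustrative, not VMS's actual driver structures):

```c
#include <stddef.h>

#define TA_SIZE 64  /* illustrative; the real size was configurable */

struct typeahead {
    unsigned char buf[TA_SIZE];
    size_t head;    /* next slot the interrupt side will write */
    size_t tail;    /* next slot a read request will consume   */
};

/* Called from the (notional) receive interrupt: stash one character.
 * Returns 0, or -1 if the buffer is full and the character is lost. */
int ta_put(struct typeahead *ta, unsigned char c)
{
    size_t next = (ta->head + 1) % TA_SIZE;
    if (next == ta->tail)
        return -1;              /* full: real drivers bell or drop */
    ta->buf[ta->head] = c;
    ta->head = next;
    return 0;
}

/* Called when an application requests terminal input: returns the
 * oldest typed-ahead character, or -1 if nothing is waiting. */
int ta_get(struct typeahead *ta)
{
    if (ta->tail == ta->head)
        return -1;              /* empty: caller blocks for input */
    unsigned char c = ta->buf[ta->tail];
    ta->tail = (ta->tail + 1) % TA_SIZE;
    return c;
}
```

The point of the structure is that the application never sees the per-character interrupts at all; it only drains whatever accumulated between its read requests.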


> I strongly suspect it had a "line mode" input model
> similar to the PDP-10 that was used by most programs. I very much doubt that
> the processor was interrupted on every character typed by every user. I could
> be wrong though; VMS certainly made some other big mistakes.


Do you have any examples to share?


[*] I got my first job doing systems programming on VAXen thanks
to the typeahead buffer. By default VMS 2.x took a memory
dump whenever it shut down or rebooted. The memory dump file
was able to be read (and analyzed with the analyzer tool) by
any user. It very conveniently formatted and displayed the
contents of the typeahead buffer. As the most recent input
on the console was usually the system manager logging in to
invoke the shutdown, the system manager password could usually
be found in one of the typeahead buffers in the dump file.

I showed that to the system manager and was offered a job, right
after he changed the dump file permissions. Later, a student had
discovered that the 'debug' utility could be used on programs
'install'ed with elevated privileges (such as change mode to kernel)
and it was trivially easy to inject object code to subvert system
security. Needed a VMS patch from DEC to fix that one, but not
until after the student who discovered it caused a considerable
amount of damage.
Re: mainframe I/O, was CR or LF? [message #395282 is a reply to message #395261] Wed, 03 June 2020 04:13
Originally posted by: David Wade

On 02/06/2020 21:28, Scott Lurndal wrote:
> usenet@only.tnx (Questor) writes:
>> On Sun, 31 May 2020 09:30:37 +0100, David Wade <g4ugm@dave.invalid> wrote:
>
>>
>>> Well the VAX suffers the same. Let's see:
>>>
>>> 1. User Types Character
>>> 2. VAX receives character.
>>> 3. VMS looks and sees it's for a running application which is swapped.
>>> 4. It pages in the interrupt handler.
>>> 5. It passes the character to application
>>> 6. The program looks at the character. Sees it's a letter "F"
>>> 7. The program decides it needs to echo the character so asks the OS to
>>> send a "P" to the terminal
>>> 8. The program goes to sleep
>>> 9. VMS pages it out to make room for another program that needs to
>>> process a letter "U" .....
>>>
>>> So on a heavily loaded system that's two disk I/Os for each character
>>> typed. I am sure there must be a better way....
>>
>> I will confess ignorance on how VAX/VMS processed characters, but I can't
>> imagine it was this bad.
>
> No, it's not that bad.
>
> Inbound characters were stored in a typeahead buffer[*]
> in the kernel by the hardware driver.
>
> When an application requested terminal input, it would be
> retrieved from the typeahead buffer.
>
> To be fair, it's been forty years and I don't recall
> exactly if VMS had the equivalent of unix raw mode;
> but they must have because, well, EDT on a VT-100
> (complete with GOLD key).

As the whole intent was to run All-In-One, which is screen-based, you
ended up in raw mode...


>
>
>> I strongly suspect it had a "line mode" input model
>> similar to the PDP-10 that was used by most programs. I very much doubt that
>> the processor was interrupted on every character typed by every user. I could
>> be wrong though; VMS certainly made some other big mistakes.
>
>
> Do you have any examples to share?
>
>
> [*] I got my first job doing systems programming on VAXen thanks
> to the typeahead buffer. By default VMS 2.x took a memory
> dump whenever it shut down or rebooted. The memory dump file
> was able to be read (and analyzed with the analyzer tool) by
> any user. It very conveniently formatted and displayed the
> contents of the typeahead buffer. As the most recent input
> on the console was usually the system manager logging in to
> invoke the shutdown, the system manager password could usually
> be found in one of the typeahead buffers in the dump file.
>
> I showed that to the system manager and was offered a job, right
> after he changed the dump file permissions. Later, a student had
> discovered that the 'debug' utility could be used on programs
> 'install'ed with elevated privileges (such as change mode to kernel)
> and it was trivially easy to inject object code to subvert system
> security. Needed a VMS patch from DEC to fix that one, but not
> until after the student who discovered it caused a considerable
> amount of damage.
>
Re: mainframe I/O, was CR or LF? [message #395283 is a reply to message #395282] Wed, 03 June 2020 04:33
Originally posted by: Fred Smith

On 2020-06-03, David Wade <g4ugm@dave.invalid> wrote:
> On 02/06/2020 21:28, Scott Lurndal wrote:
>> usenet@only.tnx (Questor) writes:
>>> On Sun, 31 May 2020 09:30:37 +0100, David Wade <g4ugm@dave.invalid> wrote:
>>
>>>
>>>> Well the VAX suffers the same. Let's see:
>>>>
>>>> 1. User Types Character
>>>> 2. VAX receives character.
>>>> 3. VMS looks and sees it's for a running application which is swapped.
>>>> 4. It pages in the interrupt handler.
>>>> 5. It passes the character to application
>>>> 6. The program looks at the character. Sees it's a letter "F"
>>>> 7. The program decides it needs to echo the character so asks the OS to
>>>> send a "P" to the terminal
>>>> 8. The program goes to sleep
>>>> 9. VMS pages it out to make room for another program that needs to
>>>> process a letter "U" .....
>>>>
>>>> So on a heavily loaded system that's two disk I/Os for each character
>>>> typed. I am sure there must be a better way....
>>>
>>> I will confess ignorance on how VAX/VMS processed characters, but I can't
>>> imagine it was this bad.
>>
>> No, it's not that bad.
>>
>> Inbound characters were stored in a typeahead buffer[*]
>> in the kernel by the hardware driver.
>>
>> When an application requested terminal input, it would be
>> retrieved from the typeahead buffer.
>>
>> To be fair, it's been forty years and I don't recall
>> exactly if VMS had the equivalent of unix raw mode;
>> but they must have because, well, EDT on a VT-100
>> (complete with GOLD key).
>
> As the whole intent was to run All-In-One, which is screen-based, you
> ended up in raw mode...
>
>

Ahh, the joys of trying to understand all the ins-and-outs of SYS$QIO...
Re: mainframe I/O, was CR or LF? [message #395295 is a reply to message #395283] Wed, 03 June 2020 12:20
scott
Fred Smith <fred@thejanitor.corp> writes:
> On 2020-06-03, David Wade <g4ugm@dave.invalid> wrote:
>> On 02/06/2020 21:28, Scott Lurndal wrote:
>>> usenet@only.tnx (Questor) writes:
>>>> On Sun, 31 May 2020 09:30:37 +0100, David Wade <g4ugm@dave.invalid> wrote:
>>>
>>>>
>>>> > Well the VAX suffers the same. Let's see:
>>>> >
>>>> > 1. User Types Character
>>>> > 2. VAX receives character.
>>>> > 3. VMS looks and sees it's for a running application which is swapped.
>>>> > 4. It pages in the interrupt handler.
>>>> > 5. It passes the character to application
>>>> > 6. The program looks at the character. Sees it's a letter "F"
>>>> > 7. The program decides it needs to echo the character so asks the OS to
>>>> > send a "P" to the terminal
>>>> > 8. The program goes to sleep
>>>> > 9. VMS pages it out to make room for another program that needs to
>>>> > process a letter "U" .....
>>>> >
>>>> > So on a heavily loaded system that's two disk I/Os for each character
>>>> > typed. I am sure there must be a better way....
>>>>
>>>> I will confess ignorance on how VAX/VMS processed characters, but I can't
>>>> imagine it was this bad.
>>>
>>> No, it's not that bad.
>>>
>>> Inbound characters were stored in a typeahead buffer[*]
>>> in the kernel by the hardware driver.
>>>
>>> When an application requested terminal input, it would be
>>> retrieved from the typeahead buffer.
>>>
>>> To be fair, it's been forty years and I don't recall
>>> exactly if VMS had the equivalent of unix raw mode;
>>> but they must have because, well, EDT on a VT-100
>>> (complete with GOLD key).
>>
>> As the whole intent was to run All-In-One, which is screen-based, you
>> ended up in raw mode...
>>
>>
>
> Ahh, the joys of trying to understand all the ins-and-outs of SYS$QIO...
>

Indeed, I dug up some old MACRO-32 code that used screen escape sequences:

.Page
.Sbttl {*** PMON$CLEAR ***} Clear the screen
.Psect pmon$clear,exe,nowrt,byte

.Entry pmon$clear,^m<>

clrl r1
bbc #pmon$v_hard,flags,5$ ; Hardcopy request?
movc5 #0,(sp),#^a" ",hbuflen,@hbufadr ; Yes, clear buf
ret
5$:
bbc #pmon$v_haze,flags,10$ ; Are we on a Hazeltine
movb #^a"~",movebuffer ; <~><^> clear screen
movb #28,movebuffer+1
movzbw #2,r1
brb 30$ ; branch to print section
10$:
bbc #pmon$v_dm,flags,20$ ; Is this a Datamedia?
movb #31,movebuffer ; - <us> = clear seq
movzbw #1,r1 ; Length of sequence
brb 30$ ; branch to print section
20$:
bbc #pmon$v_vt52,flags,25$ ; Is this a vt52?
movab movebuffer,r0
movb #27,(r0)+ ; <esc><H><esc><J>
movb #^a"H",(r0)+
movb #27,(r0)+
movb #^a"J",(r0)+ ; Home, Erase to EOS
movzbw #4,r1
brb 30$
25$:
movb #26,movebuffer ; ADM3A <sub> = clear seq
movb #1,r1 ; Length of sequence
30$:
$Qiow_s - ; Print section
chan = tchan ,-
func = #io$_writevblk!io$m_noformat ,-
p1 = movebuffer ,-
p2 = r1

ret

The program only produced output (it used $GETJPI to get process information,
displayed it on the screen, and updated it every N seconds).
Re: mainframe I/O, was CR or LF? [message #395307 is a reply to message #395261] Wed, 03 June 2020 14:25
usenet
On Tue, 02 Jun 2020 20:28:45 GMT, scott@slp53.sl.home (Scott Lurndal) wrote:
> usenet@only.tnx (Questor) writes:
>> On Sun, 31 May 2020 09:30:37 +0100, David Wade <g4ugm@dave.invalid> wrote:
>>
>>> Well the VAX suffers the same. Let's see:
>>>
>>> 1. User Types Character
>>> 2. VAX receives character.
>>> 3. VMS looks and sees it's for a running application which is swapped.
>>> 4. It pages in the interrupt handler.
>>> 5. It passes the character to application
>>> 6. The program looks at the character. Sees it's a letter "F"
>>> 7. The program decides it needs to echo the character so asks the OS to
>>> send a "P" to the terminal
>>> 8. The program goes to sleep
>>> 9. VMS pages it out to make room for another program that needs to
>>> process a letter "U" .....
>>>
>>> So on a heavily loaded system that's two disk I/Os for each character
>>> typed. I am sure there must be a better way....
>>
>> I will confess ignorance on how VAX/VMS processed characters, but I can't
>> imagine it was this bad.
>
> No, it's not that bad.
>
> Inbound characters were stored in a typeahead buffer[*]
> in the kernel by the hardware driver.
>
> When an application requested terminal input, it would be
> retrieved from the typeahead buffer.
>
> To be fair, it's been forty years and I don't recall
> exactly if VMS had the equivalent of unix raw mode;
> but they must have because, well, EDT on a VT-100
> (complete with GOLD key).
>
>
>> I strongly suspect it had a "line mode" input model
>> similar to the PDP-10 that was used by most programs. I very much doubt that
>> the processor was interrupted on every character typed by every user. I could
>> be wrong though; VMS certainly made some other big mistakes.
>
> Do you have any examples to share?

In old age first thing to go is memory.

I forget what the second thing is.


I have to offer the following caveats. I don't remember a lot of the details.
One person's feature is another one's mistake. The time period would be the
mid-1980s.

I think making the page size only 512 bytes was a big mistake and wasn't forward
looking, given the trends of increasingly larger and increasingly less expensive
memory.

VMS lacked some useful queue options that the TOPS-10/20 systems had. (For the
uninitiated, the queueing system is how one submitted and controlled print
requests, card or paper tape punching, and batch jobs, a form of automated
script processing.)

While apparently complete, the help system was cumbersome to use. I found it
difficult and ineffective as a means of quickly learning the big picture about a
program or a command.

Getting into the weeds, there was some function in the Data Access Protocol
(DAP, a specification for doing remote file access operations over DECnet)
which in VMS DECnet included an implicit rewind of a file, while the TOPS-10/20
implementation essentially left the file at EOF, because a rewind was not
explicitly in the spec. Okay, that's not a big one, but it demonstrates a
certain inter-group dynamic.

DEC was making a big bet on homogenizing their product lines with VAX/VMS
machines while deemphasizing their "legacy" offerings. The VMS group was
getting a big chunk of company resources and it seemed to result in the attitude
of, "we've decided to do it this way, and we don't have to care much about what
other groups think because we're VMS." I think the majority of VAX/VMS people
came into it from the PDP-11 world, so to some extent they were hobbled by small
system thinking even as they built bigger and bigger VAXes.


> [*] I got my first job doing systems programming on VAXen thanks
> to the typeahead buffer. By default VMS 2.x took a memory
> dump whenever it shut down or rebooted. The memory dump file
> was able to be read (and analyzed with the analyzer tool) by
> any user. It very conveniently formatted and displayed the
> contents of the typeahead buffer. As the most recent input
> on the console was usually the system manager logging in to
> invoke the shutdown, the system manager password could usually
> be found in one of the typeahead buffers in the dump file.

Heh heh heh.


> I showed that to the system manager and was offered a job, right
> after he changed the dump file permissions. Later, a student had
> discovered that the 'debug' utility could be used on programs
> 'install'ed with elevated privileges (such as change mode to kernel)
> and it was trivially easy to inject object code to subvert system
> security. Needed a VMS patch from DEC to fix that one, but not
> until after the student who discovered it caused a considerable
> amount of damage.

TOPS-10 and TOPS-20 also had a facility where unprivileged users could run
certain system programs which could perform privileged operations. These were
often vectors for security intrusions. TOPS-10 and TOPS-20 also allowed a small
subset of operating system commands to be executed without logging in -- for
example, one could see the system status or view the print queue. This created
another "attack surface," to use the current phrasing.
Re: mainframe I/O, was CR or LF? [message #395321 is a reply to message #395307] Wed, 03 June 2020 17:20
Rich Alderson
usenet@only.tnx (Questor) writes:

> DEC was making a big bet on homogenizing their product lines with VAX/VMS
> machines while deeemphasizing their "legacy" offerings. The VMS group was
> getting a big chunk of company resources and it seemed to result in the
> attitude of, "we've decided to do it this way, and we don't have to care much
> about what other groups think because we're VMS." I think the majority of
> VAX/VMS people came into it from the PDP-11 world, so to some extent they
> were hobbled by small system thinking even as they built bigger and bigger
> VAXes.

Engineers from the PDP-11, managers from the IBM middle management world, and
everything went to hell in a handcart.

As my friend BAH is wont to point out frequently.

--
Rich Alderson news@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen
Re: mainframe I/O, was CR or LF? [message #395368 is a reply to message #395307] Thu, 04 June 2020 11:18
scott
usenet@only.tnx (Questor) writes:
> On Tue, 02 Jun 2020 20:28:45 GMT, scott@slp53.sl.home (Scott Lurndal) wrote:
>> usenet@only.tnx (Questor) writes:
>>> On Sun, 31 May 2020 09:30:37 +0100, David Wade <g4ugm@dave.invalid> wrote:

>>> I strongly suspect it had a "line mode" input model
>>> similar to the PDP-10 that was used by most programs. I very much doubt that
>>> the processor was interrupted on every character typed by every user. I could
>>> be wrong though; VMS certainly made some other big mistakes.
>>
>> Do you have any examples to share?
>
> In old age first thing to go is memory.
>
> I forget what the second thing is.
>
>
> I have to offer the following caveats. I don't remember a lot of the details.
> One person's feature is another one's mistake. The time period would be the
> mid-1980s.
>
> I think making the page size only 512 bytes was a big mistake and wasn't forward
> looking, given the trends of increasingly larger and increasingly less expensive
> memory.

Although in the late 70's, I'm not sure how evident that trend was...

>
> VMS lacked some useful queue options that the TOPS-10/20 systems had. (For the
> uninitiated, the queueing system is how one submitted and controlled print
> requests, card or paper tape punching, and batch jobs, a form of automated
> script processing.)

One of my first tasks after being hired was to develop a print symbiont
for remote printers that included accounting (rpsacc - remote printing
accounting); the mainframe folks had been billing various departments
for lines printed, cards read, cpu seconds, memory seconds, tape I/O,
disk I/O, etc. VMS didn't at the time have any accounting provisions
for printed output (and remote printing was not well supported in VMS 2.x),
so they tasked me to write a print symbiont for that. The print symbiont
was written in Macro-32, RPSACC in Vax PASCAL.


[INHERIT('SYS$LIBRARY:STARLET'),
 IDENT('V01-000')]

PROGRAM Remote_print_accounting( Quax );

CONST
    Max_message_size = 64;

%INCLUDE 'SYS$LIBRARY:PASSTATUS/NOLIST'

TYPE
    Unsigned_byte = [BYTE] 0..255;
    Unsigned_word = [WORD] 0..65535;

    Iosb_type = [QUAD] RECORD
        Status: Unsigned_word;
        Length: Unsigned_word;
        Spare:  UNSIGNED;
    END;

    Onebit = [BIT(1)] BOOLEAN;

    PRCSDEF = PACKED RECORD             { Message format to quota manager }
        CASE INTEGER OF
        0:(
            PRCS_V_IAC:     [POS(16)] ONEBIT;
            PRCS_V_BAT:     [POS(17)] ONEBIT;
            PRCS_V_NET:     [POS(18)] ONEBIT;
            PRCS_V_PRC:     [POS(19)] ONEBIT;
            PRCS_V_SUB:     [POS(20)] ONEBIT;
            PRCS_V_PRT:     [POS(21)] ONEBIT;
            PRCS_V_DLO:     [POS(22)] ONEBIT;

            PRCS_V_LOGIN:   [POS(32)] ONEBIT;
            PRCS_V_LOGOUT:  [POS(33)] ONEBIT;
            PRCS_V_BYPASLO: [POS(34)] ONEBIT;
            PRCS_V_WARNED:  [POS(35)] ONEBIT;
            PRCS_V_KILLED:  [POS(36)] ONEBIT;
            PRCS_V_DELPRT:  [POS(37)] ONEBIT;
            PRCS_V_RPLYPND: [POS(38)] ONEBIT;
            PRCS_V_ALTER:   [POS(39)] ONEBIT);

        1:(
            PRCS_FLAGS:     [POS(0),LONG] UNSIGNED;
            PRCS_STATUS:    [POS(32),LONG] UNSIGNED;
            PRCS_USERNAME:  [POS(64),BYTE(12)] PACKED ARRAY [1..12] OF CHAR;
            PRCS_UIC:       [LONG] UNSIGNED;
            PRCS_QMASK:     [LONG] UNSIGNED;
            PRCS_SMBTIME:   [QUAD,UNSAFE] PACKED ARRAY [1..2] OF UNSIGNED;
            PRCS_SMBQUENAM: [BYTE(16)] PACKED ARRAY [1..16] OF CHAR;
            PRCS_SMBPAGCNT: [LONG] UNSIGNED);
    END;                                { Message to quota manager }

    Local_mailbox_message = PACKED RECORD
        Net_channel:    Unsigned_word;
        Net_iosb:       Iosb_type;
        Reply_unit,                     { Unit number for reply to symbiont }
        Request_type:   Unsigned_word;  { 0 - Get quax record, 1 - Update }
        Username:       PACKED ARRAY [1..12] OF CHAR;
        Uic:            UNSIGNED;
        Queue_name:     PACKED ARRAY [1..16] OF CHAR;  { byte 1 = length }
        Pages_printed:  UNSIGNED;
        Source_node:    VARYING [6] OF CHAR;
    END;                                { Local message format }

...

>
> While apparently complete, the help system was cumbersome to use. I found it
> difficult and ineffective as a means to quickly learning the big picture about a
> program or a command.

From my perspective (coming from a TSS8/HP3000 background), any help was better
than none :-)
Re: mainframe I/O, was CR or LF? [message #395369 is a reply to message #395321] Thu, 04 June 2020 11:20
scott
Rich Alderson <news@alderson.users.panix.com> writes:
> usenet@only.tnx (Questor) writes:
>
>> DEC was making a big bet on homogenizing their product lines with VAX/VMS
>> machines while deeemphasizing their "legacy" offerings. The VMS group was
>> getting a big chunk of company resources and it seemed to result in the
>> attitude of, "we've decided to do it this way, and we don't have to care much
>> about what other groups think because we're VMS." I think the majority of
>> VAX/VMS people came into it from the PDP-11 world, so to some extent they
>> were hobbled by small system thinking even as they built bigger and bigger
>> VAXes.
>
> Engineers from the PDP-11, managers from the IBM middle management world, and
> everything went to hell in a handcart.
>
> As my friend BAH is wont to point out frequently.

Although I think it's fair to say that VMS outlasted the PDP-10 and successors in the long
run, and there may still be production users on OpenVMS.

Personally, I found VMS a joy to use.
Re: mainframe I/O, was CR or LF? [message #395376 is a reply to message #395321] Thu, 04 June 2020 13:41
usenet
On 03 Jun 2020 17:20:07 -0400, Rich Alderson <news@alderson.users.panix.com>
wrote:
> usenet@only.tnx (Questor) writes:
>
>> DEC was making a big bet on homogenizing their product lines with VAX/VMS
>> machines while deeemphasizing their "legacy" offerings. The VMS group was
>> getting a big chunk of company resources and it seemed to result in the
>> attitude of, "we've decided to do it this way, and we don't have to care much
>> about what other groups think because we're VMS." I think the majority of
>> VAX/VMS people came into it from the PDP-11 world, so to some extent they
>> were hobbled by small system thinking even as they built bigger and bigger
>> VAXes.
>
> Engineers from the PDP-11, managers from the IBM middle management world, and
> everything went to hell in a handcart.

The IBM managers came later. I think a shift to a more business-oriented
approach (as opposed to an engineering-based one) was a good idea, but it didn't
turn out well. They should have been hiring managers from smaller, leaner
companies, not one with an even bigger bureaucracy.

I think DEC's upper management foresaw the move towards more commodity
computing, but I suspect they underestimated just how far that trend was going
to go. And they backed the wrong model. They envisioned a VMS terminal on
every desk, but the world was already moving towards a computer -- a PC -- on
every desk. The pitch was that VMS was something of a universal computing
solution, and one merely had to select the appropriately sized VAX machine. (This
was derided internally as "one strategy, one egg, one basket.") However, the
concerns of a small computer user are not completely aligned with those of
someone running a very big installation. So while the idea may sound good in
theory, in practice shrinking or stretching VMS to cover every use case didn't
work as well as hoped.

I could say quite a bit about DEC's decline and demise, but I'm not writing that
essay today, nor posting it here.
Re: mainframe I/O, was CR or LF? [message #395382 is a reply to message #395369] Thu, 04 June 2020 15:51
John Levine
In article <7j8CG.525300$TM6.311409@fx42.iad>,
Scott Lurndal <slp53@pacbell.net> wrote:
> Although I think it's fair to say that VMS outlasted the PDP-10 and successors in the long
> run, and there may still be production users on OpenVMS.

No question about that. Much though we loved the PDP-10, word
addressed machines were showing their age, and although the address
expansion was done about as well as it could, it was still a kludge
and if you had to rewrite your programs anyway, might as well rewrite
them for something with a big address space without sections.

I wouldn't so much say that the VAX was better as that it was good
enough, and it was clear to everyone that 32-bit byte-addressed
machines were the route to the future.

--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Re: memory sizes, mainframe I/O, was CR or LF? [message #395385 is a reply to message #395368] Thu, 04 June 2020 16:53
John Levine
In article <1h8CG.525299$TM6.280020@fx42.iad>,
Scott Lurndal <slp53@pacbell.net> wrote:
>> I think making the page size only 512 bytes was a big mistake and wasn't forward
>> looking, given the trends of increasingly larger and increasingly less expensive
>> memory.
>
> Although in the late 70's, I'm not sure how evident that trend was...

4K DRAM chips appeared in 1973, 16K DRAM chips in 1974, and chip
densities were doubling every year or two.

IBM's S/370 in the early 1970s had both 2K and 4K pages. The 512 byte
VAX pages were obviously too small at the time.




--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Re: memory sizes, mainframe I/O, was CR or LF? [message #395386 is a reply to message #395385] Thu, 04 June 2020 18:22
scott
John Levine <johnl@taugh.com> writes:
> In article <1h8CG.525299$TM6.280020@fx42.iad>,
> Scott Lurndal <slp53@pacbell.net> wrote:
>>> I think making the page size only 512 bytes was a big mistake and wasn't forward
>>> looking, given the trends of increasingly larger and increasingly less expensive
>>> memory.
>>
>> Although in the late 70's, I'm not sure how evident that trend was...
>
> 4K DRAM chips appeared in 1973, 16K DRAM chips in 1974, and chip
> densities were doubling every year or two.
>
> IBM's S/370 in the early 1970s had both 2K and 4K pages. The 512 byte
> VAX pages were obviously too small at the time.

I wonder if they wanted the size of the page to match the basic
disk sector size; perhaps to avoid checkerboarding when managing
the working set (another, hmmm, interesting VMS feature).
Re: mainframe I/O, was CR or LF? [message #395390 is a reply to message #395368] Thu, 04 June 2020 19:26
Peter Flass
Scott Lurndal <scott@slp53.sl.home> wrote:
> usenet@only.tnx (Questor) writes:
>> On Tue, 02 Jun 2020 20:28:45 GMT, scott@slp53.sl.home (Scott Lurndal) wrote:
>>> usenet@only.tnx (Questor) writes:
>>>> On Sun, 31 May 2020 09:30:37 +0100, David Wade <g4ugm@dave.invalid> wrote:
>
>>>> I strongly suspect it had a "line mode" input model
>>>> similar to the PDP-10 that was used by most programs. I very much doubt that
>>>> the processor was interrupted on every character typed by every user. I could
>>>> be wrong though; VMS certainly made some other big mistakes.
>>>
>>> Do you have any examples to share?
>>
>> In old age first thing to go is memory.
>>
>> I forget what the second thing is.
>>
>>
>> I have to offer the following caveats. I don't remember a lot of the details.
>> One person's feature is another one's mistake. The time period would be the
>> mid-1980s.
>>
>> I think making the page size only 512 bytes was a big mistake and wasn't forward
>> looking, given the trends of increasingly larger and increasingly less expensive
>> memory.
>
> Although in the late 70's, I'm not sure how evident that trend was...

Multics used 1K (word) pages, IIRC. VM used 2KB initially and later 4KB.
VMS came from too much of a minicomputer background. Of course they could
have, and maybe did, made up for this by handling physical pages only in
groups of four or eight.

>
>>
>> VMS lacked some useful queue options that the TOPS-10/20 systems had. (For the
>> uninitiated, the queueing system is how one submitted and controlled print
>> requests, card or paper tape punching, and batch jobs, a form of automated
>> script processing.)
>
> One of my first tasks after being hired was to develop a print symbiont
> for remote printers that included accounting (rpsacc - remote printing
> accounting); the mainframe folks had been billing various departments
> for lines printed, cards read, cpu seconds, memory seconds, tape I/O,
> disk I/O, etc. VMS didn't at the time have any accounting provisions
> for printed output (and remote printing was not well supported in VMS 2.x),
> so they tasked me to write a print symbiont for that. The print symbiont
> was written in Macro-32, RPSACC in Vax PASCAL.
>

--
Pete
Re: mainframe I/O, was CR or LF? [message #395391 is a reply to message #395369] Thu, 04 June 2020 19:26
Peter Flass
Scott Lurndal <scott@slp53.sl.home> wrote:
> Rich Alderson <news@alderson.users.panix.com> writes:
>> usenet@only.tnx (Questor) writes:
>>
>>> DEC was making a big bet on homogenizing their product lines with VAX/VMS
>>> machines while deeemphasizing their "legacy" offerings. The VMS group was
>>> getting a big chunk of company resources and it seemed to result in the
>>> attitude of, "we've decided to do it this way, and we don't have to care much
>>> about what other groups think because we're VMS." I think the majority of
>>> VAX/VMS people came into it from the PDP-11 world, so to some extent they
>>> were hobbled by small system thinking even as they built bigger and bigger
>>> VAXes.
>>
>> Engineers from the PDP-11, managers from the IBM middle management world, and
>> everything went to hell in a handcart.
>>
>> As my friend BAH is wont to point out frequently.
>
> Although I think it's fair to say that VMS outlasted the PDP-10 and successors in the long
> run, and there may still be production users on OpenVMS.
>
> Personally, I found VMS a joy to use.
>

I always liked it, but we never stressed it too much.

--
Pete
Re: mainframe I/O, was CR or LF? [message #395392 is a reply to message #395376] Thu, 04 June 2020 19:26
Peter Flass
Questor <usenet@only.tnx> wrote:
> On 03 Jun 2020 17:20:07 -0400, Rich Alderson <news@alderson.users.panix.com>
> wrote:
>> usenet@only.tnx (Questor) writes:
>>
>>> DEC was making a big bet on homogenizing their product lines with VAX/VMS
>>> machines while deeemphasizing their "legacy" offerings. The VMS group was
>>> getting a big chunk of company resources and it seemed to result in the
>>> attitude of, "we've decided to do it this way, and we don't have to care much
>>> about what other groups think because we're VMS." I think the majority of
>>> VAX/VMS people came into it from the PDP-11 world, so to some extent they
>>> were hobbled by small system thinking even as they built bigger and bigger
>>> VAXes.
>>
>> Engineers from the PDP-11, managers from the IBM middle management world, and
>> everything went to hell in a handcart.
>
> The IBM managers came later. I think a shift to a more business-orientated
> approach (as opposed to an engineering-based one) was a good idea, but it didn't
> turn out well. They should have been hiring managers from smaller, leaner
> companies, not one with an even bigger bureaucracy.
>
> I think DEC's upper management foresaw the move towards more commodity
> computing, but I suspect they underestimated just how far that trend was going
> to go. And they backed the wrong model. They envisioned a VMS terminal on
> every desk, but the world was already moving towards a computer -- a PC -- on
> every desk. The pitch was that VMS was something of a universal computing
> solution, and one merely had to select the appropriate sized VAX machine. (This
> was derided internally as "one strategy, one egg, one basket.") However the
> concerns of a small computer user are not completely aligned with those of
> someone running a very big installation. So while the idea may sound good in
> theory, in practice shrinking or stretching VMS to cover every use case didn't
> work as well as hoped.
>

VAXClusters should have covered the shrinking or stretching for most cases,
unless some program needed something the size of a Cray. A MicroVMS box on
every desk would be about the equivalent of an IBM PC on every desk, but
the prices were never low enough.

> I could say quite a bit about DEC's decline and demise, but I'm not writing that
> essay today, nor posting it here.
>
>



--
Pete
Re: memory sizes, mainframe I/O, was CR or LF? [message #395393 is a reply to message #395386] Thu, 04 June 2020 19:27
Dan Espen
scott@slp53.sl.home (Scott Lurndal) writes:

> John Levine <johnl@taugh.com> writes:
>> In article <1h8CG.525299$TM6.280020@fx42.iad>,
>> Scott Lurndal <slp53@pacbell.net> wrote:
>>>> I think making the page size only 512 bytes was a big mistake and wasn't forward
>>>> looking, given the trends of increasingly larger and increasingly less expensive
>>>> memory.
>>>
>>> Although in the late 70's, I'm not sure how evident that trend was...
>>
>> 4K DRAM chips appeared in 1973, 16K DRAM chips in 1974, and chip
>> densities were doubling every year or two.
>>
>> IBM's S/370 in the early 1970s had both 2K and 4K pages. The 512 byte
>> VAX pages were obviously too small at the time.
>
> I wonder if they wanted the size of the page to match the basic
> disk sector size; perhaps to avoid checkerboarding when managing
> the working set (another, hmmm, interesting VMS feature).

Makes sense.

By going down to 512 they drastically increase the odds that they can
page things out that they never have to page in again.

When I worked on S/360 I always wondered how many 4K pages I had where I
only really accessed one byte with any frequency.

The OS has to keep some kind of map; each real page is mapped to some
address space/real address. It's been too long since I read POPs on
this, but assuming each 512 virtual bytes needs 2 addresses to track it,
that's only an overhead of 8 bytes for every 512 virtual bytes.
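That back-of-the-envelope can be checked with a quick sketch (a hypothetical illustration only, assuming a flat 8 bytes of map bookkeeping per resident page, as estimated above):

```python
# Fraction of memory spent on the page map at various page sizes,
# assuming ~8 bytes of bookkeeping per page.
def map_overhead(page_size_bytes, map_bytes_per_page=8):
    return map_bytes_per_page / page_size_bytes

for size in (512, 2048, 4096):  # VAX, S/370 2K, S/370 4K
    print(f"{size:4d}-byte pages: {100 * map_overhead(size):.3f}% overhead")
```

Even at 512-byte pages the map itself costs under 2% of memory; the interesting cost of small pages is walk depth and I/O, not map space.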

--
Dan Espen
Re: mainframe I/O, was CR or LF? [message #395394 is a reply to message #395392] Thu, 04 June 2020 20:04
Quadibloc
Speaking of the Cray, you could get a vector add-on for a VAX that let it work like a Cray.
Re: memory sizes, mainframe I/O, was CR or LF? [message #395419 is a reply to message #395393] Fri, 05 June 2020 12:10
scott
Dan Espen <dan1espen@gmail.com> writes:
> scott@slp53.sl.home (Scott Lurndal) writes:
>
>> John Levine <johnl@taugh.com> writes:
>>> In article <1h8CG.525299$TM6.280020@fx42.iad>,
>>> Scott Lurndal <slp53@pacbell.net> wrote:
>>>> >I think making the page size only 512 bytes was a big mistake and wasn't forward
>>>> >looking, given the trends of increasingly larger and increasingly less expensive
>>>> >memory.
>>>>
>>>> Although in the late 70's, I'm not sure how evident that trend was...
>>>
>>> 4K DRAM chips appeared in 1973, 16K DRAM chips in 1974, and chip
>>> densities were doubling every year or two.
>>>
>>> IBM's S/370 in the early 1970s had both 2K and 4K pages. The 512 byte
>>> VAX pages were obviously too small at the time.
>>
>> I wonder if they wanted the size of the page to match the basic
>> disk sector size; perhaps to avoid checkerboarding when managing
>> the working set (another, hmmm, interesting VMS feature).
>
> Makes sense.
>
> By going down to 512 they drastically increase the odds that they can
> page things out that they never have to page in again.
>
> When I worked on S/360 I always wondered how many 4K pages I had where I
> only really accessed one byte with any frequency.
>
> The OS has to keep some kind of map, each real page is mapped to some
> address space/real address. It's been too long since I read POPs on
> this but assuming each 512 virtual bytes needs 2 addresses to track it,
> that's only an overhead of 8 bytes for every 512 virtual bytes.

Page tables are interesting beasts. Generally there are slightly more than
four bytes of page table overhead per page (in a 32-bit architecture; eight
bytes in 64-bit). Most page tables are tree structures three or four levels
deep, and some support final entries at higher levels in the tree, which
provides larger page sizes (e.g. Intel x86-64 supports 4k, 2M and 1G pages;
ARM64 has three basic 'granule' sizes, 4k, 16k and 64k, and by terminating
the lookup at higher levels supports two or three larger block sizes per
granule size).

Then you have the hardware virtualization solutions, where the page tables
are nested (the guest page table physical addresses are in turn translated
by another set of hypervisor page tables into real physical addresses). For
performance you need a bunch of TLBs, since a single table walk in the
nested case, where both levels use 4k pages, requires 23 memory accesses;
that can be reduced to 11 using 1GB pages on the hypervisor side.
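The combinatorics behind those access counts can be sketched with one common accounting: each of the n guest-table lookups needs its own m-level host walk plus the guest PTE read itself, and the resulting guest-physical address needs one final host walk, giving (n+1)*(m+1)-1 accesses. This is a sketch only; exact figures such as the 23 and 11 quoted above depend on which terminal accesses one counts.

```python
# Memory accesses for a nested (two-dimensional) page-table walk,
# using the common (n+1)*(m+1)-1 accounting for an n-level guest
# walk translated through an m-level host walk.
def nested_walk_accesses(guest_levels, host_levels):
    return (guest_levels + 1) * (host_levels + 1) - 1

print(nested_walk_accesses(4, 4))  # both sides 4-level (4k pages): 24
print(nested_walk_accesses(4, 2))  # 1GB host pages shorten the host walk: 14
```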
Re: memory sizes, mainframe I/O, was CR or LF? [message #395426 is a reply to message #395419] Fri, 05 June 2020 13:03
Peter Flass
Scott Lurndal <scott@slp53.sl.home> wrote:
> Dan Espen <dan1espen@gmail.com> writes:
>> scott@slp53.sl.home (Scott Lurndal) writes:
>>
>>> John Levine <johnl@taugh.com> writes:
>>>> In article <1h8CG.525299$TM6.280020@fx42.iad>,
>>>> Scott Lurndal <slp53@pacbell.net> wrote:
>>>> >> I think making the page size only 512 bytes was a big mistake and wasn't forward
>>>> >> looking, given the trends of increasingly larger and increasingly less expensive
>>>> >> memory.
>>>> >
>>>> > Although in the late 70's, I'm not sure how evident that trend was...
>>>>
>>>> 4K DRAM chips appeared in 1973, 16K DRAM chips in 1974, and chip
>>>> densities were doubling every year or two.
>>>>
>>>> IBM's S/370 in the early 1970s had both 2K and 4K pages. The 512 byte
>>>> VAX pages were obviously too small at the time.
>>>
>>> I wonder if they wanted the size of the page to match the basic
>>> disk sector size; perhaps to avoid checkerboarding when managing
>>> the working set (another, hmmm, interesting VMS feature).
>>
>> Makes sense.
>>
>> By going down to 512 they drastically increase the odds that they can
>> page things out that they never have to page in again.
>>
>> When I worked on S/360 I always wondered how many 4K pages I had where I
>> only really accessed one byte with any frequency.
>>
>> The OS has to keep some kind of map, each real page is mapped to some
>> address space/real address. It's been too long since I read POPs on
>> this but assuming each 512 virtual bytes needs 2 addresses to track it,
>> that's only an overhead of 8 bytes for every 512 virtual bytes.
>
> Page tables are interesting beasts. Generally there are slightly more
> than four bytes per page (in a 32-bit architecture, eight bytes in 64-bit)
> of page table overhead (most page tables are tree structures three or four
> levels deep - some support final entries at higher levels in the tree
> which provides larger page sizes (e.g. intel x86-64 supports 4k, 2M and 1G
> pages, ARM64 has three basic 'granule' sizes, 4k, 16k and 64k, and by
> terminating the lookup at higher levels, supports two or three larger block
> sizes per each of the granule sizes).
>
> Then you have the hardware virtualization solutions, where the page tables
> are nested (the guest page table physical addresses are in turn translated
> by another set of hypervisor page tables into real physical addresses). For
> performance, you need a bunch of TLBs, since a single table walk in the
> nested case, where both levels used 4k pages, requires 23 memory accesses;
> can be reduced to 11 using 1GB pages on the hypervisor side.
>

VM/370 has handshaking, I guess it’s called paravirtualization, where the
guest hands off all paging to the hypervisor, eliminating half the
overhead. I don’t know what x86 hypervisors do.

--
Pete
Re: mainframe I/O, was CR or LF? [message #395438 is a reply to message #395369] Fri, 05 June 2020 14:46
usenet
On Thu, 04 Jun 2020 15:20:35 GMT, scott@slp53.sl.home (Scott Lurndal) wrote:
> Rich Alderson <news@alderson.users.panix.com> writes:
>> usenet@only.tnx (Questor) writes:
>>
>>> DEC was making a big bet on homogenizing their product lines with VAX/VMS
>>> machines while deeemphasizing their "legacy" offerings. The VMS group was
>>> getting a big chunk of company resources and it seemed to result in the
>>> attitude of, "we've decided to do it this way, and we don't have to care much
>>> about what other groups think because we're VMS." I think the majority of
>>> VAX/VMS people came into it from the PDP-11 world, so to some extent they
>>> were hobbled by small system thinking even as they built bigger and bigger
>>> VAXes.
>>
>> Engineers from the PDP-11, managers from the IBM middle management world, and
>> everything went to hell in a handcart.
>>
>> As my friend BAH is wont to point out frequently.
>
> Although I think it's fair to say that VMS outlasted the PDP-10 and successors in the long
> run, and there may still be production users on OpenVMS.

VAX/VMS may have lasted later, but it did not last longer. TOPS-10 and TOPS-20
systems were very popular with their users. The PDP-10 would have lasted even
longer if DEC hadn't thoroughly fumbled the follow-on processor to the KL10.

Also, I suspect there are still PDP-11s in active use.
Re: memory sizes, mainframe I/O, was CR or LF? [message #395439 is a reply to message #395386] Fri, 05 June 2020 14:47
usenet
On Thu, 04 Jun 2020 22:22:02 GMT, scott@slp53.sl.home (Scott Lurndal) wrote:
> John Levine <johnl@taugh.com> writes:
>> In article <1h8CG.525299$TM6.280020@fx42.iad>,
>> Scott Lurndal <slp53@pacbell.net> wrote:
>>>> I think making the page size only 512 bytes was a big mistake and wasn't forward
>>>> looking, given the trends of increasingly larger and increasingly less expensive
>>>> memory.
>>>
>>> Although in the late 70's, I'm not sure how evident that trend was...
>>
>> 4K DRAM chips appeared in 1973, 16K DRAM chips in 1974, and chip
>> densities were doubling every year or two.
>>
>> IBM's S/370 in the early 1970s had both 2K and 4K pages. The 512 byte
>> VAX pages were obviously too small at the time.
>
> I wonder if they wanted the size of the page to match the basic
> disk sector size; perhaps to avoid checkerboarding when managing
> the working set (another, hmmm, interesting VMS feature).

working set pre-loading = no fault insurance