Megalextoria
Retro computing and gaming, sci-fi books, tv and movies and other geeky stuff.

Re: Holy wars of the past - how did they turn out? [message #405328 is a reply to message #405252] Sun, 07 February 2021 19:42
Anne & Lynn Wheel
Jim Jackson <jj@franjam.org.uk> writes:
> Perhaps you thought I was being serious, while I was being sarcastic :-)
> Having said that I loved the refs below...

somewhat sensitive, the IBM communication group was constantly claiming
my examples were incorrect, disparaging our customer executive
presentations, all sorts of political dirty tricks. but it wasn't
just token-ring and ethernet

.... we had been working with the NSF director and were supposed to get
$20M to interconnect the NSF supercomputer centers. Then congress cuts
the budget and some other things happen and eventually an RFP is
released (in part based on what we already had running) ... old archived
(a.f.c) post with 28Mar1986 preliminary release
http://www.garlic.com/~lynn/2002k.html#12
Internal politics prevent us from bidding, NSF director tries to help by
writing IBM a letter (copying CEO) with support from some other 3-letter
agencies ... but that just makes the internal politics worse (further
aggravated along the way with comments that what we already have running
is at least 5yrs ahead of all RFP responses). As regional networks
connected into the centers, it becomes the NSFNET backbone (precursor to
modern internet)
https://www.technologyreview.com/s/401444/grid-computing/

all during this period the IBM communication group was distributing all
sorts of fabricated claims and misinformation about SNA versus TCP/IP.
At one point somebody collects a bunch of their internal misinformation
email (quite a few higher up IBM communication group executives) and
forwards it to us.

--
virtualization experience starting Jan1968, online at home since Mar1970
Re: Holy wars of the past - how did they turn out? [message #405329 is a reply to message #405307] Sun, 07 February 2021 19:47
Peter Flass
Dan Espen <dan1espen@gmail.com> wrote:
> Thomas Koenig <tkoenig@netcologne.de> writes:
>
>> Peter Flass <peter_flass@yahoo.com> schrieb:
>>> Thomas Koenig <tkoenig@netcologne.de> wrote:
>>
>>>> So, here's my requirement, two parts:
>>>>
>>>> Take a C block delineated by curly braces, like
>>>>
>>>> if (foo)
>>>> {
>>>> bar();
>>>> }
>>>> else
>>>> {
>>>> baz();
>>>> }
>>>>
>>>> I want to have a reasonably short command that I can apply to the
>>>> opening or closing curly brace of each of the blocks, and I want
>>>> to view them as one line indicating that something has been hidden.
>>>>
>>>> Second part: Have the same for other programming languages like Fortran with its
>>>>
>>>> DO I=1,10
>>>> call bar(i)
>>>> END DO
>>>>
>>>> syntax.
>>>>
>>>> And I don't want to add extra markers as described in
>>>> https://www.emacswiki.org/emacs/FoldingMode , I want this
>>>> integrated with the individual language modes.
>>>>
>>>
>>> Easy, use ISPF. I don’t recall if THE (xedit clone) has this.
>>
>> Does ISPF support any Fortran version that has not been outdated
>> for 30 years?
>
> Not sure what you think ISPF is.
> Here we are talking about the ISPF editor.
> The editor doesn't care too much about what language you are editing.
> Its language support does include highlighting keywords, but it does
> that without really understanding the actual language syntax.
>
> ISPF does have panels to invoke foreground and background compiles.
> Those panels are so brain dead that I've never seen any shop make
> use of them.
>

I was going to say we used them extensively, but now that I think back we
didn’t use them at all. It was simpler to have the program wrapped in a
line or so of JCL and then just SUB it. I don’t think anyone ever used
foreground compilation, our batch was so fast.

--
Pete
Re: Holy wars of the past - how did they turn out? [message #405330 is a reply to message #405222] Sun, 07 February 2021 19:49
Anne & Lynn Wheel
Anssi Saari <as@sci.fi> writes:
> I guess I haven't either. Or I don't really know what my colleagues use
> since I've started in a new job during the pandemic... I remember I had
> an interesting talk years ago, probably in the naughties, with a
> stranger because I was wearing my Gnus T-Shirt. As I recall he
> recognized the elisp code on the shirt as lisp but wasn't an Emacs user.

"t" my header (gnus for a couple decades)
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)

--
virtualization experience starting Jan1968, online at home since Mar1970
Re: Holy wars of the past - how did they turn out? [message #405331 is a reply to message #405318] Sun, 07 February 2021 20:17
Anne & Lynn Wheel
Thomas Koenig <tkoenig@netcologne.de> writes:
> The original idea of the 801 was to use it as a microcode core
> for other machines. They used it as a channel processor on the
> big iron systems later, which pretty much fits the description.

I periodically claim that John Cocke's motivation was to go to the
opposite extreme from the complexity of the failed "Future System"
design (never announced or shipped; one of the final nails was an
analysis showing that a 370/195 application ported to an FS machine made
out of the fastest available hardware would have had the throughput of a
370/145 ... around a factor of 30 slowdown).

The presentation from the 801/risc group in late 76 or early 77 was
that the extreme lack of features in the hardware would be compensated
for by compiler technology ... including that the 801 would have no
hardware protection domain and all instructions could be executed
directly by applications or libraries w/o needing supervisor calls; the
cp.r operating system would only load "correct" programs and the pl.8
compiler would only generate correct programs.

A pitch was made to convert the huge variety of different internal
microprocessors to 801/risc ... the emulators used in low- and mid-range
370s (801/risc Iliad chips), the as/400 followon to the s/38, i/o
controllers and i/o channel processors. The 801/risc ROMP chip was
originally going to be used for a Displaywriter followon (running cp.r),
but when that got killed, they decided to retarget it to the unix
workstation market; they got the company that had done the PC/IX AT&T
unix port to do one for ROMP ... and privileged/non-privileged
hardware states had to be added (for the unix system model).

trivia: all the low/mid-range 370 emulation ran about ten native
instructions per 370 instruction ... not all that different from
existing 370 emulators that run on Intel platforms ... and it would be
all of these that converted to 801/risc Iliad chips. The (370) 4361/4381
followon to the 4331/4341 was supposed to use 801/risc Iliad ... I
helped with a white paper that showed that the majority of the 370 could
by then be implemented directly in cisc chip silicon ... rather than by
emulation. That and many other of the 1980-era 801/risc efforts
floundered, and some number of 801/risc chip engineers then left for
other vendors.

more trivia, some info about the failed FS early/mid 70s, here
http://www.jfsowa.com/computer/memo125.htm
http://people.cs.clemson.edu/~mark/fs.html

--
virtualization experience starting Jan1968, online at home since Mar1970
Re: Holy wars of the past - how did they turn out? [message #405332 is a reply to message #405329] Sun, 07 February 2021 20:33
Dan Espen
Peter Flass <peter_flass@yahoo.com> writes:

> Dan Espen <dan1espen@gmail.com> wrote:
>> Thomas Koenig <tkoenig@netcologne.de> writes:
>>
>>> Peter Flass <peter_flass@yahoo.com> schrieb:
>>>> Thomas Koenig <tkoenig@netcologne.de> wrote:
>>>
>>>> > So, here's my requirement, two parts:
>>>> >
>>>> > Take a C block delineated by curly braces, like
>>>> >
>>>> > if (foo) { bar(); } else { baz(); }
>>>> >
>>>> > I want to have a reasonably short command that I can apply to the
>>>> > opening or closing curly brace of each of the blocks, and I want
>>>> > to view them as one line indicating that something has been
>>>> > hidden.
>>>> >
>>>> > Second part: Have the same for other programming languages like
>>>> > Fortran with its
>>>> >
>>>> > DO I=1,10 call bar(i) END DO
>>>> >
>>>> > syntax.
>>>> >
>>>> > And I don't want to add extra markers as described in
>>>> > https://www.emacswiki.org/emacs/FoldingMode , I want this
>>>> > integrated with the individual language modes.
>>>> >
>>>>
>>>> Easy, use ISPF. I don’t recall if THE (xedit clone) has this.
>>>
>>> Does ISPF support any Fortran version that has not been outdated for
>>> 30 years?
>>
>> Not sure what you think ISPF is. Here we are talking about the
>> ISPF editor. The editor doesn't care too much about what language
>> you are editing. Its language support does include highlighting
>> keywords, but it does that without really understanding the actual
>> language syntax.
>>
>> ISPF does have panels to invoke foreground and background compiles.
>> Those panels are so brain dead that I've never seen any shop make use
>> of them.
>
> I was going to say we used them extensively, but now that I think back
> we didn’t use them at all. It was simpler to have the program wrapped
> in a line or so of JCL and then just SUB it. I don’t think anyone ever
> used foreground compilation, our batch was so fast.

There was never any way to get the space on the panel to have all the
header libs and link libs needed for an application compile.

Just about everywhere else I worked programmers used JCL as you
described. To me the biggest problem with background compiles is that
you never knew when they were done. We had a couple hundred programmers
hitting enter all day long waiting for their compile to finish.

Once our development support group changed their stuff to run in the
foreground. The computer center took one look at it and felt compiles
were running too fast. Somehow they concluded that was bad and disabled
it. This was in a shop where the only stuff running was development.

Somehow they never caught on to my stuff.

I set up compile panels that would run foreground or background. They
would handle the same stuff our development support group had or any
ad-hoc compile. Instead of having space for a fixed number of header
libs, link libs, I made the compile panels use TBDISPL (there were
tables on the panel). You could put as many libs on the panel as you
wanted.

With the compile panels I set up, you'd hit enter and the panel would
lock, with short messages for the compile step and link step showing the
condition code for each step. The compile output did not go to the
spool; it went into a PDSE or flat file. If you had an error and wanted
to look at the output you put an "L" (listing) on the command line and
hit enter.

The IBM stuff was just uninspired crap. They could have at least had
one compile panel doing foreground and background. With stuff I wrote,
for foreground you hit enter. For background you put an "S" (sub) on
the command line and hit enter.

The listing file had all the libs used listed at the front.
You might be working on 2 different problems using different
libraries. Instead of re-typing all the libs onto the panel,
you put "X" (extract) on the command line and the panel read
the libs out of the listing and put them on the panel.

I had a lot of fun with that stuff but eventually abandoned ISPF
because the whole process worked even better when driven from UNIX
with Makefiles.

--
Dan Espen
Re: CISC to FS to RISC, Holy wars of the past - how did they turn out? [message #405334 is a reply to message #405331] Sun, 07 February 2021 21:42
John Levine
In article <878s7zicis.fsf@localhost>,
Anne & Lynn Wheeler <lynn@garlic.com> wrote:
> Presentation from the 801/risc group late 76 or early 77 was that the
> extreme lack of features in the hardware would be compensated by
> compiler technology ... including 801 would have no hardware
> protection domain and all instructions could executed directly
> by application or libraries w/o needing supervisor calls, the cp.r
> operating system would only load "correct" programs and the pl.8
> compiler would only generate correct programs.

That wasn't a new idea. The Burroughs B5500 series depended on the
compilers not generating insecure code.

> but when that got killed, they decided to retarget to the unix
> workstation market, they got the company that had done the AT&T unix
> port of PC/IX to do one for ROMP ...

Yeah, that was me. IBM provided a rather heavyweight extended virtual
machine and our code ran rather slowly on top of that. Someone else
did a native port of BSD which ran a lot faster.

R's,
John
--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Re: CISC to FS to RISC, Holy wars of the past - how did they turn out? [message #405335 is a reply to message #405334] Sun, 07 February 2021 23:06
Anne & Lynn Wheel
John Levine <johnl@taugh.com> writes:
> Yeah, that was me. IBM provided a rather heavyweight extended virtual
> machine and our code ran rather slowly on top of that. Someone else
> did a native port of BSD which ran a lot faster.

folklore is that they had these 200 pl.8 programmers (from displaywriter
project) that needed something to do ... the claim was that with their
801 & pl.8 knowledge they could quickly create an abstract virtual
machine greatly simplifying the unix port ... and the aggregate effort
for both them and you ... would be significantly less than you doing the
port directly.

Besides taking longer and running slower ... it also created a nightmare
for people doing their own new device drivers ... having to do one in
unix (AIX) and another in the virtual machine layer.

Palo Alto was working with UCB on a BSD port to the IBM mainframe and
with UCLA on a port of their LOCUS to the mainframe (they had it up and
running on the ibm series/1)
https://en.wikipedia.org/wiki/LOCUS_(operating_system)

Then Palo Alto was redirected to do the BSD port to the PC/RT (ROMP)
(bare machine) instead (it comes out as AOS) ... they did it with
enormously less effort than just the Austin effort to create the
virtual machine.

Then Palo Alto also goes on to do LOCUS port to ibm mainframe and i386
(which ships as aix/370 and aix/386).

--
virtualization experience starting Jan1968, online at home since Mar1970
Re: Holy wars of the past - how did they turn out? [message #405338 is a reply to message #405313] Sun, 07 February 2021 23:40
Robin Vowels
On Monday, February 8, 2021 at 2:21:40 AM UTC+11, Thomas Koenig wrote:
> Dan Espen <dan1...@gmail.com> schrieb:
>> Thomas Koenig <tko...@netcologne.de> writes:
>
>>> Given that MVS is still stuck with Fortran 77 + extensions, the
>>> chances of ISPF having the correct syntax highlighting for anything
>>> newer than Fortran 77 seem remote.
>>
>> I don't know Fortran but wouldn't most of the keywords still be the
>> same?
..
> Fortran has no reserved keywords as such.
..
That's not the answer to any question that the poster put.
..
> And also, a lot of the syntax is new since Fortran 90.
..
indeed.
..
>> I think I read recently that highlighting in ISPF is now user
>> customizable.
> OK.
Re: Holy wars of the past - how did they turn out? [message #405339 is a reply to message #405299] Mon, 08 February 2021 00:00
Originally posted by: antispam

Robin Vowels <robin.vowels@gmail.com> wrote:
> On Sunday, February 7, 2021 at 9:32:56 AM UTC+11, anti...@math.uni.wroc.pl wrote:
>> J. Clarke <jclarke...@gmail.com> wrote:
>>> On Thu, 4 Feb 2021 21:08:25 -0000 (UTC), John Levine <jo...@taugh.com>
>>> wrote:
>>>
>>>> In article <u2mo1gl05lpufm1u3...@4ax.com>,
>>>> J. Clarke <jclarke...@gmail.com> wrote:
>>>> >>> RISC vs. CISC: The really complex CISC-architectures died out.
>>>> >
>>>> >What do you consider to be a "really complex CISC-architecture"?
>>>>
>>>> The usual example is VAX.
>>>>
>>>> I'd say IBM zSeries is pretty CISC but it has a unique niche..
>>>
>>> You might want to compare those to Intel.
>>>
>>> The instruction set reference for the VAX is a single chapter with 141
>>> pages. The instruction set reference for Intel is three volumes with
>>> more than 500 pages each.
>>
>> Such comparison completely misses the point. Important
>> design point for RISC that instructions should be
>> implementable to execute in one cycle using high clock
>> frequency.
> .
> The design of RISC machines was largely misguided as a
> better method than CISC machines.
> RISC was more suited to simple microprocessors, with limited
> instruction sets.
> A CISC instruction such as a memory move, or a translate
> instruction, did a lot of work. To run at the same speed,
> a RISC needed a clock rate about ten times faster
> than CISC to achieve the same speed.

What you write is extremely misleading. RISC design was
based on observing actual running programs and taking
statistics of instruction use. RISC got rid of infrequently
used complex instructions, but that does not mean that a
single RISC instruction does only a little work. For
example, autoincrement was a frequent feature. In a typical
program that marches through an array the RISC step would be
two instructions:

load with autoincrement
computing op

On i386 one could do this in a similar way or use a different
two instructions:

compute with register indirect argument
increment address register

On the early 360 only the second possibility was available (of
course, each machine could also use longer sequences, but
I am interested in the most efficient one).
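The array-march step under discussion can be sketched in C (function name hypothetical, not from the thread): with optimization, the pointer post-increment in the loop body is what an ARM compiler turns into a single load-with-autoincrement, while on i386 it becomes a register-indirect load plus a separate increment.

```c
#include <stddef.h>

/* Sum an array by marching a pointer through it.  The loop body is
 * the two-instruction RISC step described above: a load with
 * autoincrement (*p++) followed by the computing op (the add). */
int sum_array(const int *p, size_t n)
{
    int s = 0;
    const int *end = p + n;
    while (p != end)
        s += *p++;
    return s;
}
```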

SPARC had register windows (MIPS did not); consequently
procedure entry and return did a lot of work in a single
instruction. Unlike STM on 360, a register window operation
was done in one clock. Later it turned out that procedure
entry and return, while frequent, are not frequent enough to
justify the cost of the hardware. Additionally, with better
compilers a RISC machine without register windows could do
calls only marginally slower than a machine with register
windows, so the gain was small and register windows went out
of fashion. But they nicely illustrate that a single RISC
instruction could do a lot of work. The real question was
which instructions were important enough to allocate the
hardware resources needed to do the work, and which were
unimportant and offered a possibility of savings. Also,
part of the RISC philosophy was that multicycle instructions
can frequently be split into sequences of single-cycle
ones. So while RISC may need more instructions for
given work, the number of cycles was usually smaller than
for CISC. This is very visible comparing i386 and
RISC of comparable complexity: i386 instructions were all
multi-cycle, frequently needing more than 3 cycles, while
RISC could do most (or all) in a single cycle.

> In other words, a programmer writing code for a RISC was
> effectively writing microcode.
> To be useful to an assembler programmer, a computer instruction
> needed to do more work rather than less.

Do you have any experience writing RISC assembler? I have
worked on a compiler backend for ARM and have written a few
thousand lines of ARM assembly. Several ARM routines
had _fewer_ instructions than the routine performing the
equivalent function on i386. On average ARM seems to require
slightly more instructions than i386, but probably on the
order of a few percent more. Compiled code for ARM is longer
by about 30%, but the main reason is not the number of
instructions. Rather, ARM instructions (more precisely ARMv5)
are all 32 bits. i386 instructions on average tend to be
shorter than 4 bytes, so this is one reason for shorter code.
The other is constants: one can put 32-bit constants directly
into i386 instructions, but on ARM only small constants can
be included in an instruction, while others need to go into a
literal pool (the same happens on the old 360).

While I did not write anything substantial in assembler
for other RISCs, I saw reasonably large samples of assembler
for MIPS, SPARC and HPPA, and I can assure you that none
requires many more instructions than CISC. I also compiled
(using GCC) a program of about 24000 lines of C for
different architectures. The longest executables were s390
and SPARC (IIRC on the order of 240 kB of object code), the
shortest i386 (on the order of 180 kB); HPPA was slightly
larger than i386 (IIRC something like 190 kB).


> Looking back at first generation computers, we see that
> array operations were possible in 1951 on Pilot ACE,
> and on DEUCE (1955). These operations included memory
> move, array addition, array subtraction, etc.
> Such minimised the number of instructions needed to do
> a given computation, as well as, of course, to reduce
> execution time.
> Such instructions did not seem important to designers
> of second generation machines, with widespread use of
> transistors.
> More recently, computers implementing array operations
> did not appear until the 1970s.
> .
>> In 1980 that required drastic simplification
>> of instructions,
> .
>> now one can have more complexity and
>> still fit in one cycle. CPU designers formulated
>> several features deemed necessary for fast 1 IPC
>> implementation. This set of features became
>> religious definiton of RISC. RISC versus CICS
>> war died out mostly because due to advance in
>> manufacturing and CPU design several of religious
>> RISC featurs became almost irrelevant to CPU
>> speed.
>>
>> VAX instructions do complex things, in particular
>> multiple memory refereces with interesting
>> addressing modes. That was impossible to implement
>> in one cycle using technology from 1990 (and probably
>> still is impossible). 360 and 386 and their descendants
>> are in fact not that far from RISC:
> .
> I disagree.
> Most of the S/360 character instructions that move/compare/
> translate/search character strings are a long way from RISC.

Sure, S/360 has a lot of complex instructions. But most of
them are either system instructions or can be replaced by
sequences of simpler S/360 operations. In a machine like
VAX almost everything is complex instructions; if you removed
them the machine would probably be useless. On S/360, if you
want fast code there is a good chance that your program uses
mainly simple instructions (they are the fast ones).


> Floating-point instructions also are far from RISC, especially
> multiplication and division.

Huh? Every RISC that I used had an FPU.

> Even addition and subtraction can
> require multiple steps for post-normalization.

Maybe you did not realize that RISC machines are pipelined?
FPU addition usually needs 2-3 pipeline stages; multiplication
may need between 2 and 5 (depending on the actual machine).
On a pipelined machine you may issue a new operation every
cycle, but need to wait between providing arguments and using
the result. That is, after issuing an FPU instruction there
must be some other instructions (possibly another FPU
instruction, possibly a NOP if you have no useful work)
before you may use the result. The HPPA 712 had an FPU
multiply-and-add instruction and could execute loads in the
same cycle as an FPU operation. In effect a 60 MHz HPPA
could do 120 Mflops (usually it was less, but I saw some
small but real code running at that speed).
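The multiply-add rate being described can be illustrated with a C sketch (a hypothetical example, not code from the thread): the inner step of a dot product is exactly one multiply plus one add, the pair that a fused multiply-add instruction retires per issue slot.

```c
#include <stddef.h>

/* Dot product: each iteration performs one multiply and one add,
 * the pair a fused multiply-add executes in a single issue slot.
 * At one FMA per cycle, a 60 MHz part would peak at 120 Mflops. */
double dot(const double *a, const double *b, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}
```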

Very early RISCs had the FPU as a coprocessor that did FPU
work while the main CPU simultaneously executed integer
instructions. Clearly such a coprocessor was much slower than
later RISCs, but it was not much different from the
coprocessors used on comparable CISC. Of course, in early
RISC times big mainframes and supercomputers had better
floating point speed than RISC, but the big machines had much
more hardware and cost hundreds if not thousands of times
more than a RISC.

> (One of the few
> computational instructions that could have been
> implemented in a RISC was Halve floating-point.)
> And then there are the decimal instructions. Even addition and
> subtraction require multiple steps (not to mention multiplication
> and division). All these are CISC instructions.

Packed decimal instructions on the 360 do not fly either;
they are multicycle instructions. On machines where timings
are published it is clear that they are done by a microcode
loop. For example, on the 360-85 decimal addition costs
slightly more per byte than 32-bit addition. With hardware
for the decimal step, a RISC subroutine could do them at
comparable speed. Even on a RISC without decimal hardware a
decimal subroutine can run at reasonable speed. Again, on
the 360-85 a single step of TR takes time equal to 3 ADDs,
plus substantial setup time. A RISC subroutine can do that
at comparable speed. The same applies to string
instructions: on RISC you need a subroutine, but the
subroutine can be quite fast.
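The kind of decimal subroutine being described can be sketched in C (a hypothetical illustration, ignoring the S/360 sign nibble): add two packed-BCD operands a byte at a time, propagating the carry, much as the microcode loop behind the hardware instruction does.

```c
#include <stddef.h>

/* Add two packed-BCD operands (two digits per byte, big-endian,
 * equal length n), digit by digit with carry propagation.
 * Returns the final carry out of the most significant digit. */
unsigned bcd_add(unsigned char *dst, const unsigned char *a,
                 const unsigned char *b, size_t n)
{
    unsigned carry = 0;
    for (size_t i = n; i-- > 0; ) {        /* least significant byte first */
        unsigned lo = (a[i] & 0x0F) + (b[i] & 0x0F) + carry;
        carry = lo > 9;
        if (carry) lo -= 10;
        unsigned hi = (a[i] >> 4) + (b[i] >> 4) + carry;
        carry = hi > 9;
        if (carry) hi -= 10;
        dst[i] = (unsigned char)((hi << 4) | lo);
    }
    return carry;
}
```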

Anyway, I do not consider decimal instructions to be core
instructions. I know that they are widely used in IBM
shops. However, somebody wanting the best speed would go
to binary: binary data is smaller, so when your main
data is on tapes or discs, transfer of binary data is
faster. The only reasons for decimal are initial entry
(which even in 1965 was not the main computational
cost), printing (again, printers were much slower than
CPUs so conversion cost was not important) and
(the main reason) inertia. Granted, decimal makes a lot
of sense for a cards-only shop, but by the time RISC arrived
I think punched cards as main storage were obsolete
and uneconomical (though there was enough inertia that
apparently some institutions used cards quite long).

> In the integer instruction set, multiplication and division are CISC.
> Instructions such as Test and Set are complex, and possibly the
> loop control instructions BXLE and BXH.

No. Early RISC skipped multiplication because at that time
one could not fit a fast multiplier on the chip. But
pretty quickly chip technology caught up and RISC chips
included multiply. Similarly for the other instructions: the
main point is whether an instruction is useful and can have
a fast implementation. There is nothing un-RISC-y about a
loop control instruction that simultaneously jumps and
updates a register; some RISCs have instructions of this
sort. RISC normally avoids instructions needing multiple
memory accesses, as most such instructions can be replaced
by sequences of simpler instructions. But pragmatic
RISC recognizes that atomics have to be done as one
instruction. You may call them CISC, but as long as
you can do them without microcode (just using hardwired
control) and you do not spoil the pipeline structure
they are OK. Similarly with division: it is CISC-y,
but if the divider does not blow up your transistor budget
and the rest of the chip stays RISC, then it is OK. Around
15 years ago chip technology advanced enough that
high-end RISCs included dividers. Currently, a tiny
RISC (Cortex-M0) has a multiplier, but no divider.
Bigger (but still relatively small) chips have dividers.

>> there is plenty
>> of complex instructions of dubious utility. But core
>> instruction set consists of instructions having one
>> memory access which starting from around 1990 can be
>> implemented in single cycle. They have complex
>> instruction encoding which required extra chip
>> space compared to religious RISC. But in modern
>> chips instruction decoders are tiny compared to
>> other parts. Around 1995 AMD and Intel invented
>> a trick so that effective speed of instruction
>> decoders is very high and religious RISC has
>> little if any advantage over 386 (or 360) there.
>>
>> To put this in historical context: I have translation
>> of computer architecture book from 1976 by Tanenbaum.
>> In this book Tanenbaum writes about implementing
>> very complex high-level style instructions using
>> microcode ("Cobol" machine, "Fortran" machine).
>> Tanenbaum was very positive about such machines
>> and advocated future design should be of this
>> sort. RISC movement was in competely different
>> direction, simplifing instruction set and eliminating
>> microcode. In a sense, RISC movement realised
>> that with moderate extra effort one could
>> turn former microcode engines into actually
>> useful and very fast processor.
> .
> The problem with RISC design is that one needs many more
> instructions to do the same amount of work. Many more instructions
> need to be fetched (compared to CISC), tying up the data bus at the
> same time that data is being being fetched from / stored to memory.

Most RISCs have a dedicated instruction fetch bus, separate
from the data bus, so there is no problem with tying up the
bus. Note that all the RISCs I used had caches, so the buses
were part of the CPU complex. Since the vast majority of
instructions come from cache, instruction fetch has limited
impact on main memory access (traffic between main memory
and cache). There is a disadvantage: longer and more
numerous instructions mean that a RISC needs a bigger cache,
or has a lower hit rate for the same cache size. This is an
important factor explaining why i386 won: i386 made better
use of caches than RISC.

Modern ARM in 32-bit mode offers the option of mixed 16-bit
and 32-bit instructions -- an example of RISC dropping one of
the features that were claimed to be essential for RISC
(namely, fixed-length instructions).
--
Waldek Hebisch
Re: Holy wars of the past - how did they turn out? [message #405340 is a reply to message #405325] Mon, 08 February 2021 00:41
Charlie Gibbs
On 2021-02-07, Stoat <fake@fake.org> wrote:

> On 8/02/21 4:21 am, Thomas Koenig wrote:
>
>> Dan Espen <dan1espen@gmail.com> schrieb:
>>
>>> Thomas Koenig <tkoenig@netcologne.de> writes:
>>>
>>>> Given that MVS is still stuck with Fortran 77 + extensions, the
>>>> chances of ISPF having the correct syntax highlighting for anything
>>>> newer than Fortran 77 seem remote.
>>>
>>> I don't know Fortran but wouldn't most of the keywords still be the
>>> same?
>>
>> Fortran has no reserved keywords as such.
>>
>> And also, a lot of the syntax is new since Fortran 90.
>
> This reminds me of Tony Hoare's 1982 comment:
> “I don't know what the language of the year 2000 will look like, but I
> know it will be called Fortran.”

"A Real Programmer can write FORTRAN in any language."

--
/~\ Charlie Gibbs | "Some of you may die,
\ / <cgibbs@kltpzyxm.invalid> | but it's a sacrifice
X I'm really at ac.dekanfrus | I'm willing to make."
/ \ if you read it the right way. | -- Lord Farquaad (Shrek)
Re: Holy wars of the past - how did they turn out? [message #405344 is a reply to message #405340] Mon, 08 February 2021 05:55
Originally posted by: J. Clarke

On 8 Feb 2021 05:41:25 GMT, Charlie Gibbs <cgibbs@kltpzyxm.invalid>
wrote:

> On 2021-02-07, Stoat <fake@fake.org> wrote:
>
>> On 8/02/21 4:21 am, Thomas Koenig wrote:
>>
>>> Dan Espen <dan1espen@gmail.com> schrieb:
>>>
>>>> Thomas Koenig <tkoenig@netcologne.de> writes:
>>>>
>>>> > Given that MVS is still stuck with Fortran 77 + extensions, the
>>>> > chances of ISPF having the correct syntax highligting for anything
>>>> > newer than Fortran 77 seem remote.
>>>>
>>>> I don't know Fortran but wouldn't most of the keywords still be the
>>>> same?
>>>
>>> Fortran has no reserved keywords as such.
>>>
>>> And also, a lot of the syntax is new since Fortran 90.
>>
>> This reminds me of Tony Hoare's 1982 comment:
>> “I don't know what the language of the year 2000 will look like, but I
>> know it will be called Fortran.”
>
> "A Real Programmer can write FORTRAN in any language."

Unfortunately this is true. I deal with, among other things, a
significant body of C transliterated from FORTRAN, arithmetic ifs and
computed gotos and the whole nine yards. And nobody has ever
explained to me why it needed to be transliterated to C.
Re: Holy wars of the past - how did they turn out? [message #405345 is a reply to message #405344] Mon, 08 February 2021 06:26
Ahem A Rivet's Shot
On Mon, 08 Feb 2021 05:55:45 -0500
J. Clarke <jclarke.873638@gmail.com> wrote:

> Unfortunately this is true. I deal with, among other things, a
> significant body of C transliterated from FORTRAN,

With f2c ?

> arithmetic ifs and
> computed gotoes and the whole nine yards. And nobody has ever
> explained to me why it needed to be transliterated to C.

Probably someone noticed that C programmers were easier to find
than FORTRAN programmers.

--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
Re: Holy wars of the past - how did they turn out? [message #405346 is a reply to message #405345] Mon, 08 February 2021 07:26
Anonymous
Originally posted by: Thomas Koenig

Ahem A Rivet's Shot <steveo@eircom.net> schrieb:
> On Mon, 08 Feb 2021 05:55:45 -0500
> J. Clarke <jclarke.873638@gmail.com> wrote:
>
>> Unfortunately this is true. I deal with, among other things, a
>> significant body of C transliterated from FORTRAN,
>
> With f2c ?
>
>> arithmetic ifs and
>> computed gotoes and the whole nine yards. And nobody has ever
>> explained to me why it needed to be transliterated to C.
>
> Probably someone noticed that C programmers were easier to find
> than FORtRAN programmers.

FORTRAN (pre-F90, even pre-F77) is a pretty small language. Anybody
who knows C should be able to read and modify it pretty easily.

The problem is more the lack of structure that follows from the
few control structures available pre-F77, and the lack of variable
declarations. Translating that code to C will only fix the
second shortcoming.

There was (is) a tool that is actually quite good at restructuring
pre-F77 Fortran code, the so-called "toolpack". It is rather
picky about extensions, though.
Re: Holy wars of the past - how did they turn out? [message #405348 is a reply to message #405339] Mon, 08 February 2021 08:22
Robin Vowels
On Monday, February 8, 2021 at 4:00:03 PM UTC+11, anti...@math.uni.wroc.pl wrote:
> Robin Vowels <robin....@gmail.com> wrote:
>> On Sunday, February 7, 2021 at 9:32:56 AM UTC+11, anti...@math.uni.wroc.pl wrote:
>>> J. Clarke <jclarke...@gmail.com> wrote:
>>>> On Thu, 4 Feb 2021 21:08:25 -0000 (UTC), John Levine <jo...@taugh.com>
>>>> wrote:
>>>>
>>>> >In article <u2mo1gl05lpufm1u3...@4ax.com>,
>>>> >J. Clarke <jclarke...@gmail.com> wrote:
>>>> >>>> RISC vs. CISC: The really complex CISC-architectures died out.
>>>> >>
>>>> >>What do you consider to be a "really complex CISC-architecture"?
>>>> >
>>>> >The usual example is VAX.
>>>> >
>>>> >I'd say IBM zSeries is pretty CISC but it has a unique niche..
>>>>
>>>> You might want to compare those to Intel.
>>>>
>>>> The instruction set reference for the VAX is a single chapter with 141
>>>> pages. The instruction set reference for Intel is three volumes with
>>>> more than 500 pages each.
>>>
>>> Such comparison completely misses the point. Important
>>> design point for RISC that instructions should be
>>> implementable to execute in one cycle using high clock
>>> frequency.
>> .
>> The design of RISC machines was largely misguided as a
>> better method than CISC machines.
>> RISC was more suited to simple microprocessors, with limited
>> instruction sets.
>> A CISC instruction such as a memory move, or a translate
>> instruction, did a lot of work. To run at the same speed,
>> a RISC needed a clock rate about ten times faster
>> than CISC to achieve the same speed.
..
> What you write is extremaly misleading.
..
No it's not.
..
> RISC design was
> based on observing actual running programs and taking
> statistics of instructions use.
..
That was a flawed process in itself, because compilers
did not generate the best instruction sequences for a given job.
Special-casing specific optimisations for the S/360 (for example)
can take up more space than it saves, so it is not done.
As far as I know, no IBM software (including compilers) used
the TRT instruction.
As well as that, memory was scarce.
If someone used conventional instructions to do a string
search, instead of a TRT, one would tend to find that the instruction
counts for the loop method would far outweigh any instruction counts
for TRT.
Put another way, a TRT to search, say, 50 characters would do the
work of 200 conventional instructions, and therefore, for a valid
comparison, the TRT would have to be weighted by a factor of 200.
..
> RISC got rid of infrequently
> used complex instructions,
..
As I said that process was flawed.
In any case, the basic instructions for a general-purpose
computer include add, subtract, and the logical operations.
These would be the basis for constructing a RISC computer,
without the need for instruction counts.
..
> but it does not mean that
> single RISC instruction only a little work. For
> example autoincremet was frequent featue.
..
The S/360 was deficient in that it did not have auto-increment;
autoincrement was used in computers at least by 1958.
..
> In typical
> program that marches trough an array RISC step would be
> two instructions:
>
> load with autoincrement
> computing op
..
There would be more than that. There's loop control to consider.
..
> On i386 one could do this is similar way or use different
> two instrucions:
>
> compute with register indirect argument
> increment address register
..
See previous comment.
Re: Holy wars of the past - how did they turn out? [message #405349 is a reply to message #405348] Mon, 08 February 2021 09:19
Dan Espen
Robin Vowels <robin.vowels@gmail.com> writes:

> If someone used conventional instructions to do a string
> search, instead of a TRT, one would tend to find that the instruction
> counts for the loop method would far outweigh any instruction counts
> for TRT.
> Put another way, a TRT to search, say, 50 characters would do the
> work of 200 conventional instructions, and therefore, for a valid
> comparison, the TRT would have to be weighted by a factor of 200.

I did some work on a parser on S/360 that desperately needed a 40%
speed up. I tried as hard as I could to get C to use TRT, no luck.
The average number of characters the TRT could parse was about 12.

So, I wrote an HLASM subroutine to use TRT. Using test data,
I got a 42% speed up. Finally we finished the rest of our changes to
the parser and we submitted our work to the performance group for
live data testing.

We had a big celebration when our overall performance speed up was
the same 42% I measured.

I've seen IBM documentation saying that TRT negatively affects
cache use and could be a problem. I think the problem with
compilers using TRT is that they can't tell how many characters
that TRT is going to zoom through. For a COBOL compiler scanning
long COBOL variable names, it's a win. For a C compiler scanning
short variable names, not so much.

--
Dan Espen
Re: Holy wars of the past - how did they turn out? [message #405352 is a reply to message #405332] Mon, 08 February 2021 12:33
Peter Flass
Dan Espen <dan1espen@gmail.com> wrote:
> Peter Flass <peter_flass@yahoo.com> writes:
>
>> Dan Espen <dan1espen@gmail.com> wrote:
>>> Thomas Koenig <tkoenig@netcologne.de> writes:
>>>
>>>> Peter Flass <peter_flass@yahoo.com> schrieb:
>>>> > Thomas Koenig <tkoenig@netcologne.de> wrote:
>>>>
>>>> >> So, here's my requirement, two parts:
>>>> >>
>>>> >> Take a C block delineated by curly braces, like
>>>> >>
>>>> >> if (foo) { bar(); } else { baz(); }
>>>> >>
>>>> >> I want to have a reasonably short command that I can apply to the
>>>> >> opening or closing curly brace of each of the blocks, and I want
>>>> >> to view them as one line indicating that something has been
>>>> >> hidden.
>>>> >>
>>>> >> Second part: Have the same for other programming languages like
>>>> >> Fortran with its
>>>> >>
>>>> >> DO I=1,10 call bar(i) END DO
>>>> >>
>>>> >> syntax.
>>>> >>
>>>> >> And I don't want to add extra markers as described in
>>>> >> https://www.emacswiki.org/emacs/FoldingMode , I want this
>>>> >> integrated with the individual language modes.
>>>> >>
>>>> >
>>>> > Easy, use ISPF. I don’t recall if THE (xedit clone) has this.
>>>>
>>>> Does ISPF support any Fortran version that has not been outdated for
>>>> 30 years?
>>>
>>> Not sure what you think ISPF is. Here were are talking about the
>>> ISPF editor. The editor doesn't care too much about what language
>>> your are editing. It's language support does include highlighting
>>> keywords but it does that without really understanding the actual
>>> language syntax.
>>>
>>> ISPF does have panels to invoke foreground and background compiles.
>>> Those panels are so brain dead that I've never seen any shop make use
>>> of them.
>>
>> I was going to say we used them extensively, but now that I think back
>> we didn’t use them at all. It was simpler to have the program wrapped
>> in a line or so,of JCL and then just SUB it. I don’t think anyone ever
>> used foreground compilation, our batch was so fast.
>
> There was never any way to get the space on the panel to have all the
> header libs and link libs needed for an application compile.
>
> Just about everywhere else I worked programmers used JCL as you
> described. To me the biggest problem with background compiles is that
> you never knew when they were done. We had a couple hundred programmers
> hitting enter all day long waiting for their compile to finish.

We usually got sub-minute turnaround. The hardware was sized for peak
loads, which only occurred a few weeks a year. The rest of the time things
flew. I guess we were pretty lucky.

>
> Once our development support group changed their stuff to run in the
> foreground. The computer center took one look at it and felt compiles
> were running too fast. Somehow they concluded that was bad and disabled
> it. This was in a shop where the only stuff running was development.
>
> Somehow they never caught on to my stuff.
>
> I set up compile panels that would run foreground or background. They
> would handle the same stuff our development support group had or any
> ad-hoc compile. Instead of having space for a fixed number of header
> libs, link libs, I made the compile panels use TBDISPL (there were
> tables on the panel). You could put as many libs on the panel as you
> wanted.
>
> With the compile panels I set up, you'd hit enter then the panel would
> lock with short messages for compile step, link step showing the
> condition code for each step. The compile output did not go to the
> spool it went into a PDSE or flat file. If you had an error and wanted
> to look at the output you put an "L" (listing) on the command line and
> hit enter.
>
> The IBM stuff was just uninspired crap. They could have at least had
> one compile panel doing foreground and background. With stuff I wrote,
> for foreground you hit enter. For background you put an "S" (sub) on
> the command line and hit enter.
>
> The listing file had all the libs used listed at the front.
> You might be working on 2 different problems using different
> libraries. Instead of re-typing all the libs onto the panel,
> you put "X" (extract) on the command line and the panel read
> the libs out of the listing and put them on the panel.
>
> I had a lot of fun with that stuff but eventually abandoned ISPF
> because the whole process worked even better when driven from UNIX
> with Makefiles.
>



--
Pete
Re: CISC to FS to RISC, Holy wars of the past - how did they turn out? [message #405353 is a reply to message #405335] Mon, 08 February 2021 12:33
Peter Flass
Anne & Lynn Wheeler <lynn@garlic.com> wrote:
> John Levine <johnl@taugh.com> writes:
>> Yeah, that was me. IBM provided a rather heavyweight extended virtual
>> machine and our code ran rather slowly on top of that. Someone else
>> did a native port of BSD which ran a lot faster.
>
> folklore is that they had these 200 pl.8 programmers (from displaywriter
> project) that needed something to do ... the claim was that with their
> 801 & pl.8 knowledge they could quickly create an abstract virtual
> machine greatly simplifying the unix port ... and the aggregate effort
> for both them and you ... would be significantly less than you doing the
> port directly.
>
> Besides taking longer and running slower ... it also created a nightmare
> for people doing their own new device drivers ... having to do one in
> unix (AIX) and another in the virtual machine layer.

OS/2 suffered from this problem as well.

>
> Palo Alto was working with USB on BSD port to IBM mainframe and with
> UCLA port on port of their LOCUS to mainframe (they had it up and
> running on ibm series/1)
> https://en.wikipedia.org/wiki/LOCUS_(operating_system)
>
> Then Palo Alto was redirected to do the BSD port to PC/RT (ROMP) (bare
> machine) instead (comes out as AOS) ... they did it in enormous less
> effort than just the Austin effort to create virtual machine.
>
> Then Palo Alto also goes on to do LOCUS port to ibm mainframe and i386
> (which ships as aix/370 and aix/386).
>



--
Pete
Re: Holy wars of the past - how did they turn out? [message #405357 is a reply to message #405344] Mon, 08 February 2021 17:42
Peter Flass
J. Clarke <jclarke.873638@gmail.com> wrote:
> On 8 Feb 2021 05:41:25 GMT, Charlie Gibbs <cgibbs@kltpzyxm.invalid>
> wrote:
>
>> On 2021-02-07, Stoat <fake@fake.org> wrote:
>>
>>> On 8/02/21 4:21 am, Thomas Koenig wrote:
>>>
>>>> Dan Espen <dan1espen@gmail.com> schrieb:
>>>>
>>>> > Thomas Koenig <tkoenig@netcologne.de> writes:
>>>> >
>>>> >> Given that MVS is still stuck with Fortran 77 + extensions, the
>>>> >> chances of ISPF having the correct syntax highligting for anything
>>>> >> newer than Fortran 77 seem remote.
>>>> >
>>>> > I don't know Fortran but wouldn't most of the keywords still be the
>>>> > same?
>>>>
>>>> Fortran has no reserved keywords as such.
>>>>
>>>> And also, a lot of the syntax is new since Fortran 90.
>>>
>>> This reminds me of Tony Hoare's 1982 comment:
>>> “I don't know what the language of the year 2000 will look like, but I
>>> know it will be called Fortran.”
>>
>> "A Real Programmer can write FORTRAN in any language."
>
> Unfortunately this is true. I deal with, among other things, a
> significant body of C transliterated from FORTRAN, arithmetic ifs and
> computed gotoes and the whole nine yards. And nobody has ever
> explained to me why it needed to be transliterated to C.
>
>

I have run into the same types of problems transliterating C to PL/I.
Sometimes it’s more trouble just to recode it right. Transliterating is
easier.

--
Pete
Re: Holy wars of the past - how did they turn out? [message #405359 is a reply to message #405357] Mon, 08 February 2021 18:16
Anonymous
Originally posted by: J. Clarke

On Mon, 8 Feb 2021 15:42:19 -0700, Peter Flass <peter_flass@yahoo.com>
wrote:

> J. Clarke <jclarke.873638@gmail.com> wrote:
>> On 8 Feb 2021 05:41:25 GMT, Charlie Gibbs <cgibbs@kltpzyxm.invalid>
>> wrote:
>>
>>> On 2021-02-07, Stoat <fake@fake.org> wrote:
>>>
>>>> On 8/02/21 4:21 am, Thomas Koenig wrote:
>>>>
>>>> > Dan Espen <dan1espen@gmail.com> schrieb:
>>>> >
>>>> >> Thomas Koenig <tkoenig@netcologne.de> writes:
>>>> >>
>>>> >>> Given that MVS is still stuck with Fortran 77 + extensions, the
>>>> >>> chances of ISPF having the correct syntax highligting for anything
>>>> >>> newer than Fortran 77 seem remote.
>>>> >>
>>>> >> I don't know Fortran but wouldn't most of the keywords still be the
>>>> >> same?
>>>> >
>>>> > Fortran has no reserved keywords as such.
>>>> >
>>>> > And also, a lot of the syntax is new since Fortran 90.
>>>>
>>>> This reminds me of Tony Hoare's 1982 comment:
>>>> “I don't know what the language of the year 2000 will look like, but I
>>>> know it will be called Fortran.”
>>>
>>> "A Real Programmer can write FORTRAN in any language."
>>
>> Unfortunately this is true. I deal with, among other things, a
>> significant body of C transliterated from FORTRAN, arithmetic ifs and
>> computed gotoes and the whole nine yards. And nobody has ever
>> explained to me why it needed to be transliterated to C.
>>
>>
>
> I have run into the same types of problems transliterating C to PL/I.
> Sometimes it’s more trouble just to recode it right. Transliterating is
> easier.

I still don't understand why it had to be C to begin with. Half the
code in the system is Fortran, the rest is transliterated to C.
Re: Holy wars of the past - how did they turn out? [message #405375 is a reply to message #405339] Tue, 09 February 2021 04:24
robin
<antispam@math.uni.wroc.pl> wrote in message news:rvqggi$2k9$1@z-news.wcss.wroc.pl...
> Robin Vowels <robin.vowels@gmail.com> wrote:
>> On Sunday, February 7, 2021 at 9:32:56 AM UTC+11, anti...@math.uni.wroc.pl wrote:
>>> J. Clarke <jclarke...@gmail.com> wrote:
>>>> On Thu, 4 Feb 2021 21:08:25 -0000 (UTC), John Levine <jo...@taugh.com>
>>>> wrote:
>>>>
>>>> >In article <u2mo1gl05lpufm1u3...@4ax.com>,
>>>> >J. Clarke <jclarke...@gmail.com> wrote:
>>>> >>>> RISC vs. CISC: The really complex CISC-architectures died out.
>>>> >>
>>>> >>What do you consider to be a "really complex CISC-architecture"?
>>>> >
>>>> >The usual example is VAX.
>>>> >
>>>> >I'd say IBM zSeries is pretty CISC but it has a unique niche..
>>>>
>>>> You might want to compare those to Intel.
>>>>
>>>> The instruction set reference for the VAX is a single chapter with 141
>>>> pages. The instruction set reference for Intel is three volumes with
>>>> more than 500 pages each.
>>>
>>> Such comparison completely misses the point. Important
>>> design point for RISC that instructions should be
>>> implementable to execute in one cycle using high clock
>>> frequency.
>> .
>> The design of RISC machines was largely misguided as a
>> better method than CISC machines.
>> RISC was more suited to simple microprocessors, with limited
>> instruction sets.
>> A CISC instruction such as a memory move, or a translate
>> instruction, did a lot of work. To run at the same speed,
>> a RISC needed a clock rate about ten times faster
>> than CISC to achieve the same speed.
>
> What you write is extremaly misleading. RISC design was
> based on observing actual running programs and taking
> statistics of instructions use. RISC got rid of infrequently
> used complex instructions, but it does not mean that
> single RISC instruction only a little work. For
> example autoincremet was frequent featue. In typical
> program that marches trough an array RISC step would be
> two instructions:
>
> load with autoincrement
> computing op
>
> On i386 one could do this is similar way or use different
> two instrucions:
>
> compute with register indirect argument
> increment address register
>
> On early 360 only the second possibility was availble (of
> course, each machine could also use longer sequences, but
> I am interested in most efficient one).
>
> SPARC and MIPS had register windows, conseqently procedure
> entry and return did a lot of work in single instruction.
..
The RCA Spectra and English Electric System 4 had register windows
(for the four processor states). Apart from that, these were IBM
360 clones.

> Unlike STM on 360 register window operation was done in
> once clock. Later it turned out that procedure entry
> and return while frequent is not frequent enough to
> justify cost of hardware. Addionaly, with better compilers
> RISC machine without register windows could do calls
> only marginally slower than machine with register windows,
> so gain was small and register windows went out of fashion.
> But they nicely illustrate that single RISC instruction
> could do a lot of work. The real question was which
> instructions were important enough to allocate hardware
> resources needed to do the work, and which were
> unimportant and offered possibility of savings. Also,
> part of RISC philosopy was that multicycle instructions
> frequently can be split into seqence of single-cycle
> ones. So while RISC may need more instructions for
> given work, number of cycles was usually smaler than
> for CISC. This is very visible comparing i386 and
> RISC of comparable complexity: all i386 were multi-cycle
> ones, frequently needing more than 3 inctructions,
> RISC could do most (or all) in single cycle.
>
>> In other words, a programmer writing code for a RISC was
>> effectively writing microcode.
..
I think that that is what I wrote earlier.

>> To be useful to an assembler programmer, a computer instruction
>> needed to do more work rather than less.
>
> Do you have any experience writing RISC assembler? I have
> worked on compiler backend for ARM and have written few
> thousends lines of ARM assembly. Several ARM routines
> had _less_ instructions that routine performing equivalent
> function on i386.
..
Is that relevant? In any case, you are overlooking something.
..
A better comparison might be made with the IBM 360.
..
> On average ARM seem to require slightly
> more instructions than i386, but probably of order of few
> percent more. Compiled code for ARM is longer by about
> 30%, but the main reason is not number of intructions. Rather
> ARM instructions (more precisely ARM 5) are all 32-bits.
> i386 instructions on average tend to be shorter than
> 4 bytes, so this is one reason for shorter code. Other
> is constants: one can put 32-bit constants directly into
> i386 intructions, but on ARM only small constant can
> be included in instruction while other need to go into
> literal pool (the same happens on old 360).
>
> While I did not write anything substantial in assembler
> for other RISC-s I saw reasonably large samples of assembler
> for MIPS, SPARC and HPPA and I can assure you that neither
> requires much more instructions than CISC.
..
That's just false for S/360.
..
> I also compiled
> (using GCC) program having about 24000 lines of C for
> different architectures. Longest executables were s390
> and SPARC (IIRC of order 240 kB object code), shortest i360
> (of order 180 kB), HAPPA was sligthly larger tham i386
> (IIRC something like 190 kB).
..
A lot depends on the quality of the compiler for the machines
under consideration, and a comparison is largely irrelevant.
..
>> Looking back at first generation computers, we see that
>> array operations were possible in 1951 on Pilot ACE,
>> and on DEUCE (1955). These operations included memory
>> move, array addition, array subtraction, etc.
>> Such minimised the number of instructions needed to do
>> a given computation, as well as, of course, to reduce
>> execution time.
>> Such instructions did not seem important to designers
>> of second generation machines, with widespread use of
>> transistors.
>> More recently, computers implementing array operations
>> did not appear until the 1970s.
>> .
>>> In 1980 that required drastic simplification
>>> of instructions,
>> .
>>> now one can have more complexity and
>>> still fit in one cycle. CPU designers formulated
>>> several features deemed necessary for fast 1 IPC
>>> implementation. This set of features became
>>> religious definiton of RISC. RISC versus CICS
>>> war died out mostly because due to advance in
>>> manufacturing and CPU design several of religious
>>> RISC featurs became almost irrelevant to CPU
>>> speed.
>>>
>>> VAX instructions do complex things, in particular
>>> multiple memory refereces with interesting
>>> addressing modes. That was impossible to implement
>>> in one cycle using technology from 1990 (and probably
>>> still is impossible). 360 and 386 and their descendants
>>> are in fact not that far from RISC:
>> .
>> I disagree.
>> Most of the S/360 character instructions that move/compare/
>> translate/search character strings are a long way from RISC.
>
> Sure, S/360 has a lot of complex instructons. But most of
> them are either system instructions or can be replaced by
> seqences of simpler S/360 operations.
..
Most of them are NOT system instructions.
Most of them are computational instructions.
..
All complex instructions can be simulated by sequences of
instructions taken from the basic instruction set (add, subtract,
logical operations, etc.).
But why would you bother, given that there's a complex instruction
that does the work, with less hand work?
..
> In machine like
> VAX almost all is complex intructions, if you remove then
> machine probably would be useless. On S/360 if you want
> fast code there is good chance that your program uses
> mainly simple instructions (they are the fast ones).
..
Your argument is fallacious. That would result in a SLOWER program.
..
>> Floating-point instructions also are far from RISC, especially
>> multiplication and division.
>
> Huh? Every RISC that I used had FPU.
..
And what does the RISC processor do while the FPU is
performing multiplication / division in either single or
double length?
..
>> Even addition and subtraction can
>> require multiple steps for post-normalization.
>
> Maybe you did not realize that RISC machines are pipelined?
> FPU addition usually needs 2-3 pipeline stages, multiplication
> may need between 2 and 5 (depending on actual machine).
..
Pipelining can't compensate for floating-point division etc.
..
> On machine pipelined you may new operation every cycle, but
> need to wait between providning arguments and using the
> result. That is after issuing FPU instructons there must
> be some other instructions (possibly another FPU instructins,
> possible NOP if you have no useful work)

Many NOPs would be needed for single/double-precision float mult/div.
Sort of defeats the argument for RISC.
..
> before you may
> use result. HPPA 712 had FPU multiply and add instruction
> and could execute loads in the same cycle as FPU operation.
..
Sure it can, but how many loads can be issued during mult/div,
and where can you put the loaded results?
..
> In effect 60 MHz HPPA could do 120 M flops per second
> (usually it was less but I saw some small but real code
> running at that speed).
>
> Very early RISC-s had FPU as coprocessor that did FPU work
> while main CPU simultanously executed integer instructions.
> Clearly such coprocessor was much slower than later RISC,
> but was not much different than coprocessors used on
> comparable CISC. Of course, in early RISC times big mainframes
> and supercomputers had better floating point speed than
> RISC, but big machines had much more hardware and cost
> was hundreds if not thousends times higher than RISC cost.
..
If you want speed, you need real hardware.
..
>> (One of the few
>> computational instructions that could have been have been
>> implemented in a RISC was Halve floating-point.
>> And then there are the decimal instrucuitons. Even addition and
>> subtraction require multiple steps (not to mention multiplication
>> and division. All these are CISC instructions.
>
> Packed decimal instructitons on 360 do not fly either, they
> are multicycle instructions.

That's right. They are CISC instructions.
..
> On machines where timings
> are published it is clear that they are done by microcode
> loop. For example on 360-85 decimal additon costs slightly
> more per byte than 32-bit addition. With hardware for decimal
> step RISC subroutine could do them at comparable speed.
..
Rubbish.
..
> Even
> on RISC without decimal hardware decimal subroutine can
> run at resonable speed. Angain, on 360-85 single step of
> TR has time equal to 3 ADD-s, plus substantial setup time.
> RISC subroutine can do that at comparable speed.
> The same applies to string instructions: on RISC you
> need subroutine but subroutine can be quite fast.
..
That sort of thing used to be done on earlier word machines,
such as the CDC Cyber. Packing and unpacking characters
was a pain in the neck and was slow. Searching was scarcely
a breeze.
..
> Anyway, I do not consider decimal instructitons as core
> instructions.
..
You'd be wrong, of course. They are considered important
instructions in commercial work.
..
> I know that they are widely used in IBM
> shops.
..
They were used in a number of mainframes of the period,
including Burroughs, IBM, RCA, English Electric, and
Fujitsu.
..
> However, somebody wanting best speed would go
> to binary: binary data is smaller so whan your main
> data is on tapes or discs transfer of bianry data is
> faster. The only reason for decimal is inital entry
> (which even in 1965 was not the main computational
> cost), printing (again printers were much slower than
> CPU-s so converion cost was not important) and
> (main reason) inertia. Granted, decimal make a lot
> of sense for cards-only shop, but when RISC arrived
> I think that punched card as main storage were obsolete
> and uneconomical (but there were enough inertia that
> apparently some institutions used card quite long).

>> In the integer instruction set, multiplication and division are CISC.
>> Instructions such as Test and Set are complex, and possibly the
>> loop control instructions BXLE and BXH.
>
> No. Early RISC skipped multiplication because at that time
> one could not fit fast multiplier on the chip. But
> pretty quickly chip techonology catched up and RISC chips
> included multiplies. Similarly for other instructions,
> main point if instruction is useful and can have fast
> implementation. There is nothing RISC-y in loop control
> instruction that simutaneously jumps and updates register.
..
You are thinking that loop control consists of just decrementing
a register and optionally branching. Loop control can consist
of two tests (one on equality and one on a trip count).
Take, for instance, TRT and CLC instructions. There are also
trip tests that can involve loading, decrementing, storing, and
testing.
..
> Some RISCS have instructions of this sort. RISC
> normally avoids instructions needing multiple memory
> accesses, as most such instructions can be replaced
> by sequences of simpler instructins.

Sort of defeats the usefulness of RISC.
..
> But pragmatic
> RISC recognizes that atomics have to be done as one
> instruction. You may call them CISC, but as long as
> you can do them without microcode (just using hardwired
> control) and you do not spoil pipeline structure
> they are OK. Similarly with division: it is CISC-y,
> but if the divider does not blow up your transistor budget
> and the rest of the chip stays RISC, then it is OK.
..
Sort of blows up the aim of RISC.
..
> Around
> 15 years ago chip technology advanced enough that
> high-end RISCs included dividers. Currently, a tiny
> RISC (Cortex-M0) has a multiplier, but no divider.
> Bigger (but still relatively small) chips have dividers.
>
>>> there are plenty
>>> of complex instructions of dubious utility. But the core
[RISC]
>>> instruction set consists of instructions having one
>>> memory access which, starting from around 1990, can be
>>> implemented in a single cycle. They have complex
>>> instruction encoding which requires extra chip
>>> space compared to religious RISC. But in modern
>>> chips instruction decoders are tiny compared to
>>> other parts. Around 1995 AMD and Intel invented
>>> a trick so that the effective speed of instruction
>>> decoders is very high and religious RISC has
>>> little if any advantage over 386 (or 360) there.
>>>
>>> To put this in historical context: I have a translation
>>> of a computer architecture book from 1976 by Tanenbaum.
>>> In this book Tanenbaum writes about implementing
>>> very complex high-level style instructions using
>>> microcode ("Cobol" machine, "Fortran" machine).
>>> Tanenbaum was very positive about such machines
>>> and advocated that future designs should be of this
>>> sort. The RISC movement was in a completely different
>>> direction, simplifying the instruction set and eliminating
>>> microcode. In a sense, the RISC movement realised
>>> that with moderate extra effort one could
>>> turn former microcode engines into actually
>>> useful and very fast processors.
>> .
>> The problem with RISC design is that one needs many more
>> instructions to do the same amount of work. Many more instructions
>> need to be fetched (compared to CISC), tying up the data bus at the
>> same time that data is being fetched from / stored to memory.
>
> Most RISCs have a dedicated instruction fetch bus,
..
Even with a cache, the instructions still need to be fetched
from memory.
..
> separate
> from the data bus. So there is no problem with tying up the bus. Note that
> all RISCs that I used had caches, so the buses were part of the
> CPU complex. Since the vast majority of instructions comes
> from the cache, instruction fetch has limited impact on
> main memory access
..
Five to ten times the number of instructions [compared to CISC]
have to come from somewhere, and that is from memory. Memory is always
the bottleneck.
..
> (traffic between main memory and cache).
> There is a disadvantage: longer + more instructions mean that
> RISC needs a bigger cache or has a lower hit rate for the
> same cache size. This is an important factor explaining why
> the i386 won: the i386 made better use of caches than RISC.
>
> Modern ARM in 32-bit mode offers the option of mixed 16-bit
> and 32-bit instructions -- this is an example of RISC dropping
> one of the features that were claimed to be essential for RISC
> (that is, fixed-length instructions).



Re: Holy wars of the past - how did they turn out? [message #405378 is a reply to message #405149] Tue, 09 February 2021 07:52
Originally posted by: Michael P. O'Connor

On Thu, 04 Feb 2021 17:15:51 +0000, Thomas Koenig wrote:

> The proverbial computer holy war is probably big vs. little endian,
> which has been pretty much decided in favor of little endian by default
> or by Intel, although TCP/IP is big-endian.
>
> Machine language vs. those CPU-time-wasting assemblers - made
> obsolescent by high-level programming languages.
>
> High-level vs. assembler: Hardly anybody does assembler any more.
>
> Structured programming vs. goto - structured programming won.
>
> RISC vs. CISC: The really complex CISC-architectures died out.
> The difference is now less important with superscalar architectures.
>
> VMS vs. Unix - decided by DEC's fate, and by Linux.
>
> DECNET vs. TCP/IP: See above.
>
> Emacs vs. vi: vim has led to a resurgence of vi, and many people are
> using this even on Windows.
>
> Everybody vs. Fortran: Hating FORTRAN became the very definition of a
> computer scientist. They didn't notice that, since 1991,
> it has become quite a modern programming language.
>
> Others?

There are many of us who still use Emacs as our daily driver. Just
because it seems one has won out, it does not mean the others have fully
died off; there will always be those who still use the "losing"
side's stuff.
Re: Holy wars of the past - how did they turn out? [message #405381 is a reply to message #405378] Tue, 09 February 2021 09:51
Dan Espen
"Michael P. O'Connor" <mpop@mikeoconnor.net> writes:

> On Thu, 04 Feb 2021 17:15:51 +0000, Thomas Koenig wrote:
>
>> Emacs vs. vi: vim has led to a resurgence of vi, and many people are
>> using this even on Windows.
>
> There are many of us who still use Emacs as our daily driver. Just
> because it seems one has won out, it does not mean the others have fully
> died off; there will always be those who still use the "losing"
> side's stuff.

Emacs user using Pan? Heresy detected.

--
Dan Espen
Re: where did RISC come from, Holy wars of the past - how did they turn out? [message #405397 is a reply to message #405348] Tue, 09 February 2021 21:58
John Levine
In article <6cb4e945-1f3a-4364-8e17-51d9392f7589n@googlegroups.com>,
Robin Vowels <robin.vowels@gmail.com> wrote:
>> What you write is extremely misleading.
> .
> No it's not.

Yeah, really it is.

>> RISC design was
>> based on observing actual running programs and taking
>> statistics of instructions use.
> .
> That was a flawed process in itself, because compilers
> did not generate the best instruction sequences for a given job.

The point of RISC was to optimize the entire design including the compilers.
There was no point in including an instruction if the compilers couldn't
generate it. The IBM 801 had what was at the time the best optimizing compiler
in the world, and it didn't have TRT either.

The 360 TRT instruction is swell, but it was invented in an era when
all the system code was written in assembler, and human programmers
could design their data structures keeping in mind that if they
designed them just so, they could use TRT and a few other cool
instructions (EX, CLM perhaps) to speed up string scanning. That was
then; now programmers write loops and compilers optimize them. The x86
has REP SCASB, which can scan a string looking for a particular value,
typically zero, but I don't know whether compilers recognize scanning
idioms and generate it like they do for block moves.

Another advantage of RISC is that it's easier to get right. The VAX had an instruction,
MOVTUC (Move Translated Until Character), which was similar to TRT. When we first got
our VAX-11/750 at Yale and tried to run BSD Unix on it, it kept crashing. After some
painful debugging (particularly painful for us because I had to cross-compile everything
from a PDP-11), Bill Joy and we figured out at the same time that there was a bug in
the 750's microcode implementation of MOVTUC, and since that instruction was in the
inner loop of printf(), that was a problem. We replaced it with simpler instructions
and BSD worked.

--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Re: where did RISC come from, Holy wars of the past - how did they turn out? [message #405406 is a reply to message #405397] Wed, 10 February 2021 10:09
scott
John Levine <johnl@taugh.com> writes:
> In article <6cb4e945-1f3a-4364-8e17-51d9392f7589n@googlegroups.com>,
> Robin Vowels <robin.vowels@gmail.com> wrote:
>>> What you write is extremely misleading.
>> .
>> No it's not.
>
> Yeah, really it is.
>
>>> RISC design was
>>> based on observing actual running programs and taking
>>> statistics of instructions use.
>> .
>> That was a flawed process in itself, because compilers
>> did not generate the best instruction sequences for a given job.
>
> The point of RISC was to optimize the entire design including the compilers.
> There was no point in including an instruction if the compilers couldn't
> generate it. The IBM 801 had what was at the time the best optimizing compiler
> in the world, and it didn't have TRT either.
>
> The 360 TRT instruction is swell, but it was invented in an era when
> all the system code was written in assembler, and human programmers
> could design their data structures keeping in mind that if they
> designed them just so, they could use TRT and a few other cool
> instructions (EX, CLM perhaps) to speed up string scanning.

Burroughs medium systems had a bunch of CISCy instructions along
the same lines:

SDE/SDU (Scan delimiter EQUAL/UNEQUAL)
SEA (Search for string)
STB (Search table)
SLT (Search Linked List)
TRN (Translate string)
MVS (Move string)
CPS (Compare string)
HSH (Hash String)

The SDE/SDU/TRN date back to the B3500 (1965), SEA to the B4700 and
the rest came in with the architecture update called Omega
in the early 1980s.

The COBOL, BPL and SPRITE compilers would generate the first
five based on source language syntax elements. SDE/SDU were
heavily used when parsing text.

BPL:

SCAN input_line FOR etxc;              &STRIP TRAILING BLANKS AND &
IF LEQ THEN IX1 := lscn + lscn         &ETX'S HERE SO THAT THE TRUE &
ELSE IX1 := 158;                       &END OF LINE CAN BE FOUND &
WHILE ((input_line.IX1.1 = " ") OR
       (input_line.IX1.1=etxc)) AND (IX1 >= 0) DO_
    IX1 := IX1 - 2;
OD;
IX1 := IX1+2;
input_line.IX1.1 := etxc;

SCAN ".?:}" FOR BASE.IX1.1.UA;         &examine
IF_ found
THEN_ sentence_end := TRUE FI;

SCAN UNEQUAL next_n_input_characters (11) FOR digit;
IF_ none_found
THEN_ ERROR ("INVALID DEFINE NAME: * NON-DIGIT");
      RUN_FOR_THE_NEXT_S_DELIM;
      LOOP_
FI;

> That was
> then, now programmers write loops and compilers optimize them. The x86
> has REP SCASB which can scan a string looking for a particular value,
> typically zero, but I don't know whether compilers recognize scanning
> idioms and generate it like they do for block moves.

REP SCASB is not widely generated.

>
> Another advantage of RISC is that it's easier to get right. The VAX had an instruction,
> MOVTUC (Move Translated Until Character), which was similar to TRT. When we first got
> our VAX-11/750 at Yale and tried to run BSD Unix on it, it kept crashing. After some
> painful debugging (particularly painful for us because I had to cross-compile everything
> from a PDP-11), Bill Joy and we figured out at the same time that there was a bug in
> the 750's microcode implementation of MOVTUC, and since that instruction was in the
> inner loop of printf(), that was a problem. We replaced it with simpler instructions
> and BSD worked.

That said, I've seen several issues with RISC CPUs getting instructions
correct over the last three decades. Bigger issues with RISC CPUs have
been related to the memory model (particularly with respect to ordering),
notably with Alpha.
Re: where did RISC come from, Holy wars of the past - how did they turn out? [message #405407 is a reply to message #405406] Wed, 10 February 2021 10:26
Originally posted by: Kerr-Mudd,John

On Wed, 10 Feb 2021 15:09:14 GMT, scott@slp53.sl.home (Scott Lurndal)
wrote:

> John Levine <johnl@taugh.com> writes:
>> In article <6cb4e945-1f3a-4364-8e17-51d9392f7589n@googlegroups.com>,
>> Robin Vowels <robin.vowels@gmail.com> wrote:
>>>> What you write is extremely misleading.
>>> .
>>> No it's not.
>>
>> Yeah, really it is.
>>
>>>> RISC design was
>>>> based on observing actual running programs and taking
>>>> statistics of instructions use.
>>> .
>>> That was a flawed process in itself, because compilers
>>> did not generate the best instruction sequences for a given job.
>>
>> The point of RISC was to optimize the entire design including the
>> compilers. There was no point in including an instruction if the
>> compilers couldn't generate it. The IBM 801 had what was at the time
>> the best optimizing compiler in the world, and it didn't have TRT
>> either.
>>
>> The 360 TRT instruction is swell, but it was invented in an era when
>> all the system code was written in assembler, and human programmers
>> could design their data structures keeping in mind that if they
>> designed them just so, they could use TRT and a few other cool
>> instructions (EX, CLM perhaps) to speed up string scanning.
>
> Burroughs medium systems had a bunch of CISCy instructions along
> the same lines:
>
> SDE/SDU (Scan delimiter EQUAL/UNEQUAL)
> SEA (Search for string)
> STB (Search table)
> SLT (Search Linked List)
> TRN (Translate string)
> MVS (Move string)
> CPS (Compare string)
> HSH (Hash String)
>
> The SDE/SDU/TRN date back to the B3500 (1965), SEA to the B4700 and
> the rest came in with the architecture update called Omega
> in the early 1980s.
>
> The COBOL, BPL and SPRITE compilers would generate the first
> five based on source language syntax elements. SDE/SDU were
> heavily used when parsing text.
>
> BPL:
>
> SCAN input_line FOR etxc;              &STRIP TRAILING BLANKS AND &
> IF LEQ THEN IX1 := lscn + lscn         &ETX'S HERE SO THAT THE TRUE &
> ELSE IX1 := 158;                       &END OF LINE CAN BE FOUND &
> WHILE ((input_line.IX1.1 = " ") OR
>        (input_line.IX1.1=etxc)) AND (IX1 >= 0) DO_
>     IX1 := IX1 - 2;
> OD;
> IX1 := IX1+2;
> input_line.IX1.1 := etxc;
>
> SCAN ".?:}" FOR BASE.IX1.1.UA;         &examine
> IF_ found
> THEN_ sentence_end := TRUE FI;
>
> SCAN UNEQUAL next_n_input_characters (11) FOR digit;
> IF_ none_found
> THEN_ ERROR ("INVALID DEFINE NAME: * NON-DIGIT");
>       RUN_FOR_THE_NEXT_S_DELIM;
>       LOOP_
> FI;
>
>> That was
>> then, now programmers write loops and compilers optimize them. The x86
>> has REP SCASB which can scan a string looking for a particular value,
>> typically zero, but I don't know whether compilers recognize scanning
>> idioms and generate it like they do for block moves.
>
> REP SCASB is not widely generated.

Rarely does just one character need special processing!
>
>>
>> Another advantage of RISC is that it's easier to get right. The VAX
>> had an instruction, MOVTUC (Move Translated Until Character), which was
>> similar to TRT. When we first got our VAX-11/750 at Yale and tried to
>> run BSD Unix on it, it kept crashing. After some painful debugging
>> (particularly painful for us because I had to cross-compile everything
>> from a PDP-11), Bill Joy and we figured out at the same time that
>> there was a bug in the 750's microcode implementation of MOVTUC, and
>> since that instruction was in the inner loop of printf(), that was a
>> problem. We replaced it with simpler instructions and BSD worked.
>
> That said, I've seen several issues with RISC CPUs getting
> instructions correct over the last three decades. Bigger issues with
> RISC CPUs have been related to the memory model (particularly with
> respect to ordering), notably with Alpha.
>
>



--
Bah, and indeed, Humbug.
Re: where did RISC come from, Holy wars of the past - how did they turn out? [message #405411 is a reply to message #405407] Wed, 10 February 2021 13:52
John Levine
In article <XnsACCD9D0784677admin127001@144.76.35.252>,
Kerr-Mudd,John <notsaying@127.0.0.1> wrote:
>>> That was
>>> then, now programmers write loops and compilers optimize them. The x86
>>> has REP SCASB which can scan a string looking for a particular value,
>>> typically zero, but I don't know whether compilers recognize scanning
>>> idioms and generate it like they do for block moves.
>>
>> REP SCASB is not widely generated.
>
> rarely does just 1 character need special processing!

I can think of some plausible cases. The most obvious is strlen() where
you look for the 0 byte at the end, but I do a fair amount of searching for \n
to break a block of text into lines.

I wrote some little C routines, compiled them with clang and didn't see any SCASB, though.


--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Re: where did RISC come from, Holy wars of the past - how did they turn out? [message #405412 is a reply to message #405411] Wed, 10 February 2021 15:40
scott
John Levine <johnl@taugh.com> writes:
> In article <XnsACCD9D0784677admin127001@144.76.35.252>,
> Kerr-Mudd,John <notsaying@127.0.0.1> wrote:
>>>> That was
>>>> then, now programmers write loops and compilers optimize them. The x86
>>>> has REP SCASB which can scan a string looking for a particular value,
>>>> typically zero, but I don't know whether compilers recognize scanning
>>>> idioms and generate it like they do for block moves.
>>>
>>> REP SCASB is not widely generated.
>>
>> rarely does just 1 character need special processing!
>
> I can think of some plausible cases. The most obvious is strlen() where
> you look for the 0 byte at the end, but I do a fair amount of searching for \n
> to break a block of text into lines.
>
> I wrote some little C routines, compiled them with clang and didn't see any SCASB, though.
>

From Intel's Software Optimization guide, General Optimization Guidelines:

"Using a REP prefix with string move instructions can provide high
performance in the situations described above. However, using a REP
prefix with string scan instructions (SCASB, SCASW, SCASD, SCASQ)
or compare instructions (CMPSB, CMPSW, CMPSD, CMPSQ) is not recommended
for high performance. Consider using SIMD instructions instead."

>
> --
> Regards,
> John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
> Please consider the environment before reading this e-mail. https://jl.ly
Re: Holy wars of the past - how did they turn out? [message #405419 is a reply to message #405378] Wed, 10 February 2021 22:33
Rich Alderson
"Michael P. O'Connor" <mpop@mikeoconnor.net> writes:

> There are many of us who still use Emacs as our daily driver. Just
> because it seems one has won out, it does not mean the others have fully
> died off; there will always be those who still use the "losing"
> side's stuff.

Indeed, I'm reading this thread in an Emacs-based newsreader.

I've been an EMACS user since 1978 or so, and became the PDP-10 TECO
implementation maintainer in 1999 (when RMS blessed a Y2K fix I published in
comp.emacs). I used to know enough vi to edit the configuration files for GNU
Emacs, but that was in the days before autoconfigure...

--
Rich Alderson news@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen
Re: Holy wars of the past - how did they turn out? [message #405469 is a reply to message #405419] Thu, 11 February 2021 10:04
Originally posted by: maus

On 2021-02-11, Rich Alderson <news@alderson.users.panix.com> wrote:
> "Michael P. O'Connor" <mpop@mikeoconnor.net> writes:
>
>> There are many of us who still use Emacs as our daily driver. Just
>> because it seems one has won out, it does not mean the others have fully
>> died off; there will always be those who still use the "losing"
>> side's stuff.
>
> Indeed, I'm reading this thread in an Emacs-based newsreader.
>
> I've been an EMACS user since 1978 or so, and became the PDP-10 TECO
> implementation maintainer in 1999 (when RMS blessed a Y2K fix I published in
> comp.emacs). I used to know enough vi to edit the configuration files for GNU
> Emacs, but that was in the days before autoconfigure...
>


slrn for many years.

--
greymausg@mail.com
Re: Holy wars of the past - how did they turn out? [message #405470 is a reply to message #405419] Thu, 11 February 2021 11:12
Originally posted by: Bob Eager

On Wed, 10 Feb 2021 22:33:42 -0500, Rich Alderson wrote:

> "Michael P. O'Connor" <mpop@mikeoconnor.net> writes:
>
>> There are many of us who still use Emacs as our daily driver. Just
>> because it seems one has won out, it does not mean the others have
>> fully died off; there will always be those who still use the
>> "losing" side's stuff.
>
> Indeed, I'm reading this thread in an Emacs-based newsreader.
>
> I've been an EMACS user since 1978 or so, and became the PDP-10 TECO
> implementation maintainer in 1999 (when RMS blessed a Y2K fix I
> published in comp.emacs). I used to know enough vi to edit the
> configuration files for GNU Emacs, but that was in the days before
> autoconfigure...

Prior to 1984, I was using a variety of editors on different systems,
including 'ed' on UNIX (learned that back in 1975). Never got seriously
round to vi.

Then I bought a PC clone. It came with Perfect Filer (a very basic
database), Perfect Calc (spreadsheet with basic EMACS keystrokes), and
Perfect Writer.

Perfect Writer was a pair of separate programs: editor and text
formatter. The formatting markup was a subset of Scribe. The editor was a
locked down subset of EMACS. I moved on to use MicroEMACS for
programming, then EMACS. Still do.



--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
Re: Holy wars of the past - how did they turn out? [message #405482 is a reply to message #405419] Thu, 11 February 2021 11:53
Dan Espen
Rich Alderson <news@alderson.users.panix.com> writes:

> "Michael P. O'Connor" <mpop@mikeoconnor.net> writes:
>
>> There are many of us who still use Emacs as our daily driver. Just
>> because it seems one has won out, it does not mean the others have
>> fully died off; there will always be those who still use the
>> "losing" side's stuff.
>
> Indeed, I'm reading this thread in an Emacs-based newsreader.
>
> I've been an EMACS user since 1978 or so, and became the PDP-10 TECO
> implementation maintainer in 1999 (when RMS blessed a Y2K fix I
> published in comp.emacs). I used to know enough vi to edit the
> configuration files for GNU Emacs, but that was in the days before
> autoconfigure...

I think I'm since '79 or so.

--
Dan Espen
Re: Holy wars of the past - how did they turn out? [message #405483 is a reply to message #405470] Thu, 11 February 2021 12:29 Go to previous messageGo to next message
Ahem A Rivet's Shot is currently offline  Ahem A Rivet's Shot
Messages: 4843
Registered: January 2012
Karma: 0
Senior Member
On 11 Feb 2021 16:12:53 GMT
Bob Eager <news0073@eager.cx> wrote:

> Prior to 1984, I was using a variety of editors on different systems,
> including 'ed' on UNIX (learned that back in 1975). Never got seriously
> round to vi.

Prior to meeting unix and vi in the mid 1980s I used whatever was
handy (all too often WordStar in non-document mode), then there was unix
(well XENIX) and vi was the editor so I learned it (painfully as I recall).
Some time later I encountered the really excellent Brief on messy-dos,
liked it a lot and really tried to use it, but my fingers kept typing vi
every time I wanted to edit a file, and since I had the MKS toolkit it
worked. I let my fingers win and they're still doing it.

--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
Re: Holy wars of the past - how did they turn out? [message #405484 is a reply to message #405163] Thu, 11 February 2021 12:44
Originally posted by: David Lesher

Freddy1X <freddy1X@indyX.netX> writes:

>>
>> Others?

> PDA Vs. smartphone.

I use a PDA for one purpose, my password vault.
I figure it's harder to get into given the air gap.

--
A host is a host from coast to coast.................wb8foz@nrk.com
& no one will talk to a host that's close..........................
Unless the host (that isn't close).........................pob 1433
is busy, hung or dead....................................20915-1433
Re: Holy wars of the past - how did they turn out? [message #405496 is a reply to message #405483] Thu, 11 February 2021 13:59
Charlie Gibbs
On 2021-02-11, Ahem A Rivet's Shot <steveo@eircom.net> wrote:

> On 11 Feb 2021 16:12:53 GMT
> Bob Eager <news0073@eager.cx> wrote:
>
>> Prior to 1984, I was using a variety of editors on different systems,
>> including 'ed' on UNIX (learned that back in 1975). Never got seriously
>> round to vi.
>
> Prior to meeting unix and vi in the mid 1980s I used whatever was
> handy (all too often WordStar in non-document mode), then there was unix
> (well XENIX) and vi was the editor so I learned it (painfully as I recall).
> Some time later I encountered the really excellent Brief on messy-dos,
> liked it a lot and really tried to use it, but my fingers kept typing vi
> every time I wanted to edit a file, and since I had the MKS toolkit it
> worked. I let my fingers win and they're still doing it.

I used a couple of mainframe editors in that brief period between
card-based systems and personal computers, but don't remember too
much of them now. MS-DOS's ed was enough like CP/M's ed that you
could pick it up quickly, while being different enough that you
could shoot yourself in the foot. One shop used KEDIT, which I
hear is somewhat like IBM's XEDIT. It's a nice enough editor that
I still keep a copy around and use it when Notepad doesn't turn
my crank.

On *n*x systems I bit the bullet and learned enough vi to get myself
going, and am still picking up features here and there in vim. My
fingers speak vi well enough that often when I'm in another editor
and want to move down the screen, a string of "j"s will appear.

I took a brief look at emacs a while ago. It seems to have a lot of
nice features, but its mindset is too different from mine. Perhaps
it comes down to that twist on the old saying: "It's a nice place to
live, but I wouldn't want to visit there."

I still miss CygnusEd on my Amigas, though.

--
/~\ Charlie Gibbs | "Some of you may die,
\ / <cgibbs@kltpzyxm.invalid> | but it's a sacrifice
X I'm really at ac.dekanfrus | I'm willing to make."
/ \ if you read it the right way. | -- Lord Farquaad (Shrek)
Re: Holy wars of the past - how did they turn out? [message #405497 is a reply to message #405496] Thu, 11 February 2021 14:12
Dan Espen
Charlie Gibbs <cgibbs@kltpzyxm.invalid> writes:

> I took a brief look at emacs a while ago. It seems to have a lot of
> nice features, but its mindset is too different from mine. Perhaps
> it comes down to that twist on the old saying: "It's a nice place to
> live, but I wouldn't want to visit there."

No reason to leave vi; even I can see its strengths.
But vi users do have a path to using Emacs: just turn on vi mode.
You can even opt for the level of vi emulation you want.

What I notice about vi users that I think is less than optimal is that
they work with a bunch of terminals. Edit in one, compile in one,
test in another, read man pages in another.

Users who work that way have a strong chance of being befuddled when
they make a change, forget to save, then compile and test. With Emacs,
you get prompted if you fail to save before you invoke compile. And you
don't need to use line numbers to find errors, you just invoke
'next-error'. I suppose vi or vim has similar features now, but back in the
day that was not the case.

As you say, Emacs users live in Emacs.

--
Dan Espen
Re: Holy wars of the past - how did they turn out? [message #405498 is a reply to message #405497] Thu, 11 February 2021 14:20
scott
Dan Espen <dan1espen@gmail.com> writes:
> Charlie Gibbs <cgibbs@kltpzyxm.invalid> writes:
>
>> I took a brief look at emacs a while ago. It seems to have a lot of
>> nice features, but its mindset is too different from mine. Perhaps
>> it comes down to that twist on the old saying: "It's a nice place to
>> live, but I wouldn't want to visit there."
>
> No reason to leave vi; even I can see its strengths.
> But vi users do have a path to using Emacs: just turn on vi mode.
> You can even opt for the level of vi emulation you want.
>
> What I notice about vi users that I think is less than optimal is that
> they work with a bunch of terminals. Edit in one, compile in one,
> test in another, read man pages in another.
>
> Users who work that way have a strong chance of being befuddled when
> they make a change, forget to save, then compile and test. With Emacs,
> you get prompted if you fail to save before you invoke compile. And you
> don't need to use line numbers to find errors, you just invoke
> 'next-error'. I suppose vi or vim has similar features now, but back in the
> day that was not the case.

Yes, vim has such features. Tell it to compile and it will run the
makefile and open, in turn, each file that compiled with errors,
positioned at the error.
Re: Holy wars of the past - how did they turn out? [message #405499 is a reply to message #405497] Thu, 11 February 2021 15:06
Ahem A Rivet's Shot
On Thu, 11 Feb 2021 14:12:30 -0500
Dan Espen <dan1espen@gmail.com> wrote:

> What I notice about vi users that I think is less than optimal is that
> they work with a bunch of terminals. Edit in one, compile in one,
> test in another, read man pages in another.

Yep, and possibly even several edits open, usually in tmux tabs
these days. I've tried using IDEs and they annoy me endlessly. Also I've
worked on a number of projects with odd build mechanisms.

> Users that work that way have a strong chance of being befuddled when
> they make a change, forget to save, then compile and test.

Never had that happen (that I recall); I always save before
switching tabs (possibly early misadventures reinforced the habit).

> With Emacs,
> you get prompted if you fail to save before you invoke compile. And you
> don't need to use line numbers to find errors, you just invoke
> 'next-error'. I suppose vi or vim has similar features now, but back in
> the day that was not the case.

	They do - but I never use them; going to a line is quick and easy.
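For anyone rusty on vi, going to a line really is a one-keystroke affair; a few equivalent ways:

```vim
:42        " ex command: jump to line 42
" In normal mode, 42G (G with a count) does the same,
" and from the shell:  vim +42 file.c  opens the file at line 42
```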

--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
Re: Holy wars of the past - how did they turn out? [message #405513 is a reply to message #405498] Thu, 11 February 2021 15:38 Go to previous messageGo to next message
Dan Espen is currently offline  Dan Espen
Messages: 3867
Registered: January 2012
Karma: 0
Senior Member
scott@slp53.sl.home (Scott Lurndal) writes:

> Dan Espen <dan1espen@gmail.com> writes:
>> Charlie Gibbs <cgibbs@kltpzyxm.invalid> writes:
>>
>>> I took a brief look at emacs a while ago. It seems to have a lot of
>>> nice features, but its mindset is too different from mine. Perhaps
>>> it comes down to that twist on the old saying: "It's a nice place to
>>> live, but I wouldn't want to visit there."
>>
>> No reason to leave vi, even I can see its strengths.
>> But vi users do have a path to using Emacs, just turn on vi mode.
>> You can even opt for the level of vi emulation you want.
>>
>> What I notice about vi users that I think is less than optimal is that
>> they work with a bunch of terminals. Edit in one, compile in one,
>> test in another, read man pages in another.
>>
>> Users that work that way have a strong chance of being befuddled when
>> they make a change, forget to save, then compile and test. With Emacs,
>> you get prompted if you fail to save before you invoke compile. And you
>> don't need to use line numbers to find errors, you just invoke
>> 'next-error'. I suppose now vi or vim has similar features, but back in the
>> day that was the case.
>
> Yes, vim has such features. Tell it to compile and it will run the
> makefile and open, in turn, each file that compiled with errors, positioned
> at the error.

I figured. Still, we just had a vi user post about using vi as I described.

I'm guessing vim has the full range of Emacs features. You don't have to
use a Makefile; you can just include the compile command as comments
in the source file. For example, my crontab file ends with:

# Local Variables:
# compile-command: "crontab ~/cron.linux"
# End:
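For comparison, a vim sketch of the same idea: vim's modelines can't set 'makeprg' (it is restricted from modelines for security reasons), but an autocmd in the vimrc can attach the compile command to the file by name (reusing the ~/cron.linux path from above):

```vim
" In ~/.vimrc: when editing the crontab source, make :make install it
autocmd BufRead,BufNewFile ~/cron.linux setlocal makeprg=crontab\ ~/cron.linux
```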


--
Dan Espen
Re: Holy wars of the past - how did they turn out? [message #405514 is a reply to message #405419] Thu, 11 February 2021 15:39 Go to previous messageGo to next message
Andreas Kohlbach is currently offline  Andreas Kohlbach
Messages: 1456
Registered: December 2011
Karma: 0
Senior Member
On 10 Feb 2021 22:33:42 -0500, Rich Alderson wrote:
>
> "Michael P. O'Connor" <mpop@mikeoconnor.net> writes:
>
>> There are many of us that still use Emacs as our daily driver. Just
>> because it seems one has won out does not mean the other ones have fully
>> died off; there will still be those that use the "losing"
>> side's stuff.
>
> Indeed, I'm reading this thread in an Emacs-based newsreader.
>
> I've been an EMACS user since 1978 or so, and became the PDP-10 TECO
> implementation maintainer in 1999 (when RMS blessed a Y2K fix I published in
> comp.emacs). I used to know enough vi to edit the configuration files for GNU
> Emacs, but that was in the days before autoconfigure...

Using vi here for most of the config file editing, including the
.emacs as well as the .gnus (I write this article with it). Never thought
anything of it until another EMACS VS VIM discussion came up on
Usenet years ago. So I decided to edit the vimrc with Emacs and expected
the world to end, or at least the computer to go up in flames. But
nothing happened. :-D
--
Andreas

https://news-commentaries.blogspot.com/
Re: Holy wars of the past - how did they turn out? [message #405515 is a reply to message #405483] Thu, 11 February 2021 15:46
Andreas Kohlbach
On Thu, 11 Feb 2021 17:29:45 +0000, Ahem A Rivet's Shot wrote:
>
> Prior to meeting unix and vi in the mid 1980s I used whatever was
> handy (all too often WordStar in non-document mode), then there was unix
> (well XENIX) and vi was the editor so I learned it (painfully as I recall).

Too bad neither vi nor Emacs incorporated the WordStar Diamond. Probably
because both predate WordStar. I know you can remap keys in vi and Emacs
to get this. But it would have been cool if both editors used the keys
from WordStar to move the cursor around.

vim has something similar: h, j, k and l to move the cursor.
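As a rough sketch, the WordStar Diamond (Ctrl-E/S/D/X for up/left/right/down) can indeed be remapped in vim; a few lines for a vimrc (note the trade-off: Ctrl-S is often terminal flow control, and Ctrl-E/Ctrl-X have default vim bindings):

```vim
" Map the WordStar Diamond onto cursor motion in normal mode
noremap <C-e> k
noremap <C-s> h
noremap <C-d> l
noremap <C-x> j
```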
--
Andreas