Megalextoria
Retro computing and gaming, sci-fi books, tv and movies and other geeky stuff.

Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
Re: Endian wars, What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414015 is a reply to message #414007] Wed, 13 April 2022 10:13
Originally posted by: Johnny Billquist

On 2022-04-13 03:55, John Levine wrote:
> According to Peter Flass <peter_flass@yahoo.com>:
>> I would think big-endian is best for instruction decoding. Leaving aside
>> issues of cache access, the processor can load the first byte to determine
>> the format (and hence length) of the rest of the instruction, unless the
>> architecture stores the opcode in the low-order byte and reads that first
>> in little-endian format.
>
> Look at the x86. Its instruction format treats memory as a stream of
> bytes. It is of course little-endian but its instruction format would
> work about the same in big-endian.

Same on VAX. Instruction is just a byte, and that's the first byte.
Additional bytes are consumed as needed.

> On the PDP-11 instructions were a sequence of 16 bit words and the
> memory was 16 bit words (or later perhaps larger cache lines) so again
> instruction decoding would have worked pretty much the same in either
> byte order.

Yup. The cache line on the 11/70 is 32 bits. I think some other J11-based
machines also had a 32-bit cache line.

Johnny
Re: Endian wars, What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414016 is a reply to message #414014] Wed, 13 April 2022 10:16
Originally posted by: Johnny Billquist

On 2022-04-13 10:18, Peter Maydell wrote:
> In article <t35aiv$2gm6$2@gal.iecc.com>, John Levine <johnl@taugh.com> wrote:
>> On the PDP-11 instructions were a sequence of 16 bit words and the
>> memory was 16 bit words (or later perhaps larger cache lines) so again
>> instruction decoding would have worked pretty much the same in either
>> byte order.
>
> Did the PDP-11 read the whole 16-bit instruction as a single
> memory access? (I'm guessing it did.) If you load the whole
> instruction at once, as any modern fixed-width-instruction
> CPU will, you can lay out the instruction fields in it
> however you like, because you've already got all of them.

Basically, memory reads on a PDP-11 are always 16 bits (or more if you
have a larger cache line). An instruction that referred only to a byte
still read the whole 16 bits. On writes, the memory could handle 8-bit
writes.

But also, instructions are always 16 bits, and they have to be word
aligned, so which byte order you'd have would make absolutely no difference.

Johnny
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414017 is a reply to message #413991] Wed, 13 April 2022 10:19
Originally posted by: Johnny Billquist

On 2022-04-12 20:49, John Levine wrote:
> According to Johnny Billquist <bqt@softjar.se>:
>>> Of course you're guessing. You weren't there, and neither you nor I
>>> know what the people at DEC were thinking when they designed the
>>> PDP-11. Everything you say about the -11 was equally true when IBM
>>> designed the 360, and they made it big-endian.
>>
>> Am I guessing that referring to a value as a byte or as a word gives
>> the same value using the same address?
>>
>> No. I am not. ...
>
> Hi. To repeat the part of my message you apparently didn't read, all
> the stuff you say is true. It was equally true about the big-endian
> IBM 360.

No, it is not.

If you have a byte addressable machine, and you use big-endian order,
then the "weight" of a byte is *not* 256^n where n is the relative
address to the base of the word.
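As a concrete illustration of that weight rule (a minimal C++ sketch;
it assumes nothing beyond the standard library, and the check only
passes on a little-endian host):

#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    // On a little-endian, byte-addressable machine the byte at
    // relative address n within a word carries weight 256^n.
    std::uint32_t word = 0x04030201u;     // arbitrary test value
    unsigned char bytes[4];
    std::memcpy(bytes, &word, 4);         // view the word as bytes

    std::uint32_t sum = 0;
    for (int n = 0; n < 4; ++n)
        sum += std::uint32_t(bytes[n]) << (8 * n);   // weight 256^n

    // Holds on little-endian hosts; fails on big-endian ones,
    // which is exactly the asymmetry described above.
    std::printf("weight rule %s\n", sum == word ? "holds" : "fails");
}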

> We have no idea whether any of it affected DEC's design of the PDP-11,
> or for that matter, the design of the IBM 360. You're guessing.

Since it seems you didn't even get my points, maybe I should just stop here.
But I think it's bordering on the silly to assume they did not take my
points into consideration when doing the design.

The one potentially interesting question to which I don't have an answer
is whether they had other or additional reasons.

Johnny
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414018 is a reply to message #414008] Wed, 13 April 2022 10:22
Originally posted by: Johnny Billquist

On 2022-04-13 04:21, John Levine wrote:
> According to Peter Flass <peter_flass@yahoo.com>:
>> Little-endian was a screwball deviation.
>
> Indeed.

Indeed not. For the human brain, little-endian feels wrong. But from a
computer's point of view, there are definitely arguments in its favor.

Just as there are for designating the LSB as bit 0 and the MSB as bit n,
and not the other way around. But older machines did it the other way as
well. Do you consider numbering bits with the LSB as 0 to be a screwball
deviation too?

Johnny
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414019 is a reply to message #413989] Wed, 13 April 2022 10:27
Originally posted by: Johnny Billquist

On 2022-04-12 20:21, Peter Maydell wrote:
> In article <20220412150344.5f0a74ebd70faf038599df35@eircom.net>,
> Ahem A Rivet's Shot <steveo@eircom.net> wrote:
>> On Tue, 12 Apr 2022 13:56:45 GMT
>> scott@slp53.sl.home (Scott Lurndal) wrote:
>>
>>> I also take exception at the characterization of "bad code". When
>>> your design target is big-endian, there's no need to use byteswapping
>>> interfaces.
>>
>> Agreed, if you have a task involving manipulation of big endian
>> entities and have chosen a big endian processor for the job you can
>> perfectly correctly forget about the problem.
>>
>> The code isn't bad, just not entirely portable but for embedded use
>> not portable is fine.
>
> ...up until the point when you need to switch your embedded
> product to a different CPU architecture, when it turns out
> that actually it does matter. It's not that hard to put
> the conceptually-required-but-happen-to-be-nop transforms in
> as you go along. and it's awful to have to go back and
> retrofit them, so I maintain that not doing so is indeed
> sloppy practice.

I would disagree since I definitely differentiate between bad code and
non-portable code.

Johnny
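For reference, the "conceptually required but happens to be a no-op"
transform Maydell describes might look like this (a hedged C++ sketch;
the helper name is illustrative, not from any particular codebase):

#include <cstdint>

// Read a 32-bit big-endian field (e.g. from a packet header).
// Written as byte arithmetic rather than type punning, it is correct
// on both BE and LE hosts; on a big-endian target a compiler will
// typically reduce it to a plain word load, so the transform costs
// nothing there, but the code stays portable.
static inline std::uint32_t load_be32(const unsigned char *p)
{
    return (std::uint32_t(p[0]) << 24) |
           (std::uint32_t(p[1]) << 16) |
           (std::uint32_t(p[2]) <<  8) |
            std::uint32_t(p[3]);
}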
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414020 is a reply to message #414000] Wed, 13 April 2022 10:35
Posted by: scott
Peter Flass <peter_flass@yahoo.com> writes:
> Scott Lurndal <scott@slp53.sl.home> wrote:
>> John Levine <johnl@taugh.com> writes:
>>> According to Acceptable Name <metta.crawler@gmail.com>:
>>>> > I have never been able to figure out why DEC made the PDP-11
>>>> > little-endian, when all previous byte addressed machines were big-endian.
>>>
>>>> I read somewhere that RISC-V is little endian because that was good for performance.
>>>
>>> The PDP-11 was in 1969, RISC-V was four decades later in 2010 by which
>>> time the endian wars had largely burned out.
>>
>> I might argue that - big-endian processors are still available and
>> widely used in packet processing hardware such as DPUs. It's true that
>> most processors default to little-endian, but that's more because of the
>> ubiquity of x86 than anything else.
>>
>>>
>>> Data accesses in RISC-V are either endian set by a mode bit, code is
>>> always little-endian. The manual claims that little-endian makes it
>>> easier to look at instruction bits and see how long the instruction is
>>> which I find unpersuasive.
>>
>> They'd have to byte-swap the 32-bit instruction before decoding if they
>> accepted big-endian instructions or add a bunch of gates with
>> a dependency on the endianness bit in a control register. That
>> dependency would add complexity for OoO implementations.
>>
>> Same for ARMv8 where the 32-bit instructions are always little-endian.
>>
>> Makes sense, since the main
>> use for big-endian nowadays is to handle IP packets without byteswapping
>> and for processing data (e.g. interoperability). Doesn't
>> really make sense for the instruction stream to be big-endian.
>>
>
> I would think big-endian is best for instruction decoding.

Take a look at the encoding for the ARMv8 - there is no "first byte".
It's a 32-bit word that is not at all easy to decode manually[*].

RISC-V does have the concept of an "opcode", which is the lowest-order
seven bits of the 32-bit instruction word, little-endian as it were.
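For reference, the length-from-low-bits rule the RISC-V manual alludes
to can be sketched like this (C++; covers only the 16- and 32-bit
encodings, and the function names are mine):

#include <cstdint>

// RISC-V instructions are fetched as little-endian 16-bit parcels, and
// the low bits of the *first* parcel give the instruction length:
//   bits [1:0] != 0b11                 -> 16-bit (compressed)
//   bits [1:0] == 0b11, [4:2] != 0b111 -> 32-bit
static inline unsigned rv_length_bytes(std::uint16_t first_parcel)
{
    if ((first_parcel & 0x3u) != 0x3u)
        return 2;                  // compressed instruction
    if ((first_parcel & 0x1cu) != 0x1cu)
        return 4;                  // standard 32-bit instruction
    return 0;                      // longer encodings, not handled here
}

static inline unsigned rv_major_opcode(std::uint32_t insn)
{
    return insn & 0x7fu;           // major opcode: bits [6:0]
}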

One thing that software people don't generally appreciate is
the level of parallelism in a gate design - each bit in the
instruction word is a signal input to a combinational logic
circuit. Recall that the instruction must be decoded and all
the operands (registers, immediates) determined in a single
clock cycle.

All that said, there's no reason for the instruction stream
to be in any particular endianness. Note that the Intel x86 instruction
stream is effectively little-endian, as the first instruction
byte is the most significant byte (and often the only byte, given
variable-length instructions).

[*] Having written an ARMv8 simulator, I envy the hardware guys;
in software it is a sequence of: check bit N; if set, check
bit N+1; and so on until one gets to the instruction group, then
one must decode the function and up to three register
numbers (5 bits each). The C++ code to decode to the
instruction group is about 1000 lines total (with comments),
not counting the decoding of each individual group.

if (!bit::test(instruction, 27))
{
    /* 3322222222221111111111           */
    /* 10987654321098765432109876543210 */
    /*     0                            */
    if (!bit::test(instruction, 28))
    {
        /* 3322222222221111111111           */
        /* 10987654321098765432109876543210 */
        /*    00                            */
        if (bit::extract(instruction, 31, 21) != 0x1)
        {
            if (likely(bit::extract(instruction, 28, 25) == 0x2 && cpuState->is_sve_feature_enabled()))
            {
                /* 3322222222221111111111           */
                /* 10987654321098765432109876543210 */
                /*    0010                          */
                /* SVE - Scalable Vector Extension  */
                return sve_decode(instruction, flags);
            } else {
                return instr_illegal(instruction, flags);
            }
        }
        else
        {
            /* 3322222222221111111111           */
            /* 10987654321098765432109876543210 */
            /* 00000000001                      */
            return instr_illegal(instruction, flags);
        }
    }
...

Effectively, this is modeling the gates in a hardware design.
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414021 is a reply to message #414013] Wed, 13 April 2022 10:43
Posted by: scott
pmaydell@chiark.greenend.org.uk (Peter Maydell) writes:
> In article <slrnt5bk6c.2ude.grahn+nntp@frailea.sa.invalid>,
> Jorgen Grahn <grahn+nntp@snipabacken.se> wrote:
>> On Tue, 2022-04-12, Peter Maydell wrote:
>>> I think it's the combination of:
>>> * a widely used big endian CPU in a particular market niche
>>> * a use case that is constantly dealing with big endian
>>> data (ie network packet fields) and so missing byteswaps
>>> are likely to be pervasive rather than limited to a small
>>> part of the codebase
>>> * systems where you can get away with running a completely
>>> different config to everybody else (embedded products can
>>> be built for non-standard endianness; servers running
>>> stock OSes need to go along with the rest of the world)
>>
>> Nitpick: They don't necessarily need to go along: most free Unix
>> software builds for big-endian and little-endian machines. I'm too
>> lazy to check if e.g. Debian Linux supports any big-endian
>> architectures right /now/, but they did support PowerPC a few years
>> ago. That's thousands of correctly written programs.
>
> That's kind of what I mean -- Debian doesn't, and never has,
> supported big-endian Arm. Now of course Debian totally *could*
> build all that software for BE Arm, because it works on

There was a RHEL BE Arm version for a while, IIRC.

> s390x and so on. But there aren't enough general users
> out there for anybody to ship a distro for it. Which means
> that if you run BE Arm you are absolutely running a
> non-standard config where you had to build everything yourself,

Which is what most networking appliances already do: custom
Linux, generally. ThunderX had both BE and LE builds available
in its software development kit for customers.

Note that you can run BE applications on a LE OS (the OS
just sets SCTLR_EL1[EE] appropriately when scheduling the
process).

> and not a stock OS. The set of hardware configurations
> you can get a stock OS for is a subset of all the possible
> things one might be able to build.

Very few network appliances use off-the-shelf operating
systems, and they're the primary users of BE.

As it happens, most customers have chosen to stick with
LE on ARM as modern network stacks handle the byte-swapping
if necessary and most are switching to user-mode frameworks
like https://en.wikipedia.org/wiki/OpenDataPlane.
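To make "the network stack handles the byte-swapping" concrete, here
is a minimal sketch using the standard POSIX conversion helpers (the
function here is mine; ntohs() itself is standard):

#include <arpa/inet.h>   // ntohs(): network (BE) to host byte order
#include <cstdint>
#include <cstring>

// Extract the destination port from a UDP header. The wire format is
// big-endian ("network byte order"); ntohs() is a byte swap on
// little-endian hosts and a no-op on big-endian ones.
std::uint16_t udp_dest_port(const unsigned char *udp_header)
{
    std::uint16_t port_be;
    std::memcpy(&port_be, udp_header + 2, sizeof port_be);  // bytes 2..3
    return ntohs(port_be);
}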
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414023 is a reply to message #414020] Wed, 13 April 2022 13:16
Originally posted by: pmaydell

In article <BcB5K.419965$iK66.163851@fx46.iad>,
Scott Lurndal <slp53@pacbell.net> wrote:
> [*] Having written an ARMv8 simulator, I envy the hardware guys;
> in software it is a sequence of check bit N, if set, check
> bit N1, etc until one gets to the instruction group, then
> one must decode the function and up to three register
> numbers (5-bits each). The C++ code to decode to the
> instruction group is about 1000 lines total (with comments),
> not counting the decoding of each individual group.

> if (!bit::test(instruction, 27))
> {
>     /* 3322222222221111111111           */
>     /* 10987654321098765432109876543210 */
>     /*     0                            */
>     if (!bit::test(instruction, 28))
>     {
>         /* 3322222222221111111111           */
>         /* 10987654321098765432109876543210 */
>         /*    00                            */

Is this autogenerated code? For QEMU we ended up settling
on having a code-generator script that reads in instruction
patterns and spits out C code that does all the 'mask
bitfields, test bits, identify which insn this is' work.
It makes it a lot easier to modify to add new instructions
compared to a hand-rolled C decoder: the implementation is
still inevitably serial and sequential, but the human
representation the programmer edits is much closer to
a parallel no-interdependencies one. (Our aarch64 decoder
is still hand-rolled, but we will probably convert it at
some point.)

-- PMM
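The representation such a generator works from can be sketched roughly
like this (C++; the pattern entries are invented for illustration and
are not QEMU's actual tables or real A64 encodings):

#include <cstdint>

// Each pattern says: the instruction bits selected by `mask` must
// equal `value`. The source the human edits is a flat, order-free
// list; a generator (or this naive loop) turns it into the actual
// sequential tests.
struct InsnPattern {
    std::uint32_t mask;
    std::uint32_t value;
    const char   *name;
};

static const InsnPattern patterns[] = {
    { 0xff000000u, 0x91000000u, "insn_a" },   // made-up encodings
    { 0xffe0fc00u, 0x9b007c00u, "insn_b" },
};

const char *decode(std::uint32_t insn)
{
    for (const InsnPattern &p : patterns)
        if ((insn & p.mask) == p.value)
            return p.name;
    return "unallocated";
}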
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414024 is a reply to message #414017] Wed, 13 April 2022 14:04
Posted by: John Levine
According to Johnny Billquist <bqt@softjar.se>:
>> Hi. To repeat the part of my message you apparently didn't read, all
>> the stuff you say is true. It was equally true about the big-endian
>> IBM 360.
>
> No, it is not.
>
> If you have a byte addressable machine, and you use big-endian order,
> then the "weight" of a byte is *not* 256^n where n is the relative
> address to the base of the word. ...

Hm, let me see if I can use simpler words. All the reasons to make
the PDP-11 little-endian were just as true in 1963 when IBM made the 360.
If they were a big deal, IBM would have done the 360 the other
way. But they did not. Either the 360 designers were incompetent,
which they were not, or those details weren't important.

>> We have no idea whether any of it affected DEC's design of the PDP-11,
>> or for that matter, the design of the IBM 360. You're guessing.

What I said. You can imagine that the 360 or PDP-11 designers
considered one argument or another, but they didn't say, neither of us
know, and it is sheer silliness to assume that you or anyone else know
what they were thinking.

--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414025 is a reply to message #414017] Wed, 13 April 2022 14:18
Originally posted by: Lars Brinkhoff

Johnny Billquist wrote:
> But I think it's bordering on the silly to assume they did not take my
> points into consideration when doing the design.

Maybe I can clarify. I'm quite sure John Levine didn't write any such thing.
He did not make any assumption about what points DEC designers
did *not* take into consideration. He also did not make any assumption about
what they *did* take into consideration. There is a third option: to not make
any assumption about what DEC designers were thinking, beyond what can be
found in written records from the time.
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414026 is a reply to message #414024] Wed, 13 April 2022 16:10
Originally posted by: Acceptable Name

On Wednesday, April 13, 2022 at 2:04:53 PM UTC-4, John Levine wrote:
> [... John's message quoted in full; snipped ...]

Bravo! Exquisite! I always dreamed of seeing a real USENET flame war in real time. Simply magnificent. I can see the reviews now. I wish USENET had a lobby so I could hand out cigars, bottles of fine wine and bouquets of flowers to all the cast.
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414029 is a reply to message #414025] Thu, 14 April 2022 11:45
Originally posted by: Johnny Billquist

On 2022-04-13 20:18, Lars Brinkhoff wrote:
> Johnny Billquist wrote:
>> But I think it's bordering on the silly to assume they did not take my
>> points into consideration when doing the design.
>
> Maybe I can clarify. I'm quite sure John Levine didn't write any such thing.
> He did not make any assumption about what points DEC designers
> did *not* take into consideration. He also did not make any assumption about
> what they *did* take into consideration. There is a third option: to not make
> any assumption about what DEC designers were thinking, beyond what can be
> found in written records from the time.

I think some people are getting too obsessed with finding explicit
quotes. The facts that I pointed out are obvious and true as such.

To assume that DEC didn't know these facts, or didn't take them into
consideration when designing a completely new architecture, is silly;
it would mean DEC did hardware design off the cuff like no one has
ever seen.

So I think we can safely say that the two things I pointed out were most
certainly a part of the reason.

Now, if someone wants to be anal about it and say there is no proof that
they did consider these points, because there is no written record of
it, then so be it. Those same persons should perhaps also ask why DEC
decided to do a binary, and not, say, a trinary computer. Because you
won't find any explanation for choosing binary either. Did they then
just pick binary out of thin air, or was there some reason for that choice?

To bemuse people even more, here are some observations, along with some
things from the written record:

The PDP-11, as we know it, came about in a very short time, very late in
the development. For the longest time, the PDP-11 looked *very*
different from what we know. As late as spring 1969, the proposed
architecture was so different that you cannot recognize any of it.
Instructions were 1 to 3 bytes in length, and could start at any
address. Single accumulator thing, with lots of special registers for
different purposes.

However, this earlier architecture (even marked as "final" and "no more
changes" in the spring of 1969) was also little-endian.

And there are some potential reasons for this that you can see in the
instruction encoding. Memory reference instructions could be either two
or three bytes. Addresses 0 to FF were encoded with just a two-byte
instruction, a little like the zero-page references on the PDP-8. The
instruction decoding figures out pretty late whether this is going to
be an 8-bit or a 16-bit address. When decoding, you start stuffing the
address bits from the instruction stream into Q (that's what they call
it in the documentation). Now, if you had used big-endian order, this
decoding becomes more complicated: if it is a short address, somewhere
between 0 and FF, the decoder stuffs it into the low part of Q and sets
the high part to 0; but if it is a long 16-bit address, the first byte
in the stream needs to go to the high byte of Q, and the next byte
after that to the low byte. This is obviously messier, and makes the
whole implementation more complicated.
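A rough C++ reconstruction of that decode path (the use of "Q" follows
the memos' description; the helper and the code itself are my sketch):

#include <cstdint>

extern std::uint8_t next_byte();   // pull the next instruction-stream byte

// Little-endian order lets the short (one-byte) and long (two-byte)
// address forms share the same first step: the first byte always
// lands in the low half of Q.
std::uint16_t decode_address(bool long_form)
{
    std::uint16_t q = next_byte();                // low byte of Q, high = 0
    if (long_form)
        q |= std::uint16_t(next_byte()) << 8;     // long form adds high byte
    // Big-endian order would instead route the first byte to the low
    // or the high half of Q depending on the form -- the extra mess
    // described above.
    return q;
}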

You can see this thing if you look at the 1969 design docs in
http://bitsavers.org/pdf/dec/pdp11/memos/.

Now, is that the full answer to why the PDP-11 is little-endian? No. It
is yet another piece of it, even if this also isn't explicitly written
out. And the final PDP-11 has nothing in common with this design. But
it was a factor and an input into the next design, which became what we
know as the PDP-11 today.

The most direct, obvious answer to "why was the PDP-11 little-endian"
could be "because the previous design for a machine called PDP-11 was
little-endian". And that could be the end of the discussion. But of
course, we should examine why that previous design chose to be
little-endian, and here is one reason. Was it the only reason? Most
likely not.

I still don't know all the reasons, and I doubt we will find out. But
clearly there are some reasons we can see or deduce.

We could also try to see if Gordon Bell would be interested in giving
some answers. But he might not remember anymore, or might not have been
involved in that detail.

Johnny
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414031 is a reply to message #414029] Thu, 14 April 2022 14:16
Posted by: Peter Flass
Johnny Billquist <bqt@softjar.se> wrote:
> [... Johnny's account of the 1969 PDP-11 design history, quoted in full; snipped ...]

Thanks! I didn’t know this.

--
Pete
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414122 is a reply to message #413989] Thu, 21 April 2022 17:24
Posted by: Quadibloc
On Tuesday, April 12, 2022 at 12:21:29 PM UTC-6, Peter Maydell wrote:
> In article <20220412150344.5f0a...@eircom.net>,
> Ahem A Rivet's Shot <ste...@eircom.net> wrote:
>> On Tue, 12 Apr 2022 13:56:45 GMT
>> sc...@slp53.sl.home (Scott Lurndal) wrote:
>>
>>> I also take exception at the characterization of "bad code". When
>>> your design target is big-endian, there's no need to use byteswapping
>>> interfaces.
>>
>> Agreed, if you have a task involving manipulation of big endian
>> entities and have chosen a big endian processor for the job you can
>> perfectly correctly forget about the problem.
>>
>> The code isn't bad, just not entirely portable but for embedded use
>> not portable is fine.
> ...up until the point when you need to switch your embedded
> product to a different CPU architecture, when it turns out
> that actually it does matter. It's not that hard to put
> the conceptually-required-but-happen-to-be-nop transforms in
> as you go along. and it's awful to have to go back and
> retrofit them, so I maintain that not doing so is indeed
> sloppy practice.

I just differ in where to assign the blame.

Why do little-endian CPUs even exist?

Oh, I know, historically it was more efficient to fetch the least
significant part first, and start adding, with the second part
fetched by the time the carry was ready. But there's no excuse
for that today.

Oh, except compatibility with mountains of x86 software.

John Savard
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414123 is a reply to message #413991] Thu, 21 April 2022 17:34
Posted by: Quadibloc
On Tuesday, April 12, 2022 at 12:49:34 PM UTC-6, John Levine wrote:

> If you have any actual info about why DEC did it backward from everyone
> else, a lot of us would be delighted to see it. But guesses don't help.

There doesn't appear to be any documentary information available.

But "guesses", based on the history of computing at that time, seem to
settle the issue well enough that there's no need to worry about it further.

Lots of 16-bit computers of that time, when they did a 32-bit add from
memory to accumulator, operated on quantities stored in memory with
the least-significant 16 bits first.

This was because it made the operation quicker - you could fetch and
add the least significant 16 bits, and then the carry was ready for when
the most significant bits were read in. (Other things, like addressing multi-word
objects by their location at the lowest address, and looping by incrementing,
rather than decrementing, the address were deeply-ingrained conventions.)
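In C++ terms, that fetch/add overlap looks like this (a minimal sketch
of multi-word addition, words stored least significant first):

#include <cstddef>
#include <cstdint>

// Add two multi-word integers stored least-significant word first.
// The loop walks addresses upward, and the carry is ready just as
// each more-significant word arrives.
void multiword_add(std::uint16_t *sum, const std::uint16_t *a,
                   const std::uint16_t *b, std::size_t nwords)
{
    std::uint32_t carry = 0;
    for (std::size_t i = 0; i < nwords; ++i) {     // LSW at index 0
        std::uint32_t t = std::uint32_t(a[i]) + b[i] + carry;
        sum[i] = std::uint16_t(t);                 // low 16 bits
        carry  = t >> 16;                          // into the next word
    }
}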

Usually, though, when they stored two characters in a 16-bit word, they
stored the first one in the most significant bits.

This meant a messy, inconsistent mapping of the characters of a four-character
string stored in a 32-bit number that no one would ever have used for a computer
with a 32-bit word.

So it is *obvious* why the PDP-11 was what it was. It allowed a _consistent_
ordering of characters stored in a 32-bit word, just as consistent in its own way
as the big-endian order of the IBM 360, but in a way that was suitable for 16-bit
hardware.

That the floating-point was then inconsistent is obviously due to a
failure of communication in a world where big-endianness was the
universal convention. So this doesn't refute the hypothesis that
consistency was the goal.

Documentary evidence would be nice, if it could ever be found, but what was
going on there is basically so *obvious* that I find it hard to understand why
you would feel any pressing need for it.

John Savard
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414132 is a reply to message #414123] Thu, 21 April 2022 20:15
Posted by: John Levine
According to Quadibloc <jsavard@ecn.ab.ca>:
> On Tuesday, April 12, 2022 at 12:49:34 PM UTC-6, John Levine wrote:
>
>> If you have any actual info about why DEC did it backward from everyone
>> else, a lot of us would be delighted to see it. But guesses don't help.
>
> There doesn't appear to be any documentary information available.
>
> But "guesses", based on the history of computing at that time, seem to
> settle the issue well enough that there's no need to worry about it further.

Well, maybe. Over on comp.arch someone noted that Bitsavers has a
bunch of DEC memos from early 1969 that describe the evolving design
of the PDP-11. Until about the second week in March, the design was
sort of a byte addressed PDP-9, with a single accumulator and a byte
encoded instruction set that would have been easier to implement
little-endian. But by the end of the month it had turned into the
design they shipped, with 8 symmetrical registers, 16 bit datapaths
everywhere, and 16 bit instructions that would have worked equally
well in either byte order. (Don't argue about that last point unless
you actually programmed a PDP-11/20, /05, and /45 like I did.)

So it wasn't that it made the actual PDP-11 easier, it was that it
would have made an earlier paper design easier. There is nothing
explicit about why they chose the novel byte order beyond a comment in
someone's notes that the bits (not bytes) are numbered backwards. It's
not even clear if they knew their byte order was different from
previous byte addressed machines.



--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414137 is a reply to message #414123] Fri, 22 April 2022 02:23
Posted by: Ahem A Rivet's Shot
On Thu, 21 Apr 2022 14:34:12 -0700 (PDT)
Quadibloc <jsavard@ecn.ab.ca> wrote:

> On Tuesday, April 12, 2022 at 12:49:34 PM UTC-6, John Levine wrote:
>
>> If you have any actual info about why DEC did it backward from everyone
>> else, a lot of us would be delighted to see it. But guesses don't help.
>
> There doesn't appear to be any documentary information available.

I would be astonished if there had been a single reason -- there never
is for any major design decision, in my experience. After the decision
is made, someone *might* write down a convincing reason or two in a
justification document of some kind, but that's always just an edited
highlight of the original far-ranging (and unrecorded) discussions,
which cover every aspect anyone involved can think of -- and sometimes
take place in a single head.

--
Steve O'Hara-Smith
Odds and Ends at http://www.sohara.org/
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414156 is a reply to message #414122] Fri, 22 April 2022 15:23
Posted by: Charlie Gibbs
On 2022-04-21, Quadibloc <jsavard@ecn.ab.ca> wrote:

> Why do little-endian CPUs even exist?
>
> Oh, I know, historically it was more efficient to fetch the least
> significant part first, and start adding, with the second part
> fetched by the time the carry was ready. But there's no excuse
> for that today.

8080 One little,
8085 Two little,
8086 Three little-endians.
8088 Four little,
80186 Five little,
80286 Six little-endians.
80386 Seven little,
80386SX Eight little,
80486 Nine little-endians.
Pentium DIVIDE ERROR

> Oh, except compatibility with mountains of x86 software.

The only reason everyone uses COBOL is that everyone uses COBOL.
-- me

--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <cgibbs@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414163 is a reply to message #414122] Fri, 22 April 2022 16:19
Posted by: John Levine
According to Quadibloc <jsavard@ecn.ab.ca>:
> I just differ in where to assign the blame.
>
> Why do little-endian CPUs even exist?
>
> Oh, I know, historically it was more efficient to fetch the least
> significant part first, ...

Actually, it wasn't. See other threads on the history of the PDP-11.

--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414168 is a reply to message #414156] Fri, 22 April 2022 16:48
Posted by: Ahem A Rivet's Shot
On Fri, 22 Apr 2022 19:23:25 GMT
Charlie Gibbs <cgibbs@kltpzyxm.invalid> wrote:

> Pentium DIVIDE ERROR

We are Pentium of Borg,
Division is futile,
You will be approximated.

--
Steve O'Hara-Smith
Odds and Ends at http://www.sohara.org/
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414250 is a reply to message #414122] Thu, 28 April 2022 08:47
Originally posted by: Johnny Billquist

On 2022-04-21 23:24, Quadibloc wrote:
> Why do little-endian CPUs even exist?
>
> Oh, I know, historically it was more efficient to fetch the least
> significant part first, and start adding, with the second part
> fetched by the time the carry was ready. But there's no excuse
> for that today.

And I don't even get the problem. In what way is either order superior?
You (and some others) seem to think that big-endian is somehow the only
proper choice, and superior in some way.

This seems a very silly, subjective opinion. You are entitled to it, of
course, but can you make a proper argument for why CPUs actually should
be big-endian that isn't based on "you like it"?

Johnny
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414251 is a reply to message #414250] Thu, 28 April 2022 09:05
Originally posted by: Bob Eager

On Thu, 28 Apr 2022 14:47:34 +0200, Johnny Billquist wrote:

> On 2022-04-21 23:24, Quadibloc wrote:
>> Why do little-endian CPUs even exist?
>>
>> Oh, I know, historically it was more efficient to fetch the least
>> significant part first, and start adding, with the second part fetched
>> by the time the carry was ready. But there's no excuse for that today.
>
> And I don't even get the problem? In what way is either order superior?
> It seems like you (and some others) seem to think that big-endian
> somehow is the only proper choice, and superior in some way.
>
> This seems a very silly, subjective opinion. You are entitled to it, of
> course, but can you make some proper argument why cpus actually should
> be big-endian, that isn't based on "you like it".

I personally think either is equally valid, but I wish a lot of hex dump
programs would show little-endian words running from right to left. You
have to read each word from right to left, but the words usually appear
left to right!
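Such a dump is easy enough to sketch (C++; eight bytes per line,
addresses still increase line by line, but bytes within a line print
from the highest offset down):

#include <cstddef>
#include <cstdio>

// Dump memory with byte offsets decreasing left-to-right within each
// line, so little-endian multi-byte values read naturally, MSB first.
void dump_le(const unsigned char *p, std::size_t len)
{
    for (std::size_t line = 0; line < len; line += 8) {
        std::printf("%04zx:", line);
        for (std::size_t i = 8; i-- > 0; )         // highest offset first
            if (line + i < len)
                std::printf(" %02x", p[line + i]);
        std::printf("\n");
    }
}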



--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414252 is a reply to message #414123] Thu, 28 April 2022 09:24
Originally posted by: Johnny Billquist

On 2022-04-21 23:34, Quadibloc wrote:
> On Tuesday, April 12, 2022 at 12:49:34 PM UTC-6, John Levine wrote:
>
>> If you have any actual info about why DEC did it backward from everyone
>> else, a lot of us would be delighted to see it. But guesses don't help.
>
> There doesn't appear to be any documentary information available.

I should probably stop commenting on these things... :-)

> But "guesses", based on the history of computing at that time, seem to
> settle the issue well enough that there's no need to worry about it further.
>
> Lots of 16-bit computers of that time, when they did a 32-bit add from
> memory to accumulator, operated on quantities stored in memory with
> the least-significant 16 bits first.

Reference? What I do know is that the PDP-8 had hardware that could do
24-bit things, and for those, the first word is the high 12 bits (that
would be the FPP-12).
There don't seem to have been that many 16-bit machines around in 1969
that could also do 32-bit operations directly to memory.

> This was because it made the operation quicker - you could fetch and
> add the least significant 16 bits, and then the carry was ready for when
> the most significant bits were read in. (Other things, like addressing multi-word
> objects by their location at the lowest address, and looping by incrementing,
> rather than decrementing, the address were deeply-ingrained conventions.)

A good point about arithmetic operations. But I don't understand the
loop comment at all.

> Usually, though, when they stored two characters in a 16-bit word, they
> stored the first one in the most significant bits.

Why? That sounds unlikely. We are talking about byte addressing, so
naturally, the first character would go in the first byte. And if you
then pulled a 16-bit value out, the first character would be in the low
bits.

> This meant a messy, inconsistent mapping of the characters of a four-character
> string stored in a 32-bit number that no one would ever have used for a computer
> with a 32-bit word.

Agreed. That sounds very messy, and not like anything I've ever seen anywhere.

> So it is *obvious* why the PDP-11 was what it was. It allowed a _consistent_
> ordering of characters stored in a 32-bit word, just as consistent in its own way
> as the big-endian order of the IBM 360, but in a way that was suitable for 16-bit
> hardware.

Just to point out the obvious here. The PDP-11 is a 16-bit machine, not
a 32-bit one. And the first character is naturally stored in the first
byte. If you then read it out as a word, that character is in the low
bits of the word.

But the PDP-11 is odd in its treatment of 32-bit integers (as has been
mentioned). I previously said this was associated with EIS, which is
incorrect; it happened with the FPP-11. And the funny thing is that
the two 16-bit words of a 32-bit value are actually stored big-endian,
which results in the overall "middle"-endian handling in the PDP-11, if
we talk about the hardware.

> That the floating-point was then inconsistent - is obviously due to a failure of
> communication in a world where big-endianness was the universal convention.
> So this doesn't refute the hypothesis that consistency was the goal.

Not sure in what way there is inconsistency here; are we talking
about something other than the PDP-11 then?

Johnny
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414261 is a reply to message #414252] Thu, 28 April 2022 17:25
Posted by: John Levine
According to Johnny Billquist <bqt@softjar.se>:
>> So it is *obvious* why the PDP-11 was what it was. It allowed a _consistent_
>> ordering of characters stored in a 32-bit word, just as consistent in its own way
>> as the big-endian order of the IBM 360, but in a way that was suitable for 16-bit
>> hardware.
>
> Just to point out the obvious here. The PDP-11 is a 16-bit machine, not
> a 32-bit one. And the first character is naturally stored in the first
> byte. If you then read it out as a word, that character is in the low
> bits of the word.

I'm seeing a lot of "obvious" in this discussion that reverse-engineers
people's personal preferences.

The 360/20 was a byte addressed 16 bit machine with 16 bit registers.
The first character in the first byte of what they called a halfword
was in the high bits if you read it as a 16 bit quantity. It's no more
"natural" or "obvious" either way.

Since nobody has ever been able to find a written description of the
decision to make the PDP-11 byte order the reverse of every previous
byte-addressed machine at the time, I think the closest we can get are
the DEC notes at bitsavers. They described an earlier
never-implemented proposal for the -11 with byte-aligned instructions
which would have been a little easier to implement little-endian, and
then the memory layout got carried over by default into the 16-bit
design they actually built.

It's also not clear that any of the people working on the -11 had
programmed 360s or the few other byte-addressed machines so maybe they
didn't even realize that their memory was backward.

--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414271 is a reply to message #414250] Thu, 28 April 2022 20:46
Posted by: Peter Flass
Johnny Billquist <bqt@softjar.se> wrote:
> On 2022-04-21 23:24, Quadibloc wrote:
>> Why do little-endian CPUs even exist?
>>
>> Oh, I know, historically it was more efficient to fetch the least
>> significant part first, and start adding, with the second part
>> fetched by the time the carry was ready. But there's no excuse
>> for that today.
>
> And I don't even get the problem? In what way is either order superior?
> It seems like you (and some others) seem to think that big-endian
> somehow is the only proper choice, and superior in some way.
>
> This seems a very silly, subjective opinion. You are entitled to it, of
> course, but can you make some proper argument why cpus actually should
> be big-endian, that isn't based on "you like it".
>

If I write one-hundred thousand, I write 100,000, not 000,001. All else
being equal, why shouldn’t computers represent data as much as possible in
the way people think of it?

--
Pete
Re: What's different, was Why did Dennis Ritchie write that [message #414273 is a reply to message #414271] Thu, 28 April 2022 21:52
Posted by: John Levine
According to Peter Flass <peter_flass@yahoo.com>:
> If I write one-hundred thousand, I write 100,000, not 000,001.

Hmmn, big-endian cultural imperialism.

If S/360 had been designed by people who spoke Urdu or Arabic or Hebrew, would it have been big-endian or little-endian?

--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Re: What's different, was Why did Dennis Ritchie write that [message #414274 is a reply to message #414273] Thu, 28 April 2022 22:54
Originally posted by: Radey Shouman

John Levine <johnl@taugh.com> writes:

> According to Peter Flass <peter_flass@yahoo.com>:
>> If I write one-hundred thousand, I write 100,000, not 000,001.
>
> Hmmn, big-endian cultural imperialism.
>
> IF S/360 had been designed by people who spoke Urdu or Arabic or
> Hebrew, would have been big-endian or little-endian?

Who knows? After all, Arabic numerals, meaning those actually written
in Arabic script, have the most significant digit on the left and the
least on the right. The physical order is the same as for left to right
scripts, but the logical order is different. I guess that means they're
little-endian.

When doing arithmetic, one almost always writes in little-endian order,
regardless of the order of one's alphabetic script.
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414283 is a reply to message #414271] Fri, 29 April 2022 08:23
Originally posted by: Johnny Billquist

On 2022-04-29 02:46, Peter Flass wrote:
> Johnny Billquist <bqt@softjar.se> wrote:
>> On 2022-04-21 23:24, Quadibloc wrote:
>>> Why do little-endian CPUs even exist?
>>>
>>> Oh, I know, historically it was more efficient to fetch the least
>>> significant part first, and start adding, with the second part
>>> fetched by the time the carry was ready. But there's no excuse
>>> for that today.
>>
>> And I don't even get the problem? In what way is either order superior?
>> It seems like you (and some others) seem to think that big-endian
>> somehow is the only proper choice, and superior in some way.
>>
>> This seems a very silly, subjective opinion. You are entitled to it, of
>> course, but can you make some proper argument why cpus actually should
>> be big-endian, that isn't based on "you like it".
>>
>
> If I write one-hundred thousand, I write 100,000, not 000,001. All else
> being equal, why shouldn’t computers represent data as much as possible in
> the way people think of it?

So it's exactly because "you like it"?

It's a computer. It can deal with any format just as easily. Just
because you can't is hardly an argument. That's why we have computers in
the first place. To do all the boring tasks that can easily be described.

It's a long time since computers stopped using decimal, too. Should
they go back to that, because decimal is in fact what you use?

Johnny
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414284 is a reply to message #414261] Fri, 29 April 2022 08:30
Originally posted by: Johnny Billquist

On 2022-04-28 23:25, John Levine wrote:
> According to Johnny Billquist <bqt@softjar.se>:
>>> So it is *obvious* why the PDP-11 was what it was. It allowed a _consistent_
>>> ordering of characters stored in a 32-bit word, just as consistent in its own way
>>> as the big-endian order of the IBM 360, but in a way that was suitable for 16-bit
>>> hardware.
>>
>> Just to point out the obvious here. The PDP-11 is a 16-bit machine, not
>> a 32-bit one. And the first character is naturally stored in the first
>> byte. If you then read it out as a word, that character is in the low
>> bits of the word.
>
> I'm seeing a lot of "obvious" in this discussion that reverse-engineers
> people's personal preferences.

That the PDP-11 is a 16-bit machine, and not a 32-bit machine, is
obvious, and has nothing to do with personal preferences. :-)

That the first character is stored in the first byte is rather
natural behavior.

> The 360/20 was a byte addressed 16 bit machine with 16 bit registers.
> The first character in the first byte of what they called a halfword
> was in the high bits if you read it as a 16 bit quantity. It's no more
> "natural" or "obvious" either way.

Since the claim seems to be that the 360 was a byte-addressed machine (I
honestly have no clue about the 360), you then seem to be saying that if
we do byte addressing and look at a string, the layout is like this:

0: 2nd character
1: 1st character
2: 4th character
3: 3rd character
4: 6th character
5: 5th character

and so on. That really does not seem natural.

> Since nobody has ever been able to find a written descripton of the
> decision to make the PDP-11 byte order the reverse of every previous
> byte addressed machine at the time, I think the closest we can get are
> the DEC notes at bitsavers. They described an earlier
> never-implemented proposal for the -11 with byte aligned instructions
> which would have been a little easier to implement little-endian, and
> then the memory layout got carried over by default into the 16 bit
> design they actually built.
>
> It's also not clear that any of the people working on the -11 had
> programmed 360s or the few other byte-addressed machines so maybe they
> didn't even realize that their memory was backward.

Here we go again. Assuming that people who design a new architecture for
a large computer company would be unaware of what other designs exist is
just going way too far, in my view. Anyone who worked like that would be
fired, I think.

And you think the PDP-11 memory is backward? I have yet to find an
explanation for that view.

Johnny
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414290 is a reply to message #414283] Fri, 29 April 2022 10:29
Posted by: Dan Espen
Johnny Billquist <bqt@softjar.se> writes:

> On 2022-04-29 02:46, Peter Flass wrote:
>> [...snip...]
>> If I write one-hundred thousand, I write 100,000, not 000,001. All else
>> being equal, why shouldn’t computers represent data as much as possible in
>> the way people think of it?
>
> So it's exactly because "you like it"?
>
> It's a computer. It can deal with any format just as easily. Just
> because you can't is hardly an argument. That's why we have computers
> in the first place. To do all the boring tasks that can easily be
> described.
>
> It's a long time since computers also stopped using decimal. Should
> they go back to that because that is also in fact what you use?

Yes.

Besides all that, it seems that your argument is that there is no reason
to make computers easier to use.

--
Dan Espen
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414292 is a reply to message #414283] Fri, 29 April 2022 12:28
Posted by: D.J.
On Fri, 29 Apr 2022 14:23:55 +0200, Johnny Billquist <bqt@softjar.se>
wrote:
> On 2022-04-29 02:46, Peter Flass wrote:
>> [...snip...]
>> If I write one-hundred thousand, I write 100,000, not 000,001. All else
>> being equal, why shouldn’t computers represent data as much as possible in
>> the way people think of it?
>
> So it's exactly because "you like it"?
>
> It's a computer. It can deal with any format just as easily. Just
> because you can't is hardly an argument. That's why we have computers in
> the first place. To do all the boring tasks that can easily be described.
>
> It's a long time since computers also stopped using decimal. Should they
> go back to that because that is also in fact what you use?

I am puzzled why your answer went off on a tangent.

We say 100,000 when we mean 100,000. Not 1.
--
Jim
Re: What's different, was Why did Dennis Ritchie write that [message #414293 is a reply to message #414273] Fri, 29 April 2022 14:19 Go to previous messageGo to next message
Charlie Gibbs is currently offline  Charlie Gibbs
Messages: 5313
Registered: January 2012
Karma: 0
Senior Member
On 2022-04-29, John Levine <johnl@taugh.com> wrote:

> According to Peter Flass <peter_flass@yahoo.com>:
>
>> If I write one-hundred thousand, I write 100,000, not 000,001.
>
> Hmmn, big-endian cultural imperialism.
>
> If S/360 had been designed by people who spoke Urdu or Arabic
> or Hebrew, would it have been big-endian or little-endian?

?enil a no wolf txet dluow yaw hcihw dnA

--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <cgibbs@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414294 is a reply to message #414284] Fri, 29 April 2022 14:19 Go to previous messageGo to next message
Charlie Gibbs is currently offline  Charlie Gibbs
Messages: 5313
Registered: January 2012
Karma: 0
Senior Member
On 2022-04-29, Johnny Billquist <bqt@softjar.se> wrote:

> Since we seem to claim that the 360 was a byte addressed machine (I
> honestly have no clue about the 360), you then seem to say that if we
> were to do byte addresses, and look at a string, the layout is like this:
>
> 0: 2nd character
> 1: 1st character
> 2: 4th character
> 3: 3rd character
> 4: 6th character
> 5: 5th character
>
> and so on. That really does not seem natural.

I've heard this described as the "NUXI problem".

And no, the 360 doesn't work like that.
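
For concreteness, here is a minimal C sketch of the NUXI effect (my own
illustration, assuming a standard hosted C compiler): pack "UNIX" into
two 16-bit words low-byte-first, the way a little-endian PDP-11 lays the
string out, then read the words back high-byte-first, as a big-endian
reader of the same words would:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const char *s = "UNIX";

        /* Pack the string into two 16-bit words, low byte first,
           the way a little-endian machine lays it out in memory. */
        uint16_t w0 = (uint16_t)(((uint8_t)s[1] << 8) | (uint8_t)s[0]);
        uint16_t w1 = (uint16_t)(((uint8_t)s[3] << 8) | (uint8_t)s[2]);

        /* Read the same words back high byte first, as a big-endian
           machine stepping through them word by word would. */
        printf("%c%c%c%c\n", w0 >> 8, w0 & 0xff, w1 >> 8, w1 & 0xff);
        return 0;
    }

This prints "NUXI".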

--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <cgibbs@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.
Re: What's different, was Why did Dennis Ritchie write that [message #414295 is a reply to message #414273] Fri, 29 April 2022 14:28 Go to previous messageGo to next message
Peter Flass is currently offline  Peter Flass
Messages: 8375
Registered: December 2011
Karma: 0
Senior Member
John Levine <johnl@taugh.com> wrote:
> According to Peter Flass <peter_flass@yahoo.com>:
>> If I write one-hundred thousand, I write 100,000, not 000,001.
>
> Hmmn, big-endian cultural imperialism.
>
> If S/360 had been designed by people who spoke Urdu or Arabic or Hebrew,
> would it have been big-endian or little-endian?
>

I think I read that people who use R-L languages still write numbers L-R
(big-endian).

--
Pete
Re: Big-endian history, What's different, was Why did Dennis Ritchie write [message #414304 is a reply to message #414284] Fri, 29 April 2022 22:59 Go to previous messageGo to next message
John Levine is currently offline  John Levine
Messages: 1405
Registered: December 2011
Karma: 0
Senior Member
According to Johnny Billquist <bqt@softjar.se>:
>> I'm seeing a lot of "obvious" in this discussion that reverse-engineers
>> people's personal preferences.
>
> That the PDP-11 is a 16-bit machine, and not a 32-bit machine, is
> obvious, and has nothing to do with personal preferences. :-)

So far, so good.

> That the first character is stored in the first byte is a rather natural
> behavior.

Of course.

>> The 360/20 was a byte addressed 16 bit machine with 16 bit registers.
>> The first character in the first byte of what they called a halfword
>> was in the high bits if you read it as a 16 bit quantity. It's no more
>> "natural" or "obvious" either way.
>
> Since we seem to claim that the 360 was a byte addressed machine (I
> honestly have no clue about the 360)

Don't take my word for it, read the manual.

General design of S/360:

http://bitsavers.org/pdf/ibm/360/princOps/A22-6821-7_360PrincOpsDec67.pdf

The cut down 16-bit 360/20:

http://bitsavers.org/pdf/ibm/360/functional_characteristics/A26-5847-3_360-20_funChar_Apr67.pdf

> , you then seem to say that if we
> were to do byte addresses, and look at a string, the layout is like this:
>
> 0: 2nd character
> 1: 1st character
> 2: 4th character
> 3: 3rd character
> 4: 6th character
> 5: 5th character

Of course not. It's like this:

0: 1st character
1: 2nd character
2: 3rd character
3: 4th character
4: 5th character
5: 6th character

Since it's big-endian, if you do an LH instruction to load a 16-bit
value, byte 0 is in the high bits and byte 1 is in the low bits.
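
A quick C illustration of that load (a sketch assuming a standard hosted
compiler, with memcpy standing in for the 360's LH instruction; this is
not 360 code):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        uint8_t mem[2] = { 'A', 'B' };   /* 1st character at byte 0 */
        uint16_t half;

        /* Load bytes 0-1 as one 16-bit halfword, the way the host
           hardware does it; memcpy stands in for the 360's LH. */
        memcpy(&half, mem, sizeof half);

        if (half == (uint16_t)(('A' << 8) | 'B'))
            printf("big-endian host: byte 0 is in the high bits\n");
        else
            printf("little-endian host: byte 0 is in the low bits\n");
        return 0;
    }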

The 360, in both the regular 32-bit models and the 16-bit 360/20, was
consistently big-endian. IBM's zSeries, which is backward compatible
with the 360, still is.

>> It's also not clear that any of the people working on the -11 had
>> programmed 360s or the few other byte-addressed machines so maybe they
>> didn't even realize that their memory was backward.
>
> Here we go again. Assuming that people who design a new architecture for
> a large computer company would be unaware of what other designs exist is
> just going way too far in my view. Anyone who would work like that would
> be fired, I think.

Perhaps, but I see nothing in the PDP-11 notes that suggests that anyone
realized that their byte order was backward from all the existing
byte-addressed machines. Maybe they knew and didn't care, maybe they
didn't know.
--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414311 is a reply to message #414271] Sat, 30 April 2022 03:08 Go to previous messageGo to next message
Ahem A Rivet's Shot is currently offline  Ahem A Rivet's Shot
Messages: 4843
Registered: January 2012
Karma: 0
Senior Member
On Thu, 28 Apr 2022 17:46:10 -0700
Peter Flass <peter_flass@yahoo.com> wrote:

> If I write one-hundred thousand, I write 100,000, not 000,001. All else
> being equal, why shouldn’t computers represent data as much as possible in
> the way people think of it?

Computers should represent data in whichever way optimises
processing.

Computers should present data in whichever way optimises
understanding.

If there are two apparently equally good choices and everyone else
has made one of them, why not try the other one ? Worst case you learn
something, best case there's an unexpected advantage on the road less
travelled.

--
Steve O'Hara-Smith
Odds and Ends at http://www.sohara.org/
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414314 is a reply to message #414311] Sat, 30 April 2022 04:52 Go to previous messageGo to next message
Anonymous
Karma:
Originally posted by: maus

On 2022-04-30, Ahem A Rivet's Shot <steveo@eircom.net> wrote:
> On Thu, 28 Apr 2022 17:46:10 -0700
> Peter Flass <peter_flass@yahoo.com> wrote:
>
>> If I write one-hundred thousand, I write 100,000, not 000,001. All else
>> being equal, why shouldn’t computers represent data as much as possible in
>> the way people think of it?
>
> Computers should represent data in whichever way optimises
> processing.
>
> Computers should present data in whichever way optimises
> understanding.
>
> If there are two apparently equally good choices and everyone else
> has made one of them, why not try the other one ? Worst case you learn
> something, best case there's an unexpected advantage on the road less
> travelled.
>


I believe from recent messages that some people who read this group
still use Slackware, and I would ask them: is slackware-current still
systemd-free?

--
greymausg@mail.com
It is I, alone, who can tell you.
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414315 is a reply to message #414314] Sat, 30 April 2022 05:12 Go to previous messageGo to next message
Anonymous
Karma:
Originally posted by: maus

On 2022-04-30, maus <maus@dmaus.org> wrote:
> On 2022-04-30, Ahem A Rivet's Shot <steveo@eircom.net> wrote:
>> On Thu, 28 Apr 2022 17:46:10 -0700
>> Peter Flass <peter_flass@yahoo.com> wrote:
>>
>>> If I write one-hundred thousand, I write 100,000, not 000,001. All else
>>> being equal, why shouldn’t computers represent data as much as possible in
>>> the way people think of it?
>>
>> Computers should represent data in whichever way optimises
>> processing.
>>
>> Computers should present data in whichever way optimises
>> understanding.
>>
>> If there are two apparently equally good choices and everyone else
>> has made one of them, why not try the other one ? Worst case you learn
>> something, best case there's an unexpected advantage on the road less
>> travelled.
>>
>
>
> I believe from recent messages that some people who read this group
> still use Slackware, and I would ask them: is slackware-current still
> systemd-free?
>

To answer my own question, I checked Eric Hameleers' page, and
Slackware-live is still systemd-free.

Not that I think that is important... but a detail.


--
greymausg@mail.com
It is I, alone, who can tell you.
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414318 is a reply to message #414311] Sat, 30 April 2022 13:54 Go to previous messageGo to next message
Peter Flass is currently offline  Peter Flass
Messages: 8375
Registered: December 2011
Karma: 0
Senior Member
Ahem A Rivet's Shot <steveo@eircom.net> wrote:
> On Thu, 28 Apr 2022 17:46:10 -0700
> Peter Flass <peter_flass@yahoo.com> wrote:
>
>> If I write one-hundred thousand, I write 100,000, not 000,001. All else
>> being equal, why shouldn’t computers represent data as much as possible in
>> the way people think of it?
>
> Computers should represent data in whichever way optimises
> processing.
>
> Computers should present data in whichever way optimises
> understanding.
>
> If there are two apparently equally good choices and everyone else
> has made one of them, why not try the other one ? Worst case you learn
> something, best case there's an unexpected advantage on the road less
> travelled.
>

OTOH, there’s a lot to be said for maintaining compatibility with others.
If nothing else, there are fewer things to be surprised by when moving to a
new system. Everyone once had their own version of BCD because there may
have been an unexpected advantage in having a particular character or
arrangement. ASCII and EBCDIC were developed to standardize on one good
choice over many other possibilities.

--
Pete
Re: What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS? [message #414322 is a reply to message #414290] Sat, 30 April 2022 18:17 Go to previous messageGo to previous message
Anonymous
Karma:
Originally posted by: Johnny Billquist

On 2022-04-29 16:29, Dan Espen wrote:
> Johnny Billquist <bqt@softjar.se> writes:
>
>> So it's exactly because "you like it"?
>>
>> It's a computer. It can deal with any format just as easily. Just
>> because you can't is hardly an argument. That's why we have computers
>> in the first place. To do all the boring tasks that can easily be
>> described.
>>
>> It's a long time since computers also stopped using decimal. Should
>> they go back to that because that is also in fact what you use?
>
> Yes.
>
> Besides all that, it seems that your argument is that there is no reason
> to make computers easier to use.

In which way are they easier to use with one byte order than the other?
Do you have problems using any x86 based machines because of the byte
order? Do you normally even notice the byte order?

I can tell you that on the PDP-11, which I use all the time, I am very
seldom bothered by the byte order. However, once in a while, if I want
to check some integer as just a byte, I am actually happy that I can
use the same address as when I refer to the value as a word.
Basically, type conversions are very simple with little-endian order.
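
As a C sketch of that point (my illustration, not PDP-11 code; assumes a
standard hosted compiler):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint16_t word = 42;                 /* small value held in a word */
        uint8_t *byte = (uint8_t *)&word;   /* same address, narrower type */

        /* On a little-endian machine (PDP-11, x86) *byte is already 42:
           the byte at the word's own address holds the low-order part.
           On a big-endian machine the 42 would be in the other byte. */
        printf("as word: %u  as byte at the same address: %u\n",
               (unsigned)word, (unsigned)*byte);
        return 0;
    }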

So, which is "easier to use"?

Or are you saying that it's the decimal aspect that you miss, and which
is the "easier to use"?

Johnny