>>>> Hardware is still sold, a lot of the software developed in the
>>>> last twenty years has been developed in the atmosphere of software
>>>> should be *free*. There is little incentive for innovative software
>>>> development.
>>>
>>> There's been plenty of free innovative mainframe software. For that
>>> matter, there are free PC compilers and interpreters for a number of
>>> languages, some quite innovative.
>>
>> The bulk of the PC compilers are based on 30+ year old
>> technology. In the PC world, language design and implementation
>> have been essentially stalled for several years.
>
> Any evidence to back up your assertion?
>
> I don't follow GCC all that closely, but it seems to me there are
> new versions and release numbers and talk of forks. Must be something
> going on there.
>
Morten did a write-up of what needs to be done. IIRC, it was about 2004 or
so. He hasn't mentioned that any of it...well, except one...has been
done to his satisfaction.
>>> Now my new fear is... that *everything* I know will become
>>> obsolete and useless in a pragmatic sense.
>>
>> That's everybody's fear. The half life of geekish knowledge is no more
>> than 4 years. I can still write PDP-8 and 11 Assembler and nobody
>> cares. Oh, and Teco...
>>
>
> That's it in a nutshell, Mr. Roper!!! You (and I) can do a lot of neat
> things like PDP-8 and PDP-11 Assembly language... and *no* one gives a
> flying rat's ass about it anymore!!! It saddens me and it's emotionally
> taxing. All those things we know how to do... those things are as *cool* as
> they ever were!!! People just can *not* appreciate them anymore..... :-(
But in this computing biz, what used to be will be done again. At some
point, the underbelly of a system will be so complicated and so dependent
on other complicated messes, that someone will come up with the "new" bright
idea of a PDP-8 or PDP-11 of the original days to do a task which is very
important but doesn't need all the fancy-schmancy character machine language
support.
We may not see it; it took two more decades than I expected for people to
"rediscover" multi-CPUs in an SMP configuration (and they're still not
quite there yet). The software underbelly is in such a mess that it may
take a while for that to get better before the focus reverts back to
hardware improvements.
Ibmekon wrote:
> On 20 Jan 2013 15:58:05 GMT, jmfbahciv <See.above@aol.com> wrote:
>
>> Ibmekon wrote:
>>> On 19 Jan 2013 14:42:50 GMT, jmfbahciv <See.above@aol.com> wrote:
>>>
>>>> Jorgen Grahn wrote:
>>>> > On Fri, 2013-01-18, jmfbahciv wrote:
>>>> >> Ibmekon wrote:
>>>> > ...
>>>> >>> Only in recent years have I begun to be impressed by modern computers.
>>>> >>
>>>> >> The hardware is OK; the OSes still need a lot of work. OSes should be
>>>> >> seen and not heard unless asked.
>>>> >>
>>>> >> Every single one still needs to be wrestled with on a minutely basis.
>>>> >
>>>> > It's not OSes, but software in general. My hardware may be 100 or
>>>> > 1000 times larger/faster/better now compared to 1993, but my software
>>>> > isn't.
>>>>
>>>> It's the OS which influences the apps programmers. If they have never
>>>> seen nor experienced a good OS, they won't know how software should
>>>> behave.
>>>>
>>>> /BAH
>>>
>>> At your DEC, was there any separation between the scope OS and the
>>> applications ?
>>
>> I don't understand the question.
>>
>> With TOPS-20, a user could hit $ or ? depending on what kind of
>> help s/he wanted. In all other cases, the OS stayed out of the
>> way and allowed the user to do whatever s/he wanted, including
>> obeying commands and arranging resources so that access was
>> immediate and scheduling devices and software resources was
>> almost invisible.
>>
>> After the PDP-10 OS programmers moved into the VMS group, VMS
>> started to do similar things.
>>
>> Apps were able to use system calls, which were very well defined,
>> if they needed any data or actions from the monitor. Apps were
>> not allowed to place their tendrils in the EXEC portions of the
>> monitor. Monitors executed as much of their code as possible on behalf
>> of the user and not in exec mode.
>>
>> A huge part of MS' bugs is their corporate folklore of allowing
>> this to happen. Cutler had a battle early on to prevent any
>> random app from placing tendrils in the monitor (you call it
>> kernel) but lost that battle. This was unfortunate.
>>
>> /BAH
>
> Sorry.
>
> When I read my own post I saw it was unreadable - it should have read
> :
>
> "At DEC, was there any intentional separation between the scope of the
> functions of the OS and the applications ? "
>
> I tried to send the new version, my Newsreader has it in the Sent box,
> but it has not posted online.
>
> Nevertheless, your reply was on target.
:-) I guessed correctly.
> The question I was circling around is this - where is the imaginary
> mark between OS and Apps ?
The system call has been the traditional line, and most software did not
cross it. There were exceptions, but those were rare and were used
only for the configurations which needed the extra privs.
>
> And at what point in its history did Microsoft overstep it.
Since the programmers viewed the OS as just another app running on a
PC which didn't have to be secure w.r.t. another user, my guess would
be from the beginning. You can be very sloppy about which code runs
in exec or user mode if there is only one "program" ever running on
the system and that program is the kernel.
> Certainly the EU mandarins insisted the "Internet Explorer" browser
> and "Windows Media Player" should not be bundled in an OS.
There is a huge difference between bundling and having an app insert
its tendrils into the running monitor at runtime or later.
> But surely it goes back to the early 1980's and the startup of MS-DOS
> and WORD.
> It is easy to see how MS could gain from allowing their own
> application access to the Monitor, but deny it for the market
> competition.
Access to the monitor can be done with monitor calls, not DEC's
equivalent of POKE when they only wanted to PEEK or SPY.
>
> Since you have read this far, /BAH - what were consequences of
> "Cutler's last stand" ?
Any software developer who needed something from the monitor would
not design a system call but simply read/write what s/he needed
into the running kernel. Design reviews would not have refused
this flavor of implementation since it was a corporate culture
thing. If there had been questions, the developer would have
plenty of history to point at to get his own way. Cutler tried
to establish that system call wall but nobody else in that
company knew nor wanted to understand the dangers of making that
wall holey. They were running PCs which were single-user, single
owner and didn't need the security that multi-user systems had
to have. I still see this attitude in any PC implementation
even though all now have to run multi-user even if there's
only one human being touching it.
Think about MS' backdoors which have to be there for the update
services. The programmers would not wait to go through a system
call design to get into the deep dark bowels of a running system.
Bottom line to your question: unending security problems and
bugs which, when fixed, beget 3 new ones.
Jorgen Grahn <grahn+nntp@snipabacken.se> writes:
> On the other hand, the story upthread happened in 1993. Already then
> -- or a few years later -- it was understood that if your program
> couldn't cope with running on an SMP system, it was plain broken.
>
> The easiest way to avoid that was not to use threads. The easiest way
> to guarantee breakage was to use threads, without having an SMP system
> to test on.
>
> My first PC (a high-end AST, in 1996) had a free CPU socket on the
> motherboard for a second Pentium. Of course, it never made sense to
> add one, with hardware evolving so fast back then.
>
> I still don't own an actual SMP or "multi-core" system.
there was system support for multiprocessors on the server side going back
decades ... handling multiple different applications running
simultaneously ... and there was some amount of multi-threaded system
software, like dbms and transaction systems, that could take advantage of
multiprocessors. although lots of the earlier implementations only
allowed a processor to execute in a single section of code at a time ... aka
different applications running on different processors with a kernel
spin-lock ... so that entry to the kernel was limited to a single processor
at a time.
Microsoft, to its credit, has multi-threaded the calculations in Office
Excel 2007. But that's about where the credit ends.
Intel and AMD executives fail to hide their disappointment with
Microsoft well on the multi-threaded software front.
During a speech last June, Intel SVP Pat Gelsinger said the following:
"A couple of years ago, I had a discussion with Bill Gates (about the
multi-core products). He was just in disbelief. He said, 'We can't write
software to keep up with that.'"
Gates ordered the Intel executive to keep pumping out faster product.
"No, Bill, it's not going to work that way," Gelsinger informed him.
.... snip ...
Sequent had 32-way in the 80s/90s supported by their dynix version of
unix ... and in the 90s were one of the companies doing 256-way with
sci. we did some work with sequent on their 256-way Numa-Q ... and
they mentioned that they'd done most of the work on NT to take it past
4-way scaleup.
i've mentioned before that charlie had invented compare-and-swap when he
was working on cp67 fine-grain multiprocessor locking (instruction named
compare-and-swap because CAS are charlie's initials) at the science
center. misc. past posts mentioning science center http://www.garlic.com/~lynn/subtopic.html#545tech
past posts mentioning compare-and-swap and/or multiprocessor http://www.garlic.com/~lynn/subtopic.html#smp
effort was then made to have compare-and-swap added to the upcoming 370
.... but was initially rebuffed by the owners of 370 architecture. they
claimed that the corporate favorite-son batch operating system people
were claiming test-and-set (multiprocessor locking) instruction (from
360s) was more than adequate for multiprocessor operation. The
"challenge" was to come up with other compare-and-swap uses (than kernel
multiprocessor locking) in order to get compare-and-swap added to 370
.... the result was the application multi-threaded examples (whether
running on multiprocessor or not) ... which still appear in principles
of operation http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/A.6?DT=20040504121320
>>> Now my new fear is... that *everything* I know will become
>>> obsolete and useless in a pragmatic sense.
>>
>> That's everybody's fear. The half life of geekish knowledge is no more
>> than 4 years. I can still write PDP-8 and 11 Assembler and nobody
>> cares. Oh, and Teco...
>>
>
> That's it in a nutshell, Mr. Roper!!! You (and I) can do a lot of neat
> things like PDP-8 and PDP-11 Assembly language... and *no* one gives a
> flying rat's ass about it anymore!!! It saddens me and it's emotionally
> taxing. All those things we know how to do... those things are as *cool* as
> they ever were!!! People just can *not* appreciate them anymore..... :-(
>
Emotionally taxing? You know, I never cared about anyone appreciating
my programming. The absolute best thing about writing and debugging
code is you don't have to wait for the critics. The machine tells you
straight away and it doesn't lie, does not flatter, and does not have a
hidden agenda.
The only critics worth having are the ones who steal your code and make
it better.
You and I have ridden the mini-computer wave. It sucked us in at the
beginning, and spat us out on the beach. Sad? Not me! It was a blast.
--
To de-mung my e-mail address:- fsnospam$elliott$$
PGP Fingerprint: 1A96 3CF7 637F 896B C810 E199 7E5C A9E4 8E59 E248
>>>> OTOH, many photos 150 years old or so are still in fine condition.
>>>> Will the computer stuff still be readable? (old nit returns)
>>>
>>> It will - provided it's been copied onto more up to date media as
>>> it becomes available and before the old media is unreadable.
>>
>> It's not just the media, it's the file format. You're making the
>
> Sure, you may well have to move it into a more modern format from
> time to time.
>
>> assumption that, in the future, there will still be software capable of
>> reading the format.
>
> No I'm making the assumption that before the data is unreadable it
> will be copied to something that will be readable for longer.
All well and good to say this, and I'm sure the "archive" sites will
keep up, but what about the digital equivalent of the photo album that
sits in Grandma's attic for 100 years and is finally rediscovered when
the house is sold or torn down. You dig out a 1GB flash drive with a
bunch of JPEGs on it...
>>> I used Foxbase+ Mac and it was a great product for the time. When I
>>> heard that Microsoft was taking it over I knew the jig was probably
>>> up.
>>
>> Micro$oft is the CA of small computers.
>
> CA ? caca?
>
Computer Associates, notorious for buying up lots of small mainframe
software companies with good products and good support, then destroying
them thru bad management and cheese-paring.
>>>> >Hardware is still sold, a lot of the software developed in the
>>>> >last twenty years has been developed in the atmosphere of software
>>>> >should be *free*. There is little incentive for innovative software
>>>> >development.
>>>>
>>>> There's been plenty of free innovative mainframe software. For that
>>>> matter, there are free PC compilers and interpreters for a number of
>>>> languages, some quite innovative.
>>>
>>> The bulk of the PC compilers are based on 30+ year old
>>> technology. In the PC world, language design and implementation
>>> have been essentially stalled for several years.
>>
>> Any evidence to back up your assertion?
>>
>> I don't follow GCC all that closely, but it seems to me there are
>> new versions and release numbers and talk of forks. Must be something
>> going on there.
>>
> Morten did a write-up of what needs to be done. IIRC, it was about 2004 or
> so. He hasn't mentioned that any of it...well, except one...has been
> done to his satisfaction.
>
> /BAH
There are really two lists: the one to keep the current GCC working, and
the complete language technology update that GCC needs to support new
processors and innovation.
A better way to look at GCC: it was written when overlays and compiling
to asm were needed to barely get it to run on the available hardware. It has
been patched and tweaked many times, but the fundamental design is
decades old.
It is like using 50's era machine shop tools when your competitors
are using NC machines and laser cutters.
Peter Flass wrote:
On 1/21/2013 8:06 AM, jmfbahciv wrote:
>
> I had a much different technique. If I had to think about something,
> I'd play some kind of game, IIRC Go, so that my fingers stayed busy
> while I thought. Randomly changing sources makes me shudder and
> want to head for the backup tape :-).
>
I just ran into this the other day, and with my own code, too, but from
several years ago. I kept tweaking things and couldn't figure out why I
couldn't get it to work the way I wanted. Finally I sat down and went
thru it thoroughly and it turned out I was misunderstanding what a
routine was doing, probably because the name seemed to say one thing and
the code actually did something different (originally did the first and
later changed, but kept the old name for some stupid reason -- fixed
now, plus added comments.)
>>>> > OTOH, many photos 150 years old or so are still in fine condition.
>>>> > Will the computer stuff still be readable? (old nit returns)
>>>>
>>>> It will - provided it's been copied onto more up to date media as
>>>> it becomes available and before the old media is unreadable.
>>>
>>> It's not just the media, it's the file format. You're making the
>>
>> Sure, you may well have to move it into a more modern format
>> from time to time.
>>
>>> assumption that, in the future, there will still be software capable of
>>> reading the format.
>>
>> No I'm making the assumption that before the data is unreadable
>> it will be copied to something that will be readable for longer.
>
> All well and good to say this, and I'm sure the "archive" sites will
> keep up, but what about the digital equivalent of the photo album that
> sits in Grandma's attic for 100 years and is finally rediscovered when
> the house is sold or torn down. You dig out a 1GB flash drive with a
> bunch of JPEGs on it...
Yep that's a problem. The data *can* be kept readable and usable,
but it has to be done or it becomes a data recovery problem, probably a
*very* hard one with a century old flash drive.
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
>>> MS does seem to have addressed reliability issues a decade
>>> or more ago starting with nt and w2k but the *nix guys are
>>> still waving that around.
>>
>> On the other hand, just the other day we had a test bed go down
>> because Windows 7 decided to install an update and reboot.
>>
>
> If Windows 7 is anything like Vista, Charlie, you can select *not* to
> receive automatic updates... and just pick a time to update the system
> yourself. I had to do that to my wife's machine, because she got tired of
> the updates delaying her use of the computer.
I let the computer, Vista, tell me updates are available, and decide
which ones myself. All stems from an issue with a Win98Se computer
that insisted on trying to install a patch for a sound card not on the
computer. I copied my files off and formatted and installed WinXP when
it became available.
..
JimP.
--
Brushing aside the thorns so I can see the stars.
http://www.linuxgazette.net/ Linux Gazette
http://www.drivein-jim.net/ Drive-In movie theaters
http://story.drivein-jim.net/ A story Feb, 2011
>>>> > >Hardware is still sold, a lot of the software developed in the
>>>> > >last twenty years has been developed in the atmosphere of software
>>>> > >should be *free*. There is little incentive for innovative software
>>>> > >development.
>>>> >
>>>> > There's been plenty of free innovative mainframe software. For that
>>>> > matter, there are free PC compilers and interpreters for a number of
>>>> > languages, some quite innovative.
>>>>
>>>> The bulk of the PC compilers are based on 30+ year old
>>>> technology. In the PC world, language design and implementation
>>>> have been essentially stalled for several years.
>>>
>>> Any evidence to back up your assertion?
>>>
>>> I don't follow GCC all that closely, but it seems to me there are
>>> new versions and release numbers and talk of forks. Must be something
>>> going on there.
>>>
>> Morten did a write-up of what needs to be done. IIRC, it was about 2004 or
>> so. He hasn't mentioned that any of it...well, except one...has been
>> done to his satisfaction.
>>
>> /BAH
>
> There are really two lists, the one to keep the current GCC working and
> the complete language technology update that GCC needs to support new
> processors and innovation.
>
> A better way to look at GCC is it was written when overlays and compiling
> to asm were needed to barely get it to run on the available hardware. It has
> been patched and tweaked many times but the fundamental design is
> decades old.
>
> It is like using 50's era machine shop tools when your competitors
> are using NC machines and laser cutters.
You know, you are welcome to become a gcc contributor.
Recent developments:
Link-time optimization
A transition from C to C++ as the implementation language
A switch from LALR parsers generated with Bison to hand-written
recursive-descent parsers.
>>> Try finding a system with less than 8 megs of RAM now.
>>> I have a phone that qualifies and probably a dishwasher (it's
>>> fairly old) but I wouldn't bet on the (much newer) washing
>>> machine or the TV.
>>
>> I was thinking desktop systems as I read that and thought the
>> figure was a bit high. Then, I noted that it was megs, not gigs.
>
> AFAIK, no machine to test it, but you can use email on a C64.
"Give me a telnet long enough and an Ethernet port on which to
place it, and I shall move the mail."
--
/~\ cgibbs@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!
>>>> Now my new fear is... that *everything* I know will become
>>>> obsolete and useless in a pragmatic sense.
>>>
>>> That's everybody's fear. The half life of geekish knowledge is no more
>>> than 4 years. I can still write PDP-8 and 11 Assembler and nobody
>>> cares. Oh, and Teco...
>>>
>>
>> That's it in a nutshell, Mr. Roper!!! You (and I) can do a lot of neat
>> things like PDP-8 and PDP-11 Assembly language... and *no* one gives a
>> flying rat's ass about it anymore!!! It saddens me and it's emotionally
>> taxing. All those things we know how to do... those things are as *cool*
>> as
>> they ever were!!! People just can *not* appreciate them anymore..... :-(
> But in this computing biz, what used to be will be done again.
Nope, great swags of it never will be again.
With decent optimising cross compilers like Walter's, there is absolutely
no point whatever in hand-crafted assembler anymore, unless it's just a
hobby where you can't justify the cost of one of his cross compilers, or you
enjoy doing it and accept that you won't get as good a result.
In fact it remains to be seen whether we will see architectures that
are optimised for that sort of optimising cross compiler which are
just not suited to hand crafted assembler by humans at all.
We are already seeing that with high-performance
military fighters that just can't be flown if the computer
stops working and the only viable option is to eject.
You just can't hand-fly them anymore.
> At some point, the underbelly of a system will be so complicated
> and so dependent on other complicated messes, that someone
> will come up with "new" bright idea of a PDP-8 or PDP-11 of
> the original days
That happened LONG ago with single chip micros. Some of
them are quite a bit simpler than the PDP-11 and the PDP-8
is just too limited for the approach it took to be viable now
even with the most resource limited micros.
> to do a task which is very important but doesn't need all
> the fancy-schmancy character machine language support.
That happened LONG ago now.
> We may not see it; it took 2 more decades for people
> to "rediscover" multi-CPUs in an SMP configuration
Bullshit it did. That config never went away.
And the way google does it is NOTHING like the way DEC
used to do it, for a reason.
> (they're still not quite there yet)
Bullshit. They have in fact left it for dead, particularly with
operations like google.
> than I thought would happen. The software underbelly is
> in such a mess that it may take a while for that to become
> better before the focus reverts back to hardware improvements.
Even sillier. We see hardware improvements at a
MUCH higher rate than we ever do with software.
>>>> The faster the CPUs, the cheaper the RAM gets, the sloppier the
>>>> programmers.
>>>> Making a program fit in 4KB really concentrated the mind!
>>>
>>> Not to mention getting perhaps one or two turn arounds a day. One desk
>>> checked *well*.
>>
>> That was before my time, except of course there are still situations
>> where you cannot *test* your software as well or often as you'd like.
>>
>>> Nowadays, you can't produce at the rate you are
>>> expected to if you do. Submit and recompile and get your sintax[1] err
>>> errors in seconds. This produces a more diffuse and confused state of
>>> mind which is much less pleasant and also more logical errors,
>>> methinks.
>>
>> It's a bit of both. Sometimes it makes perfect sense to hand over
>> work to the computer, e.g. "remove this variable declaration and then
>> compile-edit-compile until the resulting errors go away".
>>
>> At other times you should really stop and *think* -- but thinking is
>> hard and it's so much easier to just hack at the code at random until
>> it seems to work. Unit testing often has that effect on me; if I have
>> a lot of passing tests, I find it hard to convince myself that I
>> should also study the code until I see that it's obviously correct.
>
> I had a much different technique. If I had to think about something,
> I'd play some kind of game, IIRC Go, so that my fingers stayed busy
> while I thought.
Yeah, I still do that with Freecell Pro.
More for the fundamentals of how I will do something
than the fine detail of the implementation, but quite a
bit with the fine detail too.
And not just with computing either, also with DIY
stuff and building kitchens from scratch etc too.
> Randomly changing sources makes me shudder
> and want to head for the backup tape :-).
I don't do that so much, but I can start implementing
something, realise that there is a real downside with
that approach, and need to back up and head off in
a different direction when I realise that there is a
much better approach for implementing it.
On 21 Jan 2013 09:21:40 GMT, Jorgen Grahn <grahn+nntp@snipabacken.se> wrote:
>
> My first PC (a high-end AST, in 1996) had a free CPU socket on the
> motherboard for a second Pentium. Of course, it never made sense to
> add one, with hardware evolving so fast back then.
>
> I still don't own an actual SMP or "multi-core" system.
I have a few - this old-ish IBM Thinkcentre, an AMD64 and an old Mac
7300 from 1997/8.
I had an argument recently with a noob who was convinced you needed
multiple cores to run more than one program simultaneously! I blame
Intel's somewhat misleading TV advertising...
I keep eyeing up old Sun E450s with quad Ultra-Sparcs, I always wanted
a computer on wheels, but then I think of the power consumption :-(
--
Cheers,
Stan Barr plan.b .at. dsl .dot. pipex .dot. com
> Actually, during the Y2K boom, we had "meeting training".
> We got a whole bunch of rules, including one person holding a
> stop watch.
During the late 80's, our meeting training was compliments of
John Cleese's _Meetings, Bloody Meetings_.
I spent most of the 90's as an organizational representative on
the X/Open base standards committee, and contributed to the
Unix International standards as well. We were very careful to avoid
invention in X/Open - to be included in the standard, an implementation
must already have existed, preferably from multiple vendors. It was
when the behavior of a given feature varied amongst vendors that things
got tricky.
UI, on the other hand, was all about invention (e.g. the DWARF standard came
from UI, along with the Large File (> 2GB) support extensions).
The only standards that would have been interesting to DEC in the BAH years would
have been the ANSI language standards and character set standards, I suspect.
> That's it in a nutshell, Mr. Roper!!! You (and I) can do a lot of
> neat things like PDP-8 and PDP-11 Assembly language... and *no*
> one gives a flying rat's ass about it anymore!!!
OTOH, a knowledge of S/360 or 8088 still carries over to a
considerable extent. Yeah, there are new addressing modes,
instructions and registers, but much of what you learned is still
valid.
Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to spamtrap@library.lspace.org
> I learned by reading POP, Data Macros, the Language reference for
> the macros.
They say that the memory is the second[1] thing to go. Assuming that
you're talking OS/360, you'd need at least 5 types of manuals:
PoOps
Assembler[2]
Data Management[2]
Supervisor[2]
JCL[2]
The exact breakdown for DOS is similar; there is no one manual that
covers the assembler, the data management facilities and the
supervisor facilities.
> My employer at the time refused to believe me and sent me to an IBM
> class anyway.
There are worse things. WSU handed us a stack of 7070 manuals and told
us to read them before class. The class didn't cover anything beyond
what we learned from reading the manuals, and in some cases students
were able to answer questions that the instructor was unable to
handle.
[1] I don't remember the first.
[2] You might need the companion services and user guide manuals,
not just the references.
> True, but it's only in recent years that they've become the norm
> to the extent that even phones are coming with quad core
> processors now.
That may be true for small machines, but when is the last time that
you saw a mainframe with only one engine? For that matter, are
uniprocessors really the norm in server farms?
> I came into the programming business via the hardware one. It has
> always mystified me how people can write programs without at least
> a basic idea of how the machine works.
How do you learn to program a line of compatible computers where each
model has a different implementation? Your way is fine for one-off
designs in the 1950's, but breaks down for processor families.
>>>> >> >Hardware is still sold, a lot of the software developed in the
>>>> >> >last twenty years has been developed in the atmosphere of software
>>>> >> >should be *free*. There is little incentive for innovative software
>>>> >> >development.
>>>> >>
>>>> >> There's been plenty of free innovative mainframe software. For that
>>>> >> matter, there are free PC compilers and interpreters for a number of
>>>> >> languages, some quite innovative.
>>>> >
>>>> > The bulk of the PC compilers are based on 30+ year old
>>>> > technology. In the PC world, language design and implementation
>>>> > have been essentially stalled for several years.
>>>>
>>>> Any evidence to back up your assertion?
>>>>
>>>> I don't follow GCC all that closely, but it seems to me there are
>>>> new versions and release numbers and talk of forks. Must be something
>>>> going on there.
>>>>
>>> Morten did a write-up of what needs to be done. IIRC, it was about 2004 or
>>> so. He hasn't mentioned that any of it...well, except one...has been
>>> done to his satisfaction.
>>>
>>> /BAH
>>
>> There are really two lists, the one to keep the current GCC working and
>> the complete language technology update that GCC needs to support new
>> processors and innovation.
>>
>> A better way to look at GCC is it was written when overlays and compiling
>> to asm were needed to barely get it to run on the available hardware. It has
>> been patched and tweaked many times but the fundamental design is
>> decades old.
>>
>> It is like using 50's era machine shop tools when your competitors
>> are using NC machines and laser cutters.
>
> You know, you are welcome to become a gcc contributor.
I just did. I assume you also give away the fruits of your labor
free from mundane things like making a living.
> Recent developments:
>
> Link-time optimization
> a transition from C to C++ as the implementation language
> A switch from LALR parsers generated with Bison, to hand-written
> recursive-descent parsers.
Basically implementation tweaks. This is a serious comment: linkers
are no longer needed. Combining link-time optimization with the compiler
optimizations is a fundamental start toward a design change that creates
unified whole-application optimization.
The parser change is surprising; it reflects the ineffectiveness of the
tool sets and will probably make it more difficult to make language
support changes in the future. It is even more surprising that the problem
was in the parser generator, and that rather than fixing it they chose a
non-automated approach.
This isn't personal, but rather a focus on a tool set that is now
showing its age.
>>>> Try finding a system with less than 8 megs of RAM now.
>>>> I have a phone that qualifies and probably a dishwasher (it's
>>>> fairly old) but I wouldn't bet on the (much newer) washing
>>>> machine or the TV.
>>>
>>> I was thinking desktop systems as I read that and thought the
>>> figure was a bit high. Then, I noted that it was megs, not gigs.
>>
>> AFAIK, no machine to test it, but you can use email on a C64.
>
> "Give me a telnet long enough and an Ethernet port on which to
> place it, and I shall move the mail."
For several years in the '70s, the fastest average throughput at MIT
was an experimental link (run once) using an ox cart with a load of
tapes. Response time was not wonderful.
> AFAIK, no machine to test it, but you can use email on a C64.
It used to be trivial, as there was a dial-up service to a shell that
provided Internet access using programs like pine. That is no longer
available where I live; in fact, I'm not sure if you can even get dial-
up Internet any longer. I am not aware of any Ethernet cards for the
C-64.
On Jan 21, 7:52 am, Peter Flass <Peter_Fl...@Yahoo.com> wrote:
> On 1/20/2013 1:23 PM, Charlie Gibbs wrote:
>> But this is all irrelevant in the eyes of a company like Microsoft.
>> The one relevant question is: "Does it make money?" And there,
>> alas, the answer is a resounding "yes".
>
> Or hopefully now, with "windoze ate", "NO!"
I think it's far too much to hope for that Windows 8 will fail
resoundingly enough to motivate IBM to dust off OS/2. The idea would be
that a certain mentality established in the marketplace will prevent the
PC world from simply switching to Linux, and so OS/2, with the IBM name
on it, would actually make money and be a vital element in weaning us
off of Windows.
Well, Canbear is overstating it a bit. Separate but similar. While
Usenet predates networked BBSes by 3 or 4 years, newsgroups and echoes
were very similar from a user's POV. BBSes were the way things like that
were popularized, since most of the public didn't have access to
Bitnet, Arpanet, etc. In fact, I recall that at one point the only
way to send (internet) e-mail to Africa was via a BBS in South Africa.
>> At some point, the underbelly of a system will be so complicated
>> and so dependent on other complicated messes, that someone
>> will come up with "new" bright idea of a PDP-8 or PDP-11 of
>> the original days
>
> That happened LONG ago with single chip micros. Some of
> them are quite a bit simpler than the PDP-11 and the PDP-8
> is just too limited for the approach it took to be viable now
> even with the most resource limited micros.
These days, one would put a chip with pipeline and cache and advanced
multiply/divide algorithms - Pentium or 360/195 class - even in a
pocket calculator, if it was to be fancy or low-level programmable.
Maybe we have to wait for nanotechnology for there to be a reason to
go to processors with such a limited power. Or system-on-a-chip
devices made on silicon carbide for taxing environmental conditions
(not there yet - the defect density is such that not even 8-bit micros
are possible in that material at present, I think - but I could be
wrong).