Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to spamtrap@library.lspace.org
>>> It turns out he wasn't noticing the space between the 'o' and the 'I' in
>>> 'Do It'; in the sans-serif system font we were using, a capital 'I' looked
>>> very much like a lower case 'l', so he was reading 'Do It' as 'Dolt' and
>>> was therefore kind of offended.
>>
>> Seems to me that's not just the font's fault; you don't expect random
>> words to be capitalized. Wonder why they insisted on "Do It" rather
>> than "Do it" or "do it"?
>
> It was not random. It was a title, and titles tend to have initial
> caps on words.
I'm not sure I understand. Are you saying the texts on GUI buttons
are to be seen as titles, like the titles of movies or songs? I don't
seem to see that much in modern GUIs.
Uh, wait, I /do/ see it. Both browsers I use (Opera, Firefox) Do It
That Way, in menus and buttons. Now that I see it, it looks weird and
pompous, but I didn't notice before.
Perhaps it's because I'm Swedish and a Unix user. Both are
lower-case cultures. Too Much Capitalization and a text looks either
like a song title by The Smiths, or like it was written in 1724.
/Jorgen
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
while the number of different applications may not have been heavily
threaded ... large critical applications that were major server use were
.... like all the major RDBMS.
Earlier in this thread, I mentioned Charlie having invented
compare-and-swap while doing fine-grain multi-processor locking work on
cp67 at the science center. By the mid-80s most of major server
platforms had support for compare-and-swap ... or instructions with
similar semantics ... that were used for large multi-threaded
applications (regardless of running on single processor or
multi-processor machine).
rs/6000 (rios chipset) was risc single processor only and didn't have
support for compare-and-swap semantics. in the unix world, large
multi-threaded DBMS when running on hardware platforms w/o
compare-and-swap semantics ... would fall back to kernel calls for
appropriate serializations. the rs/6000 DBMS benchmarks suffered greatly
(in comparison with platforms with support for compare-and-swap
semantics). Finally AIXV3 was modified to provide a supervisor-call
simulation for compare-and-swap (only works on single processor machine)
which supported compare-and-swap semantics in the supervisor-call
interrupt routine ... with very short pathlength and return to
application. past posts mentioning risc, 801, romp, rios, pc/rt,
rs/6000, somerset, power, power/pc, etc http://www.garlic.com/~lynn/subtopic.html#801
for other drift ... since rios chipset was single-processor only ...
the only other available path for scale-up was cluster/loosely-coupled
.... which we started doing in our ha/cmp product ... some past
posts http://www.garlic.com/~lynn/subtopic.html#hacmp
there was lots of activity working with national labs and other
institutions on scientific and numerical intensive workloads ... but the
primary straight-forward commercial use was the large RDBMS that had both
vax/vms cluster support and portable versions to unix platforms. the
deal in ha/cmp was to provide vax/vms cluster global lock manager
semantics to aid in port of unix platform. Some number of the RDBMS
vendors had list of things that had been done wrong in the vax/vms
cluster global lock manager ... and since I was starting from scratch, I
could implement the same API semantics ... while avoiding doing the
"wrong" things ... including fixing some performance bottlenecks
blocking some of the higher scaleup levels. old post about early jan92
meeting in ellison's conference room on cluster scaleup http://www.garlic.com/~lynn/95.html#13
as periodically mentioned ... possibly within hrs of the last email
referenced ... end of jan92 ... the scaleup stuff was transferred and we
were told we couldn't work on anything with more than four processors.
On Tue, 22 Jan 2013 12:13:08 +0000, Ibmekon wrote:
[snip]
> MS were always going to support their flagship ACCESS database system
> first.
> When they announced a project to have their VISUAL BASIC, C , FOxPRO
> produce intermediary code, it was clear VFP was a goner.
> I still use VFP 5.0 at home though.
> BTW, what was the design bug you observed ?
Back in the .ndx days:
use table index index1,index2
Both indexes will be updated on changes.
set order to 2
Both indexes will be updated on changes.
set order to 0
The indexes will not be updated on changes even though the
index files are still open.
I had an app where I needed an indexed order and the physical
record order. I ended up creating an index on recno()!
>>> On the other hand, the story upthread happened in 1993. Already then
>>> -- or a few years later -- it was understood that if your program
>>> couldn't cope with running on an SMP system, it was plain broken.
>>
>> High end boxes had gone multi-processor quite some time before
>> that. In 1990 we were using quad core 88K based boxes (the kernel was
>> single threaded). That being said, if your code couldn't cope with SMP
>> it probably couldn't cope with a uniprocessor system that scheduled
>> differently to the box you tested on - in other words it was broken.
>
> Not in the context I was thinking of back then -- Unix, and
> specifically Solaris. As I remember the 1990s, Sun drove my part of
> the world and the future was threads, threads, and more threads[1].
>
> Plain Unix C applications which didn't do any funky stuff with shared
> memory would have no problems, until you rewrote them to be heavily
> threaded (without having a firm idea of how to do that safely[2]).
During this same period I worked on a project to develop middleware
that used shared memory for interprocess communications. It ran on
4 or 5 flavors of Unix including Solaris, Linux, and z/OS.
On z/OS, running in z/OS Unix wasn't an option, so we used multitasking.
Fortunately, most of the code was portable so we didn't have to debug
without memory protection.
Different places, different approaches.
>>> The easiest way to avoid that was not to use threads. The easiest way
>>
>> Actually no - the first time I saw concurrency biting bad code
>> there were no threads, just multiple processes and a shared memory segment.
>
> OK, but I'd argue such applications were and are not the norm.
> If you're going to drop your process's memory protection anyway, why
> not use threads? (Assuming processes and threads were available in
> your environment.)
The real question is do you really want to drop memory protection?
It's not a good thing.
> In the early days of computer networking the inside joke was
> *sneaker net* referring to a student employed at the lab for a
> workterm sent to deliver a tape or disk.
Which makes me realize that our home network has mostly quit using
sneakernet -- not too long ago, I sent an e-mail, then rotated my
chair ninety degrees to receive it.
--
Joy Beeson
joy beeson at comcast dot net http://roughsewing.home.comcast.net/
The above message is a Usenet post.
I don't recall having given anyone permission to use it on a Web site.
>> Actually no - the first time I saw concurrency biting bad code
>> there were no threads, just multiple processes and a shared memory
>> segment.
>
> OK, but I'd argue such applications were and are not the norm.
I wrote quite a lot of code that used shared memory before threads
became popular. Given my druthers I'd still do things that way.
> If you're going to drop your process's memory protection anyway, why
> not use threads? (Assuming processes and threads were available in
> your environment.)
In a word, control. With shared memory it's easy to know exactly
where the danger points are, with threads it's not so easy.
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
> while the number of different applications may not have been heavily
> threaded ... large critical applications that were major server use were
> ... like all the major RDBMS.
OS/360 had "threads," in the form of "tasks," almost from the beginning
(196x). IME they weren't heavily used to run multiple threads at a
time, but usually to spin off a program and wait for it to complete, to
simplify error recovery if it bombed. SMP/E is the prime example of this
technique.
Peter Flass wrote:
On 1/22/2013 7:01 PM, Gene Wirchenko wrote:
> On Tue, 22 Jan 2013 12:13:08 +0000, Ibmekon wrote:
>
> [snip]
>
>> MS were always going to support their flagship ACCESS database system
>> first.
>> When they announced a project to have their VISUAL BASIC, C , FOxPRO
>> produce intermediary code, it was clear VFP was a goner.
>> I still use VFP 5.0 at home though.
>> BTW, what was the design bug you observed ?
>
> Back in the .ndx days:
>
> use table index index1,index2
> Both indexes will be updated on changes.
> set order to 2
> Both indexes will be updated on changes.
> set order to 0
> The indexes will not be updated on changes even though the
> index files are still open.
>
> I had an app where I needed an indexed order and the physical
> record order. I ended up creating an index on recno()!
>
A. Good idea.
B. How did you know that this corresponded to the physical record order?
What happened if you added?
C. One of the nice things about VSAM is that you don't need another
index to do this.
On Wed, 23 Jan 2013 07:44:59 -0500, Peter Flass <Peter_Flass@Yahoo.com> wrote:
> On 1/22/2013 7:01 PM, Gene Wirchenko wrote:
>> On Tue, 22 Jan 2013 12:13:08 +0000, Ibmekon wrote:
>>
>> [snip]
>>
>>> MS were always going to support their flagship ACCESS database system
>>> first.
>>> When they announced a project to have their VISUAL BASIC, C , FOxPRO
>>> produce intermediary code, it was clear VFP was a goner.
>>> I still use VFP 5.0 at home though.
>>> BTW, what was the design bug you observed ?
>>
>> Back in the .ndx days:
>>
>> use table index index1,index2
>> Both indexes will be updated on changes.
>> set order to 2
>> Both indexes will be updated on changes.
>> set order to 0
>> The indexes will not be updated on changes even though the
>> index files are still open.
>>
>> I had an app where I needed an indexed order and the physical
>> record order. I ended up creating an index on recno()!
>>
>
> A. Good idea.
> B. How did you know that this corresponded to the physical record order?
> What happened if you added?
I presume this was a DBF file - recno() was the physical record number.
All new records were added after the last record. It has been a long
time since I've had to pull any of that info out of my head. I started
work as a programmer using Clipper - a dBase compiler which got extended
into a capable language - and which was a competitor of FoxBase/FoxPro.
--
Andy Leighton => andyl@azaal.plus.com
"The Lord is my shepherd, but we still lost the sheep dog trials"
- Robert Rankin, _They Came And Ate Us_
> 999 still works. Some mobile networks will also accept 911.
>
I thought it was a phone requirement - mine can happily ring 9999999999999
just by being in my pocket - I think it's a UK requirement that any phone
can dial 999 (& possibly 911) without unlocking the keypad (hey, we're back On
Topic, most phones these days are TouchnScratch).
>> Sometimes I'll flowchart a small piece of code if it's
>> particularly tricky,
>
> I've found that it's precisely the tricky code for which flowcharts
> are most useless. You have to carve the bird at the joints.
Agreed. In fact for the trickiest piece of code I have ever written
I only found one tool sufficiently expressive and precise to describe the
solution. That was of course the code - I spent two days trying to write a
detailed design document/diagram/something before giving up and writing the
code while I still had all the detail and big picture in my head. After I
had written the code I was able to extract a reasonable description to use
as documentation for the next poor sod to see it. I'd be prepared to bet
that that code didn't get changed at all from the time I left it to the
time the system was decommissioned.
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
>>> Sometimes I'll flowchart a small piece of code if it's
>>> particularly tricky,
>>
>> I've found that it's precisely the tricky code for which flowcharts
>> are most useless. You have to carve the bird at the joints.
>
> Agreed. In fact for the trickiest piece of code I have ever written
> I only found one tool sufficiently expressive and precise to describe the
> solution. That was of course the code - I spent two days trying to write a
> detailed design document/diagram/something before giving up and writing the
> code while I still had all the detail and big picture in my head. After I
> had written the code I was able to extract a reasonable description to use
> as documentation for the next poor sod to see it. I'd be prepared to bet
> that that code didn't get changed at all from the time I left it to the
We use design documents that log the design decisions and detail
implementation choices.
I was in the middle of a consumer product design in Asia a few years ago
and they had an interesting approach to software design. They started
out by developing a large overview of the application that was broken
down into modules. As a team they then created an application resource
budget for each module. This included ROM and RAM requirements and
CPU cycles or response time if that was an issue. Individuals were
assigned to be responsible for each module and provided current status
during development.
This did a lot for system reliability because each module was well defined
independent of the system organization and could be independently
swapped out. Unit testing at the module level was a big part of the testing
process.
>>>> > Now my new fear is... that *everything* I know will become
>>>> > obsolete and useless in a pragmatic sense.
>>>>
>>>> That's everybody's fear. The half life of geekish knowledge is no more
>>>> than 4 years. I can still write PDP-8 and 11 Assembler and nobody
>>>> cares. Oh, and Teco...
>>>>
>>>
>>> That's it in a nutshell, Mr. Roper!!! You (and I) can do a lot of neat
>>> things like PDP-8 and PDP-11 Assembly language... and *no* one gives a
>>> flying rat's ass about it anymore!!! It saddens me and it's emotionally
>>> taxing. All those things we know how to do... those things are as *cool*
>>> as
>>> they ever were!!! People just can *not* appreciate them anymore..... :-(
>>
>> But in this computing biz, what used to be will be done again. At some
>> point, the underbelly of a system will be so complicated and so dependent
>> on other complicated messes, that someone will come up with "new" bright
>> idea of a PDP-8 or PDP-11 of the original days to do a task which is very
>> important but doesn't need all the fancy shmancy character machine
>> language
>> support.
>>
>> We may not see it; it took 2 more decades for people to "rediscover"
>> multi-CPUs in an SMP configuration (they're still not quite there yet)
>> than I thought would happen. The software underbelly is in such a mess
>> that it may take a while for that to become better before the focus
>> reverts back to hardware improvements.
>>
>
> BAH, knowing that *someday* things may be better... after I have gone to my
> eternal reward... may be a little comforting. But while I'm here, I can
> *not* "feel the love"!!! :-)
This newsgroup will document how and why we did the things that new
kids will rediscover. Perhaps they won't have to live with mistakes
we made and wish we could do over. For instance, the guy who disappeared
when I asked a serious question, could have documented a lot about
what instruction classes he would have liked but didn't do. There
will be CPUs or cores which will have R^nISCs to do work which doesn't
need all that fancy schmancy character data handling.
A class of instructions which were always very useful in DEC's biz
were the byte instructions. I've never seen you guys talk about
other manufacturers' instruction sets which had the equivalent
to ours. Ours could handle anything and we also had the test
and set masked bit instructions.
Once again, hardware is not my expertise so I can't talk much
about it.
>> Actually, during the Y2K boom, we had "meeting training".
>> We got a whole bunch of rules, including one person holding a
>> stop watch.
>
> During the late 80's, our meeting training was compliments of
> John Cleese's _Meetings, Bloody Meetings_.
>
>
>
> I spent most of the 90's as an organizational representative on
> the X/Open base standards committee, and contributed to the
> Unix International standards as well. We were very careful to avoid
> invention in X/Open - to be included in the standard an existence proof
> must already exist, preferably from multiple vendors. It was
> when the behavior of a given feature varied amongst vendors that things
> got tricky.
>
> UI on the other hand, was all about invention (e.g. the DWARF standard came
> from UI, along with the Large File (> 2GB) support extensions).
>
> The only standards that would have been interesting to DEC in the BAH
> years would have been the ANSI language standards and character set
> standards, I suspect.
There was also ASCII and FORTRAN and COBOL and all the comm shite
and our internal standards, e.g., full file specifications, documentation,
and hardware and FS had their own, too. Oh, and EBCDIC and the entities
which we invented but got adopted by the industry and ...I can't think
of any more which were RPITAs ;-).
Ibmekon wrote:
> On 21 Jan 2013 13:06:19 GMT, jmfbahciv <See.above@aol.com> wrote:
>
> <All gone>
>
>> Any software developer who needed something from the monitor would
>> not design a system call but simply read/write what s/he needed
>> into the running kernel. Design reviews would not have refused
>> this flavor of implementation since it was a corporate culture
>> thing. If there had been questions, the developer would have
>> plenty of history to point at to get his own way. Cutler tried
>> to establish that system call wall but nobody else in that
>> company knew nor wanted to understand the dangers of making that
>> wall holey. They were running PCs which were single-user, single
>> owner and didn't need the security that multi-user systems had
>> to have. I still see this attitude in any PC implementation
>> even though all now have to run multi-user even if there's
>> only one human being touching it.
>>
>> Think about MS' backdoors which have to be there for the update
>> services. The programmers would not wait to go through a system
>> call design to get into the deep dark bowels of a running system.
>>
>> Bottom line to your question: unending security problems and
>> bugs which, when fixed, beget 3 new ones.
>>
>>
>> /BAH
>
> That confirms my belief - good fences make good neighbours.
YBYA.
>
> That if security is not built in from the ground up of a computer
> system - managers will not allow you to "retake the ground" later.
>
> MS Windows leave the front door open - a REGEDIT program allows access
> to internal configuration parameters of Windows.
Several times, I've tried to get people to talk about how Multics was
developed, especially the details of the work involved. This included
the process of developing a new thingie such as a monitor call or a
command to a device driver. Then there is the "ensuring everything
works" processes. Over the years, the TOPS-10 group implemented
self-disciplinary processes so that we immediately used what we made.
Or the procedures of having a weekly monitor meeting which reviewed
all the MCOs written in the MCO book. (monitor change order).
The Multics group had to have had similar experiences but (I'm
assuming) different solutions. This all has to do with minute
to minute and daily work each of us did. None of this ever gets
documented because it's a daily living habit. Each OS developer
had his/her little habits which affected how an OS worked and what
actually got shipped to customers.
Peter Flass wrote:
> On 1/21/2013 8:06 AM, jmfbahciv wrote:
>>
>> I had a much different technique. If I had to think about something,
>> I'd play some kind of game, IIRC Go, so that my fingers stayed busy
>> while I thought. Randomly changing sources makes me shudder and
>> want to head for the backup tape :-).
>>
>
> I just ran into this the other day, and with my own code, too, but from
> several years ago. I kept tweaking things and couldn't figure out why I
> couldn't get it to work the way I wanted. Finally I sat down and went
> thru it thoroughly and it turned out I was misunderstanding what a
> routine was doing, probably because the name seemed to say one thing and
> the code actually did something different (originally did the first and
> later changed, but kept the old name for some stupid reason -- fixed
> now, plus added comments.)
>
>
You had to keep the old name just in case something else used it ;-).
One of the reasons I was a "bad" programmer was because I thought through
everything, wrote the specs, then wrote the code. By the time I was
writing code, the code was essentially writing itself. In a production
line environment like ours, this process took too long.
>> Sometimes I'll flowchart a small piece of code if it's
>> particularly tricky,
>
> I've found that it's precisely the tricky code for which flowcharts
> are most useless. You have to carve the bird at the joints.
>
>> or if I want to "optimize" it,
>
> I don't see how flowcharts help to optimize code.
On the infrequent occasions when I resort to flowcharting,
it's to rough out an algorithm. Optimization comes later.
--
/~\ cgibbs@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!
>> I have a neighbor who has complained bitterly about her father. When
>> she, as a girl, would ask him a question, he would *explain* it. This
>> vexed her mightily as she "just wanted a simple answer". Well, she's
>> a nice person, kind, generous, bright and highly literate but she's
>> not a hacker.
>
> That's my wife, too. Whenever she asks me for computer help it
> usually ends up in an argument because she just wants a simple
> answer and I usually try to give her a full explanation.
Most people don't want to know how something works, or even how
to make it work. They just want to know which button to press.
That doesn't stop hopeless optimists like myself (hah!) from
dreaming that someday they'll learn enough to figure things out
for themselves. A silly dream, I know.
> Either that or she complains I don't show her how to do something,
> only sit down at the keyboard and type stuff, when I try to explain
> that I'm trying to figure it out myself.
I don't think many people realize just how many answers we work out
on the fly, not really knowing them at the time they ask a question.
I'm often reluctant to explain this; given their mindset it might
destroy their faith in the infallibility they need us to have.
--
/~\ cgibbs@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!
>>> Actually no - the first time I saw concurrency biting bad code
>>> there were no threads, just multiple processes and a shared memory
>>> segment.
>>
>> OK, but I'd argue such applications were and are not the norm.
>
> I wrote quite a lot of code that used shared memory before
> threads became popular. Given my druthers I'd still do things that
> way.
Uh-huh. I finally bit the bullet on threads when I discovered
that some Windoze APIs simply could _not_ be kept from blocking
for minutes at a time.
--
/~\ cgibbs@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!
>> Sometimes I'll flowchart a small piece of code if it's
>> particularly tricky,
>
> I've found that it's precisely the tricky code for which flowcharts
> are most useless. You have to carve the bird at the joints.
>
>> or if I want to "optimize" it,
>
> I don't see how flowcharts help to optimize code.
>
Not optimize in a hardware sense (that's why I quoted it), optimize in
terms of the minimum amount of logic to get the job done. Sometimes a
flowchart can show you where some code can be moved around to eliminate
extra branches, tests, etc.