Elliott said:
You are being hard on Word here. It is not so much an idle loop as an
'event loop'. Nearly everything with a GUI has something like an event
loop in it. It is horrible, but that's the way it is. If Word is
waiting for input, it will use very little CPU. Looking at top may lead
you to believe that it is scheduling itself to run more often than it
ought to, probably polling for stuff from other bits of Office in a
dumber-than-normal way, but it is never sitting in a hard loop as you claim.
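The shape of an event loop is worth making concrete. This is a minimal sketch, not how Word actually implements it: the GUI thread blocks on a queue of events and sleeps, consuming no CPU, until the windowing system delivers something. The names (`event_loop`, `dispatch`) are illustrative, not from any real toolkit.

```python
import queue

# Hypothetical event queue; a real toolkit gets events from the window system.
events = queue.Queue()

def event_loop(dispatch):
    """Block on the queue; sleep (no CPU) until an event arrives."""
    while True:
        event = events.get()   # blocks; the thread is dormant while waiting
        if event == "quit":
            break
        dispatch(event)        # handle keystroke, redraw, timer tick, ...

# Simulate the window system delivering two events, then run the loop.
handled = []
events.put("keypress:a")
events.put("quit")
event_loop(handled.append)
```

The point of the sketch is that a well-behaved event loop spends essentially all of its time blocked inside `get()`; a "hard loop" would instead poll the queue repeatedly and burn CPU.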
In your response to John, you also discussed issues of this
nature imposed by the GUI implementation. I must admit a lot went
over my head, as my days of working as a real-time embedded-systems
software engineer were quite a ways back. Most applications I
worked on tended to work directly off the standard input and
output in a Unix-type system. Keystrokes were handled via
interrupts in the device driver and delivered to the application
in groups. Although applications could be set up to poll for
everything, the correct way was for the application to block while
waiting for a semaphore or a message in a task queue. When this
occurred, that task was totally dormant, consuming no CPU cycles
whatsoever until it was passed the appropriate event by the OS
itself.
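The blocking style described above can be sketched with `select(2)`, the classic Unix primitive for it. This is a toy example, assuming a Unix-like system; a pipe stands in for the device, and the process sleeps inside `select()` with zero CPU usage until data is ready.

```python
import os
import select

# A pipe stands in for a device delivering input (assumes a Unix-like OS).
r, w = os.pipe()
os.write(w, b"keystroke")              # simulate the driver delivering data

# Block until the read end is readable; the process is dormant until then.
ready, _, _ = select.select([r], [], [])
data = os.read(ready[0], 64)           # wake up and consume the event

os.close(r)
os.close(w)
```

With nothing written to the pipe, the `select()` call would simply sleep forever, which is exactly the "totally dormant, no CPU cycles" behavior described above; polling would instead check the descriptor in a loop.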
However, using a GUI to essentially replace the I/O device
drivers obviously will have issues I'm not familiar with. A
device driver, by its hardware-interrupt nature, must run at top
priority in the system. I'm guessing that a formalized GUI (e.g.,
the Finder, OpenWindows, X11, etc.) is forced to have large
software components running at multiple priorities to handle the
data-flow-control issues. Are these the reasons you make the
above comment? Because certainly from a command line, all
applications can be made (if the developer is smart enough to
choose it) to fully block when waiting for any events.
Also, this "horrible" circumstance that the GUI introduces and
you refer to--is it a common issue in all GUIs that you know of?
For example, would this issue exist in typical Unix GUIs as well
(e.g., the OpenLook window manager or other Solaris interfaces,
X11, and even something like Aqua over Darwin)?
PS I like your and John's discussion about users getting into a working
pattern. It helps to explain how proponents from each camp stay
polarized.
I was going to elaborate a bit on that, but I figured Sunday was
over in Australia anyway.
I have an example that I frequently use in presenting this
concept. As a kid, we had a backyard badminton set that I enjoyed
playing with others. Since I was "self-taught" for many years, I
learned to hold the badminton racket like a tennis racket (i.e.,
if you hold your arm out straight in front of you with the racket
in your hand, you are looking through the face of the racket).
When I got into high school, we were taught in gym class that
this is not correct and that I needed to rotate the racket 90
degrees in my hand, so that when holding it out in front of you,
you see only the edge of the racket.
I really struggled unlearning the old technique, and my game was
very poor for quite a while using the new grip. Eventually, as I
adjusted, I began realizing that I could do so many things that
had never been possible for me before. For example, a hard smash
coming directly at your face could be instantly returned with a
slight flick of the wrist (backhand).
I've tried to learn from this experience over the years: when
people with more experience than I have in some subject suggest
that how I am doing something can be improved a lot by changing
something, it is worth my time and at least some effort to
explore it and see if I need to unlearn something that is holding
me back. This is not a natural thing to do. The pain of change
keeps many people (myself included) from doing things better.
Another thing is that superficial components of a problem
sometimes seem to get so much more weight in decision making. The
Aqua-based Finder in the new OS X may be more cosmetically
appealing to newcomers than, say, a Windows XP desktop. Is that
alone really enough to base a decision on? I'm more interested in
how much of the OS has been built in an orthogonally structured
fashion. That way it will have fewer bugs, and the bugs that are
there will be detected and removed more easily. This kind of OS
is strengthened over time. A poorly structured OS will only have
a future of increased entropy. Structure begets structure;
entropy follows entropy.
I would rather have a more limited GUI knowing that what's
underneath it is more solid. I want my life to be easier; it
would be awfully disappointing to realize only later how easy or
intuitive something could have been just by changing how you
think of it.
Case in point: try getting a light user/newbie to understand
what a hanging indent is and that you don't use multiple spaces.
If the application makes this easy to learn and apply (like
MacWrite or ClarisWorks used to be), it's easy for the person to
cross over. Using paragraph styles and formatting to set up
hanging indents in Word is still scary to ME!
This is the basis for so many of my rants about MS products.
Items that are not intuitive and have severe penalties and side
effects when not totally, fully, and extensively implemented will
NOT be learned by many, many people, because they will give up
and even reject the product itself (if they are given the
choice--many are stuck with Word because they have no choice, as
John has pointed out before). If a product is structured
cohesively, it will tend to be intuitive, such that if a newbie
makes a guess as to where he would find an answer or how to do
something, and after trying it finds he was correct, it fosters
confidence, and he will more easily try even newer concepts. If
he spends 4 frustrating hours and then STILL fails due to some
inane, obscure gotcha in the product, he will give up and
probably never try searching for or exploring any new aspect of
that product on his own again!
(And most folks are not tenacious enough to stay at it for 4
hours. I am, so if I've still failed--especially on multiple
aspects of that product--I tend to become that product's
marketing department's worst nightmare, because after all the
time I spent trying to make it work, I can rattle off a long list
of undeniable idiocies in that product that will scare the
daylights out of anyone considering it.)
The other day I was showing someone a few neat Mac tricks
(keyboard navigation mostly) and he was mildly impressed. Then he
countered with a virtuoso display on his Windows laptop, and so
was I.
I will not deny that the interface flexibility--especially
customization--on the PC can be quite extensive. However, were
all of those "tricks" simply logical extensions of things you
already knew, or were they special exception capabilities that
were put in just to provide some kind of useful trick?
For example, the object paradigm should be maintained everywhere
possible, since that is how we think. If I select an object and
then perform a sizing function on it, then when I select a group
of objects (i.e., an object which is a set of objects) and want
to size it, the operation should be performed in exactly the same
way, using the same menu items, etc. What you see in a legacy
product is a menu item for sizing single objects and a separate
item for sizing groups of objects. What you are seeing in the GUI
there is a reflection of the lack of cohesion in the innards of
the software. There are two software functions, so you get two
menu items, even though intuitively they are functionally
identical. The only reason there are two items is that two sets
of software, each different, were developed at two different
times or by two different development groups.
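The "a group of objects is itself an object" idea is what the Composite design pattern captures. Here is a minimal sketch (the class names `Shape` and `Group` are my own, not from any product): because a group exposes the same `resize()` interface as a single object, one operation--and hence one menu item--serves both.

```python
class Shape:
    """A single sizable object."""
    def __init__(self, width):
        self.width = width

    def resize(self, factor):
        self.width *= factor

class Group:
    """A set of objects that is itself an object: same resize() interface."""
    def __init__(self, members):
        self.members = members

    def resize(self, factor):
        for m in self.members:
            m.resize(factor)       # delegate uniformly to each member

# One code path handles both cases, so the GUI needs only one menu item.
a, b = Shape(10), Shape(20)
g = Group([a, b])
g.resize(2)                        # scales both members together
```

When the software is structured this way, the uniformity the user sees in the menus falls out of the design for free, rather than having to be bolted on.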
Case in point: I have a spreadsheet. I select a full column, and
by positioning my cursor over the head of the column I can make
it wider. What if I wanted to take three adjacent columns and
scale them all together as a single unit? Well, I typically would
just select all three columns, position the cursor, and drag it
to make them all scale up together--exactly the same operation.
This is usually how you do it in most spreadsheets that I've
used, EXCEPT of course for Excel. For some reason (likely one of
the two I just mentioned), scaling a single column and scaling a
set are done totally differently. In fact, if you select a set of
(say) three columns and use the click-and-drag method to size
them up, even though all three are selected, the silly
application sizes only ONE of them. It conveniently ignores the
fact that the other two columns have also been selected--probably
because scaling them as a set is too complicated. But instead of
simply putting up a denial indicating that it can't do that (bad
marketing policy), it just pretends that you told it to do
something else.
In other words, it didn't do what you told it to do and what
intuitively should have happened. This is a reason that when I'm
in a hurry and need to put together a fast spreadsheet, I do not
use Excel--too much risk, very little confidence. Why should I
change? Well, in this case, corporate standards seem to revolve
around Excel for spreadsheets, and therefore I really need to
understand it better to function well at my job.
So after I calm down from being so steamed about a stupid
behavior, I will start over and again try to figure out how to
size groups of columns differently from how I've done it for the
last 20 years using at least three different major spreadsheet
programs.
Get the point?