Hi Jeff:
Is it Sunday already? Must be time to lose the rest of the weekend in a
discussion with Jeff.
> Actually, I don't think it is so much the Mac OS X GUI as it is the true
> multiuser, multitasking, full virtual memory, Unix-like underpinnings of
> OS X that give it that sluggishness.
OK, there are two issues there. I was talking about the Windows XP GUI. I
find it quicker to get stuff done: it may be ugly, but it's fast. Even if
you know it well, OS X seems to involve more keystrokes and contortions to
do things.
That would be at least in part because I come from a Windows background, so
my computer and folder structures and such are all set up that way. If I
were to take advantage of all the power tools buried in OS X and change my
working style, I suspect I would find fewer disadvantages in the UI.
> I've used X Windows, Solaris, and OpenWindows on Sun workstations and
> they all seem to have that same type of sluggishness. In fact OS X seems
> to have the edge in terms of "feel", but that may simply be because I've
> only used it on workstations that are faster than the ones using the
> other interfaces.
Yes, they do. But I am not sure that's the fault of Unix. I suspect that
part of that is due to application software vendors being unwilling to
re-architect their applications for a multi-threading, multi-tasking
environment.
Certainly it wouldn't be virtual memory that's doing it (unless you happen
to be "using" the virtual memory...)
We've had virtual memory since
the days of DOS and Mac OS 7 or whatever.
Virtual memory simply assigns a portion of the disk to impersonate "real"
memory. If the user runs more applications than he has memory for, the
system "pages" the content of one or more chunks of an existing application
out to virtual memory to make room. If that happens much, the system gets
seriously slow. Anyone with a gig of real memory in OS X (or Windows) will
never hit virtual memory, PROVIDED they quit each application when they
finish with it.
If they adopt the old Mac user's operating method of leaving everything
running between uses, then they WILL be hitting virtual memory and their
system will be slow.
I can't seem to get this through to Windows XP
users either: if you don't quit stuff you are not using, eventually your
system starts flogging the disk looking for memory and everything gets
veeerrry s l o o o o o w w w w . . . .
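If you want to see why, here is a back-of-the-envelope Python sketch. The
numbers (RAM cost, disk cost, the 150 MB per application) are invented for
illustration, not measured from any real machine; the shape of the result
is the point, not the figures.

# Toy model of why paging hurts: disk is thousands of times slower than RAM.
# All figures below are illustrative, not measurements of a real machine.

RAM_ACCESS_NS = 100            # rough cost of touching resident memory
DISK_PAGE_IN_NS = 10_000_000   # rough cost of pulling a page back from disk

def average_access_ns(working_set_mb, physical_ram_mb):
    """Average cost per memory access for a given working set size."""
    if working_set_mb <= physical_ram_mb:
        return RAM_ACCESS_NS
    # Fraction of accesses that miss RAM and must be paged in from disk.
    miss_rate = 1 - physical_ram_mb / working_set_mb
    return (1 - miss_rate) * RAM_ACCESS_NS + miss_rate * DISK_PAGE_IN_NS

for apps_running in (2, 5, 10, 20):
    working_set = apps_running * 150   # say each open app keeps ~150 MB live
    cost = average_access_ns(working_set, physical_ram_mb=1024)
    print(f"{apps_running:2d} apps open: ~{cost:,.0f} ns per memory access")

While everything fits in the gig of real memory, every access costs the
same. The moment the working set spills over, the average access cost
jumps by four orders of magnitude, and that is the disk-flogging you can
hear.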
"Multiuser" doesn't really enter into the performance discussion. Really,
it's an accounting mechanism. It determines who is allowed to do what, and
where to send the bill for the services consumed. But it's the same
computer with the same memory and same disk doing the work; who gets the
bill has no effect on the throughput of the machine. On a desktop computer,
it's not really a consideration, because there is normally only one user
active (plus the system). On a home computer, you may get the situation
where Dad's email slows to a crawl because teenage daughter has downloaded
an entire music album and switched users so Dad doesn't know the download is
running. But such performance problems can usually be solved by sending Dad
to watch the ball game...
Now: Multitasking is a different issue. To begin with, no affordable
computer can do it, and never has been able to. Multitasking literally
means that the computer can process work from more than one task
simultaneously. How? It only HAS one CPU!! It can only ever do one thing
at a time.
What the computer industry laughingly calls "multitasking" in fact
describes a computer's ability to task-switch very rapidly. Computers
just hitting the streets now have two CPUs, and could IN THEORY do two
things at once. Intel's "Hyperthreading" technology improves a computer's
ability to switch tasks fast by having two parallel sets of instruction
decoding pipelines, so the computer can decode the instructions for the
next steps of one thread while executing the steps of the current thread.
But you still only get one task per CPU.
Multitasking thus notionally means the computer appears to run SLOWER. In
theory, it is a "not good thing". On the face of it, if you are running one
application, the computer is giving that application 100 per cent of its
capability. If you are running ten applications, each application gets only
one tenth of the computer's power, and the user notices that the application
he is working in actually goes ten times slower in a multitasking operating
system.
So computer companies are lying through their back teeth. They can't multi-task,
and never have been able to. But they can "pretend" to. In practice, this
is where a lot of the really black arts of computer science are practiced.
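For the curious, here is a minimal Python sketch of what the "pretending"
looks like: one fake CPU that never does two things at once, it just
switches between tasks every slice. The task names and numbers are made
up for illustration.

from collections import deque

def run_round_robin(tasks, slice_instructions):
    """One fake CPU: execute one task at a time, switching every slice."""
    ready = deque(tasks)          # (name, instructions_remaining) pairs
    timeline = []
    while ready:
        name, remaining = ready.popleft()
        work = min(slice_instructions, remaining)   # one slice of real work
        timeline.append(name)
        remaining -= work
        if remaining > 0:
            ready.append((name, remaining))         # back of the queue
    return timeline

# Three "applications", each needing some amount of work.
tasks = [("Word", 9), ("Mail", 6), ("iTunes", 3)]
print(run_round_robin(tasks, slice_instructions=3))
# ['Word', 'Mail', 'iTunes', 'Word', 'Mail', 'Word']

The CPU only ever runs one task per slice; the interleaving is the whole
illusion.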
I am old enough to have begun computing with mainframes, and they have
always had some form of multitasking. A considerable part of the system
administrator's working life was spent adjusting the priorities and resource
allocations of programs to run the machine as efficiently as possible.
Still is, on a big mainframe.
And this "system tuning" is still the area that really determines the
quality of the user experience on a multitasking desktop computer. I know
that Microsoft put an enormous amount of energy into improving the
user-responsiveness of Windows -- I was on the beta test for Windows NT 4,
2000, and XP. There are two key things they can play with: the "minimum
time slice" and the "thread priority". Your typical G5 Mac is peddling
along at two thousand million instructions per second. Unix has a bad habit
of defaulting to a 20 milli-second time-slice. Keeping the maths simple,
every application gets handed 40,000,000 instructions worth of processing
each time its request is answered. The OS won't look again, or give time to
anyone else, until 200 milliseconds go by. Now, that kind of allocation is
fine if you want to typeset the Gettysberg Address. It was fin back in the
days when CPUs ran at eight million instructions per second. It might even
work today on a mainframe processing a large batch job. But to decide that
the user has pressed the letter "b" and send the content of the keyboard
buffer to Word? Gimme a break!
So the first thing you can do to make an OS seem responsive is to cut those
time slices way back. Windows XP uses really short time slices, and thus
hands out a lot more of them. No application ever has to wait "long" to get
the computer's attention.
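To put rough numbers on it (mine, not Microsoft's): under simple
round-robin scheduling, the longest an interactive task can wait for its
next turn is roughly the number of other runnable tasks times the slice
length. A quick Python sketch, with invented figures:

# Worst-case wait before an interactive task gets its next turn,
# assuming plain round-robin among runnable tasks. Numbers are illustrative.

def worst_case_wait_ms(runnable_tasks, slice_ms):
    # Everyone else gets a full slice before you get yours.
    return (runnable_tasks - 1) * slice_ms

for slice_ms in (20, 10, 2):
    wait = worst_case_wait_ms(runnable_tasks=10, slice_ms=slice_ms)
    print(f"{slice_ms:2d} ms slices, 10 tasks runnable: "
          f"a keystroke can sit for up to {wait} ms")

A 180 ms pause before a keystroke echoes is visible to the user; 18 ms is
not. That is the whole argument for short slices, even though the extra
switching costs a little total throughput.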
Then we can start playing around with Thread Priority. The way thread
priority works is that the OS hands out a hundred slices to the "High"
priority tasks, then 10 to the "medium" priority tasks, and then 1 to the
"low" priority tasks. The task waiting longest gets the slice handed out
next in each category (there are actually 255 priorities, but you get the
idea). Both Unix and Windows enable an application to adjust its own
priority. If the application has nothing to do, when it next gets a time
slice it responds with "Nothing to do, and set my priority to 'low'". If
Acrobat Reader were sitting in the background, it would set its priority
to 'low'. Then you open a PDF. Reader then tells the OS on the next time
slice "Really busy, jump me to High and call the ATSUI rendering engine
for me." Adjusting thread priorities is an art, not a science.
If an
application designer gets his thread priority too low, his application is
unresponsive. If he gets it too high, his application slows the whole
system down and he loses sales. In between, there's a sweet spot that seems
responsive to the user, without draining too much system resource or
interfering with other applications. Unfortunately, the sweet spot depends
on what else is installed and running at the time. And the application
designer can't know that...
A way around that is to design the application for multithreading. Nearly
all of the applications that existed before OS X and Windows NT were
"single-threaded". Their code was one large single piece. There was no
point in splitting them up. The application would start, do what it had to
do, then exit. Only one application could be running at a time, and it
would exit when it was finished. So there was no performance benefit in
splitting it into multiple pieces: quite the reverse.
Now, there is a benefit in doing that. You put the tasks the user will want
to do frequently in small modules that can exit quickly. You run these
modules at high priority. The system seems responsive to the user. You put
the things the user does not need to be involved with into different pieces,
and run them at a lower priority. Since the user never needs to be
involved, he doesn't know how long they take and doesn't care. Best of both
worlds: the system seems far more responsive and the computer runs very
efficiently.
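A stripped-down Python sketch of that split. Python threads don't expose
priorities portably, so this only shows the structural idea: the
user-facing thread answers at once and hands the slow work to a
background worker. The job names are invented.

import threading
import queue
import time

work_queue = queue.Queue()

def background_worker():
    """Low-urgency work the user never watches: pagination, spelling, etc."""
    while True:
        job = work_queue.get()
        if job is None:                # shutdown signal
            break
        time.sleep(0.5)                # pretend this is slow processing
        print(f"  (finished background job: {job})")

worker = threading.Thread(target=background_worker, daemon=True)
worker.start()

# The "main" side: respond to the user immediately, queue the slow part.
for keystroke in ["b", "e", "achball"]:
    print(f"echo '{keystroke}' to the screen at once")  # fast, user-visible
    work_queue.put(f"repaginate after '{keystroke}'")   # slow, user-invisible

work_queue.put(None)
worker.join()

The user sees the echo instantly; the repagination happens whenever the
worker gets around to it.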
But splitting an application up is horrendously complex and difficult to
test, because all of the pieces may depend on work being done by another
piece. So application designers won't attempt it unless they can begin
their design that way.
Most "modern" applications have never gotten beyond "idle loop processing".
This is a sort of "cheat" multi-tasking. You take all of the bits that
interact with the user and put them in the Main thread. You take everything
else (all the bits that actually perform work) and place them into the Idle
Loop. The Main thread runs very frequently to check if the user wants to do
something, then falls quiet and calls its Idle Loop. Word uses this
technique. While you are typing, or moving the mouse, Word is doing nothing
but listen to you. When you stop typing or stop moving the mouse, that's
when Word begins to process the information you have given it. Pagination,
spelling, grammar, printing: all of those things happen only when you stop
to think. This technique benefits the whole system: if everyone builds
their applications this way, the system is much more responsive, and the
"work", the actual "processing" gets done only when the user doesn't want
the system for something else.
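In pseudo-Python, the idle-loop pattern looks roughly like this (event
and chore names invented for illustration):

import collections

events = collections.deque()        # keystrokes, mouse moves, clicks...
idle_chores = collections.deque()   # pagination, spelling, grammar...

def main_loop_iteration():
    """One pass of the idle-loop pattern: the user first, chores later."""
    if events:
        event = events.popleft()
        print(f"respond to the user immediately: {event}")
        idle_chores.append(f"re-layout after {event}")   # defer the real work
    elif idle_chores:
        # Only when the user has gone quiet does the heavy lifting happen.
        chore = idle_chores.popleft()
        print(f"user is thinking, do a chore: {chore}")

# Simulate three keystrokes arriving, then the user pausing to think.
events.extend(["key 'b'", "key 'e'", "mouse move"])
for _ in range(6):
    main_loop_iteration()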
But it also leads to some peculiar effects. The "Beachball" is one. Word
calls a high-priority interrupt when it needs to display what has happened
as a result of something you have done. But it then needs to sit there and
wait until the background task (the idle loop) gets enough time in the CPU
to complete making up the thing to be displayed. You and I get to watch the
beachball until that happens. Cunning old farts like me know that Word is
waiting for the idle loop to get some time. We know it will get some time
right after the Main loop completes each time. And we know that if we click
the mouse button, the main loop will be called. So if you want the
beachball to stop and show you what you typed, click the mouse.
> For that reason, I suspect that it's going to be hard to again match
> the great responsiveness and feel of a single-user pseudo-multitasking
> OS such as the original Mac, where you could sizzle through operation
> after operation and the computer could actually keep up with you.
No. But it is a LOT of laborious detailed work, splitting applications up
into multiple threads, adjusting the priorities of each, tweaking their
priorities, coding, and testing. It doesn't add ANY "new features" that you
could give to Marketing to SELL. But it has a very high chance of adding
LOTS of lovely new BUGS -- timing issues where the justification routine
crashes because it was waiting for the pagination routine which was waiting
for the printing routine when all three were interrupted by the user hitting
the delete key...
This is not something software vendors are actually "enthusiastic" about.
The move to MacIntel may assist them to learn to love it. Not that anyone
in the computer industry has EVER run out of excuses, or the ability to
point the finger somewhere else and blame someone else.
But they will need to be a little more inventive to come up with a
convincing excuse when we point out that "your application runs ten times
faster on Mac OS than it does on Windows OS using the same computer to
process the same file." Might have to fix it then...
Cheers
--
Please reply to the newsgroup to maintain the thread. Please do not email
me unless I ask you to.
John McGhie <[email protected]>
Microsoft MVP, Word and Word for Macintosh. Consultant Technical Writer
Sydney, Australia +61 4 1209 1410