Jim Zoetewey
I'm a sysadmin, not an Access developer. More to the point, our developer
recently made a change to our database that slowed the system down
immensely. Ordinarily, I'd just suggest that he change it back, but it's
a little more complicated than that.
Here's the situation: one of the forms was unbearably slow. He changed
it from being based on a query composed of three joins to being based on
a single table, in the hope that the table would be faster to work with.
It isn't. The form is now slower. On the bright side, some things
related to the form now work better.
Not only is the form now slower, but total CPU usage jumped from a
range of 5-20% to 20-60%. I say "total CPU" usage because the machine
has four 2.0 GHz CPUs (Xeons). Whereas before one was running a
little higher than the others (10% when the others were at 1% or so),
now one of them is always running flat out at 100%. Which one that is
changes randomly (from my viewpoint). The other CPUs range from 0% to
60% in their usage.
The developer wondered if MS Access might be interacting strangely with
Terminal Server (which is what we use to make the database accessible
over the internet), but I doubt it. Terminal Server is designed to work
well with multiple processors. Beyond that, the problems appeared the
day he uploaded the changes, and we've been using Terminal Services for
two years now.
My theories go like this:
Theory 1: The Jet engine isn't multi-threaded or somehow doesn't use
multiple processors efficiently.
That's the impression I got from the following link:
http://groups.google.com/group/micr...0+multithreaded&rnum=5&hl=en#4d0eeeb2b3a6402d
If so, we'd be better off switching to SQL Server or some other database.
Theory 2: His changes simply exposed the fact that the database is too
big. It's 285 MB, and one of the tables has more than a million rows.
When I look at the logs, there's a lot of data transfer going on. That,
in combination with Jet, might also explain the problem.
I thought about that possibility after reading this discussion:
http://groups.google.com/group/micr...42ec0?tvc=1&q=multiprocessor#7748f38fc0542ec0
If that's true, I'm thinking we might double our RAM, but I'm also
thinking that it might be worth splitting the old data off into a new
database and accessing it only when absolutely necessary. That would
solve the size problem.
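To make the splitting idea concrete, I imagine it could be done with a pair of Jet SQL statements run once against the back end, something like the sketch below. (Table names, column names, the cutoff date, and the archive path here are all placeholders I made up for illustration; our real schema differs.)

```sql
-- Hypothetical sketch: "Orders", "OrderDate", the cutoff, and the
-- archive path are placeholders, not our actual schema.

-- 1. Copy old rows into a table in a separate archive .mdb
--    (Jet's "IN 'externaldb'" clause targets another database file):
SELECT *
INTO Orders_Archive IN 'C:\data\archive.mdb'
FROM Orders
WHERE OrderDate < #1/1/2004#;

-- 2. Then remove the archived rows from the live table:
DELETE FROM Orders
WHERE OrderDate < #1/1/2004#;
```

After the delete, the live .mdb would need a compact-and-repair to actually reclaim the space, since Jet doesn't shrink the file on its own.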
For what it's worth, we're using Access 2000 (on Windows Server 2003),
and the object model used is DAO.
Though I think both of my theories are pretty good, neither really
explains to my satisfaction why everything slowed down after his
changes. "Theory 3" would be that something in his changes is really
screwed up, but he's got a lot more experience as a developer than I do
as a sysadmin, and I don't feel comfortable suggesting that one...
Any suggestions as to what might be the problem and/or possible
solutions would be appreciated.
Jim Zoetewey