I work for a State University and within our office we use Access all the time. But whenever we deal with our extremely large Access table, our machine gets bogged down. Is there a way to juice up Access so it can handle this amount better?
You don't mention if a network is involved here or not. Leaving out critical details such as this is akin to telling me you took a rope, and it is hard to move. However, you left out the fact that a cow is attached to the rope!
Whenever the table is opened, it takes a few minutes.
Just researching options other than using it as little as possible.
MS Access should open in the same amount of time whether you have 10 records or 10 million records in the database. Again, the above statement is not clear here.
Do note that SQL Server is a heavy-duty database system, and is capable of scaling to VERY VERY large databases. However, DO NOTE THAT SQL SERVER IS 40% TO 200% SLOWER than MS Access. (Yes, you read this right... SQL Server is SLOWER!!!) The comparison here is that of a speedboat going at 40 knots vs an aircraft carrier (which can also do 40 knots... believe it or not!!!). However, the speedboat only has a crew of 1 or 2 people, and the aircraft carrier has a crew of 5,000. The speedboat is JUST as fast when carrying one or 2 people... but it cannot carry large numbers of people.
However, do not confuse that capacity issue with the speed of reading records.
So, be aware that SQL Server or Oracle is NOT faster than JET if you are talking about a single-user system and no network. The other issue of course is that SQL Server can handle more users, and handle larger files... but on the same machine SQL Server is NOT faster than MS Access/JET.
So, if your application is slow now, upgrading to SQL Server WILL NOT fix your performance problems. The solution to performance problems is how you grab the data, and the design of the application (you MUST BLAME THE DEVELOPERS OF THE SYSTEM here, not the hardware or the data engine!).
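To make the "how you grab the data" point concrete, here is a minimal sketch in Python, using SQLite purely as a stand-in for JET (the students table, its columns, and the row counts are all made-up assumptions for illustration). The contrast is between filtering in application code after dragging every row back, and letting the database engine do the filtering:

```python
# Sketch: engine-side filtering vs application-side filtering.
# SQLite stands in for JET here; table/column names are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, dept TEXT, name TEXT)")
con.executemany(
    "INSERT INTO students (dept, name) VALUES (?, ?)",
    [("MATH" if i % 50 == 0 else "OTHER", f"student{i}") for i in range(100_000)],
)
con.execute("CREATE INDEX ix_dept ON students (dept)")  # let the engine seek, not scan

# Bad: fetch everything, then filter in application code (maximum data moved).
everyone = con.execute("SELECT * FROM students").fetchall()
math_slow = [r for r in everyone if r[1] == "MATH"]

# Good: let the database engine do the filtering (only the small result returns).
math_fast = con.execute("SELECT * FROM students WHERE dept = 'MATH'").fetchall()

print(len(everyone), len(math_fast))  # 100,000 rows moved vs 2,000
```

Both approaches produce the same answer; the difference is how many rows travel from the engine to your code, and that difference is what users feel as "bogged down."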
- pray for new supercomputers
A supercomputer model Cray C90, which many universities still run, was rated at about 400-500 MFLOPS.
The Cray Y-MP M94 was rated at about 170 MFLOPS.
A new desktop Pentium CPU is rated at about 3000 MFLOPS (roughly six to seven times faster than that Cray C90).
So, you have MORE THAN a Cray supercomputer on your desktop. The speed of your desktop computer exceeds that of a 10-million-dollar Cray supercomputer that is not very old!!!
MS Access has not been processor-bound since the advent of 1 GHz processors. So, getting a new processor with 5, or even 200 times the processing power WILL DO NOTHING TO SPEED UP YOUR database. I repeat, increasing the processing speed WILL DO NOTHING IN TERMS OF FIXING the performance of the database. The limiting factor here is bandwidth, and the speed at which data is being brought off the disk drive. And, if your designs/queries and use of the data are incorrect, more processing power WILL NOT FIX this problem. Upgrading from a 1 GHz processor to a brand new 3 GHz processor will DO NOTHING!!
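A back-of-envelope calculation shows why the pipe, not the processor, sets the floor. All the numbers below (row size, link speed) are illustrative assumptions, not measurements of any particular setup:

```python
# Back-of-envelope: why bandwidth, not CPU speed, limits a full table scan.
# Every number here is an assumption chosen for illustration.
records = 600_000          # rows in the table (from the post)
bytes_per_record = 200     # assumed average row size
table_mb = records * bytes_per_record / 1_000_000   # 120 MB total

lan_mb_per_s = 10          # roughly a 100 Mbit LAN
scan_seconds = table_mb / lan_mb_per_s

print(f"Full scan moves {table_mb:.0f} MB -> about {scan_seconds:.0f} s on the wire")
# A CPU twice as fast changes none of this: the 120 MB still has to move.
```

Under these assumptions a whole-table scan costs about 12 seconds of pure data movement, no matter how fast the processor is. The only real fixes are moving less data (better queries, indexes) or a faster pipe.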
And, note that an average Pentium processor on a desktop today is FAR MORE powerful than the Cray supercomputers of just a short while ago.
So, you are going to have to look at what is being done, and the fixes in terms of performance are going to be that of changing your approaches, and looking at things like reducing the bandwidth from the software to the data files.
You don't mention what kind of processing you are doing here. Are you always working with 600,000 records, or are your problems generally that of working on a small subset of the data? Are forms attached blindly to tables of 600,000 records? (Bad idea.) There are MANY issues here...
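The "don't bind a form to 600,000 records" advice boils down to this pattern, sketched here in Python with SQLite standing in for JET (the orders table, its columns, and the customer IDs are all hypothetical): when the user opens a form, fetch only the rows that form actually displays.

```python
# Sketch: a form should load one customer's rows, not the whole table.
# SQLite stands in for JET; names and row counts are made up for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
con.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [(f"cust{i % 1000}", float(i)) for i in range(600_000)],
)
con.execute("CREATE INDEX ix_customer ON orders (customer)")

def open_form(customer_id: str) -> list:
    """What a well-designed bound form does: ask for one customer's rows."""
    return con.execute(
        "SELECT * FROM orders WHERE customer = ?", (customer_id,)
    ).fetchall()

rows = open_form("cust42")
print(len(rows))  # 600 rows loaded instead of 600,000
```

In Access terms, this is the difference between binding a form straight to the 600,000-row table and binding it to a query with a WHERE clause driven by what the user picked.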