Access Tables -- Size and Issues

forevergrateful

I have a few simple questions:

1. How can I know the actual size in KBs of each table in my database?

2. Is there a way to force a database attached to a program I've written in
VB to regularly compact just certain tables rather than the whole database
since certain tables have data added and deleted continually while most of
the tables remain static?

Thanks for any assistance the group can provide.
 
Rick B

I don't think you can find the size of a table. It just does not work that
way. I guess you could make a copy of your file and look at the size, then
delete the table, compact, and look again. Parts of the table,
relationships, data, etc. are stored in various places in the file. Access
does not really break the actual file on your disk into logical units that
correspond to the objects you see when you open the database.
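A variant of the copy-and-measure approach Rick describes can be automated: export each local table into an empty temporary .mdb and check the resulting file size. A rough DAO/VBA sketch, assuming a C:\Temp working folder (an example path only); note the figures include the fixed overhead of an empty .mdb, so they are approximations:

```vba
' Approximate the size of each local table by exporting it to an
' empty temporary database and measuring the file on disk.
' C:\Temp is an example path - adjust for your environment.
Sub ListApproxTableSizes()
    Dim tdf As DAO.TableDef
    Dim tmpPath As String

    For Each tdf In CurrentDb.TableDefs
        ' Skip system tables and linked tables
        If Left(tdf.Name, 4) <> "MSys" And Len(tdf.Connect) = 0 Then
            tmpPath = "C:\Temp\" & tdf.Name & ".mdb"
            If Dir(tmpPath) <> "" Then Kill tmpPath
            ' Create an empty database and copy one table into it
            DBEngine.CreateDatabase(tmpPath, dbLangGeneral).Close
            DoCmd.TransferDatabase acExport, "Microsoft Access", _
                tmpPath, acTable, tdf.Name, tdf.Name
            Debug.Print tdf.Name & ": ~" & FileLen(tmpPath) \ 1024 & " KB"
            Kill tmpPath
        End If
    Next tdf
End Sub
```

Because indexes, long fields, and the empty-file overhead all contribute, treat the output as a relative comparison between tables rather than an exact byte count.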

As far as compacting goes, you don't compact an object, you compact the
actual file. Since all your tables are in one file, you can't compact just a
portion. Since it usually takes only a few seconds to compact a file, I'm not
sure why it would matter. Also, compacting only reclaims unused space in the
file, so it would take about the same amount of time whether you could compact
just the changed objects or everything. In my experience, compacting matters
more after design changes to your database and makes only a small difference
when you are simply adding data.


Rick B
forevergrateful

Thanks for the responses.

The reason I would like to do this is that it is normal for this DB to grow
from 20 MB to 2 GB within a few days of testing with the software. The
problem is that the 2 GB can easily be compacted to half that size. This does
not take seconds, believe me, and it can often fail in the process. It may be
that I need to move to SQL or some other approach.

Thoughts?
 
Nikos Yannacopoulos

A 20MB database does not demand to be moved to SQL on account of its size
alone; Access can handle it just fine. What you should consider is:
A. Regular compacting. I have a multi-user one close on 50MB now (and
growing by the day), and I have set up a scheduled event on a server to
compact it every night with a small .exe made in VB 6.0. It works perfectly.
B. Find what is causing all that bloat, and minimize it to the extent possible.
If this is just a side effect of development (many design changes), I
wouldn't worry about it too much. If, on the other hand, you create a lot of
temporary objects (temp tables etc.) and delete them within a session,
then:
* try to limit this practice to the absolute minimum
* create the temp objects in the front end, and set it to auto compact on
close
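The scheduled nightly compact described above can be done with DAO's CompactDatabase method, which works on a closed file from outside Access. A minimal sketch (the paths are examples, and the database must not be open by any user when this runs):

```vba
' Compact an .mdb from outside Access using DAO.
' CompactDatabase requires that no one has the database open,
' and the destination file must not already exist.
Sub NightlyCompact()
    Const SRC As String = "C:\Data\MyApp.mdb"      ' example path
    Const TMP As String = "C:\Data\MyApp_cmp.mdb"  ' example path

    If Dir(TMP) <> "" Then Kill TMP
    DBEngine.CompactDatabase SRC, TMP
    ' Replace the original only after the compact succeeds
    Kill SRC
    Name TMP As SRC
End Sub
```

Compacting to a temporary file and renaming only on success means a failed compact leaves the original database untouched.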

HTH,
Nikos
 
forevergrateful

The database runs a test that fills it with historical data from various
other tables and compares the relationship of historical values. When it
does this over periods of 50 to 100 years, the data can grow exponentially
when all the various variables are examined.

Even in one ten-hour period of testing it is not uncommon to grow from 20 MB
to 1.5 GB or even 2 GB. One can then delete the test, and through various
relationships all data related to that test will be deleted. The problem
is, often the data is not deleted but stored for later viewing. In that
case, the database can remain at these huge sizes for days or even weeks.
When you then try to compact, it can take forever and cause numerous errors.

Consequently, my thought is that I may be forced to move it to SQL. Or
possibly use SQL only for the parts that grow and leave the more or less
static parts in the Access database.

Does this help explain my predicament? And do you have any further thoughts
to help?

FG
 
John Vinson

It really sounds like you may want to consider having *three*
databases: a Frontend for the queries, forms, code, etc; one backend
for the static tables; and a second backend for the "test" tables.
This second backend can simply be deleted or archived and a new, empty
database created as needed without messing up your production
database.
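The reset step in that layout can be automated with DAO: discard (or archive) the bloated test backend, create a fresh empty one, and refresh the frontend's links. A rough sketch; the backend path, the TestResults table, and its columns are made up for illustration:

```vba
' Swap in a fresh, empty backend for the volatile "test" tables.
' Assumes the frontend has a linked table named TestResults stored
' in C:\Data\TestData.mdb - names and paths are examples only.
Sub ResetTestBackend()
    Const BACKEND As String = "C:\Data\TestData.mdb"
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef

    ' Delete the bloated backend (archive it first if you keep results)
    If Dir(BACKEND) <> "" Then Kill BACKEND

    ' Create a new empty backend and the table(s) the tests need
    Set db = DBEngine.CreateDatabase(BACKEND, dbLangGeneral)
    db.Execute "CREATE TABLE TestResults (ID COUNTER PRIMARY KEY, " & _
               "RunDate DATETIME, TestValue DOUBLE)"
    db.Close

    ' Refresh the link in the frontend so it points at the new file
    Set tdf = CurrentDb.TableDefs("TestResults")
    tdf.Connect = ";DATABASE=" & BACKEND
    tdf.RefreshLink
End Sub
```

Since the test backend is recreated from scratch, it starts at minimum size every run and never needs compacting at all.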

John W. Vinson[MVP]
(no longer chatting for now)
 
