Specifically, my company uses a LAN. At each office location
there is typically one regular data entry person, plus the
occasional manager who oversees the work. The number of
records entered per location is hard to guess, but I would
say about 20 per week. I envision each location needing
about 6 tables, with a few queries pulling this data into a
usable format. The data entry person at each location could
spend most of the day in the tables.
So I am still wondering whether it is better for each
location to have its own separate file with 6 (or so)
tables, so that I can link this data, along with 30 other
locations with identical setups, into a "master" file. Or
perhaps I can put all 30 locations into one main file.
Doing some high-powered math, that would be about 180 tables.
Your thoughts?
Performance-wise, trying to bring together that many tables in a single
operation would likely be ***really*** slow. I still think that my original
suggestion applies: if the relevant main records contain a field to identify
the "office", then all such main records and their child records can easily be
filtered as needed, and you would still only need to deal with 6 tables. I
can't even imagine trying to bring together 30 separate parent/child data
structures in any kind of operation or calculation scenario. If this database
is likely to exceed hundreds of megabytes in size, or if the number of users
performing simultaneous edits could exceed 20 or so, moving to a more robust
database, such as SQL Server, might be worth considering (I am not, however,
the one to answer questions about SQL Server <g>).
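To make the single-set-of-tables suggestion concrete, here is a rough
sketch in SQL. All of the names (Offices, MainRecords, ChildRecords,
OfficeID) are made up for illustration, not your actual schema; the types
are generic SQL, and Access's own DDL and query window differ slightly
(for one thing, they do not accept the "--" comments), so treat this as a
picture of the design rather than paste-ready code:

-- One row per office; every main record points at its office.
CREATE TABLE Offices (
    OfficeID   INTEGER PRIMARY KEY,
    OfficeName VARCHAR(50)
);

-- The main ("parent") table carries an OfficeID field, so one table
-- serves all 30 locations instead of 30 copies of it.
CREATE TABLE MainRecords (
    RecordID  INTEGER PRIMARY KEY,
    OfficeID  INTEGER REFERENCES Offices (OfficeID),
    EntryDate DATE
);

-- Child records hang off a main record and inherit the office through
-- the join, so they need no OfficeID of their own.
CREATE TABLE ChildRecords (
    ChildID  INTEGER PRIMARY KEY,
    RecordID INTEGER REFERENCES MainRecords (RecordID),
    Item     VARCHAR(50),
    Quantity INTEGER
);

-- One office's parent and child records are then a simple filtered
-- query rather than a 30-way consolidation.
SELECT m.RecordID, m.EntryDate, c.Item, c.Quantity
FROM MainRecords AS m
INNER JOIN ChildRecords AS c ON c.RecordID = m.RecordID
WHERE m.OfficeID = 1;

At 30 locations and roughly 20 records a week each, you are only adding
about 600 main records a week, call it 30,000 a year, which a single
well-indexed table handles comfortably; an index on OfficeID keeps the
per-office filter fast.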