Tim Cali
How would you handle this scenario...
A client is currently using a split database. The DB works well; however, now
they would like to create a copy of the database (FE/BE) on a disconnected
machine and enter more data there. Two machines = twice as fast at entering
data.
How can this be done? The database is simple enough in structure so that
there will be no overlapping data *except for* AutoNumber IDs. In other
words, certain business rules will be enforced....however the technical side
of things is a different story. Let's say they set up a DB (empty BE/FE) on
a disconnected machine, and begin entering data. Further let's say that a
table has an AutoNumber column. As both people enter data, both independent
columns will increment their respective AutoNumber IDs in exactly the same
way, creating duplicates that must be resolved when the BEs are merged
together.
One thought is, after the 2nd DB has been updated, I could artificially
increase the table's IDs so that there are no duplicates. Since this entire
scenario is a "one-off" situation, this would not be out of the realm of
possibility.
HOWEVER, I sure would like to know if there is a better way to do this. In
practice, there are 2 tables with 3 AutoNumber IDs, and I would have to
carefully "sync" up each of them in one of the DBs, and then merge it with
the 2nd DB. I imagine this could be a very tedious process. Is there
another, better/built-in way?
You know...after re-reading this, I am wondering if I can simply
artificially increase the base AutoNumber IDs in the 2nd back end *before* it
is put on the disconnected machine, so that it would be virtually impossible
for IDs to intersect. Then I could double-check the IDs before merging them,
and as long as there are no dupes, continue to merge.
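For what it's worth, one way to do that reseeding without hand-editing rows is
Jet's COUNTER DDL, which lets you set an AutoNumber's next value and increment.
A minimal sketch, run against the copy of the back end before it ships to the
disconnected machine (table and column names here are made up, not your real
schema, and the DDL form needs ADO or ANSI-92 query mode rather than plain DAO,
so test it on a throwaway copy first):

' Reseed each AutoNumber in the 2nd back end so its IDs start far
' above anything the main back end will reach before the merge.
Public Sub ReseedAutoNumbers()
    Dim cn As Object
    ' ADO connection to the currently open database; COUNTER(seed, increment)
    ' DDL is reliably accepted through ADO.
    Set cn = CurrentProject.Connection
    ' Hypothetical tables/columns -- substitute your 2 tables / 3 IDs.
    cn.Execute "ALTER TABLE tblOrders ALTER COLUMN OrderID COUNTER(1000000, 1)"
    cn.Execute "ALTER TABLE tblOrderItems ALTER COLUMN ItemID COUNTER(1000000, 1)"
    cn.Execute "ALTER TABLE tblOrderItems ALTER COLUMN LineID COUNTER(1000000, 1)"
End Sub

With the seeds pushed up like that, the duplicate check before merging becomes a
formality rather than a row-by-row sync.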
Any input is greatly appreciated.