TonyT
Apologies if this appears as a repost; the first went south.
I have a fairly complex design situation in which I need to produce part
order quantities based on 'just in time' ordering, using historical sales
data. I'll explain a bit more to help fill in the picture:
The setup is an accdb front end to an accdb back end on XP, everything in DAO.
I have a test setup with 90,000 parts; about 9,000 of these are designated
stock items and have a '# of Days to Stock' field. There are about 150,000
records in the invoice parts table for the 9,000 stock-line parts.
This is used for seasonal sales, so I have a function that creates a totals
query (currently including 3 subqueries) to get sales info for each stock
part: the last 4 weeks' sales, last year's sales for the next 4 weeks (i.e.
-52 wks to -48 wks), and the same for the preceding year.
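To make that concrete, the query the function builds is roughly of this shape
(the table and field names here - Parts, InvoiceParts, PartID, Qty, SaleDate,
StockItem - are simplified stand-ins for my real ones):

SELECT p.PartID, p.QtyInSt, p.DaysToStock,
    (SELECT Sum(i.Qty) FROM InvoiceParts AS i
     WHERE i.PartID = p.PartID
       AND i.SaleDate BETWEEN DateAdd("ww", -4, Date()) AND Date()) AS Last4Wks,
    (SELECT Sum(i.Qty) FROM InvoiceParts AS i
     WHERE i.PartID = p.PartID
       AND i.SaleDate BETWEEN DateAdd("ww", -52, Date()) AND DateAdd("ww", -48, Date())) AS LastYrNext4Wks,
    (SELECT Sum(i.Qty) FROM InvoiceParts AS i
     WHERE i.PartID = p.PartID
       AND i.SaleDate BETWEEN DateAdd("ww", -104, Date()) AND DateAdd("ww", -100, Date())) AS PrevYrNext4Wks
FROM Parts AS p
WHERE p.StockItem = True;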
I then work out the quantity required and also the number of days that the
current stock will last from 2 separate formulae, and put this data into a
temporary table for the user to override/edit its recommendations before it
gets transferred to the order detail table.
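For reference, the two formulae boil down to something like this per part
(the exact weighting isn't the point here, and the variable names are just
placeholders for the fields returned by the totals query):

Dim DailyRate As Double, QtyRequired As Double, DaysRemaining As Double
' Blend the recent 4 weeks with the same 4-week window from the two prior years
DailyRate = (Last4Wks + LastYrNext4Wks + PrevYrNext4Wks) / (3 * 28)
' How much to order to cover the part's '# of Days to Stock'
QtyRequired = (DailyRate * DaysToStock) - QtyInSt
If QtyRequired < 0 Then QtyRequired = 0
' How long the current stock holding will last at that rate
If DailyRate > 0 Then
    DaysRemaining = QtyInSt / DailyRate
Else
    DaysRemaining = 9999
End If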
The whole process is quite slow. Originally it was taking 90 seconds to
populate the temporary table with the order quantities and other data, but
with loads of tweaking and testing I've got it down to 22 seconds. BUT, this
is on my quick un-networked PC, not my customers' ancient tat on slooow
networks.
I was surprised to see during all my testing and re-working that using a
series of nested subqueries to give me 3 different sets of date-related
totals was considerably faster than using separate queries, a single crosstab
query, or a single joined query between the sales and parts tables. I've
tried moving data into more tables and parsing it more times and fewer, but I
can't improve on my current speed.
So my questions are these:
What should be the outright quickest method for getting 3 different sum
totals of items from a single table with moving date ranges as the sum
criteria? (I also require 4 fields of data from the parts table to complete
the calculations without having to multi-nest queries further down the line.)
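(To show what I mean by 'moving date ranges as the sum criteria': a single
joined query doing it in one pass would be conditional aggregation roughly
like this, with the same simplified names as above - this is the sort of
alternative I've already tried, and it still came out slower than the nested
subqueries:

SELECT p.PartID, p.QtyInSt, p.DaysToStock,
    Sum(IIf(i.SaleDate BETWEEN DateAdd("ww", -4, Date()) AND Date(), i.Qty, 0)) AS Last4Wks,
    Sum(IIf(i.SaleDate BETWEEN DateAdd("ww", -52, Date()) AND DateAdd("ww", -48, Date()), i.Qty, 0)) AS LastYrNext4Wks,
    Sum(IIf(i.SaleDate BETWEEN DateAdd("ww", -104, Date()) AND DateAdd("ww", -100, Date()), i.Qty, 0)) AS PrevYrNext4Wks
FROM Parts AS p INNER JOIN InvoiceParts AS i ON p.PartID = i.PartID
WHERE p.StockItem = True
GROUP BY p.PartID, p.QtyInSt, p.DaysToStock;)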
Is there a better way to get accurate 'just in time' order info without the
overhead of my current nested query design? Has anyone come up with a decent
model for this in an Access environment?
Are there other ways of shaving milliseconds from calculations and queries?
For example, in my function I open a recordset based on the nested subquery
design and iterate through it, referring to the same field (of the same
record) repeatedly (i.e. the same field QtyInSt appears multiple times in the
calculations). Is it quicker or slower to assign this value to a variable and
then refer to that, rather than to the recordset field rcd!QtyInSt? Does it
matter if I use SELECT * FROM rather than naming the individual fields if I
want ALL of the fields returned?
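To illustrate that last question, the loop is roughly of this shape
(qryOrderTotals is a stand-in for my saved nested-subquery query):

Dim db As DAO.Database
Dim rcd As DAO.Recordset
Dim lngQty As Long
Set db = CurrentDb
Set rcd = db.OpenRecordset("qryOrderTotals", dbOpenSnapshot)
Do Until rcd.EOF
    ' Option A: hit the Fields collection each time the value is needed
    '   ... rcd!QtyInSt ... rcd!QtyInSt ... rcd!QtyInSt ...
    ' Option B: read it once into a local variable and reuse that
    lngQty = rcd!QtyInSt
    ' ... several calculations using lngQty instead of rcd!QtyInSt ...
    rcd.MoveNext
Loop
rcd.Close
Set rcd = Nothing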
I know that I'm into the realms of theory here, but I'm interested to learn,
and I've done all the usual speed-improvement tweaks (subdatasheets off,
short db names in the root folder, persistent back-end connection, etc.).
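(By persistent back-end connection I mean the usual trick of opening a
recordset on a back-end table at startup and holding it open for the whole
session, along these lines - tblParts is illustrative, any linked back-end
table will do:

' Module-level variable held open for the life of the front end
Public rstPersist As DAO.Recordset

Public Sub OpenPersistentConnection()
    Set rstPersist = CurrentDb.OpenRecordset("SELECT PartID FROM tblParts WHERE 1=0")
End Sub
)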
I've already sacrificed a calculated stock quantity for a stored one to save
the additional overhead; I just can't think of a better approach to solving
this. The only other approach I've considered is to use PC idle time to
calculate the figure that I need the most - how many parts will I sell per
day at this particular point in time - then store that as a figure within the
parts table, and from it get much quicker calculations of the quantity
required and the remaining days of the current stock holding.
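In practice I imagine something along these lines running from a form's Timer
event when the machine is idle (SalesPerDay would be a new stored field in
the parts table; qryRecentTotals and the other names are illustrative only):

Public Sub UpdateDailyRates()
    ' Pre-calculate and store each part's current daily sales rate so the
    ' ordering run only has to read it back instead of summing invoices.
    Dim db As DAO.Database
    Dim rsTot As DAO.Recordset
    Dim rsPart As DAO.Recordset
    Set db = CurrentDb
    Set rsTot = db.OpenRecordset("qryRecentTotals", dbOpenSnapshot)
    Set rsPart = db.OpenRecordset("tblParts", dbOpenDynaset)
    Do Until rsTot.EOF
        rsPart.FindFirst "PartID = " & rsTot!PartID
        If Not rsPart.NoMatch Then
            rsPart.Edit
            rsPart!SalesPerDay = rsTot!Qty28Days / 28   ' average units sold per day
            rsPart.Update
        End If
        rsTot.MoveNext
    Loop
    rsTot.Close
    rsPart.Close
End Sub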
Your thoughts are most welcome,
TonyT