Tim Cali
Hi. In my survey database, there is a query that creates a cartesian product,
which is used as the source for another query; the latter is used for
report output. I need the cartesian product in order to aggregate properly.
Since there are so many respondents for this survey, and also since there
are so many questions, this "master" query is going to be much bigger than I
am used to. I think in a previous survey I had a maximum of 3K records in
the dynaset, and I aggregated from that. This new survey is going to yield
30-40K records, or 10x bigger than my last survey results for the dynaset.
Is a dynaset stored in RAM? I was thinking about it earlier, and this
cartesian product, which also uses an outer join, is incredibly useful.
However I can see the initial size of it being huge, and I am wondering how
Access handles its resources to be able to handle these huge select queries
at first, even when the final result may be an aggregation 1/10 or less the
size of the starting point.
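To illustrate the shape of what I'm doing (the table, field, and query names
here are just invented for the example, not my actual objects):

```sql
-- qryPairs: the cartesian product -- every respondent paired with
-- every question, whether or not they answered it.
SELECT r.RespondentID, q.QuestionID
FROM tblRespondents AS r, tblQuestions AS q;

-- qryMaster: outer-join the pairs to the actual answers, so
-- unanswered respondent/question combinations still appear
-- (with Null in AnswerValue) and the aggregates come out right.
SELECT p.RespondentID, p.QuestionID, a.AnswerValue
FROM qryPairs AS p
LEFT JOIN tblAnswers AS a
  ON (p.RespondentID = a.RespondentID)
 AND (p.QuestionID = a.QuestionID);
```

The report then aggregates over qryMaster, which is how the 30-40K-row
intermediate result ends up much bigger than the final output.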
Does this make sense? In other words, what if I had a table with one million
records, and I needed some kind of outer join to "prep" the data to
aggregate from? How does Access handle it? Do we need huge RAM
requirements? Does Access write this to a temporary table on disk, and then
erase it? What's going on "behind the scenes"?
Thanks for any insight.
Tim