G'day "Howard Kaikow" <
[email protected]>,
Ok, let us take this example.
I fill a collection of data objects from a text file (produced from a
form, saved as data only).
At most there will be dozens of items, and indeed I have Dimmed the
array to a max of 99 elements.
Now, I have two choices.
Howard's Method - CPU intensive and SLOW (sketched below)
Write all object contents to a temp doc - SLOW
Invoke sort (which is damn quick for CONTENT sorts)
Read all elements back into collection - SLOW
The Heretical Method
Run a bubble-sort on the collection items. (Quick, as n<100)
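For concreteness, here is roughly what that first route looks like in
Word VBA. This is only a minimal sketch under my own assumptions: the
collection items have already been flattened into a String array
(aItems and nCount are my names, not anything from the real code), and
the items themselves are non-empty. It writes the strings into a hidden
scratch document, leans on Word's built-in content Sort, then reads the
paragraphs back.

    ' Sketch only: the temp-document route, with assumed array names.
    Sub SortViaTempDoc(aItems() As String, ByVal nCount As Long)
        Dim docTemp As Document
        Dim para As Paragraph
        Dim sBuf As String
        Dim sText As String
        Dim i As Long, j As Long

        ' Write all item contents to a hidden temp doc - SLOW
        Set docTemp = Documents.Add(Visible:=False)
        For i = 1 To nCount
            sBuf = sBuf & aItems(i) & vbCr
        Next i
        docTemp.Content.Text = sBuf

        ' Invoke the built-in sort (quick for content sorts)
        docTemp.Content.Sort SortOrder:=wdSortOrderAscending

        ' Read all elements back into the array - SLOW
        ' (skip the empty paragraph Word keeps at the end of the document)
        j = 0
        For Each para In docTemp.Paragraphs
            sText = Left$(para.Range.Text, Len(para.Range.Text) - 1) ' drop the paragraph mark
            If Len(sText) > 0 Then
                j = j + 1
                aItems(j) = sText
            End If
        Next para

        docTemp.Close SaveChanges:=wdDoNotSaveChanges
    End Sub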
Why?
The 'statistical' inefficiency of the bubble sort is negated by the
small value of n. With n inside two orders of magnitude (n < 100),
there is little practical difference between an N log N sort (quick
sort | partition sort) and an N^2 one (bubble sort).
The bubble sort needs only very simple maths and carries little extra
overhead. The shell sort and the partition sort introduce many more
lines of code, which slows down the execution of a single loop.
Additionally, the standard convention with bubble sorts is to include
a flag that is set on each swap. If you complete an iteration without
setting the flag, the data is ordered. Thus the bubble sort is the only
one of these sorts with an 'early completion', meaning that for nearly
ordered data of any size it is far faster than either of the other
methods.
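And the heretical route, with that swap flag, as a sketch against the
same assumed array names. Nothing Word-specific here at all, which is
part of the appeal: the inner comparison and swap are about as cheap as
VBA gets, and the Loop While bSwapped test gives the early completion
on nearly ordered data.

    ' Sketch only: in-memory bubble sort with the early-completion flag.
    Sub BubbleSortItems(aItems() As String, ByVal nCount As Long)
        Dim i As Long
        Dim sTemp As String
        Dim bSwapped As Boolean

        Do
            bSwapped = False
            For i = 1 To nCount - 1
                If aItems(i) > aItems(i + 1) Then
                    ' Neighbours out of order: swap them and set the flag
                    sTemp = aItems(i)
                    aItems(i) = aItems(i + 1)
                    aItems(i + 1) = sTemp
                    bSwapped = True
                End If
            Next i
        Loop While bSwapped   ' a pass with no swaps means the data is already ordered
    End Sub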
So, your sort method depends on data source, built-in methods
available to deal with it, and then a high-level analysis of the data
itself.
I have achieved speed increases from 30 secs runtime down to 0.04
seconds on mainframes by employing them sensibly. That was an extreme
case, but it nevertheless demonstrates my point: there is a niche for
bubble sorts in the domain of sorting data.
Steve Hudson - Word Heretic
steve from wordheretic.com (Email replies require payment)
Without prejudice
Howard Kaikow reckoned: