AK said:
What do I multiply 500 by to get the "widget" value over time?
The following might help you get started. And it might be sufficient for
your needs. But as I will explain below, there are issues that you might
need to consider. Hopefully, my responses will encourage others who might
be able to resolve the issues.
The following procedure will construct a "standard normal" distribution for
500 widgets over 62 days.
1. Set A1 to -5.
2. Set A2 to the formula: =A1-2*$A$1/61. Copy A2 and paste into A3:A62.
A62 should be about 5. A1:A62 are called "z scores" -- the number of
standard deviations from the mean.
3. Set B1 to the formula: =500*NORMSDIST(A1). Copy B1 and paste into
B2:B62. B1:B62 are the cumulative distribution for each of 62 days.
(Note: NORMSDIST, the cumulative normal, not NORMSINV, which is its
inverse and expects a probability between 0 and 1.)
4. Set C1 to the formula: =B1.
5. Set C2 to the formula: =B2-B1. Copy C2 and paste into C3:C62. C1:C62
is the distribution of 500 widgets over each of 62 days.
If you select C1:C62 and use the Chart Wizard to draw an XY chart, you
should see a normal distribution.
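If it helps to check the arithmetic outside Excel, the steps above can be sketched in Python; math.erf stands in for the spreadsheet's standard-normal CDF, and the variable names are mine, not anything from the worksheet:

```python
import math

def norm_cdf(z):
    # Standard normal CDF -- what the spreadsheet's cumulative step computes
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

DAYS = 62
TOTAL = 500

# Steps 1-2: z-scores from -5 to +5 in 61 equal increments (column A)
z = [-5 + i * 10.0 / (DAYS - 1) for i in range(DAYS)]

# Step 3: cumulative widget counts (column B)
cum = [TOTAL * norm_cdf(zi) for zi in z]

# Steps 4-5: per-day fractional widgets (column C) -- first differences
per_day = [cum[0]] + [cum[i] - cum[i - 1] for i in range(1, DAYS)]
```

Plotting per_day against the day index gives the same bell curve as the XY chart.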
However, note that C1:C62 contains fractional widgets, which is probably not
practical. So....
6. Set D1 to the formula: =ROUND(C1,0). Copy D1 and paste into D2:D62.
D1:D62 is the integral distribution of 500 widgets over each of 62 days.
Issues....
You might notice that, first, there are zero widgets on several of the
first and last of the 62 days; and second, the sum of D1:D62 is less than
500. (In some cases, it might be more.)
If you would like a minimum of 1 widget on each of the 1st and 62nd days, I
do not know how to do that. There might be ad hoc methods for "stretching"
the distribution so that it will work. But I suspect the "proper" solution
is to (also?) alter the standard deviation, producing a normal distribution,
but not a "standard normal" distribution. There is no reason to limit the
solution to a "standard normal" distribution, unless that is your
requirement.
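For what it's worth, here is one ad hoc way to do that stretching in Python: keep the mean at the middle day and pick a larger standard deviation -- 12 days here, chosen by trial, not by any principle -- so that the tail days round up to at least 1 widget:

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

DAYS, TOTAL = 62, 500
MU = (DAYS + 1) / 2.0   # mean at the middle day (31.5)
SIGMA = 12.0            # larger sd fattens the tails; value found by trial

# Per-day probability mass: normal CDF over the day's interval [d-0.5, d+0.5]
mass = [norm_cdf((d + 0.5 - MU) / SIGMA) - norm_cdf((d - 0.5 - MU) / SIGMA)
        for d in range(1, DAYS + 1)]
widgets = [round(TOTAL * m) for m in mass]
```

With sd = 12, day 1 and day 62 each get about 0.66 of a widget before rounding, which rounds up to 1. The trade-off is that the curve is flatter and more of the total mass falls outside the 62-day window, so the sum drifts further from 500.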
Also, I do not know how to ensure that the sum of the 62 daily values is
exactly 500. Again, there are certainly ad hoc methods for "salting" the
distribution. And they might be sufficient, especially if you "salt" the
peak values first, creating the smallest percentage error. But I wonder if
there is a "proper" method of ensuring exact results -- or at least coming
closer. Then again, quantization "errors" are a pain to deal with.
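One "proper" method I can suggest -- my own addition, not something Excel provides directly -- is largest-remainder rounding: floor every value, then hand the leftover widgets to the days with the largest fractional parts, so the total comes out exactly 500:

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

DAYS, TOTAL = 62, 500
z = [-5 + i * 10.0 / (DAYS - 1) for i in range(DAYS)]
cum = [TOTAL * norm_cdf(zi) for zi in z]
per_day = [cum[0]] + [cum[i] - cum[i - 1] for i in range(1, DAYS)]

# Largest-remainder rounding: floor everything, then give the leftover
# units to the days with the biggest fractional parts. This echoes the
# "salting" idea above and keeps each day's error under one widget.
floors = [math.floor(x) for x in per_day]
leftover = TOTAL - sum(floors)
order = sorted(range(DAYS), key=lambda i: per_day[i] - floors[i], reverse=True)
for i in order[:leftover]:
    floors[i] += 1
```

After this pass, sum(floors) is exactly 500 and no day is off from its fractional value by a full widget.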