Bruce McF
Are any code examples available for basic schedule quality? I am only trying
to quantify how well each schedule is maintained with "complete in the past",
"incomplete in the future" and "resources as densely scheduled as possible
without overallocation". (A check for resource assignments on all detail
tasks is implicit in this, as is the last status update, etc.) I am trying to
generate metrics to tell us if we can trust the schedule information and to
set minimum-acceptable quality limits.
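To make that concrete, here's a minimal sketch in Python of the status-date
checks I mean, against hypothetical task records (the field names are
illustrative only, not any scheduling tool's actual API):

from dataclasses import dataclass
from datetime import date
from typing import List, Optional

# Hypothetical task record; field names are illustrative only.
@dataclass
class Task:
    name: str
    percent_complete: int          # 0-100
    actual_finish: Optional[date]  # set once the task is complete
    remaining_start: date          # earliest date remaining work is scheduled

def status_date_violations(tasks: List[Task], status_date: date) -> List[str]:
    """Flag tasks whose work sits on the wrong side of the status date."""
    problems = []
    for t in tasks:
        # "Complete in the past": finished work must not land after the status date.
        if t.percent_complete == 100 and t.actual_finish and t.actual_finish > status_date:
            problems.append(f"{t.name}: actual finish falls after the status date")
        # "Incomplete in the future": remaining work must not sit before it.
        if t.percent_complete < 100 and t.remaining_start < status_date:
            problems.append(f"{t.name}: remaining work scheduled before the status date")
    return problems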
I am not trying to check for any best practices at this level ... only that
the Finish Dates are credible. If tasks are well scheduled relative to the
status date, I figure that the Finish Dates are credible and a reasonable
estimate of when tasks will complete (within typical uncertainties). That's
the essential bit I need for reporting. (Best practices tend to make it
easier to maintain the schedule, but I am interested in how useful the data
is after normal processing.) I call this FDI ... Finish Date Integrity.
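For the reporting side, the FDI score itself could be as simple as the share
of detail tasks that pass every check, gated against a minimum-acceptable
limit. One possible scoring scheme (just a sketch, not a standard):

def finish_date_integrity(n_detail_tasks: int, n_violations: int) -> float:
    """Share of detail tasks with no violations; 1.0 means the finish
    dates look fully credible, 0.0 means none can be trusted."""
    if n_detail_tasks == 0:
        return 0.0
    return max(0.0, 1.0 - n_violations / n_detail_tasks)

# e.g. accept a schedule for reporting only if finish_date_integrity(...) >= 0.95
# (the 0.95 limit is arbitrary here; each shop would pick its own)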
I think I've got the code to tell whether statused tasks fall on the correct
side of the status date and am using TSVs to look at availability vs. plan
to spot over/under allocations ...
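For what it's worth, the allocation comparison reduces to something like the
sketch below: per-period planned work against availability to catch
overallocation, plus a density ratio for how fully the resource is booked
(the period values are assumed to be hours already pulled out of the
timescaled data):

from typing import List, Tuple

def allocation_profile(availability: List[float], planned_work: List[float],
                       tolerance: float = 0.0) -> Tuple[List[int], float]:
    """Return indices of overallocated periods plus a density ratio
    (total planned / total available; 1.0 = fully booked)."""
    over = [i for i, (avail, work) in enumerate(zip(availability, planned_work))
            if work > avail + tolerance]
    total_avail = sum(availability)
    density = sum(planned_work) / total_avail if total_avail else 0.0
    return over, density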
BUT I HAVE TO THINK THIS HAS ALREADY BEEN DONE A HUNDRED TIMES?
I suppose if I could turn off all the best-practice checking in the
commercially available tools, that would work, but my first reaction is that
we are pretty close with our code to date and will have more direct control
of the metrics that would be useful in our applications
(resource-constrained iterative software development).
I know you all are loaded and appreciate any guidance you might offer.
Thanks,
Bruce