Immediate Successor

Ken Kast

I want to model a situation where, as soon as a resource becomes available, a
task using another resource starts. The second task is a fixed-work task.
Using the obvious SF relationship with ASAP tasks allows the first task to
start, but then there can be a long lag before the second starts. In the real
world, by that time the resource may in fact no longer be available. The
first resource is really a material resource, but MS, in its project wisdom,
has deemed that material is always available, and hence there's no calendar
associated with it. I can't assign both resources to the second task,
because I really need all the work done by the second resource.

I thought about trying to do this heuristically, iterating toward a
solution with resource leveling and a macro in between iterations playing
games with the schedules. This isn't really a viable approach, because 1)
there's no guarantee it'll converge, and 2) the project is so large that the
iterations would take too long to run.
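
(Roughly, the iteration I had in mind would look something like the sketch
below - Project VBA only, with the actual schedule-juggling left as a
placeholder comment, since that's the part I never worked out.)

    Sub IterateLeveling()
        ' Sketch only: alternate resource leveling with macro adjustments,
        ' capped at a fixed number of passes since nothing guarantees convergence.
        Dim pass As Integer
        For pass = 1 To 10
            Application.CalculateProject     ' recalculate before each pass
            Application.LevelNow             ' level with the current leveling options
            ' ... code that "plays games with schedules" would go here ...
        Next pass
    End Sub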

So, does anyone have a trick on how I can model this? BTW, I'm using
Proj2003, but ideally the solution would work going back to 2000.

Thanks.

Ken
 
Steve House [MVP]

Ken, your post is so general that it's hard to tell exactly what you're
trying to do - call me dense, but I can't sort out what "1st task," "2nd task,"
"1st resource," and "2nd resource" all refer to. Can you be more concrete?

Making some guesses about what you're looking for, I think you're making it far
more complicated than it needs to be ...

Are you saying that the first task is one that produces or acquires a
material resource that is then to be used by a human in a second task where
work is performed that utilizes that material?

You are correct that Project does not use calendars for materials, but it's
very easy to model the fact that they aren't available until a certain
time - simply use a task to represent the process of creating or acquiring
the materials, and perhaps lag to represent the delivery time. Let's say I
have to order widgets from an external vendor and it will be three weeks
from the time I order until they're delivered ... the way I like to model it,
the process involves two tasks and a milestone ...

1 Order 25 widget subassemblies from mfgr, 4hr, Joe Secretary
2 Widgets received, 0hr, 1FS+3ew
3 Assemble 25 fids incorporating widget subassemblies, 1wk, 2FS, FredFidmaker & 25 widgets

As long as Fred is available when the widgets come in, task 3 that uses the
material will start when they arrive.
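
If you ever wanted to set that little chain up with a macro instead of typing
it in, a bare-bones VBA sketch might look like this - purely illustrative, and
it assumes a blank project, that Joe Secretary, FredFidmaker, and a "widgets"
material already exist in the resource sheet, and that your list separator is
the default comma:

    Sub BuildWidgetChain()
        Dim t1 As Task, t2 As Task, t3 As Task

        Set t1 = ActiveProject.Tasks.Add("Order 25 widget subassemblies from mfgr")
        t1.Duration = 4 * 60                             ' 4 hours, expressed in minutes
        t1.ResourceNames = "Joe Secretary"

        Set t2 = ActiveProject.Tasks.Add("Widgets received")
        t2.Duration = 0                                  ' zero duration makes it a milestone
        t2.Predecessors = CStr(t1.ID) & "FS+3ew"         ' finish-to-start plus 3 elapsed weeks

        Set t3 = ActiveProject.Tasks.Add("Assemble 25 fids incorporating widget subassemblies")
        t3.Duration = 5 * 8 * 60                         ' 1 week at the default 8-hour days
        t3.Predecessors = CStr(t2.ID)                    ' plain FS link to the milestone
        t3.ResourceNames = "FredFidmaker,widgets[25]"    ' work resource plus 25 units of the material
    End Sub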

Or even simpler, if all we care about is when the materials show up and we
don't need tasks to order and/or deliver them ...

1 Assemble 25 fids incorporating widget subassemblies, 1wk, SNET Vendor'sPromisedDeliveryDate, FredFidmaker & 25 widgets
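
In macro terms that one-liner is just a constraint on the task - again only a
sketch, with a made-up date standing in for the vendor's promised delivery:

    Sub BuildSnetAssembly()
        Dim t As Task
        Set t = ActiveProject.Tasks.Add("Assemble 25 fids incorporating widget subassemblies")
        t.Duration = 5 * 8 * 60                          ' 1 week, in minutes
        t.ResourceNames = "FredFidmaker,widgets[25]"
        t.ConstraintType = pjSNET                        ' Start No Earlier Than
        t.ConstraintDate = #10/6/2003#                   ' placeholder for the promised delivery date
    End Sub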

Or perhaps we are building the widgets in-house and then using them in the
final assembly, in which case it's only two tasks...

1 Build 25 widgets, 1w, SamTheWidgetMan
2 Assemble 25 fids from widget subassemblies, 1w, 1FS, FredFidmaker

and we don't need widgets as a resource for the assembly task 2 since
they're not purchased and not part of the project's budget (though the
materials used to make them in task 1 would be).

"In the real world by that time the resource may no longer be available" -
does that mean that if the work on the second task doesn't start as soon as
the material is available someone else may come along and grab the materials
first or does the material decay and so have to be used within a certain
time period else it must be discarded and the process start over? OR does
it mean that the WORK resource doing task 2 may become unavailable after a
certain period of time? If it's the first problem that has a very easy
solution - since you're the project manager, you're the boss and you "own"
those resources until you hand over the completed project to its customers,
including the materials that are task 1's deliverable. Simply make a policy
decision that says the output of task 1 is earmarked for task 2 and anyone
touching it is subject to disciplinary action <grin>.

You say there can be a long "lag" between task 1 and task 2, but in fact
there's no reason for it to happen that way unless you permit it to when you
build the schedule. As the PM you control when tasks are scheduled - you
don't let Fred decide when he's going to work on the fids (task 2); you tell
him when he needs to do it ... big difference. If a delay between task 1
and task 2 would result in spoilage of the materials, simply don't give Fred
the option to start it late. Instead, tell him he must start it as soon as
the widgets are available and give him the date when you estimate they'll be
ready to go - removing those scheduling uncertainties is part of the reason
for planning in the first place.

Hope this helps
 
Ken Kast

Here's my problem in a large nutshell. I'm trying to use Proj to model
the loading of data into a database prior to its going operational. My
primary "work" resource is the database computer. It's got a known
transaction-processing capability. I've got a couple dozen sources of data
that are being consolidated; each source has its data date-time stamped, and
the data needs to be processed FIFO. I wanted to use resource leveling to
schedule when each source's data will be loaded. (As an aside, some of the
data is warehoused, some comes from live feeds.)

That part of the problem is pretty straightforward. The catch is that each
data source has preprocessing software that's being written and debugged.
Sometimes the software for a particular source is unavailable--say a defect
is found or a rev is being made. So to process the feed I need to have
computing capability on the DB machine AND the software available. If the
software goes down, I want to be able to juggle the schedule so that I don't
lose the time on the db machine, i.e., process some other warehoused data.
What's happening is that resource leveling schedules the software when it's
available (as a predecessor to the data-loading task), then schedules the
data loading when the db machine is available; of course, by then the
software may be scheduled to be down.

The software is really material, but it is material that does have a
calendar associated with it.

Ken
 
Steve House [MVP]

Gotta think about this a little but I thought I'd share a few first
impressions. I have doubts that your preprocessing software should be
viewed as a material resource. For that matter, it's iffy whether either
the software or the computer really are resources of any sort at all. At
first glance the software is part of the project's deliverables while the
hardware is part of the facilities infrastructure. Material resources are
those things that are actually incorporated into the task's/project's
deliverables or expended in the course of its production. Examples are
bricks used in the construction of a wall, film for a camera in a movie
shoot, or fuel for a generator supplying power for the lights used in
filming. The database hardware itself *might* be a work resource but I'm not
sure if that's really necessary in your example either. Work resources
normally would be the programmers, administrators and other humans creating
and installing the software and loading the data. Think about scheduling
the humans, not the machines.

I may be naive, but how does the software go down if you have to wait for the
machine to be free once it's debugged, as you state at the end of your post?
Surely you don't mean that it's erased or disabled if you don't use it as
soon as it's ready? Software just isn't that volatile, at least not since
the old Geniac days when everything was stored in core and there was no
offline storage (grin). Don't know about you, but when I load some software
onto my computers, it sits there on the disk unless I erase it and I can
choose to execute it at any point in the future at my convenience. The
software is the deliverable of the tasks that create and debug it and the
schedule is being driven, in part, by the time those creation processes
take, not a calendar of the available working time of the software. Data
can't be loaded until the software is available so your schedule depends on
the timing of the tasks that create and test it. Once the software is ready
for a particular data set you can let it sit in limbo until its associated
data is ready and/or the target hardware is available to accept the load.
If the load has to be scheduled around other activities on the target
machine, incorporate those activities as tasks in your project plan. Think
in terms like "The front-end software for ABC dataset will be ready Tuesday,
we can have the ABC data available after Wednesday, but the hardware is tied
up with XYZ tasks until Friday so the first we can take the production
machine offline for the loadup after the software & data are ready is Friday
so that's when we'll schedule it."

If you really want to use resource leveling you would have task 1 XYZ in the
plan using resource BigHardware. I assume you have a development machine
available so your actual database hardware isn't tied up when writing and
debugging the software. Meanwhile you would also have tasks 2 Write Frontend
Software for ABC data, 3 Test Frontend Software for ABC data (predecessor
2), and 4 Prepare ABC data for loading but they don't use the BigHardware
resource. Then we have task 5, LoadDatabase with predecessors 3 & 4 and
using BigHardware. Resource leveling will shift the loading to the first
available time after XYZ finishes. What's wrong with that method?
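
As a rough sketch (VBA again, assuming a BigHardware resource already exists in
the resource sheet, and leaving durations and the human resources off for
brevity), that five-task structure would come out something like this:

    Sub BuildAbcLoadPlan()
        Dim t(1 To 5) As Task

        Set t(1) = ActiveProject.Tasks.Add("Run XYZ work on the production hardware")
        t(1).ResourceNames = "BigHardware"

        Set t(2) = ActiveProject.Tasks.Add("Write frontend software for ABC data")
        Set t(3) = ActiveProject.Tasks.Add("Test frontend software for ABC data")
        t(3).Predecessors = CStr(t(2).ID)                        ' 3 follows 2
        Set t(4) = ActiveProject.Tasks.Add("Prepare ABC data for loading")

        Set t(5) = ActiveProject.Tasks.Add("Load database with ABC data")
        t(5).Predecessors = CStr(t(3).ID) & "," & CStr(t(4).ID)  ' predecessors 3 & 4
        t(5).ResourceNames = "BigHardware"

        Application.LevelNow                                     ' pushes the load past XYZ on BigHardware
    End Sub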

Seems like one of your problems is you're trying to go into production with
software that isn't adequately tested. Can't you do something about that?


--
Steve House [MVP]
MS Project Trainer & Consultant
Visit http://www.mvps.org/project/faqs.htm for the FAQs
 
Ken Kast

Steve House said:
> Gotta think about this a little but I thought I'd share a few first
> impressions. I have doubts that your preprocessing software should be
> viewed as a material resource. For that matter, it's iffy whether either
> the software or the computer really are resources of any sort at all. At
> first glance the software is part of the project's deliverables while the
> hardware is part of the facilities infrastructure. Material resources are
> those things that are actually incorporated into the task's/project's
> deliverables or expended in the course of its production. Examples are
> bricks used in the construction of a wall, film for a camera in a movie
> shoot, or fuel for a generator supplying power for the lights used in
> filming. The database hardware itself *might* be a work resource but I'm
> not sure if that's really necessary in your example either. Work
> resources normally would be the programmers, administrators and other
> humans creating and installing the software and loading the data. Think
> about scheduling the humans, not the machines.

Don't forget that I'm not trying to model the development with Project,
but rather the loading of the data. So the only things working are the
application software and the db machine. And the only one of these that is
effectively constrained is the db machine. The apps all run on machines
different from the db machine, so they are not CPU- or I/O-constrained.

> I may be naive, but how does the software go down if you have to wait for
> the machine to be free once it's debugged, as you state at the end of your
> post?

The software can go down, i.e., crash, and so is unavailable until that bug
is fixed, since the crash puts a block in the FIFO requirement. Another
possibility is that a defect is uncovered, so that the preprocessing isn't
correct, i.e., the data being uploaded isn't correct from the new db's
perspective.

> Surely you don't mean that it's erased or disabled if you don't use it as
> soon as it's ready? Software just isn't that volatile, at least not since
> the old Geniac days when everything was stored in core and there was no
> offline storage (grin). Don't know about you, but when I load some
> software onto my computers, it sits there on the disk unless I erase it
> and

The software is there as bytes on the hard drive, but it isn't useful or
usable if it has errors. From my view, it makes no difference if it's not
there or not usable.

> I can choose to execute it at any point in the future at my convenience.
> The software is the deliverable of the tasks that create and debug it and
> the schedule is being driven, in part, by the time those creation processes
> take, not a calendar of the available working time of the software. Data
> can't be loaded until the software is available so your schedule depends on
> the timing of the tasks that create and test it. Once the software is
> ready for a particular data set you can let it sit in limbo until its
> associated data is ready and/or the target hardware is available to accept
> the load. If the load has to be scheduled around other activities on the
> target machine, incorporate those activities as tasks in your project plan.
> Think in terms like "The front-end software for ABC dataset will be ready
> Tuesday, we can have the ABC data available after Wednesday, but the
> hardware is tied up with XYZ tasks until Friday so the first we can take
> the production machine offline for the loadup after the software & data are
> ready is Friday so that's when we'll schedule it."

I have 6 months of warehouse data that has to get loaded at the same time
I'm processing 6 months of live data. The preprocessing requirements are
such that there won't be very many spare cycles, hence it's really important
that the db machine be run 24/7 (at least figuratively).

With the way I've modeled it now I have roughly 16,000 tasks: one for the
software availability (resourced with the app software) and one for the
loading task (resourced with the db machine). They are linked. In addition,
to enforce the FIFO requirement, each loading task is linked to the next
logical day's loading; the loading could take place on the same calendar
day, or later. On a 3Gig machine this takes several hours to level, which
means it is not useful as a what-if/planning tool.

I believe your model below is an elaboration on mine. I think it would
suffer from the same problem: it models a simultaneity relationship as a
link. I think your (and my) approach would work if there weren't competing
demands (different data feeds) for the same resource. What is actually
happening is that resource leveling parks all the preprocessing at the
beginning of the activity, then spreads the data loading over days. In the
steady-state case, resource leveling then gives a feasible plan, i.e., keeps
the data loading at NGT 100% of capability. But if I change the software
availability date, Project moves the milestone to an OK date, but then lets
the data loading occur at a time when the software isn't there.

This problem might be manageable if the preprocessing could be separated
from the loading, but there's no facility to stage the preprocessed data.
So from a logical (real-world) perspective, data processing and data loading
are really one activity. I can't make them one task, though, because the
resources are not interchangeable. If Project had a calendar for material,
or a predecessor relationship that enforced simultaneity, I'd be in
data-loading heaven.
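
About the best workaround I can come up with is a post-leveling audit rather
than a real fix - something like the sketch below, which assumes my existing
structure (each loading task has its software-availability task as a
predecessor) plus a made-up "SW avail" naming convention and Flag1 as the
marker for loads that leveling has pushed outside the availability window:

    Sub FlagLoadsOutsideSoftwareWindow()
        ' Post-leveling audit (sketch): for every non-summary task, look for a
        ' predecessor whose name starts with "SW avail" (made-up convention) and
        ' set Flag1 if the task is scheduled outside that predecessor's span.
        Dim t As Task, p As Task
        For Each t In ActiveProject.Tasks
            If Not t Is Nothing Then
                If Not t.Summary Then
                    For Each p In t.PredecessorTasks
                        If Left(p.Name, 8) = "SW avail" Then
                            If t.Start < p.Start Or t.Finish > p.Finish Then
                                t.Flag1 = True          ' filter on Flag1 afterwards to review
                            End If
                        End If
                    Next p
                End If
            End If
        Next t
    End Sub

Running that after each leveling pass would at least tell me which loads need
to be juggled by hand, but it doesn't prevent the problem in the first place.
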
> If you really want to use resource leveling you would have task 1 XYZ in
> the plan using resource BigHardware. I assume you have a development
> machine available so your actual database hardware isn't tied up when
> writing and debugging the software. Meanwhile you would also have tasks 2
> Write Frontend Software for ABC data, 3 Test Frontend Software for ABC
> data (predecessor 2), and 4 Prepare ABC data for loading but they don't
> use the BigHardware resource. Then we have task 5, LoadDatabase with
> predecessors 3 & 4 and using BigHardware. Resource leveling will shift
> the loading to the first available time after XYZ finishes. What's wrong
> with that method?
>
> Seems like one of your problems is you're trying to go into production
> with software that isn't adequately tested. Can't you do something about
> that?

It's actually 8 months till we go operational. The data loading serves
several purposes: get the db populated; provide a rigorous developmental test
of our software; and, what is increasingly the bigger issue, debug all the
feeds. Despite ICDs, the feeds aren't all to spec. Sometimes the feed has to
change, sometimes the software does.

Ken
 
Ken Kast

Steve,

One other thing that would make a different model of my problem work would
be if resources assigned to summary tasks were checked against their
calendars. It's not obvious what it would mean for a resource to be
available for only part of the summary duration, or what it would mean to
split a summary, since it's nothing more than a roll-up of independent
durations. But it would solve situations like the one I have.

BTW, you might think it's kind of an off-the-wall "project" I'm trying to
model. Before there was Project I used to use Timeline as a system
engineering tool. The nice thing about using a product like Project rather
than a true modeling tool is that it's much more available, and
non-engineering reviewers are less intimidated by it.

Ken
 
Steve House [MVP]

Never, ever, under any circumstances, assign resources to summary tasks.
Summaries are roll-ups of the performance tasks and in a very real sense
don't actually exist as tangible events like "real" tasks do - they are
purely artifacts inserted for convenience in organization and reporting.
 
Steve House [MVP]

See embedded
....
> Don't forget that I'm not trying to model the development with Project,
> but rather the loading of the data. So the only things working are the
> application software and the db machine. And the only one of these that
> is effectively constrained is the db machine. The apps all run on machines
> different from the db machine, so they are not CPU- or I/O-constrained.

That may be true but those tasks are being done by human resources. The
software is developed and debugged by a task performed by a human.

> The software can go down, i.e., crash, and so is unavailable until that bug
> is fixed, since the crash puts a block in the FIFO requirement. Another
> possibility is that a defect is uncovered, so that the preprocessing isn't
> correct, i.e., the data being uploaded isn't correct from the new db's
> perspective.

I hate to harp on the point, but debug the software BEFORE you attempt to go
live with it - don't just say "well, this should work, let's go for it <g>".
Your schedule isn't determined by a "calendar" for the software
availability; it's determined by how long it will take your programmers to
ensure the software is working correctly before attempting your data load.
Once the software has passed its QC stage it's going to be available on
demand whenever your WORK schedule says you need it, without regard to the
hours of the day, days of the week, or the time that's elapsed since its
debugging was completed. Of course surprises may happen - that's why project
planning is an iterative process continuing throughout the project's
lifespan - but as a developer myself I wouldn't even dream of attempting a
load of live data onto the production environment until the front end has
been exhaustively tested and I'm 99.99% certain it's error-free.

> The software is there as bytes on the hard drive, but it isn't useful or
> usable if it has errors. From my view, it makes no difference if it's not
> there or not usable.

See above - your schedule is driven by how long it takes to ensure the
preprocessing software will perform as required before even attempting the
data loading. Then, after that QC process is complete and you are confident
the software will be there and run bug-free whenever it's required, your
plan should schedule the data processing and the data loading based on the
availability of the db hardware. Seems like the crux of the problem is
trying to cut corners and skip debugging until you're doing the production
data load.

If you examine your plan with the idea that a task is an observable physical
activity performed by resources doing work and extending over a finite,
quantifiable time period, it might fall into place better. I teach my
students to always start task names with an action verb, just to keep
their minds focused on that basic fact.

Resource leveling is only a tool to help you make management decisions. Its
sole function is to resolve situations where a resource is booked on
multiple tasks that conflict, so that he ends up expected to produce more
man-hours of work than he is capable of doing in the time allowed. In your
project, the only place resource leveling might help is if your machine is
booked for several mutually exclusive tasks - perhaps 12 hours on one task,
12 hours on a second, and 12 hours on a third - all starting out scheduled
to run during the same 12-hour time frame. Since they can't run together,
the only resolution is to move two of them to times when they can run
unimpeded. It doesn't make your decisions for you, so if it's doing things
to the schedule that are unacceptable for meeting your business objectives,
it may not be the right tool for that specific job - simply don't use it.
 
