Oracle performance with Microsoft Project

tom thayer

We are running Microsoft Project 2000 against tables in a database via
ODBC. We have large projects and the performance is terrible when
saving changes. MSP updates the entire project even if you only change
a few things. The tables in the database are tied into a 3rd party CRM
so using MPP files is not an alternative. The database being used is
Oracle on an HP-UX system with lots of memory. The database is on a
different machine than MSP. The best save times we get are with the
Oracle ODBC driver. Even though the save times are long, we note that
if we use a SQL Server database (also on a different machine), the save
times are < 40% of the save times against Oracle. E.g., a save
that took 5 minutes on Oracle took only 2 minutes on SQL Server, and one
that took 20 minutes on Oracle took only 5 minutes on SQL Server. We have
done performance tuning and looked at Oracle Performance Manager and TOP
SQL and see that the SQL is optimized. Why is the SQL Server path so
much faster than the Oracle path here?
 
Jim Kennedy

tom thayer said:
We are running Microsoft Project 2000 against tables in a database via
ODBC. We have large projects and the performance is terrible when
saving changes.
*snip*
Why is the SQL Server path so much faster than the Oracle path here?
Do they use bind variables?
Jim
 
Thomas T

tom thayer said:
We are running Microsoft Project 2000 against tables in a database via
ODBC. We have large projects and the performance is terrible when
saving changes.
*snip*
Why is the SQL Server path so much faster than the Oracle path here?

ODBC, eh? What version of Oracle is the HPUX box running? Is it running
Unix or Linux or Windows?

I assume the SQL Server is running on a Microsoft system; are you using ODBC
to connect to SQL Server?

Here's my guess at the answers above: I'm guessing that the HP-UX is not
running Windows, so you're limited to ODBC. The SQL Server's box is
probably running an NT-based Windows O/s (Windows NT 3.5/4 or Win2k or
Win2003 server). You're probably using an OLE DB provider when you connect
to SQL Server, but the "reliable dinosaur" ODBC for connecting to your
Oracle. So I'm pointing towards ODBC as the bottleneck. Try loading Oracle
onto a Win2k server, and if it's at least Oracle 8, you can use Oracle's OLE
DB Provider. (Don't use the Microsoft OLE DB Provider for Oracle; I found it
to be unreliable.)

The Oracle system might also be tuned incorrectly. I believe some operating
systems can induce paging (swap-file usage) depending on how much memory is
asked of them. You say the HP-UX box has a large amount of memory; if the
SGA was sized for a large amount of memory but the OS pushes that SGA out
to the swap file, you could be getting hurt by disc access. Dropping the
SGA size might help.
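If SGA-induced paging is the suspect, one quick experiment is shrinking the main SGA components in init.ora (parameter names are the 8i-era ones; the sizes below are placeholders, not recommendations):

```
# init.ora fragment -- hypothetical sizes, tune to the machine
shared_pool_size = 150000000    # shared pool, in bytes
db_block_buffers = 20000        # buffer cache, in db_block_size blocks
sort_area_size   = 1000000      # per-session sort memory, in bytes
```

After changing these, bounce the instance and watch vmstat to see whether the paging stops.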

Also, were separate tablespaces used for indexes & tables? Or is everything
in one tablespace?

From my limited experience with SQL Server (mainly 6, 7, and 2000),
there's nothing to tune; it handles itself. Therefore an out-of-the-box SQL
Server installation might run better initially than an out-of-the-box Oracle
installation.

Just some wild guesses...

-Thomas
 
tom thayer

Yes, it uses bind variables. Oracle Performance Manager and MS SQL
Server Profiler both show the SQL with bind variables.
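For readers following the bind-variable question, here is roughly what the two forms look like in a trace (the table and column names below are invented, not MSP's real schema):

```sql
-- Bind-variable form: one shared cursor, parsed once, re-executed
-- with new values for every task row.
UPDATE msp_tasks
   SET task_name = :1,
       task_dur  = :2
 WHERE task_uid  = :3;

-- Literal form: every distinct value set forces a fresh hard parse.
UPDATE msp_tasks
   SET task_name = 'Design review',
       task_dur  = 480
 WHERE task_uid  = 42;
```

Since MSP is binding, hard parsing is probably not where the Oracle time is going.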
 
Dave Hau

Are you running other apps on the same Oracle server? If so, are you
getting decent performance on those apps?

Cheers,
Dave
 
Billy Verreynne

tom thayer wrote:
We are running Microsoft Project 2000 against tables in a database via
ODBC. We have large projects and the performance is terrible when
saving changes.
We have done
performance tuning and looked at Oracle Performance Manager and TOP
SQL and see that the sql is optimized. Why is the SQL Server path so
much faster than the Oracle path here?

Before you can performance tune, you need to identify the problem. And
that in my book means running sql trace and using TKPROF to see just
what the heck is happening between the application and Oracle.

My suggestion is to implement a database logon trigger that checks
the schema logon name and, if it is the MS Project schema,
enables SQL trace in that session (using the
dbms_system.set_sql_trace_in_session(sid, serial#, true) call). Easiest
and most seamless way to enable SQL tracing for 3rd party apps IMO.
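A minimal sketch of such a trigger (the schema name MSPROJECT is an assumption; for your own session, dbms_session.set_sql_trace saves you the sid/serial# lookup):

```sql
-- Hypothetical logon trigger: trace only MS Project sessions.
-- Needs CREATE TRIGGER and ADMINISTER DATABASE TRIGGER privileges.
CREATE OR REPLACE TRIGGER trg_trace_msproject
AFTER LOGON ON DATABASE
BEGIN
  IF USER = 'MSPROJECT' THEN           -- assumed MS Project schema
    dbms_session.set_sql_trace(TRUE);  -- trace the current session only
  END IF;
END;
/
```

The trace files land in user_dump_dest on the server, ready for TKPROF.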

Run the trace files via TKPROF after the MS Project update has been
completed and then work with what is *really* happening under the hood
between Oracle and MS Project (instead of trying to guess via OEM or
other tools).

Question. Just how was that schema generated? Who created the tables
and indexes and stuff in Oracle? MS Project via ODBC? If so.. then
*OUCH*! No wonder you are having performance issues.
 
Rob Schneider

*snip*
Question. Just how was that schema generated? Who created the tables
and indexes and stuff in Oracle? MS Project via ODBC? If so.. then
*OUCH*! No wonder you are having performance issues.

Billy ...Nice post. What's the alternative to using MS Project via ODBC
to create the tables and indexes for Oracle?
 
Billy Verreynne

Rob Schneider said:
What's the alternative to using MS Project via ODBC
to create the tables and indexes for Oracle?

You still let MS Project do it via ODBC - then you dump the schema
definition to a SQL script (using something like TOAD) and fix what is
broken. Am thinking specifically of the physical definition of the
schema: using IOTs, sequences and triggers, storage clauses, using
partitioning to reduce I/O overheads, and making sure that everything is
analyzed so that the CBO can work on more than a thumbsuck.
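As an illustration, rebuilding one ODBC-created table as an IOT and analyzing it might look like this (the table and column names are made up):

```sql
-- A table only ever probed by its primary key is a candidate for an
-- IOT: the rows live in the index itself, saving I/O per lookup.
CREATE TABLE msp_assignments (
  proj_id   NUMBER,
  task_uid  NUMBER,
  res_uid   NUMBER,
  work_amt  NUMBER,
  CONSTRAINT msp_assignments_pk PRIMARY KEY (proj_id, task_uid, res_uid)
) ORGANIZATION INDEX;

-- Give the CBO real statistics instead of a thumbsuck.
ANALYZE TABLE msp_assignments COMPUTE STATISTICS;
```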

I'm sure that the TKPROF trace will point to one or more SQLs that are
the real performance culprits. Then it should not be that difficult to
determine what physical changes can be made in the schema to make
these less of a performance problem. Maybe that will just require
using OEM and not even re-building the schema from a dumped-and-fixed
SQL script.

The problem with ODBC is that when a db-independent product is used,
the product does not use native and optimal SQL for the chosen db. You
are relying on the ODBC driver to translate what the application says
into good-and-optimal native SQL for that db. The ODBC driver will not
realise that a specific CREATE TABLE and CREATE INDEX are better
served by an IOT, or that partitioning suits a specific table.

Logical db design is good. However, one must never forget that making
that logical design work, and work well, requires a
(proper) physical design for the chosen db. And this is where many
ISV products using databases fall down. In fact, IMO, this is where
many db developments also fail. Developers tend not to think about the
physical aspects of the database - they just slam the logical db design
onto it and fire up the code generators and report writers.
 
Jim Kennedy

tom thayer said:
Yes, it uses bind variables. Oracle Performance Manager and MS SQL
Server Profiler both show the SQL with bind variables.

What is the bottleneck? I would run perf stats while it is doing a save
and see what it is doing.
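With Statspack installed (it ships with Oracle from 8.1.6 on; the PERFSTAT schema is assumed), bracketing a single save with snapshots gives exactly that picture:

```sql
-- From SQL*Plus as PERFSTAT: snapshot, do the save, snapshot again.
EXECUTE statspack.snap;
-- ... perform the MS Project save now ...
EXECUTE statspack.snap;

-- Then report on the interval between the two snapshot ids:
@?/rdbms/admin/spreport.sql
```

The Top 5 Wait Events section of the report usually names the bottleneck directly.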
Jim
 
Sten Rognes

I worked with a MS Project 2000/Oracle environment a couple of years
ago and it did not take much time with the product and MS support
techs to figure out that this is an application that really only will
scale when used with SQL Server. I bet you can spend one million
dollars buying more hardware and 6 months tuning your Oracle database
backend without getting the environment to scale. The application is
the problem, and the vendor has minimal experience and interest in
changing the app to work efficiently with Oracle.

Sten
 
Joel Garry

Thomas T said:
ODBC, eh? What version of Oracle is the HPUX box running? Is it running
Unix or Linux or Windows?

Since no one else said it, hp-ux is an operating system. Should post
the version, though.
I assume the SQL Server is running on a Microsoft system; are you using ODBC
to connect to SQL Server?

Here's my guess at the answers above: I'm guessing that the HP-UX is not
running Windows, so you're limited to ODBC. The SQL Server's box is
probably running an NT-based Windows O/s (Windows NT 3.5/4 or Win2k or
Win2003 server). You're probably using an OLE DB provider when you connect
to SQL Server, but the "reliable dinosaur" ODBC for connecting to your
Oracle. So I'm pointing towards ODBC as the bottleneck. Try loading Oracle
onto a Win2k server, and if it's at least Oracle 8, you can use Oracle's OLE
DB Provider. (Don't use the Microsoft OLE DB Provider for Oracle; I found it
to be unreliable.)

Going from hp-ux to Win2k, unless it is some really old box, is likely
a BIG step down.
The Oracle system might also be tuned incorrectly. I believe some operating
systems can induce paging (swap-file usage) depending on how much memory is
asked of them. You say the HP-UX box has a large amount of memory; if the
SGA was sized for a large amount of memory but the OS pushes that SGA out
to the swap file, you could be getting hurt by disc access. Dropping the
SGA size might help.

swapinfo -a would tell, but it would have to be seriously
misconfigured. I'd say, if you don't know what hp-ux is, you might
withhold giving advice about it.
Also, were separate tablespaces used for indexes & tables? Or is everything
in one tablespace?

Since other people will beat you up for this, I'll just ask, why do
you think separation will help?

jg
 
Tanel Poder

Hi!

If it works well with SQLServer and Project Server you could use that for
storing your projects and either replicate the tables your CRM needs to
Oracle or use them directly from MSSQL using heterogenous services...
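A rough sketch of the second option, assuming a Generic Connectivity (hsodbc) agent and an ODBC DSN for the SQL Server box are already configured (all names below are placeholders):

```sql
-- tnsnames.ora needs an entry (here MSSQL_HS) that routes through
-- the hsodbc agent to the SQL Server DSN.
CREATE DATABASE LINK mssql_link
  CONNECT TO "crm_user" IDENTIFIED BY "crm_password"
  USING 'MSSQL_HS';

-- The CRM side can then read project data straight out of SQL Server:
SELECT proj_id, proj_name
FROM   msp_projects@mssql_link;
```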

Tanel.
 
Phil Singer

Sten said:
I worked with a MS Project 2000/Oracle environment a couple of years
ago and it did not take much time with the product and MS support
techs to figure out that this is an application that really only will
scale when used with SQL Server.
*snip*

I was also involved with an attempt to use Oracle with MS Project 2000
(about 3.5 years ago, IIRC). We finally put a network analyzer on a
simple "open project" request. The result was even worse than this
thread's previous comments would suggest. The tables were designed
for an old version of Sql Server (multiple CHAR columns of 255 rather
than one of 2000). But the worst was the embedded SQL logic: as an
example, if table A was joined with tables B, C, and D, the trace
revealed that table A's primary key was fetched 4 times, just to make
sure that it was still there (if it was actually used, it was fetched
again).

What we had to do was to create a shell around MS Project (Corporate
politics dictated that Project and Oracle both be used). The user would
work on a project in MS Project. When it became time to save to the
database ("publish the project", "check it in", whatever term you want
to use) the shell would create a .csv version of the data. It would
then be FTP'd to the Oracle server, where a daemon was running. This
task would detect that a project was being uploaded. It would promptly
spawn another task which would SQL*Load the .csv to a temp table and
run a PL/SQL routine to update the main database.

Subsequent work on the project required sending a request to the shell
to unload a .csv version of the project.
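The load step of such a shell could be driven by a SQL*Loader control file along these lines (file, table, and column names are invented for illustration):

```sql
-- Hypothetical control file: bulk-load the uploaded .csv into a
-- staging table, which a PL/SQL routine then merges into the real tables.
LOAD DATA
INFILE 'project_upload.csv'
TRUNCATE
INTO TABLE tmp_msp_tasks
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(task_uid, task_name, task_start DATE "YYYY-MM-DD", task_dur)
```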

This actually worked. Fortunately, corporate politics did not require
that anyone actually use this mess, so we never found out how well it
would scale (perhaps I should have worded that "how poorly it would
scale"). An earlier post suggests that Tanel found a real solution to
this problem, and I would be very interested in a sketch of where the
cure is.
 
