--- Mohan2005 <[EMAIL PROTECTED]> wrote:

> 
> Thank you all kindly for these valuable comments and
> suggestions. I have noticed some things; we have
> obvious problems that need to be addressed.
> 
> We are running this 10g on a hardware RAID-5 array
> (SCSI).
> 
> The 10g database is at the backend of 12 Tomcat
> servers, load balanced through an Apache web server.
> 
> Each Tomcat has a maximum of 40 simultaneous
> connections.
> 
> As you have said, we have identified a few SQL
> processes overloading the CPUs.
> 
> However, we cannot move back to non-HT technology, as
> this is not provided to us at a hardware level.
> 
> So we are planning to go for a quad (HT) with a lot
> more RAM so as to increase the SGA.
> 
> Thanks again for your help.
> --
> View this message in context:
>
http://www.nabble.com/Recommended-Specs-for-Oracle-10g-db-server-t1340651.html#a3595161
> Sent from the Tomcat - User forum at Nabble.com.

I noticed one comment sent to you mentioned that their
bottleneck issue dealt with locks.  We had a similar
issue with an MS SQL Server at my last company.  I
studied the issue a bit, and the problem was that we
were doing multiple inserts, updates, and deletes in a
transaction using multiple JDBC executeUpdate() calls.


What I did to speed this up was to create a SQL buffer
which would stack multiple insert, update, and delete
statements into a single string, which would get sent
to the backend (database) server at once instead of
each statement going over individually.  The only
thing you lose is the update count returned by JDBC,
but what you gain is a HUGE performance improvement,
because it allows you to shorten the overall
transaction and significantly reduce the lock time.
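
To illustrate the idea, here is a minimal sketch (the
table and values are made up, and it assumes your
database/driver accepts multiple newline-separated
statements in one call, which varies by database as I
mention below):

//before: one round trip (and lock window) per statement
s.executeUpdate("INSERT INTO log VALUES (1, 'a')");
s.executeUpdate("INSERT INTO log VALUES (2, 'b')");

//after: both statements go to the server in one call
s.executeUpdate(
   "INSERT INTO log VALUES (1, 'a')\n" +
   "INSERT INTO log VALUES (2, 'b')");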

We had things which were running 30 minutes or so
before we made the buffer changes.  After adding this
buffering mechanism to our DAO (Data Access
Objects....they were Plain Old Java Objects), the runs
of an individual user's updates took between 30-60
seconds (half a minute through one minute).  This was
their overall process time to perform all of the
updates the software was making to their data.

So, our difference was better than we hoped for.  30
minutes vs 30-60 seconds is a performance gain to me
;-).  We had something like 1,000-100,000 inserts and
updates at a time.  What I did was come up with a
piece that would run multiple large SQL statements.
Reason: different databases allow different buffer
sizes for the SQL which can be run at one time.  With
MS SQL Server I limited the overall calling to around
1000 statements at a time.  You'll have to figure out
how many statements can be included at one time with
Oracle.  Basically:

buffer.addUpdate("INSERT......yada yada yada....");
buffer.addUpdate("INSERT......yada yada yada....");
buffer.addUpdate("UPDATE......yada yada yada....");
buffer.addUpdate("UPDATE......yada yada yada....");
buffer.addUpdate("UPDATE......yada yada yada....");
buffer.addUpdate("DELETE......yada yada yada....");
...etc

buffer.update();//runs the buffered SQL statements.

Every time addUpdate had been called "limit" number of
times, I added the current List to a Vector holding
the batches of statements to be run when update was
issued.  So, basically, update did:

//you've already gotten a Statement object s
for(int i = 0; i < vector.size(); i++){
   List l = (List)vector.get(i);
   StringBuffer sql = new StringBuffer();
   for(int x = 0; x < l.size(); x++){
      String statement = (String)l.get(x);
      sql.append(statement);
      sql.append('\n');
   }
   //remember, since you have multiple statements here,
   //the last statement is the only one the return
   //value mentions...i.e. the update count of the
   //last statement in the buffer
   s.executeUpdate(sql.toString());
}
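
For completeness, the addUpdate side could look
something like this (a sketch; the field names are
made up, and update would also need to flush the
partially filled current list before running the
batches):

//fields of the buffer class (java.util assumed)
private int limit = 1000; //per-database statement cap
private List current = new ArrayList();
private Vector vector = new Vector(); //filled batches

public void addUpdate(String statement){
   current.add(statement);
   if(current.size() == limit){
      //batch is full...store it and start a new one
      vector.add(current);
      current = new ArrayList();
   }
}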

Anyways, I've used this scheme with MySQL, MS SQL,
Firebird, Sybase, and HSQLDB, and it worked equally
well with all of them.  You don't have to use it for
everything, but for very large DB transactions it
seems to be the best way to get around locks, hangs,
and other performance issues.  So, maybe you could
locate your main long-running transactions and replace
the code there with this type of scheme to see how it
helps.  Granted, this may not even be your issue, as
it depends on what your software is doing, but if you
are performing many (hundreds, thousands, or hundreds
of thousands of) updates in a single transaction and
using multiple executeUpdate calls from JDBC to do
these updates, or using EJB or Hibernate or something
else like them, then I'm guessing this is going to be
a large part of your problem if not "the" problem.  MS
SQL Server hates long-running transactions (not sure
about Oracle, but I'm guessing it's the same...just
the way locks work), and its PIDs would show processes
locking up, dropping off the map, never coming back,
and sometimes leaving locks held indefinitely or for
hours at a time.
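
As a side note: if you'd rather not build the strings
yourself, the standard JDBC batch API expresses the
same idea, and it even preserves the per-statement
update counts (though whether the driver really sends
everything in one round trip is driver-dependent):

Statement st = con.createStatement();
st.addBatch("INSERT......yada yada yada....");
st.addBatch("UPDATE......yada yada yada....");
int[] counts = st.executeBatch(); //count per statement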

Basically, once you are done implementing this
buffering, the only real JDBC code you'll end up with
is getting the Connection and Statement and calling
update on your buffer, which runs the buffered
statements.

Connection con = ...
con.setAutoCommit(false); //...the usual
Statement s = ...

buffer.update(); //errors can still be thrown, so you
                 //could roll back on errors....

con.commit();
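
If you want the rollback spelled out, it's something
like this (a sketch):

try{
   buffer.update(); //runs the buffered SQL statements
   con.commit();
}catch(SQLException e){
   con.rollback(); //undo the whole transaction
   throw e;
}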

Anyways, that's a little vague, but you should be able
to get the point.  I've done it and it works, and it
was the only way we were able to support our number of
users with the educational software we had created.
Before that, database updates were taking way too
long, which greatly reduced the number of users we
could support, and we were getting all kinds of
complaints.

Now it's running fine and supporting more and more
users with no real ceiling reached as of yet, and that
was the only change we implemented once we knew the
database was our bottleneck.  We had to make some
changes in our overall code obviously, but it was
pretty easy, as we had already implemented a set of
objects responsible for database access (the norm)
instead of having that code scattered across other
objects.  We called this layer the DAO (Data Access
Objects) layer; different people call it other things.

You'll know best whether this would help you or not,
but don't let the need to change a little code deter
you.  Trying to get around database bottlenecks with
more clustering and hardware schemes, before you know
what the real issue is, can cost you hundreds of
thousands if not millions; being a little inventive is
cheaper.  My guess: the person who told you his issue
was locks is giving you the best advice.

Hope some of it helps,

Wade

