Rendra,

At the risk of getting sucked into the insanity...

On 4/8/2010 7:19 AM, Cin Lung wrote:
> It's running 32 Bit windows 2003 only With 8GB Ram.

32-bit Microsoft Windows can access 8GiB of physical RAM (much more, in
fact, on editions that support PAE), but each process is still limited
to a 4GiB virtual address space, and in practice Windows will never give
a single process anywhere near that much. If you want to give Tomcat
more than roughly 1800MiB of heap, you'll have to move to 64-bit
Microsoft Windows and a 64-bit JVM.
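
For what it's worth, that ceiling shows up wherever you configure the
JVM's maximum heap. Just as a rough illustration (numbers invented), it
would be a line like this in CATALINA_BASE\bin\setenv.bat, or the
equivalent -Xmx value in the service wrapper's Java options if you run
Tomcat as a Windows service:

   set CATALINA_OPTS=-Xms512m -Xmx1536m

Ask a 32-bit JVM for much more than that and it will usually refuse to
start at all, complaining that it could not reserve enough space for
the object heap.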

> I am merely trying to find a way out and I have exhausted my
> resources to make the software as fast as possible.

You didn't tell us what measures you've already tried.

> By the way the number of data that is being processed by the heavy app is in
> millions of rows. I ran the SQL directly to the mysql server and it worked
> ok (within minutes and not freezing the server).

That suggests one of a few things:

1. Remoteness (that is, your webapp not running on the same machine as
   the MySQL server) carries a penalty
2. Your webapp isn't building the query you think it is
3. Your webapp is doing much more with the results than simply listing
   them

Let's examine those possibilities one at a time, shall we?

First, if remoteness is incurring a penalty, it's generally because
you're transmitting a lot of data. Are you transmitting millions of rows
from the server to the client? Why? Is this something that can be done
in a stored procedure on the server, or even on a separate, server-only
process? IIRC Connector/J, in its default configuration, downloads the
entire result set before returning control to your code. That means that
if you are selecting millions of rows, you have to wait for them all to
go from the server to your webapp, and they all take up a bunch of
memory while you're working with them. Have you verified this is not
happening to you?
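
If that's what's happening, Connector/J has to be told explicitly to
stream. A rough, untested sketch in plain JDBC (java.sql.* imports
assumed, "conn" being an already-open Connection, and the column/table
names invented):

   Statement stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                         ResultSet.CONCUR_READ_ONLY);
   stmt.setFetchSize(Integer.MIN_VALUE); // Connector/J's cue to stream rows
   ResultSet rs = stmt.executeQuery("SELECT a, b, c FROM huge_table");
   try {
       while (rs.next()) {
           // handle one row at a time; only the current row is in memory
       }
   } finally {
       rs.close();
       stmt.close();
   }

Even with streaming you still pay the network cost of moving millions
of rows, so the stored-procedure / server-side option is worth a hard
look.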

Second, have you dumped out the query your webapp is actually building,
to make sure it's the one you think is running?
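
Something as crude as this, logged right before execution, usually
settles it (the names here are invented; use whatever logger your app
already has):

   String sql = buildReportQuery(request);   // however your code assembles it
   log.info("about to execute: " + sql);
   ResultSet rs = stmt.executeQuery(sql);

Then paste the logged statement into the mysql command-line client and
compare the timing with what you see through the webapp.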

Third, is your webapp doing anything else with these rows beyond, say,
a simple examination of them? I can't imagine why a web application
would need to fetch millions of rows at once. I can understand fetching
a small slice of those millions of rows (e.g. SELECT a,b,c FROM
huge_table LIMIT 10000,100) and displaying that page to the user, but
never actually transferring that many rows to the webapp in one go.
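
In JDBC terms that sort of paging looks something like this (again only
a sketch with invented names, "conn" being an open Connection and
"offset" coming from a request parameter):

   PreparedStatement ps = conn.prepareStatement(
           "SELECT a, b, c FROM huge_table ORDER BY a LIMIT ?, ?");
   ps.setInt(1, offset);  // e.g. 10000: where this page starts
   ps.setInt(2, 100);     // page size actually shown to the user
   ResultSet rs = ps.executeQuery();
   try {
       while (rs.next()) {
           // render just these 100 rows
       }
   } finally {
       rs.close();
       ps.close();
   }

That keeps both the transfer and the webapp's memory use proportional
to one page rather than to the whole table.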

-chris
