> POE sounds good to start with, right?
For a daemon to monitor a queue? Probably not. I'd just write a simple
script that polls a database table and forks a new process to handle any
jobs it finds. Parallel::ForkManager is handy for this kind of thing.
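A minimal sketch of that approach, assuming a hypothetical `jobs` table with `id` and `status` columns; the DSN, credentials, and `handle_job` are placeholders:

```perl
#!/usr/bin/perl
# Queue-polling daemon sketch using Parallel::ForkManager.
# Table name, DSN, credentials, and handle_job() are hypothetical.
use strict;
use warnings;
use DBI;
use Parallel::ForkManager;

my $pm = Parallel::ForkManager->new(5);    # at most 5 workers at once

while (1) {
    my $dbh  = DBI->connect('dbi:mysql:queue', 'user', 'pass',
                            { RaiseError => 1 });
    my $jobs = $dbh->selectall_arrayref(
        q{SELECT id FROM jobs WHERE status = 'pending'});
    $dbh->{InactiveDestroy} = 1;    # don't let children close this handle

    for my $job (@$jobs) {
        $pm->start and next;        # parent: move on to the next job
        # Child: open its own connection; DBI handles don't survive fork.
        my $child_dbh = DBI->connect('dbi:mysql:queue', 'user', 'pass',
                                     { RaiseError => 1 });
        handle_job($child_dbh, $job->[0]);   # placeholder for real work
        $child_dbh->do(q{UPDATE jobs SET status = 'done' WHERE id = ?},
                       undef, $job->[0]);
        $pm->finish;
    }
    $pm->wait_all_children;
    sleep 10;                       # poll interval
}
```

The key point of the design is that each forked child gets its own database handle; sharing the parent's handle across a fork is a classic source of "server has gone away" errors.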
Yes. POE looks like overkill compared to Parallel::ForkManager.
My app is an online hotel booking system. The slow part is where it
searches availabilities through all combinations of dates, numbers of
guests, rooms, packages, allotments, pricing structures, and more.
This doesn't sound like a "search" at all. Typically individuals know at
least SOMETHING...
Badai Aqrandista wrote:
...Your requests take 110 seconds each? What is your application doing?
My app is an online hotel booking system. The slow part is where
it searches availabilities through all combinations of dates, numbers of
guests, rooms, packages, allotments, pricing structures, and more.
On Wed, 2005-08-31 at 14:13 +1000, Badai Aqrandista wrote:
> Lots of SQL queries and calculation based on the query results. I've cached
> most of the query results, but the calculation results can't be cached
> because they depend on user inputs.
If this is serious number-crunching, it might be
On 30 Aug 2005, at 10:16, Badai Aqrandista wrote:
I used to use A::S::MySQL, but it created 2 connections for every
process. This caused a 'Too many connections' error. So I tried to
use memcached. Now that I know how to make one connection per
process (using database.table identifier on al
Okay, so it's slow because it does a lot of work. How does it do this
search? Is it a bunch of SQL queries, or a request to a remote server of
some other kind, or an internal calculation? Is there some possible
caching you can do?
Lots of SQL queries and calculation based on the query results. I've cached
most of the query results, but the calculation results can't be cached
because they depend on user inputs.
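Even when results depend on user inputs, identical inputs tend to recur, so a cache keyed on the inputs can still help. A minimal pure-Perl sketch of that idea; the rate calculation here is a hypothetical stand-in for the real availability logic:

```perl
use strict;
use warnings;

my %cache;

# Hypothetical stand-in for the expensive availability calculation;
# the real one would run SQL queries and apply pricing rules.
sub calc_rate {
    my ($room, $nights, $guests) = @_;
    return $nights * $guests * 100;
}

# Memoizing wrapper: identical inputs hit the cache, not the calculation.
sub cached_rate {
    my @args = @_;
    my $key  = join "\0", @args;   # "\0" separator avoids ambiguous keys
    $cache{$key} = calc_rate(@args) unless exists $cache{$key};
    return $cache{$key};
}

print cached_rate('standard', 2, 2), "\n";   # computed: 400
print cached_rate('standard', 2, 2), "\n";   # served from cache: 400
```

In a multi-process Apache setup the `%cache` hash is per-child; sharing it across children is where memcached or Cache::FastMmap would come in.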
Badai Aqrandista wrote:
My app is an online hotel booking system. The slow part is where it
searches availabilities through all combinations of dates, numbers of
guests, rooms, packages, allotments, pricing structures, and more.
Okay, so it's slow because it does a lot of work. How does it do this
search?
You *do* have KeepAlive off in your httpd, right?
No...
I mean in the backend Apache, not the frontend whatever.
Yes, I understand... I put it in the backend...
When you're hammering your server, is the CPU on the server
running at or near 100%? If not, you have other problems.
Almost
Hi,
Rewriting things in C is a last resort. If you must do it, try
Inline::C.
Yes... Inline::C looks good...
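For reference, the shape Inline::C usage takes: a toy hot function in C, callable from Perl with no XS boilerplate. The function here is illustrative, not the poster's actual code:

```perl
use strict;
use warnings;
use Inline C => <<'END_C';
/* Toy stand-in for a hot inner loop rewritten in C. */
int sum_to(int n) {
    int i, total = 0;
    for (i = 1; i <= n; i++)
        total += i;
    return total;
}
END_C

print sum_to(10), "\n";   # prints 55
```

Inline::C compiles the C on first run and caches the resulting shared library, so only the hot subroutine needs porting while the rest of the application stays in Perl.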
Your requests take 110 seconds each? What is your application doing?
My app is an online hotel booking system. The slow part is where it
searches availabilities through all combinations of dates, numbers of
guests, rooms, packages, allotments, pricing structures, and more.
On 30 Aug 2005, at 01:56, Badai Aqrandista wrote:
You *do* have KeepAlive off in your httpd, right?
No...
I mean in the backend Apache, not the frontend whatever.
When you're hammering your server, is the CPU on the server
running at or near 100%? If not, you have other problems.
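For reference, turning keepalives off on the backend is a one-line httpd.conf change; the idea is that a lightweight frontend proxy holds client connections open instead of a heavy mod_perl child:

```apache
# Backend mod_perl httpd.conf: release the heavy mod_perl child
# as soon as the response is sent, rather than holding it idle
# between requests on a kept-alive connection.
KeepAlive Off
```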
On Tue, 2005-08-30 at 14:25 +1000, Badai Aqrandista wrote:
> I have localized one subroutine that has been heavily called in the search
> function. That should be the way to pick the candidate, shouldn't it?
What usually matters when working on speed is where the most wall clock
time is being spent...
Hi,
I see from an earlier post on the mason-users list that your app is
using HTML::Mason and Apache::Session::Memcached. It seems like you've
got quite a few variables to juggle in your performance bottleneck
analysis.
Actually, I am using TT and Mason in this application. TT is used to supp
Quoting Badai Aqrandista <[EMAIL PROTECTED]>:
> > Then how do you know what to write in C?
>
> I have localized one subroutine that has been heavily called in the
> search function. That should be the way to pick the candidate,
> shouldn't it?
>
> I have never done this and I am worried that writing it in C would
> push the project's deadline a little...
Then how do you know what to write in C?
I have localized one subroutine that has been heavily called in the search
function. That should be the way to pick the candidate, shouldn't it?
I have never done this and I am worried that writing it in C would push the
project's deadline a little...
On Aug 29, 2005, at 5:11 PM, David Hodgkinson wrote:
You *do* have KeepAlive off in your httpd, right?
That is one of the great things about perlbal[1]. You can support
KeepAlive without using more resources.
- ask
[1] http://www.danga.com/perlbal/
--
http://www.askbjoernhansen.com/
On 29 Aug 2005, at 01:15, Badai Aqrandista wrote:
I think I have to write some of the code in C. I can't find any
other places in the code to optimize (or probably I unconsciously
don't want to make changes because I don't have any test suites)...
Then how do you know what to write in C?
top - 17:24:27 up 34 days, 9:01, 4 users, load average: 20.67, 12.84, 9.26
Tasks: 142 total, 7 running, 135 sleeping, 0 stopped, 0 zombie
Cpu(s): 88.7% us, 7.6% sy, 0.0% ni, 0.0% id, 2.0% wa, 0.0% hi, 1.7% si
Mem: 906736k total, 359464k used, 547272k free, 6184k
On Aug 24, 2005, at 0:49, Badai Aqrandista wrote:
I have put a reverse proxy in front of my mod_perl servers and I
have set MaxClients to 30. I have tried setting it to 50, but it
slows down the response time.
This is what top gave me when I hammered the test server with httperf:
On Thu, 2005-08-25 at 17:46 +1000, Badai Aqrandista wrote:
> To reduce memory consumption -> look for ways to do more things at compile
> time
Also avoid loading large things into memory all at once (e.g. slurping
files), pass references instead of copies, use the disk more, etc.
> To reduce CPU...
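The two suggestions above are cheap to apply. A small pure-Perl illustration of both: reading a file line by line instead of slurping it, and passing a reference instead of copying a large list into a subroutine:

```perl
use strict;
use warnings;

# Instead of slurping (my @lines = <$fh>; puts the whole file in
# memory), read and process one line at a time.
sub count_lines {
    my ($path) = @_;
    open my $fh, '<', $path or die "open $path: $!";
    my $n = 0;
    $n++ while <$fh>;
    close $fh;
    return $n;
}

# Instead of receiving a copy of a big list in @_ and copying it
# again into a lexical array, take a reference: no copies are made.
sub total {
    my ($prices_ref) = @_;
    my $sum = 0;
    $sum += $_ for @$prices_ref;
    return $sum;
}

my @prices = (100, 150, 200);
print total(\@prices), "\n";   # prints 450
```

Under mod_perl this matters twice over: copies cost memory in every Apache child, and written-to pages stop being shared between children.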
If the performance you get is not enough for you, look for the
bottleneck. This could be running out of CPU, running out of memory, or
contention for a shared resource like a database. Then you fix it, by
changing code or buying hardware.
I think I've got the point now:
To reduce memory consumption -> look for ways to do more things at compile
time
Hi all,
If the performance you get is not enough for you, look for the
bottleneck. This could be running out of CPU, running out of memory, or
contention for a shared resource like a database. Then you fix it, by
changing code or buying hardware.
I'll try that...
Thanks for helping me out..
On Wed, 2005-08-24 at 17:49 +1000, Badai Aqrandista wrote:
> I have put a reverse proxy in front of my mod_perl servers and I have set
> MaxClients to 30. I have tried setting it to 50, but it slows down the
> response time.
Are you running out of memory when you set it to 50? That's what you
ar
On Wed, 2005-08-24 at 07:02 -0400, Sean Davis wrote:
> As an aside, are there rules of thumb about what cache works best in various
> situations?
Cache::FastMmap and BerkeleyDB are the fastest by far, but they are
local to one machine. Cache::Memcached or a simple key/value table in a
MySQL server...
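As a sketch of the single-machine option named above (the share file path, expiry time, and cached data are arbitrary placeholders):

```perl
use strict;
use warnings;
use Cache::FastMmap;

# Mmap'ed cache file shared by all Apache children on one machine;
# share_file and expire_time here are illustrative values.
my $cache = Cache::FastMmap->new(
    share_file  => '/tmp/app-cache',
    expire_time => '10m',
);

# Values are serialized transparently, so structured data works.
$cache->set('room:101:2005-08-30', { available => 1, rate => 120 });
my $info = $cache->get('room:101:2005-08-30');
```

Because the cache lives in a memory-mapped file rather than per-process memory, every child sees the same data without any daemon or network round trip, which is why it benchmarks faster than memcached on a single box.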
On 8/23/05 10:04 PM, "Perrin Harkins" <[EMAIL PROTECTED]> wrote:
> On Wed, 2005-08-24 at 10:31 +1000, Badai Aqrandista wrote:
>> Anyway, to fix this, I'm trying to make my own shared memory with
>> Apache::SharedMem.
>
> You can share read-only data by loading it into normal variables during
> startup...
> Does this sound like fixing the wrong problem?
Yes. Put a reverse proxy in front of your server, tune MaxClients so
you won't go into swap, and then benchmark to see how much load you can
handle. Then think about tuning.
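A common shape for that setup, assuming a lightweight frontend Apache proxying to a mod_perl backend on port 8080 (paths and ports are placeholders):

```apache
# Frontend (lightweight Apache): serve static files itself,
# proxy dynamic requests to the mod_perl backend.
ProxyPass        /app http://127.0.0.1:8080/app
ProxyPassReverse /app http://127.0.0.1:8080/app

# Backend (mod_perl Apache): cap the number of children so that
# MaxClients x per-child memory stays below physical RAM and the
# box never goes into swap.
MaxClients 30
```

The right MaxClients value comes from measurement: check per-child resident size under load, divide available RAM by it, and benchmark from there.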
Thanks for replying...
I have put a reverse procy in front of my mo
On Wed, 2005-08-24 at 10:31 +1000, Badai Aqrandista wrote:
> Anyway, to fix this, I'm trying to make my own shared memory with
> Apache::SharedMem.
Don't use that module. It's very inefficient.
> But it seems that shared memory is just a memory area
> that any process can read from or write to
On Tue, 2005-08-23 at 17:23 +1000, Badai Aqrandista wrote:
> How do I maintain the size of the shared memory between apache children?
> What causes a memory page to be copied (not shared) from perl's point of
> view?
Anything that writes to memory -- modifying any variable (even just
reading one i
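In practice, taking advantage of copy-on-write means loading read-only data once in the Apache parent, before the fork, e.g. from a startup.pl, and never writing to it in the children. The module name and data below are illustrative:

```perl
# startup.pl -- loaded once by the Apache parent before forking.
# Everything built here starts out in pages shared by every child
# via copy-on-write, and stays shared as long as no child writes
# to it (in Perl, even some reads can dirty a page, e.g. when a
# string is numified in place).
package MyApp::Config;
use strict;
use warnings;

our %ROOM_TYPES = (
    standard => { beds => 1, base_rate => 100 },
    deluxe   => { beds => 2, base_rate => 180 },
);

1;
```

Children then read `%MyApp::Config::ROOM_TYPES` directly; the moment a child assigns into it, the affected pages are copied and that child's memory footprint grows.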