On Tuesday 07 November 2006 17:17, Torsten Foertsch wrote:
> You can directly look at /proc/<pid>/smaps or use Linux::Smaps. I
> don't know what GTop does but I know how /proc/<pid>/smaps works.
> Also, don't do it too often. Reading from /proc/<pid>/smaps is really
> expensive, particularly for large process sizes
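Torsten's suggestion can be sketched like this (assumes Linux and the Linux::Smaps module from CPAN; run it against a pid your user can read):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Linux::Smaps;   # CPAN; parses /proc/<pid>/smaps

my $pid = shift || $$;           # default: this process
my $map = Linux::Smaps->new($pid)
    or die "cannot read /proc/$pid/smaps: $!";

# all sizes are reported in kB
printf "size:    %d kB\n", $map->size;
printf "shared:  %d kB\n", $map->shared_clean + $map->shared_dirty;
printf "private: %d kB\n", $map->private_clean + $map->private_dirty;
```

Because this walks every mapping of the process, it gets slower as the process grows, which is why it shouldn't run on every request.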
On Nov 7, 2006, at 1:03 PM, Perrin Harkins wrote:
I think some of these are a little over-zealous, Jonathan.
oh, they're completely off the hook. but they work.
BTW, none of my approaches were premature optimization. They were
all based on profiling code via Apache::Status to get better
On Tue, 2006-11-07 at 11:35 -0500, Jonathan Vanasco wrote:
> some tips:
I think some of these are a little over-zealous, Jonathan.
> but things like File::Find eat up a TON of memory ( 2+
> mb ), and
> most functionality can be coded in 10-15 minutes. ( vs 2 minutes if
On Nov 7, 2006, at 6:57 AM, Kjetil Kjernsmo wrote:
Exactly what numbers are you reading out of top, then?
Run your app on a clean box -- i.e., don't run anything other than
apache/mp (if you can toss the db onto another server, do it),
and turn off everything else.
Reboot your mac.
I hadn't read up on that, because I thought
> Apache2::Resource did the same thing, but yeah, I think that is
> something we want to look into. Also, you're saying that the
> $Apache2::SizeLimit::HOW_BIG_IS_IT is a better way to assess the shared
> memory usage than GTop?
up on that, because I thought
Apache2::Resource did the same thing, but yeah, I think that is
something we want to look into. Also, you're saying that the
$Apache2::SizeLimit::HOW_BIG_IS_IT is a better way to assess the shared
memory usage than GTop?
--
Kjetil Kjernsmo
Information Systems Developer
Opera Software ASA
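For reference, limiting on unshared size with Apache2::SizeLimit looks roughly like this (a sketch; the 12 MB threshold is an arbitrary example, and values are given in KB):

```perl
# in startup.pl (mod_perl 2)
use Apache2::SizeLimit;

# kill a child once its *unshared* memory exceeds ~12 MB (value in KB)
Apache2::SizeLimit->set_max_unshared_size(12 * 1024);

# in httpd.conf, arrange for the check to run after each request:
#   PerlCleanupHandler Apache2::SizeLimit
1;
```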
> on my app, which i've extensively profiled, i've got about 110MB of
> shared memory, and each child consumes 10-20 mb of unshared memory
Wow, that's very nice.
100-500 requests on each child to test the process growth.
which version of apache are you using? are you using threads?
on my app, which i've extensively profiled, i've got about 110MB of
shared memory, and each child consumes 10-20 mb of unshared memory
On Monday 06 November 2006 17:48, Kjetil Kjernsmo wrote:
> I'm working on improving the scalability and performance of our app. It
> is rather bad, and I noticed that not many minutes after restart, the
> amount of shared memory is down to 10% of the total memory (and I'm at
Hi all!
I'm working on improving the scalability and performance of our app. It
is rather bad, and I noticed that not many minutes after restart, the
amount of shared memory is down to 10% of the total memory (and I'm at
least at 1/3 on my home Axkit::App::TABOO), and besides, it i
On Tue, 2006-04-18 at 14:23 -0400, Osmani Gomez wrote:
> The interesting thing is there is another server with the same OS,
> pretty same config with the same code, but apache was installed there
> some time ago, I do not know the compile parameter. This one show a high
> shared mem
Hello List,
I have a weird problem, related to shared memory.
I am using
OS: RedHat 7.3
Apache 1.3.34
mod_perl 1.29
the compile options were the standard:
DO_HTTPD=1 USE_APACI=1 EVERYTHING=1
for apache configure: activate-module=src/modules/perl/libperl.a
--enable-shared=perl ...
perl
Any advice on memory use on Solaris is appreciated
Take a look at:
http://people.ee.ethz.ch/~oetiker/tobjour/2003-02-12-09-00-f-1.html
There's some useful information, including a chunk covering what you're
after. Specifically try:
pmap -x PID
the private/anon column shows the am
modules I use, and
>
> have loaded the specific DB driver etc. I'm not using CGI, and
> only one custom module is installed (which in turn uses the CPAN
> modules in the startup file)
>
> The problem I have is the top and ps commands don't give the shared
>
Only one custom module is installed (which in turn uses the CPAN modules in the startup file)
The problem I have is the top and ps commands don't give the shared memory size. However, if I issue
ipcs -m
there are no shared memory segments. Should mod_perl make use of them? Likewise, when I
> POE sounds good to start with, right?
For a daemon to monitor a queue? Probably not. I'd just write a simple
script that polls a database table and forks a new process to handle any
jobs it finds. Parallel::ForkManager is handy for this kind of thing.
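A sketch of that pattern (the table name, columns, DSN and the handle_job() helper are all hypothetical):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;
use Parallel::ForkManager;

sub handle_job {
    my ($dbh, $id, $payload) = @_;
    # ... real work goes here ...
    $dbh->do(q{UPDATE jobs SET status = 'done' WHERE id = ?}, undef, $id);
}

my $dsn = 'dbi:mysql:queue';
my $dbh = DBI->connect($dsn, 'user', 'pass', { RaiseError => 1 });
my $pm  = Parallel::ForkManager->new(5);   # at most 5 concurrent workers

while (1) {
    my $jobs = $dbh->selectall_arrayref(
        q{SELECT id, payload FROM jobs WHERE status = 'new'});
    for my $job (@$jobs) {
        my ($id, $payload) = @$job;
        $dbh->do(q{UPDATE jobs SET status = 'running' WHERE id = ?},
                 undef, $id);
        $pm->start and next;   # parent continues the loop
        # child: open a fresh handle; never reuse the parent's after fork
        my $child_dbh = DBI->connect($dsn, 'user', 'pass',
                                     { RaiseError => 1 });
        handle_job($child_dbh, $id, $payload);
        $child_dbh->disconnect;
        $pm->finish;
    }
    $pm->wait_all_children;
    sleep 5;   # poll interval
}
```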
Yes. POE looks overkill compared to P
My app is an online hotel booking system. That part is the part where it
searches availabilities through all combination of dates, number of
guests, rooms, packages, allotments, pricing structures, and more.
This doesn't sound like a "search" at all. Typically individuals know at
least SOMETHI
Badai Aqrandista wrote:
...Your requests take 110 seconds each? What is your application doing?
My app is an online hotel booking system. That part is the part where
it searches availabilities through all combination of dates, number of
guests, rooms, packages, allotments, pricing structur
On Wed, 2005-08-31 at 14:13 +1000, Badai Aqrandista wrote:
> Lots of SQL queries and calculation based on the query results. I've cached
> most of the query results, but the calculation results can't be cached
> because they depend on user inputs.
If this is serious number-crunching, it might be
On 30 Aug 2005, at 10:16, Badai Aqrandista wrote:
I used to use A::S::MySQL, but it created 2 connections for every
process. This caused 'Too many connections' error. So I tried to
use memcached. Now that I know how to make one connection per
process (using database.table identifier on al
Okay, so it's slow because it does a lot of work. How does it do this
search? Is it a bunch of SQL queries, or a request to a remote server of
some other kind, or an internal calculation? Is there some possible
caching you can do?
Lots of SQL queries and calculation based on the query resul
Badai Aqrandista wrote:
My app is an online hotel booking system. That part is the part where it
searches availabilities through all combination of dates, number of
guests, rooms, packages, allotments, pricing structures, and more.
Okay, so it's slow because it does a lot of work. How does it
You *do* have KeepAlive off in your httpd, right?
No...
I mean in the backend Apache, not the frontend whatever.
Yes, I understand... I put it in the backend...
When you're hammering your server, is the CPU on the server
running at or near 100%? If not, you have other problems.
Almost
Hi,
Rewriting things in C is a last resort. If you must do it, try
Inline::C.
Yes... Inline::C looks good...
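A minimal Inline::C sketch, to show how little ceremony is involved (the function here is a trivial stand-in for the real hot subroutine, and a C compiler must be available at run time):

```perl
use strict;
use warnings;
use Inline C => <<'END_C';
int add_prices(int a, int b) {
    return a + b;
}
END_C

print add_prices(40, 2), "\n";   # prints 42
```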
Your requests take 110 seconds each? What is your application doing?
My app is an online hotel booking system. That part is the part where it
searches availabilities through all com
On 30 Aug 2005, at 01:56, Badai Aqrandista wrote:
You *do* have KeepAlive off in your httpd, right?
No...
I mean in the backend Apache, not the frontend whatever.
When you're hammering your server, is the CPU on the server
running at or near 100%? If not, you have other problems.
On Tue, 2005-08-30 at 14:25 +1000, Badai Aqrandista wrote:
> I have localized one subroutine that has been heavily called in the search
> function. That should be the way to pick the candidate, shouldn't it?
What usually matters when working on speed is where the most wall clock
time is being spe
Hi,
I see from an earlier post on the mason-users list that your app is
using HTML::Mason and Apache::Session::Memcached. It seems like you've
got quite a few variables to juggle in your performance bottleneck
analysis.
Actually, I am using TT and Mason in this application. TT is used to supp
Quoting Badai Aqrandista <[EMAIL PROTECTED]>:
>
> >
> >Then how do you know what to write in C?
> >
>
> I have localized one subroutine that has been heavily called in the
> search
> function. That should be the way to pick the candidate, shouldn't
> it?
>
> I have never done this and I am wo
Then how do you know what to write in C?
I have localized one subroutine that has been heavily called in the search
function. That should be the way to pick the candidate, shouldn't it?
I have never done this and I am worried that writing it in C would push the
project's deadline a littl
On Aug 29, 2005, at 5:11 PM, David Hodgkinson wrote:
You *do* have KeepAlive off in your httpd, right?
That is one of the great things about perlbal[1]. You can support
KeepAlive without using more resources.
- ask
[1] http://www.danga.com/perlbal/
--
http://www.askbjoernhansen.com/
On 29 Aug 2005, at 01:15, Badai Aqrandista wrote:
I think I have to write some of the code in C. I can't find any
other places in the code to optimize (or probably I unconsciously
don't want to make changes because I don't have any test suites)...
Then how do you know what to write in C?
On 29 Aug 2005, at 01:15, Badai Aqrandista wrote:
I think I have to write some of the code in C. I can't find any other
places in the code to optimize (or probably I unconsciously don't want to
make changes because I don't have any test suites)...
Then how do you know what to write in C?
top - 17:24:27 up 34 days, 9:01, 4 users, load average: 20.67, 12.84, 9.26
Tasks: 142 total, 7 running, 135 sleeping, 0 stopped, 0 zombie
Cpu(s): 88.7% us, 7.6% sy, 0.0% ni, 0.0% id, 2.0% wa, 0.0% hi, 1.7% si
Mem: 906736k total, 359464k used, 547272k free, 6184k
On Aug 24, 2005, at 0:49, Badai Aqrandista wrote:
I have put a reverse proxy in front of my mod_perl servers and I
have set MaxClient to 30. I have tried setting it to 50, but it
slows down the response time.
This is what top gave me when I hammered the test server with httperf:
-
On Thu, 2005-08-25 at 17:46 +1000, Badai Aqrandista wrote:
> To reduce memory consumption -> look for ways to do more things on compile
> time
Also avoid loading large things into memory all at once (e.g. slurping
files), pass references instead of copies, use the disk more, etc.
> To reduce CP
If the performance you get is not enough for you, look for the
bottleneck. This could be running out of CPU, running out of memory, or
contention for a shared resource like a database. Then you fix it, by
changing code or buying hardware.
I think I've got the point now:
To reduce memory con
Hi all,
If the performance you get is not enough for you, look for the
bottleneck. This could be running out of CPU, running out of memory, or
contention for a shared resource like a database. Then you fix it, by
changing code or buying hardware.
I'll try that...
Thanks for helping me out..
On Wed, 2005-08-24 at 17:49 +1000, Badai Aqrandista wrote:
> I have put a reverse proxy in front of my mod_perl servers and I have set
> MaxClient to 30. I have tried setting it to 50, but it slows down the
> response time.
Are you running out of memory when you set it to 50? That's what you
ar
On Wed, 2005-08-24 at 07:02 -0400, Sean Davis wrote:
> As an aside, are there rules of thumb about what cache works best in various
> situations?
Cache::FastMmap and BerkeleyDB are the fastest by far, but they are
local to one machine. Cache::Memcached or a simple key/value table in a
MySQL serve
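A Cache::Memcached sketch of that idea (the key format and run_expensive_query() are hypothetical):

```perl
use strict;
use warnings;
use Cache::Memcached;

sub run_expensive_query { return { rooms_free => 3 } }   # placeholder

my $cache = Cache::Memcached->new({ servers => ['127.0.0.1:11211'] });

my $key  = 'avail:2005-08-30:guests=2';
my $data = $cache->get($key);
unless ($data) {
    $data = run_expensive_query();
    $cache->set($key, $data, 600);   # shared by all children for 10 min
}
```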
On 8/23/05 10:04 PM, "Perrin Harkins" <[EMAIL PROTECTED]> wrote:
> On Wed, 2005-08-24 at 10:31 +1000, Badai Aqrandista wrote:
>> Anyway, to fix this, I'm trying to make my own shared memory with
>> Apache::SharedMem.
>
> You can share read-only data
> Does this sound like fixing the wrong problem?
Yes. Put a reverse proxy in front of your server, tune MaxClients so
you won't go into swap, and then benchmark to see how much load you can
handle. Then think about tuning.
Thanks for replying...
I have put a reverse proxy in front of my mo
On Wed, 2005-08-24 at 10:31 +1000, Badai Aqrandista wrote:
> Anyway, to fix this, I'm trying to make my own shared memory with
> Apache::SharedMem.
Don't use that module. It's very inefficient.
> But it seems that shared memory is just a memory area
> that any pro
On Tue, 2005-08-23 at 17:23 +1000, Badai Aqrandista wrote:
> How do I maintain the size of the shared memory between apache children?
> What causes a memory page to be copied (not shared) from perl's point of
> view?
Anything that writes to memory -- modifying any variable (even ju
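The flip side is maximizing what starts out shared: compile everything heavy in the parent so children begin life with those pages copy-on-write. A startup.pl sketch (My::App is a stand-in for your own modules):

```perl
# startup.pl -- pulled in once by the parent, e.g.:
#   PerlRequire /etc/httpd/conf/startup.pl
use strict;
use warnings;
use CGI ();       # compile heavyweight modules before the fork...
use DBI ();
use My::App ();   # ...including your own code (hypothetical name)

# Data built here lives in parent pages; children share those pages
# until something writes to one (copy-on-write).
our %COUNTRY = ( au => 'Australia', de => 'Germany' );
1;
```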
I think I have to reword the question:
How do I maintain the size of the shared memory between apache children?
What causes a memory page to be copied (not shared) from perl's point of
view?
This brings the question of how to increase shared memory usage. I've tried
to load eve
Kevin A. McGrail wrote:
http://www.thoughtworthy.com/downloads/Apache-DBI-0.97-KAM-patch
Here's a list of the changes:
Patch June 29, 2005 - Thanks to Kevin A. McGrail (ThoughtWorthy Media,
Inc.)
- Added a feature 'Apache::AuthDBI->setProjID(1)' to set the project ID
for the shared memory key.
On Tue, Mar 15, 2005 at 05:31:28PM -0500, Perrin Harkins wrote:
> BerkeleyDB, MySQL, or SQLite3, but I don't think BDB works in Win32.
Please don't tell my code that :). It's been using the BerkeleyDB module
on a daily basis for nearly a year with few problems. The module is
available as a PPM on
On Wed, 2005-03-16 at 19:30 -0500, jonathan vanasco wrote:
> just wanted to clarify - i said that it was closer to the speed of bdb
> than mysql -- not faster than bdb
In my tests, it's a lot slower than MySQL. I'm doing 50% writes in this
test though, so it may be a poor benchmark.
I'm not try
I just saw perrin's post on perlmonks -- can't remember my login/pass
for there
just wanted to clarify - i said that it was closer to the speed of bdb
than mysql -- not faster than bdb
most of my tests have been with selects though -- i routinely use it as
a way to store/retrieve certain types
perl processes (or threads) running under
> Apache.
It's possible on Unix, using mmap or SysV shared memory segments. I
have an application that uses SysV shared memory and it works great.
I don't know what's possible on Windows. Somebody on perlmonks would
know:
http://www.perlmonks.org/
ScottG.
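On Unix the SysV route looks roughly like this with the core IPC::SysV / IPC::SharedMem modules (a sketch; real code would use a fixed key and some locking):

```perl
use strict;
use warnings;
use IPC::SysV qw(IPC_PRIVATE IPC_CREAT);
use IPC::SharedMem;

# create a private 1 kB segment, read/write for the owner
my $shm = IPC::SharedMem->new(IPC_PRIVATE, 1024, IPC_CREAT | 0600)
    or die "shmget failed: $!";

$shm->write('hello from the parent', 0, 21);   # string, offset, length
my $data = $shm->read(0, 21);                  # offset, length
print "$data\n";

$shm->remove;   # mark the segment for deletion
```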
And we appreciate it. :)
ditto
Perrin Harkins wrote:
I just want to make it very clear to people that they should not fork
with an open db handle and then try to use it.
And we appreciate it. :)
-ofer
On Tue, 2005-03-15 at 19:40 -0500, jonathan vanasco wrote:
> On Mar 15, 2005, at 7:30 PM, Perrin Harkins wrote:
>
> > Don't fork with an open database connection and then try to use it. It
> > will not work.
>
> Really? I figured that it would work just right -- except whatever
> you programm
On Mar 15, 2005, at 7:30 PM, Perrin Harkins wrote:
Don't fork with an open database connection and then try to use it. It
will not work.
Really? I figured that it would work just right -- except whatever
you programmed to use that will crash and burn and time out waiting to
access the single s
On Tue, 2005-03-15 at 18:40 -0500, Jonathan Vanasco wrote:
> If you instantiate a DBI handle before the fork, then that handle is
> shared to all the children. If you instantiate the handle after the
> fork, the handle belongs to only that child , and shared across the
> requests of that child.
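Perrin's warning earlier in the thread, sketched in code: the parent forks first, and each child makes its own connection (the DSN and credentials are placeholders):

```perl
use strict;
use warnings;
use DBI;

my $dsn = 'dbi:mysql:app';   # placeholder DSN

for (1 .. 3) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    next if $pid;            # parent keeps forking
    # child: connect *after* the fork -- an inherited handle shares one
    # socket with the parent, and the server sees interleaved garbage
    my $dbh = DBI->connect($dsn, 'user', 'pass', { RaiseError => 1 });
    # ... child work ...
    $dbh->disconnect;
    exit 0;
}
wait for 1 .. 3;   # reap the children
```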
is an in-process call to a library that
keeps a shared memory cache. In other words, using BDB means using
shared memory.
> On the other hand, mod_perl seems to have some deep interactions with
> the innards of the Apache server, for example direct manipulation of the
> "bucket bri
There's been a lot of work done
by a lot of people on the DBI, so that's probably where you'll get the
most mileage.
You keep saying that you want to share memory, but what you really want
to do is share data. In terms of security, stability, etc. you solve a
lot of problems by avoidin
If you instantiate the handle after the
fork, the handle belongs to only that child , and shared across the
requests of that child. So, if you get a handle, then spawn 12
children, they will each share the single handle. If your code creates
a handle after the fork, then you will have 12 hand
between a number of
Apache2/mod_perl2 request handlers? If yes, how?
BerkeleyDB, MySQL, or SQLite3, but I don't think BDB works in Win32. I
don't think any of the shared memory modules work on Win32 either, and
they tend to be slower than MySQL anyway, for various reasons.
Thank you. Now
to Skylos (and Jonathan before),
Skylos wrote:
memcached has nothing specifically to do with database access speed.
It is a general purpose shared cache utility that can be accessed by
all the systems on a platform independant basis.
[...]
Sorry, I misunderstood the information on the homepage of
Perrin Harkins wrote:
On Tue, 2005-03-15 at 15:05 -0500, Jonathan Vanasco wrote:
i've found sqlite3 to be way faster than mysql for simple
transactions and, while there can be delays from locking, the speed
improvements of querying more than make up for it in the instances I've
tested
Sounds
On Tue, 2005-03-15 at 15:05 -0500, Jonathan Vanasco wrote:
> i've found sqlite3 to be way faster than mysql for simple
> transactions and, while there can be delays from locking, the speed
> improvements of querying more than make up for it in the instances I've
> tested
Sounds good. I'll s
i've found sqlite3 to be way faster than mysql for simple
transactions and, while there can be delays from locking, the speed
improvements of querying more than make up for it in the instances I've
tested
obviously, its not an answer to everything -- but for some situations,
its quite nice
* Perrin Harkins shaped the electrons to say...
On Tue, 2005-03-15 at 14:11 -0500, Jonathan Vanasco wrote:
sqlite3 is closer in speed to bdb than mysql, and offers more of the
'accessibility' that mysql offers.
i find myself using it much more than bdb lately
SQLite2 was pretty slow when I benchma
On Tue, 2005-03-15 at 14:11 -0500, Jonathan Vanasco wrote:
> sqlite3 is closer in speed to bdb than mysql, and offers more of the
> 'accessibility' that mysql offers.
> i find myself using it much more than bdb lately
SQLite2 was pretty slow when I benchmarked it for simple hash-like
usage. MySQ
sqlite3 is closer in speed to bdb than mysql, and offers more of the
'accessibility' that mysql offers.
i find myself using it much more than bdb lately
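The kind of hash-like usage being compared might look like this with DBD::SQLite (the table and key are made up):

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=cache.db', '', '',
                       { RaiseError => 1 });
$dbh->do(q{CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)});

# hash-like "store" and "fetch"
$dbh->do(q{INSERT OR REPLACE INTO kv (k, v) VALUES (?, ?)},
         undef, 'session:42', 'serialized session data');
my ($v) = $dbh->selectrow_array(
    q{SELECT v FROM kv WHERE k = ?}, undef, 'session:42');
print "$v\n";
```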
On Mar 15, 2005, at 1:04 PM, Perrin Harkins wrote:
This is true, but it sounds like Andre wants to replace a bespoke
database structure with so
On Tue, 2005-03-15 at 17:47 +, Martin Moss wrote:
> I haven't used Memchache yet, but it has nothing to do
> with databases
> It CAN be used to prevent database load, OR it could
> be used to prevent too much access to flocking
> files...
This is true, but it sounds like Andre wants to re
--- André Warnier <[EMAIL PROTECTED]> wrote:
> Jonathan Vanasco wrote:
> >
> > look into memcached -- http://danga.com/memcached/
>
> Thank you for the link, I didn't know that module,
> and I'll keep it in
> mind for some other projects.
> But it is slightly off the mark as far as the issue
>
subject, found a lot, but it was a bit confusing...
Environment :
Apache 2 / mod_perl 2 / perl 5.8.4+
Windows and Unix(es)
Apache mod_perl2 Handlers
The basic question is : does there exist a platform-independent way for
a bunch of apache2/mod_perl2 handlers to share information that would be
stored in some
apache2/mod_perl2 handlers to share information that would be
stored in some single shared memory area, assuming part of the
thus-shared information changes from time to time ?
More details :
My application is a document storage and retrieval system, where
documents are stored in some filing structure on
ooks
at first glance -- mainly due to shared copy-on-write
pages (vs. IPC shared memory). The amount of RSS used
by any one of a process group can become runtime dependent
if any of the COW memory gets tickled.
Now add in issues of swapped proc's that may have lazy
evaluation of their COW pa
Suddenly, Perrin Harkins uttered:
On Mon, 2005-02-14 at 18:45 -0500, Stas Bekman wrote:
that approach is not very practical if you change your code base constantly.
Since you will have to retune things every time you change your code.
I know, it's terrible, but it's all I've come up with so far.
Perhap
Perrin Harkins wrote:
On Mon, 2005-02-14 at 18:45 -0500, Stas Bekman wrote:
that approach is not very practical if you change your code base constantly.
Since you will have to retune things every time you change your code.
I know, it's terrible, but it's all I've come up with so far. Maybe we
need t
On Mon, 2005-02-14 at 18:45 -0500, Stas Bekman wrote:
> that approach is not very practical if you change your code base constantly.
> Since you will have to retune things every time you change your code.
I know, it's terrible, but it's all I've come up with so far. Maybe we
need to rethink how the
Perrin Harkins wrote:
[...]
It seems that to tune properly you would need to send in requests with a
benchmarking tool and keep adjusting MaxClients upward while watching
the free memory on the box. That's really ugly. Maybe there's a way to
count memory used by each process slowly but accurately
On Mon, 2005-02-14 at 17:39 -0500, Perrin Harkins wrote:
> > Thanks for pursuing that issue at the linux kernel list, Richard. As you
> > have suggested it doesn't look very good.
>
> I want to add my thanks too.
You are welcome. I wish I wasn't the bearer of such bad news tho.
> Or on anythin
m SIZE, but it seems that
won't work.
> Once we get some "official" resolution we should update the tools like
> Apache-VMonitor to reflect that discovery, in order to not mislead users.
> More importantly it affects Apache::SizeLimit and Apache::GTopSizeLimit,
> which as
discovery, in order to not mislead users.
More importantly it affects Apache::SizeLimit and Apache::GTopSizeLimit,
which as the things are now, aren't working properly when someone limits
on shared memory.
Can others give us an update on other OSes? I know Gtop works on BSDs and
Solaris.
On Thu, 06 Jan 2005 13:30:40 -0700
"John D. Sanders" <[EMAIL PROTECTED]> wrote:
> How can I increase the amount of shared memory space that this error
> is talking about?
>
> Apache/2.0.46 (Red Hat) configured -- resuming normal operations
> Loading Apache:
How can I increase the amount of shared memory space that this error is
talking about?
Apache/2.0.46 (Red Hat) configured -- resuming normal operations
Loading Apache::Request at runtime. You could increase shared memory
between Apache processes by preloading it in your httpd.conf or
Hello again,
Please keep it on the List.
On Sun, 21 Dec 2003, Daniel Wilson wrote:
> The new system has 512MB ram as opposed to the old having 1.5GB
That in itself would suggest to me that the settings in your httpd.conf
might need to be different on the two machines. Are they? Have you
looked
Hi there,
On Sat, 20 Dec 2003, Daniel Wilson wrote:
> Moved my site to a new server.
>
> Old Server
> PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND
> 30056 apache 9 0 10964 10M 10288 S 0.0 0.7 0:00 0 httpd
> New Server
> 2733 apache 16 0 12716 1
Daniel Wilson wrote:
Its not sharing the modules at start up and none of
the config or compile options have changed.
You diff'ed httpd.conf and startup.pl and found no differences at all?
The only thing different is the fact the old server has a custom
SMP kernel and the new has redhat's stock ke
Moved my site to a new server.
Old Server
PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND
30056 apache 9 0 10964 10M 10288 S 0.0 0.7 0:00 0 httpd
30057 apache 9 0 10964 10M 10288 S 0.0 0.7 0:00 0 httpd
30058 apache 9 0 10964 10M 1028