Neil Gunton wrote:
[...]
At the risk of stating the obvious: since you are talking about
mod_perl (and thus, I suppose, Perl), the basic module File::Find is a
good starting point to collect all kinds of statistics about a file
hierarchy. Such as the maximum and average depth, how many
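A minimal sketch of the idea, assuming you only want file/directory
counts and the maximum depth under a given root (the path here is
hypothetical):

    use strict;
    use warnings;
    use File::Find;

    my $root = '/var/cache/www';           # hypothetical cache root
    my $root_depth = ($root =~ tr{/}{});   # slashes in the root path
    my ($files, $dirs, $max_depth) = (0, 0, 0);

    find(sub {
        -d $_ ? $dirs++ : $files++;
        # Depth below the root = extra path separators in the full name
        my $depth = ($File::Find::name =~ tr{/}{}) - $root_depth;
        $max_depth = $depth if $depth > $max_depth;
    }, $root);

    print "$files files, $dirs dirs, max depth $max_depth\n";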
Neil Gunton wrote:
Neil Gunton wrote:
It seems like this might have something to do with mod_deflate, which I
am using in combination with mod_disk_cache. This page gives a clue that
there might be a problem with the way files are cached when these
modules are both enabled:
http://www.digita
Neil Gunton wrote:
It seems like this might have something to do with mod_deflate, which
I am using in combination with mod_disk_cache. This page gives a clue
that there might be a problem with the way files are cached when these
modules are both enabled:
http://www.digitalsanctuary.com/tec
Adam Prime wrote:
That does look like a big deal. If I were in your situation, I'd try
running with only mod_deflate, then only mod_cache, and see what
happens. There are benefits to running the reverse proxy alone (without
mod_cache), so that'd be the first scenario I'd try.
Or split them
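As a sketch of that test, assuming stock Apache 2.2 directives (the
filter types here are just examples):

    # Test 1: mod_deflate on, disk cache off
    AddOutputFilterByType DEFLATE text/html text/plain text/css
    #CacheEnable disk /

    # Test 2: disk cache on, mod_deflate off
    #AddOutputFilterByType DEFLATE text/html text/plain text/css
    CacheEnable disk /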
Neil Gunton wrote:
Well, the du just finished; it took 214 minutes to complete. I just took
a look at one of the directories in the cache. Now, I have it set for a
depth of 3, so I looked at d/d/d, selected at random. Then I did a du
there. Here's the output:
server:/var/cache/www/d/d/d# du -
On Mon, Nov 24, 2008 at 4:15 PM, Neil Gunton <[EMAIL PROTECTED]> wrote:
> Perrin Harkins wrote:
>>
>> A ton of RAM in the server might help too.
>
> I've already got 4GB in there.
Some desktop machines ship with that much these days. You could bump
it up to 16 or 32 (assuming it's 64-bit) pretty
Perrin Harkins wrote:
A ton of RAM in the server might help too.
I've already got 4GB in there.
Well, the du just finished; it took 214 minutes to complete. I just took
a look at one of the directories in the cache. Now, I have it set for a
depth of 3, so I looked at d/d/d just randomly select
On Mon, Nov 24, 2008 at 8:18 AM, Thomas Lindgren
<[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I'm using mod_perl2. I'd like to reject user requests, e.g., very large
> PUTs, after examining headers and authorizing the user, but before the
> request body is read. Quota checking could be an example of wh
On Mon, Nov 24, 2008 at 2:52 PM, André Warnier <[EMAIL PROTECTED]> wrote:
> Can you do a "return" from a "require"-d file, and what does it do exactly?
Doing an exit from a file during a require() is not normally a
problem. Try it. It's kind of a bad programming practice, but Perl
will tolerate
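For what André asked, a minimal sketch (file names hypothetical):
"return" is legal at file scope inside a require'd file and becomes the
value of the require() call, though the file must still yield a true
value or require() dies:

    # auth_check.pl -- hypothetical require'd file
    return 'login_needed' unless defined $ENV{REMOTE_USER};
    return 'ok';

    # caller.pl
    # Note: require() caches, so the file only runs on first load.
    my $status = require './auth_check.pl';
    print "auth status: $status\n";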
On Mon, Nov 24, 2008 at 3:46 PM, Michael Peters <[EMAIL PROTECTED]> wrote:
> He's already using RAID0, which should give the best performance of any
> RAID level since it doesn't have to use any parity blocks/disks, right?
Yes, I missed that. He could still improve the throughput by adding more disks.
> And
On Mon, 24 Nov 2008, Neil Gunton wrote:
I think the issue here is the large size of the directory tree itself -
simply traversing it seems to be a problem. I started a du on that tree
this morning at around 9am; it's now after 12 midday and the command is
still not done. Meanwh
On Mon, Nov 24, 2008 at 03:37:29PM -0500, Perrin Harkins wrote:
> On Mon, Nov 24, 2008 at 3:16 PM, Michael Peters <[EMAIL PROTECTED]> wrote:
> > Well except for getting 15K disks you probably won't be able to get much
> > more improvement from just the hardware.
>
> You don't think so? RAID and S
Thomas Lindgren wrote:
Hi all,
I'm using mod_perl2. I'd like to reject user requests, e.g., very
large PUTs, after examining headers and authorizing the user, but
before the request body is read. Quota checking could be an example of
where this is useful.
However, at this time, even if I re
Perrin Harkins wrote:
On Mon, Nov 24, 2008 at 3:16 PM, Michael Peters <[EMAIL PROTECTED]> wrote:
Well except for getting 15K disks you probably won't be able to get much
more improvement from just the hardware.
You don't think so? RAID and SSD can both improve your write
throughput pretty sig
André Warnier wrote:
Neil Gunton wrote:
[...]
Hi.
I am not really an expert on large websites, caches and so on, but in
our applications we are managing a large number of files.
One of the things we have learned over the years is that even on modern
operating systems, having large numbers of e
Michael Peters wrote:
Michael Peters wrote:
But these benchmarks (http://www.debian-administration.org/articles/388)
say the following:
For quick operations on a large file tree, choose Ext3 or XFS.
Benchmarks from other authors have
supported the use of ReiserFS for operations on large nu
On Mon, Nov 24, 2008 at 3:16 PM, Michael Peters <[EMAIL PROTECTED]> wrote:
> Well except for getting 15K disks you probably won't be able to get much
> more improvement from just the hardware.
You don't think so? RAID and SSD can both improve your write
throughput pretty significantly.
- Perrin
Michael Peters wrote:
According to these benchmarks
(http://fsbench.netnation.com/new_hardware/2.6.0-test9/scsi/bonnie.html)
ReiserFS handles deletes much better than ext2 (10,015/sec vs 729/sec)
But these benchmarks (http://www.debian-administration.org/articles/388) say
the following:
F
Neil Gunton wrote:
Perrin Harkins wrote:
On Mon, Nov 24, 2008 at 2:42 PM, Neil Gunton <[EMAIL PROTECTED]> wrote:
The section on "Maintaining the Disk Cache" says you should use
htcacheclean, which is what I've been doing, and it doesn't seem to be
up to the job.
I can't speak to your file
Neil Gunton wrote:
[...]
Hi.
I am not really an expert on large websites, caches and so on, but in
our applications we are managing a large number of files.
One of the things we have learned over the years is that even on modern
operating systems, having large numbers of entries in each directo
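The usual workaround is to fan files out across hashed subdirectories so
no single directory grows huge. A minimal sketch, assuming MD5 and a
two-level layout (both arbitrary choices):

    use strict;
    use warnings;
    use Digest::MD5 qw(md5_hex);

    # Map a key to root/aa/bb/<digest>, spreading entries
    # across up to 256*256 subdirectories.
    sub hashed_path {
        my ($root, $key) = @_;
        my $h = md5_hex($key);
        return join '/', $root, substr($h, 0, 2), substr($h, 2, 2), $h;
    }

    print hashed_path('/data', 'some_file_key'), "\n";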
Perrin Harkins wrote:
On Mon, Nov 24, 2008 at 2:42 PM, Neil Gunton <[EMAIL PROTECTED]> wrote:
The section on "Maintaining the Disk Cache" says you should use
htcacheclean, which is what I've been doing, and it doesn't seem to be up to
the job.
I can't speak to your filesystem question but you
Perrin Harkins wrote:
On Thu, Nov 20, 2008 at 4:50 PM, Bruce Johnson
<[EMAIL PROTECTED]> wrote:
This happens when the exit is triggered in the nested require file, which
is code like this:
# Check to see if the person was logged in, and if not go to login page and
# pass the url and parameters
On Mon, Nov 24, 2008 at 2:42 PM, Neil Gunton <[EMAIL PROTECTED]> wrote:
> The section on "Maintaining the Disk Cache" says you should use
> htcacheclean, which is what I've been doing, and it doesn't seem to be up to
> the job.
I can't speak to your filesystem question but you might consider
getti
Neil Gunton wrote:
http://httpd.apache.org/docs/2.0/mod/mod_disk_cache.html#cachegcinterval
Oops - sorry, I seem to have been looking at the 2.0 docs, rather than
the 2.2. In 2.2, it appears that CacheGCInterval has disappeared...
Now, looking at the 2.2 caching guide:
http://httpd.apache.
Perrin Harkins wrote:
One thing you didn't mention is why you're using mod_cache at all for
things not generated by mod_perl. Why don't you serve the static
files directly from your front-end server? That's the most common
setup I've seen, with proxying only for mod_perl requests.
Yes, I am o
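The setup Perrin describes usually looks something like this (paths and
backend port are hypothetical):

    # Front end serves static files itself...
    DocumentRoot /var/www/static

    # ...and proxies only the dynamic URLs to the mod_perl back end.
    ProxyPass        /app http://127.0.0.1:8080/app
    ProxyPassReverse /app http://127.0.0.1:8080/app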
On Mon, Nov 24, 2008 at 1:56 PM, Neil Gunton <[EMAIL PROTECTED]> wrote:
> Someone replied to me off-list suggesting using Squid instead of httpd for
> the front-end caching reverse proxy. I guess that is a good question - I use
Apache for proxying mainly because I know Apache quite well, and like
On Thu, Nov 20, 2008 at 4:50 PM, Bruce Johnson
<[EMAIL PROTECTED]> wrote:
> This happens when the exit is triggered in the nested require file. which is
> code like this:
>
> # Check to see if the person was logged in, and if not go to login page and
> # pass the url and parameters
> if (!defined($
Neil Gunton wrote:
The cache and front-end proxy help to serve images without bogging down
the heavy mod_perl processes, while also obviously caching the mod_perl
content. The site gets around 100,000 page requests or more per day. The
cache is set to 1000MB, with htcacheclean running in daemon
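For reference, a daemon-mode htcacheclean invocation along those lines
(the interval is an assumption; the path and limit are Neil's stated
values; see the htcacheclean man page):

    # Wake every 60 minutes, be nice, delete empty directories,
    # keep /var/cache/www under 1000MB
    htcacheclean -d60 -n -t -p/var/cache/www -l1000M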
Hi all,
I posted this to the Apache httpd users list, but no reply there, so I'm
posting here in the hopes that someone else who uses mod_perl with
mod_cache in a reverse proxy setup might have insight.
I am using Apache 2.2.9 (built from source) on Debian Lenny to run a
fairly large communi
Hi all,
I'm using mod_perl2. I'd like to reject user requests, e.g., very large
PUTs, after examining headers and authorizing the user, but before the
request body is read. Quota checking could be an example of where this is
useful.
However, at this time, even if I return an error in the handler,
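For context, a minimal sketch of the kind of early check Thomas means
(package name and the 10MB limit are hypothetical); as he notes, returning
an error at this phase may not by itself stop the client's request body
from being read:

    package My::QuotaCheck;
    use strict;
    use warnings;
    use Apache2::RequestRec ();
    use APR::Table ();
    use Apache2::Const -compile =>
        qw(DECLINED HTTP_REQUEST_ENTITY_TOO_LARGE);

    # Configured as: PerlHeaderParserHandler My::QuotaCheck
    sub handler {
        my $r = shift;
        my $len = $r->headers_in->{'Content-Length'} || 0;
        # Reject oversized uploads before the body is consumed
        return Apache2::Const::HTTP_REQUEST_ENTITY_TOO_LARGE
            if $len > 10 * 1024 * 1024;
        return Apache2::Const::DECLINED;
    }
    1;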