Re: Setting HTTP-STATUS and Filters

2006-03-18 Thread Tom Schindl
Hi,

solved it myself: $f->r->status() -- how stupid that I haven't seen this
before; works like a charm. One more question, though: what is a filter
supposed to return on error? Why does one have to set the status oneself
and return Apache2::Const::SERVER_ERROR from the filter handler -- isn't
the request status set automatically, e.g. if there's an untrapped error?

Thanks

Tom

Tom Schindl wrote:
> Hi,
> 
> I'm about to release a new version of my AxKit port to Apache2, named
> Apache2::TomKit, and am facing the problem that I need to set the
> HTTP status to SERVER_ERROR when the transformation fails, but I have
> the feeling that this is no longer possible at that point.
> 
> Any suggestions?
> 
> Tom
> 
> 
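For reference, a minimal sketch of an output filter that sets the response status itself when its transformation fails (mod_perl 2 assumed; the package name and the transform() helper are hypothetical, and a real filter would have to cope with being invoked once per bucket brigade rather than buffering everything):

```perl
package My::Filter;

use strict;
use warnings;

use Apache2::Filter ();
use Apache2::RequestRec ();
use Apache2::Const -compile => qw(OK SERVER_ERROR);

sub handler {
    my $f = shift;

    # collect the incoming data (simplified: buffers the whole response)
    my $data = '';
    while ($f->read(my $chunk, 8192)) {
        $data .= $chunk;
    }

    my $out = eval { transform($data) };    # hypothetical transformation
    if ($@ or !defined $out) {
        # returning SERVER_ERROR from the filter does not set the
        # response status by itself; set it explicitly on the request:
        $f->r->status(Apache2::Const::SERVER_ERROR);
        return Apache2::Const::SERVER_ERROR;
    }

    $f->print($out);
    return Apache2::Const::OK;
}

1;
```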



Semaphores

2006-03-18 Thread Bill Whillers
To avoid use of Reload, I'm using semaphores to ensure all processes are up to 
date with a single large hash that rarely gets updated. Is this a bad idea?

I'd appreciate any constructive criticisms:

# each session
unless ( -e "$semaphore/$$" ) {
    # reload the cache from the updated database here, then
    # mark this process as up to date:
    open my $fh, '>', "$semaphore/$$" or die "can't touch $semaphore/$$: $!";
    close $fh;
}

# when updating the specific dataset/database:
# get list of semaphores
# unlink the list
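The update step described in those two comments could be sketched like this (the function name is illustrative, and the demo runs against a throwaway directory rather than the real semaphore path):

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);

# remove every per-process marker file so each child notices it is
# stale and rebuilds its cache on the next request
sub clear_semaphores {
    my ($dir) = @_;
    opendir my $dh, $dir or die "can't open $dir: $!";
    my @markers = grep { -f "$dir/$_" } readdir $dh;
    closedir $dh;
    unlink "$dir/$_" or warn "unlink $_: $!" for @markers;
    return scalar @markers;
}

# demo against a temporary directory with three fake PID markers
my $dir = tempdir(CLEANUP => 1);
for my $pid (101, 102, 103) {
    open my $fh, '>', "$dir/$pid" or die $!;
    close $fh;
}
print clear_semaphores($dir), "\n";   # 3
```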


Thanks,
Bill


Re: Semaphores

2006-03-18 Thread Jonathan Vanasco


i asked something like that once before (though not in terms of
mod_perl), and was told this:


	if the disk doesn't have a lot of reads/writes already, the OS
should optimize a bunch of the file checking into memory and won't
touch the disk
	if it does have a lot of use (uploads / shared use as email/imap/
etc) then use either memcached or another in-memory system /
daemon / socket connection that never touches the disk


i know little of how the kernel handles that sort of stuff

the one thing i'd note about your plan is this:
	if you're updating a large hash, then on update that hash will be
copied into each child -- it won't be shared among the apache
children.  i'm not sure how the memory mapping works -- will all of
the hash be replaced with per-child memory, or only parts?  someone
with more knowledge of the internals would be better placed to answer
that.  but the point is that you end up losing a lot of shared
memory space, which (i think) defeats the point of having a large
hash of data in the first place.

personally, i do a stop/start whenever i need to handle a data change



Re: Semaphores

2006-03-18 Thread Bill Whillers
Thanks -- it'd be great to share this memory, but I suspect start/stop
could cause other problems, i.e. dropping current connections. Also, we'd
still need to know when to start/stop the given machine's service.

As for touching, instead of using the physical disk we could use an extra
column in a database row that always gets loaded anyway to do the check, if
that ultimately matters.  If the semaphore is missing (per machine/process),
the cache can be reloaded.
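A sketch of that database-column idea -- keep a per-process copy of a version counter and compare it against the authoritative value on each request. Here fetch_current_version() and rebuild_cache() are hypothetical stand-ins for the real DB query (e.g. selecting a version column) and the expensive cache reload:

```perl
use strict;
use warnings;

my $cached_version = 0;   # this process's idea of the data version
my %big_hash;             # the large, rarely-updated cache

sub fetch_current_version { return 3 }            # stand-in for the DB lookup
sub rebuild_cache         { return (a => 1, b => 2) }  # stand-in for the reload

# cheap check on every request: reload only when the version has moved
sub ensure_fresh {
    my $current = fetch_current_version();
    if ($current != $cached_version) {
        %big_hash = rebuild_cache();
        $cached_version = $current;
    }
}

ensure_fresh();
print "$cached_version\n";   # 3
```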

I'm interested to hear what others are doing to achieve the same results.


