On 3-4-2008 7:47, Joel Poloney wrote:
1. I have a consumer in a while(1) { //consume } fashion. That would
basically run forever. As I understand it, this is the way most web servers
work (at the very core). In this model, I would have to make sure that the
consumer was always running (perhaps do 3-way redundancy and have 3
consumers doing this). My concern with this model is that if I implement
this in PHP, I'm afraid of memory issues and so on. I don't necessarily
trust a PHP script to run in a while(1) fashion forever.

I have this set-up, with four different consumers with varying load (two do more than two million entries per day, one only a few hundred and one a few tens of thousands). I have seen no memory leaks whatsoever from them. The only major memory leakage from PHP I've seen recently was with SimpleXML, so if you intend to do XML processing in those scripts, use DOM or expat.

Ironically, ActiveMQ leaks memory in my setup and therefore has a lower uptime than my PHP consumers :X
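For what it's worth, a stripped-down consumer loop along those lines could look roughly like the sketch below. It's only a sketch: it assumes the PECL Stomp extension (other Stomp clients have a slightly different API), and the broker URL, queue name and process_message() function are placeholders, not my actual code.

<?php
// Minimal long-running consumer sketch (assumes the PECL Stomp extension;
// the queue name and process_message() are placeholders, not my actual code).
$stomp = new Stomp('tcp://localhost:61613');
$stomp->subscribe('/queue/example', array('ack' => 'client'));

while (true) {
    $frame = $stomp->readFrame();    // blocks (up to the read timeout) waiting for a message
    if ($frame === false) {
        continue;                    // nothing arrived within the timeout, try again
    }
    process_message($frame->body);   // hypothetical handler doing the real work
    $stomp->ack($frame);             // ack only after the work succeeded
}
?>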

2. I could somehow get the queue size and batch process these. Basically,
get the queue size (say it was 200 entries), consume 200 messages and then
the PHP script would end. Then you would have this script in a cronjob that
runs at some interval x.
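For comparison, a cron-driven drain for option 2 could look roughly like this sketch (again assuming the PECL Stomp extension; the queue name, batch size and process_message() are placeholders):

<?php
// Cron-driven batch consumer sketch (assumes the PECL Stomp extension;
// queue name, batch size and process_message() are placeholders).
$stomp = new Stomp('tcp://localhost:61613');
$stomp->subscribe('/queue/example', array('ack' => 'client'));
$stomp->setReadTimeout(2);           // don't hang forever once the queue is empty

$max = 200;                          // upper bound per cron run
for ($i = 0; $i < $max; $i++) {
    if (!$stomp->hasFrame()) {
        break;                       // queue drained, let the script end
    }
    $frame = $stomp->readFrame();
    if ($frame === false) {
        break;
    }
    process_message($frame->body);
    $stomp->ack($frame);
}
?>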

Unfortunately, I have to implement this in PHP. So, I'm not really sure
which way would be the proper way to go about doing this. If anyone has any
thoughts or comments, it would be greatly appreciated.

I would go with 1, but you may want to add some control code so you can easily and cleanly kill your scripts. In my code I added a few signal handlers, so the consumer first finishes its current job and then dies. However, the fread calls in the Stomp implementation don't get interrupted by a signal, so for that situation I also have a 'kill' message: the consumer then dies either as soon as it is blocked in fread waiting for a new message, or right after it finishes its current job.
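In rough outline that amounts to something like the sketch below (again assuming the PECL Stomp extension; the queue name, the 'KILL' body and process_message() are placeholders for whatever you actually use):

<?php
// Clean-shutdown sketch: a signal marks the consumer for shutdown after the
// current job; an explicit kill message covers the case where it is blocked
// in fread. (Assumes the PECL Stomp extension; names/bodies are placeholders.)
declare(ticks = 1);                   // so the signal handler actually gets called

$shutdown = false;

function handle_signal($signo)
{
    $GLOBALS['shutdown'] = true;      // finish the current job, then leave the loop
}
pcntl_signal(SIGTERM, 'handle_signal');
pcntl_signal(SIGINT, 'handle_signal');

$stomp = new Stomp('tcp://localhost:61613');
$stomp->subscribe('/queue/example', array('ack' => 'client'));

while (!$GLOBALS['shutdown']) {
    $frame = $stomp->readFrame();     // may block in fread; a signal alone won't unblock it
    if ($frame === false) {
        continue;                     // read timeout, loop around and re-check the flag
    }
    if ($frame->body === 'KILL') {    // the explicit kill message gets us out of the read
        $stomp->ack($frame);
        break;
    }
    process_message($frame->body);    // hypothetical handler for the real work
    $stomp->ack($frame);
}
?>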

Best regards,

Arjen
