On 2015-11-28 03:32, wie...@porcupine.org wrote:
bi...@dev-ops.pl:
On 2015-11-27 16:52, wie...@porcupine.org wrote:
> Wietse Venema:
>> Django [BOfH]:
>> > Via clusterfs Klaus may share /var/lib/postfix/cachepool between all 4 MX.
>>
>> LMDB Postfix caches support sharing; non-LMDB caches cannot be
>> shared at all.
>
> To be precise: LMDB Postfix caches support shared access by readers
> and writers. Other caches cannot be shared by writers or by
> readers+writers. Read-only sharing is OK, but irrelevant for caches.
>
>> However, the Postfix LMDB client requires fcntl locks. If clusterfs
>> does not support fcntl locks, then things will blow up. If fcntl
>> performance sucks, use memcache without persistent backup.
>>
>>        Wietse
>>

We use a Couchbase cluster with memcached buckets to share the postscreen
cache between the Postfix machines in our cluster. When you use Couchbase,
you run a proxy application to your bucket called moxi. It gives you one
common memcached shared between all Postfix machines in the cluster without
connecting them to each other; each machine only talks to moxi, the proxy
to your single memcache bucket.
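
On the Postfix side this is just an ordinary memcache table pointed at the
local moxi listener. A minimal sketch, assuming moxi listens on
127.0.0.1:11211 (the file name /etc/postfix/postscreen_cache and the key
prefix are only examples, adjust to your setup):

/etc/postfix/main.cf:
    postscreen_cache_map = memcache:/etc/postfix/postscreen_cache

/etc/postfix/postscreen_cache:
    # memcached-compatible endpoint provided by the local moxi proxy
    memcache = inet:127.0.0.1:11211
    # prefix the keys so postscreen entries don't collide with other
    # users of the same bucket
    key_format = postscreen:%s
    # no persistent backup database, following the advice earlier in
    # the thread to use memcache without persistent backup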

What is the latency for looking up information that is NOT in the
memcache? If it is 10 milliseconds, then postscreen can handle only
100 connections per second, and it becomes a performance bottleneck.
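(To spell out the arithmetic, assuming each lookup that misses the memcache
blocks postscreen for the full round trip: 1 second / 0.010 seconds per
lookup = at most 100 lookups, and therefore at most roughly 100 new
connections, per second.)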

        Wietse

I have no idea what the latency is for looking up information that is not cached. But then I probably don't really understand how this works. Why can't postscreen handle more than 100 connections per second when it uses a shared memcache? If you can, please explain this to me, because I think it is also my performance problem with postscreen: we handle at least about 8000 connections per minute on every machine in the cluster, which works out to about 133 connections per second. If this is a bottleneck, I plan to move to another caching system, such as the LMDB you mentioned before.
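
If I do move, my understanding from this thread is that the LMDB variant would look roughly like the sketch below, with the cache on the shared clusterfs mount mentioned earlier (the /var/lib/postfix/cachepool path is taken from that message and is only an example); as noted above, this requires that the filesystem supports fcntl locks:

/etc/postfix/main.cf:
    # shared LMDB postscreen cache on the cluster filesystem mount;
    # all MXes open the same file, which needs working fcntl locks
    postscreen_cache_map = lmdb:/var/lib/postfix/cachepool/postscreen_cache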

deceq
