At Tue, 8 Aug 2006 15:59:52 +1000,
Bron Gondwana wrote:
>
> Yes, exactly - though we're thinking about asking Igor (the author
> of Nginx) to allow you to choose a local bind address for each
> connection.
Note that, IIUC, with *BSD at least the source address is chosen based
on the peer's networ
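As a socket-level illustration of what "choose a local bind address for each connection" would mean (a sketch only, not Nginx code; the addresses and the helper name are made up), a proxy can pin the source address by binding before it connects, instead of letting the kernel derive the source address from the route to the peer:

import socket

def connect_from(local_ip, backend_ip, backend_port=143):
    """Connect to the backend from a specific local address.

    local_ip is assumed to be one of the frontend's configured aliases;
    port 0 lets the kernel pick any free ephemeral port on that address.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((local_ip, 0))                 # choose the source IP explicitly
    s.connect((backend_ip, backend_port))
    return s

# e.g. conn = connect_from("10.0.0.11", "10.0.1.5")   # made-up RFC 1918 addresses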
On Mon, Aug 07, 2006 at 10:47:10PM +0200, Phil Pennock wrote:
> On 2006-08-07 at 19:23 +0200, Hack Kampbjorn wrote:
> > Phil Pennock wrote:
> > >The "easy" fix is theoretically to configure up extra private addresses
> > >as aliases on the backend, and distribute the load over all of them.
> > >Thi
On Mon, Aug 07, 2006 at 05:59:33PM +0200, Phil Pennock wrote:
> On 2006-08-07 at 12:15 -0300, Henrique de Moraes Holschuh wrote:
> > On Mon, 07 Aug 2006, Kjetil Torgrim Homme wrote:
> > > I think David is missing the issue: it's the proxied connection which is
> > > problematic, not the connection
On 2006-08-07 at 19:23 +0200, Hack Kampbjorn wrote:
> Phil Pennock wrote:
> >The "easy" fix is theoretically to configure up extra private addresses
> >as aliases on the backend, and distribute the load over all of them.
> >This avoids having multiple ports and multiple entries -- it's one
> >cyrus
Phil Pennock wrote:
The "easy" fix is theoretically to configure up extra private addresses
as aliases on the backend, and distribute the load over all of them.
This avoids having multiple ports and multiple entries -- it's one
cyrus.conf listening. The problem may be making sure that the front-
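The alias-spreading idea can be sketched roughly like this (an illustration under assumed, made-up addresses, not the actual frontend code): one Cyrus instance on the backend listens on several private aliases, and the frontend rotates new proxied connections across them so no single (source IP, destination IP, destination port) pair has to carry every connection.

import itertools
import socket

# Made-up private aliases, all configured on the same backend host and all
# answered by the same listening Cyrus instance.
BACKEND_ALIASES = ["10.0.1.5", "10.0.1.6", "10.0.1.7"]
_next_alias = itertools.cycle(BACKEND_ALIASES)

def open_proxied_connection(port=143):
    """Round-robin each new proxied connection across the backend's aliases."""
    backend_ip = next(_next_alias)
    return socket.create_connection((backend_ip, port))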
On Mon, 07 Aug 2006, Kjetil Torgrim Homme wrote:
> On Mon, 2006-08-07 at 12:15 -0300, Henrique de Moraes Holschuh wrote:
> > On Mon, 07 Aug 2006, Kjetil Torgrim Homme wrote:
> > > I think David is missing the issue: it's the proxied connection which is
> > > problematic, not the connection to the c
On 2006-08-07 at 12:15 -0300, Henrique de Moraes Holschuh wrote:
> On Mon, 07 Aug 2006, Kjetil Torgrim Homme wrote:
> > I think David is missing the issue: it's the proxied connection which is
> > problematic, not the connection to the client. this locks the IP
> > addresses to the frontend's and
On Mon, 2006-08-07 at 12:15 -0300, Henrique de Moraes Holschuh wrote:
> On Mon, 07 Aug 2006, Kjetil Torgrim Homme wrote:
> > I think David is missing the issue: it's the proxied connection which is
> > problematic, not the connection to the client. this locks the IP
> > addresses to the frontend's
On Mon, 07 Aug 2006, Kjetil Torgrim Homme wrote:
> I think David is missing the issue: it's the proxied connection which is
> problematic, not the connection to the client. this locks the IP
> addresses to the frontend's and the backend's, and the port on the
> backend side is always 143 (or whate
On Sun, 2006-08-06 at 11:40 +1000, Bron Gondwana wrote:
> On Sat, 5 Aug 2006 16:02:44 -0700 (PDT), "David Lang" <[EMAIL PROTECTED]>
> said:
> > On Sat, 5 Aug 2006, Bron Gondwana wrote:
> >
> > Your frontend can only make connections out using any port it likes, but
> > > there are only 65k of t
On Sat, 5 Aug 2006 16:02:44 -0700 (PDT), "David Lang" <[EMAIL PROTECTED]> said:
> On Sat, 5 Aug 2006, Bron Gondwana wrote:
>
> > Your frontend can only make connections out using any port it likes, but
> > there are only 65k of them, and at any one time, a fraction of those
> > will be tied up do
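The arithmetic behind the 65k figure under discussion: each concurrent proxied connection needs a unique (frontend IP, frontend port, backend IP, backend port) tuple, and with one address on each side and the backend port fixed at 143, only the frontend's ephemeral port can vary. A rough sketch (the usable port range is an approximation):

EPHEMERAL_PORTS = 65535 - 1024            # roughly the usable ephemeral range

def max_proxied_connections(frontend_ips, backend_ips, backend_ports=1):
    """Upper bound on concurrent frontend->backend connections."""
    return frontend_ips * backend_ips * backend_ports * EPHEMERAL_PORTS

print(max_proxied_connections(1, 1))      # ~64k with a single address pair
print(max_proxied_connections(1, 4))      # ~258k once the backend has 4 aliases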
On Fri, Aug 04, 2006 at 03:26:39PM +1000, Robert Mueller wrote:
> Anyway the good news:
> Before: 2 frontend servers with 7000+ connections (eg 14,000+ total) using
> 6G of RAM with a load on each of about 2
> After: 1 frontend server with 14,000+ connections, less than 1G of RAM
> usage, load of
Not sure if we qualify as big enough, but here goes: we typically have
3000 concurrent TLS/SSL connections on each Perdition server during peak
hours (although we occasionally see 5000), but the CPU impact is
negligible[1]. At peak, 8% system and 12% user out of 400% CPU
available (this is Del
On Tue, 2006-08-01 at 19:34 -0400, Greg A. Woods wrote:
> Is anyone here running enough concurrent IMAP/SSL connections to know if
> the SSL overhead chews up enough CPU to conflict with something like
> un-accellerated iSCSI (i.e. enough to also justify a crypto
> accellerator, perhaps as well as
Greg A. Woods wrote:
> That's good to hear! Thanks for the refs.
No prob, just note that they are substantial $$$ compared to an FC card
(you're not saving any money with these babies) and those are the only 2
that have any aspirations of supporting anything outside of Windows -
there is a third
At Sat, 29 Jul 2006 20:07:12 -0500,
Phil Brutsche wrote:
>
> Greg A. Woods wrote:
> > not yet in smart controllers that simply make it look like a more
> > traditional storage device thus off-loading all the protocol handling
> > to a dedicated control processor
>
> I should point out that those
Our mail store is on a LeftHand SAN, which we bought this summer. The
speed is pretty good, even on just a GigE network, and it's certainly
a helluva lot cheaper than FC stuff. Downsides include the lack of an
integrated fencing device for failover (most FC switches are fencing
devices), and the
Greg A. Woods wrote:
> not yet in smart controllers that simply make it look like a more
> traditional storage device thus off-loading all the protocol handling
> to a dedicated control processor
I should point out that those controllers exist, but are rare and have
limited OS support: Adaptec's
At Wed, 26 Jul 2006 16:20:57 -0500,
Greg Harris wrote:
>
> On 7/26/06 3:33 PM, "Greg A. Woods" <[EMAIL PROTECTED]> wrote:
>
> > Using a SCSI host interface isn't going to be nearly so flexible as
> > using a Fibre Channel one, especially in the longer run (e.g. if you
> > ever want to add more st
On 7/26/06 3:33 PM, "Greg A. Woods" <[EMAIL PROTECTED]> wrote:
> At Sun, 23 Jul 2006 23:37:41 +0100,
> Mark Hellman wrote:
>>
>> Do you think a RAID array like this one:
>> http://www.infortrend.com/main/2_product/a08u-c2412.asp
>> would be adequate for storing Cyrus mailboxes?
>
> Using a S
At Sun, 23 Jul 2006 23:37:41 +0100,
Mark Hellman wrote:
>
> Do you think a RAID array like this one:
> http://www.infortrend.com/main/2_product/a08u-c2412.asp
> would be adequate for storing Cyrus mailboxes?
Using a SCSI host interface isn't going to be nearly so flexible as
using a Fibre Cha
Robert Banz wrote:
> The second thing to consider is that the performance on modern SATA
> drives, if you're using a driver for the SATA interface that supports
> advanced features such as command queueing, are going to show you
> performance akin to SCSI drives -- even more so if you place them
>
At Sun, 23 Jul 2006 13:28:10 -0400,
Wesley Craig wrote:
>
> On 23 Jul 2006, at 11:00, Robert Banz wrote:
> > The second thing to consider is that the performance on modern SATA
> > drives, if you're using a driver for the SATA interface that
> > supports advanced features such as command queue
On 23 Jul 2006, at 11:00, Robert Banz wrote:
The second thing to consider is that the performance on modern SATA
drives, if you're using a driver for the SATA interface that
supports advanced features such as command queueing, are going to
show you performance akin to SCSI drives -- even mor
I'm not a Postfix expert; however, if it's like any other MTA, its
"queue" directory is usually what runs pretty hot. The first thing I
would do is isolate the MTA-related stuff to a very fast disk
that is *not* the same storage that houses your Cyrus mailboxes &
databases.
The se
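A quick way to sanity-check that advice (the paths are assumptions based on common defaults; adjust them for your layout) is to confirm that the MTA queue and the Cyrus spool really live on different devices:

import os

POSTFIX_QUEUE = "/var/spool/postfix"      # assumed default queue_directory
CYRUS_SPOOL   = "/var/spool/imap"         # assumed default Cyrus partition

def same_device(a, b):
    """True if both paths live on the same underlying filesystem/device."""
    return os.stat(a).st_dev == os.stat(b).st_dev

if same_device(POSTFIX_QUEUE, CYRUS_SPOOL):
    print("queue and mail spool share a device -- consider separating them")
else:
    print("queue and mail spool are on separate devices")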
I have noticed that mail delivery (from Postfix to Cyrus using LMTP) can be
harsh in terms of I/O when several hundred messages are being delivered
at once. I have local mailing lists with more than a thousand members, and
when a 500 KB email is sent to the list, the server's load average
increa
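Back-of-the-envelope numbers for that list-delivery case (assuming each recipient gets its own copy of the message; if copies on the same partition can be hard-linked, the payload is written roughly once and the remaining cost is mostly per-mailbox metadata):

MESSAGE_KB = 500
RECIPIENTS = 1000

naive_write_kb = MESSAGE_KB * RECIPIENTS
print(f"separate copies: ~{naive_write_kb / 1024:.0f} MB written in one burst")   # ~488 MB
print(f"hard-linked copies: ~{MESSAGE_KB / 1024:.1f} MB plus per-mailbox metadata updates")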