https://github.com/facebook/flashcache/blob/master/doc/flashcache-doc.txt
On Nov 28, 2011, at 4:04 PM, Micah Anderson wrote:
> Dovecot-GDH writes:
>
>> If I/O performance is a concern, you may be interested in ZFS and Flashcache.
>>
>> Specifically, ZFS' ZIL (ZFS Intent Log) and its L2ARC (Layer 2 Adaptive
>> Read Cache) [...]
Dovecot-GDH writes:
> If I/O performance is a concern, you may be interested in ZFS and Flashcache.
>
> Specifically, ZFS' ZIL (ZFS Intent Log) and its L2ARC (Layer 2 Adaptive Read
> Cache)
> ZFS does run on Linux http://zfs-fuse.net
>
> Flashcache: https://github.com/facebook/flashcache/
That [...]
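From memory of the flashcache sysadmin guide linked above (verify against the doc; device names and mount point are placeholders), the basic write-back setup is roughly:

```
# Pair an SSD with the backing mail volume as a write-back cache device,
# then mount the resulting device-mapper target instead of the raw volume.
flashcache_create -p back mailcache /dev/ssd /dev/mapper/mailvol
mount /dev/mapper/mailcache /var/mail
```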
On Tue, 22 Nov 2011 11:45:47 +0100, Jan-Frode Myklebust
wrote:
Ah, then Timo's reply was right. He suggested you do the lmtp-deliveries
to the same server that you would send your imap users to. You can do this
through dovecot director and lmtp-proxying.
So instead of:
lmtp:unix:private/dovecot-lmtp [...]
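A partial sketch of what that could look like in dovecot.conf (Dovecot 2.x director; the IPs are placeholders and the director service listeners are omitted, so check the director wiki page before use):

```
director_servers = 10.0.0.1 10.0.0.2       # the director/proxy ring
director_mail_servers = 10.0.1.1 10.0.1.2  # backends holding the indexes

service lmtp {
  inet_listener lmtp {
    port = 24
  }
}

protocol lmtp {
  # route LMTP deliveries through the director's user database, so mail
  # lands on the same backend the user's IMAP sessions are stuck to
  auth_socket_path = director-userdb
}
```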
On Tue, Nov 22, 2011 at 11:17:12AM +0100, Patrick Westenberg wrote:
>
> No. I want to know if dovecot writes to the indexes if it receives a
> mail via lmtp.
>
> Someone proposed to store the index files on a locally installed SSD
> on a frontend (imap) machine and stick the users to that machine [...]
Jan-Frode Myklebust schrieb:
I wondered that too. It looked to me like you tried to ask where the
lmtp-service picks up the path to indexes, right? AFAIU it picks that up
from the /var/run/dovecot/auth-master socket.
No. I want to know if dovecot writes to the indexes if it receives a
mail via lmtp.
On Mon, Nov 21, 2011 at 10:45:49PM +0100, Patrick Westenberg wrote:
> Timo Sirainen schrieb:
> >On Wed, 2011-11-16 at 19:40 +0100, Patrick Westenberg wrote:
> >>I already use lmtp:unix:private/dovecot-lmtp as transport but where is
> >>the link to the indexes?
> >
> >You can switch to lmtp:tcp:1.2.3.4:24 where 1.2.3.4 would be Dovecot
> >LMTP proxy, which would forward the connection [...]
Timo Sirainen schrieb:
On Wed, 2011-11-16 at 19:40 +0100, Patrick Westenberg wrote:
I already use lmtp:unix:private/dovecot-lmtp as transport but where is
the link to the indexes?
You can switch to lmtp:tcp:1.2.3.4:24 where 1.2.3.4 would be Dovecot
LMTP proxy, which would forward the connection [...]
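On the MTA side (assuming Postfix, since lmtp:unix:private/dovecot-lmtp is Postfix transport syntax), the switch would look something like this, reusing the proxy address from Timo's example:

```
# main.cf -- 1.2.3.4:24 is the Dovecot LMTP proxy
mailbox_transport = lmtp:inet:1.2.3.4:24
```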
On Wed, 2011-11-16 at 19:40 +0100, Patrick Westenberg wrote:
> Timo Sirainen schrieb:
> > On Mon, 2011-11-07 at 01:08 +0100, Patrick Westenberg wrote:
> >>
> >> My mail exchangers use dovecot-lda and I think indexes will be written
> >> from these servers too or am I wrong with this?
> >
> > You can use LMTP and LMTP proxying.
Timo Sirainen schrieb:
On Mon, 2011-11-07 at 01:08 +0100, Patrick Westenberg wrote:
My mail exchangers use dovecot-lda and I think indexes will be written
from these servers too or am I wrong with this?
You can use LMTP and LMTP proxying.
I already use lmtp:unix:private/dovecot-lmtp as transport but where is
the link to the indexes?
On Mon, 2011-11-07 at 01:08 +0100, Patrick Westenberg wrote:
> Ed W schrieb:
>
> > See the "sticky" in my reply. You use one of several techniques to
> > ensure that users always end up on the server with the indexes on. That
> > way much of the IO is served from that local machine and you only access
> > the SAN for the (in theory much less frequent) access to the mail files.
On Thu, 2011-11-10 at 00:30 -0800, Mark Hanford wrote:
> I've got a centos 6 server running Dovecot 2.0.beta6 (3156315704ef).
> For legacy reasons (I'm moving mail from a Dovecot 1.1.1 and FreeBSD box
> with user home directories NFS mounted), my index files are setup to be
> in /u/indexes/
>
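The usual way to point indexes at a separate (e.g. SSD-backed) path is the INDEX parameter of mail_location; a sketch reusing the /u/indexes/ path from above:

```
# dovecot.conf -- %u expands to the username, so each user
# gets an index directory under /u/indexes/
mail_location = maildir:~/Maildir:INDEX=/u/indexes/%u
```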
Ed W schrieb:
See the "sticky" in my reply. You use one of several techniques to
ensure that users always end up on the server with the indexes on. That
way much of the IO is served from that local machine and you only access
the SAN for the (in theory much less frequent) access to the mail files.
On 11/3/2011 10:21 AM, Ed W wrote:
>
>> I'm using NexentaStor (Solaris, ZFS) to export iSCSI-LUNs and I was
>> thinking about a SSD based LUN for the indexes. As I'm using multiple
>> servers this LUN will use OCFS2.
>
> Given that the SAN always has the network latency behind it, might you
> be better to look at putting the SSDs in the frontend [...]
On 03/11/2011 16:53, Patrick Westenberg wrote:
> Ed W schrieb:
>
>>> I'm using NexentaStor (Solaris, ZFS) to export iSCSI-LUNs and I was
>>> thinking about a SSD based LUN for the indexes. As I'm using multiple
>>> servers this LUN will use OCFS2.
>>
>> Given that the SAN always has the network latency behind it, might you
>> be better to look at putting the SSDs in the frontend [...]
I'm using the GIT version, that 0.5 version is quite a bit outdated. I was
not all that worried about using ZFS on this experiment because we do have
the old mail storage on ext3 synchronized and ready to switch back, and I
could disable dedup and compression on-the-fly if needed (which eventually [...])
On 11/3/2011 1:24 PM, Felipe Scarel wrote:
> Reasons to choose ZFS were snapshots, and mainly dedup and compression
> capabilities. I know, it's ironic since I'm not able to use them now due to
> severe performance issues with them (mostly dedup) turned on.
>
> I do like the emphasis on data integrity and fast on-the-fly
> configurability of ZFS [...]
Reasons to choose ZFS were snapshots, and mainly dedup and compression
capabilities. I know, it's ironic since I'm not able to use them now due to
severe performance issues with them (mostly dedup) turned on.
I do like the emphasis on data integrity and fast on-the-fly
configurability of ZFS [...]
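The on-the-fly configurability mentioned above is just a per-dataset property change; a sketch with a hypothetical tank/mail dataset:

```
# Both take effect immediately, but only for newly written data;
# existing blocks stay deduped/compressed until rewritten.
zfs set dedup=off tank/mail
zfs set compression=off tank/mail
```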
Patrick Westenberg wrote:
Ed W schrieb:
I'm using NexentaStor (Solaris, ZFS) to export iSCSI-LUNs and I was
thinking about a SSD based LUN for the indexes. As I'm using multiple
servers this LUN will use OCFS2.
Given that the SAN always has the network latency behind it, might you
be better to look at putting the SSDs in the frontend [...]
Ed W schrieb:
I'm using NexentaStor (Solaris, ZFS) to export iSCSI-LUNs and I was
thinking about a SSD based LUN for the indexes. As I'm using multiple
servers this LUN will use OCFS2.
Given that the SAN always has the network latency behind it, might you
be better to look at putting the SSDs in the frontend [...]
> I'm using NexentaStor (Solaris, ZFS) to export iSCSI-LUNs and I was
> thinking about a SSD based LUN for the indexes. As I'm using multiple
> servers this LUN will use OCFS2.
Given that the SAN always has the network latency behind it, might you
be better to look at putting the SSDs in the frontend [...]
On 03/11/2011 11:32, Felipe Scarel wrote:
> I'm using native ZFS (http://zfsonlinux.org) on production here (15k+
> users, over 2TB of mail data) with little issues. Dedup and compression
> disabled, mind that.
>
OT: but what were the rough criteria that led you to using ZFS over say
LVM with EXT4?
> From: dovecot-boun...@dovecot.org [mailto:dovecot-boun...@dovecot.org] On
> Behalf Of Patrick Westenberg
> Sent: Tuesday, November 01, 2011 5:19 PM
> To: dovecot@dovecot.org
> Subject: Re: [Dovecot] Indexes to MLC-SSD
>
> Dovecot-GDH schrieb:
> > If I/O performance is a concern, you may be interested in ZFS and
> > Flashcache. [...]
From: Patrick Westenberg
Sent: Tuesday, November 01, 2011 5:19 PM
To: dovecot@dovecot.org
Subject: Re: [Dovecot] Indexes to MLC-SSD
Dovecot-GDH schrieb:
> If I/O performance is a concern, you may be interested in ZFS and
> Flashcache.
>
> Specifically, ZFS' ZIL (ZFS Intent Log) and its L2ARC (Layer 2 Adaptive
> Read Cache) [...]
Dovecot-GDH schrieb:
If I/O performance is a concern, you may be interested in ZFS and Flashcache.
Specifically, ZFS' ZIL (ZFS Intent Log) and its L2ARC (Layer 2 Adaptive Read
Cache)
ZFS does run on Linux http://zfs-fuse.net
I'm using NexentaStor (Solaris, ZFS) to export iSCSI-LUNs and I was
thinking about a SSD based LUN for the indexes. As I'm using multiple
servers this LUN will use OCFS2.
If I/O performance is a concern, you may be interested in ZFS and Flashcache.
Specifically, ZFS' ZIL (ZFS Intent Log) and its L2ARC (Layer 2 Adaptive Read
Cache)
ZFS does run on Linux http://zfs-fuse.net
Flashcache: https://github.com/facebook/flashcache/
Both of these techniques can use a pair [...]
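Concretely, dedicating SSDs to the ZIL and L2ARC is a zpool operation; a sketch with a placeholder pool name and device names:

```
# Mirrored SSD pair as a separate intent log (accelerates synchronous writes;
# mirrored because losing an unflushed ZIL can lose recent writes):
zpool add tank log mirror /dev/sdb /dev/sdc
# Single SSD as L2ARC read cache (its loss is harmless, so no mirror needed):
zpool add tank cache /dev/sdd
```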
On 27/10/2011 03:36, Stan Hoeppner wrote:
> On 10/26/2011 4:13 PM, Patrick Westenberg wrote:
>> Hi all,
>>
>> is anyone on this list who dares/dared to store his index files on a
>> MLC-SSD?
> I have not. But I can tell you that a 32GB Corsair MLC SSD in my
> workstation died after 4 months of laughably light duty. [...]
On 10/26/2011 4:13 PM, Patrick Westenberg wrote:
> Hi all,
>
> is anyone on this list who dares/dared to store his index files on a
> MLC-SSD?
I have not. But I can tell you that a 32GB Corsair MLC SSD in my
workstation died after 4 months of laughably light duty. It had nothing
to do with cell [...]
Am 03.12.2010 12:14, schrieb Andre Nathan:
> On Thu, 2010-12-02 at 13:40 +0100, Robert Schetterer wrote:
>> hm, i have no problems with ocfs2 (1.4.3-1: amd64) on drbd ubuntu lucid
>> using dovecot vers 2 recommended settings for cluster file systems
>> i have my index files in the maildir dir
On Thu, 2010-12-02 at 13:40 +0100, Robert Schetterer wrote:
> hm, i have no problems with ocfs2 (1.4.3-1: amd64) on drbd ubuntu lucid
> using dovecot vers 2 recommended settings for cluster file systems
> i have my index files in the maildir dir
Robert, are you running an active-active setup with [...]
On Thu, 2010-12-02 at 12:41 -0200, Henrique Fernandes wrote:
>
> No, i don't have a list where i can ask! =/
>
https://www.redhat.com/mailman/listinfo/redhat-list is the more
"general" list, it's low traffic so you won't be inconvenienced by
subscribing.
or maybe more specific...
https://www.
Did you see the IO wait in the picture i sent?
We did not use GFS because we would need fencing hardware; they don't
guarantee data without fencing hardware...
Anyway, we changed the storage so we are now able to give a bigger quota.
Before it used to be 200 MB and pretty much all users use pop.
We have 900 [...]
Am 02.12.2010 14:53, schrieb Henrique Fernandes:
> I have about 9000 clients.
many clients
>
> 500 GB of storage used
small store
anyway you shouldn't run into problems with that
>
> About the mail option at ocfs2 store creating, i guess the other guy
> that formatted the storage system did not use it [...]
Am 02.12.2010 14:10, schrieb Henrique Fernandes:
> Which recommendation settings do you have?
http://wiki2.dovecot.org/MailLocation/SharedDisk
and a few more optimizations for maildir
>
> how many clients ?
i started the server not long ago so there are fewer users yet (about 100
maildirs), but i have my [...]
Which recommendation settings do you have?
how many clients ?
sorry, i did not say i use dovecot 2.0.6
[]'s f.rique
On Thu, Dec 2, 2010 at 10:40 AM, Robert Schetterer wrote:
> Am 02.12.2010 13:13, schrieb Henrique Fernandes:
> > Hello people!
> >
> > I have huge problems with IO wait because dovecot configured to use
> > maildir is under OCFS2 1.4. [...]
Am 02.12.2010 13:13, schrieb Henrique Fernandes:
> Hello people!
>
> I have huge problems with IO wait because dovecot configured to use maildir
> is under OCFS2 1.4. Now i have a question: on OCFS2 each disk action is
> really heavy because it has no index.
>
> Now i am thinking about what can be done [...]
On Nov 24, 2008, at 1:00 PM, Daniel Watts wrote:
Perhaps dovecot could try the fix but if it still fails just go
and delete the indexes itself?
It's not really possible to delete indexes automatically because of
a crash.
May I ask why? This is basically what we do manually if we encounter [...]
Timo,
If it happens again, please make a copy of the files before deleting.
It's a lot easier to debug these bugs if I can see the dovecot.index
and dovecot.index.log contents.
I will do - have asked my sysadmin to make backups the next time it happens.
Perhaps dovecot could try the fix but if it still fails just go and
delete the indexes itself?
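The manual back-up-then-delete procedure can be scripted; a sketch (the demo maildir below stands in for a real mailbox directory, and note a flat backup dir would collide if several mailboxes share filenames):

```shell
# Demo mailbox with the three index files Dovecot maintains
maildir=$(mktemp -d)
touch "$maildir/dovecot.index" \
      "$maildir/dovecot.index.log" \
      "$maildir/dovecot.index.cache"

# Copy the files aside first, as Timo asked, so the corruption can be debugged
backup=$(mktemp -d)
find "$maildir" -name 'dovecot.index*' -exec cp {} "$backup" \;

# Only then delete; Dovecot rebuilds the indexes next time the mailbox is opened
find "$maildir" -name 'dovecot.index*' -delete
ls "$backup"
```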
On Nov 24, 2008, at 11:14 AM, Daniel Watts wrote:
Nov 19 17:11:32 mink dovecot: Panic: IMAP([EMAIL PROTECTED]): file
mail-transaction-log.c: line 341
(mail_transaction_log_set_mailbox_sync_pos): assertion failed:
(file_offset >= log->head->saved_tail_offset)
I'll see if I can get this fix [...]
Timo Sirainen wrote:
On Wed, 2008-01-02 at 09:55 -0800, Daniel L. Miller wrote:
Timo Sirainen wrote:
On Mon, 2007-12-31 at 10:54 -0800, Daniel L. Miller wrote:
When something "bad" happens to the indexes, my e-mail client
(Thunderbird) reports an "unable to succeed" error on opening a
mailbox. Leaving that mailbox and coming back works fine. Is this
expected behaviour?
On Wed, 2008-01-02 at 09:55 -0800, Daniel L. Miller wrote:
> Timo Sirainen wrote:
> > On Mon, 2007-12-31 at 10:54 -0800, Daniel L. Miller wrote:
> >
> >> When something "bad" happens to the indexes, my e-mail client
> >> (Thunderbird) reports an "unable to succeed" error on opening a
> >> mailbox. Leaving that mailbox and coming back works fine. Is this
> >> expected behaviour?
Timo Sirainen wrote:
On Mon, 2007-12-31 at 10:54 -0800, Daniel L. Miller wrote:
When something "bad" happens to the indexes, my e-mail client
(Thunderbird) reports an "unable to succeed" error on opening a
mailbox. Leaving that mailbox and coming back works fine. Is this
expected behaviour?
On Mon, 2007-12-31 at 10:54 -0800, Daniel L. Miller wrote:
> When something "bad" happens to the indexes, my e-mail client
> (Thunderbird) reports an "unable to succeed" error on opening a
> mailbox. Leaving that mailbox and coming back works fine. Is this
> expected behaviour?
It's expected, [...]
Scott Silva wrote:
on 12/31/2007 10:54 AM Daniel L. Miller spake the following:
When something "bad" happens to the indexes, my e-mail client
(Thunderbird) reports an "unable to succeed" error on opening a
mailbox. Leaving that mailbox and coming back works fine. Is this
expected behaviour?
on 12/31/2007 10:54 AM Daniel L. Miller spake the following:
When something "bad" happens to the indexes, my e-mail client
(Thunderbird) reports an "unable to succeed" error on opening a
mailbox. Leaving that mailbox and coming back works fine. Is this
expected behaviour?
You need to at least [...]