To: dovecot@dovecot.org
Subject: [Dovecot] GlusterFS + Dovecot
Hello,
Has anyone used GlusterFS as the storage file system for Dovecot or any other
email system?
It says that it can be presented as NFS, CIFS, or as GlusterFS using the
native client; technically, using the client would allow the machine to
On 20.06.2012 17:50, Romer Ventura wrote:
> Hello,
>
>
>
> Has anyone used GlusterFS as storage file system for dovecot or any other
> email system..?
>
>
>
> It says that it can be presented as a NFS, CIFS and as GlusterFS using the
> native client, technically using the client would all
On 6/20/2012 10:50 AM, Romer Ventura wrote:
> Has anyone used GlusterFS as storage file system for dovecot or any other
> email system?
I have not, but can tell you from experience and education that
distributed filesystems don't work well with transactional workloads
such as IMAP and SMTP. The
On 20.6.2012, at 18.50, Romer Ventura wrote:
> Has anyone used GlusterFS as storage file system for dovecot or any other
> email system..?
I've heard Dovecot complains about index corruption once in a while with
glusterfs, even when not in multi-master mode. I wouldn't use it without some
heavy
Hello,
Has anyone used GlusterFS as the storage file system for Dovecot or any other
email system?
It says that it can be presented as NFS, CIFS, or as GlusterFS using the
native client; technically, using the client would allow the machine to read
and write to it; therefore, I think that Do
On 2010-02-17, Ed W wrote:
>
> Anyone had success using some other clustered/HA filestore with dovecot
> who can share their experience? (OCFS/GFS over DRBD, etc?)
We've been using IBM's GPFS filesystem on (currently) seven x-series
servers running RHEL4 and RHEL5, all SAN-attached, all serving t
I use GlusterFS with Dovecot and it works without issues. The GlusterFS team
has made huge progress since 2.0, and with the new 3.0 release they have again
shown that GlusterFS keeps getting better.
You have kindly shared some details of your config before - care to
update us on what you are
> > Sure .. but you can break the index files in exactly the same way as
> > with NFS. :)
> >
> That is right :)
For us, all the front end exim servers pass their mail to a single final
delivery server. It was done so that we didn't have all the front end
servers needing to mount the storage. It
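(Not the poster's actual configuration, just a sketch of the idea: on each
front-end Exim server a manualroute router can hand everything for local
domains to the single delivery host, so only that host needs the storage
mounted. The hostname store.example.com is a placeholder.)

  # in the routers section of exim.conf on each front end
  pass_to_delivery_server:
    driver = manualroute
    domains = +local_domains
    transport = remote_smtp
    route_list = * store.example.com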
Original Message
> Date: Fri, 19 Feb 2010 04:37:04 +0200
> From: Timo Sirainen
> To: dovecot@dovecot.org
> Subject: Re: [Dovecot] GlusterFs - Any new progress reports?
> On Fri, 2010-02-19 at 03:12 +0100, Steve wrote:
> > > This has the same proble
On Fri, 2010-02-19 at 03:12 +0100, Steve wrote:
> > This has the same problems as with NFS (assuming the servers aren't only
> > delivering mails, without updating index files). http://wiki.dovecot.org/NFS
> >
> Except that NFS is not as flexible as GlusterFS. In GlusterFS I can
> replicate, stri
Original Message
> Date: Fri, 19 Feb 2010 03:02:48 +0200
> From: Timo Sirainen
> To: Dovecot Mailing List
> Subject: Re: [Dovecot] GlusterFs - Any new progress reports?
> On 19.2.2010, at 0.37, Steve wrote:
>
> > You can do that. But with Gluster
On 19.2.2010, at 0.37, Steve wrote:
> You can do that. But with GlusterFS and Dovecot you don't need to. You can
> mount read/write the same GlusterFS share on all the mail servers. Dovecot
> will usually add the hostname of the delivering system into the maildir file
> name. As long as the del
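(To illustrate the point about unique file names: the exact fields vary by
version, but a message delivered by Dovecot into a maildir ends up with a name
roughly like the following, where the timestamp, PID, and the hostname
"mailfront1" are made-up example values.)

  1266412433.M423017P10222.mailfront1,S=4312:2,S

The hostname component comes from the delivering machine, so two front ends
writing into the same maildir over GlusterFS will not produce colliding file
names.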
Original Message
> Date: Thu, 18 Feb 2010 21:32:46 +
> From: John Lyons
> To: Dovecot Mailing List
> Subject: Re: [Dovecot] GlusterFs - Any new progress reports?
>
> Dare I ask...(as it's not exactly clear from the Gluster docs)
>
> I
Dare I ask... (as it's not exactly clear from the Gluster docs):
If I take 5 storage servers to house my /mail, can my cluster of 5 front-end
Dovecot servers all mount/read/write to /mail?
The reason I ask is that the docs seem to suggest I should be doing 5
servers, having 5 partitions, one for each ma
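(For what it's worth, a minimal sketch of that layout. The hostnames
store1..store5 and the brick paths are placeholders, and the commands assume
the gluster CLI from 3.1 or later; the 2.x/3.0 releases current at the time of
this thread were configured with hand-written volfiles instead.)

  # on any storage server: pool the five bricks into one distributed volume
  gluster volume create mail transport tcp \
      store1:/export/mail store2:/export/mail store3:/export/mail \
      store4:/export/mail store5:/export/mail
  gluster volume start mail

  # on each of the five front-end Dovecot servers
  mount -t glusterfs store1:/mail /mail

Adding "replica N" to the create command would give redundancy instead of (or
in addition to) plain distribution; either way, every front end mounts the same
volume read/write rather than one partition per server.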
Quoting Steve :
I have already installed GFS on a cluster in the past, but never on DRBD.
Me too (I did it on a real physical SAN before).
Hmm... when I started with GlusterFS I thought that using more than
two nodes was something that I would never need.
GlusterFS is really designed to all
Original Message
> Date: Thu, 18 Feb 2010 13:51:33 -0600
> From: Eric Rostetter
> To: dovecot@dovecot.org
> Subject: Re: [Dovecot] GlusterFs - Any new progress reports?
> Quoting Steve :
>
> >> > My interest is more in bootstrapping
Quoting Steve :
> My interest is more in bootstrapping a more highly available system
> from lower quality (commodity) components than very high end use
GFS+DRBD should fit the bill... You need several NICs and cables,
but they are dirt cheap... Just 2 machines with the same disk setup,
and a
Original Message
> Date: Thu, 18 Feb 2010 08:36:36 -0800
> From: Brandon Lamb
> To: Dovecot Mailing List
> Subject: Re: [Dovecot] GlusterFs - Any new progress reports?
> On Wed, Feb 17, 2010 at 11:55 AM, Steve wrote:
> >
> > Original-Na
On Wed, Feb 17, 2010 at 11:55 AM, Steve wrote:
>
> Original Message
>> Date: Wed, 17 Feb 2010 20:15:30 +0100
>> From: alex handle
>> To: Dovecot Mailing List
>> Subject: Re: [Dovecot] GlusterFs - Any new progress reports?
>
>> >
Original Message
> Date: Wed, 17 Feb 2010 21:25:46 -0600
> From: Eric Rostetter
> To: dovecot@dovecot.org
> Subject: Re: [Dovecot] GlusterFs - Any new progress reports?
> Quoting Ed W :
>
> > Anyone had success using some other clustered/HA filestor
Quoting Ed W :
Anyone had success using some other clustered/HA filestore with
dovecot who can share their experience? (OCFS/GFS over DRBD, etc?)
GFS2 over DRBD in an active-active setup works fine IMHO. Not perfect,
but it was cheap and works well... Lets me reboot machines with
"no down
Original Message
> Date: Wed, 17 Feb 2010 20:15:30 +0100
> From: alex handle
> To: Dovecot Mailing List
> Subject: Re: [Dovecot] GlusterFs - Any new progress reports?
> >
> > Anyone had success using some other clustered/HA filestore with dovecot
&
>
> Anyone had success using some other clustered/HA filestore with dovecot who
> can share their experience? (OCFS/GFS over DRBD, etc?)
>
> My interest is more in bootstrapping a more highly available system from
> lower quality (commodity) components than very high end use
We use DRBD with ext3.
GlusterFS always strikes me as being "the solution" (one day...). It's
had a lot of growing pains, but there have been a few on the list who have had
success using it already.
Given that some time has gone by since I last asked - has anyone got any more
recent experience with it, and how has it worked out w
Sent: Monday, August 11, 2008 6:32 PM
To: Dovecot Mailing List
Subject: Re: [Dovecot] GlusterFS
On Aug 11, 2008, at 10:22 AM, Timo Sirainen wrote:
> On Aug 7, 2008, at 3:57 AM, Jeroen Koekkoek wrote:
>
>> I receive the following error message.
>>
>> Aug 7 09:38:51 m
On Aug 11, 2008, at 10:22 AM, Timo Sirainen wrote:
On Aug 7, 2008, at 3:57 AM, Jeroen Koekkoek wrote:
I receive the following error message.
Aug 7 09:38:51 mta2 dovecot: POP3([EMAIL PROTECTED]):
nfs_flush_fcntl:
fcntl(/var/vmail/domain.tld/somebody/Maildir/dovecot.index, F_RDLCK)
failed: Fu
On Aug 7, 2008, at 3:57 AM, Jeroen Koekkoek wrote:
I receive the following error message.
Aug 7 09:38:51 mta2 dovecot: POP3([EMAIL PROTECTED]):
nfs_flush_fcntl:
fcntl(/var/vmail/domain.tld/somebody/Maildir/dovecot.index, F_RDLCK)
failed: Function not implemented
Dovecot tries to flush kernel
From: "Ed W" <[EMAIL PROTECTED]>
Sent: Sunday, August 10, 2008 11:09 AM
I'm also interested to hear how it works out. It appears that the
straight-line speed is high for gluster, but its per-file performance has
enough overhead that it's a significant problem for maildir-type
applications whi
Pawel Panek wrote:
We use a Dovecot setup with GlusterFS. Dovecot 1.1.2 and GlusterFS
OT: besides the fcntl problem, how is GlusterFS doing for you?
I have had some miserable experiences using GlusterFS and FUSE. What about
your experience?
I'm also interested to hear how it works out. It appears that
We use a Dovecot setup with GlusterFS. Dovecot 1.1.2 and GlusterFS
OT: besides the fcntl problem, how is GlusterFS doing for you?
I have had some miserable experiences using GlusterFS and FUSE. What about your
experience?
Pawel
Hi everybody,
We use a Dovecot setup with GlusterFS. Dovecot 1.1.2 and GlusterFS
1.3.9. I enabled the following options (I don't have the posix-locks
translator):
lock_method = dotlock
dotlock_use_excl = no
mmap_disable = yes
mail_nfs_index = yes
mail_nfs_storage = yes
I receive the following error
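(For anyone finding this thread later: roughly what those options do,
paraphrased from the Dovecot NFS wiki page linked earlier in the thread, so
double-check against the documentation for your version.)

  lock_method = dotlock     # use dotlock files instead of fcntl locks
  dotlock_use_excl = no     # don't rely on O_EXCL, which has historically
                            # been unreliable on network filesystems
  mmap_disable = yes        # read index files with read() instead of mmap()
  mail_nfs_index = yes      # flush attribute/data caches around index access
  mail_nfs_storage = yes    # the same cache flushing for the mail files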