On 2012-04-02 7:15 PM, Micah Anderson wrote:
> Charles Marcus writes:
>> On 2012-03-27 11:47 AM, Micah Anderson wrote:
>>> One would be the ability to perform *intelligent* incremental /
>>> rotated backups. I can do this now by running a dsync backup
>>> operation and then doing manual hardlinking or moving of the backup
>>> directories (daily.1, daily.2, weekly.1, monthly.1, etc.)
Charles Marcus writes:
> On 2012-03-27 11:47 AM, Micah Anderson wrote:
>> One would be the ability to perform *intelligent* incremental /
>> rotated backups. I can do this now by running a dsync backup
>> operation and then doing manual hardlinking or moving of the backup
>> directories (daily.1, daily.2, weekly.1, monthly.1, etc.)
On 3/29/2012 5:24 AM, Stan Hoeppner wrote:
> This happens with a lot of "fan boys". There was so much hype
> surrounding ZFS that even many logically thinking people were frothing
> at the mouth waiting to get their hands on it. Then, as with many/most
> things in the tech world, the goods didn't ...
On 3/28/2012 3:54 PM, Jeff Gustafson wrote:
> On Wed, 2012-03-28 at 11:07 -0500, Stan Hoeppner wrote:
>
>> Locally attached/internal/JBOD storage typically offers the best
>> application performance per dollar spent, until you get to things like
>> backup scenarios, where off node network throughput ...
On 23.3.2012, at 23.25, Timo Sirainen wrote:
> and even if you don't understand that, here's another document disguised as
> an algorithm class problem :) If anyone has thoughts on how to solve it,
> would be great:
>
> http://dovecot.org/tmp/dsync-redesign-problem.txt
>
> It only deals with ...
On 27.3.2012, at 1.14, Michescu Andrei wrote:
> This being said and acknowledged here are my 2 cents:
>
> I think that the current '1 brain / 2 workers' model is the correct one.
> The "client" connects to the "server", pushes the local changes, and
> afterwards retrieves the updated/new items ...
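
As a way of picturing the '1 brain / 2 workers' split described above: the
merge logic lives in one place (the brain), while the two mail stores only
report their state and apply the changes handed to them (the workers). The
Python sketch below is a toy with invented state/message types; it is not
the dsync protocol, only the push-then-retrieve shape of the model.

    # Toy illustration of the '1 brain / 2 workers' model, NOT the dsync protocol.
    # Each "worker" only reports its mailbox state and applies changes it is told
    # to make; the single "brain" decides what has to move in which direction.
    from dataclasses import dataclass, field

    @dataclass
    class Worker:
        name: str
        msgs: dict = field(default_factory=dict)   # GUID -> flags (very simplified)

        def list_state(self) -> dict:
            return dict(self.msgs)

        def apply(self, guid: str, flags: str) -> None:
            self.msgs[guid] = flags

    def brain_sync(local: Worker, remote: Worker) -> None:
        """Push local changes first, then retrieve what is new on the other side."""
        local_state, remote_state = local.list_state(), remote.list_state()
        for guid, flags in local_state.items():        # push: missing on remote
            if guid not in remote_state:
                remote.apply(guid, flags)
        for guid, flags in remote_state.items():       # retrieve: missing locally
            if guid not in local_state:
                local.apply(guid, flags)

    if __name__ == "__main__":
        laptop = Worker("laptop", {"g1": r"\Seen"})
        server = Worker("server", {"g2": ""})
        brain_sync(laptop, server)
        print(laptop.msgs, server.msgs)                # both now hold g1 and g2
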
On Wed, 2012-03-28 at 11:07 -0500, Stan Hoeppner wrote:
> Locally attached/internal/JBOD storage typically offers the best
> application performance per dollar spent, until you get to things like
> backup scenarios, where off node network throughput is very low, and
> your backup software may suffer ...
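
The off-node throughput point is easy to put into numbers. A
back-of-the-envelope sketch (the store size and link speeds below are
illustrative assumptions, not figures from this thread): a store that a
local JBOD can stream in about an hour needs over five hours across a
single GbE link.

    # Back-of-the-envelope backup-window arithmetic; every figure here is an
    # illustrative assumption, not a number taken from this thread.
    store_tb = 2.0                      # mail store size to back up, in TB
    local_mb_s = 600.0                  # sustained local JBOD streaming rate, MB/s
    gige_mb_s = 110.0                   # practical throughput of one GbE link, MB/s

    store_mb = store_tb * 1024 * 1024   # TB -> MB

    local_hours = store_mb / local_mb_s / 3600
    network_hours = store_mb / gige_mb_s / 3600

    print(f"local copy:  {local_hours:.1f} h")   # ~1.0 h at 600 MB/s
    print(f"over 1 GbE:  {network_hours:.1f} h") # ~5.3 h at 110 MB/s
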
On 3/27/2012 3:57 PM, Jeff Gustafson wrote:
> We do have a FC system that another department is using. The company
> dropped quite a bit of cash on it for a specific purpose. Our department
> does not have access to it. People are somewhat afraid of iSCSI around
> here because they believe ...
On Tue, 2012-03-27 at 15:09 -0500, Stan Hoeppner wrote:
> On 3/26/2012 2:34 PM, Jeff Gustafson wrote:
>
> > Do you have any suggestions for a distributed replicated filesystem
> > that works well with dovecot? I've looked into glusterfs, but the
> > latency is way too high for lots of small files ...
On 3/26/2012 2:34 PM, Jeff Gustafson wrote:
> Do you have any suggestions for a distributed replicated filesystem
> that works well with dovecot? I've looked into glusterfs, but the
> latency is way too high for lots of small files. They claim this problem
> is fixed in glusterfs 3.3. NFS to ...
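
Before settling on GlusterFS, NFS or anything else, the small-file latency
claim is cheap to test directly: time per-message create/stat operations on
the candidate mount and compare with a local disk run. A rough probe sketch;
the mount point, file count and file size are placeholders:

    # Crude small-file latency probe for a candidate mail store mount point.
    # The mount path, file count and file size are placeholders; compare the
    # numbers against a local disk run to see the per-file overhead.
    import os
    import shutil
    import tempfile
    import time

    def probe(mountpoint: str, count: int = 1000, size: int = 4096) -> None:
        workdir = tempfile.mkdtemp(prefix="maildir-probe-", dir=mountpoint)
        payload = b"x" * size                       # roughly one small mail message

        start = time.monotonic()                    # create many small files
        for i in range(count):
            with open(os.path.join(workdir, f"msg.{i}"), "wb") as f:
                f.write(payload)
        create_s = time.monotonic() - start

        start = time.monotonic()                    # then stat them all again
        for i in range(count):
            os.stat(os.path.join(workdir, f"msg.{i}"))
        stat_s = time.monotonic() - start

        shutil.rmtree(workdir)
        print(f"{mountpoint}: create {create_s / count * 1000:.2f} ms/file, "
              f"stat {stat_s / count * 1000:.2f} ms/file")

    if __name__ == "__main__":
        probe("/mnt/gluster-test")                  # hypothetical mount point
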
On 2012-03-27 11:47 AM, Micah Anderson wrote:
> One would be the ability to perform *intelligent* incremental /
> rotated backups. I can do this now by running a dsync backup
> operation and then doing manual hardlinking or moving of the backup
> directories (daily.1, daily.2, weekly.1, monthly.1, etc.)
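
The rotation Micah describes (a dsync backup into a "current" tree, then
hardlink copies for daily/weekly/monthly snapshots) can be scripted around
Dovecot today. A minimal sketch of the idea, showing only the daily tier;
the paths, user list and retention count are invented, and the dsync
invocation should be checked against your Dovecot version:

    # Rough sketch of rotated dsync backups via hardlink copies.
    # Assumptions (not from the original mails): one maildir tree per user under
    # BACKUP_ROOT/current/<user>, rotated into daily.1..daily.N by hardlinking.
    import os
    import shutil
    import subprocess

    BACKUP_ROOT = "/srv/backup/mail"      # hypothetical destination
    DAILY_KEEP = 7                        # hypothetical retention

    def dsync_backup(user: str) -> None:
        # One-way backup of the user's mails into the "current" tree.
        dest = os.path.join(BACKUP_ROOT, "current", user)
        os.makedirs(dest, exist_ok=True)
        subprocess.run(["dsync", "-u", user, "backup", f"maildir:{dest}"], check=True)

    def rotate_daily() -> None:
        # Drop the oldest snapshot, shift the rest, then hardlink-copy "current"
        # into daily.1 so unchanged messages share disk blocks.
        oldest = os.path.join(BACKUP_ROOT, f"daily.{DAILY_KEEP}")
        if os.path.isdir(oldest):
            shutil.rmtree(oldest)
        for n in range(DAILY_KEEP - 1, 0, -1):
            src = os.path.join(BACKUP_ROOT, f"daily.{n}")
            if os.path.isdir(src):
                os.rename(src, os.path.join(BACKUP_ROOT, f"daily.{n + 1}"))
        shutil.copytree(os.path.join(BACKUP_ROOT, "current"),
                        os.path.join(BACKUP_ROOT, "daily.1"),
                        copy_function=os.link)   # hardlink instead of copying data

    if __name__ == "__main__":
        for user in ["alice", "bob"]:             # placeholder user list
            dsync_backup(user)
        rotate_daily()
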
Timo Sirainen writes:
> In case anyone is interested in reading (and maybe helping!) with a dsync
> redesign that's intended to fix all of its current problems, here are some
> possibly incoherent ramblings about it:
Thank you for opening this discussion about dsync!
Besides the problems I've ...
Hello Timo,
Thank you very much for planning a redesign of dsync and for opening
this discussion.
As I can see from the replies so far, everybody misses the main point of
IMAP: IMAP has been designed to work as a disconnected, high-latency
data store.
To make this clearer: once ...
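
This is worth anchoring in the protocol itself: a disconnected IMAP client
caches messages keyed by UIDVALIDITY and UID, and on reconnect it refetches
everything only if UIDVALIDITY changed, otherwise it just asks for UIDs above
the last one it saw (RFC 3501, with RFC 4549 describing the resynchronization
pattern). A bare-bones sketch with Python's imaplib; the host, credentials and
cache layout are placeholders:

    # Bare-bones disconnected-IMAP resync sketch (RFC 3501 semantics):
    # remember (UIDVALIDITY, last seen UID) per mailbox; on reconnect refetch
    # everything only if UIDVALIDITY changed, otherwise fetch just the new UIDs.
    import imaplib

    cache = {"uidvalidity": None, "last_uid": 0, "messages": {}}  # placeholder local store

    def resync(host: str, user: str, password: str, mailbox: str = "INBOX") -> None:
        imap = imaplib.IMAP4_SSL(host)
        imap.login(user, password)
        imap.select(mailbox, readonly=True)

        _, data = imap.status(mailbox, "(UIDVALIDITY)")
        uidvalidity = int(data[0].decode().split("UIDVALIDITY")[1].strip(" )").split()[0])

        if cache["uidvalidity"] != uidvalidity:
            # Mailbox was recreated or renumbered: cached UIDs mean nothing, start over.
            cache.update({"uidvalidity": uidvalidity, "last_uid": 0, "messages": {}})

        # Ask only for UIDs above the highest one already cached.
        _, data = imap.uid("SEARCH", None, f"UID {cache['last_uid'] + 1}:*")
        for uid_bytes in data[0].split():
            uid = int(uid_bytes)
            if uid <= cache["last_uid"]:
                continue                      # "N:*" always matches the highest UID
            _, msg = imap.uid("FETCH", str(uid), "(RFC822)")
            cache["messages"][uid] = msg[0][1]
            cache["last_uid"] = uid

        imap.logout()
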
On Sat, 2012-03-24 at 08:19 +0100, Attila Nagy wrote:
>
> I personally think that Dovecot could gain much more if the amount of
> work going into fixing or improving dsync went into making Dovecot able
> to use a high-scale, distributed storage backend.
> I know it's much harder, ...
On 24.3.2012, at 9.19, Attila Nagy wrote:
> Well, dsync is a very useful tool, but with continuous replication it tries
> to solve a problem which should be handled -at least partially- elsewhere.
> Storing stuff in plain file systems and duplicating them to another one just
> doesn't scale.
...
On Sat, Mar 24, 2012 at 08:19:48AM +0100, Attila Nagy wrote:
> On 03/23/12 22:25, Timo Sirainen wrote:
> >
> Well, dsync is a very useful tool, but with continuous replication
> it tries to solve a problem which should be handled -at least
> partially- elsewhere. Storing stuff in plain file systems and
> duplicating them to another one just doesn't scale. ...
On 03/23/12 22:25, Timo Sirainen wrote:
> In case anyone is interested in reading (and maybe helping!) with a dsync
> redesign that's intended to fix all of its current problems, here are some
> possibly incoherent ramblings about it:
> http://dovecot.org/tmp/dsync-redesign.txt
> and even if you don't understand that, here's another document disguised as
> an algorithm class problem ...
In case anyone is interested in reading (and maybe helping!) with a dsync
redesign that's intended to fix all of its current problems, here are some
possibly incoherent ramblings about it:
http://dovecot.org/tmp/dsync-redesign.txt
and even if you don't understand that, here's another document disguised as
an algorithm class problem :) If anyone has thoughts on how to solve it,
would be great:
http://dovecot.org/tmp/dsync-redesign-problem.txt