On Wed, Nov 20, 2019, at 4:41 PM, Deborah Pickett wrote:
> > I'm curious how these are working for you, or what sort of configuration
> > and workflows leads to having #calendars and #addressbooks as top-level
> > shared mailboxes? I've only very recently started learning how our DAV bits
> > work
> I'm curious how these are working for you, or what sort of configuration
> and workflows leads to having #calendars and #addressbooks as top-level
> shared mailboxes? I've only very recently started learning how our DAV bits
> work (they have previously been black-boxes for me), and so far have only s
On Wed, Nov 20, 2019, at 11:06 AM, Deborah Pickett wrote:
> On 2019-11-20 10:03, ellie timoney wrote:
>>> foo also includes "#calendars" and "#addressbooks" on my server so there
>>> are weird characters to deal with.
>>>
>> Now that's an interesting detail to consider.
>>
> I should restate my original message
On 2019-11-20 10:03, ellie timoney wrote:
>> foo also includes "#calendars" and "#addressbooks" on my server so there
>> are weird characters to deal with.
> Now that's an interesting detail to consider.
I should restate my original message because I'm being fast and loose
with the meaning of "contain
On Tue, Nov 19, 2019, at 9:38 AM, Deborah Pickett wrote:
> > Food for thought. Maybe instead of having one "%SHARED" backup, having one
> > "%SHARED.foo" backup per top-level shared folder would be a better
> > implementation? I haven't seen shared folders used much in practice, so
> > it's interesting to hear about it.
Food for thought. Maybe instead of having one "%SHARED" backup, having one
"%SHARED.foo" backup per top-level shared folder would be a better implementation? I
haven't seen shared folders used much in practice, so it's interesting to hear about it.
Looking at your own data, if you had one "%S
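
Purely to illustrate the difference between the two schemes (the function and the idea of keying on the first hierarchy component are hypothetical, for illustration only, not how the backup code is actually structured):

    # Illustrative only: how shared (non-user) mailboxes might be filed into
    # backups under the current single-"%SHARED" scheme versus a proposed
    # per-top-level-folder scheme.
    def backup_key(mailbox, per_toplevel=False):
        if not per_toplevel:
            return "%SHARED"                  # today: one backup for everything shared
        top = mailbox.split(".", 1)[0]        # first hierarchy component, e.g. "foo"
        return "%SHARED." + top               # proposed: "%SHARED.foo"

    for mbox in ("foo.bar", "foo.baz", "#calendars.team", "#addressbooks.team"):
        print(mbox, "->", backup_key(mbox), "or", backup_key(mbox, per_toplevel=True))

Under the second scheme, names like "#calendars" and "#addressbooks" would each land in their own backup rather than all accumulating in one large %SHARED file.
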
> Related: I had to apply the patch described in
> (https://www.mail-archive.com/info-cyrus@lists.andrew.cmu.edu/msg47320.html),
> "backupd IOERROR reading backup files larger than 2GB", because during
> initial population of my backup, chunks tended to be multiple GB in size
> (my %SHARED user ba
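
For anyone else hitting that: the 2GB boundary is presumably the signed 32-bit limit, i.e. an offset or size being held in a 32-bit type somewhere. That is my reading of the symptom, not a summary of the linked patch; the arithmetic is just:

    # Why 2GB specifically: a signed 32-bit offset tops out just under 2 GiB,
    # so anything addressed past that point overflows.  Illustration only.
    INT32_MAX = 2**31 - 1            # 2147483647 bytes, ~2.0 GiB
    chunk_offset = 3 * 1024**3       # a chunk starting 3 GiB into the backup file
    print(chunk_offset > INT32_MAX)  # True: not representable in 32 bits
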
Further progress report: with small chunks, compaction takes about 15
times longer. It's almost as if there is an O(n^2) complexity
somewhere, looking at the rate that the disk file grows. (Running perf
on a compaction suggests that 90% of the time ctl_backups is doing
compression, decompress
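
A toy model of the kind of behaviour that would produce that (a model of the symptom only, not a claim about what ctl_backups actually does internally): if the per-chunk cost during compaction grows with the amount of data already processed, total work scales with the number of chunks even when the total data stays the same.

    # Toy model: per-chunk cost proportional to the data accumulated so far
    # gives ~n*(n+1)/2 total work for n chunks, i.e. O(n^2) in chunk count.
    def total_work(n_chunks, chunk_size):
        work = written = 0
        for _ in range(n_chunks):
            written += chunk_size
            work += written            # cost of this chunk grows with data so far
        return work

    data = 1 << 20                     # same total data in every run (arbitrary units)
    for n in (16, 64, 256, 1024):      # split into progressively smaller chunks
        print(n, "chunks:", total_work(n, data // n), "units of work")

In this model, halving the chunk size roughly doubles the total compression work, which is at least consistent with a ~15x slowdown after moving to much smaller chunks.
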
On 2019-11-11 11:10, ellie timoney wrote:
>>> This setting might be helpful:
>> Thanks, I saw that setting but didn't really think through how it would
>> help me. I'll experiment with it and report back.
> That would be great, thanks!
Progress report: I started with very large chunks (minimum 64 MB, m
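
For reference, I think the knobs in play here are the compact chunk-size targets in imapd.conf. The option names and units below are from memory (values in kB, if I recall correctly), so please check imapd.conf(5) for your version; the values are examples only, not a recommendation:

    # imapd.conf (excerpt) -- chunk-size targets used by ctl_backups compact
    backup_compact_minsize: 4096     # try to merge chunks smaller than ~4 MB (kB)
    backup_compact_maxsize: 65536    # try to split chunks larger than ~64 MB (kB)
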
On Fri, Nov 8, 2019, at 1:35 PM, Deborah Pickett wrote:
> I didn't know if copying
> the filesystem of a (paused) Cyrus replica was a supported way of
> backing up, but now I do.
Yeah, as long as there are no cyrus processes running, the database/index files
can just be copied about and won't b
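
For what it's worth, the "pause the replica and copy the filesystem" approach might look roughly like the sketch below. The systemd unit name and the paths are assumptions for a typical layout, so adjust to taste; the only real requirement, as above, is that no cyrus processes are running while the spool and config directories (with their cyrus.* index/database files) are copied.

    #!/usr/bin/env python3
    # Sketch only: stop the replica, copy its data, restart it.
    import os
    import subprocess

    UNIT = "cyrus-imapd.service"                        # assumed systemd unit name
    COPIES = {
        "/var/spool/cyrus/": "/backup/cyrus/spool/",    # mail spool (assumed path)
        "/var/lib/cyrus/":   "/backup/cyrus/config/",   # cyrus.* databases (assumed path)
    }

    subprocess.run(["systemctl", "stop", UNIT], check=True)
    try:
        for src, dest in COPIES.items():
            os.makedirs(dest, exist_ok=True)
            subprocess.run(["rsync", "-a", "--delete", src, dest], check=True)
    finally:
        subprocess.run(["systemctl", "start", UNIT], check=True)
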
On 2019-11-08 09:13, ellie timoney wrote:
> I'm not sure if I'm just not understanding, but if the chunk offsets were to
> remain the same, then there's no benefit to compaction? A (say) 2gb file full
> of zeroes between small chunks is still the same 2gb on disk as one that's
> never been compacted at all!
I'm not sure if I'm just not understanding, but if the chunk offsets were to
remain the same, then there's no benefit to compaction? A (say) 2gb file full
of zeroes between small chunks is still the same 2gb on disk as one that's
never been compacted at all!
And if you don't use the compaction