On Wed, Sep 4, 2019 at 9:42 PM Andras Pataki wrote:
>
> Dear ceph users,
>
> After upgrading our ceph-fuse clients to 14.2.2, we've been seeing sporadic
> segfaults with stack traces that are not very revealing:
>
> in thread 7fff5a7fc700 thread_name:ceph-fuse
>
> ceph version 14.2.2 (4f8fa0a0024755aae7
Hello Ceph-users,
I am currently testing and experimenting with Ceph on some spare hardware that
is lying around. I am running Nautilus on Ubuntu 18.04 (all nodes).
The problem statement is that I’d like to back up a FreeNAS server using ZFS
snapshots and replication to a Ceph cluster.
I crea
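One common way to receive ZFS replication into a Ceph cluster is to put a ZFS pool on top of an RBD image. This is only a sketch; the pool name "backup", image name "freenas", dataset "tank/data", host "backup-host", and the size are all assumptions, not anything from the thread:

```shell
# On the Ceph-side receiver: create and map an RBD image, then build a
# ZFS pool on the resulting block device. RBD images are thin-provisioned.
rbd create backup/freenas --size 10T       # hypothetical pool/image names
sudo rbd map backup/freenas                # maps to e.g. /dev/rbd0
sudo zpool create cephbackup /dev/rbd0     # ZFS pool backed by the RBD device

# On the FreeNAS side: replicate a snapshot over SSH as usual.
zfs send tank/data@daily-2019-09-05 | ssh backup-host sudo zfs receive cephbackup/data
```

Incremental replication (`zfs send -i`) works the same way once the first full stream has been received.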
On 05/09/2019 18.39, Yan, Zheng wrote:
stray subdir never get fragmented in current implementation.
Then this is a problem, right? Once a stray subdir hits 100K files, things
will start failing. Is there a solution for this, or do we need to
figure out some other backup mechanism that doesn't in
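For reference, the current stray count on an MDS can be watched via the admin-socket perf counters (the daemon name `mds.a` below is a placeholder for your actual MDS name):

```shell
# Query the MDS perf counters through its admin socket; the stray count
# is reported under the mds_cache section.
ceph daemon mds.a perf dump | jq '.mds_cache.num_strays'
```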
On Thu, Sep 5, 2019 at 4:31 PM Hector Martin wrote:
>
> I have a production CephFS (13.2.6 Mimic) with >400K strays. I believe
> this is caused by snapshots. The backup process for this filesystem
> consists of creating a snapshot and rsyncing it over daily, and
> snapshots are kept locally in the
I have a production CephFS (13.2.6 Mimic) with >400K strays. I believe
this is caused by snapshots. The backup process for this filesystem
consists of creating a snapshot and rsyncing it over daily, and
snapshots are kept locally in the FS for 2 months for backup and
disaster recovery reasons.
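That backup flow can be sketched as follows; the mount point, snapshot naming scheme, and backup target are assumptions for illustration, not taken from the original post:

```shell
# CephFS snapshots are created by mkdir inside the special .snap directory
# at (or below) the filesystem root; this gives rsync a frozen view to copy.
SNAP="backup-$(date +%F)"
mkdir /mnt/cephfs/.snap/"$SNAP"
rsync -a /mnt/cephfs/.snap/"$SNAP"/ backup-host:/backups/cephfs/

# Snapshots older than the retention window are removed with rmdir:
#   rmdir /mnt/cephfs/.snap/backup-2019-07-01
```

Every file deleted from the live tree while a snapshot still references it becomes a stray, which is how a snapshot-based retention policy like this can accumulate hundreds of thousands of strays.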