[ceph-users] Re: Sequence replacing a failed OSD disk? [EXT]

2021-01-04 Thread Matthew Vernon
On 31/12/2020 09:10, Rainer Krienke wrote: Yesterday my Ceph Nautilus 14.2.15 cluster had a disk with unreadable sectors; after several tries the OSD was marked down, and rebalancing started and has since finished successfully. ceph osd stat now shows the OSD as "autoout,exists". Usually the step
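For reference, a rough sketch of the usual replace-in-place sequence on Nautilus (the OSD id and device below are placeholders, not taken from the thread; check the replacement docs for your deployment):
    while ! ceph osd safe-to-destroy osd.12 ; do sleep 60 ; done
    ceph osd destroy 12 --yes-i-really-mean-it
    ceph-volume lvm zap /dev/sdX                        # wipe the replacement disk
    ceph-volume lvm create --osd-id 12 --data /dev/sdX  # recreate the OSD with the same id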

[ceph-users] Re: What is the specific meaning "total_time" in RGW ops log

2021-01-04 Thread Daniel Gryniewicz
total_time is calculated from the top of process_request() until the bottom of process_request(). I know that's not hugely helpful, but it's accurate. This means it starts after the front-end passes the request off, and counts until after a response is sent to the client. I'm not sure if it
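If it helps to see the field itself, a rough sketch of tailing the socket-based ops log and pulling total_time out of each entry (the socket path is a placeholder, and how the value is expressed may differ between versions):
    # in ceph.conf for the rgw instance
    rgw enable ops log = true
    rgw ops log socket path = /var/run/ceph/rgw-ops.sock
    # read entries off the socket and show the field
    nc -U /var/run/ceph/rgw-ops.sock | grep -o '"total_time":[^,}]*'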

[ceph-users] Compression of data in existing cephfs EC pool

2021-01-04 Thread Paul Mezzanini
Hey everyone, I've got an EC pool as part of our cephfs for colder data. When we started using it, compression was still marked experimental. Since then it has become stable, so I turned compression on to "aggressive". Using 'ceph df detail' I can see that new data is getting compressed. We
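In case it's useful, roughly the knobs and the check involved (the pool name is a placeholder):
    ceph osd pool set cephfs_ec_data compression_mode aggressive
    ceph osd pool set cephfs_ec_data compression_algorithm snappy
    ceph df detail    # per-pool USED COMPR / UNDER COMPR columns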

[ceph-users] Re: Compression of data in existing cephfs EC pool

2021-01-04 Thread Burkhard Linke
Hi, On 1/4/21 5:27 PM, Paul Mezzanini wrote: Hey everyone, I've got an EC pool as part of our cephfs for colder data. When we started using it, compression was still marked experimental. Since then it has become stable so I turned compression on to "aggressive". Using 'ceph df detail' I ca

[ceph-users] Re: Compression of data in existing cephfs EC pool

2021-01-04 Thread Paul Mezzanini
That does make sense, and I wish it were true, but what I'm seeing doesn't support your hypothesis. I've had several drives die and be replaced since the go-live date, and I'm actually in the home stretch of reducing the pg_num on that pool, so pretty much every PG has already been moved severa
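One way to see whether the OSDs are compressing anything at all on the rewritten data is the BlueStore counters (osd.0 is a placeholder; repeat for a few OSDs backing the pool):
    ceph daemon osd.0 perf dump | grep bluestore_compressed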

[ceph-users] Re: Compression of data in existing cephfs EC pool

2021-01-04 Thread DHilsbos
Paul; I'm not familiar with rsync, but is it possible you're running into an issue where the copies end up shallow? In other words, is it possible that you're ending up with a hard link (2 directory entries pointing to the same initial inode) instead of a deep copy? I believe CephFS is impl
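A quick way to check for that (file names here are placeholders) is to compare inode numbers and link counts:
    stat -c '%i %h %n' original copy   # same inode number => hard link, not a new copy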

[ceph-users] Re: Compression of data in existing cephfs EC pool

2021-01-04 Thread Paul Mezzanini
I'm using rsync so I can have it copy times/permissions/ACLs etc. more easily. It also gives an informative one-line-per-file output. Actual copy line: rsync --owner --group --links --hard-links --perms --times --acls --itemize-changes "${DIRNAME}/${FILENAME}" "${DIRNAME}/.${FILENAME}.copy

[ceph-users] Re: Compression of data in existing cephfs EC pool

2021-01-04 Thread Thorbjørn Weidemann
I have not tried this myself, but could it be related to the compress_required_ratio mentioned here? https://books.google.dk/books?id=vuiLDwAAQBAJ&pg=PA80&lpg=PA80 Zip files probably can't be compressed much further. On Mon, Jan 4, 2021 at 9:44 PM Paul Mezzanini wrote: > I'm using rsync so I c
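If you want to see what the cluster is actually using, the per-pool option and the cluster-wide default can be queried roughly like this (the pool name is a placeholder; the pool-level value only shows up if it was set explicitly):
    ceph osd pool get cephfs_ec_data compression_required_ratio
    ceph config get osd bluestore_compression_required_ratio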

[ceph-users] Re: Compression of data in existing cephfs EC pool

2021-01-04 Thread Paul Mezzanini
Most are moderately to highly compressible files, which is why I'm trying so hard to figure this one out. On average I seem to hit about 50%, and the file sizes for what I care about compressing are from 10M up to around 1G. I'm looking at saving around 700T of space on this tier. Loving the i

[ceph-users] Re: Bluestore migration: per-osd device copy

2021-01-04 Thread Chris Dunlop
Just to follow up... I was able to use a per-osd device copy to migrate nearly 80 FileStore OSDs to BlueStore using a version of the script in: https://tracker.ceph.com/issues/47839 Cheers, Chris On Fri, Oct 09, 2020 at 12:05:32PM +1100, Chris Dunlop wrote: Hi, The docs have scant detail
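For anyone following along later, a quick way to confirm a migrated OSD really came back as BlueStore (the OSD id is a placeholder):
    ceph osd metadata 12 | grep osd_objectstore   # should report "bluestore"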