On 31/12/2020 09:10, Rainer Krienke wrote:
Yesterday my Ceph Nautilus 14.2.15 cluster had a disk with unreadable
sectors. After several tries the OSD was marked down; rebalancing
started and has since finished successfully. ceph osd stat now shows
the OSD as "autoout,exists".
Usually the step
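For reference, the usual follow-up once rebalancing is clean is to stop the
daemon and retire the failed OSD; a rough sketch, with osd.12 standing in for
the failed OSD's id (adjust to your cluster before running anything):

# confirm the OSD's state and where it sits in the CRUSH tree
ceph osd stat
ceph osd tree | grep -w osd.12

# the OSD is already auto-marked out, so 'ceph osd out' is a no-op here;
# stop the daemon and remove the OSD from the cluster before replacing the disk
systemctl stop ceph-osd@12
ceph osd out osd.12
ceph osd purge osd.12 --yes-i-really-mean-it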
total_time is calculated from the top of process_request() until the
bottom of process_request(). I know that's not hugely helpful, but it's
accurate.
This means it starts after the front-end passes the request off, and
counts until after a response is sent to the client. I'm not sure if it
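As a rough cross-check of what total_time covers, the latency a client observes
can be measured independently; it should normally come out a little higher,
since it also includes the frontend hand-off and network transfer. A minimal
illustration, with a made-up endpoint and an object that would have to be
readable without auth:

# client-observed wall time for a single GET against RGW
curl -s -o /dev/null -w 'client-side total: %{time_total}s\n' http://rgw.example.com:8080/mybucket/myobject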
Hey everyone,
I've got an EC pool as part of our cephfs for colder data. When we started
using it, compression was still marked experimental. Since then it has become
stable, so I turned compression on and set it to "aggressive". Using 'ceph df detail' I
can see that new data is getting compressed. We
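For context, turning this on is a couple of pool settings; a sketch, with
cephfs_ec_data as a stand-in for the actual pool name:

# set the compression mode (and, optionally, the algorithm) on the EC data pool
ceph osd pool set cephfs_ec_data compression_mode aggressive
ceph osd pool set cephfs_ec_data compression_algorithm snappy

# the POOLS section of 'ceph df detail' then reports compression usage
ceph df detail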
Hi,
On 1/4/21 5:27 PM, Paul Mezzanini wrote:
> Hey everyone,
> I've got an EC pool as part of our cephfs for colder data. When we started using it,
> compression was still marked experimental. Since then it has become stable, so I turned
> compression on and set it to "aggressive". Using 'ceph df detail' I can see that new
> data is getting compressed.
That does make sense, and I wish it were true; however, what I'm seeing doesn't
support your hypothesis. I've had several drives die and be replaced since the
go-live date, and I'm actually in the home stretch of reducing the pg_num on
that pool, so pretty much every PG has already been moved several times.
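One way to dig into that is to compare the pool-level settings against what the
OSDs themselves report; a sketch, again with cephfs_ec_data and osd.0 as
placeholders:

# confirm what the pool is actually configured with
ceph osd pool get cephfs_ec_data compression_mode
ceph osd pool get cephfs_ec_data compression_algorithm

# BlueStore keeps per-OSD compression counters (run on the OSD's host)
ceph daemon osd.0 perf dump | grep -i compress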
Paul;
I'm not familiar with rsync, but is it possible you're running into a system
issue of the copies being shallow?
In other words, is it possible that you're ending up with a hard-link (2
directory entries pointing to the same initial inode), instead of a deep copy?
I believe CephFS is impl
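That particular possibility is easy to rule out by comparing inode numbers and
link counts of the two names; a quick check along these lines (filenames are
placeholders):

# identical inode numbers, or a link count above 1, would point to a hard link
stat -c 'inode=%i links=%h name=%n' somefile .somefile.copy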
I'm using rsync so I can have it copy times/permissions/ACLs etc. more easily. It
also produces output that's one line per file and informative.
Actual copy line:
rsync --owner --group --links --hard-links --perms --times --acls \
  --itemize-changes "${DIRNAME}/${FILENAME}" "${DIRNAME}/.${FILENAME}.copy"
I have not tried this myself, but could it be related to the
compress_required_ratio mentioned here?
https://books.google.dk/books?id=vuiLDwAAQBAJ&pg=PA80&lpg=PA80
zip-files probably can't be compressed all that much.
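For reference, the BlueStore option behind that is
bluestore_compression_required_ratio (default 0.875): a chunk is only stored
compressed if compressed/original comes out at or below that ratio. If that were
the culprit, it could be inspected and cautiously relaxed roughly like this
(0.95 is only an example value):

# show the current value, then relax it for all OSDs
ceph config get osd bluestore_compression_required_ratio
ceph config set osd bluestore_compression_required_ratio 0.95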
On Mon, Jan 4, 2021 at 9:44 PM Paul Mezzanini wrote:
> I'm using rsync so I can have it copy times/permissions/ACLs etc. more easily.
Most are moderately to highly compressible files, which is why I'm trying so hard
to figure this one out. On average I seem to hit about 50%, and the file sizes
I care about compressing run from 10M up to around 1G. I'm looking at
saving around 700T of space on this tier.
Loving the i
Just to follow up...
I was able to use a per-osd device copy to migrate nearly 80 FileStore
osds to BlueStore using a version of the script in:
https://tracker.ceph.com/issues/47839
Cheers,
Chris
On Fri, Oct 09, 2020 at 12:05:32PM +1100, Chris Dunlop wrote:
> Hi,
> The docs have scant detail