Re: [ceph-users] CephFS quota

2016-08-14 Thread Willi Fehler
Hello guys, I found this in the documentation: "Quotas are not yet implemented in the kernel client. Quotas are supported by the userspace client (libcephfs, ceph-fuse) but are not yet implemented in the Linux kernel client." I missed this. Sorry. Regards - Willi. On 14.08.16 at 08:54 …
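
For reference, with a ceph-fuse mount quotas are set as extended attributes on a directory; a minimal sketch, assuming a mount point of /mnt/cephfs (path and values are just examples):

    # Set a 100 GB byte quota on a directory of a ceph-fuse mount
    # (directory and value are hypothetical examples)
    setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/mydir

    # Optionally cap the number of files as well
    setfattr -n ceph.quota.max_files -v 100000 /mnt/cephfs/mydir

    # Verify what is set
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/mydir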

Re: [ceph-users] CephFS quota

2016-08-14 Thread w...@42on.com
> On 14 Aug 2016 at 08:55, Willi Fehler wrote the following: > > Hello guys, > > my cluster is running on the latest Ceph version. My cluster and my client are running on CentOS 7.2. > > ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374) > > My client is using Ceph…
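
For anyone hitting the same limitation: mounting through the userspace client instead of the kernel client is what makes quotas take effect. A minimal sketch, assuming a hypothetical monitor address and mount point:

    # Mount CephFS via ceph-fuse so quota enforcement applies
    # (monitor host and mount point are placeholders)
    ceph-fuse -m mon1.example.com:6789 /mnt/cephfs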

[ceph-users] Substitute a predicted failure (not yet failed) osd

2016-08-14 Thread Goncalo Borges
Hi Cephers, I have a really simple question: the documentation always refers to the procedure for substituting failed disks. Currently I have a predicted failure on a RAID 0 OSD and I would like to replace it before it fails, without having to go through replicating PGs once the OSD is removed from crush…

Re: [ceph-users] what happen to the OSDs if the OS disk dies?

2016-08-14 Thread Christian Balzer
Hello, I shall top-quote and summarize here. Firstly, we have to consider that Ceph is deployed by people with a wide variety of needs, budgets and, most of all, cluster sizes. Wido has the pleasure (or is that a nightmare? ^o^) of dealing with a really huge cluster: thousands of OSDs and an accordingly large…

Re: [ceph-users] Substitute a predicted failure (not yet failed) osd

2016-08-14 Thread David Turner
If you are trying to reduce extra data movement, set and unset the nobackfill and norecover flags when you do the same for noout. You will want to follow the instructions to fully remove the OSD from the cluster, including outing the OSD, removing it from the crush map, removing its auth from the…
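
The usual removal sequence, wrapped in the flags David mentions, looks roughly like this (osd.12 is a hypothetical ID; the flags apply cluster-wide, the rest targets the OSD being replaced):

    # Suppress rebalancing while the disk is swapped
    ceph osd set noout
    ceph osd set nobackfill
    ceph osd set norecover

    # Fully remove the old OSD from the cluster (hypothetical id 12)
    ceph osd out 12
    systemctl stop ceph-osd@12
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12

    # ... recreate the OSD on the new disk, then clear the flags
    ceph osd unset norecover
    ceph osd unset nobackfill
    ceph osd unset noout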

Re: [ceph-users] Substitute a predicted failure (not yet failed) osd

2016-08-14 Thread Christian Balzer
Hello, if we go by the subject line, your data is still all there and valid (or at least mostly valid). Also, is that an actual RAID 0, with multiple drives? If so, why? That just massively increases your failure probability AND the amount of affected data when it fails. Anyway, if that OSD is…
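
Since the OSD is still healthy, one common approach (not necessarily what Christian goes on to describe here) is to drain it gracefully before removal by reweighting it to zero, so data migrates off while the old copies still serve reads:

    # Gradually drain a still-healthy OSD before removing it
    # (osd.12 is a hypothetical id)
    ceph osd crush reweight osd.12 0

    # Wait until all PGs are active+clean again, then remove the OSD as usual
    ceph -s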

Re: [ceph-users] Substitute a predicted failure (not yet failed) osd

2016-08-14 Thread Goncalo Borges
Hi Christian, > If we go by the subject line, your data is still all there and valid (or at least mostly valid). Also, is that an actual RAID 0, with multiple drives? If so, why? It's a RAID 0 of one disk. The controller we use only gives us that possibility for using single drives. Cheers G.