Re: [ceph-users] Directly addressing files on individual OSD

2017-03-19 Thread Youssef Eldakar
Thanks, Ronny, for the suggestion on adapting the cluster to Ceph. Youssef Eldakar Bibliotheca Alexandrina From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Ronny Aasen [ronny+ceph-us...@aasen.cx] Sent: Thursday, March 16, 2017 19:10 To: c

[ceph-users] Understanding Ceph in case of a failure

2017-03-19 Thread Karol Babioch
Hi, I have a few questions regarding Ceph in case of a failure. My setup consists of three monitors and two hosts, each of which hosts a couple of OSDs. Basically it looks like this: > root@max:~# ceph osd tree > ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY > -1 19.87860 root de

Re: [ceph-users] active+clean+inconsistent and pg repair

2017-03-19 Thread Mehmet
Hi Shain, what I would do: take osd.32 out # systemctl stop ceph-osd@32 # ceph osd out osd.32 This will cause rebalancing. To repair/reuse the drive you can do: # smartctl -t long /dev/sdX This will start a long self-test on the drive and - I bet - abort after a while with somethin
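
A minimal sketch of that drive-repair flow, assuming the failing disk is osd.32 and shows up as /dev/sdX (both placeholders taken from the message above):

  # systemctl stop ceph-osd@32      (stop the OSD daemon)
  # ceph osd out osd.32             (mark it out; Ceph starts rebalancing its PGs)
  # ceph -w                         (watch recovery until the cluster is healthy again)
  # smartctl -t long /dev/sdX       (run an extended SMART self-test on the suspect drive)
  # smartctl -a /dev/sdX            (inspect the result and reallocated/pending sector counts afterwards)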

Re: [ceph-users] RBD Mirror: Unable to re-bootstrap mirror daemons

2017-03-19 Thread Vaibhav Bhembre
Thanks Jason! The "set" option is quite handy! That did solve the problem and the daemons seem to be able to talk to their remote clusters. On Sat, Mar 18, 2017 at 7:40 PM, Jason Dillaman wrote: > The log shows that the rbd-mirror daemon is attempting to connect to > the cluster "ceph3" using the

Re: [ceph-users] Understanding Ceph in case of a failure

2017-03-19 Thread Ashley Merrick
It might be good if you could attach the full decompiled crushmap so we can see exactly how things are listed/set up. -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Karol Babioch Sent: 19 March 2017 20:42 To: ceph-users@lists.ceph.com Subject: [ceph

Re: [ceph-users] Upgrading 2K OSDs from Hammer to Jewel. Our experience

2017-03-19 Thread Simon Leinen
cephmailinglist writes: > e) find /var/lib/ceph/ ! -uid 64045 -print0|xargs -0 chown ceph:ceph > [...] > [...] Also at that time one of our pools got a lot of extra data; > those files were stored with root permissions since we had not > restarted the Ceph daemons yet, the 'find' in step e
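
The step quoted above maps to roughly the following, assuming the Debian/Ubuntu Jewel packages where the ceph user has uid 64045 and systemd targets are available (on upstart-based hosts the stop/start commands differ):

  # systemctl stop ceph-osd.target      (stop the daemons so no new root-owned files appear)
  # find /var/lib/ceph/ ! -uid 64045 -print0 | xargs -0 chown ceph:ceph
  # systemctl start ceph-osd.target     (restart the daemons as the ceph user)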

Re: [ceph-users] Snapshot Costs

2017-03-19 Thread Simon Leinen
Gregory Farnum writes: > On Tue, Mar 7, 2017 at 12:43 PM, Kent Borg wrote: >> I would love it if someone could toss out some examples of the sorts >> of things snapshots are good for and the sorts of things they are >> terrible for. (And some hints as to why, please.) > They're good for CephFS s
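
For readers who have not used the feature under discussion, a minimal RBD snapshot round trip looks roughly like this (the pool and image names are made-up examples):

  # rbd snap create rbd/myimage@before-change    (take a point-in-time snapshot)
  # rbd snap ls rbd/myimage                      (list snapshots of the image)
  # rbd snap rollback rbd/myimage@before-change  (roll the image back to the snapshot)
  # rbd snap rm rbd/myimage@before-change        (delete the snapshot when no longer needed)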

Re: [ceph-users] Ceph-osd Daemon Receives Segmentation Fault on Trusty After Upgrading to 0.94.10 Release

2017-03-19 Thread Wido den Hollander
> On 17 March 2017 at 8:39, Özhan Rüzgar Karaman wrote: > > > Hi; > Yesterday I started to upgrade my Ceph environment from 0.94.9 to 0.94.10. > All monitor servers upgraded successfully, but I experience problems > starting the upgraded OSD daemons. > > When I try to start a Ceph OSD Daemo

Re: [ceph-users] Snapshot Costs

2017-03-19 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Simon Leinen > Sent: 19 March 2017 17:23 > To: Gregory Farnum > Cc: ceph-users > Subject: Re: [ceph-users] Snapshot Costs > > Gregory Farnum writes: > > On Tue, Mar 7, 2017 at 12:43 PM, Kent

Re: [ceph-users] Understanding Ceph in case of a failure

2017-03-19 Thread Karol Babioch
Hi, just for the sake of completeness, here is my decompiled CRUSH map in case it is needed for further investigation: > # begin crush map > tunable choose_local_tries 0 > tunable choose_local_fallback_tries 0 > tunable choose_total_tries 50 > tunable chooseleaf_descend_once 1 > tunable chooseleaf_v
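
For reference, a decompiled map like the one quoted is usually produced (and, after editing, re-injected) along these lines:

  # ceph osd getcrushmap -o crushmap.bin          (dump the compiled CRUSH map from the cluster)
  # crushtool -d crushmap.bin -o crushmap.txt     (decompile to the text form shown above)
  # crushtool -c crushmap.txt -o crushmap.new     (recompile after any edits)
  # ceph osd setcrushmap -i crushmap.new          (inject the edited map back into the cluster)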

[ceph-users] OSDs will turn down after the fio test when Ceph use RDMA as its ms_type.

2017-03-19 Thread 邱宏瑋
Hi I want to test the performance of Ceph with RDMA, so I built Ceph with RDMA support and deployed it into my test environment manually. I use fio for my performance evaluation and it works fine if Ceph uses *async + posix* as its ms_type. After changing the ms_type from *async + posix* to *a
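
A sketch of the messenger switch being described, as a ceph.conf fragment; the device name is a made-up example and the exact RDMA options available depend on the Ceph build:

  [global]
  ms_type = async+rdma
  ms_async_rdma_device_name = mlx5_0

Note that the RDMA messenger also requires the daemons to be allowed to lock enough memory (e.g. a raised LimitMEMLOCK in their systemd units); hitting that limit is one common cause of OSDs dropping out under load.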

Re: [ceph-users] Understanding Ceph in case of a failure

2017-03-19 Thread Christian Balzer
Hello, you do realize that you very much have a corner-case setup there, right? Ceph works best and as expected when you have a replication of 3 and at least 3 OSD servers, with enough capacity (space) to handle the loss of one node. That being said, if you'd search the archives, a simi
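
As a concrete illustration of that recommendation, a pool's replica counts can be checked and raised like this (the pool name "rbd" is just an example):

  # ceph osd pool get rbd size        (current number of replicas)
  # ceph osd pool get rbd min_size    (minimum replicas required to serve I/O)
  # ceph osd pool set rbd size 3
  # ceph osd pool set rbd min_size 2

With only two OSD hosts and a CRUSH rule that places replicas on distinct hosts, size 3 cannot actually be satisfied, which is part of why this counts as a corner-case setup.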