Re: [ceph-users] Ceph not recovering after osd/host failure

2017-10-16 Thread Anthony Verevkin
Hi Matteo, This looks like the 'noout' flag might be set for your cluster. Please check it with: ceph osd dump | grep flags If you see the 'noout' flag is set, you can unset it with: ceph osd unset noout Regards, Anthony - Original Message - > From: "Matteo Dacrema" > To: ceph-users@lists
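
A minimal command sketch of the check-and-unset workflow described above; the commands are standard Ceph CLI, but the sample output line is illustrative only:

  # see which cluster-wide flags are currently set
  ceph osd dump | grep flags
  # e.g.: flags noout,sortbitwise,recovery_deletes

  # clear the flag so down OSDs can be marked out and recovery can proceed
  ceph osd unset noout

  # (the flag is normally set on purpose before planned maintenance)
  ceph osd set noout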

Re: [ceph-users] Brand new cluster -- pg is stuck inactive

2017-10-16 Thread Anthony Verevkin
Not sure if anyone has noticed this yet, but I see your osd tree does not include the host level - you get OSDs right under the root bucket. The default crush rule would make sure to allocate OSDs from different hosts - and there are no hosts in the hierarchy. An OSD would usually put itself under the hostna
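
A rough sketch of adding host buckets and re-homing OSDs under them, assuming hypothetical names node1 and osd.0 with a CRUSH weight of 1.0 (none of these are from the original thread):

  # create a host bucket and place it under the default root
  ceph osd crush add-bucket node1 host
  ceph osd crush move node1 root=default

  # put an OSD under its host bucket
  ceph osd crush create-or-move osd.0 1.0 host=node1 root=default

  # with the default 'osd crush update on start = true', OSDs normally
  # register themselves under their hostname automatically at startup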

Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]

2017-10-16 Thread Anthony Verevkin
> From: "Sage Weil" > To: "Alfredo Deza" > Cc: "ceph-devel" , ceph-users@lists.ceph.com > Sent: Monday, October 9, 2017 11:09:29 AM > Subject: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and > disk partition support] > > To put this in context, the goal here is to kill ceph-

Re: [ceph-users] Ceph cluster network bandwidth?

2017-11-20 Thread Anthony Verevkin
> From: "John Spray" > Sent: Thursday, November 16, 2017 11:01:35 AM > > On Thu, Nov 16, 2017 at 3:32 PM, David Turner > wrote: > > That depends on another question. Does the client write all 3 > > copies or > > does the client send the copy to the primary OSD and then the > > primary OSD > >

Re: [ceph-users] Cluster Security

2018-09-24 Thread Anthony Verevkin
It is not quite clear to me what you are trying to achieve. If you want to separate the hypervisors from Ceph, that would not give you much. The HV is a man-in-the-middle anyway, so it would be able to tap into the traffic whatever you do. iSCSI won't help you here. Also you would probably need to let the HV
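
One thing that does help regardless of topology is scoping each hypervisor's cephx key to the pools it actually needs; a minimal sketch, assuming a hypothetical client.hv1 key and an RBD pool called vms (names not from the thread):

  ceph auth get-or-create client.hv1 mon 'profile rbd' osd 'profile rbd pool=vms'

This at least keeps a compromised HV from reading or clobbering other pools, even though it cannot stop the HV from seeing its own guests' traffic.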

[ceph-users] Ceph replication factor of 2

2018-05-23 Thread Anthony Verevkin
This week at the OpenStack Summit Vancouver I have heard people entertaining the idea of running Ceph with a replication factor of 2. Karl Vietmeier of Intel suggested that we use 2x replication because Bluestore comes with checksums. https://www.openstack.org/summit/vancouver-2018/summit-schedule/ev
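
For reference, the settings being debated here are per-pool; a minimal sketch, assuming a hypothetical pool named rbd (not from the talk):

  ceph osd pool set rbd size 2
  # min_size 1 means a single surviving copy still accepts writes - this is
  # where most of the risk in a 2x setup comes from
  ceph osd pool set rbd min_size 1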

Re: [ceph-users] Offsite replication scenario

2019-01-16 Thread Anthony Verevkin
I would definitely see huge value in going to 3 MONs here (and btw 2 on-site MGRs and 2 on-site MDSs). However, 350Kbps is quite low and MONs may be latency-sensitive, so I suggest you apply heavy QoS if you want to use that link for ANYTHING else. If you do so, make sure your clients are only listing t
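
A minimal sketch of keeping clients pointed only at the on-site monitors via their ceph.conf, with placeholder addresses (not from the thread):

  [global]
  # list only the local-site MONs so clients never reach across the slow link
  mon_host = 10.0.0.1,10.0.0.2,10.0.0.3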