Re: [ceph-users] osd not removed from crush map after ceph osd crush remove

2016-02-22 Thread Stillwell, Bryan
Dimitar, I'm not sure why those PGs would be stuck in the stale+active+clean state. Maybe try upgrading to the 0.80.11 release to see if it's a bug that has already been fixed? You can use the 'ceph tell osd.* version' command after the upgrade to make sure all OSDs are running the new version. A…
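For readers following up in the archive, the version check Bryan mentions is a one-liner; a minimal sketch (output omitted here, and its exact format varies by release):

    # Ask every running OSD daemon which version it is on after the upgrade
    $ ceph tell osd.* version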

Re: [ceph-users] Rack weight imbalance

2016-02-22 Thread Gregory Farnum
On Mon, Feb 22, 2016 at 9:29 AM, George Mihaiescu wrote: > Hi, > > We have a fairly large Ceph cluster (3.2 PB) that we want to expand and we > would like to get your input on this. > > The current cluster has around 700 OSDs (4 TB and 6 TB) in three racks with > the largest pool being rgw and usi…

[ceph-users] Ceph Tech Talk on Thurs

2016-02-22 Thread Patrick McGarry
Hey cephers, just a reminder that this month’s Ceph Tech Talk will be Thursday at 1 p.m. EST. This month we have a development update on CephFS as we approach the Jewel release and the migration of CephFS from “nearly awesome” to “fully awesome!” Don’t miss out: http://ceph.com/ceph-tech-talks/ …

[ceph-users] Rack weight imbalance

2016-02-22 Thread George Mihaiescu
Hi, We have a fairly large Ceph cluster (3.2 PB) that we want to expand, and we would like to get your input on this. The current cluster has around 700 OSDs (4 TB and 6 TB) in three racks, with the largest pool being rgw and using replica 3. For non-technical reasons (budgetary, etc.) we are cons…
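For those comparing rack weights in their own clusters, a quick sketch of how to inspect and adjust them with the standard CRUSH tooling (the OSD id and weight below are made-up examples, not values from this thread):

    # Show the CRUSH hierarchy, including per-rack and per-OSD weights
    $ ceph osd tree
    # Reweight a single OSD if one rack ends up carrying too much data
    $ ceph osd crush reweight osd.42 5.46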

Re: [ceph-users] OSD Crash with scan_min and scan_max values reduced

2016-02-22 Thread M Ranga Swami Reddy
So basically the issue is http://tracker.ceph.com/issues/4698 (osd suicide timeout). On Mon, Feb 22, 2016 at 7:06 PM, M Ranga Swami Reddy wrote: > Hello, > I have reduced the scan_min and scan_max as below. After the below > change, during the scrubbing, got the op_tp_thread time out after 15. > Aft…
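As a stopgap while chasing that tracker issue, the timeouts involved can be raised at runtime; a sketch assuming Hammer-era option names, with illustrative values rather than recommendations from this thread:

    # Raise the op thread timeout (default 15s) and suicide timeout (default 150s)
    $ ceph tell osd.* injectargs '--osd-op-thread-timeout 60'
    $ ceph tell osd.* injectargs '--osd-op-thread-suicide-timeout 300'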

[ceph-users] OSD Crash with scan_min and scan_max values reduced

2016-02-22 Thread M Ranga Swami Reddy
Hello, I have reduced osd_backfill_scan_min and osd_backfill_scan_max as below. After this change, during scrubbing, the op_tp_thread timed out after 15 seconds, and after some time the OSDs crashed as well. Any suggestions would be helpful. Thank you. == -osd_backfill_scan_min = 64 -osd_backfill_scan_max = 512 +osd_bac…
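The diff above is cut off by the archive, so the new values are unknown; a reconstruction of the kind of ceph.conf change being described, with placeholder numbers standing in for the truncated reduced values:

    [osd]
    # previous values, as quoted in the diff
    #osd_backfill_scan_min = 64
    #osd_backfill_scan_max = 512
    # reduced values -- placeholders, the real numbers are truncated above
    osd_backfill_scan_min = 16
    osd_backfill_scan_max = 128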

[ceph-users] Ceph geo-replication

2016-02-22 Thread Alexandr Porunov
Is it possible to replicate objects not to all regions, but only to the nearest region or to specific regions?
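For context, in the pre-Jewel federated RGW design, replication ran between explicitly configured zones (driven by radosgw-agent), so in practice you choose which zones sync rather than replicating everywhere. A heavily abridged sketch of loading a region definition; every name and endpoint below is made up:

    # region.json lists the zones that participate in this region
    $ cat > region.json <<'EOF'
    { "name": "us",
      "master_zone": "us-east",
      "zones": [
        { "name": "us-east", "endpoints": ["http://east.example.com/"], "log_data": "true" },
        { "name": "us-west", "endpoints": ["http://west.example.com/"], "log_data": "true" }
      ]
    }
    EOF
    $ radosgw-admin region set --infile region.json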

Re: [ceph-users] Cannot reliably create snapshot after freezing QEMU IO

2016-02-22 Thread Saverio Proto
Hello Jason, from this email on ceph-devel http://article.gmane.org/gmane.comp.file-systems.ceph.devel/29692 it looks like 0.94.6 is coming out very soon. We will skip testing the unreleased packages, then, and wait for the official release. Thank you, Saverio. 2016-02-19 18:53 GMT+01:00 Jason Dillam…

Re: [ceph-users] osd not removed from crush map after ceph osd crush remove

2016-02-22 Thread Dimitar Boichev
Anyone? Regards. From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Dimitar Boichev Sent: Thursday, February 18, 2016 5:06 PM To: ceph-users@lists.ceph.com Subject: [ceph-users] osd not removed from crush map after ceph osd crush remove Hello, I am running a tiny cluster…
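For anyone who finds this thread later: the usual complete removal sequence, so the OSD disappears from both the CRUSH map and the OSD map (N is a placeholder for the OSD id):

    $ ceph osd crush remove osd.N   # drop it from the CRUSH hierarchy
    $ ceph auth del osd.N           # delete its cephx key
    $ ceph osd rm N                 # remove it from the OSD map
    $ ceph osd tree                 # verify it is gone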