Re: [ceph-users] Understanding High Availability - iSCSI/CIFS/NFS

2015-04-05 Thread Nick Fisk
Hi Justin, I'm doing iSCSI HA. Several others and I have had trouble with LIO and Ceph, so until those problems are fixed I wouldn't recommend that approach, though it will hopefully become the best solution in the future. If you need iSCSI, currently the best method is probably: Shared IP
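
For illustration only, one way to realise that shared-IP idea (the resource names, IQN and address below are placeholders, not Nick's actual setup) is a Pacemaker pair of gateway nodes exporting RBD images via tgt, with a floating IP and the target resource pinned to the same node:

    # floating service address the initiators log in to (address is hypothetical)
    pcs resource create iscsi_vip ocf:heartbeat:IPaddr2 \
        ip=192.168.0.100 cidr_netmask=24 op monitor interval=10s
    # tgt-backed target that follows the address; the RBD backstore/LUN
    # definitions are omitted here for brevity
    pcs resource create iscsi_tgt ocf:heartbeat:iSCSITarget \
        iqn=iqn.2015-04.com.example:rbd-gw implementation=tgt
    pcs constraint colocation add iscsi_tgt with iscsi_vip INFINITY
    pcs constraint order iscsi_vip then iscsi_tgt

If the active gateway dies, Pacemaker brings the address and the target up on the surviving node and the initiators simply reconnect to the same IP.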

Re: [ceph-users] Understanding High Availability - iSCSI/CIFS/NFS

2015-04-05 Thread Ric Wheeler
On 04/05/2015 11:22 AM, Nick Fisk wrote: Hi Justin, I'm doing iSCSI HA. Several others and I have had trouble with LIO and Ceph, so until those problems are fixed I wouldn't recommend that approach, though it will hopefully become the best solution in the future. If you need iSCSI, currently

Re: [ceph-users] OSD auto-mount after server reboot

2015-04-05 Thread Loic Dachary
On 04/04/2015 22:09, shiva rkreddy wrote: > Hi, I'm currently testing Firefly 0.80.9 and noticed that OSDs are not auto-mounted after a server reboot. It used to auto-mount with Firefly 0.80.7. The OS is RHEL 6.5. There was another thread earlier on this topic with v0.80.8; the suggestion was
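
(As a stop-gap while udev activation is not working, the OSDs can be brought up by hand after boot; the device name and OSD id below are placeholders:)

    # activate one OSD data partition manually: mounts it under
    # /var/lib/ceph/osd/ceph-<id> and starts the daemon
    ceph-disk activate /dev/sdb1
    # or, if the filesystem is already mounted in the right place:
    service ceph start osd.0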

[ceph-users] Ceph Code Coverage

2015-04-05 Thread Rajesh Raman
Hi All, Has anyone run a code coverage report on Ceph recently using Teuthology? (Some old reports from Loic's blog, taken in Jan 2013, are here, but I am interested in the latest runs if anyone has run one with Teuthology.) Thanks and Regards

Re: [ceph-users] Ceph Code Coverage

2015-04-05 Thread Loic Dachary
Hi, On 05/04/2015 18:32, Rajesh Raman wrote: > Hi All, has anyone run a code coverage report on Ceph recently using Teuthology? (Some old reports from Loic's blog, taken in Jan 2013, are here, but I am interested in the latest
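
Outside of teuthology, a generic gcov/lcov run against an autotools build of that era gives a rough coverage report (the flags and paths below are an illustrative sketch, not the teuthology workflow):

    ./autogen.sh
    ./configure CXXFLAGS="-g -O0 --coverage" LDFLAGS="--coverage"
    make && make check                      # or whatever test target you care about
    lcov --capture --directory . --output-file ceph.info
    genhtml ceph.info --output-directory coverage-html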

[ceph-users] Rebalance after empty bucket addition

2015-04-05 Thread Andrey Korolyov
Hello, after reaching a certain ceiling of host/PG ratio, moving an empty bucket in causes a small rebalance: ceph osd crush add-bucket 10.10.2.13; ceph osd crush move 10.10.2.13 root=default rack=unknownrack. I have two pools; one is very large and keeps a proper amount of PGs per OSD, but another
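
(Not from the thread, but one way to quantify such a rebalance before applying it is to simulate the PG mappings with crushtool on the old and edited maps and diff them; the rule id and replica count below are placeholders:)

    ceph osd getcrushmap -o crush.bin
    crushtool -i crush.bin --test --rule 0 --num-rep 3 --show-mappings > before.txt
    # decompile, add the empty bucket by hand, recompile, then re-run the test
    crushtool -d crush.bin -o crush.txt
    crushtool -c crush.txt -o crush.new
    crushtool -i crush.new --test --rule 0 --num-rep 3 --show-mappings > after.txt
    diff before.txt after.txt | wc -l       # rough measure of how many mappings move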

Re: [ceph-users] Slow performance during recovery operations

2015-04-05 Thread Francois Lafont
Hi, Lionel Bouton wrote: > Sorry this wasn't clear: I tried the ioprio settings before disabling the deep scrubs and it didn't seem to make a difference when deep scrubs occurred. I have never tested these parameters (osd_disk_thread_ioprio_priority and osd_disk_thread_ioprio_class), but did you check that the I/O scheduler of the disks is cfq?
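
(For reference, a hedged example of setting those two options at runtime; the values are illustrative, not a recommendation from this thread:)

    # deprioritise the OSD disk thread (deep scrub, snap trimming);
    # only honoured when the data disk's I/O scheduler is cfq
    ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle --osd_disk_thread_ioprio_priority 7'
    # the equivalent ceph.conf keys live under [osd]:
    #   osd_disk_thread_ioprio_class / osd_disk_thread_ioprio_priority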

Re: [ceph-users] OSD auto-mount after server reboot

2015-04-05 Thread shiva rkreddy
We currently have two OSDs configured on this system running RHEL 6.5, sharing an SSD drive as their journal device. udevadm trigger --sysname-match=sdb and udevadm trigger --sysname-match=/dev/sdb both return without any output. The same thing happens on Ceph 0.80.7, where the mounts and services are started automatically
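
(A hedged diagnostic, with placeholder device names: udevadm trigger is silent even when it works, so a more telling check is to trigger the partitions themselves and confirm they still carry the GPT partition type that Ceph's udev rules key on:)

    # re-send add events for the partitions, not just the parent disk
    udevadm trigger --action=add --sysname-match='sdb[0-9]*'
    # show the partition type GUID for partition 1
    sgdisk --info=1 /dev/sdb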

Re: [ceph-users] Slow performance during recovery operations

2015-04-05 Thread Lionel Bouton
Hi, On 04/06/15 02:26, Francois Lafont wrote: > Hi, Lionel Bouton wrote: >> Sorry this wasn't clear: I tried the ioprio settings before disabling the deep scrubs and it didn't seem to make a difference when deep scrubs occurred. > I have never tested these parameters (osd_disk_thread_ioprio_priority

Re: [ceph-users] Slow performance during recovery operations

2015-04-05 Thread Francois Lafont
On 04/06/2015 02:54, Lionel Bouton wrote: >> I have never tested these parameters (osd_disk_thread_ioprio_priority and osd_disk_thread_ioprio_class), but did you check that the I/O scheduler of the disks is cfq? > Yes I did. Ah ok. It was just in case. :) >> Because, if I understand we
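
(Placeholder device name; checking and, if needed, switching the scheduler at runtime looks like this; note the echo is not persistent across reboots:)

    cat /sys/block/sdb/queue/scheduler         # the active scheduler is shown in brackets
    echo cfq > /sys/block/sdb/queue/scheduler  # switch this disk to cfq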

[ceph-users] UnSubscribe Please

2015-04-05 Thread JIten Shah