Re: [ceph-users] Understanding High Availability - iSCSI/CIFS/NFS

2015-04-05 Thread Nick Fisk
Hi Justin,

I'm doing iSCSI HA. Several others and I have had trouble with LIO and
Ceph, so until those problems are fixed I wouldn't recommend that approach.
But hopefully it will become the best solution in the future.

If you need iSCSI, the best method currently is probably shared IP
failover + TGT with an RBD backend.

By default TGT won't be able to fail over between cluster nodes if there are
IOs in progress, so you might need to get creative to overcome this.
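
For reference, a minimal /etc/tgt/targets.conf sketch for tgt's rbd backing
store could look roughly like this (the IQN, pool and image names are
placeholders, and it assumes a tgt build compiled with the rbd backstore):

  <target iqn.2015-04.com.example:rbd-gw>
      driver iscsi
      # serve a RADOS pool/image instead of a local block device
      bs-type rbd
      backing-store rbd/iscsi-image
      initiator-address ALL
  </target>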

The other option, if it meets your requirements, is NFS with a shared
failover IP.
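
As a rough sketch of the shared-IP part, a keepalived VRRP instance on each
gateway node is one common way to do it (the interface, router id and
address below are placeholders; Pacemaker/Corosync is the other usual choice):

  vrrp_instance CEPH_GW {
      state BACKUP
      interface eth0
      virtual_router_id 51
      # give one node a higher priority so it normally holds the IP
      priority 100
      virtual_ipaddress {
          # the address the iSCSI/NFS clients actually connect to
          192.168.0.50/24
      }
  }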

Nick


> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Eric Eastman
> Sent: 05 April 2015 02:37
> To: Justin Chin-You
> Cc: Ceph Users
> Subject: Re: [ceph-users] Understanding High Availability - iSCSI/CIFS/NFS
> 
> You may want to look at the Clustered SCSI Target Using RBD Status
> Blueprint, Etherpad and video at:
> 
> https://wiki.ceph.com/Planning/Blueprints/Hammer/Clustered_SCSI_target_using_RBD
> http://pad.ceph.com/p/I-scsi
> https://www.youtube.com/watch?v=quLqLnWF6A8&index=7&list=PLrBUGiINAakNGDE42uLyU2S1s_9HVevK-
> 
> Eric
> 
> 
> On Sat, Apr 4, 2015 at 7:30 AM, Justin Chin-You wrote:
> Hi All,
> 
> Hoping someone can help me understand Ceph HA or point me to a doc I
> missed.
> 
> I understand how Ceph HA itself works with regard to PGs, OSDs and
> monitors. However, what isn't clear to me is failover for things like
> iSCSI and the not-yet-production-ready CIFS/NFS.
> 
> Scenario:
> I have two servers that are peered and running Ceph, and I am replicating
> between both. On top of Ceph I have iSCSI targets and CIFS/NFS stores.
> 
> In the event a server fails, how are iSCSI initiators and CIFS/NFS clients
> redirected? I am assuming multipath and virtual IPs, but I can't figure out
> whether this is something I need to configure/run on the OS side or whether
> it is handled in Ceph itself.
> 
> Any help appreciated!
> 
> Thanks!
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com






___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Understanding High Availability - iSCSI/CIFS/NFS

2015-04-05 Thread Ric Wheeler

On 04/05/2015 11:22 AM, Nick Fisk wrote:

> Hi Justin,
>
> I'm doing iSCSI HA. Several others and I have had trouble with LIO and
> Ceph, so until those problems are fixed I wouldn't recommend that approach.
> But hopefully it will become the best solution in the future.
>
> If you need iSCSI, the best method currently is probably shared IP
> failover + TGT with an RBD backend.
>
> By default TGT won't be able to fail over between cluster nodes if there are
> IOs in progress, so you might need to get creative to overcome this.
>
> The other option, if it meets your requirements, is NFS with a shared
> failover IP.
>
> Nick


Mike Christie has been actively working on iSCSI support with HA. Worth 
comparing notes :)


ric







___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] OSD auto-mount after server reboot

2015-04-05 Thread Loic Dachary


On 04/04/2015 22:09, shiva rkreddy wrote:
> HI,
> I'm currently testing Firefly 0.80.9 and noticed that OSD are not 
> auto-mounted after server reboot.
> It used to mount auto with Firefly 0.80.7.  OS is RHEL 6.5.
> 
> There was another thread earlier on this topic with v0.80.8; the suggestion
> was to add mount points to /etc/fstab.
> 
> The question is whether the 0.80.7 behaviour could return, or whether it
> needs to be done via /etc/fstab or something else?

It should work without adding lines to /etc/fstab. Could you give more details
about your setup? Could you try

udevadm trigger --sysname-match=sdb

if an OSD is managing /dev/sdb. Does that mount the OSD? It would also be
useful to have the output of ls -l /dev/disk/by-partuuid
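
If udev doesn't pick the partitions up on its own, a manual workaround along
these lines usually gets the OSDs mounted (a sketch; the device names are
only examples):

  # re-run the udev rules for every sd* block device
  udevadm trigger --action=add --sysname-match='sd*'

  # or activate the OSD data partition directly; ceph-disk mounts it under
  # /var/lib/ceph/osd/ and starts the daemon
  ceph-disk activate /dev/sdb1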

Cheers

-- 
Loïc Dachary, Artisan Logiciel Libre



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph Code Coverage

2015-04-05 Thread Rajesh Raman
Hi All,



Has anyone executed a code coverage run on Ceph recently using Teuthology?
(Some old reports from Loic's blog, taken in Jan 2013, are here, but I am
interested in the latest runs if anyone has run one using Teuthology.)



Thanks and Regards,

Rajesh Raman





PLEASE NOTE: The information contained in this electronic mail message is 
intended only for the use of the designated recipient(s) named above. If the 
reader of this message is not the intended recipient, you are hereby notified 
that you have received this message in error and that any review, 
dissemination, distribution, or copying of this message is strictly prohibited. 
If you have received this communication in error, please notify the sender by 
telephone or e-mail (as shown above) immediately and destroy any and all copies 
of this message in your possession (whether hard copies or electronically 
stored copies).

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Code Coverage

2015-04-05 Thread Loic Dachary
Hi,

On 05/04/2015 18:32, Rajesh Raman wrote:
> Hi All,
>
> Has anyone executed a code coverage run on Ceph recently using Teuthology?
> (Some old reports from Loic's blog, taken in Jan 2013, are here, but I am
> interested in the latest runs if anyone has run one using Teuthology.)

Not to my knowledge. The logic is still there and it would be great to use it
and publish coverage reports. Another idea would be to use the coverage data to
match tests with the lines of code they cover. That would then allow us to know
which jobs to run to exercise a modification that touches those lines.
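
For what it's worth, outside of teuthology a plain gcov/lcov pass over a
coverage-enabled build boils down to something like this (paths and file
names are illustrative, not the exact teuthology workflow):

  # after building with coverage instrumentation and running the tests:
  lcov --capture --directory src --output-file ceph.info
  genhtml ceph.info --output-directory coverage-html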

Cheers

> 
>  
> 
> Thanks and Regards,
> 
> Rajesh Raman
> 
>  
> 
> 
> 
> 
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

-- 
Loïc Dachary, Artisan Logiciel Libre



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Rebalance after empty bucket addition

2015-04-05 Thread Andrey Korolyov
Hello,

after reaching a certain ceiling in the host/PG ratio, moving an empty bucket
in causes a small rebalance:

ceph osd crush add-bucket 10.10.2.13 host
ceph osd crush move 10.10.2.13 root=default rack=unknownrack

I have two pools. One is very large and keeps a proper PG/OSD ratio, but the
other in fact contains fewer PGs than the number of active OSDs, and after
insertion of the empty bucket it goes through a rebalance, even though the
actual placement map is not changed. Keeping in mind that this case is very
far from anything resembling a sane production configuration, is this
expected behaviour?
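
For reference, one way to check whether the PG-to-OSD mappings really stay
the same is to dump them before and after the bucket insertion and diff the
results (a sketch using the same commands as above):

  ceph pg dump pgs_brief | sort > /tmp/pgs.before
  ceph osd crush add-bucket 10.10.2.13 host
  ceph osd crush move 10.10.2.13 root=default rack=unknownrack
  ceph pg dump pgs_brief | sort > /tmp/pgs.after
  # any output here means the up/acting sets did change
  diff /tmp/pgs.before /tmp/pgs.after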

Thanks!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Slow performance during recovery operations

2015-04-05 Thread Francois Lafont
Hi,

Lionel Bouton wrote :

> Sorry this wasn't clear: I tried the ioprio settings before disabling
> the deep scrubs and it didn't seem to make a difference when deep scrubs
> occured.

I have never tested these parameters (osd_disk_thread_ioprio_priority and
osd_disk_thread_ioprio_class), but did you check that the I/O scheduler of
the disks is cfq? Because, if I understand correctly, these parameters have no
effect if the I/O scheduler is not cfq, and, for instance, in Ubuntu 14.04
the I/O scheduler is deadline by default (not cfq).
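
For reference, checking and switching the scheduler at runtime, plus the
ceph.conf side, looks roughly like this (sdb and the idle/7 values are only
examples):

  cat /sys/block/sdb/queue/scheduler         # prints e.g. "noop [deadline] cfq"
  echo cfq > /sys/block/sdb/queue/scheduler  # switch that disk to cfq on the fly

  # ceph.conf, [osd] section -- only honoured when the disk scheduler is cfq
  osd_disk_thread_ioprio_class = idle
  osd_disk_thread_ioprio_priority = 7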

By the way, even if I don't use these parameters, should I use the cfq I/O
scheduler instead of deadline? Is there a best I/O scheduler for Ceph?

-- 
François Lafont
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] OSD auto-mount after server reboot

2015-04-05 Thread shiva rkreddy
We currently have two OSDs configured on this system running RHEL 6.5,
sharing an SSD drive as the journal device.

Both udevadm trigger --sysname-match=sdb and udevadm trigger
--sysname-match=/dev/sdb return without any output. The same thing happens on
Ceph 0.80.7, where the mounts and services are started automatically.

The output paste from Ceph 0.80.9 is: http://pastebin.com/1Yqntadi


On Sun, Apr 5, 2015 at 11:22 AM, Loic Dachary  wrote:

>
>
> On 04/04/2015 22:09, shiva rkreddy wrote:
> > Hi,
> > I'm currently testing Firefly 0.80.9 and noticed that the OSDs are not
> > auto-mounted after a server reboot.
> > They used to mount automatically with Firefly 0.80.7. The OS is RHEL 6.5.
> >
> > There was another thread earlier on this topic with v0.80.8; the suggestion
> > was to add mount points to /etc/fstab.
> >
> > The question is whether the 0.80.7 behaviour could return, or whether it
> > needs to be done via /etc/fstab or something else?
>
> It should work without adding lines to /etc/fstab. Could you give more
> details about your setup? Could you try
>
> udevadm trigger --sysname-match=sdb
>
> if an OSD is managing /dev/sdb. Does that mount the OSD? It would also be
> useful to have the output of ls -l /dev/disk/by-partuuid
>
> Cheers
>
> --
> Loïc Dachary, Artisan Logiciel Libre
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Slow performance during recovery operations

2015-04-05 Thread Lionel Bouton
Hi,

On 04/06/15 02:26, Francois Lafont wrote:
> Hi,
>
> Lionel Bouton wrote :
>
>> Sorry this wasn't clear: I tried the ioprio settings before disabling
>> the deep scrubs and it didn't seem to make a difference when deep scrubs
>> occured.
> I have never tested these parameters (osd_disk_thread_ioprio_priority and
> osd_disk_thread_ioprio_class), but did you check that the I/O scheduler of
> the disks is cfq?

Yes I did.

>  Because, if I understand correctly, these parameters have no
> effect if the I/O scheduler is not cfq

AFAIK cfq is the only elevator supporting priorities in the mainline kernel.

>  and, for instance, in Ubuntu 14.04
> the I/O scheduler is deadline by default (not cfq).
>
> By the way, even if I don't use these parameters, should I use cfq I/O
> scheduler instead of deadline? Is there a best I/O scheduler for Ceph?

It probably depends on your actual workload. It may depend on your
hardware too: some benchmarks have shown cfq to perform faster on HDDs but
slower than deadline or noop on SSDs.

--
Lionel Bouton
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Slow performance during recovery operations

2015-04-05 Thread Francois Lafont
On 04/06/2015 02:54, Lionel Bouton wrote:

>> I have never tested these parameters (osd_disk_thread_ioprio_priority and
>> osd_disk_thread_ioprio_class), but did you check that the I/O scheduler of
>> the disks is cfq?
> 
> Yes I did.

Ah ok. It was just in case. :)

>>  Because, if I understand correctly, these parameters have no
>> effect if the I/O scheduler is not cfq
> 
> AFAIK cfq is the only elevator supporting priorities in the mainline kernel.

Ok.

>> By the way, even if I don't use these parameters, should I use cfq I/O
>> scheduler instead of deadline? Is there a best I/O scheduler for Ceph?
> 
> It probably depends on your actual workload. It may depend on your
> hardware too: some benchmarks have shown cfq to perform faster on HDDs but
> slower than deadline or noop on SSDs.

Interesting...

I have only HDDs (no SSDs) and currently the I/O scheduler is "deadline"
(the default scheduler on my Ubuntu Trusty). So I'll think about
switching from "deadline" to "cfq"...
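
For reference, echoing into /sys only lasts until reboot; a udev rule (or
elevator=cfq on the kernel command line) makes the choice persistent. A
sketch, with the rule path and disk match only as examples:

  # /etc/udev/rules.d/60-io-scheduler.rules
  ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="cfq"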

Thanks for your answer Lionel. ;)

-- 
François Lafont
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] UnSubscribe Please

2015-04-05 Thread JIten Shah



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com