You may want to look at the Clustered SCSI Target Using RBD Status
Blueprint, Etherpad and video at:
https://wiki.ceph.com/Planning/Blueprints/Hammer/Clustered_SCSI_target_using_RBD
http://pad.ceph.com/p/I-scsi
https://www.youtube.com/watch?v=quLqLnWF6A8&index=7&list=PLrBUGiINAakNGDE42uLyU2S1s_9HV
Thank you all!!
This all makes more sense now. I think I know the direction we're heading.
Justin
On Apr 4, 2015 6:18 PM, "Don Doerner" wrote:
Hi Justin,
Ceph proper does not provide those services. Ceph does provide Linux block
devices (look for RADOS Block Devices, aka RBD) and a filesystem, CephFS.
I don't know much about the filesystem, but the block devices are present on an
RBD client that you set up, following the instructi
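The client-side RBD workflow described above might be sketched as below; the pool defaults, image name, and mount point are assumptions for illustration, not details from the thread:

```shell
# Hypothetical sketch: create an RBD image, map it on a client host, and use
# it as an ordinary Linux block device. "vol1" and /mnt/vol1 are made up here.
rbd create vol1 --size 10240        # 10 GiB image in the default pool
sudo rbd map vol1                   # exposes it as e.g. /dev/rbd0
sudo mkfs.xfs /dev/rbd0             # format like any other block device
sudo mount /dev/rbd0 /mnt/vol1
```

These are cluster-side commands, so the exact device name (`/dev/rbd0`) depends on what is already mapped on the client.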
Key problem resolved by actually installing (as opposed to simply configuring)
the EPEL repo. And with that, the cluster became viable. Thanks all.
-don-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Don
Doerner
Sent: 04 April, 2015 09:47
To: ceph-us...@ceph.com
Sub
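Don's fix (actually installing EPEL rather than only copying its repo file) might look like this on RHEL 7; the release-package URL is a detail not stated in the thread and is an assumption of this sketch:

```shell
# Sketch: install the EPEL release RPM itself, not just a copied .repo file,
# so the repository's GPG key is installed along with it (URL is assumed).
sudo yum install -y \
  https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo yum makecache
```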
Hi,
I'm currently testing Firefly 0.80.9 and noticed that OSDs are not
auto-mounted after a server reboot.
They used to mount automatically with Firefly 0.80.7. The OS is RHEL 6.5.
There was another thread earlier on this topic with v0.80.8; the suggestion was
to add the mount points to /etc/fstab.
Question is whether th
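The /etc/fstab suggestion from that earlier thread could look like the fragment below; the device and OSD id are assumptions for illustration, not values from the thread:

```
# Hypothetical /etc/fstab entry mounting one OSD data partition at boot
# (device path and OSD number are made up for this sketch):
/dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  defaults,noatime  0 0
```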
On 04/04/2015 03:30 PM, Justin Chin-You wrote:
Hi Justin,
I could be wrong on this, but you're having to use a Ceph gateway
rather than interacting natively with the cluster, right? If so, then the
only way you'd really be able to get HA would be to install a load
balancer in front of multiple gateways. Under normal conditions when
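A load balancer in front of multiple gateways, as suggested above, might be sketched as an HAProxy TCP front end; the hostnames, addresses, and port below are assumptions for illustration, not details from the thread:

```
# Hypothetical haproxy.cfg fragment: TCP balancing across two gateway hosts.
frontend gw_front
    bind *:3260
    mode tcp
    default_backend gw_back

backend gw_back
    mode tcp
    balance roundrobin
    server gw1 192.168.0.11:3260 check
    server gw2 192.168.0.12:3260 check
```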
OK, apparently it's also a good idea to install EPEL, not just copy over the
repo configuration from another installation.
That resolved the key error, and it appears that I have it all installed.
-don-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Don
Doerner
Sent: 04
Folks,
I am having a hard time setting up a fresh install of Giant on a fresh install
of RHEL 7 - which you would think would be about the easiest of all situations...
1. Using ceph-deploy 1.5.22 - for some reason it never updates
/etc/yum.repos.d to include all of the various Ceph repo
Hi All,
Hoping someone can help me understand Ceph HA or point me in the direction
of a doc I missed.
I understand how Ceph HA itself works with regard to PGs, OSDs and monitoring.
However, what isn't clear to me is failover for things like
iSCSI and the not yet production ready CIFS/N
On Apr 3, 2015, at 12:37 AM, LOPEZ Jean-Charles wrote:
> According to your ceph osd tree capture, although the OSD reweight is set to
> 1, the OSD CRUSH weight is set to 0 (2nd column). You need to assign the OSD
> a CRUSH weight so that it can be selected by CRUSH: ceph osd crush reweight
> os
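Jean-Charles's advice can be sketched as the two commands below; the OSD id and weight value are assumptions for illustration:

```shell
# Check the CRUSH weight (2nd column of the tree output), then assign a
# non-zero weight so CRUSH can select the OSD. osd.0 and 1.0 are made up.
ceph osd tree
ceph osd crush reweight osd.0 1.0
```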
Hello all!
As the documentation says, "One of the unique features of Ceph is that it
decouples data and metadata". To apply this decoupling mechanism, Ceph uses a
Metadata Server (MDS) cluster. The MDS cluster manages metadata operations,
such as opening or renaming a file.
On the other hand, Ceph implement