Hi,
On 02/01/2018 07:21 AM, Mayank Kumar wrote:
Thanks Gregory and Burkhard.
In Kubernetes we use the rbd create and rbd map/unmap commands. In this
context, are you referring to rbd as the client? Or, after the image is
created and mapped, is there a different client running inside the kernel
that you are referring to, which can get osd and mon updates?
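(For context, and only as a rough sketch with made-up pool/image names: "rbd create" runs entirely in userspace as a Ceph client via librados/librbd, while "rbd map" hands the image to the in-kernel rbd (krbd) driver, which from then on is the long-lived Ceph client doing the actual I/O.)

  # userspace client (librados/librbd) creates the image
  rbd create --size 1024 kube/pvc-demo

  # after mapping, the kernel rbd (krbd) driver is the Ceph client
  rbd map kube/pvc-demo      # exposes a block device such as /dev/rbd0
  rbd unmap /dev/rbd0        # detach when the volume is released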
Ceph assumes monitor IP addresses are stable, as they're the identity for
the monitor and clients need to know them to connect.
Clients maintain a TCP connection to the monitors while they're running,
and monitors publish monitor maps containing all the known monitors in the
cluster. These are pushed to clients whenever the monitor membership changes.
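(A rough illustration, with placeholder addresses rather than anything from this cluster: a client bootstraps from the monitor addresses it is given, for example mon_host in ceph.conf or the monitors list in the Kubernetes volume spec, and once connected it receives the current monitor map from the mons.)

  # initial monitor addresses the client bootstraps from (example values)
  # in /etc/ceph/ceph.conf:
  [global]
  mon_host = 10.0.0.1,10.0.0.2,10.0.0.3

  # the monitor map the mons publish can be inspected with:
  ceph mon dump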
Resending in case this email was lost
On Tue, Jan 23, 2018 at 10:50 PM Mayank Kumar wrote:
Thanks Burkhard for the detailed explanation. Regarding the following:

>>> The ceph client (librbd accessing a volume in this case) gets
asynchronous notification from the ceph mons in case of relevant changes,
e.g. updates to the osd map reflecting the failure of an OSD.

I have some more questions:
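(On the quoted point about osd map updates, a rough way to observe it from the command line; nothing here is specific to this setup: the osd map carries an epoch that increments on every change, and a client keeps itself at the current epoch.)

  # current osd map epoch as the cluster sees it
  ceph osd dump | head -1

  # for a kernel (krbd) client, the map it is tracking is visible via debugfs,
  # e.g. (exact path depends on the cluster fsid and client id):
  cat /sys/kernel/debug/ceph/*/osdmap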
Hi,
On 01/23/2018 09:53 AM, Mayank Kumar wrote:
Hi Ceph Experts
I am a new user of Ceph and am currently using Kubernetes to deploy Ceph
RBD volumes. We are doing some initial work rolling it out to internal
customers, and in doing that we are using the IP of the host as the IP
of the osds and mons.
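(As a hedged aside: when host IPs double as daemon addresses, the addresses the cluster has actually recorded can be checked directly; the commands below assume an admin keyring is available.)

  # addresses registered for the monitors
  ceph mon stat

  # addresses each osd is advertising
  ceph osd dump | grep '^osd\.'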