[...] mons time out? Would that start resulting in I/O failures?
On Mon, Mar 5, 2018 at 9:44 PM Gregory Farnum wrote:
> On Sun, Mar 4, 2018 at 12:02 AM Mayank Kumar wrote:
>
>> Ceph Users,
>>
>> My question is if all mons are down (I know it's a terrible situation to
>> be in), does an existing RBD volume which is mapped to a host and being
>> used (read/written to) continue to work? I understand that it won't get
>> notifications about osdmap updates, etc., but assuming nothing fails, does
>> the read/write continue to work?
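[Editor's aside, not part of the original mail: to make the question concrete, here is a minimal python-rados/python-rbd sketch of the scenario being asked about. The pool and image names are placeholders, and the timeout options shown are the librados client settings I believe apply; treat them as assumptions, not a definitive recipe.]

# Sketch only: an RBD client that returns an error instead of hanging
# forever if the mons/OSDs stop answering. Pool/image names are placeholders.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
# Without timeouts, client ops can block indefinitely while retrying.
cluster.conf_set('rados_mon_op_timeout', '30')   # seconds (assumed option)
cluster.conf_set('rados_osd_op_timeout', '30')   # seconds (assumed option)
cluster.connect()

ioctx = cluster.open_ioctx('rbd')                # placeholder pool name
image = rbd.Image(ioctx, 'test-image')           # placeholder image name
try:
    # Reads/writes go straight to the OSDs using the osdmap the client
    # already holds; the mons are only contacted for map updates.
    image.write(b'x' * 4096, 0)
    data = image.read(0, 4096)
finally:
    image.close()
    ioctx.close()
    cluster.shutdown()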
> [...] and they won't be able to do any paxos consensus and things
> will grind to a halt.
>
> In contrast, the OSD IPs don't matter at all on their own. I'd just be
> worried about if whatever's changing the IP also changes the hostname or
> otherwise causes [...]
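[Editor's aside: assuming the hostname concern above is about OSDs sitting under host buckets in the CRUSH map, here is a hedged python-rados sketch that dumps "osd tree" via a mon command and prints which host bucket each OSD is under, which is one way to verify nothing moved after an IP or hostname change. It assumes the default conf path and an admin keyring.]

# Sketch: print which CRUSH host bucket each OSD currently lives under.
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

ret, outbuf, outs = cluster.mon_command(
    json.dumps({'prefix': 'osd tree', 'format': 'json'}), b'')
if ret != 0:
    raise RuntimeError('osd tree failed: %s' % outs)

nodes = json.loads(outbuf.decode('utf-8'))['nodes']
by_id = {n['id']: n for n in nodes}
for n in nodes:
    if n.get('type') == 'host':
        for child_id in n.get('children', []):
            child = by_id.get(child_id, {})
            if child.get('type') == 'osd':
                print('%s is under host bucket %s' % (child['name'], n['name']))

cluster.shutdown()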
Resending in case this email was lost
On Tue, Jan 23, 2018 at 10:50 PM Mayank Kumar wrote:
> Thanks Burkhard for the detailed explanation. Regarding the following:-
>
> >>> The ceph client (librbd accessing a volume in this case) gets
> asynchronous notification from the [...]
> On Tue, Jan 23, 2018, Burkhard <...@computational.bio.uni-giessen.de> wrote:
> Hi,
>
>
> On 01/23/2018 09:53 AM, Mayank Kumar wrote:
>
>> Hi Ceph Experts
>>
>> I am a new user of Ceph and currently using Kubernetes to deploy Ceph RBD
>> volumes. We are doing some initial work rolling it out to internal
>> customers, and in doing that we are using the IP of the host as the IP of
>> the OSD and mons. This means if a host goes down, we lose that IP.
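[Editor's aside, purely as an illustration of why the mon addresses are the painful part here: a small python-rados sketch, assuming the mons are published under stable DNS names or a VIP (the names and keyring path below are invented), so a client can still bootstrap even if an individual host and its IP disappear.]

# Sketch: the client only needs mon_host to bootstrap; pointing it at
# stable DNS names (placeholders here) instead of per-host IPs means a
# dead host doesn't take the mon addresses with it.
import rados

cluster = rados.Rados(conf={
    'mon_host': 'mon-a.example.internal,mon-b.example.internal,mon-c.example.internal',
    'keyring': '/etc/ceph/ceph.client.admin.keyring',  # placeholder keyring path
})
cluster.connect()
print('connected to cluster %s' % cluster.get_fsid())
cluster.shutdown()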
> "Ceph Statistics" is VERY broad. Are you talking IOPS, disk usage,
> throughput, etc? Disk usage is incredibly simple to calculate, especially
> if the RBD has object-map enabled. A simple 'rbd du <rbd_name>' would give
> you the disk usage per RBD and return in seconds.
>
> On [...]
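[Editor's aside: the 'rbd du' command above is the easy way. Just to illustrate where that number comes from, here is a python-rbd sketch (pool and image names invented) that sums the allocated extents of a single image with diff_iterate, which is roughly the per-RBD disk usage being discussed; it is a sketch, not how the CLI is implemented.]

# Sketch: approximate per-image disk usage by summing allocated extents.
# Pool/image names are placeholders; 'rbd du' reports the same thing and
# is much faster when the object-map feature is enabled.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')          # placeholder pool
image = rbd.Image(ioctx, 'volume-0001')    # placeholder image
try:
    used = [0]

    def count_extent(offset, length, exists):
        # Called once per extent; 'exists' is true where data is allocated.
        if exists:
            used[0] += length

    image.diff_iterate(0, image.size(), None, count_extent)
    print('provisioned: %d bytes, used: %d bytes' % (image.size(), used[0]))
finally:
    image.close()
    ioctx.close()
    cluster.shutdown()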
Hi Ceph Users
I am relatively new to Ceph and trying to provision Ceph RBD volumes using
Kubernetes. I would like to know the best practices for hosting a
multi-tenant Ceph cluster. Specifically, I have the following questions:
- Is it OK to share a single Ceph pool amongst multiple tenants?
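[Editor's aside: not an answer from the list, just a hedged sketch of one common pattern, assumed here rather than taken from the thread: give each tenant its own pool plus a cephx identity whose caps are limited to that pool, so tenants share the cluster but not each other's data. Pool and client names are invented, and the caps follow the documented 'profile rbd' form.]

# Sketch: create a per-tenant pool and a cephx user restricted to it.
# All names are placeholders.
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

pool = 'tenant-a-rbd'   # placeholder per-tenant pool

ret, _, outs = cluster.mon_command(
    json.dumps({'prefix': 'osd pool create', 'pool': pool, 'pg_num': 64}), b'')
if ret != 0:
    raise RuntimeError('pool create failed: %s' % outs)

ret, outbuf, outs = cluster.mon_command(
    json.dumps({'prefix': 'auth get-or-create',
                'entity': 'client.tenant-a',
                'caps': ['mon', 'profile rbd',
                         'osd', 'profile rbd pool=%s' % pool]}), b'')
if ret != 0:
    raise RuntimeError('auth get-or-create failed: %s' % outs)

# Keyring for the tenant's provisioner (e.g. the Kubernetes RBD secret).
print(outbuf.decode())

cluster.shutdown()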