From: Anthony D'Atri
Sent: Friday, May 20, 2016 1:32 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Do you see a data loss if a SSD hosting several OSD journals crashes
> Ceph will not acknowledge a client write before all journals (replica
> size, 3 by default) have received the data, so losing one journal SSD
> will NEVER result in an actual data loss.
Some say that all replicas must be written; others say that only min_size, 2 by
default, must be written before the write is acknowledged to the client.
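
FWIW, a toy Python model of the two readings (the function and the
wait_for_all flag are my own illustration, not anything in Ceph):

# Sketch: when would a client write be acknowledged?
def acked(journals_done, size=3, min_size=2, wait_for_all=True):
    """True once enough replica journals hold the write."""
    needed = size if wait_for_all else min_size
    return journals_done >= needed

print(acked(2, wait_for_all=True))   # False: "all replicas" reading
print(acked(2, wait_for_all=False))  # True:  "min_size" reading

Either way, an acked write sits in at least two journals, and with a
host-level failure domain those journals live behind different SSDs.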
Hi,

Yes and no, for the actual data loss. This depends on your CRUSH map.
If you're using the default map (which came with the installation),
then your smallest failure domain will be the host. If you have replica
size 3, 3 hosts, and 5 OSDs per host (15 OSDs total), then losing the
journal SSD takes down all 5 OSDs on that host, but the two replicas on
the other hosts remain intact.
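
To make the failure-domain point concrete, here is a small Python
sketch (host and OSD names are made up; it just mimics
one-replica-per-host placement, not real CRUSH):

# 3 hosts, 5 OSDs each; one journal SSD per host backs all 5 OSDs.
hosts = {f"host{h}": [f"osd.{h * 5 + i}" for i in range(5)]
         for h in range(3)}

# One placement group, replicas on 3 distinct hosts (host failure domain).
pg_replicas = ["osd.0", "osd.5", "osd.10"]

failed_host = "host0"            # its journal SSD dies -> all 5 OSDs down
down = set(hosts[failed_host])

surviving = [osd for osd in pg_replicas if osd not in down]
print(surviving)                 # ['osd.5', 'osd.10'] -- 2 copies left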
Hello,

first of all, wall of text. Don't do that.
Use returns and paragraphs liberally to make reading easy.
I'm betting at least half of the people who could have answered your
question took a look at this blob of text and ignored it.

Secondly, search engines are your friend.
The first hit when
* We are trying to assess whether we are going to see data loss if an SSD that
is hosting journals for a few OSDs crashes. In our configuration, each SSD is
partitioned into 5 chunks, and each chunk is mapped as a journal drive for one
OSD. What I understand from the Ceph documentation: "Consisten