[...] Also, please use pastebin or a similar service to avoid mailing the logs
to the list.
Rbd-mirror is running as "rbd-mirror --cluster=cephdr"
Thanks,
-Vikas
-Original Message-
From: Jason Dillaman
Sent: Monday, April 8, 2019 9:30 AM
To: Vikas Rana
Cc: ceph-users
Subject: Re: [ceph-users] Ceph Replication not working
The log appears to be missing all the librbd log messages. The process see[...]og file.
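A minimal sketch of the debug settings usually needed in /etc/ceph/cephdr.conf on
the host running rbd-mirror for librbd messages to reach its log; the section name
and log levels are assumptions, and the daemon needs a restart (or the values
injected over its admin socket) to pick them up:

  [client]
      debug rbd = 20
      debug rbd_mirror = 20
      debug journaler = 20
      log file = /var/log/ceph/$cluster-$name.log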
>
> We removed the pool to make sure there's no image left on DR site and
> recreated an empty pool.
>
> Thanks,
> -Vikas
>
> -Original Message-
> From: Jason Dillaman
> Sent: Friday, April 5, 2019 2:24 PM
> To: Vikas Rana
> Cc: ceph-users
What is the version of the rbd-mirror daemon and your OSDs? It looks like it
found two replicated images and got stuck on the "wait_for_deletion"
step. Since I suspect those images haven't been deleted, it should
have immediately proceeded to the next step of the image replay state
machine. Are there any ad
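A minimal sketch of commands commonly used to gather that information and the
per-image replay state; the cluster and pool names (cephdr, nfs) are taken from
the rest of this thread:

  rbd-mirror --version                                      # daemon version
  ceph --cluster cephdr versions                            # mon/mgr/osd versions on the DR cluster
  rbd --cluster cephdr mirror pool status nfs --verbose     # per-image state and description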
Hi there,
We are trying to set up rbd-mirror replication. After the setup everything
looks good, but images are not replicating.
Can someone please help?
Thanks,
-Vikas
root@remote:/var/log/ceph# rbd --cluster cephdr mirror pool info nfs
Mode: pool
Peers:
UUID
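For pool mode, mirroring has to be enabled on the pool in both clusters, a peer
registered on the cluster where rbd-mirror runs (both directions for two-way
replication), and, for journal-based mirroring, the exclusive-lock and journaling
features enabled on the images. A minimal sketch; the primary cluster name "ceph",
the "client.mirror" user, and <image> are assumptions:

  rbd --cluster ceph   mirror pool enable nfs pool
  rbd --cluster cephdr mirror pool enable nfs pool
  rbd --cluster cephdr mirror pool peer add nfs client.mirror@ceph    # DR pulls from the primary
  rbd --cluster ceph   feature enable nfs/<image> exclusive-lock journaling
  rbd --cluster cephdr mirror pool info nfs                           # Peers should now list the primary site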
If you are so worried about storage efficiency: why not use erasure
coding?
EC performs really well with Luminous in our experience.
Yes, you generate more IOPS, somewhat more CPU load and a higher latency,
but it's often worth a try.
Simple example for everyone considering 2/1 replicas: [...]
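A sketch of what such a setup can look like; the profile name, k/m values, PG
counts and pool names below are assumptions rather than the poster's original
numbers. A 4+2 profile stores 1.5x the data instead of 2x while still surviving
two failures:

  ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
  ceph osd pool create ecpool 128 128 erasure ec42
  ceph osd pool set ecpool allow_ec_overwrites true       # needed for RBD/CephFS data on EC (Luminous+)
  rbd create --size 100G --data-pool ecpool rbd/myimage   # image metadata stays replicated, data goes to the EC pool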
Nobody cares about their data until they don't have it anymore. Using
replica 3 is the same logic as RAID6. It's likely that if one drive has crapped
out, more will meet their maker soon. If you care about your data, then do
what you can to keep it around. If it's a lab like mine, who cares, it's
all ephemeral.
On 05/24/2018 11:40 PM, Stefan Kooman wrote:
>> What are your thoughts, would you run 2x replication factor in
>> Production and in what scenarios?
Me neither, mostly because I have yet to read a technical point of view
from someone who has read and understood the code.
I do not buy Janne's "trust me,
Quoting Anthony Verevkin (anth...@verevkin.ca):
> My thoughts on the subject are that even though checksums do allow you to
> find which replica is corrupt without having to figure out which 2 out of
> 3 copies are the same, this is not the only reason min_size=2 was
> required. Even if you are running all
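min_size is the per-pool knob being referred to; a minimal sketch of inspecting
and setting it, with <pool> as a placeholder:

  ceph osd pool get <pool> size
  ceph osd pool get <pool> min_size
  ceph osd pool set <pool> min_size 2    # stop serving I/O when fewer than 2 copies are available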
"
À: c...@jack.fr.eu.org
Cc: "ceph-users"
Envoyé: Jeudi 24 Mai 2018 08:33:32
Objet: Re: [ceph-users] Ceph replication factor of 2
Hi,
I couldn't agree more, but just to re-emphasize what others already said:
the point of replica 3 is not to have extra safety for
(human|software|server) failures, but to have enough data around to
allow rebalancing the cluster when disks fail.
After a certain number of disks in a cluste
On Thu, 24 May 2018 at 00:20, Jack wrote:
> Hi,
>
> I have to say, this is a common yet worthless argument.
> If I have 3000 OSDs, using 2 or 3 replicas will not change much: the
> probability of losing 2 devices is still "high".
> On the other hand, if I have a small cluster, less than a hundred OSDs
Hi,
About Bluestore: sure, there are checksums, but are they fully used?
Rumors said that on a replicated pool, during recovery, they are not
This week at the OpenStack Summit Vancouver I could hear people entertaining the
idea of running Ceph with a replication factor of 2.
Karl Vietmeier of Intel suggested that we use 2x replication because Bluestore
comes with checksums.
https://www.openstack.org/summit/vancouver-2018/summit-schedule/ev
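The checksums in question are BlueStore's per-block checksums, controlled by
bluestore_csum_type (crc32c by default). A minimal sketch of checking the setting
on a running OSD via its admin socket; osd.0 is just an example:

  ceph daemon osd.0 config get bluestore_csum_type
  ceph daemon osd.0 config show | grep bluestore_csum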
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Tomasz
Kuzemko
Sent: 01 July 2016 13:28
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] CEPH Replication
Still, in case of object corruption you will not be able to determine which copy
is valid. Ceph does not provide data integrity with
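For context, on FileStore such corruption is only surfaced by deep scrub, and
repair generally copies the primary's version over the replicas rather than
picking a provably good copy, which is exactly the limitation described above.
A minimal sketch, with <pgid> as a placeholder:

  ceph pg deep-scrub <pgid>                                 # force a deep scrub of one PG
  rados list-inconsistent-obj <pgid> --format=json-pretty   # inspect what scrub found
  ceph pg repair <pgid>                                     # repair the inconsistent PG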
[...] even replication.
Will be adding nodes in future as required, but will always keep an uneven
number.
,Ashley
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
c...@jack.fr.eu.org
Sent: 01 July 2016 13:07
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] CEPH Replication
It will put each object on 2 OSDs, on 2 separate nodes.
All nodes and all OSDs will have (approximately) the same used space.
If you want to allow both copies of an object to be stored on the same
node, you should use osd_crush_chooseleaf_type = 0 (see
http://docs.ceph.com/docs/master/rados/operations/crus
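A minimal sketch of that setting; it only affects the default CRUSH rule generated
when the cluster is first created, so on an existing cluster the equivalent change
is made in the CRUSH rule itself (both snippets below are illustrative):

  # ceph.conf, before the initial CRUSH map is created
  [global]
      osd crush chooseleaf type = 0      # 0 = osd, 1 = host (default)

  # equivalent chooseleaf step in a CRUSH rule on an existing cluster
  step chooseleaf firstn 0 type osd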
Hello,
Looking at setting up a new CEPH Cluster, starting with the following.
3 x CEPH OSD Servers
Each Server:
20Gbps Network
12 OSDs
SSD Journal
Looking at running with a replication of 2: will there be any issues using 3
nodes with a replication of two? This should "technically" give me ½ the raw capacity.
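A sketch of the pool settings implied here; the pool name and PG count are
assumptions. With size 2 the usable space is roughly half the raw space, which is
where the "½" comes from, though most replies in these threads argue for
size 3 / min_size 2 instead:

  ceph osd pool create rbd 1024 1024 replicated
  ceph osd pool set rbd size 2
  ceph osd pool set rbd min_size 1   # accept I/O with a single surviving copy
  ceph df                            # MAX AVAIL already accounts for the replication factor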
We went to 3 copies because 2 isn't safe enough for the default. With 3
copies and a properly configured system your data is approximately as safe
as the data center it's in. With 2 copies the durability is a lot lower
than that (two 9s versus four 9s or something). The actual safety numbers
did no
Hi all,
I have one question.
Why did the default replication change to 3 in Ceph Firefly? I think 2 copies of
an object are enough for backup, and increasing the number of replicas also
increases latency, because the object has to be replicated to the secondary and
tertiary OSDs.
So why is the default replication 3, not 2?