Hi Loic
Commenting out the sanity check did the trick. The code is working as I'd
expected.
Thanks
On Fri, Apr 28, 2017 at 1:48 AM, Loic Dachary wrote:
>
>
> On 04/27/2017 11:43 PM, Oleg Kolosov wrote:
> > Hi Loic,
> > Of course.
> > I'm implementing a version of Pyramid Code. In Pyramid you re
You can't have different EC profiles in the same pool either. You have to
create the pool either with a specific EC profile or as a replicated pool. If
you choose EC you can't even change the EC profile later; however, you can
change the number of copies a replicated pool has. An EC pool of 1:1 doesn't
do anyth
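A rough sketch of the difference David describes, with made-up pool, profile
and PG numbers (a replicated pool's copy count can be changed later; an EC
pool's profile cannot):

# Replicated pool: "size" can be changed at any time
ceph osd pool create mypool-rep 128 128 replicated
ceph osd pool set mypool-rep size 2

# Erasure-coded pool: the profile is fixed once the pool exists
# (pre-Luminous releases spell the option ruleset-failure-domain=host)
ceph osd erasure-code-profile set myprofile k=2 m=1 crush-failure-domain=host
ceph osd pool create mypool-ec 128 128 erasure myprofile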
On 04/28/2017 02:48 PM, David Turner wrote:
> Wouldn't k=1, m=1 just be replica 2?
Well yes. But Ceph does not support mixing replication and erasure code in the
same pool.
> EC will split the object into k pieces (1)... Ok, that's the whole object.
I was just wondering if jerasure tolerates
Wouldn't k=1, m=1 just be replica 2? EC will split the object into k pieces
(1)... Ok, that's the whole object. And then you want to be able to lose m
copies of the object (1)... Ok, that's an entire copy of that whole
object. That isn't erasure coding, that is full 2 copy replication. For
erasure
On 04/27/2017 11:43 PM, Oleg Kolosov wrote:
> Hi Loic,
> Of course.
> I'm implementing a version of Pyramid Code. In Pyramid you remove one of the
> global parities of Reed-Solomon and add one local parity for each local
> group. In my version, I'd like to add local parity to the global parity
Hi Loic,
Of course.
I'm implementing a version of Pyramid Code. In Pyramid you remove one of
the global parities of Reed-Solomon and add one local parity for each local
group. In my version, I'd like to add local parity to the global parity
(meaning that for the case the global parity = 1, it would
Hi Oleg,
On 04/27/2017 11:23 PM, Oleg Kolosov wrote:
> Hi,
> I'm working on various implementations of LRC codes for study purposes. The
> layers implementation in the LRC module is very convenient for this, but I've
> come upon a problem in one of the cases.
> I'm interested in having k=1, m=1 i
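For reference, a profile using the LRC plugin's layers syntax looks roughly
like the example in the erasure-code-lrc documentation; the mapping and layers
below are illustrative placeholders, not Oleg's actual k=1, m=1 case:

ceph osd erasure-code-profile set LRCprofile \
    plugin=lrc \
    mapping=__DD__DD \
    layers='[
        [ "_cDD_cDD", "" ],
        [ "cDDD____", "" ],
        [ "____cDDD", "" ]
    ]'
ceph osd pool create lrcpool 12 12 erasure LRCprofile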
tions/erasure-code-lrc/) might
help.
Cheers,
Maxime
From: ceph-users on behalf of Burkhard
Linke
Date: Wednesday 8 March 2017 08:05
To: "ceph-users@lists.ceph.com"
Subject: Re: [ceph-users] Replication vs Erasure Coding with only 2
elements in the failure-domain.
Hi,
On 03/07/2017 05:5
Hi,
On 03/07/2017 05:53 PM, Francois Blondel wrote:
Hi all,
We have (only) 2 separate "rooms" (crush buckets) and would like to
build a cluster that can handle the complete loss of one room.
*snipsnap*
Our second idea would be to use Erasure Coding, as it fits our performance
require
Hello,
On Wed, 9 Nov 2016 21:56:08 +0100 Andreas Gerstmayr wrote:
> Hello,
>
> >> 2 parallel jobs with one job simulating the journal (sequential
> >> writes, ioengine=libaio, direct=1, sync=1, iodepth=128, bs=1MB) and the
> >> other job simulating the datastore (random writes of 1MB)?
> >>
> >
Hello,
2 parallel jobs with one job simulating the journal (sequential
writes, ioengine=libaio, direct=1, sync=1, iodepth=128, bs=1MB) and the
other job simulating the datastore (random writes of 1MB)?
To test against a single HDD?
Yes, something like that, the first fio job would need go again
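A sketch of such a two-job fio run (the device name, sizes and offsets are
placeholders, and writing to a raw device is destructive):

cat > journal-vs-datastore.fio <<'EOF'
[global]
filename=/dev/sdX
direct=1
time_based
runtime=60

[journal]
ioengine=libaio
rw=write
bs=1M
iodepth=128
sync=1
offset=0
size=10G

[datastore]
ioengine=libaio
rw=randwrite
bs=1M
iodepth=1
offset=10G
size=50G
EOF

fio journal-vs-datastore.fio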
On Tue, 8 Nov 2016 08:55:47 +0100 Andreas Gerstmayr wrote:
> 2016-11-07 3:05 GMT+01:00 Christian Balzer :
> >
> > Hello,
> >
> > On Fri, 4 Nov 2016 17:10:31 +0100 Andreas Gerstmayr wrote:
> >
> >> Hello,
> >>
> >> I'd like to understand how replication works.
> >> In the paper [1] several replicat
2016-11-07 3:05 GMT+01:00 Christian Balzer :
>
> Hello,
>
> On Fri, 4 Nov 2016 17:10:31 +0100 Andreas Gerstmayr wrote:
>
>> Hello,
>>
>> I'd like to understand how replication works.
>> In the paper [1] several replication strategies are described, and
>> according to a (bit old) mailing list post
Hello,
On Fri, 4 Nov 2016 17:10:31 +0100 Andreas Gerstmayr wrote:
> Hello,
>
> I'd like to understand how replication works.
> In the paper [1] several replication strategies are described, and
> according to a (bit old) mailing list post [2] primary-copy is used.
> Therefore the primary OSD wa
Thanks @jelopez for the link. I don't think this is what we want because
it's just for RGW. It would be much better to have native, low-level
geo-replication for RBD, RGW and CephFS alike. We would like to know about
the plans or ideas for this :)
Thanks and regards
On Fri, Feb 19, 2016
Hi,
this is where it is discussed:
http://docs.ceph.com/docs/hammer/radosgw/federated-config/
JC
> On Feb 18, 2016, at 15:14, Alexandr Porunov
> wrote:
>
> Is it possible to replicate objects across regions? How can we create
> such clusters?
>
> Could you suggest me helpful articles/
On Wed, May 27, 2015 at 6:57 PM, Christian Balzer wrote:
> On Wed, 27 May 2015 14:06:43 -0700 Gregory Farnum wrote:
>
>> On Tue, May 19, 2015 at 7:35 PM, John Peebles wrote:
>> > Hi,
>> >
>> > I'm hoping for advice on whether Ceph could be used in an atypical use
>> > case. Specifically, I have a
On Wed, 27 May 2015 14:06:43 -0700 Gregory Farnum wrote:
> On Tue, May 19, 2015 at 7:35 PM, John Peebles wrote:
> > Hi,
> >
> > I'm hoping for advice on whether Ceph could be used in an atypical use
> > case. Specifically, I have about 20TB of files that need to be replicated
> > to 2 different sites.
On Tue, May 19, 2015 at 7:35 PM, John Peebles wrote:
> Hi,
>
> I'm hoping for advice on whether Ceph could be used in an atypical use case.
> Specifically, I have about 20TB of files that need to be replicated to 2
> different sites. Each site has its own internal gigabit ethernet network.
> However, t
That is correct, you make a tradeoff between space, performance and
resiliency. By reducing replication from 3 to 2, you will get more space
and likely more performance (less overhead from the third copy), but it comes
at the expense of being able to recover your data when there are multiple
failures.
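On the command line that trade-off is just the pool's size setting; "rbd"
below is a placeholder pool name, and min_size=1 is generally discouraged
because a single surviving copy keeps accepting writes:

# check the current copy count, then drop it to 2
ceph osd pool get rbd size
ceph osd pool set rbd size 2
# optional, and risky: allow I/O with only one copy left
ceph osd pool set rbd min_size 1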
Ok, now if I run a lab and the data is somewhat important but I can bear
losing it, couldn't I shrink the pool replica count so that the amount of
storage I can use increases without using erasure coding?
So for 145TB with a replica of 3 = ~41 TB total in the cluster
But if that same clust
For example, here is my configuration:
superuser@admin:~$ ceph df
GLOBAL:
    SIZE     AVAIL    RAW USED    %RAW USED
    242T     209T     20783G      8.38
POOLS:
    NAME                 ID    USED     %USED    MAX AVAIL    OBJECTS
    ec_backup-storage    4     9629G    3.88
Thank you! That helps a lot.
On Mar 12, 2015 10:40 AM, "Steve Anthony" wrote:
> Actually, it's more like 41TB. It's a bad idea to run at near full
> capacity (by default past 85%) because you need some space where Ceph can
> replicate data as part of its healing process in the event of disk or n
Actually, it's more like 41TB. It's a bad idea to run at near full
capacity (by default past 85%) because you need some space where Ceph
can replicate data as part of its healing process in the event of disk
or node failure. You'll get a health warning when you exceed this ratio.
You can use erasu
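A back-of-the-envelope check of that figure, assuming 145TB raw, 3 copies and
the default 85% nearfull warning threshold:

echo "145 / 3 * 0.85" | bc -l
# ~41.08 TB of usable space before the warning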
Hello,
On Thu, Mar 12, 2015 at 3:07 PM, Thomas Foster
wrote:
> I am looking into how I can maximize my space with replication, and I am
> trying to understand how I can do that.
>
> I have 145TB of space and a replication of 3 for the pool and was thinking
> that the max data I can have in the c
Yeah, so generally those will be correlated with some failure domain,
and if you spread your replicas across failure domains you won't hit
any issues. And if hosts are down for any length of time the OSDs will
re-replicate data to keep it at proper redundancy.
-Greg
Software Engineer #42 @ http://i
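For anyone who wants to see or adjust that placement, a hedged sketch with
made-up rule and pool names (substitute rack or room for host to match your
failure domains; older releases use the numeric crush_ruleset pool setting
instead of crush_rule):

ceph osd crush rule create-simple per-host-rule default host
ceph osd crush rule dump per-host-rule
ceph osd pool set mypool crush_rule per-host-rule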
On Tue, Sep 16, 2014 at 5:10 PM, JIten Shah wrote:
> Hi Guys,
>
> We have a cluster with 1000 OSD nodes and 5 MON nodes and 1 MDS node. In
> order to be able to lose quite a few OSDs and still survive the load, we
> were thinking of setting the replication factor to 50.
>
> Is that too big of a
Depending on what level of verification you need, you can just do a "ceph
pg dump" and look to see which OSDs host every PG. If you want to
demonstrate replication to a skeptical audience, sure, turn off the
machines and show that data remains accessible.
-Greg
On Friday, May 30, 2014, wrote:
>
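For reference, the kind of commands involved; the pool, object and PG ids
below are placeholders, and only "ceph pg dump" is the one Greg mentions, the
others are related lookups:

ceph pg dump pgs_brief | head     # PG id, state, up set and acting set
ceph pg map 1.0                   # OSDs for one specific PG
ceph osd map rbd myobject         # which PG/OSDs a given object maps to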
Which model of hard drives do you have?
2014-03-14 21:59 GMT+04:00 Greg Poirier :
> We are stressing these boxes pretty spectacularly at the moment.
>
> On every box I have one OSD that is pegged for IO almost constantly.
>
> ceph-1:
> Device:    rrqm/s  wrqm/s  r/s  w/s  rkB/s  wkB/s
We are stressing these boxes pretty spectacularly at the moment.
On every box I have one OSD that is pegged for IO almost constantly.
ceph-1:
Device:   rrqm/s  wrqm/s  r/s  w/s  rkB/s  wkB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util
sdv       0.00    0.00
On Fri, Mar 14, 2014 at 9:37 AM, Greg Poirier wrote:
> So, on the cluster that I _expect_ to be slow, it appears that we are
> waiting on journal commits. I want to make sure that I am reading this
> correctly:
>
> "received_at": "2014-03-14 12:14:22.659170",
>
> { "t
So, on the cluster that I _expect_ to be slow, it appears that we are
waiting on journal commits. I want to make sure that I am reading this
correctly:
"received_at": "2014-03-14 12:14:22.659170",
{ "time": "2014-03-14 12:14:22.660191",
"event":
Right. So which is the interval that's taking all the time? Probably
it's waiting for the journal commit, but maybe there's something else
blocking progress. If it is the journal commit, check out how busy the
disk is (is it just saturated?) and what its normal performance
characteristics are (is i
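A hedged sketch of checking both sides of that question; osd.0 and /dev/sdX
are placeholders, and iostat comes from the sysstat package:

ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_historic_ops | less
iostat -x 1 /dev/sdX     # watch %util, await and the queue sizes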
Many of the sub ops look like this, with significant lag between
received_at and commit_sent:
{ "description": "osd_op(client.6869831.0:1192491
rbd_data.67b14a2ae8944a.9105 [write 507904~3686400] 6.556a4db0
e660)",
"received_at": "2014-03-13 20:42:05.811936",
On Thu, Mar 13, 2014 at 3:56 PM, Greg Poirier wrote:
> We've been seeing this issue on all of our dumpling clusters, and I'm
> wondering what might be the cause of it.
>
> In dump_historic_ops, the time between op_applied and sub_op_commit_rec or
> the time between commit_sent and sub_op_applied i
Very interesting ... thank you for pointing me to that.
Reading http://tracker.ceph.com/issues/4929, it seems this is a pretty new
feature.
When you say "you can use it with RADOS", do you mean it is already well tested and stable?
Or something more like "not far from production readiness"?
Bes
On 12/28/2013 02:40 PM, Cedric Lemarchand wrote:
On 28/12/2013 14:35, Wido den Hollander wrote:
On 12/28/2013 02:07 PM, Cedric Lemarchand wrote:
Hello Cephers,
As my needs are to lower the $/TB, I would like to know if the
replication ratio can only be an integer or can be set to 1.5 or
1
On 28/12/2013 14:35, Wido den Hollander wrote:
On 12/28/2013 02:07 PM, Cedric Lemarchand wrote:
Hello Cephers,
As my needs are to lower the $/TB, I would like to know if the
replication ratio can only be an integer or can be set to 1.5 or
1.25?
In other words, can Ceph compute data
On 12/28/2013 02:07 PM, Cedric Lemarchand wrote:
Hello Cephers,
As my needs are to lower the $/TB, I would like to know if the
replication ratio can only be an integer or can be set to 1.5 or 1.25?
In other words, can Ceph compute data parity, or does it only make multiple
copies of the data?
No, that i
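With erasure coding (available from Firefly onwards) the overhead can indeed
be fractional; a k=2, m=1 profile stores roughly 1.5x the data size. A minimal
sketch with made-up names:

ceph osd erasure-code-profile set ec21 k=2 m=1
ceph osd pool create ecpool 128 128 erasure ec21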
Cc: ceph-users@lists.ceph.com
> Date: 25.06.2013 17:39
> Subject: Re: [ceph-users] Replication between 2 datacenter
> --
>
> On Tue, 25 Jun 2013, joachim.t...@gad.de
On Tue, 25 Jun 2013, joachim.t...@gad.de wrote:
> Hi folks,
>
> I have a question concerning data replication using the crushmap.
>
> Is it possible to write a crushmap to achieve a 2-times-2 replication, in the
> sense that I have a pool replicated within one data center and an overall replication
> of this
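A hedged sketch of how such a rule could look, assuming two datacenter buckets
named dc1 and dc2 already exist in the CRUSH map (decompile, edit, reinject
workflow; the rule id and names are made up):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

cat >> crushmap.txt <<'EOF'
rule two_per_dc {
    ruleset 5
    type replicated
    min_size 4
    max_size 4
    step take dc1
    step chooseleaf firstn 2 type host
    step emit
    step take dc2
    step chooseleaf firstn 2 type host
    step emit
}
EOF

crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
# then use this rule on a pool with size 4 so each DC holds two copies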