Re: [ceph-users] Erasure Coding performance for IO < stripe_width

2019-07-24 Thread vitalif
We're seeing ~5800 IOPS, ~23 MiB/s on 4 KiB IO (stripe_width 8192) on a pool that could do 3 GiB/s with a 4M block size. So, yeah, well, that is rather harsh, even for EC. 4 KiB IO is slow in Ceph even without EC; note the two figures are self-consistent (5800 IOPS x 4 KiB ~ 22.7 MiB/s), so the pool is IOPS-bound at this block size. Your 3 GB/s linear writes don't mean anything here. Ceph adds a significant overhead to e

Re: [ceph-users] Erasure Coding performance for IO < stripe_width

2019-07-08 Thread Lars Marowsky-Bree
On 2019-07-08T19:37:13, Paul Emmerich wrote: > object_map can be a bottleneck for the first write in fresh images We're working with CephFS here. -- SUSE Linux GmbH, GF: Felix Imendörffer, Mary Higgins, Sri Rasiah, HRB 21284 (AG Nürnberg) "Architects should open possibilities and not determi

Re: [ceph-users] Erasure Coding performance for IO < stripe_width

2019-07-08 Thread Paul Emmerich
On Mon, Jul 8, 2019 at 2:42 PM Maged Mokhtar wrote: > > On 08/07/2019 13:02, Lars Marowsky-Bree wrote: > > On 2019-07-08T12:25:30, Dan van der Ster wrote: > > > >> Is there a specific bench result you're concerned about? > > We're seeing ~5800 IOPS, ~23 MiB/s on 4 KiB IO (stripe_width 8192) on a

Re: [ceph-users] Erasure Coding performance for IO < stripe_width

2019-07-08 Thread Lars Marowsky-Bree
On 2019-07-08T14:36:31, Maged Mokhtar wrote: Hi Maged, > Maybe not related, but we find with rbd, random 4k write iops start very low > at first for a new image and then increase over time as we write. If we > thick provision the image, it does not show this. This happens on random > small b
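
One way to do the pre-writing Maged describes, assuming rbd bench from Luminous or later (the pool/image names and sizes here are placeholders, not from the thread):

    # image created beforehand, e.g.: rbd create --size 100G rbd/testimage
    rbd bench --io-type write --io-size 4M --io-pattern seq --io-total 100G rbd/testimage

The idea is that every backing object already exists by the time real writes arrive, so first writes stop paying the allocation cost.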

Re: [ceph-users] Erasure Coding performance for IO < stripe_width

2019-07-08 Thread Dan van der Ster
On Mon, Jul 8, 2019 at 1:02 PM Lars Marowsky-Bree wrote: > > On 2019-07-08T12:25:30, Dan van der Ster wrote: > > > Is there a specific bench result you're concerned about? > > We're seeing ~5800 IOPS, ~23 MiB/s on 4 KiB IO (stripe_width 8192) on a > pool that could do 3 GiB/s with 4M blocksize. S

Re: [ceph-users] Erasure Coding performance for IO < stripe_width

2019-07-08 Thread Maged Mokhtar
On 08/07/2019 13:02, Lars Marowsky-Bree wrote: On 2019-07-08T12:25:30, Dan van der Ster wrote: Is there a specific bench result you're concerned about? We're seeing ~5800 IOPS, ~23 MiB/s on 4 KiB IO (stripe_width 8192) on a pool that could do 3 GiB/s with 4M blocksize. So, yeah, well, that

Re: [ceph-users] Erasure Coding performance for IO < stripe_width

2019-07-08 Thread Lars Marowsky-Bree
On 2019-07-08T12:25:30, Dan van der Ster wrote: > Is there a specific bench result you're concerned about? We're seeing ~5800 IOPS, ~23 MiB/s on 4 KiB IO (stripe_width 8192) on a pool that could do 3 GiB/s with 4M blocksize. So, yeah, well, that is rather harsh, even for EC. > I would think tha

Re: [ceph-users] Erasure Coding performance for IO < stripe_width

2019-07-08 Thread Dan van der Ster
Hi Lars, Is there a specific bench result you're concerned about? I would think that small write perf could be kept reasonable thanks to bluestore's deferred writes. FWIW, our bench results (all flash cluster) didn't show a massive performance difference between 3 replica and 4+2 EC. I agree abou

[ceph-users] Erasure Coding performance for IO < stripe_width

2019-07-08 Thread Lars Marowsky-Bree
Morning all, since Luminous/Mimic, Ceph supports allow_ec_overwrites. However, this has a performance impact that looks even worse than what I'd expect from a Read-Modify-Write cycle. https://ceph.com/community/new-luminous-erasure-coding-rbd-cephfs/ also mentions that the small writes would read
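
For reference, the flag in question is set per pool, so the setup under discussion boils down to something like this (the pool name is a placeholder):

    # requires all OSDs backing the pool to be BlueStore
    ceph osd pool set cephfs_ec_data allow_ec_overwrites true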

Re: [ceph-users] Erasure Coding - FPGA / Hardware Acceleration

2019-06-14 Thread Janne Johansson
On Fri, 14 Jun 2019 at 15:47, Sean Redmond wrote: > Hi James, > Thanks for your comments. > I think the CPU burn is more of a concern to Soft Iron here as they are > using low-power ARM64 CPUs to keep the power draw low compared to using > Intel CPUs where, like you say, the problem may be less of

Re: [ceph-users] Erasure Coding - FPGA / Hardware Acceleration

2019-06-14 Thread Sean Redmond
Hi James, Thanks for your comments. I think the CPU burn is more of a concern to Soft Iron here as they are using low-power ARM64 CPUs to keep the power draw low compared to using Intel CPUs where, like you say, the problem may be less of a concern. Using less power by using ARM64 and providing E

Re: [ceph-users] Erasure Coding - FPGA / Hardware Acceleration

2019-06-14 Thread David Byte
I can't speak to the SoftIron solution, but I have done some testing on an all-SSD environment comparing latency, CPU, etc between using the Intel ISA plugin and using Jerasure. Very little difference is seen in CPU and capability in my tests, so I am not sure of the benefit. David Byte Sr. Te

Re: [ceph-users] Erasure Coding - FPGA / Hardware Acceleration

2019-06-14 Thread Brett Niver
Also, the picture I saw at Cephalocon (which could have been inaccurate) looked to me as if it multiplied the data path. On Fri, Jun 14, 2019 at 8:27 AM Janne Johansson wrote: > > On Fri, 14 Jun 2019 at 13:58, Sean Redmond wrote: >> >> Hi Ceph-Users, >> I noticed that Soft Iron now have hardware

Re: [ceph-users] Erasure Coding - FPGA / Hardware Acceleration

2019-06-14 Thread Janne Johansson
On Fri, 14 Jun 2019 at 13:58, Sean Redmond wrote: > Hi Ceph-Users, > I noticed that Soft Iron now have hardware acceleration for Erasure > Coding[1], this is interesting as the CPU overhead can be a problem in > addition to the extra disk I/O required for EC pools. > Does anyone know if any other

[ceph-users] Erasure Coding - FPGA / Hardware Acceleration

2019-06-14 Thread Sean Redmond
Hi Ceph-Users, I noticed that Soft Iron now have hardware acceleration for Erasure Coding[1], this is interesting as the CPU overhead can be a problem in addition to the extra disk I/O required for EC pools. Does anyone know if any other work is ongoing to support generic FPGA Hardware Acceleratio

Re: [ceph-users] Erasure Coding failure domain (again)

2019-04-10 Thread Christian Balzer
Hello, On Wed, 10 Apr 2019 20:09:58 +0200 Paul Emmerich wrote: > On Wed, Apr 10, 2019 at 11:12 AM Christian Balzer wrote: > > > > > > Hello, > > > > Another thing that crossed my mind aside from failure probabilities caused > > by actual HDDs dying is of course the little detail that most Ceph

Re: [ceph-users] Erasure Coding failure domain (again)

2019-04-10 Thread Paul Emmerich
On Wed, Apr 10, 2019 at 11:12 AM Christian Balzer wrote: > > > Hello, > > Another thing that crossed my mind aside from failure probabilities caused > by actual HDDs dying is of course the little detail that most Ceph > installations will have WAL/DB (journal) on SSDs, the most typical > rati

Re: [ceph-users] Erasure Coding failure domain (again)

2019-04-10 Thread Hector Martin
On 10/04/2019 18.11, Christian Balzer wrote: > Another thing that crossed my mind aside from failure probabilities caused > by actual HDDs dying is of course the little detail that most Ceph > installations will have WAL/DB (journal) on SSDs, the most typical > ratio being 1:4. > And given th

Re: [ceph-users] Erasure Coding failure domain (again)

2019-04-10 Thread Christian Balzer
Hello, Another thing that crossed my mind aside from failure probabilities caused by actual HDDs dying is of course the little detail that most Ceph installations will have WAL/DB (journal) on SSDs, the most typical ratio being 1:4. And given the current thread about compaction killing pur

Re: [ceph-users] Erasure Coding failure domain (again)

2019-04-02 Thread Christian Balzer
On Tue, 2 Apr 2019 19:04:28 +0900 Hector Martin wrote: > On 02/04/2019 18.27, Christian Balzer wrote: > > I did a quick peek at my test cluster (20 OSDs, 5 hosts) and a replica 2 > > pool with 1024 PGs. > > (20 choose 2) is 190, so you're never going to have more than that many > unique sets o

Re: [ceph-users] Erasure Coding failure domain (again)

2019-04-02 Thread Hector Martin
On 02/04/2019 18.27, Christian Balzer wrote: I did a quick peek at my test cluster (20 OSDs, 5 hosts) and a replica 2 pool with 1024 PGs. (20 choose 2) is 190, so you're never going to have more than that many unique sets of OSDs. I just looked at the OSD distribution for a replica 3 pool ac
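
For reference, the combinatorics quoted above extend to the replica 3 case as:

    C(20, 2) = 190 distinct OSD pairs
    C(20, 3) = 1140 distinct OSD triples

so a 1024-PG replica-2 pool on 20 OSDs must reuse OSD pairs many times over, while replica 3 offers 1140 possible triples, just above 1024.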

Re: [ceph-users] Erasure Coding failure domain (again)

2019-04-02 Thread Christian Balzer
Hello Hector, Firstly I'm so happy somebody actually replied. On Tue, 2 Apr 2019 16:43:10 +0900 Hector Martin wrote: > On 31/03/2019 17.56, Christian Balzer wrote: > > Am I correct that unlike with replication there isn't a maximum size > > of the critical path OSDs? > > As far as I kn

Re: [ceph-users] Erasure Coding failure domain (again)

2019-04-02 Thread Hector Martin
On 31/03/2019 17.56, Christian Balzer wrote: Am I correct that unlike with replication there isn't a maximum size of the critical path OSDs? As far as I know, the math for calculating the probability of data loss wrt placement groups is the same for EC and for replication. Replication to

[ceph-users] Erasure Coding failure domain (again)

2019-03-31 Thread Christian Balzer
Hello, considering erasure coding for the first time (so excuse seemingly obvious questions) and staring at the various previous posts and documentation and in particular: http://docs.ceph.com/docs/master/dev/osd_internals/erasure_coding/ Am I correct that unlike with replication there is

Re: [ceph-users] Erasure coding with more chunks than servers

2018-10-05 Thread Paul Emmerich
Oh, and you'll need to use m>=3 to ensure availability during a node failure. Paul On Fri, 5 Oct 2018 at 11:22, Caspar Smit wrote: > > Hi Vlad, > > You can check this blog: > http://cephnotes.ksperis.com/blog/2017/01/27/erasure-code-on-small-clusters > > Note! Be aware that these setting

Re: [ceph-users] Erasure coding with more chunks than servers

2018-10-05 Thread Caspar Smit
Hi Vlad, You can check this blog: http://cephnotes.ksperis.com/blog/2017/01/27/erasure-code-on-small-clusters Note! Be aware that these settings do not automatically cover a node failure. Check out this thread why: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/024423.html K

Re: [ceph-users] Erasure coding with more chunks than servers

2018-10-04 Thread Paul Emmerich
Yes, you can use a crush rule with two steps: take default chooseleaf indep 5 emit take default chooseleaf indep 2 emit You'll have to adjust it when adding a server, so it's not a great solution. I'm not sure if there's a way to do it without hardcoding the number of servers (I don't think there
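
Laid out in crush-rule syntax, the two-step rule Paul sketches would look roughly like this (untested; the rule name and the "type host" qualifiers are assumptions, and the usual id/min_size/max_size boilerplate is omitted). Whether CRUSH can pick duplicate OSDs across the two emit blocks is worth verifying before relying on it:

    rule ec_two_step {
            type erasure
            step take default
            step chooseleaf indep 5 type host
            step emit
            step take default
            step chooseleaf indep 2 type host
            step emit
    }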

[ceph-users] Erasure coding with more chunks than servers

2018-10-04 Thread Vladimir Brik
Hello I have a 5-server cluster and I am wondering if it's possible to create a pool that uses a k=5 m=2 erasure code. In my experiments, I ended up with pools whose pgs are stuck in the creating+incomplete state even when I created the erasure code profile with --crush-failure-domain=osd. Assuming that
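
A minimal sketch of that setup, with the profile and pool names assumed (failure domain "osd" lets all 7 chunks land on 5 hosts, at the cost of host-level fault tolerance):

    ceph osd erasure-code-profile set ec52 k=5 m=2 crush-failure-domain=osd
    ceph osd pool create ecpool 64 64 erasure ec52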

[ceph-users] Erasure coding and the way objects fill up free space

2018-08-06 Thread Jean-Philippe Méthot
Hi, There’s something I would like to understand regarding advanced erasure coding and the way objects take up space. Let’s say that I have 10 nodes of 4 OSDs and an erasure coded pool set with K=6, M=2 and a crush failure domain of host. I can technically fill up this ceph cluster until one O
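
As a point of reference for the capacity side of the question, the usable fraction of an EC pool is k/(k+m):

    6 / (6 + 2) = 0.75

so this K=6, M=2 pool stores data in 75% of its raw capacity (~33% overhead), versus 33% usable for 3x replication.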

Re: [ceph-users] Erasure coding RBD pool for OpenStack Glance, Nova and Cinder

2018-07-10 Thread Konstantin Shalygin
So if you want, two more questions to you: - How do you handle your ceph.conf configuration (default data pool by user) / distribution? Manually, config management, openstack-ansible... ? - Did you make comparisons, benchmarks between replicated pools and EC pools, on the same hardware / drives

Re: [ceph-users] Erasure coding RBD pool for OpenStack Glance, Nova and Cinder

2018-07-10 Thread Paul Emmerich
2018-07-10 6:26 GMT+02:00 Konstantin Shalygin : > > rbd default data pool = erasure_rbd_data > > > Keep in mind, your minimal client version is Luminous. > specifically, it's 12.2.2 or later for the clients! 12.2.0/1 clients have serious bugs in the rbd ec code that will ruin your day as soon

Re: [ceph-users] Erasure coding RBD pool for OpenStack Glance, Nova and Cinder

2018-07-10 Thread Gilles Mocellin
On 2018-07-10 06:26, Konstantin Shalygin wrote: Has someone used EC pools with OpenStack in production? By chance, I found this link: https://www.reddit.com/r/ceph/comments/72yc9m/ceph_openstack_with_ec/ Yes, this is a good post. My configuration is: cinder.conf: [erasure-rbd-hdd]

Re: [ceph-users] Erasure coding RBD pool for OpenStack Glance, Nova and Cinder

2018-07-09 Thread Konstantin Shalygin
Has someone used EC pools with OpenStack in production? By chance, I found this link: https://www.reddit.com/r/ceph/comments/72yc9m/ceph_openstack_with_ec/ Yes, this is a good post. My configuration is: cinder.conf: [erasure-rbd-hdd] volume_driver = cinder.volume.drivers.rbd.RBDDriver
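
Pieced together from this thread, the client-side wiring looks roughly like this. The [erasure-rbd-hdd] section, the RBDDriver line and the erasure_rbd_data pool name come from the posts; the client name, metadata pool name and remaining options are assumptions:

    # ceph.conf on the cinder-volume host (client section name assumed)
    [client.cinder-hdd]
    rbd default data pool = erasure_rbd_data

    # cinder.conf
    [erasure-rbd-hdd]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = erasure_rbd_meta      # replicated pool holding RBD metadata (name assumed)
    rbd_ceph_conf = /etc/ceph/ceph.conf

With this split, image headers and omap data live in the replicated pool while the data objects go to the EC pool.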

[ceph-users] Erasure coding RBD pool for OpenStack Glance, Nova and Cinder

2018-07-08 Thread Gilles Mocellin
Hello Cephers! After having read that, since Luminous, EC pools are supported as writable RBD data pools, I decided to use this in a new OpenStack Cloud deployment. The gain on storage is really noticeable, and I want to reduce the storage cost. So I decided to use ceph-ansible to deploy the Ceph

Re: [ceph-users] erasure coding chunk distribution

2018-02-23 Thread Gregory Farnum
On Fri, Feb 23, 2018 at 5:05 AM Dennis Benndorf < dennis.bennd...@googlemail.com> wrote: > Hi, > > at the moment we use ceph with one big rbd pool and size=4 and use a > rule to ensure that 2 copies are in each of our two rooms. This works > great for VMs. But there is some big data which should b

[ceph-users] erasure coding chunk distribution

2018-02-23 Thread Dennis Benndorf
Hi, at the moment we use ceph with one big rbd pool and size=4 and use a rule to ensure that 2 copies are in each of our two rooms. This works great for VMs. But there is some big data which should be stored online but a bit cheaper. We are thinking about using CephFS for it with erasure coding and

Re: [ceph-users] Erasure Coding Pools and PG calculation - documentation

2017-11-16 Thread Tim Gipson
> From: ceph-users on behalf of Tim Gipson > Sent: Saturday, November 11, 2017 5:38:02 AM > To: ceph-users@lists.ceph.com > Subject: [ceph-users] Erasure Coding Pools and PG calculation - docume

Re: [ceph-users] Erasure Coding Pools and PG calculation - documentation

2017-11-12 Thread Christian Wuerdig
> From: ceph-users on behalf of Tim Gipson > Sent: Saturday, November 11, 2017 5:38:02 AM > To: ceph-users@lists.ceph.com > Subject: [ceph-users] Erasure Coding Poo

Re: [ceph-users] Erasure Coding Pools and PG calculation - documentation

2017-11-12 Thread Tim Gipson
> Get Outlook for Android > From: ceph-users on behalf of Tim Gipson > Sent: Saturday, November 11, 2017 5:38:02 AM > To: ceph-users@lists.ceph.com > Subject: [ceph-users] Erasure Coding Pools and PG calculation - docu

Re: [ceph-users] Erasure Coding Pools and PG calculation - documentation

2017-11-12 Thread Christian Wuerdig
Ashley > Get Outlook for Android > From: ceph-users on behalf of Tim Gipson > Sent: Saturday, November 11, 2017 5:38:02 AM > To: ceph-users@lists.ceph.com > Subject: [ceph-users] Erasure Coding Pools and PG calculation - docume

Re: [ceph-users] Erasure Coding Pools and PG calculation - documentation

2017-11-11 Thread Ashley Merrick
To: ceph-users@lists.ceph.com Subject: [ceph-users] Erasure Coding Pools and PG calculation - documentation Hey all, I’m having some trouble setting up a Pool for Erasure Coding. I haven’t found much documentation around the PG calculation for an Erasure Coding pool. It seems from what I’ve tried

[ceph-users] Erasure Coding Pools and PG calculation - documentation

2017-11-10 Thread Tim Gipson
Hey all, I’m having some trouble setting up a Pool for Erasure Coding.  I haven’t found much documentation around the PG calculation for an Erasure Coding pool.  It seems from what I’ve tried so far that the math needed to set one up is different than the math you use to calculate PGs for a reg
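
The usual rule of thumb (e.g. from the PG calculator) treats an EC pool like a replicated pool whose size is k+m; a worked sketch, with the OSD count and profile assumed:

    # PGs ~ (OSDs x target PGs per OSD) / (k + m), rounded to a power of two
    # e.g. 20 OSDs, target 100 PGs/OSD, k=4, m=2:
    #   (20 x 100) / 6 ~ 333  ->  round to 256 or 512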

Re: [ceph-users] Erasure coding with RBD

2017-10-12 Thread Jason Dillaman
rows a little bit when the actual Erasure data pool grows way more! > You can get information about your image using rbd info {replicated pool}/{image name} > Original message > From: Josy > Date: 12/10/17 8:40 PM (GMT+01:00) > To: David Tu

Re: [ceph-users] Erasure coding with RBD

2017-10-12 Thread Josy
1:00) To: David Turner, dilla...@redhat.com Cc: ceph-users Subject: Re: [ceph-users] Erasure coding with RBD Thank you for your reply. I created an erasure coded pool 'ecpool' and a replicated pool to store metadata, 'ec_rep_pool', and created the image as you mentioned: rbd

Re: [ceph-users] Erasure coding with RBD

2017-10-12 Thread Josy
about your image using rbd info {replicated pool}/{image name} Original message From: Josy Date: 12/10/17 8:40 PM (GMT+01:00) To: David Turner, dilla...@redhat.com Cc: ceph-users Subject: Re: [ceph-users] Erasure coding with RBD Thank you for your reply. I created an erasure

Re: [ceph-users] Erasure coding with RBD

2017-10-12 Thread Jorge Pinilla López
-- Original message From: Josy Date: 12/10/17 8:40 PM (GMT+01:00) To: David Turner, dilla...@redhat.com Cc: ceph-users Subject: Re: [ceph-users] Erasure coding with RBD Thank you for your reply. I created an erasure coded pool 'ecpool' and a replicated pool to

Re: [ceph-users] Erasure coding with RBD

2017-10-12 Thread Jason Dillaman
Yes -- the "image" will be in the replicated pool and its data blocks will be in the specified data pool. An "rbd info" against the image will show the data pool. On Thu, Oct 12, 2017 at 2:40 PM, Josy wrote: > Thank you for your reply. > > I created an erasure coded pool 'ecpool' and a replicated

Re: [ceph-users] Erasure coding with RBD

2017-10-12 Thread Josy
Thank you for your reply. I created an erasure coded pool 'ecpool' and a replicated pool to store metadata, 'ec_rep_pool', and created the image as you mentioned: rbd create --size 20G --data-pool ecpool ec_rep_pool/ectestimage1 But the image seems to be created in ec_rep_pool [

Re: [ceph-users] Erasure coding with RBD

2017-10-12 Thread David Turner
Here is your friend. http://docs.ceph.com/docs/luminous/rados/operations/erasure-code/#erasure-coding-with-overwrites On Thu, Oct 12, 2017 at 2:09 PM Jason Dillaman wrote: > The image metadata still needs to live in a replicated data pool -- > only the data blocks can be stored in an EC pool. Th

Re: [ceph-users] Erasure coding with RBD

2017-10-12 Thread Jason Dillaman
The image metadata still needs to live in a replicated data pool -- only the data blocks can be stored in an EC pool. Therefore, when creating the image, you should provide the "--data-pool" option to specify the EC pool name. On Thu, Oct 12, 2017 at 2:06 PM, Josy wrote: > Hi, > > I am trying
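
Put together as commands, the flow Jason describes would look roughly like this (pool/image names are placeholders; allow_ec_overwrites additionally requires BlueStore OSDs):

    ceph osd pool create ecpool 64 64 erasure
    ceph osd pool set ecpool allow_ec_overwrites true
    rbd create --size 20G --data-pool ecpool replpool/testimage
    rbd info replpool/testimage    # should list "data_pool: ecpool"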

[ceph-users] Erasure coding with RBD

2017-10-12 Thread Josy
Hi, I am trying to setup an erasure coded pool with an rbd image. The ceph version is Luminous 12.2.1, and I understand that, since Luminous, RBD and CephFS can store their data in an erasure coded pool without use of cache tiering. I created a pool ecpool and when trying to create an rbd image, gets

Re: [ceph-users] Erasure Coding: Wrong content of data and coding chunks?

2017-06-20 Thread Jonas Jaszkowic
> On 20.06.2017 at 16:06, David Turner wrote: > > Ceph is a large scale storage system. You're hoping that it is going to care > about and split files that are 9 bytes in size. Do this same test with a 4MB > file and see how it splits up the content of the file. > > Makes sense. I was just

Re: [ceph-users] Erasure Coding: Wrong content of data and coding chunks?

2017-06-20 Thread David Turner
Ceph is a large scale storage system. You're hoping that it is going to care about and split files that are 9 bytes in size. Do this same test with a 4MB file and see how it splits up the content of the file. On Tue, Jun 20, 2017, 6:48 AM Jonas Jaszkowic wrote: > I am currently evaluating erasur

[ceph-users] Erasure Coding: Wrong content of data and coding chunks?

2017-06-20 Thread Jonas Jaszkowic
I am currently evaluating erasure coding in Ceph. I wanted to know where my data and coding chunks are located, so I followed the example at http://docs.ceph.com/docs/master/rados/operations/erasure-code/#creating-a-sample-erasure-coded-pool

Re: [ceph-users] Erasure Coding: Determine location of data and coding chunks

2017-06-20 Thread Jonas Jaszkowic
Thank you! I already knew about the ceph osd map command, but I am not sure how to interpret the output. For example, on the described erasure coded pool, the output is: osdmap e30 pool 'ecpool' (1) object 'sample-obj' -> pg 1.fa0b8566 (1.66) -> up ([1,4,2,0,3], p1) acting ([1,4,2,0,3], p1) Thi
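
If I read the EC placement rules correctly, the acting set in that output is ordered by shard index, so with k=2, m=3 it decodes as follows (worth double-checking with ceph pg map):

    # up ([1,4,2,0,3], p1), k=2 data + m=3 coding chunks:
    #   shard 0 (data)   -> osd.1   (also the primary, p1)
    #   shard 1 (data)   -> osd.4
    #   shard 2 (coding) -> osd.2
    #   shard 3 (coding) -> osd.0
    #   shard 4 (coding) -> osd.3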

Re: [ceph-users] Erasure Coding: Determine location of data and coding chunks

2017-06-19 Thread Marko Sluga
Hi Jonas, ceph osd map [poolname] [objectname] should provide you with more information about where the object and chunks are stored on the cluster. Regards, Marko Sluga Independent Trainer W: http://markocloud.com T: +1 (647) 546-4365 L + M Consulting Inc. Ste 212, 2121 L

[ceph-users] Erasure Coding: Determine location of data and coding chunks

2017-06-19 Thread Jonas Jaszkowic
Hello all, I have a simple question: I have an erasure coded pool with k = 2 data chunks and m = 3 coding chunks, how can I determine the location of the data and coding chunks? Given an object A that is stored on n = k + m different OSDs I want to find out where (i.e. on which OSDs) the data c

Re: [ceph-users] Erasure coding general information Openstack+kvm virtual machine block storage

2016-09-16 Thread Erick Perez - Quadrian Enterprises
Thanks Wes and Josh for your answers. So, for more production-like environments and more tested procedures in case of failures, the default replication seems to be the way to go. Perhaps in the next release we will add a storage node with EC. Thanks, On Fri, Sep 16, 2016 at 7:25 AM, Wes Dillingham w

Re: [ceph-users] Erasure coding general information Openstack+kvm virtual machine block storage

2016-09-16 Thread Wes Dillingham
Erick, You can use erasure coding but it has to be fronted by a replicated cache tier, or so states the documentation, I have never set up this configuration, and always opt to use RBD directly on replicated pools. https://access.redhat.com/documentation/en/red-hat-ceph-storage/1.3/paged/storage-
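
For completeness, a writeback cache tier in front of an EC pool (the approach Wes mentions, required before EC overwrites existed) is wired up roughly like this, with pool names as placeholders:

    ceph osd tier add ecpool hotpool
    ceph osd tier cache-mode hotpool writeback
    ceph osd tier set-overlay ecpool hotpool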

Re: [ceph-users] Erasure coding general information Openstack+kvm virtual machine block storage

2016-09-15 Thread Josh Durgin
On 09/16/2016 09:46 AM, Erick Perez - Quadrian Enterprises wrote: Can someone point me to a thread or site that uses ceph+erasure coding to serve block storage for Virtual Machines running with Openstack+KVM? All references that I found are using erasure coding for cold data or *not* VM block acc

[ceph-users] Erasure coding general information Openstack+kvm virtual machine block storage

2016-09-15 Thread Erick Perez - Quadrian Enterprises
Can someone point me to a thread or site that uses ceph+erasure coding to serve block storage for Virtual Machines running with Openstack+KVM? All references that I found are using erasure coding for cold data or *not* VM block access. thanks, -- - Erick

Re: [ceph-users] Erasure coding after striping

2016-04-18 Thread Chandan Kumar Singh
Thanks Huang Jun. I wanted to know if it is common practice for users to combine striping and an EC pool. Are there any pros and cons? On Sat, Apr 16, 2016 at 10:01 AM, huang jun wrote: > for striped objects, the main goodness is your cluster's OSDs capacity > usage will get more balanced, > and write

Re: [ceph-users] Erasure coding after striping

2016-04-15 Thread huang jun
for striped objects, the main benefit is that your cluster's OSD capacity usage gets more balanced, and write/read requests spread across the whole cluster, which improves w/r performance. 2016-04-15 22:17 GMT+08:00 Chandan Kumar Singh : > Hi > > Is it a good practice to store striped ob

[ceph-users] Erasure coding after striping

2016-04-15 Thread Chandan Kumar Singh
Hi Is it a good practice to store striped objects in a EC pool? If yes, what are the pros and cons of such a pattern? Regards Chandan ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

[ceph-users] Erasure coding for small files vs large files

2016-04-15 Thread Chandan Kumar Singh
Hi I am evaluating EC for a ceph cluster where the objects are mostly of smaller sizes (< 1 MB) and occasionally large (~ 100 - 500 MB). Besides the general performance penalty of EC, is there any additional disadvantage of storing small objects along with large objects in the same EC pool? More gene

[ceph-users] Erasure Coding pool stuck at creation because of pre-existing crush ruleset ?

2015-09-30 Thread SCHAER Frederic
Hi, With 5 hosts, I could successfully create pools with k=4 and m=1, with the failure domain being set to "host". With 6 hosts, I could also create k=4, m=1 EC pools. But I suddenly failed with 6 hosts and k=5, m=1, or k=4, m=2: the PGs were never created. I reused the pool name for my tests, th

Re: [ceph-users] Erasure Coding + CephFS, objects not being deleted after rm

2015-06-12 Thread Gregory Farnum
On Fri, Jun 12, 2015 at 11:59 AM, Lincoln Bryant wrote: > Thanks John, Greg. > > If I understand this correctly, then, doing this: > rados -p hotpool cache-flush-evict-all > should start appropriately deleting objects from the cache pool. I just > started one up, and that seems to be work

Re: [ceph-users] Erasure Coding + CephFS, objects not being deleted after rm

2015-06-12 Thread Lincoln Bryant
Thanks John, Greg. If I understand this correctly, then, doing this: rados -p hotpool cache-flush-evict-all should start appropriately deleting objects from the cache pool. I just started one up, and that seems to be working. Otherwise, the cache's configured timeouts/limits should get th

Re: [ceph-users] Erasure Coding + CephFS, objects not being deleted after rm

2015-06-12 Thread Gregory Farnum
On Fri, Jun 12, 2015 at 11:07 AM, John Spray wrote: > > Just had a go at reproducing this, and yeah, the behaviour is weird. Our > automated testing for cephfs doesn't include any cache tiering, so this is a > useful exercise! > > With a writeback overlay cache tier pool on an EC pool, I write a

Re: [ceph-users] Erasure Coding + CephFS, objects not being deleted after rm

2015-06-12 Thread John Spray
Just had a go at reproducing this, and yeah, the behaviour is weird. Our automated testing for cephfs doesn't include any cache tiering, so this is a useful exercise! With a writeback overlay cache tier pool on an EC pool, I write a bunch of files, then do a rados cache-flush-evict-all, the

[ceph-users] Erasure Coding + CephFS, objects not being deleted after rm

2015-06-12 Thread Lincoln Bryant
Greetings experts, I've got a test set up with CephFS configured to use an erasure coded pool + cache tier on 0.94.2. I have been writing lots of data to fill the cache to observe the behavior and performance when it starts evicting objects to the erasure-coded pool. The thing I have noticed

Re: [ceph-users] Erasure Coding : gf-Complete

2015-04-24 Thread Loic Dachary
.@dachary.org] > Sent: Thursday, April 23, 2015 2:47 PM > To: Garg, Pankaj; ceph-users@lists.ceph.com > Subject: Re: [ceph-users] Erasure Coding: gf-Complete > Hi, > The ARMv8 optimizations for gf-complete are in Hammer, not in Firefly. The libec_jerasure*.so

Re: [ceph-users] Erasure Coding : gf-Complete

2015-04-23 Thread Garg, Pankaj
ns or do I have to select a particular one to take advantage of them? -Pankaj -Original Message- From: Loic Dachary [mailto:l...@dachary.org] Sent: Thursday, April 23, 2015 2:47 PM To: Garg, Pankaj; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Erasure Coding : gf-Complete Hi,

Re: [ceph-users] Erasure Coding : gf-Complete

2015-04-23 Thread Loic Dachary
Hi, The ARMv8 optimizations for gf-complete are in Hammer, not in Firefly. The libec_jerasure*.so plugin contains gf-complete. Cheers On 23/04/2015 23:29, Garg, Pankaj wrote: > Hi, > > > > I would like to use the gf-complete library for Erasure coding since it has > some ARM v8 based optim
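
Since gf-complete is bundled into the jerasure plugin per the above, the plugin and technique are selected per profile; the profile name and parameters here are only an example:

    ceph osd erasure-code-profile set armprofile plugin=jerasure technique=reed_sol_van k=2 m=1
    ceph osd erasure-code-profile get armprofile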

[ceph-users] Erasure Coding : gf-Complete

2015-04-23 Thread Garg, Pankaj
Hi, I would like to use the gf-complete library for Erasure coding since it has some ARM v8 based optimizations. I see that the code is part of my tree, but not sure if these libraries are included in the final build. I only see the libec_jerasure*.so in my libs folder after installation. Are th

Re: [ceph-users] Erasure coding

2015-03-25 Thread Tom Verdaat
Great info! Many thanks! Tom 2015-03-25 13:30 GMT+01:00 Loic Dachary : > Hi Tom, > > On 25/03/2015 11:31, Tom Verdaat wrote:> Hi guys, > > > > We've got a very small Ceph cluster (3 hosts, 5 OSD's each for cold > data) that we intend to grow later on as more storage is needed. We would > very mu

Re: [ceph-users] Erasure coding

2015-03-25 Thread Loic Dachary
Hi Tom, On 25/03/2015 11:31, Tom Verdaat wrote:> Hi guys, > > We've got a very small Ceph cluster (3 hosts, 5 OSD's each for cold data) > that we intend to grow later on as more storage is needed. We would very much > like to use Erasure Coding for some pools but are facing some challenges > r

[ceph-users] Erasure coding

2015-03-25 Thread Tom Verdaat
Hi guys, We've got a very small Ceph cluster (3 hosts, 5 OSD's each for cold data) that we intend to grow later on as more storage is needed. We would very much like to use Erasure Coding for some pools but are facing some challenges regarding the optimal initial profile “replication” settings giv

Re: [ceph-users] Erasure Coding CPU Overhead Data

2015-02-23 Thread Mark Nelson
Many thanks, Nick -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark Nelson Sent: 21 February 2015 18:23 To: ceph-users@lists.ceph.com Cc: ceph-devel Subject: [ceph-users] Erasure Coding CPU Overhead Data Hi All, Last spring at the tail end of

Re: [ceph-users] Erasure coding parameters change

2014-11-09 Thread Loic Dachary
Hi, On 10/11/2014 07:20, ZHOU Yuan wrote: > Hi Loic, > > On Mon, Nov 10, 2014 at 6:44 AM, Loic Dachary wrote: >> >> >> On 05/11/2014 13:57, Jan Pekař wrote:> Hi, >>> >>> is there any possibility to change erasure coding pool parameters ie k and >>> m values on the fly? I want to add more disks

Re: [ceph-users] Erasure coding parameters change

2014-11-09 Thread ZHOU Yuan
Hi Loic, On Mon, Nov 10, 2014 at 6:44 AM, Loic Dachary wrote: > > > On 05/11/2014 13:57, Jan Pekař wrote:> Hi, >> >> is there any possibility to change erasure coding pool parameters ie k and m >> values on the fly? I want to add more disks to existing erasure pool and >> change redundancy leve

Re: [ceph-users] Erasure coding parameters change

2014-11-09 Thread Loic Dachary
On 05/11/2014 13:57, Jan Pekař wrote: > Hi, > > is there any possibility to change erasure coding pool parameters, i.e. k and m > values, on the fly? I want to add more disks to existing erasure pool and > change redundancy level. I cannot find it in docs. Hi, It is not possible to change k/m on
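
Since k/m are fixed for the lifetime of a pool, the usual workaround is a new pool plus data migration; a rough, untested sketch with hypothetical names (rados cppool has caveats, e.g. around snapshots, so test before trusting it):

    ceph osd erasure-code-profile set newprofile k=6 m=3 crush-failure-domain=host
    ceph osd pool create ecpool_new 128 128 erasure newprofile
    rados cppool ecpool_old ecpool_new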

[ceph-users] Erasure coding parameters change

2014-11-09 Thread Jan Pekař
Hi, is there any possibility to change erasure coding pool parameters, i.e. k and m values, on the fly? I want to add more disks to existing erasure pool and change redundancy level. I cannot find it in docs. Changing the erasure-code-profile is not working, so I assume it is only a template for newl

Re: [ceph-users] erasure coding parameter's choice and performance

2014-06-21 Thread Loic Dachary
Hi, "erasure-code: implement alignment on chunk sizes" https://github.com/ceph/ceph/pull/1890 should resolve the unnecessary overhead for Cauchy and will hopefully be merged soon. Cheers On 21/06/2014 14:57, David Z wrote: > Hi Loic, > > Thanks for your reply. > > I actually used the tool y

Re: [ceph-users] erasure coding parameter's choice and performance

2014-06-21 Thread David Z
Hi Loic, Thanks for your reply. I actually used the tool you mentioned for our evaluation of cauchy_orig. I didn't compare reed_sol_van with cauchy_orig. I plan to do that now. But we still care about padding size because there is storage overhead. As for reed_sol_van, I don't think we can

Re: [ceph-users] erasure coding parameter's choice and performance

2014-06-20 Thread Loic Dachary
Hi David, On 20/06/2014 14:15, David Z wrote: > > Hi Loic, > > We are evaluating erasure coding and we want to tolerate 3 chunk failures. > Then we choose cauchy_orig because RS's performance should be no better than > cauchy_orig and other algorithms are optimized for raid6 mode. > > For cauc
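
For reference, a cauchy profile is selected through the jerasure plugin; k here is an assumption (the thread only fixes m=3), and packetsize is the knob that interacts with the chunk alignment/padding being discussed:

    ceph osd erasure-code-profile set cauchyprofile plugin=jerasure technique=cauchy_orig k=4 m=3 packetsize=2048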

Re: [ceph-users] Erasure coding

2014-05-19 Thread yalla.gnan.kumar
Cc: Loic Dachary; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Erasure coding I have also added a big part of Loic's discussion of the architecture into the Ceph architecture document here: http://ceph.com/docs/master/architecture/#erasure-coding On Mon, May 19, 2014 at 5:

Re: [ceph-users] Erasure coding

2014-05-19 Thread John Wilkins
Kumar > -Original Message- > From: Loic Dachary [mailto:l...@dachary.org] > Sent: Monday, May 19, 2014 6:04 PM > To: Gnan Kumar, Yalla; ceph-users@lists.ceph.com > Subject: Re: [ceph-users] Erasure coding > Hi, > The general idea is to preserve resilience

Re: [ceph-users] Erasure coding

2014-05-19 Thread yalla.gnan.kumar
Hi Loic, Thanks for the reply. Thanks Kumar -Original Message- From: Loic Dachary [mailto:l...@dachary.org] Sent: Monday, May 19, 2014 6:04 PM To: Gnan Kumar, Yalla; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Erasure coding Hi, The general idea to preserve resilience but

Re: [ceph-users] Erasure coding

2014-05-19 Thread Loic Dachary
Hi, The general idea is to preserve resilience while saving space compared to replication. It costs more in terms of CPU and network. You will find a short introduction here: https://wiki.ceph.com/Planning/Blueprints/Dumpling/Erasure_encoding_as_a_storage_backend https://wiki.ceph.com/Planning/Blue

[ceph-users] Erasure coding

2014-05-19 Thread yalla.gnan.kumar
Hi All, What exactly is erasure coding and why is it used in Ceph? I could not get enough explanatory information from the documentation. Thanks Kumar This message is for the designated recipient only and may contain privileged, proprietary, or otherwise con

Re: [ceph-users] erasure coding testing

2014-03-17 Thread Loic Dachary
Hi Gruher, You can wait for 0.78 this week as Ian suggested. If you feel more adventurous there are various ways to test and contribute back as described here http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/18760 Cheers On 17/03/2014 03:11, Gruher, Joseph R wrote: > Hey all- > >

Re: [ceph-users] erasure coding testing

2014-03-16 Thread Gruher, Joseph R
Great, thanks! I'll watch (hope) for an update later this week. Appreciate the rapid response. -Joe From: Ian Colle [mailto:ian.co...@inktank.com] Sent: Sunday, March 16, 2014 7:22 PM To: Gruher, Joseph R; ceph-users@lists.ceph.com Subject: Re: [ceph-users] erasure coding testing Joe,

Re: [ceph-users] erasure coding testing

2014-03-16 Thread Ian Colle
Joe, We're pushing to get 0.78 out this week, which will allow you to play with EC. Ian R. Colle Director of Engineering Inktank Delivering the Future of Storage http://www.linkedin.com/in/ircolle http://www.twitter.com/ircolle Cell: +1.303.601.7713 Email: i...@inktank.com On 3/16/14, 8:11 PM, "

[ceph-users] erasure coding testing

2014-03-16 Thread Gruher, Joseph R
Hey all- Can anyone tell me, if I install the latest development release (looks like it is 0.77) can I enable and test erasure coding? Or do I have to wait for the actual Firefly release? I don't want to deploy anything for production, basically I just want to do some lab testing to see what