Re: [ceph-users] How safe is ceph pg repair these days?

2017-02-18 Thread Nick Fisk
From what I understand, Jewel+ Ceph has the concept of an authoritative shard, so in the case of a 3x replica pool it will notice that 2 replicas match and one doesn't, and use one of the good replicas. However, in a 2x pool you're out of luck. If someone could confirm my suspicions that
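The authoritative-shard selection Nick describes is essentially a majority vote across replica checksums. A minimal sketch of that idea (the function and input names are mine, not Ceph's internals):

```python
from collections import Counter

def pick_authoritative(shards):
    """Majority vote over per-replica checksums (sketch of the Jewel+
    behaviour described above). Returns the index of a good replica,
    or None when there is no strict majority -- e.g. a 2x pool with
    one corrupt copy, where good and bad cannot be told apart."""
    counts = Counter(shards)
    checksum, votes = counts.most_common(1)[0]
    if votes * 2 <= len(shards):
        return None  # tie or minority: no authoritative copy
    return shards.index(checksum)
```

With 3 replicas and one mismatch this picks a matching copy; with 2 replicas that disagree it returns None, matching the "out of luck" case.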

[ceph-users] help with crush rule

2017-02-18 Thread Maged Mokhtar
Hi, I need to support a small cluster with 3 hosts and 3 replicas, such that in normal operation each replica is placed on a separate host, but if one host dies its replicas can be stored on separate OSDs on the 2 live hosts. I was hoping to write a rule that in case it co
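For reference, the standard one-replica-per-host rule looks like this (a sketch using Jewel-era rule syntax; the ruleset id and root name are illustrative). Note that with this rule, if a host dies CRUSH returns only two OSDs and the PG stays degraded rather than doubling up on a surviving host, which is exactly the behaviour the question wants to change:

```
rule replicated_per_host {
    ruleset 1
    type replicated
    min_size 2
    max_size 3
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
```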

Re: [ceph-users] Experience with 5k RPM/archive HDDs

2017-02-18 Thread rick stehno
I work for Seagate and have run over a hundred tests using SMR 8TB disks in a cluster. Whether an SMR HDD is the best choice depends entirely on your access pattern. Remember that SMR HDDs don't perform well on random writes, but are excellent for reads and sequential writes. I have many tests whe

Re: [ceph-users] Passing LUA script via python rados execute

2017-02-18 Thread Noah Watkins
Hi Nick, The short answer to your question is that, barring a hole in the sandbox (or loading a modified version of the plugin, which is an option), any dependencies need to be packaged up into the request and can't be loaded dynamically from the host file system. In version 1 of the Lua plugin
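The "package dependencies into the request" constraint can be sketched client-side like this. This is a hedged illustration only: the field names are my own, not the actual cls_lua wire format.

```python
import json

def package_lua_request(script, handler, deps=()):
    """Bundle a Lua script together with the source of its dependencies
    into a single request payload, since the sandbox cannot load modules
    from the host filesystem. Field names here are illustrative, not
    the real cls_lua request format."""
    source = "\n".join(list(deps) + [script])
    return json.dumps({"script": source, "handler": handler})
```

The idea is simply that everything the handler needs must travel inline with the request itself.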

Re: [ceph-users] Passing LUA script via python rados execute

2017-02-18 Thread Nick Fisk
Hi Noah, Thanks for confirming that. My use case is more for learning, rather than an actual working project. I thought it was a really elegant way of harnessing the distributed power of Ceph and wanted to learn more. For this example I wanted to do something like calculate an MD5 hash of all
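Client-side, the digest Nick has in mind is straightforward; the appeal of the Lua route is running the same computation on the OSD, next to the data. A sketch of the per-object part (names are illustrative):

```python
import hashlib

def md5_of_object(chunks):
    """MD5 over an object's data, streamed chunk by chunk. Illustrative
    only: in the cls/Lua approach the equivalent digest would be computed
    inside the OSD rather than on the client."""
    h = hashlib.md5()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()

# md5_of_object([b"hel", b"lo"]) -> "5d41402abc4b2a76b9719d911017c592"
```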

Re: [ceph-users] Passing LUA script via python rados execute

2017-02-18 Thread Noah Watkins
On Sat, Feb 18, 2017 at 2:39 PM, Nick Fisk wrote: > My use case is more for learning, rather than an actual working project. I > thought it was a really elegant way of harnessing the distributed power of > Ceph and wanted to learn more. For this example I wanted to do something like > calculat

Re: [ceph-users] Experience with 5k RPM/archive HDDs

2017-02-18 Thread Maxime Guyot
Hi Rick, If you have some numbers and info on the setup, that would be greatly appreciated. I noticed Wido's blog post about SMR drives: https://blog.widodh.nl/2017/02/do-not-use-smr-disks-with-ceph/ so I guess he ran into some problems? Cheers, Maxime On Feb 18, 2017 23:04, rick stehno w