[ceph-users] python binding - snap rollback - progress reporting

2015-11-08 Thread Nikola Ciprich
Hello, I'd like to ask: I'm using the Python RBD/rados bindings. Everything works well for me; the only thing I'd like to improve is snapshot rollback. As the operation is quite time consuming, I would like to report its progress. Is this somehow possible, even at the cost of implementing the whole rollback …
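
Since the Python binding's Image.rollback_to_snap() takes no progress callback (the C API's rbd_snap_rollback_with_progress() is not exposed there), one workaround along the lines the poster suggests is to reimplement the rollback as a chunked copy from the snapshot to the head, reporting progress per chunk. A minimal sketch, assuming pool 'rbd', image 'myimage', and snapshot 'mysnap' as placeholders; note this copies zeros over unallocated extents and is not equivalent to a true rollback for clones or resized images:

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')           # pool name is an assumption

CHUNK = 4 * 1024 * 1024                     # copy in 4 MiB chunks

snap = rbd.Image(ioctx, 'myimage', snapshot='mysnap', read_only=True)
head = rbd.Image(ioctx, 'myimage')
try:
    size = snap.size()
    copied = 0
    while copied < size:
        data = snap.read(copied, min(CHUNK, size - copied))
        head.write(data, copied)            # overwrite the head with snapshot data
        copied += len(data)
        print('rollback progress: %.1f%%' % (100.0 * copied / size))
finally:
    snap.close()
    head.close()
    ioctx.close()
    cluster.shutdown()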

Re: [ceph-users] Issue activating OSDs

2015-11-08 Thread Oliver Dzombic
Hi Robert, have you already tried ceph-deploy gatherkeys? And have you already tried rebooting the nodes completely? Best regards, Oliver Dzombic
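
For reference, a sketch of the suggested command, run from the ceph-deploy admin directory ('mon-node1' is a placeholder monitor host); gatherkeys fetches the cluster's admin and bootstrap keyrings back into that directory:

ceph-deploy gatherkeys mon-node1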

[ceph-users] Multiple Cache Pool with Single Storage Pool

2015-11-08 Thread Lazuardi Nasution
Hi, I'm new to Ceph cache tiering. Is it possible to have multiple cache pools with a single storage pool backend? For example, I have some compute nodes, each with its own local SSDs. I want each compute node to have its own cache using its own local SSDs. The goal is to minimize load on the storage pool backend …
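
For context on the mechanism in question: cache tiering is configured per base pool, cluster-wide, roughly as in the sketch below (pool names are placeholders), and a base pool takes a single cache overlay, so per-compute-node caches would need something outside this mechanism:

ceph osd tier add cold-storage hot-cache
ceph osd tier cache-mode hot-cache writeback
ceph osd tier set-overlay cold-storage hot-cache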

Re: [ceph-users] Ceph RBD LIO ESXi Advice?

2015-11-08 Thread Timofey Titovets
Big thanks, Nick. Anyway, I'm now seeing hangs of both ESXi and the proxy. /* Proxy VM: Ubuntu 15.10 / kernel 4.3 / LIO / Ceph 0.94 / ESXi 6.0 software iSCSI */ I've moved to an NFS-RBD proxy and am now trying to make it HA. 2015-11-07 18:59 GMT+03:00 Nick Fisk: > Hi Timofey, > > You are most likely experiencing the effect …

Re: [ceph-users] Ceph RBD LIO ESXi Advice?

2015-11-08 Thread Nick Fisk
You might find NFS even slower, as you will probably have two IOs for every ESXi IO due to the journal of whatever FS you are using on the NFS server. If you are not seeing it even slower, you most likely have the NFS server set to run in async mode.
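
To illustrate the sync/async distinction, a hedged /etc/exports sketch (paths and client network are placeholders); async acknowledges writes before they reach stable storage, which hides the journal's extra IO at the cost of losing data if the NFS server crashes:

# async: faster, but unsafe if the server crashes
/srv/nfs/rbd 10.0.0.0/24(rw,async,no_root_squash)
# sync: every write reaches stable storage before it is acknowledged
/srv/nfs/rbd-safe 10.0.0.0/24(rw,sync,no_root_squash)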

Re: [ceph-users] Erasure coded pools and 'feature set mismatch' issue

2015-11-08 Thread Gregory Farnum
With that release it shouldn't be the EC pool causing trouble; it's the CRUSH tunables also mentioned in that thread. Instructions should be available in the docs for using older tunables that are compatible with kernel 3.13. -Greg On Saturday, November 7, 2015, Bogdan SOLGA wrote: > Hello, everyone …

[ceph-users] Radosgw admin MNG Tools to create and report usage of Object accounts

2015-11-08 Thread Florent MONTHEL
Hi Cephers, I've just released a Python toolkit to report the usage and inventory of buckets / accounts / S3 and Swift keys. In the same way, we have a script to create accounts and S3/Swift keys (and initial buckets). The tool uses the rgwadmin Python module: https://github.com/fmonthel/radosgw-admin-mn
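
For readers of the archive, a hedged sketch of driving the same admin API through the rgwadmin module; the endpoint, credentials, uid, and the exact method names and parameters here are assumptions based on that module's wrappers, not code from the announced toolkit:

from rgwadmin import RGWAdmin

# Placeholders: an admin endpoint plus keys of a user holding the
# 'users' and 'usage' admin caps.
rgw = RGWAdmin(access_key='ACCESS', secret_key='SECRET',
               server='rgw.example.com')

# Inventory: look up one user and their S3/Swift keys.
user = rgw.get_user(uid='johndoe')
print(user['display_name'], user['keys'])

# Usage report across all accounts.
usage = rgw.get_usage(show_entries=True, show_summary=True)
print(usage['summary'])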

Re: [ceph-users] Federated gateways

2015-11-08 Thread WD_Hwang
Hi, Craig: I used 10 VMs for federated gateway testing. There are 5 nodes for us-east, and the others are for us-west. The two zones are independent. Before configuring the region and zones, I had the two zones with the same 'client.radosgw.[zone]' settings in ceph.conf.
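
As an illustration of the kind of per-zone section being described, a hedged ceph.conf sketch (host, zone/region names, and paths are placeholders, following the pre-Jewel federated-gateway options):

[client.radosgw.us-east-1]
host = rgw-east-1
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw region = us
rgw zone = us-east
rgw dns name = us-east.example.com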

Re: [ceph-users] v9.2.0 Infernalis released

2015-11-08 Thread Alexandre DERUMIER
Hi, the Debian repository seems to be missing the librbd1 package for Debian Jessie: http://download.ceph.com/debian-infernalis/pool/main/c/ceph/ (the Ubuntu Trusty librbd1 is present). - Original message - From: "Sage Weil" To: ceph-annou...@ceph.com, "ceph-devel", "ceph-users", ceph-maintain...@ceph.com Sent …

Re: [ceph-users] v9.2.0 Infernalis released

2015-11-08 Thread Francois Lafont
Hi, I have just upgraded a cluster from 9.1.0 to 9.2.0. All seems to be well, except that I get this little error message:
~# ceph tell mon.* version --format plain
mon.1: ceph version 9.2.0 (17df5d2948d929e997b9d320b228caffc8314e58)
mon.2: ceph version 9.2.0 (17df5d2948d929e997b9d320b228caffc8314e58 …

Re: [ceph-users] v9.2.0 Infernalis released

2015-11-08 Thread Francois Lafont
On 09/11/2015 06:28, Francois Lafont wrote:
> I have just upgraded a cluster to 9.2.0 from 9.1.0.
> All seems to be well except I have this little error
> message :
>
> ~# ceph tell mon.* version --format plain
> mon.1: ceph version 9.2.0 (17df5d2948d929e997b9d320b228caffc8314e58)
> mon.2: ceph …

Re: [ceph-users] Erasure coded pools and 'feature set mismatch' issue

2015-11-08 Thread Bogdan SOLGA
Hello Greg! Thank you for your advice, first of all! I have tried to adjust the Ceph tunables detailed on this page, but without success. I have tried both 'ceph osd crush tunables optimal' and 'ceph osd crush tunables hammer', but …

Re: [ceph-users] Erasure coded pools and 'feature set mismatch' issue

2015-11-08 Thread Adam Tygart
The problem is that "hammer" tunables (i.e. "optimal" in v0.94.x) are incompatible with the kernel interfaces before Linux 4.1 (namely due to straw2 buckets). To make use of the kernel interfaces in 3.13, I believe you'll need "firefly" tunables. -- Adam On Sun, Nov 8, 2015 at 11:48 PM, Bogdan SOLGA wrote: …
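
Per this reply, the concrete command for clients on kernel 3.13 would be along these lines:

ceph osd crush tunables firefly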

Re: [ceph-users] v9.2.0 Infernalis released

2015-11-08 Thread Dan van der Ster
On Mon, Nov 9, 2015 at 6:39 AM, Francois Lafont wrote:
> 0: 10.0.2.101:6789/0 mon.1
> 1: 10.0.2.102:6789/0 mon.2
> 2: 10.0.2.103:6789/0 mon.3
Mon rank vs. mon id is super confusing, especially if you use a number for the mon id. In your case:
0 -> mon.0 (which has id mon.1)
1 -> mon.1 (which has …
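
For reference, the rank/address/name lines quoted above are what the monmap prints; the mapping can be viewed on a live cluster with:

ceph mon dump

Each line shows rank, address, and name, which makes the rank-vs-id mismatch described above easy to spot.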