[ceph-users] BUG 14154 on erasure coded PG

2016-09-09 Thread Gerd Jakobovitsch
Dear all, I am using an erasure-coded pool, and I have run into a situation where I'm not able to recover a PG. The OSDs that contain this PG keep crashing, with the same behavior reported at http://tracker.ceph.com/issues/14154. I'm using ceph 0.94.9 (it first appeared on 0.94.7; an upgrade didn't …

Re: [ceph-users] Recovering full OSD

2016-08-08 Thread Gerd Jakobovitsch

Re: [ceph-users] Lost access when removing cache pool overlay

2016-01-29 Thread Gerd Jakobovitsch
… cache tier, it could no longer access the pool. - Robert LeBlanc On Fri, Jan 29, 2016 at 8:47 AM, Gerd Jakobovitsch wrote: Dear all, I had to move the .rgw.buckets.index pool to another structure; therefore, I …

[ceph-users] Lost access when removing cache pool overlay

2016-01-29 Thread Gerd Jakobovitsch
Dear all, I had to move the .rgw.buckets.index pool to another structure; therefore, I created a new pool, .rgw.buckets.index.new; added the old pool as a cache pool, and flushed the data. Up to this moment everything was OK. With radosgw -p df, I saw the objects moving to the new pool; the moved …
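The migration described here can be sketched with hammer-era CLI commands. The pool names come from the message; the PG counts, flags, and exact ordering are assumptions, not a record of what the thread actually ran:

```shell
# Rough sketch of the cache-tier migration described above (hammer-era CLI).
# Pool names are from the message; PG counts and flags are assumptions.
ceph osd pool create .rgw.buckets.index.new 64 64
# Put the old (populated) pool in front of the new one as a cache tier:
ceph osd tier add .rgw.buckets.index.new .rgw.buckets.index --force-nonempty
ceph osd tier cache-mode .rgw.buckets.index forward
ceph osd tier set-overlay .rgw.buckets.index.new .rgw.buckets.index
# Flush every object from the old pool down into the new pool:
rados -p .rgw.buckets.index cache-flush-evict-all
# The thread reports access being lost around this step:
ceph osd tier remove-overlay .rgw.buckets.index.new
ceph osd tier remove .rgw.buckets.index.new .rgw.buckets.index
```

Note that radosgw resolves its index pool by name from the zone configuration, so after a migration like this the zone usually has to be updated (or the pool renamed) before the gateway can find the data again.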

[ceph-users] leveldb on OSD with missing file after hard boot

2016-01-27 Thread Gerd Jakobovitsch
Hello all, I had a hard reset on a ceph node, and one of the OSDs is not starting due to a leveldb error. At that moment the node was trying to start up, but no new data was actually being written: 2016-01-27 12:00:37.068431 7f367f654880 0 ceph version 0.94.5 (9764da52395923e0b32908d83a9f73 …
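One recovery avenue for this kind of corruption, not taken from the thread, is leveldb's own repair routine, run only after stopping the OSD and backing the store up. The OSD id, the data path, and the availability of the py-leveldb module are all assumptions here:

```shell
# Hypothetical repair attempt for a corrupted OSD omap leveldb.
# osd.12 and the default hammer data path are placeholders.
service ceph stop osd.12
cp -a /var/lib/ceph/osd/ceph-12/current/omap /root/omap.backup
# py-leveldb exposes leveldb's RepairDB, which rebuilds the store from
# whatever .sst/.log files survive:
python -c "import leveldb; leveldb.RepairDB('/var/lib/ceph/osd/ceph-12/current/omap')"
service ceph start osd.12
```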

[ceph-users] One object in .rgw.buckets.index causes systemic instability

2015-11-03 Thread Gerd Jakobovitsch
Dear all, I have a cluster running hammer (0.94.5) with 5 nodes. The main usage is S3-compatible object storage. I have run into a very troublesome problem on this ceph cluster: a single object in .rgw.buckets.index is not responding to requests and takes a very long time to recover …
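Locating where such an object lives might look like the following; the object name below is a made-up placeholder (bucket index objects are typically named `.dir.<bucket-id>`):

```shell
# Map a hypothetical bucket-index object to its PG and acting OSDs:
ceph osd map .rgw.buckets.index .dir.default.12345.1
# An oversized omap is a common culprit for a slow index object;
# count the keys it carries:
rados -p .rgw.buckets.index listomapkeys .dir.default.12345.1 | wc -l
```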

[ceph-users] ISA erasure code plugin in debian

2015-09-15 Thread Gerd Jakobovitsch
Dear all, I have a ceph cluster deployed on debian; I'm trying to test ISA erasure-coded pools, but the plugin (libec_isa.so) is not included in the library. Looking at the packages in the debian Ceph repository, I found a "trusty" package that includes the plugin. Is it created to use with deb …
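For reference, once libec_isa.so is present, an ISA-backed pool is created through an erasure-code profile; the profile name, k/m values, and PG counts below are placeholders:

```shell
# Create an erasure-code profile that selects the ISA plugin:
ceph osd erasure-code-profile set isaprofile plugin=isa k=2 m=1
ceph osd erasure-code-profile get isaprofile
# Use it for a new erasure-coded pool:
ceph osd pool create ecpool 64 64 erasure isaprofile
```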

Re: [ceph-users] PGs stuck stale during data migration and OSD restart

2015-08-31 Thread Gerd Jakobovitsch
On Sat, Aug 29, 2015 at 11:50 AM, Gerd Jakobovitsch wrote: Dear all, During a cluster reconfiguration (change of crush tunables from legacy to TUNABLES2) with large data movement, several OSDs got overloaded and had to be restarted; when the OSDs stabilized, I got a number of PGs marked stale, even …

[ceph-users] PGs stuck stale during data migration and OSD restart

2015-08-29 Thread Gerd Jakobovitsch
Dear all, During a cluster reconfiguration (change of crush tunables from legacy to TUNABLES2) with large data movement, several OSDs got overloaded and had to be restarted; when the OSDs stabilized, I got a number of PGs marked stale, even though all OSDs where this data used to be located show …
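The reconfiguration and the follow-up checks might look like this; TUNABLES2 corresponds to the bobtail tunables profile, and the grep target is an assumption about the health output:

```shell
# Switch the crush tunables from legacy to the TUNABLES2 (bobtail) profile;
# this triggers large-scale data movement:
ceph osd crush tunables bobtail
# Afterwards, look for PGs that stayed stale:
ceph health detail | grep stale
ceph pg dump_stuck stale
```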

Re: [ceph-users] Fwd: OSD crashes after upgrade to 0.80.10

2015-08-12 Thread Gerd Jakobovitsch
… regarding memory on the ceph OSDs? But I still have the problem of the incomplete+inactive PG. Regards, Gerd. On 12-08-2015 10:11, Gerd Jakobovitsch wrote: I tried it; the error propagates to whichever OSD gets the errored PG. For the moment this is my worst problem: I have one PG incomplete+inactive …

Re: [ceph-users] Fwd: OSD crashes after upgrade to 0.80.10

2015-08-12 Thread Gerd Jakobovitsch
… make the cluster healthy first? On Wed, Aug 12, 2015 at 1:31 AM, Gerd Jakobovitsch wrote: Dear all, I run a ceph system with 4 nodes and ~80 OSDs using xfs, currently at 75% usage, running firefly. On Friday I upgraded it from 0.80.8 to 0.80.10, and since then I have had several OSDs crashing …

[ceph-users] Fwd: OSD crashes after upgrade to 0.80.10

2015-08-11 Thread Gerd Jakobovitsch
Dear all, I run a ceph system with 4 nodes and ~80 OSDs using xfs, currently at 75% usage, running firefly. On Friday I upgraded it from 0.80.8 to 0.80.10, and since then I have had several OSDs crashing and never recovering: trying to run one ends up crashing as follows. Is this problem know…

[ceph-users] OSD crashes when starting

2015-08-07 Thread Gerd Jakobovitsch
Dear all, I have hit an unrecoverable crash on one specific OSD, every time I try to restart it. It happened first on firefly 0.80.8; I updated to 0.80.10, but it continues to happen. Due to this failure I have several PGs down+peering that won't recover, even after marking the OSD out. Could som…
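The usual triage for PGs stuck down+peering after an OSD failure can be sketched with placeholder ids (the thread says marking the OSD out was not enough on its own):

```shell
# Find the affected PGs and ask one of them why it is blocked:
ceph health detail | grep -E 'down|peering'
ceph pg 4.3d query            # 4.3d is a placeholder pg id; inspect
                              # "recovery_state" and "blocked by"
ceph osd out 17               # placeholder osd id; per the thread,
                              # this alone did not recover the PGs
```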

[ceph-users] Problem setting tunables for ceph firefly

2014-08-21 Thread Gerd Jakobovitsch
…SH to find a valid mapping but will make less data move." Is there any suggestion on how to handle this? Do I have to set chooseleaf_vary_r to some other value? Will I lose communication with my rbd clients? Or should I return to legacy tunables? Regards, Gerd Ja…
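chooseleaf_vary_r can be set directly on the crush map with crushtool; this is a sketch of that route, not what the thread ended up doing:

```shell
# Export the current crush map, set the tunable, and sanity-check it:
ceph osd getcrushmap -o map.bin
crushtool -i map.bin --set-chooseleaf-vary-r 1 -o map-new.bin
crushtool -i map-new.bin --test --show-statistics
# Injecting the new map starts the rebalance:
ceph osd setcrushmap -i map-new.bin
```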

Re: [ceph-users] Uploading large files to swift interface on radosgw

2013-09-19 Thread Gerd Jakobovitsch
… size = try a value of 10485760 (10M), which I think is large enough. Yehuda. On Thu, Sep 19, 2013 at 7:30 AM, Gerd Jakobovitsch wrote: Hello Yehuda, thank you for your help. On 09/17/2013 08:35 PM, Yehuda Sadeh wrote: On Tue, Sep 17, 2013 at 3:21 PM, Gerd Jakobovitsch wrote: Hi all, I am …

[ceph-users] Uploading large files to swift interface on radosgw

2013-09-17 Thread Gerd Jakobovitsch
Hi all, I am testing a ceph environment installed on debian wheezy, and when testing uploads of files larger than 1 GB I am getting errors. For files larger than 5 GB I get a "400 Bad Request EntityTooLarge" response; looking at the radosgw server, I notice that only the apache process is co…
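The 5 GB figure matches the swift protocol's single-object limit; the standard workaround is client-side segmentation. The hostname, credentials, and container name below are placeholders:

```shell
# Upload a large file in 1 GB segments so no single object exceeds 5 GB:
swift -A http://rgw.example.com/auth/v1.0 -U testuser:swift -K secretkey \
    upload -S 1073741824 mycontainer bigfile.bin
```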

[ceph-users] Issues setting up ceph object storage

2013-08-28 Thread Gerd Jakobovitsch
… the request is accepted, but no process exists; but I'm still gathering more information and double-checking the object gateway installation. Any help would be welcome. Regards, Gerd Jakobovitsch