Dear all,
I am using an erasure-coded pool, and I have reached a situation where I'm not
able to recover a PG. The OSDs that contain this PG keep crashing, with the
same behavior reported at http://tracker.ceph.com/issues/14154.
I'm using ceph 0.94.9 (it first appeared on 0.94.7; an upgrade didn't
cache tier, it could no
longer access the pool.
Robert LeBlanc
On Fri, Jan 29, 2016 at 8:47 AM, Gerd Jakobovitsch wrote:
Dear all,
I had to move the .rgw.buckets.index pool to another structure; therefore, I
created a new pool, .rgw.buckets.index.new, added the old pool as a cache
pool, and flushed the data.
Up to this point everything was OK. With radosgw -p df, I saw
the objects moving to the new pool; the moved
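A minimal sketch of that cache-tier flush sequence, assuming the pool names
above; the PG count is an arbitrary placeholder:

  ceph osd pool create .rgw.buckets.index.new 64 64
  ceph osd tier add .rgw.buckets.index.new .rgw.buckets.index   # old pool becomes a cache tier of the new one
  ceph osd tier cache-mode .rgw.buckets.index forward           # stop caching; forward client I/O to the new pool
  rados -p .rgw.buckets.index cache-flush-evict-all             # flush/evict everything down to the new pool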
Hello all,
I had a hard reset on a ceph node, and one of the OSDs is not starting
due to a leveldb error. At that moment, the node was trying to start up,
but there was no actual writing of new data:
2016-01-27 12:00:37.068431 7f367f654880 0 ceph version 0.94.5
(9764da52395923e0b32908d83a9f73
Dear all,
I have a cluster running hammer (0.94.5), with 5 nodes. The main usage
is for S3-compatible object storage.
I am running into a very troublesome problem on this cluster. A single
object in the .rgw.buckets.index pool is not responding to requests and takes
a very long time while recovering
Dear all,
I have a ceph cluster deployed on Debian; I'm trying to test ISA
erasure-coded pools, but the plugin (libec_isa.so) is not included in
the library directory.
Looking at the packages in the Debian Ceph repository, I found a "trusty"
package that includes the plugin. Is it created to use with deb
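A minimal sketch of the pool setup being tested, assuming the plugin is in
place (profile name and k/m values here are only examples):

  ceph osd erasure-code-profile set isa-k4m2 plugin=isa k=4 m=2
  ceph osd pool create ecpool-isa 128 128 erasure isa-k4m2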
On Sat, Aug 29, 2015 at 11:50 AM, Gerd Jakobovitsch wrote:
Dear all,
During a cluster reconfiguration (change of crush tunables from legacy
to TUNABLES2) with large data movement, several OSDs got overloaded
and had to be restarted; when the OSDs stabilized, I got a number of PGs
marked stale, even when all OSDs where this data used to be located show
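A minimal set of commands for inspecting that state (the PG id is a placeholder):

  ceph osd crush show-tunables       # confirm which tunables are now active
  ceph pg dump_stuck stale           # list the PGs stuck in stale
  ceph pg 4.3fa query                # inspect one of the stale PGs in detail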
regarding memory on ceph osd?
But I still get the problem of the incomplete+inactive PG.
Regards.
Gerd
On 12-08-2015 10:11, Gerd Jakobovitsch wrote:
I tried it; the error propagates to whichever OSD gets the errored PG.
For the moment, this is my worst problem: I have one PG
incomplete+inactive
make the cluster healthy first?
On Wed, Aug 12, 2015 at 1:31 AM, Gerd Jakobovitsch wrote:
Dear all,
I run a ceph system with 4 nodes and ~80 OSDs using xfs, currently at
75% usage, running firefly. On Friday I upgraded it from 0.80.8 to
0.80.10, and since then I have had several OSDs crashing and never
recovering: trying to start them ends up crashing as follows.
Is this problem known
Dear all,
I have run into an unrecoverable crash at one specific OSD, every time I try
to restart it. It happened first on firefly 0.80.8; I updated to 0.80.10,
but it continued to happen.
Due to this failure, I have several PGs down+peering that won't recover
even after marking the OSD out.
Could som
SH to find a valid
mapping but will make less data move."
Is there any suggestion for handling it? Do I have to set chooseleaf_vary_r to
some other value? Will I lose communication with my rbd clients? Or
should I return to legacy tunables?
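A minimal sketch of how such a tunable change could be applied offline, assuming
a value of 4 (paths and value are placeholders):

  ceph osd getcrushmap -o /tmp/crush
  crushtool -i /tmp/crush --set-chooseleaf-vary-r 4 -o /tmp/crush.new
  ceph osd setcrushmap -i /tmp/crush.new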
Regards,
Gerd Ja
size =
try a value of 10485760 (10M), which I think is large enough.
Yehuda
On Thu, Sep 19, 2013 at 7:30 AM, Gerd Jakobovitsch wrote:
Hello Yehuda, thank you for your help.
On 09/17/2013 08:35 PM, Yehuda Sadeh wrote:
On Tue, Sep 17, 2013 at 3:21 PM, Gerd Jakobovitsch wrote:
Hi all,
I am testing a ceph environment installed on Debian wheezy, and when
testing uploads of files larger than 1 GB, I am getting errors. For files
larger than 5 GB, I get a "400 Bad Request EntityTooLarge" response;
looking at the radosgw server, I notice that only the apache process is
co
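For objects above the 5 GB single-PUT limit of the S3 API, a multipart upload
is needed in any case; a minimal sketch with s3cmd, assuming a version with
multipart support (bucket name and chunk size are just examples):

  s3cmd put --multipart-chunk-size-mb=100 bigfile.img s3://testbucket/bigfile.img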
the request is accepted, but no process
exists; I'm still gathering more information and double-checking the
object gateway installation.
Any help would be welcome.
Regards
Gerd Jakobovitsch