Hi,
On starting the OSD it's failing:
$ /etc/init.d/ceph start osd.119
with the following errors in the log:
$ tail -f /var/lib/ceph/osd/ceph-119/ceph-osd.119.log |grep -i err
-1/-1 (stderr threshold)
2015-05-03 11:38:44.366984 7f0794e5b780 -1 journal
_check_disk_write_cache: fclose error: (61) No data available
2015-05-0
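For what it's worth, _check_disk_write_cache is the journal probing the drive's
write cache at startup (if memory serves it shells out to hdparm, which would
explain the fclose). A first thing worth checking, assuming the journal sits on
a raw device such as /dev/sdX (substitute the real device, this is only a
sketch), is the cache state and the kernel log around the failure:

# hdparm -W /dev/sdX     # reports whether the drive's write cache is enabled
# dmesg | tail -n 50     # look for I/O errors on the journal device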
Hi all, I need some help getting my 0.87.1 cluster back into a healthy
state...
Overnight, a deep scrub detected an inconsistent object in one PG. Ceph health
detail said the following:
# ceph health detail
HEALTH_ERR 1 pgs inconsistent; 2 scrub errors
pg 2.3b is active+clean+inconsistent, acting [1
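Since osd.1 appears first in the acting set it is the primary for 2.3b, so a
typical next step (a sketch, the log path assumes the default location) is to
find the failing object in that OSD's log and only then run a repair; note
that on 0.87-era releases 'pg repair' tends to push the primary's copy over
the replicas, so it is worth knowing which copy is bad first:

# grep -i 'ERR' /var/log/ceph/ceph-osd.1.log   # which object failed the deep scrub?
# ceph pg repair 2.3b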
Thanks, Greg. Following your lead, we discovered the proper 'set_choose_tries
xxx' value had not been applied to *this* pool's rule, and we updated the
cluster accordingly. We then moved a random OSD out and back in to 'kick'
things, but no joy: we still have the 4 'remapped' PGs. BTW: the 4 P
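For anyone following along, the usual way to apply set_choose_tries to a rule
(a sketch; the value 100 below is only a placeholder for the 'xxx' above, and
the rule has to be the one the pool actually uses) is to decompile, edit and
re-inject the CRUSH map:

$ ceph osd getcrushmap -o crush.bin
$ crushtool -d crush.bin -o crush.txt
    (in crush.txt, inside the pool's rule, before the choose/chooseleaf steps:
         step set_choose_tries 100)
$ crushtool -c crush.txt -o crush.new
$ ceph osd setcrushmap -i crush.new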
Okay I have now ended up returning the cluster into a healthy state but
instead using the version of the object from OSDs 0 and 2 rather than
OSD 1. I set the "noout" flag, and shut down OSD 1. That appears to have
resulted in the cluster being happy to use the version of the object
that was pr
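In command form that amounts to something like the following (a sketch;
bringing the OSD back and clearing the flag afterwards is the usual
counterpart, not quoted from the message above):

$ ceph osd set noout            # keep the monitors from marking osd.1 out and rebalancing
$ /etc/init.d/ceph stop osd.1
    ... let the PG recover using the copies on osd.0 and osd.2 ...
$ /etc/init.d/ceph start osd.1
$ ceph osd unset noout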
On 04/05/15 05:42, Venkateswara Rao Jujjuri wrote:
Here is the output... I am still stuck at this step. :(
(multiple times tried to by purging and restarting from scratch)
vjujjuri@rgulistan-wsl10:~/ceph-cluster$ ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at
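When create-initial hangs at this point, the usual suspects are hostname or
ceph.conf mismatches; a quick sanity check from the ~/ceph-cluster directory
(a sketch, assuming the node name really is rgulistan-wsl10) would be:

$ hostname -s      # must match the name listed in mon_initial_members
$ grep -E 'mon_initial_members|mon_host|public_network' ceph.conf
$ ceph-deploy mon create-initial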
Hi,
we began testing one Btrfs OSD volume last week and for this first test
we disabled autodefrag and began to launch manual btrfs fi defrag.
During the tests, I monitored the number of extents of the journal
(10GB) and it went through the roof (it currently sits at 8000+ extents
for example).
I
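For reference, a typical way to take that measurement and run the manual pass
(a sketch; ceph-N and the file journal path are assumptions, the actual setup
may use different paths or tools):

$ filefrag /var/lib/ceph/osd/ceph-N/journal          # prints the current extent count
$ btrfs filesystem defragment -v /var/lib/ceph/osd/ceph-N/journal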
On Mon, 4 May 2015, Lionel Bouton wrote:
> Hi,
>
> we began testing one Btrfs OSD volume last week and for this first test
> we disabled autodefrag and began to launch manual btrfs fi defrag.
>
> During the tests, I monitored the number of extents of the journal
> (10GB) and it went through the r
On 05/04/15 01:34, Sage Weil wrote:
> On Mon, 4 May 2015, Lionel Bouton wrote:
>> Hi, we began testing one Btrfs OSD volume last week and for this
>> first test we disabled autodefrag and began to launch manual btrfs fi
>> defrag. During the tests, I monitored the number of extents of the
>> journa
Hi, geeks:
I have a ceph cluster for rgw service in production, which was set up
according to the simple configuration tutorial, with only one default
region and one default zone. Even worse, I enabled neither the meta
logging nor the data logging. Now I want to add a slave zone to the rgw fo
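For reference, in the pre-Jewel federated setup those logging flags live in
the region map, so enabling them on the existing master zone looks roughly
like the sketch below (the file name and exact flow are assumptions drawn
from the federated configuration docs of that era):

$ radosgw-admin region get > region.json
    (edit region.json: set "log_meta": "true" and "log_data": "true" on the master zone entry)
$ radosgw-admin region set < region.json
$ radosgw-admin regionmap update

then restart the radosgw instances so they pick up the updated region map.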
Hi
Thanks Sage, I got it working now. Everything else seems to be OK, except
the MDS is reporting "mds cluster is degraded"; I'm not sure what could be
wrong. The MDS is running, all OSDs are up, and the PGs are active+clean
and active+clean+replay.
Had to delete some empty pools which were created while the
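"mds cluster is degraded" usually just means the MDS rank has not reached the
active state yet (for example it is still in replay or rejoin), so a first
step worth trying (a sketch) is to watch its state until it goes active:

$ ceph mds stat
$ ceph mds dump | grep -i state      # on 0.87-era releases; newer ones use 'ceph fs dump'
$ ceph -w                            # watch until the rank reports active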