Hello,
Brian already mentioned a number of very pertinent things; I've got a few
more:
On Tue, 05 Apr 2016 10:48:49 -0400 d...@integrityhost.com wrote:
> In a 12 OSD setup, the following config is there:
>
> Total PGs = (OSDs * 100) / pool size
>
The PGcal
Hello,
On Wed, 6 Apr 2016 04:18:40 +0100 (BST) Andrei Mikhailovsky wrote:
> Hi
>
> I've just had a warning (from ceph -s) that one of the osds is near
> full. Having investigated the warning, I've located that osd.6 is 86%
> full. The data distribution is nowhere near to being equal on my osd
Hi
I've just had a warning (from ceph -s) that one of the osds is near full.
Having investigated the warning, I've located that osd.6 is 86% full. The data
distribution is nowhere near to being equal on my osds, as you can see from the
df command output below:
/dev/sdj1 2.8T 2.4T 413G 86% /v
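A rough sketch of how that kind of skew is usually inspected and nudged back
into shape (the osd id 6 and the 0.85 reweight below are only example values,
not recommendations):
  # Per-OSD utilization and PG counts, which makes skew like osd.6 easy to spot:
  ceph osd df
  # Lower the CRUSH reweight of the over-full OSD slightly so some PGs move elsewhere:
  ceph osd reweight 6 0.85
  # Or let ceph pick the overloaded OSDs automatically:
  ceph osd reweight-by-utilization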
Hi, as I understand it, a ceph rbd image will be divided into multiple
objects based on the LBA address.
My question here is:
if two clients write to the same LBA address, for example client A writes ""
to LBA 0x123456 and client B writes "" to the same LBA.
LBA address and data will only be in an ob
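For reference, a rough way to check how an image offset maps to a backing
object ("rbd/myimage" is a placeholder name; with the default order of 22 each
object covers 4 MB, so an offset like 0x123456 falls into object 0):
  # Object size is 2^order bytes; look for the "order" line in the output:
  rbd info rbd/myimage | grep order
  # With the default striping, an offset lands in object floor(offset / object_size):
  echo $(( 0x123456 / (4 * 1024 * 1024) ))    # -> 0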
I was experimenting with using bluestore OSDs and appear to have found a fairly
consistent way to crash them…
Changing the number of copies in a pool down from 3 to 1 has now twice caused
the mass panic of a whole pool of OSDs. In one case it was a cache tier, in
another case it was just a poo
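For reference, the command sequence the report describes would look roughly
like this ("testpool" is a placeholder pool name; this is only a sketch of the
reported trigger, not something to run on a production cluster):
  ceph osd pool get testpool size      # e.g. 3
  ceph osd pool set testpool size 1    # the step reported to precede the crashes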
Hi Dan
You can increase, but not decrease. I would go with 512 for this - that
will allow you to increase in the future.
from ceph.com "Having 512 or 4096 Placement Groups is roughly
equivalent in a cluster with less than 50 OSDs "
I don't even think you will be able to set pg_num to 4096 - ceph
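The increase itself would look roughly like this (pool name "rbd" is only an
example; pgp_num has to follow pg_num before data actually rebalances):
  ceph osd pool get rbd pg_num
  ceph osd pool set rbd pg_num 512
  ceph osd pool set rbd pgp_num 512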
On Mon, Apr 4, 2016 at 9:55 AM, Gregory Farnum wrote:
> Deletes are just slow right now. You can look at the ops in flight on your
> client or MDS admin socket to see how far along it is and watch them to see
> how long stuff is taking -- I think it's a sync disk commit for each unlink
> though so
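The admin socket check Greg mentions would look roughly like this on the MDS
host (the mds name "a" is a placeholder; whether the command is available
depends on the running version):
  ceph daemon mds.a dump_ops_in_flight
  # or, going through the socket path directly:
  ceph --admin-daemon /var/run/ceph/ceph-mds.a.asok dump_ops_in_flight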
In a 12 OSD setup, the following config is there:
Total PGs = (OSDs * 100) / pool size
So with 12 OSDs and a pool size of 2 replicas, this would give Total PGs
of 600, as per this url:
http://docs.ceph.com/docs/master/rados/operations/placement-groups/#prese
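Worked through, that is simply (the rounding to a nearby power of two is what
leads to the 512/1024 figures mentioned elsewhere in the thread):
  echo $(( (12 * 100) / 2 ))    # -> 600, then round to a nearby power of two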
For future reference,
You can reset your keyring's permissions using a keyring located on the
monitors at /var/lib/ceph/mon/your-mon/keyring. Specify the -k option
for the ceph command with the full path to that keyring, and you can
correct this without having to restart the cluster a couple of times.
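A sketch of what that can look like, run on a monitor host (the monitor name
"mon1" is a placeholder, and the caps shown are only the usual client.admin
defaults; adjust them for your release):
  ceph -n mon. -k /var/lib/ceph/mon/ceph-mon1/keyring \
      auth caps client.admin mon 'allow *' osd 'allow *' mds 'allow'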
Hi John,
Thanks for your reply. The kernel version we are currently using is
3.10.0-229.20.1.el7.x86_64 and the ceph version is 0.94.5. Please advise
which version is best.
Regards
Prabu GJ
On Tue, 05 Apr 2016 16:43:27 +0530 John Spray wrote:
Usually we see th
Usually we see those warnings from older clients which have some bugs.
You should use the most recent client version you can (or the most
recent kernel you can, if it's the kernel client).
John
On Tue, Apr 5, 2016 at 7:00 AM, gjprabu wrote:
> Hi ,
>
> We have configured ceph rbd with ceph
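A quick way to check what is actually in use before deciding on an upgrade:
  uname -r         # kernel (relevant for the kernel rbd/cephfs client)
  ceph --version   # userspace ceph packages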
Thanks!
Now this slow rm is not our biggest issue anymore (for the moment)...
Since last night all our MDSs have crashed.
I opened a ticket http://tracker.ceph.com/issues/15379 with the
stacktraces I got.
We aren't able to restart the MDSs at all for now.
I stopped the rm, and also increased the