Hi,
I have also posted on the OpenNebula community forum
(https://forum.opennebula.org/t/changing-ceph-monitors-for-running-vms/1266).
Does anyone have any experience of changing the monitors in their Ceph cluster
whilst running OpenNebula VMs?
We have recently bought new hardware to replace ou
This now appears to have partially fixed itself. I am now able to run commands
on the cluster, though one of the monitors is down. I still have no idea what
was going on.
George
From: george.ry...@stfc.ac.uk [mailto:george.ry...@stfc.ac.uk]
Sent: 16 July 2014 13:59
To: ceph-users@lists.ceph.co
On Friday I managed to run a command I probably shouldn't and knock half our
OSDs offline. By setting the noout and nodown flags and bringing up the OSDs on
the boxes that don't also have mons running on them I got most of the cluster
back up by today (it took me a while to discover the nodown f
Last week I decided to take a look at the 'osd pool set-quota' option.
I have a directory in CephFS that uses a pool called pool-2 (configured by
following this:
http://www.sebastien-han.fr/blog/2013/02/11/mount-a-specific-pool-with-cephfs/).
I have a directory in that filled with cat picture
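The quota option mentioned above takes a pool name and a limit. A quick sketch; the pool name matches the one in the message, but the limit values are just placeholders:

```shell
# Cap pool-2 at 10,000 objects (value is illustrative)
ceph osd pool set-quota pool-2 max_objects 10000

# Cap pool-2 at roughly 10 GB of data
ceph osd pool set-quota pool-2 max_bytes 10737418240

# Remove a quota by setting it back to 0
ceph osd pool set-quota pool-2 max_objects 0
```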
Thanks. I hadn’t actually found ‘ceph df’. It probably just needs a brief
description of what the raw totals include.
One question relating to this: the documentation you’ve linked to suggests that
the pool usage stats are converted to megabytes and gigabytes where relevant;
are they also conv
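For anyone following the thread, 'ceph df' reports both cluster-wide raw usage and per-pool usage, and has a more verbose variant:

```shell
# Cluster-wide raw usage plus per-pool summary
ceph df

# Extended per-pool statistics
ceph df detail
```

The GLOBAL section reports raw capacity across all OSDs (so it includes replication overhead), while the POOLS section reports logical usage per pool.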
Having looked at a sample of OSDs, it appears that for every GB of data we have
9 GB of journal. Is this normal? Or is there some journal/cluster management
that we should be doing but aren't?
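For context, the journal size is set per OSD in ceph.conf; this is a sketch only, and the 5 GB value below is an illustrative figure, not a recommendation for this cluster:

```ini
[osd]
; Journal size in MB. A common rule of thumb:
; 2 * (expected throughput in MB/s * filestore max sync interval in s)
osd journal size = 5120
```

Changing this only affects journals created afterwards; an existing OSD's journal has to be flushed and recreated for a new size to take effect.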
George
From: Gregory Farnum [mailto:g...@inktank.com]
Sent: 19 June 2014 13:53
To: R
Hi,
I've come across an error in the Ceph documentation; what's the proper way for
me to report it so that it gets fixed?
(on
http://ceph.com/docs/master/rados/operations/pools/#set-the-number-of-object-replicas
"ceph osd pool set-quota {pool-name} [max-objects {obj-count}] [max_bytes
{bytes}]
Hi all,
I'm struggling to understand some Ceph usage statistics and I was hoping
someone might be able to explain them to me.
If I run 'rados df' I get the following:
# rados df
pool name category KB objects clones
degraded unfound rd rd K