Hi All,
Can anyone let me know how to accomplish this?
Thanks
Kumar
From: Gnan Kumar, Yalla
Sent: Friday, March 21, 2014 5:04 PM
To: 'ceph-users@lists.ceph.com'
Subject: Remove ceph
Hi All,
I have a Ceph cluster with four nodes, including the admin node. I have
integrated it with OpenStack
Hi sage,
I have run the repair command, and the warning disappears from the output of
"ceph health detail", but the replica isn't recovered in the "current"
directory.
In short, the cluster status recovers (the pg's status goes from
inconsistent back to active+clean), but not the re
Hi Hong,
Could you apply the patch and see if it crashes after sleep?
This could lead us to the correct fix for the MDS/client too.
From what I can see here, this patch should fix the crash, but how do we fix
the MDS if the crash happens?
It happened to us: when it crashed, it was a total crash, and even r
When you do
ceph pg scrub <pgid>
it will notice the missing object (you should see it go by with ceph -w or
the message in /var/log/ceph/ceph.log on a monitor node), and the PG will
get an 'inconsistent' flag set. To trigger repair, you need to do
ceph pg repair <pgid>
sage
On Mon, 24 Mar 2014, ljm李
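(Not from the thread itself: a minimal Python sketch of that scrub-then-repair
sequence driven through the ceph CLI. The pgid placeholder, the sleep interval
and the subprocess wrapper are my own assumptions, not anything posted above.)

import subprocess
import time

def ceph(*args):
    # thin wrapper around the ceph CLI; assumes an admin keyring on this host
    return subprocess.check_output(('ceph',) + args).decode()

pgid = '0.6'  # placeholder -- use the pg that holds the damaged object

ceph('pg', 'scrub', pgid)        # scrub notices the missing/bad object
time.sleep(30)                   # give the scrub some time to finish
if 'inconsistent' in ceph('health', 'detail'):
    ceph('pg', 'repair', pgid)   # repair rewrites the bad replica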
Hi Kyle,
Thank you very much for your explanation. I have triggered the relevant pg to
scrub, but the secondary replica which I removed manually isn't recovered;
it only shows "instructing pg xx.xxx on osd.x to scrub".
PS: I used ceph-deploy to deploy the cluster, and the ceph.conf is
Hi,
After looking at the code in ceph-disk I came to the same conclusion: the
problem is with the mapping.
Here is a quote from ceph-disk:
def get_partition_dev(dev, pnum):
    """
    get the device name for a partition
    assume that partitions are named like the base dev, with a number, and
    optiona
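(My own sketch, not the real ceph-disk code: a minimal reimplementation of the
naming convention that docstring describes - base device plus partition number,
optionally with a 'p' separator. The name guess_partition_dev and the fallback
order are assumptions.)

import os

def guess_partition_dev(dev, pnum):
    # Hypothetical helper illustrating the convention, not the quoted
    # ceph-disk function: try "<dev><n>" first, then "<dev>p<n>".
    for path in (dev + str(pnum), dev + 'p' + str(pnum)):
        if os.path.exists(path):
            return path
    raise RuntimeError('cannot find partition %d of %s' % (pnum, dev))

# e.g. guess_partition_dev('/dev/sda', 1)        -> '/dev/sda1'
#      guess_partition_dev('/dev/cciss/c0d0', 1) -> '/dev/cciss/c0d0p1'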
Hi list,
I'm new to Ceph, so I installed a four-node cluster for testing purposes.
Each node has two 6-core Sandy Bridge Xeons, 64 GiB of RAM, six 15k rpm
SAS drives, one SSD for journals, and 10G Ethernet.
We're using Debian GNU/Linux 7.4 (Wheezy) with kernel 3.13 from Debian
backport
Hi,
I can see ~17% hardware interrupts, which I find a little high - can you
make sure the load is spread over all your cores (/proc/interrupts)?
What about disk utilization once you restart them? Are the disks all 100%
utilized, or is it 'only' mostly CPU-bound?
Also you're running a monitor on this node - h
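(Again not part of the thread: a rough sketch, assuming the usual
/proc/interrupts layout, of how one could total the per-CPU interrupt counts
to see whether IRQ load is spread over all cores as suggested above.)

def per_cpu_irq_totals(path='/proc/interrupts'):
    with open(path) as f:
        cpus = f.readline().split()              # header row: CPU0 CPU1 ...
        totals = [0] * len(cpus)
        for line in f:
            fields = line.split()
            if not fields or not fields[0].endswith(':'):
                continue                         # skip anything that isn't an IRQ row
            for i, val in enumerate(fields[1:1 + len(cpus)]):
                if val.isdigit():                # trailing text isn't a counter
                    totals[i] += int(val)
    return list(zip(cpus, totals))

if __name__ == '__main__':
    for cpu, total in per_cpu_irq_totals():
        print('%s: %d' % (cpu, total))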