Hi, guys,
My cluster faced a network problem, which caused some errors. After solving
the network problem, the latency of some OSDs on one node was high;
checking with ceph osd perf, it came to 3000+.
So I deleted this OSD from the cluster, keeping the OSD data device.
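For reference, a minimal sketch of the latency check and the manual removal
sequence from the Ceph docs of that era; osd.12 is a hypothetical id:

  # show per-OSD commit/apply latency in ms (the values seen here were 3000+)
  ceph osd perf
  # take the slow OSD out and remove it, leaving its data device untouched
  ceph osd out osd.12
  service ceph stop osd.12       # init style varies by distro
  ceph osd crush remove osd.12
  ceph auth del osd.12
  ceph osd rm osd.12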
After recovery and backfill, I then faced the problem descri
Sent from my iPhone
On 1 Feb 2016, at 1:26 AM, hnuzhoulin wrote:
I just faced the same problem.
The problem was that my cluster was missing the asok files of the mons,
although the cluster worked well.
Killing the mon process and restarting it may fix it (using the service
command to restart the mon daemon may not work).
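A minimal sketch of that check-and-restart sequence, assuming default socket
paths and sysvinit-style service management:

  ls /var/run/ceph/ceph-mon.*.asok        # check whether the mon admin sockets exist
  pkill ceph-mon                          # kill the mon process directly
  service ceph start mon.$(hostname -s)   # start it again; the asok is recreated on startup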
On Su
You can read the Diamond collector file for Ceph,
/usr/share/diamond/collectors/ceph/ceph.py. In the function
"_collect_cluster_stats", it publishes pool stat info only when it finds the
leader mon.
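To see which mon is currently the quorum leader (and therefore the node that
publishes the pool stats), one way is:

  # quorum_leader_name appears in the JSON quorum status
  ceph quorum_status -f json-pretty | grep quorum_leader_name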
On Sat, 09 Jan 2016 00:08:08 +0800, hnuzhoulin wrote:
Yeah, this setting cannot be seen in a
On Sun, 31 Jan 2016 10:35:25 +0800, Daniel Rolfe wrote:
Seem
Maybe the blog below can help you:
http://cephnotes.ksperis.com/blog/2014/06/29/ceph-journal-migration/
On Wed, 13 Jan 2016 11:06:33 +0800, 小科 <1103262...@qq.com> wrote:
When my journal disk doesn't have enough space, I want to change to another
disk which has enough space to hold the journal.
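A minimal sketch of one common journal-migration sequence; osd.12 and
/dev/sdk1 are hypothetical, and the data path assumes default locations:

  service ceph stop osd.12
  ceph-osd -i 12 --flush-journal       # write out everything still in the old journal
  rm /var/lib/ceph/osd/ceph-12/journal
  ln -s /dev/sdk1 /var/lib/ceph/osd/ceph-12/journal   # point at the new, larger device
  ceph-osd -i 12 --mkjournal           # initialize the journal on the new device
  service ceph start osd.12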
Hi, guys.
Right now, I face a problem in my OpenStack + Ceph setup.
Some VMs cannot start, and some hit a blue screen.
The output of ceph -s says the cluster is OK.
So I used the following command to check the volumes first:
rbd ls -p volumes | while read line; do rbd info "$line" -p volumes; done
Then quickly I ge
fixed if it's easier.
Nick
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Wade Holler
Sent: 08 January 2016 16:14
To: hnuzhoulin ; ceph-de...@vger.kernel.org
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] using cache-tier with writeback
Yeah, this setting cannot be seen in the asok config.
You just set it in ceph.conf and restart the mon and osd services (sorry, I
forget whether these restarts are necessary).
What I use this config for is when I have changed the crushmap manually and
do not want the service init script to rebuild the crushmap in its default way.
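The thread does not name the setting; an option that matches this description
(stopping the init script from re-placing OSDs in the CRUSH map at startup) is
osd crush update on start. A hedged ceph.conf sketch:

  [osd]
  # assumption: this is the setting being discussed; restart the osds after changing it
  osd crush update on start = false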
Hi, guys.
Recently, I have been testing a cache tier in writeback mode, but I found a
strange thing: the performance measured with rados bench degrades. Is that
expected? If so, how can it be explained? Following is some info about my test:
Storage nodes: 4 machines, each with two INTEL SSDSC2BB120G4 drives (one for
the system, the other one used a
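For context, a minimal sketch of a writeback cache-tier setup and the
benchmark run, using hypothetical pool names base and cache:

  ceph osd tier add base cache
  ceph osd tier cache-mode cache writeback
  ceph osd tier set-overlay base cache   # route client I/O through the cache pool
  rados bench -p base 60 write           # 60-second write test against the tiered pool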
Hi, guys.
I am using the geo-replication feature of Ceph.
I have two Ceph clusters, so my plan is one region containing two zones.
The Ceph version is 0.72.1 (4d923861868f6a15dcb33fef7f50f674997322de).
Now I can sync users and buckets from the master zone to the slave zone.
But the obje
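For reference, object sync between zones in that release was driven by
radosgw-agent; a hedged sketch of running it (the config path and flag are
from the federated-sync docs of that era and may not match your setup):

  radosgw-agent -c /etc/ceph/radosgw-agent/default.conf --sync-scope=full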