Re: [ceph-users] Trying to rebuild cephfs and mds's

2014-12-08 Thread
Glen, you should create two new pools, then switch the MDS data pool to the newly created pool, then delete the old pool. From: ceph-users Date: 2014-12-09 00:38 To: Glen Aidukas; 'ceph-users@lists.ceph.com'
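A minimal sketch of that pool swap (pool names and pg counts are placeholders; assumes the giant-era ceph fs rm / ceph fs new syntax):

    # create replacement data and metadata pools
    ceph osd pool create newdata 128
    ceph osd pool create newmetadata 128
    # recreate the filesystem on the new pools (stop the MDSs and remove the old fs first)
    ceph fs rm cephfs --yes-i-really-mean-it
    ceph fs new cephfs newmetadata newdata
    # drop the old pools once nothing references them
    ceph osd pool delete data data --yes-i-really-really-mean-it
    ceph osd pool delete metadata metadata --yes-i-really-really-mean-it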

Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

2014-11-10 Thread
Haomai Wang, do you have any progress on this performance issue? From: Haomai Wang Date: 2014-10-31 10:05 To: 廖建锋 CC: ceph-users; ceph-users

[ceph-users] Re: half performance with keyvalue backend in 0.87

2014-10-31 Thread
Looks like the write performance of the keyvalue backend is worse than the filestore backend with version 0.87 on my current cluster; the write speed is only 1.5 MB/s - 4.5 MB/s. From: ceph-users Date: 2014-10-31 08:23 To: ceph-users
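One way to quantify the gap is a small-object write benchmark against a pool on each backend; a sketch with rados bench (pool name, block size, and thread count are illustrative):

    # 60-second 4 KB write test with 16 concurrent ops
    rados bench -p testpool 60 write -b 4096 -t 16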

Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

2014-10-30 Thread
I am not sure if it is sequential or random; I just use rsync to copy millions of small picture files from our PC server to the Ceph cluster. From: Haomai Wang Date: 2014-10-31 09:59 To: 廖建锋 CC: ceph-users
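For context, a sketch of the kind of copy being described, assuming a kernel-client CephFS mount (mount point and paths are illustrative):

    # mount the filesystem via the kernel client
    mount -t ceph 10.1.0.213:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    # archive-mode copy of millions of small files
    rsync -a /data/pics/ /mnt/cephfs/pics/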

Re: [ceph-users] Re: half performance with keyvalue backend in 0.87

2014-10-30 Thread
Date: 2014-10-31 09:40 To: 廖建锋 CC: ceph-users; ceph-users Subject: Re: [ceph-users] Re: half performance with keyvalue backend in 0.87 Yes, a persistence problem exists in 0.80.6 and we fixed

[ceph-users] Re: half performance with keyvalue backend in 0.87

2014-10-30 Thread
Also found another problem: the ceph OSD directory has millions of small files, which will cause performance issues.
1008 => # pwd
/var/lib/ceph/osd/ceph-8/current
1007 => # ls | wc -l
21451
From: ceph-users Date: 2014-10-31 08:23 To: ceph-users
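If these OSDs run filestore, the file count at which a PG directory splits is tunable; a hedged ceph.conf sketch (values are illustrative; a directory splits at roughly filestore_split_multiple * abs(filestore_merge_threshold) * 16 files):

    [osd]
    filestore merge threshold = 40
    filestore split multiple = 8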

[ceph-users] half performance with keyvalue backend in 0.87

2014-10-30 Thread
Dear Ceph, I used the keyvalue backend in 0.80.6 and 0.80.7; the average speed when rsyncing millions of small files was 10 MB/second. When I upgraded to 0.87 (giant), the speed slowed down to 5 MB/second. I don't know why; is there any tuning option for this? Will the superblock cause this performance

[ceph-users] where to download 0.87 RPMS?

2014-10-29 Thread

Re: [ceph-users] get/put files with radosgw once MDS crash

2014-10-26 Thread
Does Ceph have a schedule for this? From: Craig Lewis Date: 2014-10-25 05:35 To: 廖建锋 CC: ceph-users Subject: Re: [ceph-users] get/put files with radosgw once MDS crash No, MDS and RadosGW s

Re: [ceph-users] Continuous OSD crash with kv backend (firefly)

2014-10-26 Thread
I reported that problem a couple of weeks ago. From: ceph-users Date: 2014-10-26 17:46 To: Haomai Wang CC: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Continuous OSD crash with kv backe

[ceph-users] get/put files with radosgw once MDS crash

2014-10-24 Thread
Dear cepher, today I use MDS to put/get files from the Ceph storage cluster, as it is very easy to use for every side of a company. But the Ceph MDS is not very stable. So my question: is it possible to get the file names and contents from the OSDs with radosgw once the MDS crashes, and how?
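A hedged sketch of reading raw CephFS objects straight out of RADOS (assumes the default data pool is named "data"; CephFS stores file bodies as objects named inode-in-hex.offset, while file names live only in MDS metadata, so contents are recoverable this way but names are not):

    # list objects in the CephFS data pool
    rados -p data ls | head
    # fetch one object's contents into a local file
    rados -p data get 10000000001.00000000 recovered_chunk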

[ceph-users] Re: Re: scrub error with keyvalue backend

2014-10-10 Thread
I like the keyvalue backend very much because it has good performance. My request is simple: keep it running. Now I have another bug, which was fixed in 0.85:
2014-10-11 08:42:01.165836 7f8e3abb2700 1 heartbeat_map is_healthy 'KeyValueStore::op_tp thread 0x7f8e644a1700' had timed out after 60
2014-10-

[ceph-users] Re: scrub error with keyvalue backend

2014-10-10 Thread
Is there anybody who can help? From: ceph-users Date: 2014-10-10 13:34 To: ceph-users Subject: [ceph-users] scrub error with keyvalue backend Dear ceph,
# ceph -s
cluster e1f18421-5d20-4c3e-83be-a74b77468d61
health HEALTH_ERR 4

[ceph-users] scrub error with keyvalue backend

2014-10-09 Thread
Dear ceph,
# ceph -s
cluster e1f18421-5d20-4c3e-83be-a74b77468d61
health HEALTH_ERR 4 pgs inconsistent; 4 scrub errors
monmap e2: 3 mons at {storage-1-213=10.1.0.213:6789/0,storage-1-214=10.1.0.214:6789/0,storage-1-215=10.1.0.215:6789/0}, election epoch 16, quorum 0,1,2 storage-1-213,storage-1-
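A minimal sketch of locating and repairing the inconsistent placement groups (the pg id below is a placeholder):

    # show exactly which PGs are inconsistent
    ceph health detail
    # ask the primary OSD to repair one of them
    ceph pg repair 2.37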

Re: [ceph-users] ceph mds unable to start with 0.85

2014-09-18 Thread
If I turn on debug=20, the log will be more than 100 GB; there looks to be no way to upload it. Do you have any other good way to figure it out? Would you like to log into the server to check? From: Gregory Farnum Date: 2014-09-19 02:33 To: 廖建锋 CC
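A hedged alternative to a global debug=20: raise only the MDS subsystem's log level at runtime (daemon id 0 is a placeholder):

    # inject a higher mds log level into the running daemon
    ceph tell mds.0 injectargs '--debug-mds 20 --debug-ms 1'
    # drop it back once the crash has been captured
    ceph tell mds.0 injectargs '--debug-mds 1 --debug-ms 0'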

[ceph-users] ceph mds unable to start with 0.85

2014-09-17 Thread
Dear all, my Ceph cluster worked for about two weeks, but the MDS crashed every 2-3 days. Now it is stuck in replay; it looks like replay crashes and restarts the MDS process again. What can I do about this?
1015 => # ceph -s
cluster 07df7765-c2e7-44de-9bb3-0b13f6517b18
health HEALTH_ERR 56 pgs inconsistent; 56 scru

Re: [ceph-users] Why so many inconsistent errors in 0.85?

2014-09-10 Thread
The current Ceph cluster was compiled by hand, and now I have disabled scrub and deep-scrub until your new dev version is released. I hope the new version can help to scrub all the data that already shows errors. From: Haomai Wang Date: 2014-09-11 12:00 To: 廖建锋
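For reference, the cluster-wide flags that disable scrubbing look like this:

    # stop scheduling new scrubs and deep-scrubs
    ceph osd set noscrub
    ceph osd set nodeep-scrub
    # re-enable them later
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub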

Re: [ceph-users] Why so many inconsistent errors in 0.85?

2014-09-10 Thread
Haomai Wang, I already use 0.85, which is the latest version of Ceph; is there any newer version than 0.85? From: Haomai Wang Date: 2014-09-11 10:02 To: 廖建锋 CC: ceph-users Subject: Re

Re: [ceph-users] Cache Pool writing too much on ssds, poor performance?

2014-09-10 Thread
I bet he didn't set hit_set yet. From: ceph-users Date: 2014-09-11 09:00 To: Andrei Mikhailovsky; ceph-users Subject: Re: [ceph-users] Cache Pool writing too much on ssds, poor performance? Could
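A minimal sketch of the hit_set settings a cache pool needs before the tiering agent can work (pool name and values are illustrative):

    # track object hits so the agent can make flush/evict decisions
    ceph osd pool set cachepool hit_set_type bloom
    ceph osd pool set cachepool hit_set_count 1
    ceph osd pool set cachepool hit_set_period 3600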

[ceph-users] Why so many inconsistent errors in 0.85?

2014-09-10 Thread
Dear all, is this another big bug of Ceph? [inline screenshot attachment]

[ceph-users] bad performance of leveldb on 0.85

2014-09-10 Thread
Dear all, has anybody compared leveldb performance between 0.80.5 and 0.85? In my previous cluster (0.80.5), the average writing speed was 10-15 MB/s; in the current cluster (0.85), the average writing speed is 5-8 MB/s. What is going on? Will the superblock of the leveldb disk cause this?

[ceph-users] Re: Re: Re: mix ceph version with 0.80.5 and 0.85

2014-09-09 Thread
I solved it by creating a new pool and then removing the old pool. From: 廖建锋 Date: 2014-09-09 17:39 To: haomaiwang CC: ceph-users; ceph-users Subject: Re: Re: [

Re: [ceph-users] Re: mix ceph version with 0.80.5 and 0.85

2014-09-09 Thread
data' is in use by CephFS From: Haomai Wang Date: 2014-09-09 17:28 To: 廖建锋 CC: ceph-users; ceph-users Subject: Re: [ceph-users] Re: mix ceph ve

Re: [ceph-users] Re: mix ceph version with 0.80.5 and 0.85

2014-09-08 Thread
There is nothing about this on ceph.com. From: Jason King Date: 2014-09-09 11:19 To: 廖建锋 CC: ceph-users; ceph-users Subject: Re: [ceph-users] Re: mix

[ceph-users] Re: mix ceph version with 0.80.5 and 0.85

2014-09-08 Thread
Looks like it doesn't work. I noticed that 0.85 added a superblock to leveldb OSDs; the OSDs which I already have do not have a superblock. Can anybody tell me how to upgrade the OSDs? From: ceph-users Date: 2014-09-09 10:32 To: ceph-users

[ceph-users] mix ceph version with 0.80.5 and 0.85

2014-09-08 Thread
Dear all, as there are a lot of keyvalue backend bugs in the 0.80.5 firefly version, I want to upgrade to 0.85 for some OSDs which are already down and unable to start, and keep some other OSDs on 0.80.5. I am wondering, will it work? 廖建锋 Derek, Operations

[ceph-users] Re: Re: Re: ceph osd unexpected error

2014-09-07 Thread
o_op(KeyValueStore::OpSequencer*, ThreadPool::TPHandle&)+0x18a) [0x75feaa]
13: (ThreadPool::worker(ThreadPool::WorkThread*)+0x551) [0xab8561]
14: (ThreadPool::WorkThread::entry()+0x10) [0xabb5a0]
15: (()+0x79d1) [0x7f5ae78b29d1]
16: (clone()+0x6d) [0x7f5ae6842b6d]
NOTE: a copy of the execut

[ceph-users] Re: Re: ceph osd unexpected error

2014-09-06 Thread
It happened this morning and I could not wait, so I removed and re-added the OSD. Next time I will set the debug level up when it happens again. Thanks very much. From: Haomai Wang [haomaiw...@gmail.com] Date: 2014-09-07 12:08 To: 廖建锋 Cc: Somnath Roy; ceph-users; ceph-devel
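For reference, a sketch of the usual remove-and-recreate cycle for a dead OSD (osd.8 is a placeholder id):

    # take the OSD out of data placement and strip it from the cluster maps
    ceph osd out 8
    ceph osd crush remove osd.8
    ceph auth del osd.8
    ceph osd rm 8
    # then recreate it on the same disk, e.g. with ceph-disk or ceph-deploy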

[ceph-users] Re: ceph osd unexpected error

2014-09-06 Thread
I use the latest version, 0.80.6. I am setting the limit now and watching. From: Somnath Roy [somnath@sandisk.com] Date: 2014-09-07 01:12 To: Haomai Wang; 廖建锋 Cc: ceph-users; ceph-devel Subject: RE: [ceph-users] ceph osd unexpected error Have you set the
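If the limit being asked about is the per-daemon file descriptor cap (an assumption here, since the quoted question is cut off), the ceph.conf knob looks like this:

    [global]
    # fd limit the init script applies when starting each daemon
    max open files = 131072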

[ceph-users] ceph osd unexpected error

2014-09-05 Thread
Dear Ceph, urgent question: I hit a "FAILED assert(0 == "unexpected error")" yesterday, and now I have no way to start these OSDs. I have attached my logs, and some of my ceph configuration is below:
osd_pool_default_pgp_num = 300
osd_pool_default_size = 2
osd_pool_default_mi