[ceph-users] Re: How can I remove rbd0

2018-06-18 Thread
rbd unmap [dev-path] From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of xiang@sky-data.cn Sent: June 19, 2018 10:52 To: ceph-users Subject: [ceph-users] How can I remove rbd0 Hi, all! I ran into a confusing problem: [root@test]# rbd ls [root@test]# lsblk NAME MAJ:MIN RM SIZE R
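For readers hitting the same issue, a minimal sketch of the unmap step (assuming the leftover device is /dev/rbd0):

    rbd showmapped           # list kernel-mapped images and their /dev/rbdN devices
    rbd unmap /dev/rbd0      # detach the mapping; rbd0 then disappears from lsblk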

[ceph-users] Re: Question about BUG #11332

2017-12-04 Thread
in the "wait_for_active" queue if it's not. Is this right? Thanks:-) 发件人: Gregory Farnum [mailto:gfar...@redhat.com] 发送时间: 2017年12月5日 5:48 收件人: 许雪寒 抄送: ceph-users@lists.ceph.com; 陈玉鹏 主题: Re: [ceph-users] Question about BUG #11332 On Thu, Nov 23, 2017 at 1:55 AM 许雪寒 wrote: Hi,

[ceph-users] Question about BUG #11332

2017-11-23 Thread
Hi, everyone. We also encountered this problem: http://tracker.ceph.com/issues/11332. And we found that this seems to be caused by the lack of mutual exclusion between applying "trim" and handling subscriptions. Since "build_incremental" operations don't go through the "PAXOS" procedure, and

[ceph-users] Re: How to enable ceph-mgr dashboard

2017-09-05 Thread
rg1-ceph7 ceph-mgr: File "/usr/lib/python2.7/site-packages/cherrypy/process/wspbus.py", line 250, in start Sep 5 19:01:56 rg1-ceph7 ceph-mgr: raise e_info Sep 5 19:01:56 rg1-ceph7 ceph-mgr: ChannelFailures: error('No socket could be created',) -----Original Message----- From: ceph-user

[ceph-users] Re: How to enable ceph-mgr dashboard

2017-09-05 Thread
line 250, in start Sep 5 19:01:56 rg1-ceph7 ceph-mgr: raise e_info Sep 5 19:01:56 rg1-ceph7 ceph-mgr: ChannelFailures: error('No socket could be created',) What does it mean? Thank you:-) -----Original Message----- From: 许雪寒 Sent: September 4, 2017 18:15 To: 许雪寒; ceph-users@lists.ceph.com Subject: Re:

[ceph-users] Re: How to enable ceph-mgr dashboard

2017-09-04 Thread
Thanks for your quick reply:-) I checked the open ports and 7000 is not open, and all of my machines have SELinux disabled. Can there be other causes? Thanks :-) -----Original Message----- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of 许雪寒 Sent: September 4, 2017 17:38 To: ceph-users

[ceph-users] How to enable ceph-mgr dashboard

2017-09-04 Thread
Hi, everyone. I’m trying to enable the mgr dashboard on Luminous. However, when I modified the configuration and restarted ceph-mgr, the following error came up: Sep 4 17:33:06 rg1-ceph7 ceph-mgr: 2017-09-04 17:33:06.495563 7fc49b3fc700 -1 mgr handle_signal *** Got signal Terminated *** Sep 4 17:33
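For reference, a rough sketch of enabling the Luminous dashboard; the bind address and port below are placeholders, and on some Luminous point releases the config-key verb is "put" rather than "set". The CherryPy "No socket could be created" error generally means the configured address/port could not be bound.

    ceph mgr module enable dashboard
    ceph config-key set mgr/dashboard/server_addr 0.0.0.0    # address to bind (placeholder)
    ceph config-key set mgr/dashboard/server_port 7000       # 7000 is the Luminous default
    systemctl restart ceph-mgr.target                        # restart the active mgr to apply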

[ceph-users] How to remove a cache tier?

2017-07-20 Thread
Hi, everyone. We are trying to remove a cache tier from one of our clusters. However, when we try to issue the command "ceph osd tier cache-mode {cachepool} forward", which is recommended in Ceph's documentation, it prompted "'forward' is not a well-supported cache mode and may corrupt your data. p
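For context, the documented sequence for detaching a writeback cache tier looks roughly like this (pool names are placeholders; the warning about forward mode has to be acknowledged explicitly):

    ceph osd tier cache-mode cachepool forward --yes-i-really-mean-it
    rados -p cachepool cache-flush-evict-all      # flush and evict remaining objects
    ceph osd tier remove-overlay basepool         # stop redirecting client I/O to the cache
    ceph osd tier remove basepool cachepool       # detach the cache pool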

[ceph-users] Re: Re: How's cephfs going?

2017-07-19 Thread
Hi, sir, thanks for your sharing. May I ask how many users you have on cephfs? And how much data does the cephfs store? Thanks:-) -----Original Message----- From: Blair Bethwaite [mailto:blair.bethwa...@gmail.com] Sent: July 17, 2017 11:51 To: 许雪寒 Cc: ceph-users@lists.ceph.com Subject: Re: Re: [ceph-users

[ceph-users] Re: How's cephfs going?

2017-07-19 Thread
I got it, thank you☺ From: Дмитрий Глушенок [mailto:gl...@jet.msk.su] Sent: July 19, 2017 18:20 To: 许雪寒 Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] How's cephfs going? You're right. Forgot to mention that the client was using kernel 4.9.9. On July 19, 2017, at 12:36, 许雪寒 wrote

[ceph-users] Re: How's cephfs going?

2017-07-19 Thread
Hi, thanks for your sharing:-) So I guess you have not put cephfs into a real production environment, and it's still in the test phase, right? Thanks again:-) From: Дмитрий Глушенок [mailto:gl...@jet.msk.su] Sent: July 19, 2017 17:33 To: 许雪寒 Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users]

[ceph-users] Re: How's cephfs going?

2017-07-18 Thread
Is there anyone else willing to share some usage information about cephfs? Could the developers tell us whether cephfs is a major effort in overall Ceph development? From: 许雪寒 Sent: July 17, 2017 11:00 To: ceph-users@lists.ceph.com Subject: How's cephfs going? Hi, everyone. We intend to use cephfs of

[ceph-users] Re: How's cephfs going?

2017-07-17 Thread
Sent: July 18, 2017 8:01 To: 许雪寒 Cc: ceph-users Subject: Re: [ceph-users] How's cephfs going? I feel that the correct answer to this question is: it depends. I've been running a 1.75PB Jewel-based cephfs cluster in production for about 2 years at the Laureate Institute for Brain Research. Befor

[ceph-users] Re: How's cephfs going?

2017-07-17 Thread
r the help:-) -----Original Message----- From: Deepak Naidu [mailto:dna...@nvidia.com] Sent: July 18, 2017 6:59 To: Blair Bethwaite; 许雪寒 Cc: ceph-users@lists.ceph.com Subject: RE: [ceph-users] How's cephfs going? Based on my experience, it's really stable and, yes, it is production ready. Most of the use cases for ce

[ceph-users] Re: How's cephfs going?

2017-07-16 Thread
Blair Bethwaite [mailto:blair.bethwa...@gmail.com] Sent: July 17, 2017 11:14 To: 许雪寒 Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] How's cephfs going? It works and can reasonably be called "production ready". However, in Jewel there are still some features (e.g. directory sharding, multi

[ceph-users] How's cephfs going?

2017-07-16 Thread
Hi, everyone. We intend to use the Jewel version of cephfs; however, we don't know its status. Is it production ready in Jewel? Does it still have lots of bugs? Is it a major effort of current Ceph development? And who is using cephfs now?

[ceph-users] Re: Re: Re: No "snapset" attribute for clone object

2017-07-15 Thread
;164135 mlcod 2160'164135 active+clean] dropping ondisk_read_lock for src 6:d1d35c73:::rbd_data.d18d71b948ac7.062e:16 It showed that rbd_data.d18d71b948ac7.0000062e:16 got promoted at about 2017-07-14 18:27:11, at which time the "snaps" field of its object c

[ceph-users] Re: Re: No "snapset" attribute for clone object

2017-07-14 Thread
Yes, I believe so. Are there any workarounds? -----Original Message----- From: Jason Dillaman [mailto:jdill...@redhat.com] Sent: July 13, 2017 21:13 To: 许雪寒 Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Re: No "snapset" attribute for clone object Quite possibly the same as this issue? [1]

[ceph-users] Re: No "snapset" attribute for clone object

2017-07-13 Thread
By the way, we are using the Hammer version's rbd command to export-diff rbd images on a Jewel cluster. -----Original Message----- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of 许雪寒 Sent: July 13, 2017 19:54 To: ceph-users@lists.ceph.com Subject: [ceph-users] No "snapset"

[ceph-users] No "snapset" attribute for clone object

2017-07-13 Thread
We are using rbd for VM block devices, and recently we found that, after we created snapshots for some rbd images, some clone objects do not have the "snapset" extended attribute. It seems that the lack of the "snapset" attribute for clone

[ceph-users] Mon stuck in synchronizing after upgrading from Hammer to Jewel

2017-07-04 Thread
Hi, everyone. Recently, we upgraded one of our clusters from Hammer to Jewel. However, after upgrading, one of our monitors cannot finish the bootstrap procedure and is stuck in “synchronizing”. Does anyone have any clue about this? Thank you☺
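One way to watch what the stuck monitor is doing is its admin socket (run on the monitor host; the monitor id is a placeholder):

    ceph daemon mon.ceph1 mon_status     # "state" remains "synchronizing" until it catches up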

[ceph-users] Re: Re: Can't start ceph-mon through systemctl start ceph-mon@.service after upgrading from Hammer to Jewel

2017-06-23 Thread
no problem. Only when starting through systemctl does it fail. From: David Turner [mailto:drakonst...@gmail.com] Sent: June 22, 2017 20:47 To: 许雪寒; Linh Vu; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Re: Can't start ceph-mon through systemctl start ceph-mon@.service after upgrading from Ham

[ceph-users] Re: Can't start ceph-mon through systemctl start ceph-mon@.service after upgrading from Hammer to Jewel

2017-06-22 Thread
I set mon_data to “/home/ceph/software/ceph/var/lib/ceph/mon”, and its owner has always been “ceph” since we were running Hammer. I also tried setting the permissions to “777”; that didn't work either. From: Linh Vu [mailto:v...@unimelb.edu.au] Sent: June 22, 2017 14:26 To: 许雪寒; ceph-users

[ceph-users] Re: Can't start ceph-mon through systemctl start ceph-mon@.service after upgrading from Hammer to Jewel

2017-06-21 Thread
I set mon_data to “/home/ceph/software/ceph/var/lib/ceph/mon”, and its owner has always been “ceph” since we were running Hammer. I also tried setting the permissions to “777”; that didn't work either. From: Linh Vu [mailto:v...@unimelb.edu.au] Sent: June 22, 2017 14:26 To: 许雪寒; ceph-users

[ceph-users] Can't start ceph-mon through systemctl start ceph-mon@.service after upgrading from Hammer to Jewel

2017-06-21 Thread
Hi, everyone. I upgraded one of our ceph clusters from Hammer to Jewel. After upgrading, I can’t start ceph-mon through “systemctl start ceph-mon@ceph1”, while, on the other hand, I can start ceph-mon, either as user ceph or root, if I directly call “/usr/bin/ceph-mon --cluster ceph --id ceph1
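As a rough checklist for this symptom after a Hammer-to-Jewel upgrade (the data path is the one quoted in this thread; Jewel's unit file drops privileges to user ceph):

    journalctl -u ceph-mon@ceph1 --no-pager | tail -n 50      # see what the unit actually failed on
    chown -R ceph:ceph /home/ceph/software/ceph/var/lib/ceph  # everything under mon_data must be owned by ceph
    systemctl daemon-reload
    systemctl start ceph-mon@ceph1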

[ceph-users] Fwd: Question about upgrading ceph clusters from Hammer to Jewel

2017-06-19 Thread
By the way, I intend to install the Jewel version through the “rpm” command, and I already have a user “ceph” on the target machine; is there any problem if I do “systemctl start ceph.target” after the installation of the Jewel version? From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of 许雪寒 Sent

[ceph-users] Re: Question about upgrading ceph clusters from Hammer to Jewel

2017-06-19 Thread
By the way, I intend to install the Jewel version through the “rpm” command, and I already have a user “ceph” on the target machine; is there any problem if I do “systemctl start ceph.target” after the installation of the Jewel version? From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of 许雪寒 Sent

[ceph-users] Question about upgrading ceph clusters from Hammer to Jewel

2017-06-19 Thread
Hi, everyone. I intend to upgrade one of our ceph clusters from Hammer to Jewel. I wonder in what order I should upgrade the MON, OSD and LIBRBD? Is there any problem with having some of these components running the Hammer version while others run the Jewel version? Do I have to upgrade QEMU as well to
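The generally recommended order is monitors first, then OSDs, then MDS/RGW, and clients last; librbd/QEMU only pick up the new library when the client processes are restarted. A rough sketch of one rolling pass (the package-update step is a placeholder for your distro's mechanism):

    ceph osd set noout                    # avoid rebalancing while daemons restart
    # upgrade the Ceph packages on the host, then:
    systemctl restart ceph-mon.target     # on each monitor host, one at a time
    systemctl restart ceph-osd.target     # then on each OSD host, one at a time
    ceph osd unset noout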

[ceph-users] Question about PGMonitor::waiting_for_finished_proposal

2017-05-31 Thread
Hi, everyone. Recently, I’ve been reading the Monitor source code. I found that, in the PGMonitor::prepare_pg_stats() method, a callback C_Stats is put into PGMonitor::waiting_for_finished_proposal. I wonder, if a previous PGMap incremental is in PAXOS's propose/accept phase at the moment C_Stats

[ceph-users] Re: How does rbd preserve the consistency of WRITE requests that span across multiple objects?

2017-05-24 Thread
sert some other operations, like an io barrier, between those two writes so that the underlying storage system is aware of the case? -----Original Message----- From: Jason Dillaman [mailto:jdill...@redhat.com] Sent: May 24, 2017 23:05 To: 许雪寒 Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] How does rbd preserve

[ceph-users] How does rbd preserve the consistency of WRITE requests that span across multiple objects?

2017-05-23 Thread
Hi, thanks for the explanation:-) On the other hand, I wonder if the following scenario could happen: a program in a virtual machine that uses "libaio" to access a file continuously submits "write" requests to the underlying file system, which translates the requests into rbd requests. Say,

[ceph-users] Re: Odd cyclical cluster performance

2017-05-11 Thread
It seems that some bottleneck is blocking the I/O: when the bottleneck is reached, I/O is blocked and the curve goes down; when it is released, I/O resumes and the curve goes up. -----Original Message----- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of Patrick Dinnen Sent: May 12, 2017 3:47

[ceph-users] Re: Large META directory within each OSD's directory

2017-05-01 Thread
Thanks☺ We are using Hammer 0.94.5. Which commit is supposed to fix this bug? Thank you. From: David Turner [mailto:drakonst...@gmail.com] Sent: April 25, 2017 20:17 To: 许雪寒; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Large META directory within each OSD's directory Which version of Cep

[ceph-users] Large META directory within each OSD's directory

2017-04-25 Thread
Hi, everyone. Recently, in one of our clusters, we found that the “META” directory in each OSD’s working directory is getting extremely large, about 17GB each. Why hasn’t the OSD cleared those old osdmaps? How should I deal with this problem? Thank you☺
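As a quick check, the space the osdmap history occupies can be measured directly on each OSD (a sketch assuming default filestore paths; adjust for custom data directories):

    du -sh /var/lib/ceph/osd/ceph-*/current/meta     # old osdmap epochs are stored here on filestore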

[ceph-users] Re: Why is there no data backup mechanism in the rados layer?

2017-04-19 Thread
to another cluster, wouldn't this be a better way? From: Christian Balzer [mailto:ch...@gol.com] Sent: January 3, 2017 19:47 To: ceph-users@lists.ceph.com Cc: 许雪寒 Subject: Re: [ceph-users] Why is there no data backup mechanism in the rados layer? Hello, On Tue, 3 Jan 2017 11:16:27 +0000 许雪寒 wrote:

[ceph-users] Re: Does cephfs guarantee client cache consistency for file data?

2017-04-19 Thread
Thanks, everyone:-) I'm still not quite clear, though. Do these cache "capabilities" apply only to metadata operations, or to both metadata and data? -----Original Message----- From: David Disseldorp [mailto:dd...@suse.de] Sent: April 19, 2017 16:46 To: 许雪寒 Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-use

[ceph-users] Does cephfs guarantee client cache consistency for file data?

2017-04-19 Thread
Hi, everyone. I’m new to cephfs. I wonder whether cephfs guarantees client cache consistency for file content. For example, if client A reads some data of file X, and client B then modifies X’s content in the range that A read, will A be notified of the modification?

[ceph-users] Re: Re: Re: Re: rbd export-diff isn't counting AioTruncate ops correctly

2017-04-11 Thread
Thanks for your help:-) By the way, could you give us a hint as to why Infernalis and later releases don't have this problem, please? Thank you. -----Original Message----- From: Jason Dillaman [mailto:jdill...@redhat.com] Sent: April 11, 2017 4:30 To: 许雪寒 Cc: ceph-users@lists.ceph.com Subject: Re: Re: [ceph-user

[ceph-users] Re: Re: Re: rbd export-diff isn't counting AioTruncate ops correctly

2017-04-09 Thread
Oh, sorry again. We didn't resize the image, just "aio_discard"ed the data from offset 1048576 to the end of the rbd image. ________ From: 许雪寒 Sent: April 9, 2017 16:37 To: dilla...@redhat.com Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Re:

[ceph-users] Re: Re: Re: rbd export-diff isn't counting AioTruncate ops correctly

2017-04-09 Thread
Ah, sorry, I didn't understand you correctly. We did use the librbd::Image::aio_discard method to resize the image from 4MB to 1MB. From: ceph-users [ceph-users-boun...@lists.ceph.com] On behalf of 许雪寒 [xuxue...@360.cn] Sent: April 3, 2017 23:27 To: dilla...@redha

[ceph-users] Re: Re: rbd export-diff isn't counting AioTruncate ops correctly

2017-04-03 Thread
bject, which leads to a "copy-up", before a new snapshot is created after which an export-diff is conducted, the export-diff will copy all the data in the HEAD object, which, in our case, is not the "diff" that we want. ________ From: Jason Dillaman [j

[ceph-users] Re: rbd export-diff isn't counting AioTruncate ops correctly

2017-04-03 Thread
Hi, the operation we performed was AioTruncate. From: Jason Dillaman [jdill...@redhat.com] Sent: April 3, 2017 22:11 To: 许雪寒 Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] rbd export-diff isn't counting AioTruncate ops correctly On Fri, Mar 31, 2017

[ceph-users] Re: CephX authentication fails when only "auth_cluster_required" is disabled

2017-03-31 Thread
By the way, we are using the Hammer version, 0.94.5. -----Original Message----- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of 许雪寒 Sent: April 1, 2017 13:13 To: ceph-users@lists.ceph.com Subject: [ceph-users] CephX authentication fails when only "auth_cluster_required" is disabled Hi, everyone.

[ceph-users] Re: rbd export-diff isn't counting AioTruncate ops correctly

2017-03-31 Thread
By the way, we are using the Hammer version, 0.94.5. -----Original Message----- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of 许雪寒 Sent: April 1, 2017 10:37 To: ceph-users@lists.ceph.com Subject: [ceph-users] rbd export-diff isn't counting AioTruncate ops correctly Hi, everyone. Recently, in our

[ceph-users] CephX authentication fails when only "auth_cluster_required" is disabled

2017-03-31 Thread
Hi, everyone. According to the documentation, “auth_cluster_required” means that “the Ceph Storage Cluster daemons (i.e., ceph-mon, ceph-osd, and ceph-mds) must authenticate with each other”. So, I guess if I only need to verify the client, then "auth_cluster_required" doesn't need to be enable
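For reference, the combination being discussed would look like this in ceph.conf (a sketch of the intent only, since the thread reports that 0.94.5 rejects authentication with it):

    [global]
    auth_cluster_required = none      # daemons do not authenticate each other
    auth_service_required = cephx     # daemons still require clients to authenticate
    auth_client_required = cephx      # clients still require the cluster to authenticate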

[ceph-users] rbd export-diff isn't counting AioTruncate ops correctly

2017-03-31 Thread
Hi, everyone. Recently, in our testing, we found that there are VM images that we exported from the original cluster and imported into another cluster whose contents on the two clusters are not the same. The details of the test are as follows: at first, we fully exported the VM's images from the origi
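For readers unfamiliar with the workflow being tested, a rough sketch of a full export followed by an incremental export-diff/import-diff between two clusters (pool, image and snapshot names are placeholders):

    rbd snap create rbd/vmimage@base
    rbd export rbd/vmimage@base vmimage-base.img                       # full copy
    rbd snap create rbd/vmimage@delta
    rbd export-diff --from-snap base rbd/vmimage@delta vmimage.diff    # changes since @base
    # on the destination cluster:
    rbd import vmimage-base.img rbd/vmimage
    rbd snap create rbd/vmimage@base                                   # import-diff expects the start snapshot
    rbd import-diff vmimage.diff rbd/vmimage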

[ceph-users] Re: Re: Re: Re: Pipe "deadlock" in Hammer, 0.94.5

2017-03-16 Thread
Hi, Gregory. On the other hand, I checked, and the fix 63e44e32974c9bae17bb1bfd4261dcb024ad845c should be the one that we need. However, I notice that this fix has only been backported down to v11.0.0; can we simply apply it to our Hammer version (0.94.5)? -----Original Message----- From: 许雪寒 Sent: March 17, 2017
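A sketch of what attempting that backport would look like (the commit hash is the one quoted above; expect conflicts, since the fix was written against a much newer tree, and test before deploying):

    git clone https://github.com/ceph/ceph.git && cd ceph
    git checkout -b hammer-pipe-fix v0.94.5
    git cherry-pick -x 63e44e32974c9bae17bb1bfd4261dcb024ad845c    # resolve any conflicts by hand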

[ceph-users] Re: Re: Re: Re: Pipe "deadlock" in Hammer, 0.94.5

2017-03-16 Thread
I got it. Thanks very much:-) From: Gregory Farnum [mailto:gfar...@redhat.com] Sent: March 17, 2017 2:10 To: 许雪寒 Cc: ceph-users@lists.ceph.com; jiajia zhong Subject: Re: Re: [ceph-users] Re: Re: Pipe "deadlock" in Hammer, 0.94.5 On Thu, Mar 16, 2017 at 3:36 AM 许雪寒 wrote: Hi, Gregory, is it p

[ceph-users] Re: Re: Re: Re: Pipe "deadlock" in Hammer, 0.94.5

2017-03-16 Thread
:gfar...@redhat.com] Sent: January 17, 2017 7:14 To: 许雪寒 Cc: jiajia zhong; ceph-users@lists.ceph.com Subject: Re: Re: [ceph-users] Re: Re: Pipe "deadlock" in Hammer, 0.94.5 On Sat, Jan 14, 2017 at 7:54 PM, 许雪寒 wrote: > Thanks for your help:-) > > I checked the source code again, and in read_m

[ceph-users] Re: Re: Pipe "deadlock" in Hammer, 0.94.5

2017-03-15 Thread
Hi, sir. I'm sorry, I made a mistake; the fix that you provided should be the one we need. Is it safe for us to simply "git cherry-pick" that commit into our 0.94.5 version? So sorry for my mistake. Thank you. On Wed, Jan 11, 2017 at 3:59 PM, 许雪寒 wrote: > In our test, w

[ceph-users] Re: Re: How does ceph preserve read/write consistency?

2017-03-09 Thread
I also submitted an issue: http://tracker.ceph.com/issues/19252 -----Original Message----- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of 许雪寒 Sent: March 10, 2017 11:20 To: Wei Jin; ceph-users@lists.ceph.com Subject: [ceph-users] Re: How does ceph preserve read/write consistency? Thanks for your

[ceph-users] Re: How does ceph preserve read/write consistency?

2017-03-09 Thread
March 9, 2017 21:52 To: 许雪寒 Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] How does ceph preserve read/write consistency? On Thu, Mar 9, 2017 at 1:45 PM, 许雪寒 wrote: > Hi, everyone. > As shown above, WRITE req with tid 1312595 arrived at 18:58:27.439107 and > READ req with tid 6476 arri

[ceph-users] How does ceph preserve read/write consistency?

2017-03-08 Thread
Hi, everyone. Recently, in our test, we found a strange phenomenon: a READ req from client A that arrived later than a WRITE req from client B is finished earlier than that WRITE req. The logs are as follows (we changed the level of some logs to 1 in order to get some insigh

[ceph-users] Re: Rbd export-diff bug? rbd export-diff generates different incremental files

2017-02-20 Thread
Hi, everyone. I read the source code. Could this be the case: a "WRITE" op directed at OBJECT X is followed by a series of ops, at the end of which is a "READ" op directed at the same OBJECT that comes from the "rbd EXPORT" command; although the "WRITE" op modified the ObjectContext of OBJECT

[ceph-users] Why does ceph-client.admin.asok disappear after some running time?

2017-02-12 Thread
Hi, everyone. I’m doing some stress testing with ceph, librbd and fio. During the test, I want to “perf dump” the client’s perf data. However, each time I tried to do “perf dump” on the client, the “asok” file of librbd had disappeared. I’m sure that at the beginning of the fio run, the client’s
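A sketch of pinning the client admin socket to a predictable per-process path and querying it while fio is running (the path template uses standard ceph.conf metavariables; the pid/cctid parts of the query path are placeholders):

    # ceph.conf on the client:
    [client]
        admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
    # then, while fio is running:
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.<pid>.<cctid>.asok perf dump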

[ceph-users] Monitor repeatedly calling new election

2017-02-03 Thread
Hi, everyone. Recently, when I was doing some stress testing, one of the monitors of my ceph cluster was marked down, all the monitors repeatedly called new elections, and the I/O couldn't be finished. There were three monitors in my cluster: rg3-ceph36, rg3-ceph40, rg3-ceph45. It was rg3-ceph40 that w

[ceph-users] Re: Does this indicate a "CPU bottleneck"?

2017-01-19 Thread
The network is only about 10% utilized, and we tested the performance with different numbers of clients; it turned out that no matter how much we increase the number of clients, the result is the same. -----Original Message----- From: John Spray [mailto:jsp...@redhat.com] Sent: January 19, 2017 16:11 To: 许雪寒 Cc: ceph

[ceph-users] Does this indicate a "CPU bottleneck"?

2017-01-18 Thread
Hi, everyone. Recently, we did some stress testing on ceph using three machines. We tested the IOPS of the whole small cluster with 1 to 8 OSDs per machine, and the results are as follows: OSD num per machine / fio iops: 1
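A sketch of the kind of fio job used for this sort of test, driven through the librbd ioengine (pool and image names are placeholders):

    fio --name=cephtest --ioengine=rbd --clientname=admin --pool=rbd --rbdname=testimg \
        --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 --time_based --runtime=300 --group_reporting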

[ceph-users] Re: Re: Re: Pipe "deadlock" in Hammer, 0.94.5

2017-01-14 Thread
oop, it first locks connection_state->lock and then does tcp_read_nonblocking. connection_state is of type PipeConnectionRef, and connection_state->lock is Connection::lock. On the other hand, I'll check whether there are a lot of messages to send, as you suggested. Thanks:-) From: G

[ceph-users] Re: Re: Pipe "deadlock" in Hammer, 0.94.5

2017-01-12 Thread
holding Connection::lock? I think maybe a different mutex should be used in Pipe::read_message rather than Connection::lock. From: jiajia zhong [mailto:zhong2p...@gmail.com] Sent: January 13, 2017 11:50 To: 许雪寒 Cc: ceph-users@lists.ceph.com Subject: Re: Re: [ceph-users] Pipe "deadlock" in Hammer, 0

[ceph-users] Re: Pipe "deadlock" in Hammer, 0.94.5

2017-01-12 Thread
it wouldn’t act as blocked. Is this so? This really confuses me. From: jiajia zhong [mailto:zhong2p...@gmail.com] Sent: January 12, 2017 18:22 To: 许雪寒 Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Pipe "deadlock" in Hammer, 0.94.5 if errno is EAGAIN for recv, the Pipe::do_recv ju

[ceph-users] Pipe "deadlock" in Hammer, 0.94.5

2017-01-12 Thread
Hi, everyone. Recently, we did some experiments to test the stability of the ceph cluster. We used the Hammer version, which is the version mostly used in our online clusters. One of the scenarios that we simulated is poor network connectivity, in which we used iptables to drop TCP/IP packets under some pr

[ceph-users] Fwd: Is this a deadlock?

2017-01-04 Thread
e the latter one that you mentioned. Last night, one of our switches had a problem and left the OSD unable to connect to its peers, which in turn made the monitor wrongly mark the OSD down. Thank you:-) On Wed, 4 Jan 2017 07:49:03 +0000 许雪寒 wrote: > Hi, everyone. > > Recently in one of o

[ceph-users] Re: Is this a deadlock?

2017-01-04 Thread
monitor wrongly mark the OSD down. Thank you:-) On Wed, 4 Jan 2017 07:49:03 +0000 许雪寒 wrote: > Hi, everyone. > > Recently in one of our online ceph clusters, one OSD killed itself after > experiencing some network connectivity problems, and the OSD log is as follows: > Vers

[ceph-users] Is this a deadlock?

2017-01-03 Thread
Hi, everyone. Recently, in one of our online ceph clusters, one OSD killed itself after experiencing some network connectivity problems, and the OSD log is as follows: -173> 2017-01-03 23:42:19.145490 7f5021bbc700 0 -- 10.205.49.55:6802/1778451 >> 10.205.49.174:6803/1499671 pipe(0x7f50ec2ce00

[ceph-users] Why is there no data backup mechanism in the rados layer?

2017-01-03 Thread
Hi, everyone. I’m researching the online backup mechanisms of ceph, like rbd mirroring and multi-site, and I’m a little confused. Why is there no data backup mechanism in the rados layer? Wouldn't this save the bother of implementing a backup system for every higher-level feature of ceph, like rbd