[ceph-users] what's the benefit if I deploy more ceph-mon node?

2015-11-19 Thread 席智勇
hi all: As the title says, if I deploy more than three ceph-mon nodes I can tolerate more monitor node failures. What I want to know is whether there is any other benefit, for example better IOPS or latency? On the other hand, what are the disadvantages, if any? best regards~

[ceph-users] Ceph extras package support for centos kvm-qemu

2015-11-19 Thread Xue, Chendi
Hi, All. We noticed the ceph.com/packages URL is no longer available; we used to download the RBD-enabled CentOS qemu-kvm from http://ceph.com/packages/ceph-extras/rpm as instructed below. Is there another way to fix this? Or is there any other qemu-kvm version with RBD support? [root@client03]# /usr/libexec/
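
A quick way to check whether an installed qemu-kvm build already has RBD support (a generic check, independent of the old ceph-extras packages):

  # qemu-img --help | grep rbd          # "rbd" appears in the supported formats list if compiled in
  # rpm -q --changelog qemu-kvm | grep -i rbd

If rbd does not show up, the binary was built without librbd and a rebuilt or alternate qemu-kvm package is needed.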

Re: [ceph-users] Cannot mount CephFS after irreversible OSD lost

2015-11-19 Thread Mykola Dvornik
Dear Yan, Thanks for your reply. The problem is that the back-up I've made was done after the data corruption (but before any manipulations with the journal). Since FS cannot be mounted via in-kernel client, I tend to believe that cephfs_metadata corruption is the cause. Since I do have a read-o

Re: [ceph-users] what's the benefit if I deploy more ceph-mon node?

2015-11-19 Thread Jan Schermer
There's no added benefit - it just adds resiliency. On the other hand - more monitors means more likelihood that one of them will break, when that happens there will be a brief interruption to some (not only management) operations. If you decide to reduce the number of MONs then that is a PITA a
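
For reference, monitor availability is a straight majority calculation: with N monitors the cluster keeps quorum while at least floor(N/2)+1 are up, so 3 monitors tolerate 1 failure, 5 tolerate 2 and 7 tolerate 3, while an even count (e.g. going from 3 to 4) adds load without adding any failure tolerance.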

Re: [ceph-users] Cannot mount CephFS after irreversible OSD lost

2015-11-19 Thread John Spray
On Wed, Nov 18, 2015 at 9:21 AM, Mykola Dvornik wrote: > Hi John, > > It turned out that mds triggers an assertion > > mds/MDCache.cc: 269: FAILED assert(inode_map.count(in->vino()) == 0) > > on any attempt to write data to the filesystem mounted via fuse. I'm guessing in this context that "write

[ceph-users] Questions about MDLog size and prezero operation

2015-11-19 Thread xiafei
Hi, all: I have two questions about MDLog: 1. The max number of logsegments per MDlog (mds_log_max_segments) is configured to be 30 in the config_opts.h file. However, the MDLog doesn’t check the number of logsegments when it starts a new segment. The configuration is only used when the
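
For anyone experimenting with this value, mds_log_max_segments does not have to be changed in config_opts.h; it can be overridden like any other option (a sketch, assuming an MDS daemon named "a"):

  [mds]
      mds log max segments = 30

  # or at runtime via the admin socket:
  # ceph daemon mds.a config set mds_log_max_segments 30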

Re: [ceph-users] Cannot mount CephFS after irreversible OSD lost

2015-11-19 Thread Mykola Dvornik
I'm guessing in this context that "write data" possibly means creating a file (as opposed to writing to an existing file). Indeed. Sorry for the confusion. You've pretty much hit the limits of what the disaster recovery tools are currently capable of. What I'd recommend you do at this stage is m

Re: [ceph-users] Questions about MDLog size and prezero operation

2015-11-19 Thread John Spray
On Thu, Nov 19, 2015 at 9:43 AM, xiafei wrote: > Hi, all: > I have two questions about MDLog: > > 1. The max number of logsegments per MDlog (mds_log_max_segments) is > configured to be 30 in the config_opts.h file. > However, the MDLog doesn’t check the number of logsegments when it star

Re: [ceph-users] Cannot mount CephFS after irreversible OSD lost

2015-11-19 Thread John Spray
On Thu, Nov 19, 2015 at 10:07 AM, Mykola Dvornik wrote: > I'm guessing in this context that "write data" possibly means creating > a file (as opposed to writing to an existing file). > > Indeed. Sorry for the confusion. > > You've pretty much hit the limits of what the disaster recovery tools > ar

Re: [ceph-users] Cannot mount CephFS after irreversible OSD lost

2015-11-19 Thread Mykola Dvornik
Thanks for the tip. I will stay on the safe side and wait until it is merged into master) Many thanks for all your help. -Mykola On 19 November 2015 at 11:10, John Spray wrote: > On Thu, Nov 19, 2015 at 10:07 AM, Mykola Dvornik > wrote: > > I'm guessing in this context that "write data"

Re: [ceph-users] RBD snapshots cause disproportionate performance degradation

2015-11-19 Thread Haomai Wang
On Thu, Nov 19, 2015 at 11:13 AM, Will Bryant wrote: > Hi Haomai, > > Thanks for that suggestion. To test it out, I have: > > 1. upgraded to 3.19 kernel > 2. added filestore_fiemap = true to my ceph.conf in the [osd] section > 3. wiped and rebuild the ceph cluster > 4. recreated the RBD volume >

[ceph-users] Can't activate osd in infernalis

2015-11-19 Thread David Riedl
Hi everyone. I updated one of my hammer osd nodes to infernalis today. After many problems with the upgrading process of the running OSDs, I decided to wipe them and start anew. I reinstalled all packages and deleted all partitions on the OSDs and the SSD journal drive. I zapped the disks with

Re: [ceph-users] All SSD Pool - Odd Performance

2015-11-19 Thread Sean Redmond
Hi Mike/Warren, Thanks for helping out here. I am running the below fio command to test this with 4 jobs and an iodepth of 128: fio --time_based --name=benchmark --size=4G --filename=/mnt/test.bin --ioengine=libaio --randrepeat=0 --iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 -
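
For comparison, a complete invocation along those lines (an illustrative reconstruction, not the exact command from the message, with numjobs spelling out the 4 jobs mentioned) might look like:

  fio --time_based --runtime=60 --name=benchmark --size=4G \
      --filename=/mnt/test.bin --ioengine=libaio --randrepeat=0 \
      --iodepth=128 --numjobs=4 --direct=1 --invalidate=1 \
      --verify=0 --rw=randwrite --bs=4k --group_reporting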

Re: [ceph-users] Can't activate osd in infernalis

2015-11-19 Thread David Riedl
I fixed the issue and opened a ticket on the ceph-deploy bug tracker: http://tracker.ceph.com/issues/13833 tl;dr: change the ownership of the SSD journal partition with chown ceph:ceph /dev/sdd1 On 19.11.2015 11:38, David Riedl wrote: Hi everyone. I updated one of my hammer osd nodes to infernalis

Re: [ceph-users] Can't activate osd in infernalis

2015-11-19 Thread Mykola
I am afraid one would need a udev rule to make it persistent. Sent from Outlook Mail for Windows 10 phone From: David Riedl Sent: Thursday, November 19, 2015 1:42 PM To: ceph-us...@ceph.com Subject: Re: [ceph-users] Can't activate osd in infernalis I fixed the issue and opened a ticket on the

Re: [ceph-users] Can't activate osd in infernalis

2015-11-19 Thread Mykola Dvornik
cat /etc/udev/rules.d/89-ceph-journal.rules KERNEL=="sdd?" SUBSYSTEM=="block" OWNER="ceph" GROUP="disk" MODE="0660" On 19 November 2015 at 13:54, Mykola wrote: > I am afraid one would need an udev rule to make it persistent. > > > > Sent from Outlook Mail

Re: [ceph-users] Can't activate osd in infernalis

2015-11-19 Thread David Riedl
Thanks for the fix! Two questions though: Is that the right place for the udev rule? I have CentOS 7. The folder exists, but all the other udev rules are in /usr/lib/udev/rules.d/. Can I just create a new file named "89-ceph-journal.rules" in the /usr/lib/udev/rules.d/ folder? Regards David

Re: [ceph-users] Questions about MDLog size and prezero operation

2015-11-19 Thread xiafei
Dear John, Thanks for your reply. Fei Xia > On 19 Nov 2015, at 18:07, John Spray wrote: > > On Thu, Nov 19, 2015 at 9:43 AM, xiafei wrote: >> Hi, all: >>I have two questions about MDLog: >> >> 1. The max number of logsegments per MDlog (mds_log_max_segments) is >> configured to be 3

Re: [ceph-users] Can't activate osd in infernalis

2015-11-19 Thread Mykola Dvornik
I am also using centos7.x. /usr/lib/udev/rules.d/ should be fine. If not, one can always symlink to /etc/udev/rules.d/. On 19 November 2015 at 14:13, David Riedl wrote: > Thanks for the fix! > Two questions though: > Is that the right place for the udev rule? I have CentOS 7. The folder > exists
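
To make a new rule take effect without rebooting (standard udev commands; /dev/sdd1 as in the rule above):

  # udevadm control --reload-rules
  # udevadm trigger
  # ls -l /dev/sdd1      # should now show ceph as the owner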

Re: [ceph-users] Can't activate osd in infernalis

2015-11-19 Thread David Riedl
Thanks again! It works now. But now I have another problem. The daemons are working now, even after a restart. But the OSDs won't talk to the rest of the cluster. osdmap e5058: 12 osds: 8 up, 8 in; The command # ceph osd in osd.1 tells me marked in osd.1. # ceph status tells me 1/9 in osds a

Re: [ceph-users] Can't activate osd in infernalis

2015-11-19 Thread German Anders
I've a similar problem while trying to run the prepare osd command: ceph version: infernalis 9.2.0 disk: /dev/sdf (745.2G) /dev/sdf1 740.2G /dev/sdf2 5G # parted /dev/sdf GNU Parted 2.3 Using /dev/sdf Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) pri

Re: [ceph-users] After flattening the children image, snapshot still can not be unprotected

2015-11-19 Thread Jason Dillaman
Does child image "images/0a38b10d-2184-40fc-82b8-8bbd459d62d2" have snapshots? -- Jason Dillaman - Original Message - > From: "Jackie" > To: ceph-users@lists.ceph.com > Sent: Thursday, November 19, 2015 12:05:12 AM > Subject: [ceph-users] After flattening the children image, snapsh
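
For reference, that check can be done with the stock rbd tooling (image name taken from the message; the snapshot name is a placeholder):

  rbd snap ls images/0a38b10d-2184-40fc-82b8-8bbd459d62d2
  rbd children images/<parent-image>@<snapshot>    # lists clones still depending on the snapshot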

[ceph-users] ceph osd prepare cmd on infernalis 9.2.0

2015-11-19 Thread German Anders
Hi cephers, I had some issues while running the prepare osd command: ceph version: infernalis 9.2.0 disk: /dev/sdf (745.2G) /dev/sdf1 740.2G /dev/sdf2 5G # parted /dev/sdf GNU Parted 2.3 Using /dev/sdf Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) p

Re: [ceph-users] ceph osd prepare cmd on infernalis 9.2.0

2015-11-19 Thread Mykola Dvornik
'Could not create partition 2 from 10485761 to 10485760'. Perhaps try to zap the disks first? On 19 November 2015 at 16:22, German Anders wrote: > Hi cephers, > > I had some issues while running the prepare osd command: > > ceph version: infernalis 9.2.0 > > disk: /dev/sdf (745.2G) >

[ceph-users] CACHEMODE_READFORWARD doesn't try proxy write?

2015-11-19 Thread Nick Fisk
Hi, I'm just looking through https://github.com/ceph/ceph/blob/ef2fec78440a522722ad003f0399bcfec2808416/src/osd/ReplicatedPG.cc#L2192 and from what I can see, if you set the cache mode to CACHEMODE_READFORWARD, proxy writes will not be used. Am I reading the code correctly, and is that the
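
For context, READFORWARD is the tiering mode set with the standard command below (the pool name is a placeholder); reads that miss the cache are forwarded to the base pool instead of being promoted, and the question here is how writes are handled in that mode:

  ceph osd tier cache-mode cache-pool readforward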

Re: [ceph-users] CACHEMODE_READFORWARD doesn't try proxy write?

2015-11-19 Thread Nick Fisk
Don't know why that URL got changed, it was meant to link to ReplicatedPG.cc - line 2192 From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Nick Fisk Sent: 19 November 2015 16:29 To: 'ceph-users' Subject: [ceph-users] CACHEMODE_READFORWARD doesn't try proxy write? Hi,

Re: [ceph-users] ceph osd prepare cmd on infernalis 9.2.0

2015-11-19 Thread German Anders
I've already tried that, with no luck at all. On Thursday, 19 November 2015, Mykola Dvornik wrote: > 'Could not create partition 2 from 10485761 to 10485760'. > > Perhaps try to zap the disks first? > > On 19 November 2015 at 16:22, German Anders > wrote: > >> Hi cephers, >> >> I had some issues

Re: [ceph-users] ceph osd prepare cmd on infernalis 9.2.0

2015-11-19 Thread Mykola
I believe the error message says that there is no space left on the device for the second partition to be created. Perhaps try to flush the GPT with good old dd. Sent from Outlook Mail for Windows 10 phone From: German Anders Sent: Thursday, November 19, 2015 7:25 PM To: Mykola Dvornik Cc: ceph-use
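
If ceph-disk zap alone does not clear it, the partition tables can be wiped by hand; a sketch, assuming the disk really is /dev/sdf and holds nothing worth keeping (the backup GPT header at the end of the disk has to be cleared as well):

  # sgdisk --zap-all /dev/sdf
  # dd if=/dev/zero of=/dev/sdf bs=1M count=10
  # dd if=/dev/zero of=/dev/sdf bs=1M count=10 seek=$(( $(blockdev --getsz /dev/sdf) / 2048 - 10 ))
  # partprobe /dev/sdf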

[ceph-users] v0.80.11 Firefly released

2015-11-19 Thread Sage Weil
This is a bugfix release for Firefly. As Firefly 0.80.x is nearing its planned end of life in January 2016, it may also be the last. We recommend that all Firefly users upgrade. For more detailed information, see the complete changelog at http://docs.ceph.com/docs/master/_downloads/v0.80.11.

Re: [ceph-users] RBD snapshots cause disproportionate performance degradation

2015-11-19 Thread Will Bryant
> On 19/11/2015, at 23:36 , Haomai Wang wrote: > Hmm, what's the actual capacity usage in this volume? Fiemap could > help a lot to a normal workload volume like sparse data distribution. I’m using basically the whole volume, so it’s not really sparse. > > Hmm, it's really a strange result for

Re: [ceph-users] v0.80.11 Firefly released

2015-11-19 Thread Yonghua Peng
I have been using the Firefly release. Is there official documentation for upgrading? Thanks. On 2015/11/20 6:08, Sage Weil wrote: This is a bugfix release for Firefly. This Firefly 0.80.x is nearing its planned end of life in January 2016 it may also be the last. We recommend that all Firefl
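
For a point release within the same series the usual pattern is to upgrade packages and then restart daemons in order, monitors first, then OSDs, then MDS/RGW, one node at a time; a rough sketch (package commands vary by distro, daemon ids are placeholders):

  # yum update ceph            (or: apt-get install --only-upgrade ceph)
  # service ceph restart mon.a
  # service ceph restart osd.0
  # ceph -s                    # wait for HEALTH_OK before moving to the next node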

[ceph-users] Reply:Re: what's the benefit if I deploy more ceph-mon node?

2015-11-19 Thread 席智勇
hi Jan: got it. thanks for the reply. At 2015-11-19 17:14:19, "Jan Schermer" wrote: >There's no added benefit - it just adds resiliency. >On the other hand - more monitors means more likelihood that one of them will >break, when that happens there will be a brief interruption to so

[ceph-users] Objects per PG skew warning

2015-11-19 Thread Richard Gray
Hi, Running 'health detail' on our Ceph cluster this morning, I noticed a warning about one of the pools having significantly more objects per placement group than the cluster average. ceph> health detail HEALTH_WARN pool cas_backup has too few pgs pool cas_backup objects per pg (2849) is more
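
If the pool really does have too few PGs (rather than the other pools having too many objects), the remedy is to raise pg_num and then pgp_num on it; a sketch, with 256 as a placeholder target worked out for the pool's size:

  ceph osd pool get cas_backup pg_num
  ceph osd pool set cas_backup pg_num 256
  ceph osd pool set cas_backup pgp_num 256   # once the new PGs have been created

pg_num can only be increased, and each increase causes data movement, so it is worth picking the final value up front.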

[ceph-users] [HELP] Unprotect snapshot RBD object

2015-11-19 Thread Le Quang Long
Hi all, I have hit a serious bug. When I try to unprotect a snapshot of an RBD image, there is no response and I can't find any log, so I cannot delete this object. Could you help me solve this? ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ce
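
Generic commands that can help narrow this down (a sketch, not specific to this cluster; pool, image and snapshot names are placeholders):

  rbd children rbd/myimage@mysnap      # unprotect cannot succeed while unflattened clones exist
  rbd flatten rbd/childimage
  rbd snap unprotect rbd/myimage@mysnap
  rbd snap rm rbd/myimage@mysnap
  rbd rm rbd/myimage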