Re: [ceph-users] Math behind : : OSD count vs OSD process vs OSD ports

2015-11-18 Thread Vickey Singh
Can anyone please help me understand this? Thank you. On Mon, Nov 16, 2015 at 5:55 PM, Vickey Singh wrote: > Hello Community > > Need your help in understanding this. > > I have the below node, which is hosting 60 physical disks, running 1 OSD > per disk, so 60 Ceph OSD daemons in total > > *[root@

Re: [ceph-users] RBD snapshots cause disproportionate performance degradation

2015-11-18 Thread Özhan Rüzgar Karaman
Hi Haomai, Do you use the filestore_fiemap=true parameter on CentOS 7 with Hammer/Infernalis in any production Ceph environment for RBD-style storage? Is it safe to use in a production environment? Thanks Özhan On Wed, Nov 18, 2015 at 8:12 AM, Haomai Wang wrote: > Yes, it's an expected case. Actually if

Re: [ceph-users] Cannot mount CephFS after irreversible OSD lost

2015-11-18 Thread Mykola Dvornik
Hi John, It turned out that the MDS triggers an assertion *mds/MDCache.cc: 269: FAILED assert(inode_map.count(in->vino()) == 0)* on any attempt to write data to the filesystem mounted via fuse. Deleting data is still OK. I cannot really follow why duplicated inodes appear. Are there any ways to f

Re: [ceph-users] Math behind : : OSD count vs OSD process vs OSD ports

2015-11-18 Thread Дмитрий Глушенок
Hi Vickey, > On 18 Nov. 2015, at 11:36, Vickey Singh > wrote: > > Can anyone please help me understand this. > > Thank You > > > On Mon, Nov 16, 2015 at 5:55 PM, Vickey Singh > wrote: > Hello Community > > Need your help in understanding this. > >
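For anyone following the port math in this thread: in Hammer each ceph-osd daemon typically binds up to four ports (one for clients/monitors, one for the cluster network, and two for heartbeats), allocated from the ms_bind_port_min..ms_bind_port_max range (6800-7300 by default). A back-of-the-envelope sketch, assuming those defaults and the 60-OSD node from the original post:

    # Rough port budget for one node, assuming Hammer defaults:
    # each ceph-osd binds up to 4 ports (public, cluster, 2x heartbeat)
    # out of the ms_bind_port_min..ms_bind_port_max range (6800-7300).
    osds_per_host = 60                   # from the original post
    ports_per_osd = 4                    # typical upper bound per daemon
    port_range = 7300 - 6800 + 1         # default bind port range

    ports_needed = osds_per_host * ports_per_osd
    print("ports needed: %d, ports available: %d" % (ports_needed, port_range))
    # -> ports needed: 240, ports available: 501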

Re: [ceph-users] Fixing inconsistency

2015-11-18 Thread Межов Игорь Александрович
Hi! As in my previous message, digging through the mailing list gave me only one method to fix the inconsistency - truncate the object files in the filesystem to the size they have in the Ceph metadata: http://www.spinics.net/lists/ceph-users/msg00794.html But in that issue the metadata size was bigger than the on-disk size, so
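For context, a minimal sketch of the method from that linked thread (the path, size and PG id below are hypothetical placeholders, not values from this cluster): the on-disk object replica is truncated to the size recorded in Ceph's metadata, after which the PG is repaired so Ceph re-verifies the copy.

    # Hypothetical illustration only - back up the object file first.
    import os, subprocess

    obj_path = "/var/lib/ceph/osd/ceph-12/current/2.7f_head/<object-file>"  # placeholder replica path
    expected_size = 4194304        # size recorded in Ceph metadata (example value)

    os.truncate(obj_path, expected_size)                      # resize the replica to the expected size
    subprocess.check_call(["ceph", "pg", "repair", "2.7f"])   # have Ceph re-check/repair the PG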

Re: [ceph-users] SSD Caching Mode Question

2015-11-18 Thread Nick Fisk
Hi Robert, > -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Robert LeBlanc > Sent: 18 November 2015 00:47 > To: Ceph-User > Subject: [ceph-users] SSD Caching Mode Question > > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA256 > > We are i

Re: [ceph-users] Math behind : : OSD count vs OSD process vs OSD ports

2015-11-18 Thread Vickey Singh
A BIG thanks, Dmitry, for your help. On Wed, Nov 18, 2015 at 11:47 AM, Дмитрий Глушенок wrote: > Hi Vickey, > > On 18 Nov. 2015, at 11:36, Vickey Singh > wrote: > > Can anyone please help me understand this. > > Thank You > > > On Mon, Nov 16, 2015 at 5:55 PM, Vickey Singh > wrote: > >> H

[ceph-users] All SSD Pool - Odd Performance

2015-11-18 Thread Sean Redmond
Hi, I have a performance question for anyone running an SSD-only pool. Let me detail the setup first: 12 x Dell PowerEdge R630 (2 x E5-2620 v3, 64GB RAM), 8 x Intel DC S3710 800GB, dual-port Solarflare 10Gb/s NIC (one front and one back), Ceph 0.94.5, Ubuntu 14.04 (3.13.0-68-generic). The above is in one

Re: [ceph-users] All SSD Pool - Odd Performance

2015-11-18 Thread Mike Almateia
18-Nov-15 14:39, Sean Redmond wrote: Hi, I have a performance question for anyone running an SSD-only pool. Let me detail the setup first: 12 x Dell PowerEdge R630 (2 x E5-2620 v3, 64GB RAM), 8 x Intel DC S3710 800GB, dual-port Solarflare 10Gb/s NIC (one front and one back), Ceph 0.94.5, Ubuntu 14.04 (3

[ceph-users] Advised Ceph release

2015-11-18 Thread Bogdan SOLGA
Hello, everyone! We have recently set up a Ceph cluster running on the Hammer release (v0.94.5), and we would like to know which release is advised for preparing a production-ready cluster - the LTS release (Hammer) or the latest stable release (Infernalis)? The cluster works properly (so far),

Re: [ceph-users] All SSD Pool - Odd Performance

2015-11-18 Thread Warren Wang - ISD
What were you using for iodepth and numjobs? If you’re getting an average of 2ms per operation, and you’re single threaded, I’d expect about 500 IOPS / thread, until you hit the limit of your QEMU setup, which may be a single IO thread. That’s also what I think Mike is alluding to. Warren From
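As a quick back-of-the-envelope check of the numbers above (assuming a strictly serial client with one IO in flight, as Warren describes):

    # With one outstanding IO and ~2 ms average latency, throughput is capped
    # at 1/latency operations per second, regardless of how fast the pool is.
    avg_latency_s = 0.002                 # ~2 ms per operation, as reported
    queue_depth = 1                       # single-threaded, one IO in flight
    max_iops = queue_depth / avg_latency_s
    print("~%.0f IOPS per thread" % max_iops)   # ~500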

Re: [ceph-users] Advised Ceph release

2015-11-18 Thread Warren Wang - ISD
If it’s your first prod cluster, and you have no hard requirements for Infernalis features, I would say stick with Hammer. Warren From: Bogdan SOLGA <bogdan.so...@gmail.com> Date: Wednesday, November 18, 2015 at 1:58 PM To: ceph-users <ceph-users@lists.ceph.com> Cc: Calin Fatu mail

Re: [ceph-users] Bcache and Ceph Question

2015-11-18 Thread Wido den Hollander
On 11/17/2015 03:12 PM, German Anders wrote: > Hi all, > > Is there any way to use bcache in an already configured Ceph cluster? > I have both the OSD data and the journal inside the same OSD daemon, and I want to try > bcache in front of the OSD daemon and also move the journal onto the bcache > device, so for e

Re: [ceph-users] RBD snapshots cause disproportionate performance degradation

2015-11-18 Thread Will Bryant
Hi Haomai, Thanks for that suggestion. To test it out, I have: 1. upgraded to the 3.19 kernel 2. added filestore_fiemap = true to my ceph.conf in the [osd] section 3. wiped and rebuilt the ceph cluster 4. recreated the RBD volume But I am still only getting around 120 IOPS after a snapshot. The lo
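For reference, step 2 corresponds to a ceph.conf fragment along these lines (a sketch of the setting as described above; the OSDs need to pick it up on start, hence the rebuild/restart):

    [osd]
    filestore_fiemap = true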

Re: [ceph-users] Cannot mount CephFS after irreversible OSD lost

2015-11-18 Thread Yan, Zheng
On Wed, Nov 18, 2015 at 5:21 PM, Mykola Dvornik wrote: > Hi John, > > It turned out that mds triggers an assertion > > *mds/MDCache.cc: 269: FAILED assert(inode_map.count(in->vino()) == 0)* > > on any attempt to write data to the filesystem mounted via fuse. > > Deleting data is still OK. > > I c

[ceph-users] After flattening the children image, snapshot still can not be unprotected

2015-11-18 Thread Jackie
Hi experts, I hit a problem where, after flattening the child image, the snapshot still cannot be unprotected. Details are as follows: ubuntu@devstack-ntse:~$ rbd snap unprotect vms/9bec1605-816b-4e6f-b393-eb44d214c21d_disk@20d41cdcbc184d09834c850ec68965af 2015-11-19 12:53:09.146001 7
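Unprotect normally fails only while the snapshot still has clones, so a usual next step is to check whether any children remain (flattening one clone does not detach others, including clones in other pools). A minimal diagnostic sketch with the Python rbd bindings, reusing the pool/image/snapshot names from the command above - not a confirmed fix for this particular case:

    import rados, rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")   # pool from the command above

    img = rbd.Image(ioctx, "9bec1605-816b-4e6f-b393-eb44d214c21d_disk")
    try:
        img.set_snap("20d41cdcbc184d09834c850ec68965af")
        # Any entries here are clones that still depend on the snapshot and
        # must be flattened (or removed) before 'rbd snap unprotect' can succeed.
        print("remaining children:", img.list_children())
    finally:
        img.close()
        ioctx.close()
        cluster.shutdown()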

[ceph-users] ceph_monitor - monitor your cluster with parallel python

2015-11-18 Thread igor.podo...@ts.fujitsu.com
Hi Cephers! I've created a small tool to help track memory/CPU/IO usage. It's useful for me, so I thought I could share it with you: https://github.com/aiicore/ceph_monitor In general this is a Python script that uses Parallel Python to run a function on remote hosts. Data is gathered from all hosts
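For those unfamiliar with Parallel Python, the general pattern is roughly the following (a simplified sketch, not the actual ceph_monitor code; hostnames and the sampled data are placeholders):

    import pp

    def sample_usage():
        # On the executing node, read some cheap counters; loadavg as an example.
        with open("/proc/loadavg") as f:
            return f.read().strip()

    # ppserver.py must already be running on each remote node.
    ppservers = ("ceph-node1:60000", "ceph-node2:60000")
    server = pp.Server(ncpus=0, ppservers=ppservers)   # ncpus=0: no local workers

    jobs = [server.submit(sample_usage) for _ in ppservers]
    for job in jobs:
        print(job())   # calling the job blocks until its result arrives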