Re: [ceph-users] KVM problems when rebalance occurs

2016-01-08 Thread nick
Hi, benchmarking is done via fio with different block sizes. I compared with benchmarks I did before the ceph.conf change and saw very similar numbers. Thanks for the hint about MySQL benchmarking, I will try it out. Cheers Nick On Friday, January 08, 2016 06:59:13 AM Josef Johansson wro
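
For reference, a run of this kind is typically along these lines; this is a minimal sketch, and the guest device path, block size and runtime are assumptions, not Nick's actual parameters:

  # inside the KVM guest: direct random writes at a given block size
  fio --name=vm-bench --filename=/dev/vdb --direct=1 --rw=randwrite \
      --bs=4k --iodepth=32 --runtime=60 --time_based --group_reporting
  # repeat with --bs=64k, --bs=1m, ... to compare block sizes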

[ceph-users] can rbd block_name_prefix be changed?

2016-01-08 Thread min fang
Hi, can rbd block_name_prefix be changed? Is it constant for an RBD image? Thanks.
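
For context, the prefix shows up in 'rbd info'; the output below is only illustrative (image name and id are made up):

  $ rbd info rbd/myimage
  rbd image 'myimage':
          size 10240 MB in 2560 objects
          order 22 (4096 kB objects)
          block_name_prefix: rbd_data.101674b0dc51
          format: 2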

[ceph-users] Swift use Rados backend

2016-01-08 Thread Sam Huracan
Hi, How can I use Ceph as a backend for Swift? I followed these git repos: https://github.com/stackforge/swift-ceph-backend https://github.com/enovance/swiftceph-ansible I tried to install manually, but I am stuck configuring the entry for the ring. What device do I use in 'swift-ring-builder account.builder ad
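
The usual ring-builder invocation looks like the sketch below; with the rados backend the device name is essentially a placeholder, and the IP, port, device and weight here are assumptions:

  swift-ring-builder account.builder create 10 1 1
  swift-ring-builder account.builder add r1z1-192.168.0.10:6002/sdb1 100
  swift-ring-builder account.builder rebalance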

[ceph-users] cephfs (ceph-fuse) and file-layout: "operation not supported" in a client Ubuntu Trusty

2016-01-08 Thread Francois Lafont
Hi @all, I'm using ceph Infernalis (9.2.0) on both the client and cluster side. I have an Ubuntu Trusty client where cephfs is mounted via ceph-fuse and I would like to put a sub-directory of cephfs in a specific pool (an ssd pool). In the cluster, I have: ~# ceph auth get client.cephfs exported keyrin

[ceph-users] Intel P3700 PCI-e as journal drives?

2016-01-08 Thread Burkhard Linke
Hi, I want to start another round of SSD discussion, since we are about to buy some new servers for our ceph cluster. We plan to use hosts with 12x 4TB drives and two SSD journal drives. I'm fancying Intel P3700 PCI-e drives, but Sebastien Han's blog does not contain performance data for thes

Re: [ceph-users] ceph osd tree output

2016-01-08 Thread Wade Holler
That is not set as far as I can tell. Actually it is strange that I don't see that setting at all. [root@cpn1 ~]# ceph daemon osd.0 config show | grep update | grep crush [root@cpn1 ~]# grep update /etc/ceph/ceph.conf [root@cpn1 ~]# On Fri, Jan 8, 2016 at 1:50 AM Mart van Santen w

Re: [ceph-users] Intel P3700 PCI-e as journal drives?

2016-01-08 Thread Paweł Sadowski
Hi, Quick results for 1/5/10 jobs: # fio --filename=/dev/nvme0n1 --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test journal-test: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1 fio-2.1.3 Starting 1 proce
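
Presumably the 1/5/10-job numbers come from repeating the same command with a different --numjobs, roughly like this:

  for jobs in 1 5 10; do
      fio --filename=/dev/nvme0n1 --direct=1 --sync=1 --rw=write --bs=4k \
          --numjobs="$jobs" --iodepth=1 --runtime=60 --time_based \
          --group_reporting --name=journal-test
  done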

[ceph-users] pg is stuck stale (osd.21 still removed)

2016-01-08 Thread Daniel Schwager
Hi, we had a HW-problem with OSD.21 today. The OSD daemon was down and "smartctl" told me about some hardware errors. I decided to remove the HDD: ceph osd out 21 ceph osd crush remove osd.21 ceph auth del osd.21 ceph osd rm osd.21 But afterwards I saw t
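
For reference, the usual sequence for a dead OSD (osd.21 here) also stops the daemon and unmounts the disk first; this is a sketch, and the exact service command depends on the init system in use:

  service ceph stop osd.21           # or: systemctl stop ceph-osd@21
  ceph osd out 21
  ceph osd crush remove osd.21
  ceph auth del osd.21
  ceph osd rm osd.21
  umount /var/lib/ceph/osd/ceph-21   # optional cleanup of the mount point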

Re: [ceph-users] cephfs (ceph-fuse) and file-layout: "operation not supported" in a client Ubuntu Trusty

2016-01-08 Thread Francois Lafont
Hi, Some news... On 08/01/2016 12:42, Francois Lafont wrote: > ~# mkdir /mnt/cephfs/ssd > > ~# setfattr -n ceph.dir.layout.pool -v poolssd /mnt/cephfs/ssd/ > setfattr: /mnt/cephfs/ssd/: Operation not supported > > ~# getfattr -n ceph.dir.layout /mnt/cephfs/ > /mnt/cephfs/: ceph.dir.layout: Ope
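
One prerequisite worth checking (a sketch, assuming an Infernalis-era cluster and the pool name 'poolssd' from the earlier mail): the target pool must be registered as a CephFS data pool before a directory layout can point at it:

  ceph mds add_data_pool poolssd
  setfattr -n ceph.dir.layout.pool -v poolssd /mnt/cephfs/ssd/
  getfattr -n ceph.dir.layout.pool /mnt/cephfs/ssd/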

Re: [ceph-users] can rbd block_name_prefix be changed?

2016-01-08 Thread Jason Dillaman
It's constant for an RBD image and is tied to the image's internal unique ID. -- Jason Dillaman - Original Message - > From: "min fang" > To: "ceph-users" > Sent: Friday, January 8, 2016 4:50:08 AM > Subject: [ceph-users] can rbd block_name_prefix be changed? > Hi, can rbd block_

[ceph-users] using cache-tier with writeback mode, rados bench result degrades

2016-01-08 Thread hnuzhoulin
Hi guys, recently I have been testing a cache tier in writeback mode, but I found something strange: the performance measured with rados bench degrades. Is that expected? If so, how is it explained? Following is some info about my test: storage nodes: 4 machines, each with two INTEL SSDSC2BB120G4 (one for the system, the other one used a
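
For reference, a minimal writeback cache-tier setup plus the benchmark would look roughly like this (pool names are placeholders, not the ones from this test):

  ceph osd tier add basepool cachepool
  ceph osd tier cache-mode cachepool writeback
  ceph osd tier set-overlay basepool cachepool
  ceph osd pool set cachepool hit_set_type bloom
  rados bench -p basepool 60 write --no-cleanup
  rados bench -p basepool 60 seq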

Re: [ceph-users] ceph osd tree output

2016-01-08 Thread hnuzhoulin
Yeah, this setting cannot be seen in the asok config. You just set it in ceph.conf and restart the mon and osd services (sorry, I forget whether the restarts are necessary). I use this setting when I have changed the crushmap manually and do not want the service init script to rebuild the crushmap in the default way.
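
For the archives, the setting in question would normally go into ceph.conf roughly like this (a sketch):

  [osd]
  # keep OSDs from moving themselves in the CRUSH map on startup
  osd crush update on start = false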

Re: [ceph-users] ceph osd tree output

2016-01-08 Thread Wade Holler
It is not set in the conf file, so why do I still see this behavior? On Fri, Jan 8, 2016 at 11:08 AM hnuzhoulin wrote: > Yeah, this setting cannot be seen in the asok config. > You just set it in ceph.conf and restart the mon and osd services (sorry I > forget whether the restarts are necessary) > > what I use
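
A quick way to confirm what a running OSD actually has (assuming the admin socket is available) is something like:

  ceph daemon osd.0 config get osd_crush_update_on_start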

Re: [ceph-users] using cache-tier with writeback mode, rados bench result degrades

2016-01-08 Thread Wade Holler
In my experience, performance degrades dramatically when dirty objects are flushed. Best Regards, Wade On Fri, Jan 8, 2016 at 11:08 AM hnuzhoulin wrote: > Hi guys, > Recently I have been testing a cache tier in writeback mode, but I found > something strange: > the performance using rados bench degr
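
If flushing is the culprit, the knobs usually involved are the cache pool's target ratios; the pool name and values below are illustrative, not recommendations:

  ceph osd pool set cachepool cache_target_dirty_ratio 0.4
  ceph osd pool set cachepool cache_target_full_ratio 0.8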

Re: [ceph-users] using cache-tier with writeback mode, rados bench result degrades

2016-01-08 Thread Nick Fisk
There was/is a bug in Infernalis and older, where objects will always get promoted on the 2nd read/write regardless of what you set the min_recency_promote settings to. This can have a dramatic effect on performance. I wonder if this is what you are experiencing? This has been fixed in Jewel ht
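
The settings Nick refers to are per-pool; a sketch (the pool name is assumed, and the write-recency variant may only exist in Jewel and later):

  ceph osd pool set cachepool min_read_recency_for_promote 1
  ceph osd pool set cachepool min_write_recency_for_promote 1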

[ceph-users] Infernalis

2016-01-08 Thread HEWLETT, Paul (Paul)
Hi Cephers Just fired up first Infernalis cluster on RHEL7.1. The following: [root@citrus ~]# systemctl status ceph-osd@0.service ceph-osd@0.service - Ceph object storage daemon Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled) Active: active (running) since Fri 2016-01-0

Re: [ceph-users] using cache-tier with writeback mode, rados bench result degrades

2016-01-08 Thread Robert LeBlanc
-BEGIN PGP SIGNED MESSAGE- Hash: SHA256 Are you backporting that to hammer? We'd love it. - Robert LeBlanc PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 On Fri, Jan 8, 2016 at 9:28 AM, Nick Fisk wrote: > There was/is a bug in Infernalis and older, w

Re: [ceph-users] Unable to see LTTng tracepoints in Ceph

2016-01-08 Thread Jason Dillaman
Have you started ceph-osd with LD_PRELOAD=/usr/lib64/liblttng-ust-fork.so [matched to correct OS path]? I just tested ceph-osd on the master branch and was able to generate OSD trace events. You should also make sure that AppArmor / SElinux isn't denying access to /dev/shm/lttng-ust-*. What t
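
A rough outline of such a tracing session (the library path, OSD id and event wildcard are assumptions):

  LD_PRELOAD=/usr/lib64/liblttng-ust-fork.so ceph-osd -f --cluster ceph --id 0 &
  lttng create osd-trace
  lttng enable-event --userspace 'osd:*'
  lttng start
  # ... generate some I/O ...
  lttng stop
  lttng view | head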

Re: [ceph-users] pg is stuck stale (osd.21 still removed)

2016-01-08 Thread Daniel Schwager
One more - I tried to recreate the pg, but now this pg is "stuck inactive": root@ceph-admin:~# ceph pg force_create_pg 34.225 pg 34.225 now creating, ok root@ceph-admin:~# ceph health detail HEALTH_WARN 49 pgs stale; 1 pgs stuck inactive; 49 pgs stuck stale; 1 pg
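
Commands that usually help narrow down where such a PG stands (a sketch using the PG id from above):

  ceph pg dump_stuck stale
  ceph pg dump_stuck inactive
  ceph pg map 34.225
  ceph pg 34.225 query    # may hang if no OSD currently hosts the PG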

[ceph-users] [Ceph-Users] The best practice, "ceph.conf"

2016-01-08 Thread Shinobu Kinjo
Hello, "ceph.conf" is getting more complicated because there are now a bunch of parameters, added for bug fixes, performance optimization, or whatever else makes the Ceph cluster stronger and more stable. I'm pretty sure that I have not been able to catch up -; [ceph@ceph0
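
One way to keep ceph.conf short is to carry only the non-default settings; the admin socket can show those (command availability depends on the release, and the daemon names here are assumptions):

  ceph daemon osd.0 config diff
  ceph daemon mon.$(hostname -s) config diff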

Re: [ceph-users] very high OSD RAM usage values

2016-01-08 Thread Josef Johansson
Hi, I would say this is normal. 1GB of RAM per 1TB is what we designed the cluster for, and I would expect an EC pool to demand a lot more. Buy more RAM and start everything; 32GB of RAM is quite little. When the cluster is operating OK you'll see the extra RAM getting used as file cache, which makes

Re: [ceph-users] very high OSD RAM usage values

2016-01-08 Thread Josef Johansson
Maybe changing the number of concurrent backfills could limit the memory usage. On 9 Jan 2016 05:52, "Josef Johansson" wrote: > Hi, > > I would say this is normal. 1GB of RAM per 1TB is what we designed the > cluster for, and I would expect an EC pool to demand a lot more. Buy more > RAM and sta
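
A hedged example of throttling backfill and recovery concurrency at runtime (illustrative values):

  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'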