Re: [ceph-users] strange benchmark problem : restarting osd daemon improve performance from 100k iops to 300k iops

2015-04-26 Thread Alexandre DERUMIER
>>If I want to use the librados API for performance testing, are there any existing benchmark tools which directly access librados (not through rbd or the gateway)? You can use "rados bench" from the ceph packages: http://ceph.com/docs/master/man/8/rados/ "bench seconds mode [ -b objsize ] [ -t thre
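For reference, a minimal sketch of such a rados bench run; the pool name "testpool" and the thread count are assumptions, not from the original post:

    # write phase: 60 seconds, 4 KB objects, 16 concurrent ops; keep the objects
    rados bench -p testpool 60 write -b 4096 -t 16 --no-cleanup
    # random-read phase against the objects written above
    rados bench -p testpool 60 rand -t 16
    # remove the benchmark objects afterwards
    rados -p testpool cleanup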

Re: [ceph-users] ceph-deploy : systemd unit files not deployed to a centos7 nodes

2015-04-26 Thread Mark Kirkwood
I have just run into this after upgrading to Ubuntu 15.04 and trying to deploy ceph 0.94. Initially tried to get things going by changing relevant code for ceph-deploy and ceph-disk to use systemd for this release - however the unit files in ./systemd do not contain a ceph-create-keys step, so
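As a hedged workaround (assuming the monitor id matches the short hostname), the missing key-creation step can be run by hand once the monitor unit is up:

    # start the monitor via its systemd unit
    systemctl start ceph-mon@$(hostname -s)
    # generate the bootstrap/admin keys that the shipped unit files skip
    ceph-create-keys --cluster ceph --id $(hostname -s)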

Re: [ceph-users] Ceph Radosgw multi zone data replication failure

2015-04-26 Thread Vickey Singh
Any help related to this problem would be highly appreciated. -VS- On Sun, Apr 26, 2015 at 6:01 PM, Vickey Singh wrote: > Hello Geeks > > > I am trying to setup Ceph Radosgw multi site data replication using > official documentation > http://ceph.com/docs/master/radosgw/federated-config/#m

Re: [ceph-users] Shadow Files

2015-04-26 Thread Ben
Are these fixes going to make it into the repository versions of ceph, or will we be required to compile and install manually? On 2015-04-26 02:29, Yehuda Sadeh-Weinraub wrote: Yeah, that's definitely something that we'd address soon. Yehuda - Original Message - From: "Ben" To: "Ben
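Until the fix lands in packaged releases, a rough way to gauge how many orphaned shadow objects have accumulated; the pool name .rgw.buckets is an assumption, adjust to your layout:

    # count objects with "shadow" in their name in the rgw data pool
    rados -p .rgw.buckets ls | grep -c shadow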

Re: [ceph-users] strange benchmark problem : restarting osd daemon improve performance from 100k iops to 300k iops

2015-04-26 Thread Alexandre DERUMIER
>>I'll retest tcmalloc, because I was pretty sure to have patched it correctly. Ok, I really think I had patched tcmalloc wrongly. I have repatched it, reinstalled it, and now I'm getting 195k iops with a single osd (10 fio rbd jobs, 4k randread). So better than jemalloc. - Original Message --
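A sketch of a fio job file matching the test described (10 rbd jobs, 4k random read); the pool, image and cephx user names are assumptions:

    # rbd-4k-randread.fio  (assumed pool "rbd", image "testimg", cephx user "admin")
    [rbd-4k-randread]
    ioengine=rbd
    clientname=admin
    pool=rbd
    rbdname=testimg
    rw=randread
    bs=4k
    iodepth=32
    numjobs=10
    group_reporting

    # run it
    fio rbd-4k-randread.fio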

[ceph-users] rgw-admin usage show does not seem to work right with start and end dates

2015-04-26 Thread baijia...@126.com
When I execute a put file operation at 17:10 local time, which converts to 09:10 UTC, and then run "radosgw-admin usage show --uid=test1 --show-log-entries=true --start-date="2015-04-27 09:00:00"", it does not seem to show anything. When I check the code, I find funct
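The usage log is recorded in UTC, so the query window has to be given in UTC as well; a minimal sketch (the end date here is added only for illustration):

    # check the current UTC time to sanity-check the offset
    date -u
    # query a UTC window that actually covers the operation
    radosgw-admin usage show --uid=test1 --show-log-entries=true \
        --start-date="2015-04-27 09:00:00" --end-date="2015-04-27 10:00:00"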

Re: [ceph-users] strange benchmark problem : restarting osd daemon improve performance from 100k iops to 300k iops

2015-04-26 Thread Alexandre DERUMIER
Hi, also another big difference: I can now reach 180k iops with a single jemalloc osd (data in buffer) vs 50k iops max with tcmalloc. I'll retest tcmalloc, because I was pretty sure to have patched it correctly. - Original Message - From: "aderumier" To: "Mark Nelson" Cc: "ceph-users" , "c
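A hedged sketch of how an OSD can be run under jemalloc without rebuilding, by preloading the library; the library path is an assumption (Debian/Ubuntu shown):

    # restart one OSD with jemalloc preloaded ahead of the linked tcmalloc
    LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1 \
        ceph-osd -i 0 --cluster ceph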

Re: [ceph-users] Radosgw and mds hardware configuration

2015-04-26 Thread Yan, Zheng
On Sat, Apr 25, 2015 at 11:21 PM, François Lafont wrote: > Hi, > > Gregory Farnum wrote: > >> The MDS will run in 1GB, but the more RAM it has the more of the metadata >> you can cache in memory. The faster single-threaded performance your CPU >> has, the more metadata IOPS you'll get. We haven't
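If RAM allows, the MDS metadata cache can be enlarged; a sketch using the hammer-era option name, with an example value only:

    # default is 100000 inodes; raise it at runtime on the active MDS
    ceph tell mds.0 injectargs '--mds-cache-size 500000'
    # or persistently in ceph.conf under [mds]:
    #   mds cache size = 500000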

[ceph-users] defragment xfs-backed OSD

2015-04-26 Thread Josef Johansson
Hi, I’m seeing high fragmentation on my OSDs, is it safe to perform xfs_fsr defragmentation? Any guidelines in using it? I would assume doing it in off hours and using a tmp-file for saving the last position for the defrag. Thanks! /Josef
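For reference, a sketch along the lines described in the post; the device path is an assumption:

    # check how fragmented the OSD filesystem actually is
    xfs_db -r -c frag /dev/sdb1
    # run off-hours: -t limits the run to 2 hours, -f records where it stopped
    # so the next invocation can resume from that position
    xfs_fsr -v -t 7200 -f /var/tmp/xfs_fsr.leftoff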

Re: [ceph-users] Ceph recovery network?

2015-04-26 Thread Robert LeBlanc
My understanding is that Monitors monitor the public address of the OSDs and other OSDs monitor the cluster address of the OSDs. Replication, recovery and backfill traffic all use the same network when you specify 'cluster network = ' in your ceph.conf. It is useful to remember that replication, re
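A minimal ceph.conf sketch of the two networks; the subnets are placeholders:

    [global]
        public network  = 192.168.1.0/24   ; client and monitor traffic
        cluster network = 192.168.2.0/24   ; OSD replication, recovery and backfill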

[ceph-users] Ceph recovery network?

2015-04-26 Thread Sebastien Han
Hi list, While reading this http://ceph.com/docs/master/rados/configuration/network-config-ref/#ceph-networks, I came across the following sentence: "You can also establish a separate cluster network to handle OSD heartbeat, object replication and recovery traffic". I didn’t know it was possib

Re: [ceph-users] Having trouble getting good performance

2015-04-26 Thread Michal Kozanecki
Quick correction/clarification about ZFS and large blocks - ZFS can and will write in 1MB or larger blocks but only with the latest versions with large block support enabled (which I am not sure if ZoL has), by default block aggregation is limited to 128KB. The rest of my post (about multiple vd
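If the ZFS version in use supports it (the post notes this was uncertain for ZoL at the time), the relevant knobs would look roughly like this; pool and dataset names are assumptions:

    # large_blocks is a pool feature that must be enabled before recordsize > 128K
    zpool set feature@large_blocks=enabled tank
    # then raise the recordsize on the dataset backing the OSD
    zfs set recordsize=1M tank/osd0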

Re: [ceph-users] Possible improvements for a slow write speed (excluding independent SSD journals)

2015-04-26 Thread Andrei Mikhailovsky
Anthony, I doubt the manufacturer reported 315MB/s for 4K block size. Most likely they've used 1M or 4M as the block size to achieve the 300MB/s+ speeds Andrei - Original Message - > From: "Alexandre DERUMIER" > To: "Anthony Levesque" > Cc: "ceph-users" > Sent: Saturday, 25 April,
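To illustrate the point, a quick fio comparison of the two block sizes (destructive if pointed at a raw device, so use a scratch disk; the device path is a placeholder):

    # sequential writes at 4K vs 4M: throughput scales with block size, which is
    # why vendor MB/s figures are normally quoted at large block sizes
    fio --name=seq4k --filename=/dev/sdX --rw=write --bs=4k --direct=1 --runtime=30 --time_based
    fio --name=seq4m --filename=/dev/sdX --rw=write --bs=4m --direct=1 --runtime=30 --time_based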

Re: [ceph-users] very different performance on two volumes in the same pool

2015-04-26 Thread Somnath Roy
Hi Nik, Thanks for the perf data. It seems innocuous; I am not seeing a single tcmalloc trace. Are you running with tcmalloc, by the way? What about my other question: does the performance of the slow volume improve if you stop IO on the other volume? Are you using the default ceph.conf? Probably, you w
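A quick way to check which allocator a running OSD is actually using, and whether its symbols show up in the live profile (a sketch, assuming a single ceph-osd process on the host):

    # which allocator is the binary linked against / loading?
    ldd $(which ceph-osd) | grep -E 'tcmalloc|jemalloc'
    # do tcmalloc symbols dominate the live profile?
    perf top -p $(pidof -s ceph-osd)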

[ceph-users] Ceph Radosgw multi zone data replication failure

2015-04-26 Thread Vickey Singh
Hello Geeks I am trying to setup Ceph Radosgw multi site data replication using official documentation http://ceph.com/docs/master/radosgw/federated-config/#multi-site-data-replication Everything seems to work except radosgw-agent sync. Request you to please check the below outputs and help me
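For reference, the shape of the sync configuration and agent invocation from the federated-config document; zone names, endpoints and keys below are placeholders:

    # /etc/ceph/region-data-sync.conf  (all values are placeholders)
    src_zone: us-east
    source: http://rgw-us-east:80
    src_access_key: {source-access-key}
    src_secret_key: {source-secret-key}
    dest_zone: us-west
    destination: http://rgw-us-west:80
    dest_access_key: {destination-access-key}
    dest_secret_key: {destination-secret-key}
    log_file: /var/log/radosgw/radosgw-sync-us-east-west.log

    # then run the agent against it
    radosgw-agent -c /etc/ceph/region-data-sync.conf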

Re: [ceph-users] 3.18.11 - RBD triggered deadlock?

2015-04-26 Thread Nikola Ciprich
> tcp        0       0  10.0.0.1:6809   10.0.0.1:59692  ESTABLISHED 20182/ceph-osd
> tcp        0 4163543  10.0.0.1:59692  10.0.0.1:6809   ESTABLISHED -
>
> You got bitten by a recently fixed regression. It's never been a good
> idea to co-locate kernel client with osds, and w

Re: [ceph-users] very different performance on two volumes in the same pool

2015-04-26 Thread Nikola Ciprich
Hello Somnath, On Fri, Apr 24, 2015 at 04:23:19PM +, Somnath Roy wrote: > This could again be because of the tcmalloc issue I reported earlier. > > Two things to observe. > > 1. Is the performance improving if you stop IO on the other volume ? If so, it > could be a different issue. there is no other
