>>If I want to use the librados API for performance testing, are there any
>>existing benchmark tools that access librados directly (not through
>>rbd or the gateway)?
You can use "rados bench" from the ceph packages:
http://ceph.com/docs/master/man/8/rados/
"
bench seconds mode [ -b objsize ] [ -t threads ]
"
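For reference, a typical invocation might look like the following sketch (the pool name "testpool" and the sizes are placeholders, not values from this thread):

  # Write for 60 seconds with 4 MB objects and 16 concurrent ops, keeping the
  # objects around so they can be read back afterwards.
  rados bench -p testpool 60 write -b 4194304 -t 16 --no-cleanup

  # Sequential and random read passes over the objects written above.
  rados bench -p testpool 60 seq -t 16
  rados bench -p testpool 60 rand -t 16

  # Remove the benchmark objects when finished.
  rados -p testpool cleanup

Since rados bench drives librados directly, it exercises the OSD and network path without any rbd or gateway overhead.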
I have just run into this after upgrading to Ubuntu 15.04 and trying to
deploy ceph 0.94.
Initially I tried to get things going by changing the relevant code in
ceph-deploy and ceph-disk to use systemd for this release; however, the
unit files in ./systemd do not contain a ceph-create-keys step, so
Any help related to this problem would be highly appreciated.
-VS-
On Sun, Apr 26, 2015 at 6:01 PM, Vickey Singh
wrote:
> Hello Geeks
>
>
> I am trying to set up Ceph Radosgw multi-site data replication using the
> official documentation
> http://ceph.com/docs/master/radosgw/federated-config/#multi-site-data-replication
Are these fixes going to make it into the repository versions of ceph,
or will we be required to compile and install manually?
On 2015-04-26 02:29, Yehuda Sadeh-Weinraub wrote:
Yeah, that's definitely something that we'd address soon.
Yehuda
----- Original Message -----
From: "Ben"
To: "Ben
>>I'll retest tcmalloc, because I was pretty sure I had patched it correctly.
OK, I really think I had patched tcmalloc wrongly.
I have repatched it, reinstalled it, and now I'm getting 195k iops with a
single osd (10 fio rbd jobs, 4k randread).
So better than jemalloc.
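For context, a fio job file for the rbd engine along the lines of what is being described might look like this sketch (pool, image and client names are assumptions, not the ones used above):

  [global]
  ioengine=rbd
  clientname=admin
  pool=rbd
  rbdname=testimage
  rw=randread
  bs=4k
  runtime=60
  time_based=1
  iodepth=32
  group_reporting=1

  # Ten identical jobs, matching the "10 fio rbd jobs" mentioned above.
  [rbd-4k-randread]
  numjobs=10

With the rbd ioengine fio talks to the image through librbd, so the numbers reflect the OSD-side allocator (tcmalloc vs jemalloc) rather than any kernel client path.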
----- Mail original -----
When I execute a put file operation at 17:10 local time,
which converts to 09:10 UTC,
and then run "radosgw-admin usage show --uid=test1 --show-log-entries=true
--start-date="2015-04-27 09:00:00" ", it does not seem to show anything.
When I check the code, I find funct
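As the usage log records its timestamps in UTC, the query window has to bracket the UTC time rather than the local one; a sketch using the uid and date from the message above (the end date is only an illustration):

  # A put at 17:10 local time (UTC+8) lands in the usage log at 09:10 UTC,
  # so bracket that time with both a start and an end date.
  radosgw-admin usage show --uid=test1 --show-log-entries=true \
      --start-date="2015-04-27 09:00:00" --end-date="2015-04-27 10:00:00"

Note that the usage log is only populated when "rgw enable usage log = true" is set on the gateway, and entries are flushed to it periodically rather than immediately.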
Hi,
Also, another big difference:
I can now reach 180k iops with a single jemalloc osd (data in buffer) vs 50k
iops max with tcmalloc.
I'll retest tcmalloc, because I was pretty sure I had patched it correctly.
----- Mail original -----
From: "aderumier"
To: "Mark Nelson"
Cc: "ceph-users" , "c
On Sat, Apr 25, 2015 at 11:21 PM, François Lafont wrote:
> Hi,
>
> Gregory Farnum wrote:
>
>> The MDS will run in 1GB, but the more RAM it has the more of the metadata
>> you can cache in memory. The faster single-threaded performance your CPU
>> has, the more metadata IOPS you'll get. We haven't
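For reference, the main knob behind that trade-off in this (hammer-era) release is the inode-count based MDS cache size; a minimal ceph.conf sketch with an illustrative value only:

  [mds]
  # Number of inodes the MDS keeps cached (default 100000 in this era).
  # Raising it trades MDS RAM for fewer metadata reads from the OSDs.
  mds cache size = 1000000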
Hi,
I’m seeing high fragmentation on my OSDs; is it safe to perform xfs_fsr
defragmentation? Any guidelines for using it?
I would assume doing it during off hours and using a temp file for saving the
last position for the defrag.
Thanks!
/Josef
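A sketch of the kind of off-hours run being described (time budget, state-file path and mount point are placeholders, not recommendations):

  # Let the system-wide defrag pass run for at most two hours, keeping the
  # resume point in a non-default state file so the next run picks up there.
  xfs_fsr -v -t 7200 -f /var/tmp/fsrlast-ceph

  # Or target a single OSD filesystem explicitly.
  xfs_fsr -v /var/lib/ceph/osd/ceph-0

xfs_fsr only relocates extents, so it is generally safe on a mounted filesystem, but it does add IO load, which is why an off-hours window makes sense.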
My understanding is that Monitors monitor the public address of the
OSDs and other OSDs monitor the cluster address of the OSDs.
Replication, recovery and backfill traffic all use the same network
when you specify 'cluster network = ' in your ceph.conf.
It is useful to remember that replication, re
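A minimal ceph.conf sketch of the split being described (the subnets are placeholders):

  [global]
      # Clients and monitors reach the OSDs over the public network.
      public network = 192.168.1.0/24
      # OSD-to-OSD replication, recovery and backfill use the cluster network.
      cluster network = 192.168.2.0/24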
Hi list,
While reading this
http://ceph.com/docs/master/rados/configuration/network-config-ref/#ceph-networks,
I came across the following sentence:
"You can also establish a separate cluster network to handle OSD heartbeat,
object replication and recovery traffic”
I didn’t know it was possib
Quick correction/clarification about ZFS and large blocks: ZFS can and will
write in 1MB or larger blocks, but only with the latest versions with large
block support enabled (which I am not sure ZoL has); by default, block
aggregation is limited to 128KB. The rest of my post (about multiple vd
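For illustration, on a ZFS version that has the feature, enabling large blocks and raising the record size looks roughly like this (pool and dataset names are placeholders):

  # Enable the large_blocks feature flag on the pool (one-way once active).
  zpool set feature@large_blocks=enabled tank

  # Raise the record size for a dataset from the 128K default to 1M.
  zfs set recordsize=1M tank/osd-data

Without the feature flag, recordsize stays capped at 128K, which matches the 128KB aggregation limit mentioned above.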
Anthony,
I doubt the manufacturer reported 315MB/s for a 4K block size. Most likely
they used 1M or 4M as the block size to achieve the 300MB/s+ speeds.
Andrei
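A quick way to see that effect is to run the same sequential read at both block sizes with fio (the device path is a placeholder; pick one you can safely read from):

  # Sequential read at 4K and then 4M; reported MB/s usually differs sharply.
  fio --name=seq-4k --filename=/dev/sdX --ioengine=libaio --direct=1 \
      --rw=read --bs=4k --iodepth=32 --runtime=60 --time_based --readonly

  fio --name=seq-4m --filename=/dev/sdX --ioengine=libaio --direct=1 \
      --rw=read --bs=4M --iodepth=32 --runtime=60 --time_based --readonly

At 4K the drive is limited by IOPS rather than bandwidth, which is why the large-block datasheet figure is not reachable with small blocks.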
----- Original Message -----
> From: "Alexandre DERUMIER"
> To: "Anthony Levesque"
> Cc: "ceph-users"
> Sent: Saturday, 25 April,
Hi Nik,
Thanks for the perf data. It seems innocuous; I am not seeing a single tcmalloc
trace. Are you running with tcmalloc, by the way?
What about my other question: does the performance of the slow volume increase if
you stop IO on the other volume?
Are you using the default ceph.conf? Probably, you w
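For reference, the kind of check being referred to can be done by profiling the running OSD and looking for allocator symbols; a sketch:

  # Sample the hottest functions in the running ceph-osd process(es);
  # when the tcmalloc issue is in play, functions from the tcmalloc::
  # namespace tend to dominate the top of the list.
  perf top -p $(pgrep -d, ceph-osd)

  # Or record for 30 seconds and inspect offline.
  perf record -g -p $(pgrep -d, ceph-osd) -- sleep 30
  perf report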
Hello Geeks
I am trying to set up Ceph Radosgw multi-site data replication using the
official documentation:
http://ceph.com/docs/master/radosgw/federated-config/#multi-site-data-replication
Everything seems to work except the radosgw-agent sync. Please
check the outputs below and help me.
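For what it's worth, the data-sync step in that guide comes down to pointing radosgw-agent at a small YAML config; a sketch with placeholder zone names, endpoints and keys (not values from this setup):

  # inter-zone-sync.conf (placeholders throughout)
  src_zone: us-east
  source: http://rgw-us-east.example.com:80
  src_access_key: SRC_ACCESS_KEY
  src_secret_key: SRC_SECRET_KEY
  dest_zone: us-west
  destination: http://rgw-us-west.example.com:80
  dest_access_key: DEST_ACCESS_KEY
  dest_secret_key: DEST_SECRET_KEY
  log_file: /var/log/radosgw/radosgw-sync.log

  # Run a full sync once to validate the setup, then leave the agent running
  # for incremental sync.
  radosgw-agent -c inter-zone-sync.conf --sync-scope=full

A common cause of agent sync failures in that guide is access keys that do not belong to the zones' system users.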
> tcp   0        0  10.0.0.1:6809    10.0.0.1:59692   ESTABLISHED  20182/ceph-osd
> tcp   0  4163543  10.0.0.1:59692   10.0.0.1:6809    ESTABLISHED  -
>
> You got bitten by a recently fixed regression. It's never been a good
> idea to co-locate kernel client with osds, and w
Hello Somnath,
On Fri, Apr 24, 2015 at 04:23:19PM +, Somnath Roy wrote:
> This could be again because of tcmalloc issue I reported earlier.
>
> Two things to observe.
>
> 1. Is the performance improving if you stop IO on the other volume? If so, it
> could be a different issue.
there is no other