Re: [ceph-users] get user list via rados-rest: {code: 403, message: Forbidden}

2015-10-12 Thread Klaus Franken
Hi,

is someone able to reproduce the issue?
How do I get a list of all users via REST, i.e.
https://rgw01.XXX.de/admin/user?format=json
without a "uid="?

Thank you,
Klaus

noris network AG - Thomas-Mann-Straße 16-20 - D-90471 Nürnberg -
Tel +49-911-9352-0 - Fax +49-911-9352-100

http://www.noris.de - The IT-Outsourcing Company

Vorstand: Ingo Kraupa (Vorsitzender), Joachim Astel -
Vorsitzender des Aufsichtsrats: Stefan Schnabel - AG Nürnberg HRB 17689

On 08.10.2015 at 12:29, Klaus Franken <klaus.fran...@noris.de> wrote:

Hi,

I'm trying to get a list of all users from the radosgw REST gateway, analogous to
"radosgw-admin metadata list user".

I can retrieve a user info for a specified user from 
https://rgw01.XXX.de/admin/user?uid=klaus&format=json.
http://docs.ceph.com/docs/master/radosgw/adminops/#get-user-info says "If no
user is specified returns the list of all users along with suspension
information".
But when using the same URL without "uid=klaus" I always get a {code: 403,
message: Forbidden}. I tried to give the user all the capabilities I found, but
without success.

How can I get more debug messages (/var/log/ceph/radosgw.log wasn’t helpful 
even with a higher debug level)?
Or is that maybe a bug?
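For reference, here is a minimal signing sketch in Python (a rough illustration, not an official client) that builds the same kind of signed admin request as the traces below. The endpoint for a full user listing is an assumption taken from the admin-ops docs: listing all users may have to go through GET /admin/metadata/user (with metadata read caps) rather than /admin/user without a uid.

import base64
import hashlib
import hmac
from email.utils import formatdate

import requests

ACCESS_KEY = "..."   # placeholder: the admin user's access key
SECRET_KEY = "..."   # placeholder: the admin user's secret key
HOST = "rgw01.XXX.de"

def admin_get(resource, params=""):
    # S3-style signature v2: the string-to-sign uses the canonical resource
    # without the regular query string (only subresources would be appended).
    date = formatdate(usegmt=True)
    string_to_sign = "GET\n\n\n{}\n{}".format(date, resource)
    sig = base64.b64encode(
        hmac.new(SECRET_KEY.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    ).decode()
    headers = {"Date": date, "Authorization": "AWS {}:{}".format(ACCESS_KEY, sig)}
    url = "https://{}{}".format(HOST, resource)
    if params:
        url += "?" + params
    return requests.get(url, headers=headers)

# Works with an explicit uid (matches the 200 response below):
print(admin_get("/admin/user", "uid=klaus&format=json").status_code)
# Candidate for listing every user id (assumption, needs metadata caps):
print(admin_get("/admin/metadata/user", "format=json").status_code)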


Successful with uid=:
- request:
   body: null
   headers:
 Accept: ['*/*']
 Accept-Encoding: ['gzip, deflate']
 Authorization: ['AWS ']
 Connection: [keep-alive]
 User-Agent: [python-requests/2.7.0 CPython/3.4.2 Darwin/14.5.0]
 date: ['Thu, 08 Oct 2015 09:12:14 GMT']
   method: GET
   uri: 
https://rgw01.XXX.de/admin/user?uid=klaus&format=json
 response:
   body: {string: '{"user_id":"klaus","display_name":"Klaus 
Franken","email":"","suspended":0,"max_buckets":1000,"subusers":[],"keys":[{"user":"klaus","access_key“:"","secret_key":"SpxxE\/"}],"swift_keys":[],"caps":[{"type":"buckets","perm":"*"},{"type":"metadata","perm":"*"},{"type":"usage","perm":"*"},{"type":"users","perm":"*"}]}'}
   headers:
 Connection: [close]
 Content-Type: [application/json]
 Date: ['Thu, 08 Oct 2015 09:12:14 GMT']
 Server: [Apache]
   status: {code: 200, message: OK}

403 without uid=:
- request:
   body: null
   headers:
 Accept: ['*/*']
 Accept-Encoding: ['gzip, deflate']
 Authorization: ['AWS =']
 Connection: [keep-alive]
 User-Agent: [python-requests/2.7.0 CPython/3.4.2 Darwin/14.5.0]
 date: ['Thu, 08 Oct 2015 09:13:15 GMT']
   method: GET
   uri: 
https://rgw01.XXX.de/admin/user?format=json
 response:
   body: {string: '{"Code":"AccessDenied"}'}
   headers:
 Accept-Ranges: [bytes]
 Connection: [close]
 Content-Length: ['23']
 Content-Type: [application/json]
 Date: ['Thu, 08 Oct 2015 09:13:16 GMT']
 Server: [Apache]
   status: {code: 403, message: Forbidden}
version: 1


Thank you,
Klaus


noris network AG - Thomas-Mann-Straße 16-20 - D-90471 Nürnberg -
Tel +49-911-9352-0 - Fax +49-911-9352-100

http://www.noris.de - The IT-Outsourcing Company

Vorstand: Ingo Kraupa (Vorsitzender), Joachim Astel -
Vorsitzender des Aufsichtsrats: Stefan Schnabel - AG Nürnberg HRB 17689

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs replace hdfs problem

2015-10-12 Thread ZHOU Yuan
Hi,

From the docs it looks like Hadoop 2.x is not supported by the default
cephfs-hadoop driver yet. You may need to get a newer hadoop-cephfs.jar
if you want to use YARN:

http://docs.ceph.com/docs/master/cephfs/hadoop/

https://github.com/GregBowyer/cephfs-hadoop

Sincerely, Yuan


On Mon, Oct 12, 2015 at 1:58 PM, Fulin Sun  wrote:
> Thanks so much for kindly advice. This is my fault.
>
> I resolved the problem and the root cause is that I misconfigured
> HADOOP_CLASSPATH, so sorry for confusing and troubling.
>
> But then I am trying to use hadoop yarn to do terasort benckmark test based
> on cephfs. New exception message occurs as :
>
> Does this mean that I cannot use this ceph-hadoop plugin over the hadoop
> version? Hadoop version is : 2.7.1 release, Ceph version is : 0.94.3
>
> Thanks again for moving this thread.
>
> Best,
> Sun.
>
> 15/10/12 11:08:35 INFO client.RMProxy: Connecting to ResourceManager at
> /172.16.33.18:8032
> 15/10/12 11:08:35 INFO mapreduce.Cluster: Failed to use
> org.apache.hadoop.mapred.YarnClientProtocolProvider due to error:
> java.lang.NoSuchMethodException:
> org.apache.hadoop.fs.ceph.CephFS.(java.net.URI,
> org.apache.hadoop.conf.Configuration)
> java.io.IOException: Cannot initialize Cluster. Please check your
> configuration for mapreduce.framework.name and the correspond server
> addresses.
> at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
> at org.apache.hadoop.mapreduce.Cluster.(Cluster.java:82)
> at org.apache.hadoop.mapreduce.Cluster.(Cluster.java:75)
> at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1260)
> at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1256)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at org.apache.hadoop.mapreduce.Job.connect(Job.java:1255)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1284)
> at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
> at org.apache.hadoop.examples.terasort.TeraGen.run(TeraGen.java:301)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.examples.terasort.TeraGen.main(TeraGen.java:305)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
> at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>
> 
> 
>
>
> From: Paul Evans
> Date: 2015-10-12 11:10
> To: Fulin Sun
> Subject: Re: [ceph-users] cephfs replace hdfs problem
> I don’t think there are many of us that have attempted what you are trying
> to do… that’s the most likely reason the list is quiet.
> You may need to be patient, and possibly provide updates (if you have any)
> to keep the issue in front of people.
> Best of luck...
> --
> Paul
>
> On Oct 11, 2015, at 7:03 PM, Fulin Sun  wrote:
>
> sign...
> I had to say that I have not received any reponse from this mailing list...
>
> 
> 
>
> From: Fulin Sun
> Date: 2015-10-10 17:27
> To: ceph-users
> Subject: [ceph-users] cephfs replace hdfs problem
> Hi there,
>
> I configured hadoop-cephfs plugin and try to use cephfs as a replacement. I
> had sucessfully configured
>
> hadoop-env.sh with setting the HADOOP_CLASSPATH for hadoop-cephfs.jar
>
> But when I run hadoop fs -ls /, I got the following exception. Looks like it
> cannot find the actual jar for both
> hadoop-cephfs.jar  and  libcephfs-java.jar I placed these two in the
> /usr/local/hadoop/lib directory and edited
> the hadoop classpath in hadoop-env.sh
>
> How could this issue be ?
>
> Thanks anyone for kind response.
>
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class
> org.apache.hadoop.fs.ceph.CephFileSystem not found
> at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2195)
> at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2638)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
> at org.apache.hadoop.fs.File

Re: [ceph-users] cephfs replace hdfs problem

2015-10-12 Thread Fulin Sun
Hi, Yuan

Thanks a lot for your guidance. I cloned the master branch, built a new jar,
and added it to HADOOP_CLASSPATH and yarn.application.classpath. Now I can
successfully run the teragen and terasort benchmark tests over cephfs.

Thanks very much for the advice.

Best,
Sun.






From: ZHOU Yuan
Date: 2015-10-12 15:30
To: Fulin Sun
CC: Paul Evans; ceph-users
Subject: Re: [ceph-users] cephfs replace hdfs problem
Hi,
 
From the doc it looks like for the default cephfs-hadoop driver,
Hadoop 2.x is not supported yet. You may need to get a newer
hadoop-cephfs.jar if you need to use YARN?
 
http://docs.ceph.com/docs/master/cephfs/hadoop/
 
https://github.com/GregBowyer/cephfs-hadoop
 
Sincerely, Yuan
 
 
On Mon, Oct 12, 2015 at 1:58 PM, Fulin Sun  wrote:
> Thanks so much for kindly advice. This is my fault.
>
> I resolved the problem and the root cause is that I misconfigured
> HADOOP_CLASSPATH, so sorry for confusing and troubling.
>
> But then I am trying to use hadoop yarn to do terasort benckmark test based
> on cephfs. New exception message occurs as :
>
> Does this mean that I cannot use this ceph-hadoop plugin over the hadoop
> version? Hadoop version is : 2.7.1 release, Ceph version is : 0.94.3
>
> Thanks again for moving this thread.
>
> Best,
> Sun.
>
> 15/10/12 11:08:35 INFO client.RMProxy: Connecting to ResourceManager at
> /172.16.33.18:8032
> 15/10/12 11:08:35 INFO mapreduce.Cluster: Failed to use
> org.apache.hadoop.mapred.YarnClientProtocolProvider due to error:
> java.lang.NoSuchMethodException:
> org.apache.hadoop.fs.ceph.CephFS.(java.net.URI,
> org.apache.hadoop.conf.Configuration)
> java.io.IOException: Cannot initialize Cluster. Please check your
> configuration for mapreduce.framework.name and the correspond server
> addresses.
> at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
> at org.apache.hadoop.mapreduce.Cluster.(Cluster.java:82)
> at org.apache.hadoop.mapreduce.Cluster.(Cluster.java:75)
> at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1260)
> at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1256)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at org.apache.hadoop.mapreduce.Job.connect(Job.java:1255)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1284)
> at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
> at org.apache.hadoop.examples.terasort.TeraGen.run(TeraGen.java:301)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.examples.terasort.TeraGen.main(TeraGen.java:305)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
> at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>
> 
> 
>
>
> From: Paul Evans
> Date: 2015-10-12 11:10
> To: Fulin Sun
> Subject: Re: [ceph-users] cephfs replace hdfs problem
> I don’t think there are many of us that have attempted what you are trying
> to do… that’s the most likely reason the list is quiet.
> You may need to be patient, and possibly provide updates (if you have any)
> to keep the issue in front of people.
> Best of luck...
> --
> Paul
>
> On Oct 11, 2015, at 7:03 PM, Fulin Sun  wrote:
>
> sign...
> I had to say that I have not received any reponse from this mailing list...
>
> 
> 
>
> From: Fulin Sun
> Date: 2015-10-10 17:27
> To: ceph-users
> Subject: [ceph-users] cephfs replace hdfs problem
> Hi there,
>
> I configured hadoop-cephfs plugin and try to use cephfs as a replacement. I
> had sucessfully configured
>
> hadoop-env.sh with setting the HADOOP_CLASSPATH for hadoop-cephfs.jar
>
> But when I run hadoop fs -ls /, I got the following exception. Looks like it
> cannot find the actual jar for both
> hadoop-cephfs.jar  and  libcephfs-java.jar I placed these two in the
> /usr/local/hadoop/lib directory and edited
> the hadoop classpath in hadoop-env.sh
>
> How could this issue be ?
>
> Thanks anyone for kind response.
>
> java.lang.RuntimeExcept

Re: [ceph-users] "stray" objects in empty cephfs data pool

2015-10-12 Thread Burkhard Linke

Hi,

On 10/08/2015 09:14 PM, John Spray wrote:

On Thu, Oct 8, 2015 at 7:23 PM, Gregory Farnum  wrote:

On Thu, Oct 8, 2015 at 6:29 AM, Burkhard Linke
 wrote:

Hammer 0.94.3 does not support a 'dump cache' mds command.
'dump_ops_in_flight' does not list any pending operations. Is there any
other way to access the cache?

"dumpcache", it looks like. You can get all the supported commands
with "help" and look for things of interest or alternative phrasings.
:)

To head off any confusion for someone trying to just replace dump
cache with dumpcache: "dump cache" is the new (post hammer,
apparently) admin socket command, dumpcache is the old tell command.
So it's "ceph mds tell  dumpcache ".
Thanks, that did the trick. I was able to locate the host blocking the 
file handles and remove the objects from the EC pool.


Well, all except one:

# ceph df
  ...
ec_ssd_cache 18  4216k 0 2500G  129
cephfs_ec_data   19  4096k 0 31574G1

# rados -p ec_ssd_cache ls
1ef540f.0386
# rados -p cephfs_ec_data ls
1ef540f.0386
# ceph mds tell cb-dell-pe620r dumpcache cache.file
# grep 1ef540f /cache.file
#

It does not show up in the dumped cache file, but keeps being promoted 
to the cache tier after MDS restarts. I've restarted most of the cephfs 
clients by unmounting cephfs and restarting ceph-fuse, but the object 
remains active.
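In case it helps anyone chasing a similar stray, the same check can be scripted with the python-rados bindings; this is only a rough sketch (the conffile path and keyring setup are assumptions for your environment):

import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    for pool in ("ec_ssd_cache", "cephfs_ec_data"):
        ioctx = cluster.open_ioctx(pool)
        try:
            # list every object left in the pool and show a few names
            names = [obj.key for obj in ioctx.list_objects()]
            print(pool, len(names), names[:5])
        finally:
            ioctx.close()
finally:
    cluster.shutdown()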


Regards,
Burkhard
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Placement rule not resolved

2015-10-12 Thread ghislain.chevalier
Hi all,

After installing the cluster, all the disks (sas and ssd) were mixed under a 
host, so the calculated reweight was related to the entire capacity.

It doesn't explain why sas disks were selected when using a specific ssd-driven 
rule.

Brgds

De : CHEVALIER Ghislain IMT/OLPS
Envoyé : jeudi 8 octobre 2015 10:47
À : ceph-users; ceph-de...@vger.kernel.org
Objet : RE: [ceph-users] Placement rule not resolved

HI all,

I hadn't noticed that the osd reweight for the ssd disks was curiously set to a low value.
I don't know how and when these values were set so low.
Our environment is Mirantis-driven and the installation was powered by fuel and 
puppet.
(the installation was run by the openstack team and I checked the ceph cluster 
configuration afterwards)

After reweighting them to 1, the ceph cluster is working properly.
Thanks to the object lookup module of inkscope, I checked that the osd allocation
was ok.
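For anyone wanting to spot this kind of low reweight quickly, here is a rough sketch that parses the "ceph osd tree" JSON output; the field names ("nodes", "type", "reweight") are assumed from the usual layout, so verify against your Firefly output before relying on it:

import json
import subprocess

tree = json.loads(subprocess.check_output(["ceph", "osd", "tree", "-f", "json"]).decode())
for node in tree.get("nodes", []):
    # flag OSDs whose reweight has been pushed below 1.0
    if node.get("type") == "osd" and node.get("reweight", 1.0) < 1.0:
        print("{}: reweight={}".format(node.get("name"), node.get("reweight")))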

What is not normal is that CRUSH tried to allocate OSDs that are not targeted
by the rule, in this case sas disks instead of ssd disks.
Shouldn't the cluster's normal behavior, i.e. the PG allocation, have been frozen instead?
I can say that because when I analyzed the stuck PGs (inkscope module) I noticed
that the OSD allocation for these PGs was either not correct (acting list) or
incomplete.

Best regards

De : ceph-users [mailto:ceph-users-boun...@lists.ceph.com] De la part de 
ghislain.cheval...@orange.com
Envoyé : mardi 6 octobre 2015 14:18
À : ceph-users
Objet : [ceph-users] Placement rule not resolved

Hi,

Context:
Firefly 0.80.9
8 storage nodes
176 osds : 14*8 sas and 8*8 ssd
3 monitors

I created an alternate crushmap in order to fulfill a tiering requirement, i.e.
to select ssd or sas.
I created specific buckets "host-ssd" and "host-sas" and regrouped them in
"tier-ssd" and "tier-sas" under a "tier-root".
E.g. I want to select 1 ssd in 3 distinct hosts.

I don't understand why the placement rule for sas is working but the one for ssd is not.
Sas disks are selected even though, according to the crushmap, they are not in the right
tree.
When 3 ssds are occasionally selected, the pgs stay stuck but active.
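One way to check what a rule really selects, outside the live cluster, is to feed the compiled crushmap to crushtool --test; a rough sketch (the rule id and replica count are placeholders for the ssd-driven rule):

import subprocess

CRUSHMAP = "crushmap.bin"   # e.g. from: ceph osd getcrushmap -o crushmap.bin
RULE_ID = 1                 # assumption: id of the ssd-driven rule
REPLICAS = 3

mappings = subprocess.check_output([
    "crushtool", "--test", "-i", CRUSHMAP,
    "--rule", str(RULE_ID),
    "--num-rep", str(REPLICAS),
    "--show-mappings",
]).decode()

# Each line ends with the OSD ids chosen for one input value; compare them
# against "ceph osd tree" to confirm that only ssd OSDs show up.
print(mappings)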

I attached the crushmap and ceph osd tree.

Can someone have a look and tell me where the fault is?

Bgrds
- - - - - - - - - - - - - - - - -
Ghislain Chevalier
ORANGE/IMT/OLPS/ASE/DAPI/CSE
Architecte de services d'infrastructure de stockage
Sofware-Defined Storage Architect
+33299124432
+33788624370
ghislain.cheval...@orange.com
P Pensez à l'Environnement avant d'imprimer ce message !


_



Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc

pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler

a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,

Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.



This message and its attachments may contain confidential or privileged 
information that may be protected by law;

they should not be distributed, used or copied without authorisation.

If you have received this email in error, please notify the sender and delete 
this message and its attachments.

As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.

Thank you.

_

Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy mon create failing with exception

2015-10-12 Thread Martin Palma
Hi,

From what I'm seeing, your ceph.conf isn't quite right if we take into
account your cluster description "...with one monitor node and one osd...".
The parameters "mon_initial_members" and "mon_host" should only contain
monitor nodes, not all the nodes in your cluster.

Moreover, you should also provide the netmask of the network in the
"public_network" parameter.

What were the exact commands you performed before step 5? Do DNS and
hostname lookups work correctly?
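A quick way to sanity-check those two points is to parse ceph.conf directly; the sketch below is only illustrative and assumes the underscore spelling of the keys used in your ceph.conf:

import configparser
import ipaddress

conf = configparser.ConfigParser()
conf.read("/etc/ceph/ceph.conf")
g = conf["global"]

mons = [m.strip() for m in g.get("mon_initial_members", "").split(",") if m.strip()]
hosts = [h.strip() for h in g.get("mon_host", "").split(",") if h.strip()]
print("mon_initial_members:", mons)
print("mon_host:", hosts)   # both lists should contain monitor nodes only

net = g.get("public_network", "")
if "/" in net:
    print("public_network:", ipaddress.ip_network(net, strict=False))
else:
    print("public_network is missing or lacks a netmask (CIDR):", repr(net))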

Best,
Martin

On Sat, Oct 10, 2015 at 11:40 PM, prasad pande 
wrote:

>
>
> 
>
> I am installing a ceph-cluster with one monitor node and one osd. I am
> following the document:
> 
>
> http://docs.ceph.com/docs/v0.86/start/quick-ceph-deploy/
>
> During the step 5: Add the initial monitor(s) and gather the keys (new in
> ceph-deploy v1.1.3),
>
> I am getting the following exception:
>
> *[ceph-mon1][ERROR ] admin_socket: exception getting command descriptions:
> [Errno 2] No such file or directory* [ceph-mon1][WARNIN] monitor:
> mon.ceph-mon1, might not be running yet [ceph-mon1][INFO ] Running command:
> sudo ceph --cluster=ceph --admin-daemon
> /var/run/ceph/ceph-mon.ceph-mon1.asok mon_status *[ceph-mon1][ERROR ]
> admin_socket: exception getting command descriptions: [Errno 2] No such
> file or directory [ceph-mon1][WARNIN] monitor ceph-mon1 does not exist in
> monmap*
>
>
> Just for reference my *ceph.conf* is as follows:
>
> [global]
>
> *fsid = 351948ba-9716-4a04-802d-28b5510bfeb0*
>
> *mon_initial_members = ceph-mon1,ceph-admin,ceph-osd1*
>
> *mon_host = xxx.yyy.zzz.78,xxx.yyy.zzz.147,xxx.yyy.zzz.135*
>
> *auth_cluster_required = cephx*
>
> *auth_service_required = cephx*
>
> *auth_client_required = cephx*
>
> *filestore_xattr_use_omap = true*
>
>
> *osd_pool_default_size = 2*
>
> *public_addr = xxx.yyy.zzz.0 *
>
>
> I tried to understand all the questions related to same on ceph user
> mailing list but there is no precise solution I found for this problem.
>
> Can anyone help me on this?
>
>
> Thanks & Regards
>
> Prasad Pande
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Initial performance cluster SimpleMessenger vs AsyncMessenger results

2015-10-12 Thread Mark Nelson

Hi Guy,

Given all of the recent data on how different memory allocator 
configurations improve SimpleMessenger performance (and the effect of 
memory allocators and transparent hugepages on RSS memory usage), I 
thought I'd run some tests looking how AsyncMessenger does in 
comparison.  We spoke about these a bit at the last performance meeting 
but here's the full write up.  The rough conclusion as of right now 
appears to be:


1) AsyncMessenger performance is not dependent on the memory allocator 
like with SimpleMessenger.


2) AsyncMessenger is faster than SimpleMessenger with TCMalloc + 32MB 
(ie default) thread cache.


3) AsyncMessenger is consistently faster than SimpleMessenger for 128K 
random reads.


4) AsyncMessenger is sometimes slower than SimpleMessenger when memory 
allocator optimizations are used.


5) AsyncMessenger currently uses far more RSS memory than SimpleMessenger.

Here's a link to the paper:

https://drive.google.com/file/d/0B2gTBZrkrnpZS1Q4VktjZkhrNHc/view

Mark
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] How expensive are 'rbd ls' and 'rbd snap ls' calls?

2015-10-12 Thread Allen Liao
How expensive are the calls to list the rbd images (rbd ls) and their
snapshots (rbd snap ls)?  Is the metadata for which images and snapshots
are stored in Ceph kept in memory on the MON (in which case the calls would
be cheap)?
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How expensive are 'rbd ls' and 'rbd snap ls' calls?

2015-10-12 Thread Jason Dillaman
Neither operation should involve the monitor beyond retrieving the CRUSH map.
The image directory is stored as an object within the pool (rbd_directory) and
the image snapshot list is embedded in each image's header object.  These
values are stored as omap key/value pairs associated with the object, so they
will be read via LevelDB or RocksDB (depending on your configuration) on the
OSD hosting the object's PG.
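A rough sketch with the python rbd/rados bindings showing which objects the two commands end up reading; the pool name and conffile path are placeholders:

import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("rbd")
try:
    names = rbd.RBD().list(ioctx)          # ~ "rbd ls": omap read of rbd_directory
    print(names)
    if names:
        img = rbd.Image(ioctx, names[0], read_only=True)
        try:
            print(list(img.list_snaps()))  # ~ "rbd snap ls": omap read of the image header
        finally:
            img.close()
finally:
    ioctx.close()
    cluster.shutdown()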

-- 

Jason Dillaman 


- Original Message - 

> From: "Allen Liao" 
> To: ceph-users@lists.ceph.com
> Sent: Monday, October 12, 2015 2:52:03 PM
> Subject: [ceph-users] How expensive are 'rbd ls' and 'rbd snap ls' calls?

> How expensive are the calls to list the rbd images (rbd ls) and their
> snapshots (rbd snap ls). Is the metatdata for what images and snapshots are
> stored in ceph kept in memory on the MON (in which case the calls would be
> cheap)?

> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Red Hat Storage Day – Cupertino

2015-10-12 Thread Kobi Laredo
Bay Area Cephers,

If you are interested in hearing about Ceph @ DreamHost, come join us
at Red Hat Storage Day – Cupertino:
https://engage.redhat.com/storagedays-ceph-gluster-e-201508192024

Lots of great speakers and a great opportunity to network. Best of all,
it's free to attend!

*Kobi Laredo*
*Cloud Systems Engineer* | (*408) 409-KOBI*
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Initial performance cluster SimpleMessenger vs AsyncMessenger results

2015-10-12 Thread Haomai Wang
COOL

Interesting that the async messenger consumes more memory than simple; in
my mind I always thought async should use less memory. I will take a look at
this.

On Tue, Oct 13, 2015 at 12:50 AM, Mark Nelson  wrote:

> Hi Guy,
>
> Given all of the recent data on how different memory allocator
> configurations improve SimpleMessenger performance (and the effect of
> memory allocators and transparent hugepages on RSS memory usage), I thought
> I'd run some tests looking how AsyncMessenger does in comparison.  We spoke
> about these a bit at the last performance meeting but here's the full write
> up.  The rough conclusion as of right now appears to be:
>
> 1) AsyncMessenger performance is not dependent on the memory allocator
> like with SimpleMessenger.
>
> 2) AsyncMessenger is faster than SimpleMessenger with TCMalloc + 32MB (ie
> default) thread cache.
>
> 3) AsyncMessenger is consistently faster than SimpleMessenger for 128K
> random reads.
>
> 4) AsyncMessenger is sometimes slower than SimpleMessenger when memory
> allocator optimizations are used.
>
> 5) AsyncMessenger currently uses far more RSS memory than SimpleMessenger.
>
> Here's a link to the paper:
>
> https://drive.google.com/file/d/0B2gTBZrkrnpZS1Q4VktjZkhrNHc/view
>
> Mark
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 

Best Regards,

Wheat
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Initial performance cluster SimpleMessenger vs AsyncMessenger results

2015-10-12 Thread Haomai Wang
resend

On Tue, Oct 13, 2015 at 10:56 AM, Haomai Wang  wrote:
> COOL
>
> Interesting that async messenger will consume more memory than simple, in my
> mind I always think async should use less memory. I will give a look at this
>
> On Tue, Oct 13, 2015 at 12:50 AM, Mark Nelson  wrote:
>>
>> Hi Guy,
>>
>> Given all of the recent data on how different memory allocator
>> configurations improve SimpleMessenger performance (and the effect of memory
>> allocators and transparent hugepages on RSS memory usage), I thought I'd run
>> some tests looking how AsyncMessenger does in comparison.  We spoke about
>> these a bit at the last performance meeting but here's the full write up.
>> The rough conclusion as of right now appears to be:
>>
>> 1) AsyncMessenger performance is not dependent on the memory allocator
>> like with SimpleMessenger.
>>
>> 2) AsyncMessenger is faster than SimpleMessenger with TCMalloc + 32MB (ie
>> default) thread cache.
>>
>> 3) AsyncMessenger is consistently faster than SimpleMessenger for 128K
>> random reads.
>>
>> 4) AsyncMessenger is sometimes slower than SimpleMessenger when memory
>> allocator optimizations are used.
>>
>> 5) AsyncMessenger currently uses far more RSS memory than SimpleMessenger.
>>
>> Here's a link to the paper:
>>
>> https://drive.google.com/file/d/0B2gTBZrkrnpZS1Q4VktjZkhrNHc/view
>>
>> Mark
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
>
> --
>
> Best Regards,
>
> Wheat



-- 
Best Regards,

Wheat
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Initial performance cluster SimpleMessenger vs AsyncMessenger results

2015-10-12 Thread Gregory Farnum
On Mon, Oct 12, 2015 at 9:50 AM, Mark Nelson  wrote:
> Hi Guy,
>
> Given all of the recent data on how different memory allocator
> configurations improve SimpleMessenger performance (and the effect of memory
> allocators and transparent hugepages on RSS memory usage), I thought I'd run
> some tests looking how AsyncMessenger does in comparison.  We spoke about
> these a bit at the last performance meeting but here's the full write up.
> The rough conclusion as of right now appears to be:
>
> 1) AsyncMessenger performance is not dependent on the memory allocator like
> with SimpleMessenger.
>
> 2) AsyncMessenger is faster than SimpleMessenger with TCMalloc + 32MB (ie
> default) thread cache.
>
> 3) AsyncMessenger is consistently faster than SimpleMessenger for 128K
> random reads.
>
> 4) AsyncMessenger is sometimes slower than SimpleMessenger when memory
> allocator optimizations are used.
>
> 5) AsyncMessenger currently uses far more RSS memory than SimpleMessenger.
>
> Here's a link to the paper:
>
> https://drive.google.com/file/d/0B2gTBZrkrnpZS1Q4VktjZkhrNHc/view

Can you clarify these tests a bit more? I can't make the number of
nodes, OSDs, and SSDs work out properly. Were the FIO jobs 256
concurrent ops per job, or in aggregate? Is there any more info that
might suggest why the 128KB rand-read (but not read nor write, and not
4k rand-read) was so asymmetrical?
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Initial performance cluster SimpleMessenger vs AsyncMessenger results

2015-10-12 Thread Somnath Roy
Mark,
Thanks for this data. This probably means the simple messenger (not the OSD core) is
not doing an optimal job of handling memory.

Haomai,
I am not that familiar with the async messenger code base; do you have an
explanation of the behavior Mark reported (like the good performance with default tcmalloc)?
Is it using a lot fewer threads overall than simple?
Also, it seems the async messenger has some inefficiencies in the IO path and
that's why it is not performing as well as simple when the memory allocation
is handled optimally.
Could you please send out any documentation on the async messenger? I tried to
google it, but not even a blueprint is popping up.


Thanks & Regards
Somnath
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Haomai 
Wang
Sent: Monday, October 12, 2015 7:57 PM
To: Mark Nelson
Cc: ceph-devel; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Initial performance cluster SimpleMessenger vs 
AsyncMessenger results

COOL

Interesting that async messenger will consume more memory than simple, in my 
mind I always think async should use less memory. I will give a look at this

On Tue, Oct 13, 2015 at 12:50 AM, Mark Nelson 
mailto:mnel...@redhat.com>> wrote:
Hi Guy,

Given all of the recent data on how different memory allocator configurations 
improve SimpleMessenger performance (and the effect of memory allocators and 
transparent hugepages on RSS memory usage), I thought I'd run some tests 
looking how AsyncMessenger does in comparison.  We spoke about these a bit at 
the last performance meeting but here's the full write up.  The rough 
conclusion as of right now appears to be:

1) AsyncMessenger performance is not dependent on the memory allocator like 
with SimpleMessenger.

2) AsyncMessenger is faster than SimpleMessenger with TCMalloc + 32MB (ie 
default) thread cache.

3) AsyncMessenger is consistently faster than SimpleMessenger for 128K random 
reads.

4) AsyncMessenger is sometimes slower than SimpleMessenger when memory 
allocator optimizations are used.

5) AsyncMessenger currently uses far more RSS memory than SimpleMessenger.

Here's a link to the paper:

https://drive.google.com/file/d/0B2gTBZrkrnpZS1Q4VktjZkhrNHc/view

Mark
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



--

Best Regards,

Wheat



PLEASE NOTE: The information contained in this electronic mail message is 
intended only for the use of the designated recipient(s) named above. If the 
reader of this message is not the intended recipient, you are hereby notified 
that you have received this message in error and that any review, 
dissemination, distribution, or copying of this message is strictly prohibited. 
If you have received this communication in error, please notify the sender by 
telephone or e-mail (as shown above) immediately and destroy any and all copies 
of this message in your possession (whether hard copies or electronically 
stored copies).

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Annoying libust warning on ceph reload

2015-10-12 Thread Goncalo Borges

Hi Ken

Here it is:

http://tracker.ceph.com/issues/13470

Cheers
G.

On 10/09/2015 02:58 AM, Ken Dreyer wrote:

On Wed, Sep 30, 2015 at 7:46 PM, Goncalo Borges
 wrote:

- Each time logrotate is executed, we received a daily notice with the
message

ibust[8241/8241]: Warning: HOME environment variable not set. Disabling
LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)

Thanks for this detailed report!

Would you mind filing a new bug in tracker.ceph.com for this? It would
be nice to fix this in Ceph or LTTNG without having to set the HOME
env var.

- Ken


--
Goncalo Borges
Research Computing
ARC Centre of Excellence for Particle Physics at the Terascale
School of Physics A28 | University of Sydney, NSW  2006
T: +61 2 93511937

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Initial performance cluster SimpleMessenger vs AsyncMessenger results

2015-10-12 Thread Haomai Wang
On Tue, Oct 13, 2015 at 12:18 PM, Somnath Roy  wrote:
> Mark,
>
> Thanks for this data. This means probably simple messenger (not OSD core) is
> not doing optimal job of handling memory.
>
>
>
> Haomai,
>
> I am not that familiar with Async messenger code base, do you have an
> explanation of the behavior (like good performance with default tcmalloc)
> Mark reported ? Is it using lot less thread overall than Simple ?

Originally, the async messenger mainly aimed to solve the high thread
count problem that limited ceph cluster size: the high context-switch
and cpu usage caused by the simple messenger in a large cluster.

Recently we have had the memory problem discussed on the ML, and I have
also spent time thinking about the root cause. Currently I consider the
simple messenger's memory usage to be at odds with the design of
tcmalloc. Tcmalloc aims to provide memory from thread-local caches, and it
also balances memory among all threads; if we have too many
threads, tcmalloc may get busy with memory lock contention.

The async messenger uses a thread pool to serve connections; it makes
asynchronous all the calls that are blocking in the simple messenger.

>
> Also, it seems Async messenger has some inefficiencies in the io path and
> that’s why it is not performing as well as simple if the memory allocation
> stuff is optimally handled.

Yep, the simple messenger uses two threads (one for read, one for write) to
serve each connection; the async messenger has at most one thread serving
a connection, and multiple connections will share the same thread.
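Purely as a toy illustration of that difference (plain Python sockets, not Ceph code): an event-driven worker multiplexes many connections on one thread, instead of dedicating a reader and a writer thread to each connection:

import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server_sock):
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    data = conn.recv(4096)
    if data:
        conn.sendall(data)       # handled on the shared worker thread
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 5000))
server.listen(128)
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:                      # one loop (thread) serving every connection
    for key, _ in sel.select():
        key.data(key.fileobj)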

Next, I have several plans to improve performance:
1. add poll mode support, which I hope can help with high-performance
storage needs
2. add load balancing ability among worker threads
3. move more work out of the messenger thread.

>
> Could you please send out any documentation around Async messenger ? I tried
> to google it , but, not even blueprint is popping up.

>
>
>
>
>
> Thanks & Regards
>
> Somnath
>
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Haomai Wang
> Sent: Monday, October 12, 2015 7:57 PM
> To: Mark Nelson
> Cc: ceph-devel; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Initial performance cluster SimpleMessenger vs
> AsyncMessenger results
>
>
>
> COOL
>
>
>
> Interesting that async messenger will consume more memory than simple, in my
> mind I always think async should use less memory. I will give a look at this
>
>
>
> On Tue, Oct 13, 2015 at 12:50 AM, Mark Nelson  wrote:
>
> Hi Guy,
>
> Given all of the recent data on how different memory allocator
> configurations improve SimpleMessenger performance (and the effect of memory
> allocators and transparent hugepages on RSS memory usage), I thought I'd run
> some tests looking how AsyncMessenger does in comparison.  We spoke about
> these a bit at the last performance meeting but here's the full write up.
> The rough conclusion as of right now appears to be:
>
> 1) AsyncMessenger performance is not dependent on the memory allocator like
> with SimpleMessenger.
>
> 2) AsyncMessenger is faster than SimpleMessenger with TCMalloc + 32MB (ie
> default) thread cache.
>
> 3) AsyncMessenger is consistently faster than SimpleMessenger for 128K
> random reads.
>
> 4) AsyncMessenger is sometimes slower than SimpleMessenger when memory
> allocator optimizations are used.
>
> 5) AsyncMessenger currently uses far more RSS memory than SimpleMessenger.
>
> Here's a link to the paper:
>
> https://drive.google.com/file/d/0B2gTBZrkrnpZS1Q4VktjZkhrNHc/view
>
> Mark
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
>
>
> --
>
> Best Regards,
>
> Wheat
>
>
> 
>
> PLEASE NOTE: The information contained in this electronic mail message is
> intended only for the use of the designated recipient(s) named above. If the
> reader of this message is not the intended recipient, you are hereby
> notified that you have received this message in error and that any review,
> dissemination, distribution, or copying of this message is strictly
> prohibited. If you have received this communication in error, please notify
> the sender by telephone or e-mail (as shown above) immediately and destroy
> any and all copies of this message in your possession (whether hard copies
> or electronically stored copies).
>



-- 
Best Regards,

Wheat
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Initial performance cluster SimpleMessenger vs AsyncMessenger results

2015-10-12 Thread Somnath Roy
Thanks, Haomai.
Since the async messenger always uses a constant number of threads, could there
be a potential performance problem when scaling up the number of client connections
while keeping a constant number of OSDs?
Maybe it's a good tradeoff.

Regards
Somnath


-Original Message-
From: Haomai Wang [mailto:haomaiw...@gmail.com]
Sent: Monday, October 12, 2015 11:35 PM
To: Somnath Roy
Cc: Mark Nelson; ceph-devel; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Initial performance cluster SimpleMessenger vs 
AsyncMessenger results

On Tue, Oct 13, 2015 at 12:18 PM, Somnath Roy  wrote:
> Mark,
>
> Thanks for this data. This means probably simple messenger (not OSD
> core) is not doing optimal job of handling memory.
>
>
>
> Haomai,
>
> I am not that familiar with Async messenger code base, do you have an
> explanation of the behavior (like good performance with default
> tcmalloc) Mark reported ? Is it using lot less thread overall than Simple ?

Originally async messenger mainly want to solve with high thread number problem 
which limited the ceph cluster size. High context switch and cpu usage caused 
by simple messenger under large cluster.

Recently we have memory problem discussed on ML and I also spend times to think 
about the root cause. Currently I would like to consider the simple messenger's 
memory usage is deviating from the design of tcmalloc. Tcmalloc is aimed to 
provide memory with local cache, and it also has memory control among all 
threads, if we have too much threads, it may let tcmalloc busy with memory lock 
contention.

Async messenger uses thread pool to serve connections, it make all blocking 
calls in simple messenger async.

>
> Also, it seems Async messenger has some inefficiencies in the io path
> and that’s why it is not performing as well as simple if the memory
> allocation stuff is optimally handled.

Yep, simple messenger use two threads(one for read, one for write) to serve one 
connection, async messenger at most have one thread to serve one connection and 
multi connection  will share the same thread.

Next, I would like to have several plans to improve performance:
1. add poll mode support, I hope it can help enhance high performance storage 
need 2. add load balance ability among worker threads 3. move more works out of 
messenger thread.

>
> Could you please send out any documentation around Async messenger ? I
> tried to google it , but, not even blueprint is popping up.

>
>
>
>
>
> Thanks & Regards
>
> Somnath
>
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
> Of Haomai Wang
> Sent: Monday, October 12, 2015 7:57 PM
> To: Mark Nelson
> Cc: ceph-devel; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Initial performance cluster SimpleMessenger
> vs AsyncMessenger results
>
>
>
> COOL
>
>
>
> Interesting that async messenger will consume more memory than simple,
> in my mind I always think async should use less memory. I will give a
> look at this
>
>
>
> On Tue, Oct 13, 2015 at 12:50 AM, Mark Nelson  wrote:
>
> Hi Guy,
>
> Given all of the recent data on how different memory allocator
> configurations improve SimpleMessenger performance (and the effect of
> memory allocators and transparent hugepages on RSS memory usage), I
> thought I'd run some tests looking how AsyncMessenger does in
> comparison.  We spoke about these a bit at the last performance meeting but 
> here's the full write up.
> The rough conclusion as of right now appears to be:
>
> 1) AsyncMessenger performance is not dependent on the memory allocator
> like with SimpleMessenger.
>
> 2) AsyncMessenger is faster than SimpleMessenger with TCMalloc + 32MB
> (ie
> default) thread cache.
>
> 3) AsyncMessenger is consistently faster than SimpleMessenger for 128K
> random reads.
>
> 4) AsyncMessenger is sometimes slower than SimpleMessenger when memory
> allocator optimizations are used.
>
> 5) AsyncMessenger currently uses far more RSS memory than SimpleMessenger.
>
> Here's a link to the paper:
>
> https://drive.google.com/file/d/0B2gTBZrkrnpZS1Q4VktjZkhrNHc/view
>
> Mark
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
>
>
> --
>
> Best Regards,
>
> Wheat
>
>
> 
>
> PLEASE NOTE: The information contained in this electronic mail message
> is intended only for the use of the designated recipient(s) named
> above. If the reader of this message is not the intended recipient,
> you are hereby notified that you have received this message in error
> and that any review, dissemination, distribution, or copying of this
> message is strictly prohibited. If you have received this
> communication in error, please notify the sender by telephone or
> e-mail (as shown above) immediately and destroy any and all copies of
> this message in your possession (whether hard copies or electronically stored 
> copies).
>



--
Best Regards,

Wh

Re: [ceph-users] Initial performance cluster SimpleMessenger vs AsyncMessenger results

2015-10-12 Thread Haomai Wang
Yep, as I said below, I'm considering adding auto scale up/down for the worker
threads, with connection load balancing. That may keep users from having to
worry about how many threads they need. :-(

Actually, the thread count as a config value is a pain throughout the ceph osd IO stack.

On Tue, Oct 13, 2015 at 2:45 PM, Somnath Roy  wrote:
> Thanks Haomai..
> Since Async messenger is always using a constant number of threads , there 
> could be a potential performance problem of scaling up the client connections 
> keeping the constant number of OSDs ?
> May be it's a good tradeoff..
>
> Regards
> Somnath
>
>
> -Original Message-
> From: Haomai Wang [mailto:haomaiw...@gmail.com]
> Sent: Monday, October 12, 2015 11:35 PM
> To: Somnath Roy
> Cc: Mark Nelson; ceph-devel; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Initial performance cluster SimpleMessenger vs 
> AsyncMessenger results
>
> On Tue, Oct 13, 2015 at 12:18 PM, Somnath Roy  wrote:
>> Mark,
>>
>> Thanks for this data. This means probably simple messenger (not OSD
>> core) is not doing optimal job of handling memory.
>>
>>
>>
>> Haomai,
>>
>> I am not that familiar with Async messenger code base, do you have an
>> explanation of the behavior (like good performance with default
>> tcmalloc) Mark reported ? Is it using lot less thread overall than Simple ?
>
> Originally async messenger mainly want to solve with high thread number 
> problem which limited the ceph cluster size. High context switch and cpu 
> usage caused by simple messenger under large cluster.
>
> Recently we have memory problem discussed on ML and I also spend times to 
> think about the root cause. Currently I would like to consider the simple 
> messenger's memory usage is deviating from the design of tcmalloc. Tcmalloc 
> is aimed to provide memory with local cache, and it also has memory control 
> among all threads, if we have too much threads, it may let tcmalloc busy with 
> memory lock contention.
>
> Async messenger uses thread pool to serve connections, it make all blocking 
> calls in simple messenger async.
>
>>
>> Also, it seems Async messenger has some inefficiencies in the io path
>> and that’s why it is not performing as well as simple if the memory
>> allocation stuff is optimally handled.
>
> Yep, simple messenger use two threads(one for read, one for write) to serve 
> one connection, async messenger at most have one thread to serve one 
> connection and multi connection  will share the same thread.
>
> Next, I would like to have several plans to improve performance:
> 1. add poll mode support, I hope it can help enhance high performance storage 
> need 2. add load balance ability among worker threads 3. move more works out 
> of messenger thread.
>
>>
>> Could you please send out any documentation around Async messenger ? I
>> tried to google it , but, not even blueprint is popping up.
>
>>
>>
>>
>>
>>
>> Thanks & Regards
>>
>> Somnath
>>
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
>> Of Haomai Wang
>> Sent: Monday, October 12, 2015 7:57 PM
>> To: Mark Nelson
>> Cc: ceph-devel; ceph-users@lists.ceph.com
>> Subject: Re: [ceph-users] Initial performance cluster SimpleMessenger
>> vs AsyncMessenger results
>>
>>
>>
>> COOL
>>
>>
>>
>> Interesting that async messenger will consume more memory than simple,
>> in my mind I always think async should use less memory. I will give a
>> look at this
>>
>>
>>
>> On Tue, Oct 13, 2015 at 12:50 AM, Mark Nelson  wrote:
>>
>> Hi Guy,
>>
>> Given all of the recent data on how different memory allocator
>> configurations improve SimpleMessenger performance (and the effect of
>> memory allocators and transparent hugepages on RSS memory usage), I
>> thought I'd run some tests looking how AsyncMessenger does in
>> comparison.  We spoke about these a bit at the last performance meeting but 
>> here's the full write up.
>> The rough conclusion as of right now appears to be:
>>
>> 1) AsyncMessenger performance is not dependent on the memory allocator
>> like with SimpleMessenger.
>>
>> 2) AsyncMessenger is faster than SimpleMessenger with TCMalloc + 32MB
>> (ie
>> default) thread cache.
>>
>> 3) AsyncMessenger is consistently faster than SimpleMessenger for 128K
>> random reads.
>>
>> 4) AsyncMessenger is sometimes slower than SimpleMessenger when memory
>> allocator optimizations are used.
>>
>> 5) AsyncMessenger currently uses far more RSS memory than SimpleMessenger.
>>
>> Here's a link to the paper:
>>
>> https://drive.google.com/file/d/0B2gTBZrkrnpZS1Q4VktjZkhrNHc/view
>>
>> Mark
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>>
>>
>>
>> --
>>
>> Best Regards,
>>
>> Wheat
>>
>>
>> 
>>
>> PLEASE NOTE: The information contained in this electronic mail message
>> is intended only for the use of the designated recipient(s) named
>> above. If the reader of this