I accidentally replied to the wrong thread. This was meant for a different one.
________________________________

David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation <https://storagecraft.com>
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943

________________________________

If you are not the intended recipient of this message or received it 
erroneously, please notify the sender and delete it, together with any 
attachments, and be advised that any dissemination or copying of this message 
is prohibited.

________________________________

________________________________
From: Shinobu Kinjo [ski...@redhat.com]
Sent: Thursday, December 29, 2016 4:03 PM
To: David Turner
Cc: Bryan Henderson; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph program uses lots of memory

And we may be interested in your cluster's configuration.

 # ceph --show-config > $(hostname).$(date +%Y%m%d).ceph_conf.txt

On Fri, Dec 30, 2016 at 7:48 AM, David Turner <david.tur...@storagecraft.com> wrote:

Another thing I need to confirm is that the number of PGs in the pool with 90% of the data is a power of 2 (256, 512, 1024, 2048, etc.). If that is the case, then I need the following information (commands for gathering it are sketched below the list).

1) Pool replica size
2) The number of the pool with the data
3) A copy of your osdmap (ceph osd getmap -o osd_map.bin)
4) Full output of (ceph osd tree)
5) Full output of (ceph osd df)
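
A minimal sketch of how that information could be gathered on a monitor node ("rbd" and the output file names here are only placeholders; substitute your own pool name):

 # ceph osd pool get rbd pg_num           # confirm the PG count is a power of 2
 # ceph osd pool get rbd size             # 1) pool replica size
 # ceph osd lspools                       # 2) pool numbers and names
 # ceph osd getmap -o osd_map.bin         # 3) binary osdmap
 # ceph osd tree > osd_tree.txt           # 4) CRUSH tree output
 # ceph osd df > osd_df.txt               # 5) per-OSD utilization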

With that I can generate a new crushmap that is balanced for your cluster to equalize the % used across all of the OSDs.
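
If it helps, applying a rebalanced crushmap generally looks something like the following (a rough sketch; the file names are just examples, and the actual weight changes would come from the generated map):

 # ceph osd getcrushmap -o crush.bin        # export the current crushmap
 # crushtool -d crush.bin -o crush.txt      # decompile it to edit item weights
 # crushtool -c crush.txt -o crush_new.bin  # recompile after editing
 # crushtool -i crush_new.bin --test --show-utilization --num-rep 2   # preview placement
 # ceph osd setcrushmap -i crush_new.bin    # inject the new map into the cluster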

Our clusters have more than 1k OSDs, and the difference in utilization between the most used and the least used OSD is within 2% in those clusters. We have 99.9% of our data in one pool.


________________________________________
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Bryan Henderson [bry...@giraffe-data.com]
Sent: Thursday, December 29, 2016 3:31 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] ceph program uses lots of memory


Does anyone know why the 'ceph' program uses so much memory?  If I run it with
an address space rlimit of less than 300M, it usually dies with messages about
not being able to allocate memory.

I'm curious as to what it could be doing that requires so much address space.

It doesn't matter what specific command I run, and it does this even when there is no ceph cluster running, so it must be something pretty basic.
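
For reference, one way the limit shows up (assuming a bash shell; the exact numbers are only an example) is:

 $ ulimit -v 262144      # cap the address space at 256 MB (ulimit -v takes KB)
 $ ceph status           # under this rlimit the command dies with allocation errors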

--
Bryan Henderson                                   San Jose, California
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
