[ceph-users] Too few PGs per OSD (autoscaler)

2019-08-01 Thread Jan Kasprzak
Hello, Ceph users, TL;DR: the PG autoscaler should not cause the "too few PGs per OSD" warning. Detailed: Some time ago, I upgraded the HW in my virtualization+Ceph cluster, replacing 30+ old servers with <10 modern servers. I immediately got the "too many PGs per OSD" warning, so I had to add more
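For context, a minimal sketch of the autoscaler commands involved (standard ceph CLI; the pool name "rbd" and the pg_num_min value are placeholders, not from the original post):

    # Show the autoscaler's view of each pool: current and suggested pg_num
    ceph osd pool autoscale-status
    # Let the autoscaler manage a pool, and pin a floor so PGs per OSD do not drop too low
    ceph osd pool set rbd pg_autoscale_mode on
    ceph osd pool set rbd pg_num_min 128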

Re: [ceph-users] How to deal with slow requests related to OSD bugs

2019-08-01 Thread Thomas Bennett
Hi Xavier, We have had OSDs backed by Samsung SSD 960 PRO 512GB NVMe drives which started generating slow requests. After running: ceph osd tree up | grep nvme | awk '{print $4}' | xargs -P 10 -I _OSD sh -c 'BPS=$(ceph tell _OSD bench | jq -r .bytes_per_sec); MBPS=$(echo "scale=2; $BPS/100" | bc
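The quoted one-liner is cut off above; a hedged reconstruction of the idea (the bytes-to-MB divisor and the final echo are assumptions filled in for illustration, not the original text):

    # Bench every 'nvme'-class OSD in parallel and print MB/s per OSD
    ceph osd tree up | grep nvme | awk '{print $4}' | \
      xargs -P 10 -I _OSD sh -c '
        BPS=$(ceph tell _OSD bench | jq -r .bytes_per_sec)
        MBPS=$(echo "scale=2; $BPS/1000000" | bc)
        echo "_OSD: $MBPS MB/s"'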

[ceph-users] Error ENOENT: problem getting command descriptions from mon.5

2019-08-01 Thread Christoph Adomeit
Hi there, I have updated my Ceph cluster from Luminous to 14.2.1 and whenever I run "ceph tell mon.* version" I get the correct versions from all monitors except mon.5. For mon.5 I get the error: Error ENOENT: problem getting command descriptions from mon.5 mon.5: problem getting command desc
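A hedged set of follow-up checks for a case like this (standard ceph CLI; the systemd unit name depends on the actual mon id and is an assumption):

    # Compare daemon versions without going through 'tell'
    ceph versions
    ceph mon versions
    # Ask mon.5 directly over its admin socket, on the host that runs it
    ceph daemon mon.5 version
    # If it still fails, restart the monitor and re-check quorum
    systemctl restart ceph-mon@5
    ceph quorum_status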

Re: [ceph-users] RGW 4 MiB objects

2019-08-01 Thread Thomas Bennett
Hi Aleksey, Thanks for the detailed breakdown! We're currently using replication pools but will be testing ec pools soon enough and this is a useful set of parameters to look at. Also, I had not considered the bluestore parameters, thanks for pointing that out. Kind regards On Wed, Jul 31, 2019

[ceph-users] High memory usage OSD with BlueStore

2019-08-01 Thread 杨耿丹
Hi all: we have a CephFS environment, Ceph version 12.2.10. The servers are ARM, but the FUSE clients are x86. The OSD disks are 8 TB, and some OSDs use 12 GB of memory. Is that normal?

Re: [ceph-users] High memory usage OSD with BlueStore

2019-08-01 Thread Janne Johansson
On Thu, 1 Aug 2019 at 11:31, dannyyang (杨耿丹) wrote: > Hi all: > > we have a CephFS environment, Ceph version 12.2.10. The servers are ARM, but the FUSE clients > are x86. > The OSD disks are 8 TB, and some OSDs use 12 GB of memory. Is that normal? > > For BlueStore, there are certain tuneables you can use to limit memory a bit.
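A minimal ceph.conf sketch of the kind of tuneables meant here, assuming Luminous 12.2.10 BlueStore (the values are illustrative only, not recommendations):

    [osd]
    # Cap the BlueStore cache per OSD (bytes); HDD and SSD have separate knobs
    bluestore_cache_size_hdd = 1073741824
    bluestore_cache_size_ssd = 1073741824
    # On newer releases a single osd_memory_target replaces manual cache sizing
    # osd_memory_target = 4294967296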

Re: [ceph-users] High memory usage OSD with BlueStore

2019-08-01 Thread Mark Nelson
Hi Danny, Are your ARM binaries built using tcmalloc? At least on x86 we saw significantly higher memory fragmentation and memory usage with glibc malloc. First, you can look at the mempool stats which may provide a hint: ceph daemon osd.NNN dump_mempools Assuming you are using tcmallo
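The command in question, plus the tcmalloc heap introspection that usually follows, as a short sketch (osd.NNN stands for a concrete OSD id):

    # Per-OSD mempool accounting, run on the host that has the OSD's admin socket
    ceph daemon osd.NNN dump_mempools
    # If the binary is linked against tcmalloc, heap stats show allocator fragmentation
    ceph tell osd.NNN heap stats
    # Ask tcmalloc to hand free memory back to the OS
    ceph tell osd.NNN heap release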

Re: [ceph-users] Urgent Help Needed (regarding rbd cache)

2019-08-01 Thread Oliver Freyermuth
Hi all, On 01.08.19 at 08:45, Janne Johansson wrote: On Thu, 1 Aug 2019 at 07:31, Muhammad Junaid <junaid.fsd...@gmail.com> wrote: Your email has cleared many things up for me. Let me repeat my understanding. Every critical data write (like Oracle / any other DB) will be done with

Re: [ceph-users] Adventures with large RGW buckets [EXT]

2019-08-01 Thread Matthew Vernon
Hi, On 31/07/2019 19:02, Paul Emmerich wrote: Some interesting points here, thanks for raising them :) From our experience: buckets with tens of millions of objects work just fine with no big problems usually. Buckets with hundreds of millions of objects require some attention. Buckets with billions

Re: [ceph-users] Ceph nfs ganesha exports

2019-08-01 Thread Jeff Layton
On Sun, 2019-07-28 at 18:20 +, Lee Norvall wrote: > Update to this: I found that you cannot create a 2nd file system as yet, and > it is still experimental. So I went down this route: > > Added a pool to the existing CephFS and then setfattr -n ceph.dir.layout.pool > -v SSD-NFS /mnt/cephfs/s
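A hedged sketch of those steps end to end (the pool name SSD-NFS and the /mnt/cephfs mount come from the message; the fs name "cephfs", the PG count, and the full directory path are assumptions):

    # Create the pool and attach it to the existing filesystem as an extra data pool
    ceph osd pool create SSD-NFS 64 64 replicated
    ceph fs add_data_pool cephfs SSD-NFS
    # Pin a directory's file layout to the new pool (applies to newly created files)
    setfattr -n ceph.dir.layout.pool -v SSD-NFS /mnt/cephfs/ssd-nfs
    getfattr -n ceph.dir.layout /mnt/cephfs/ssd-nfs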

Re: [ceph-users] Ceph nfs ganesha exports

2019-08-01 Thread Lee Norvall
Hi Jeff, Thanks for the pointer on this. I found some details on this the other day and your link is a big help. I will get this updated in my Ansible playbook and test. Rgds Lee On 01/08/2019 17:03, Jeff Layton wrote: On Sun, 2019-07-28 at 18:20 +, Lee Norvall wrote: Update to this I

[ceph-users] Balancer in HEALTH_ERR

2019-08-01 Thread EDH - Manuel Rios Fernandez
Hi, Two weeks ago we started a data migration from one old Ceph node to a new one. For this task we added a 120 TB host to the cluster and evacuated the old one with ceph osd crush reweight osd.X 0.0, which moves roughly 15 TB per day. After one week and a few days we found that the balancer module don'

Re: [ceph-users] Balancer in HEALTH_ERR

2019-08-01 Thread Smith, Eric
From your pastebin data – it appears you need to change the CRUSH weight of the OSDs on CEPH006? They all have a CRUSH weight of 0, whereas other OSDs seem to have a CRUSH weight of 10.91309. You might look into the ceph osd crush reweight-subtree command. Eric From: ceph-users on behalf of EDH -
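For reference, a short sketch of the commands under discussion (osd.X, CEPH006 and the 10.91309 weight come from the thread; the right weight to restore should be checked against ceph osd df tree first):

    # Inspect per-host CRUSH weights and utilisation
    ceph osd df tree
    # Drain a single OSD (what was run on the node being evacuated)
    ceph osd crush reweight osd.X 0.0
    # Or set the weight of every OSD under a host bucket in one command
    ceph osd crush reweight-subtree CEPH006 10.91309
    # Check what the balancer thinks it is doing
    ceph balancer status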

Re: [ceph-users] Adventures with large RGW buckets

2019-08-01 Thread Eric Ivancich
Hi Paul, I’ll interleave responses below. > On Jul 31, 2019, at 2:02 PM, Paul Emmerich wrote: > > we are seeing a trend towards rather large RGW S3 buckets lately. > we've worked on > several clusters with 100 - 500 million objects in a single bucket, and we've > been asked about the possibilit

Re: [ceph-users] Adventures with large RGW buckets

2019-08-01 Thread Eric Ivancich
Hi Paul, I’ve turned the following idea of yours into a tracker: https://tracker.ceph.com/issues/41051 > 4. Common prefixes could be filtered in the rgw class on the OSD instead > of in radosgw > > Consider a bucket with 100 folders with 1000 object
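To make the idea concrete, this is what a delimiter listing looks like from the S3 side; today radosgw has to walk past the objects under each prefix just to collapse them into CommonPrefixes (aws CLI shown as an illustration; the endpoint and bucket name are placeholders):

    # With 100 folders of 1000 objects each, the index scan is much larger
    # than the ~100 CommonPrefixes actually returned to the client
    aws --endpoint-url http://rgw.example.com:7480 s3api list-objects-v2 \
        --bucket mybucket --delimiter / --max-keys 100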

Re: [ceph-users] details about cloning objects using librados

2019-08-01 Thread Gregory Farnum
On Wed, Jul 31, 2019 at 10:31 PM nokia ceph wrote: > > Thank you Greg, > > Another question: we need to give a new destination object, so that we can > read it separately, in parallel with the src object. This function resides in > objector.h; it seems to be internal, so can it be used in int

Re: [ceph-users] Balancer in HEALTH_ERR

2019-08-01 Thread EDH - Manuel Rios Fernandez
Hi Eric, CEPH006 is the node that we're evacuating; for that task we added CEPH005. Thanks From: Smith, Eric Sent: Thursday, 1 August 2019 20:12 To: EDH - Manuel Rios Fernandez ; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Balancer in HEALTH_ERR From your past

Re: [ceph-users] Adventures with large RGW buckets

2019-08-01 Thread Gregory Farnum
On Thu, Aug 1, 2019 at 12:06 PM Eric Ivancich wrote: > > Hi Paul, > > I’ll interleave responses below. > > On Jul 31, 2019, at 2:02 PM, Paul Emmerich wrote: > > What could bucket deletion look like in the future? Would it be possible > to put all objects in buckets into RADOS namespaces and im

Re: [ceph-users] Adventures with large RGW buckets

2019-08-01 Thread EDH - Manuel Rios Fernandez
Hi Greg / Eric, What about allowing bucket deletion via a lifecycle policy? You can set an object lifetime of 1 day, and that task is done at the cluster level. Then delete the objects younger than 1 day and remove the bucket. That sometimes speeds up deletes, as the task is done by the RGWs. It should be like a
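A hedged example of such a lifecycle rule, applied with the aws CLI against an RGW endpoint (the endpoint and bucket name are placeholders; RGW's lifecycle worker then expires the objects on the cluster side):

    aws --endpoint-url http://rgw.example.com:7480 s3api \
        put-bucket-lifecycle-configuration --bucket mybucket \
        --lifecycle-configuration '{
          "Rules": [
            {"ID": "expire-everything",
             "Status": "Enabled",
             "Filter": {"Prefix": ""},
             "Expiration": {"Days": 1}}
          ]
        }'
    # Once RGW has expired the objects, the empty bucket can be removed:
    radosgw-admin bucket rm --bucket=mybucket --purge-objects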

Re: [ceph-users] Balancer in HEALTH_ERR

2019-08-01 Thread Konstantin Shalygin
Two weeks ago we started a data migration from one old Ceph node to a new one. For this task we added a 120 TB host to the cluster and evacuated the old one with ceph osd crush reweight osd.X 0.0, which moves roughly 15 TB per day. After one week and a few days we found that the balancer module doesn't work

[ceph-users] bluestore write iops calculation

2019-08-01 Thread nokia ceph
Hi Team, Could you please help us understand the write IOPS inside a Ceph cluster? There seems to be a mismatch between the theoretical IOPS and what we see in the disk status. Our platform is a 5-node cluster with 120 OSDs, with each node having 24 HDDs (data, RocksDB and RocksDB WAL all reside in th
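A rough worked example of the write amplification usually behind such a mismatch, assuming 3x replication and WAL/DB collocated on the same HDD (the factors are illustrative assumptions, not measured values):

    1 client write  -> 3 replica writes (size=3)
    each replica    -> ~2 disk writes on BlueStore (WAL/deferred commit + data),
                       plus RocksDB compaction overhead on top
    so              ~6+ physical writes per client write

    e.g. 120 HDDs x ~150 write IOPS each = ~18,000 raw write IOPS
         18,000 / 6 = ~3,000 client write IOPS for the whole cluster (upper bound)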

Re: [ceph-users] details about cloning objects using librados

2019-08-01 Thread nokia ceph
Thank you Greg, it is now clear to us. Since the option is only available in C++, we will need to rewrite the client code in C++. Thanks, Muthu On Fri, Aug 2, 2019 at 1:05 AM Gregory Farnum wrote: > On Wed, Jul 31, 2019 at 10:31 PM nokia ceph > wrote: > > > > Thank you Greg, > > > > Another quest