Re: [ceph-users] GetRole Error:405 Method Not Allowed

2019-03-07 Thread Pritha Srivastava
min --display-name="admin" --admin > > radosgw-admin caps add --uid=admin --caps="roles=*" > > > When I use the REST admin APIs to get the Role, it returns an HTTP 405 > error. > > Request: > > POST / HTTP/1.1 > Host: 192.168.19
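
Not part of the original thread -- a minimal sketch of granting the roles cap and then calling GetRole, assuming a radosgw build that accepts IAM-style role requests on its regular endpoint; the host, credentials and role name below are placeholders:

    radosgw-admin caps add --uid=admin --caps="roles=*"
    export AWS_ACCESS_KEY_ID=<admin-access-key>
    export AWS_SECRET_ACCESS_KEY=<admin-secret-key>
    aws iam get-role --role-name S3Access --endpoint-url http://<rgw-host>:8000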

Re: [ceph-users] PGs stuck in created state

2019-03-07 Thread Martin Verges
Hello, try restarting every osd if possible. Upgrade to a recent ceph version. -- Martin Verges Managing director Mobile: +49 174 9335695 E-Mail: martin.ver...@croit.io Chat: https://t.me/MartinVerges croit GmbH, Freseniusstr. 31h, 81247 Munich CEO: Martin Verges - VAT-ID: DE310638492 Com. regi
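
A minimal sketch of the suggested restart, assuming a systemd-based deployment (unit names vary between installations):

    sudo systemctl restart ceph-osd@12        # restart one OSD by id
    sudo systemctl restart ceph-osd.target    # restart all OSDs on this host
    ceph -s                                   # watch the stuck PGs peer afterwards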

Re: [ceph-users] mount cephfs on ceph servers

2019-03-07 Thread Marc Roos
A container uses the same kernel; the problem is with processes sharing that kernel. -Original Message- From: Daniele Riccucci [mailto:devs...@posteo.net] Sent: 07 March 2019 00:18 To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] mount cephfs on ceph servers Hello, is the deadlock

[ceph-users] garbage in cephfs pool

2019-03-07 Thread Fyodor Ustinov
Hi! After removing all files from cephfs I see this situation: #ceph df POOLS: NAME ID USED %USED MAX AVAIL OBJECTS fsd 2 0 B 0 233 TiB 11527762 #rados df POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFO

[ceph-users] rbd cache limiting IOPS

2019-03-07 Thread Florian Engelmann
Hi, we are running an Openstack environment with Ceph block storage. There are six nodes in the current Ceph cluster (12.2.10) with NVMe SSDs and a P4800X Optane for rocksdb and WAL. The decision was made to use rbd writeback cache with KVM/QEMU. The write latency is incredibly good (~85 µs) a

[ceph-users] CEPH ISCSI Gateway

2019-03-07 Thread Ashley Merrick
Been reading into the gateway, and noticed it’s been mentioned a few times that it can be installed on OSD servers. I am guessing there would therefore be no issues like those sometimes mentioned when using kRBD on an OSD node, apart from the extra resources required from the hardware. Thanks

Re: [ceph-users] rbd cache limiting IOPS

2019-03-07 Thread Florian Engelmann
I was able to check the settings in use with a ceph.conf like: [client.nova] admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok log file = /var/log/ceph/client.log debug rbd = 20 debug librbd = 20 rbd_cache = true rbd cache size = 268435456 rbd cache max dirty = 201326592 rbd cache targ
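
Since the [client.nova] section enables an admin socket, the effective values can also be confirmed at runtime; a sketch only, as the socket filename depends on $cluster, $type, $id, $pid and $cctid:

    ls /var/run/ceph/*.asok
    ceph --admin-daemon /var/run/ceph/<client-asok> config get rbd_cache_size
    ceph --admin-daemon /var/run/ceph/<client-asok> config show | grep rbd_cache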

Re: [ceph-users] http://tracker.ceph.com/issues/38122

2019-03-07 Thread Sebastian Wagner
Fixed by https://github.com/ceph/ceph/pull/26779 Am 07.03.19 um 07:23 schrieb Jos Collin: > I have originally created this bug when I saw this issue in > debian/stretch. But now it looks like install-deps.sh is not installing > 'colorize' package in Fedora too. I'm reopening this bug. > > On 07/0

Re: [ceph-users] rbd cache limiting IOPS

2019-03-07 Thread Mohamad Gebai
Hi Florian, On 3/7/19 10:27 AM, Florian Engelmann wrote: > > So the settings are recognized and used by qemu. But any value higher > than the default (32MB) of the cache size leads to strange IOPS > results. IOPS are very consistent with 32MB (~20,000 - 23,000) but if we > define a bigger cache size (
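
One way to reproduce such numbers outside the guest is fio's rbd engine; a sketch only, with the client, pool and image names below being assumptions (requires fio built with rbd support):

    fio --name=rbd-randwrite --ioengine=rbd --clientname=nova \
        --pool=volumes --rbdname=testimg \
        --rw=randwrite --bs=4k --iodepth=32 --direct=1 \
        --runtime=60 --time_based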

[ceph-users] Failed to repair pg

2019-03-07 Thread Herbert Alexander Faleiros
Hi, # ceph health detail HEALTH_ERR 3 scrub errors; Possible data damage: 1 pg inconsistent OSD_SCRUB_ERRORS 3 scrub errors PG_DAMAGED Possible data damage: 1 pg inconsistent pg 2.2bb is active+clean+inconsistent, acting [36,12,80] # ceph pg repair 2.2bb instructing pg 2.2bb on osd.36 to repa
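
Before (or between) repair attempts, the inconsistency report shows which object and which replica failed; a sketch using the pg id from this thread:

    rados list-inconsistent-obj 2.2bb --format=json-pretty
    # if the report is empty or stale, trigger a fresh deep scrub first:
    ceph pg deep-scrub 2.2bb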

Re: [ceph-users] mount cephfs on ceph servers

2019-03-07 Thread Tony Lill
AFAIR the issue is that under memory pressure, the kernel will ask cephfs to flush pages, but this in turn causes the osd (mds?) to require more memory to complete the flush (for network buffers, etc). As long as cephfs and the OSDs are feeding from the same kernel mempool, you are susceptible

[ceph-users] Radosgw object size limit?

2019-03-07 Thread Jan Kasprzak
Hello, Ceph users, does radosgw have an upper limit on object size? I tried to upload an 11GB file using s3cmd, but it failed with an InvalidRange error: $ s3cmd put --verbose centos/7/isos/x86_64/CentOS-7-x86_64-Everything-1810.iso s3://mybucket/ INFO: No cache file found, creating it. INFO

Re: [ceph-users] Failed to repair pg

2019-03-07 Thread Herbert Alexander Faleiros
On Thu, Mar 07, 2019 at 01:37:55PM -0300, Herbert Alexander Faleiros wrote: > Hi, > > # ceph health detail > HEALTH_ERR 3 scrub errors; Possible data damage: 1 pg inconsistent > OSD_SCRUB_ERRORS 3 scrub errors > PG_DAMAGED Possible data damage: 1 pg inconsistent > pg 2.2bb is active+clean+inco

Re: [ceph-users] How To Scale Ceph for Large Numbers of Clients?

2019-03-07 Thread Zack Brenton
Edit: screenshot removed due to message size constraints on the mailing list. Hey Patrick, I understand your skepticism! I'm also confident that this is some kind of a configuration issue; I'm not very familiar with all of Ceph's various configuration options as Rook generally abstracts those awa

Re: [ceph-users] Radosgw object size limit?

2019-03-07 Thread Casey Bodley
There is an rgw_max_put_size option, which defaults to 5G and limits the size of a single PUT request. But in that case, the http response would be 400 EntityTooLarge. For multipart uploads, there's also an rgw_multipart_part_upload_limit that defaults to 10000 parts, which would cause a 416 InvalidRa
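
A sketch of where the single-PUT limit could be raised if needed; the gateway section name and socket path are examples, and multipart uploads avoid the limit entirely:

    [client.rgw.gateway1]
    rgw_max_put_size = 21474836480    # 20 GiB; default is 5 GiB per single PUT

    # confirm the running value through the gateway's admin socket:
    ceph --admin-daemon /var/run/ceph/ceph-client.rgw.gateway1.asok config get rgw_max_put_size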

Re: [ceph-users] How To Scale Ceph for Large Numbers of Clients?

2019-03-07 Thread Patrick Donnelly
On Thu, Mar 7, 2019 at 8:24 AM Zack Brenton wrote: > > Hey Patrick, > > I understand your skepticism! I'm also confident that this is some kind of a > configuration issue; I'm not very familiar with all of Ceph's various > configuration options as Rook generally abstracts those away, so I apprec

[ceph-users] Large OMAP Objects in default.rgw.log pool

2019-03-07 Thread Samuel Taylor Liston
Hello All, I have recently had 32 large omap objects appear in my default.rgw.log pool. Running luminous 12.2.8. Not sure what to think about these. I’ve done a lot of reading about how when these normally occur it is related to a bucket needing resharding, but it doesn’t
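
One hedged way to see which objects in the pool actually carry the omap keys (this can take a while on a large pool):

    for obj in $(rados -p default.rgw.log ls); do
        printf '%s %s\n' "$obj" "$(rados -p default.rgw.log listomapkeys "$obj" | wc -l)"
    done | sort -k2 -n | tail -20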

Re: [ceph-users] How To Scale Ceph for Large Numbers of Clients?

2019-03-07 Thread Zack Brenton
On Thu, Mar 7, 2019 at 2:38 PM Patrick Donnelly wrote: > Is this with one active MDS and one standby-replay? The graph is odd > to me because the session count shows sessions on fs-b and fs-d but > not fs-c. Or maybe max_mds=2 and fs-d has no activity and fs-c is > standby-replay? > The graphs w
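
For reference, the MDS layout behind such graphs can be checked directly; a sketch, with <fsname> as a placeholder:

    ceph fs status                        # active / standby-replay role per daemon
    ceph fs get <fsname> | grep max_mds
    ceph fs dump | grep -E 'max_mds|up:'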

Re: [ceph-users] Failed to repair pg

2019-03-07 Thread Brad Hubbard
You could try reading the data from this object and writing it back using rados get and then rados put. On Fri, Mar 8, 2019 at 3:32 AM Herbert Alexander Faleiros wrote: > > On Thu, Mar 07, 2019 at 01:37:55PM -0300, Herbert Alexander Faleiros wrote: > > Hi, > > > > # ceph health detail > > HEALTH_ERR 3
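
A sketch of that suggestion, with the pool and object names as placeholders (take the exact object name from the inconsistency report):

    rados -p <pool> get 'rbd_data.<prefix>.<object-suffix>' /tmp/obj
    rados -p <pool> put 'rbd_data.<prefix>.<object-suffix>' /tmp/obj
    ceph pg deep-scrub 2.2bb    # re-check the pg afterwards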

Re: [ceph-users] Can CephFS Kernel Client Not Read & Write at the Same Time?

2019-03-07 Thread Gregory Farnum
In general, no, this is not an expected behavior. My guess would be that something odd is happening with the other clients you have to the system, and there's a weird pattern with the way the file locks are being issued. Can you be more precise about exactly what workload you're running, and get t

Re: [ceph-users] garbage in cephfs pool

2019-03-07 Thread Gregory Farnum
Are they getting cleaned up? CephFS does not instantly delete files; they go into a "purge queue" and get cleaned up later by the MDS. -Greg On Thu, Mar 7, 2019 at 2:00 AM Fyodor Ustinov wrote: > Hi! > > After removing all files from cephfs I see that situation: > #ceph df > POOLS: > NAME
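
A hedged way to confirm the purge queue is actually draining; the perf-dump section name assumes a reasonably recent MDS, and the pool name is taken from the original post:

    ceph daemon mds.<name> perf dump purge_queue
    # or simply watch the object count fall over time:
    watch -n 10 'rados df | grep fsd'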

Re: [ceph-users] Large OMAP Objects in default.rgw.log pool

2019-03-07 Thread Brad Hubbard
On Fri, Mar 8, 2019 at 4:46 AM Samuel Taylor Liston wrote: > > Hello All, > I have recently had 32 large map objects appear in my default.rgw.log > pool. Running luminous 12.2.8. > > Not sure what to think about these. I’ve done a lot of reading > about how when these normall

[ceph-users] Ceph crushmap re-arrange with minimum rebalancing?

2019-03-07 Thread Pardhiv Karri
Hi, We have a ceph cluster with rack as the failure domain, but the racks are so imbalanced that we are not able to utilize the maximum of the allocated storage: some OSDs in the small racks are filling up too fast, causing ceph to go into a warning state and near_full_ratio to be triggered. We ar
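
Data movement from a crushmap change can be previewed offline before committing anything; a sketch, with the rule id and replica count below being examples:

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # edit crush.txt (move hosts between racks, adjust weights), then:
    crushtool -c crush.txt -o crush.new
    crushtool -i crush.new --test --show-mappings --rule 0 --num-rep 3 | head
    # only once the new mappings look acceptable:
    ceph osd setcrushmap -i crush.new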

Re: [ceph-users] Failed to repair pg

2019-03-07 Thread David Zafman
On 3/7/19 9:32 AM, Herbert Alexander Faleiros wrote: On Thu, Mar 07, 2019 at 01:37:55PM -0300, Herbert Alexander Faleiros wrote: Should I do something like this? (below, after stop osd.36) # ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-36/ --journal-path /dev/sdc1 rbd_data.dfd5e223
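
For context, the general shape of the invocation being discussed looks like the sketch below; it is destructive, so the bad replica must be confirmed first, the OSD stopped beforehand, and the object name here is a placeholder:

    systemctl stop ceph-osd@36
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-36 \
        --journal-path /dev/sdc1 --pgid 2.2bb 'rbd_data.<prefix>.<suffix>' remove
    systemctl start ceph-osd@36
    ceph pg repair 2.2bb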

Re: [ceph-users] Can CephFS Kernel Client Not Read & Write at the Same Time?

2019-03-07 Thread Ketil Froyn
On Fri, Mar 8, 2019, 01:15 Gregory Farnum wrote: > In general, no, this is not an expected behavior. > For clarification: I assume you are responding to Andrew's last question "Is this expected behavior...?" (quoted below). When I first read through, it looked like your mail was a response to

Re: [ceph-users] Can CephFS Kernel Client Not Read & Write at the Same Time?

2019-03-07 Thread Yan, Zheng
CephFS kernel mount blocks reads while another client has dirty data in its page cache. The cache coherency rules look like: state 1 - only one client opens a file for read/write. the client can use page cache state 2 - multiple clients open a file for read, no client opens the file for write. client
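
On the kernel-client side, the capabilities currently held can be inspected through debugfs; a sketch, assuming debugfs is mounted and the kernel exposes the ceph debug directory:

    # one directory per mount, named <fsid>.client<id>:
    sudo cat /sys/kernel/debug/ceph/*/caps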