min --display-name="admin" --admin
>
> radosgw-admin caps add --uid=admin --caps="roles=*"
>
>
> When I use the REST admin APIs to get the Role, it returns an HTTP 405
> error.
>
> Request:
>
> POST / HTTP/1.1
> Host: 192.168.19
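For reference, a rough way to cross-check the role definitions locally with the
CLI, independent of the REST path (the role name below is only a placeholder):

# list all roles known to this zone, then dump one of them
radosgw-admin role list
radosgw-admin role get --role-name=S3Access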
Hello,
try restarting every OSD if possible.
Upgrade to a recent Ceph version.
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. regi
A container uses the same kernel, so the problem remains: it is with processes
sharing the same kernel.
-Original Message-
From: Daniele Riccucci [mailto:devs...@posteo.net]
Sent: 07 March 2019 00:18
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] mount cephfs on ceph servers
Hello,
is the deadlock
Hi!
After removing all files from CephFS I see this situation:
# ceph df
POOLS:
    NAME    ID    USED    %USED    MAX AVAIL    OBJECTS
    fsd     2     0 B     0        233 TiB      11527762
# rados df
POOL_NAME    USED    OBJECTS    CLONES    COPIES    MISSING_ON_PRIMARY    UNFO
Hi,
we are running an OpenStack environment with Ceph block storage. There
are six nodes in the current Ceph cluster (12.2.10) with NVMe SSDs and a
P4800X Optane for RocksDB and WAL.
The decision was made to use the rbd writeback cache with KVM/QEMU. The
write latency is incredibly good (~85 µs) a
Been reading into the gateway, and noticed it’s been mentioned a few times
that it can be installed on OSD servers.
I am guessing there would therefore be no issues like those sometimes mentioned
when using kRBD on an OSD node, apart from the extra resources required from
the hardware.
Thanks
I was able to check the settings actually in use with a ceph.conf like:
[client.nova]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/ceph/client.log
debug rbd = 20
debug librbd = 20
rbd_cache = true
rbd cache size = 268435456
rbd cache max dirty = 201326592
rbd cache targ
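With that admin socket in place, the effective values can be read back from the
running qemu process; a sketch (the socket file name below is only an example,
the real one follows the $pid.$cctid pattern above):

# find the client socket created by the qemu process
ls /var/run/ceph/*.asok
# dump the configuration librbd is actually using and filter the cache options
ceph --admin-daemon /var/run/ceph/ceph-client.nova.12345.140234.asok \
    config show | grep rbd_cache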
Fixed by https://github.com/ceph/ceph/pull/26779
On 07.03.19 at 07:23, Jos Collin wrote:
> I have originally created this bug when I saw this issue in
> debian/stretch. But now it looks like install-deps.sh is not installing
> 'colorize' package in Fedora too. I'm reopening this bug.
>
> On 07/0
Hi Florian,
On 3/7/19 10:27 AM, Florian Engelmann wrote:
>
> So the settings are recognized and used by qemu. But any cache size
> higher than the default (32MB) leads to strange IOPS
> results. IOPS are very constant with 32MB (~20.000 - 23.000), but if we
> define a bigger cache size (
Hi,
# ceph health detail
HEALTH_ERR 3 scrub errors; Possible data damage: 1 pg inconsistent
OSD_SCRUB_ERRORS 3 scrub errors
PG_DAMAGED Possible data damage: 1 pg inconsistent
pg 2.2bb is active+clean+inconsistent, acting [36,12,80]
# ceph pg repair 2.2bb
instructing pg 2.2bb on osd.36 to repa
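For reference, the objects behind those scrub errors can usually be listed
before (or after) the repair with:

# show which object/shard the last deep-scrub flagged in that PG
rados list-inconsistent-obj 2.2bb --format=json-pretty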
AFAIR the issue is that under memory pressure, the kernel will ask
cephfs to flush pages, but this in turn causes the osd (mds?) to
require more memory to complete the flush (for network buffers, etc). As
long as cephfs and the OSDs are feeding from the same kernel mempool,
you are susceptible
Hello, Ceph users,
does radosgw have an upper limit on object size? I tried to upload
an 11GB file using s3cmd, but it failed with an InvalidRange error:
$ s3cmd put --verbose centos/7/isos/x86_64/CentOS-7-x86_64-Everything-1810.iso
s3://mybucket/
INFO: No cache file found, creating it.
INFO
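For what it's worth, the exact request that triggers the InvalidRange can be
narrowed down by rerunning the upload with s3cmd's debug output (same command,
just more verbose):

# print the HTTP requests/responses, including the multipart part numbers
s3cmd put --debug centos/7/isos/x86_64/CentOS-7-x86_64-Everything-1810.iso s3://mybucket/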
On Thu, Mar 07, 2019 at 01:37:55PM -0300, Herbert Alexander Faleiros wrote:
> Hi,
>
> # ceph health detail
> HEALTH_ERR 3 scrub errors; Possible data damage: 1 pg inconsistent
> OSD_SCRUB_ERRORS 3 scrub errors
> PG_DAMAGED Possible data damage: 1 pg inconsistent
> pg 2.2bb is active+clean+inco
Edit: screenshot removed due to message size constraints on the mailing
list.
Hey Patrick,
I understand your skepticism! I'm also confident that this is some kind of
a configuration issue; I'm not very familiar with all of Ceph's various
configuration options as Rook generally abstracts those awa
There is an rgw_max_put_size which defaults to 5G, which limits the size
of a single PUT request. But in that case, the http response would be
400 EntityTooLarge. For multipart uploads, there's also a
rgw_multipart_part_upload_limit that defaults to 10000 parts, which
would cause a 416 InvalidRa
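If one of those limits does turn out to be the cause, both are plain config
options on the rgw side; a sketch of an override (the section name is an
example and the values shown are just the defaults):

[client.rgw.gateway1]
# maximum size of a single PUT request, 5 GiB by default
rgw_max_put_size = 5368709120
# maximum number of parts accepted for one multipart upload
rgw_multipart_part_upload_limit = 10000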
On Thu, Mar 7, 2019 at 8:24 AM Zack Brenton wrote:
>
> Hey Patrick,
>
> I understand your skepticism! I'm also confident that this is some kind of a
> configuration issue; I'm not very familiar with all of Ceph's various
> configuration options as Rook generally abstracts those away, so I apprec
Hello All,
I have recently had 32 large map objects appear in my default.rgw.log
pool. Running luminous 12.2.8.
Not sure what to think about these. I’ve done a lot of reading about
how, when these normally occur, it is related to a bucket needing resharding, but
it doesn’t
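A rough way to see how big those omap objects actually are (the object name is
a placeholder; the real names are printed in the cluster log warning):

# count the omap keys on one of the flagged objects
rados -p default.rgw.log listomapkeys <object-name> | wc -l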
On Thu, Mar 7, 2019 at 2:38 PM Patrick Donnelly wrote:
> Is this with one active MDS and one standby-replay? The graph is odd
> to me because the session count shows sessions on fs-b and fs-d but
> not fs-c. Or maybe max_mds=2 and fs-d has no activity and fs-c is
> standby-replay?
>
The graphs w
You could try reading the data from this object and writing it back
using rados get and then rados put.
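A sketch of that, with the pool and object names left as placeholders:

# read the object out of the pool, then write the same bytes back
rados -p <pool> get <object-name> /tmp/object.bin
rados -p <pool> put <object-name> /tmp/object.bin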
On Fri, Mar 8, 2019 at 3:32 AM Herbert Alexander Faleiros
wrote:
>
> On Thu, Mar 07, 2019 at 01:37:55PM -0300, Herbert Alexander Faleiros wrote:
> > Hi,
> >
> > # ceph health detail
> > HEALTH_ERR 3
In general, no, this is not an expected behavior.
My guess would be that something odd is happening with the other clients
you have connected to the system, and there's a weird pattern in the way the
file locks are being issued. Can you be more precise about exactly what workload
you're running, and get t
Are they getting cleaned up? CephFS does not instantly delete files; they
go into a "purge queue" and get cleaned up later by the MDS.
-Greg
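If you want to watch that happen, the MDS exposes purge-queue perf counters; a
sketch (the daemon name is an example, and counter/section names can differ
between releases):

# on the host running the active MDS: dump the purge_queue counters
ceph daemon mds.a perf dump purge_queue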
On Thu, Mar 7, 2019 at 2:00 AM Fyodor Ustinov wrote:
> Hi!
>
> After removing all files from cephfs I see that situation:
> #ceph df
> POOLS:
> NAME
On Fri, Mar 8, 2019 at 4:46 AM Samuel Taylor Liston wrote:
>
> Hello All,
> I have recently had 32 large map objects appear in my default.rgw.log
> pool. Running luminous 12.2.8.
>
> Not sure what to think about these.I’ve done a lot of reading
> about how when these normall
Hi,
We have a Ceph cluster with rack as the failure domain, but the racks are so
imbalanced that we are not able to utilize the maximum of the storage
allocated, as some OSDs in the small racks are filling up too fast and causing
Ceph to go into a warning state and near_full_ratio to be triggered.
We ar
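For context, the knobs usually suggested for this kind of imbalance, as a
rough sketch (the balancer module is available from Luminous on; dry-run and
test on a quiet cluster before applying anything):

# per-OSD reweighting based on current utilization (dry run first)
ceph osd test-reweight-by-utilization
ceph osd reweight-by-utilization

# or let the mgr balancer shift PGs gradually
ceph mgr module enable balancer
ceph balancer mode crush-compat
ceph balancer on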
On 3/7/19 9:32 AM, Herbert Alexander Faleiros wrote:
On Thu, Mar 07, 2019 at 01:37:55PM -0300, Herbert Alexander Faleiros wrote:
Should I do something like this? (below, after stopping osd.36)
# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-36/ --journal-path
/dev/sdc1 rbd_data.dfd5e223
On Fri, Mar 8, 2019, 01:15 Gregory Farnum wrote:
> In general, no, this is not an expected behavior.
>
For clarification:
I assume you are responding to Andrew's last question "Is this expected
behavior...?" (quoted below).
When I first read through, it looked like your mail was a response to
A CephFS kernel mount blocks reads while another client has dirty data in
its page cache. The cache coherency rules look like:
state 1 - only one client opens a file for read/write; the client can
use page cache
state 2 - multiple clients open a file for read, no client opens the
file for write; client