Hi,
I’m experiencing the same symptoms as OP.
We’re running Ceph Octopus 15.2.1 with RGW, and on multiple occasions we have
seen the bucket index pool spike to 500 MB/s of read throughput / 100K read IOPS.
Our logs during this time are flooded with these entries:
2020-06-09T07:11:18.070+0200 7f2676efd
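(For anyone wanting to watch this live: per-pool client I/O can be sampled from
the CLI. A minimal sketch, assuming the default index pool name; substitute
your own.)
# show client read/write throughput and IOPS for the index pool
ceph osd pool stats default.rgw.buckets.index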
Hi,
I have a 4-node Ceph Octopus cluster, each node with 12 disks, configured with
CephFS (replica 2) and exposed via Samba to Windows clients over 10G.
When a user copies a folder containing thousands of 7 MB files from a Windows 10
client, we get a speed of only 40 MB/s.
Client and Ceph nodes all connec
Hi Reed,
thanks for the log.
Nothing much of interest there though. Just a regular SST file that
RocksDB instructed to put at the "slow" device. Presumably it belongs to a
higher level, hence the desire to put it that "far". Or (which is less
likely) RocksDB lacked free space when doing compaction.
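(As a quick check, DB data landing on the slow device is usually flagged in
cluster health; a minimal sketch, nothing more:)
# BlueStore raises BLUEFS_SPILLOVER when RocksDB data spills onto the
# slow device
ceph health detail | grep -i spillover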
Hi,
I think you are hit by two different problems at the same time. The second
problem might be the same one we also experience, namely that Windows VMs have
very strange performance characteristics with libvirt, the vd driver and RBD. With
copy operations on very large files (>2 GB) we see a sharp
I was wondering if I can fork and submit a pull request on the Grafana
dashboards at git [1],
to clean up a bit the inconsistent naming, use of labels, etc.
[1]
https://github.com/ceph/ceph/tree/master/monitoring/grafana/dashboards
Thanks for sticking with me, Igor. Attached is the ceph-kvstore-tool stats output. Hopefully something interesting in here.
Thanks,
Reed
Attachment: kvstoretool.log
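(For reference, a dump like this can be produced offline; a sketch assuming the
default data path, with the OSD stopped first:)
# dump RocksDB statistics from a stopped BlueStore OSD
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 stats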
On Jun 12, 2020, at 6:56 AM, Igor Fedotov wrote:
> Hi Reed, thanks for the log. Nothing much of i
The Grafana dashboard 'rbd overview' is empty. Its queries use metrics such as
'ceph_rbd_write_ops' that do not exist in Prometheus (I think). Should I
enable something more than just 'ceph mgr module enable prometheus'?
I am on Nautilus.
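(If I remember correctly, the per-image rbd metrics are only exported once the
module is told which pools to scan; a hedged sketch, where "rbd" is a
placeholder pool name:)
# enable per-image RBD stats for selected pools; without this the
# ceph_rbd_* series are never emitted
ceph config set mgr mgr/prometheus/rbd_stats_pools "rbd"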
Sometimes the dashboard keeps loading when I switch to the 3-hour
range, yet I do not see any load on the Prometheus server.
Is anyone seeing something similar?
hmm, RocksDB reports 13GB at L4:
"": "Level Files Size Score Read(GB) Rn(GB) Rnp1(GB)
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec)
CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop",
"":
"
Will there be a ceph release available on rhel7 until the eol of rhel7?
On 2020-06-12 16:35, Marc Roos wrote:
>
> Will there be a ceph release available on rhel7 until the eol of rhel7?
Much needed here as well, +1.
Would be really great, thanks a lot.
Dietmar
--
D i e t m a r R i e d e r, Mag.Dr.
Innsbruck Medical Univers
On 6/12/20, 5:40 AM, "James, GleSYS" wrote:
> When I set the debug_rgw logs to "20/1", the issue disappears immediately,
> and the throughput for the index pool goes back down to normal levels.
I can, somewhat happily, confirm that setting debug_rgw to "20/1" makes the
issue disappear instantly.
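(For reference, a sketch of how we flip it at runtime; the daemon name
"client.rgw.gateway1" is a placeholder for your own instance:)
# per-daemon via the admin socket
ceph daemon /var/run/ceph/ceph-client.rgw.gateway1.asok config set debug_rgw 20/1
# or persistently via the config database
ceph config set client.rgw.gateway1 debug_rgw 20/1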
Thanks Igor,
I did see that L4 sizing and thought it seemed suspicious.
Though after looking at a couple of other OSDs with this, the
sum of L0-L4 appears to match a rounded-off version of the metadata size
reported in 'ceph osd df tree'.
So I'm not sure if that's actually showing
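(A hedged way to cross-check this, with osd.0 as a placeholder id:)
# bluefs counters: db_used_bytes plus slow_used_bytes should roughly
# match the META column of 'ceph osd df tree'
ceph daemon osd.0 perf dump bluefs | grep -E '"db_used_bytes"|"slow_used_bytes"'
ceph osd df tree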
Hi,
which ceph release are you using? You mention ceph-disk so your OSDs
are not LVM based, I assume?
I've seen these messages a lot when testing in my virtual lab
environment, although I don't believe it's the cluster's fsid but the
OSD's fsid that's in the error message (the OSDs have th
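(A quick way to compare the two, assuming default non-LVM paths and osd.0 as a
placeholder:)
# the cluster fsid, and the copy stored in the OSD's data dir
ceph fsid
cat /var/lib/ceph/osd/ceph-0/ceph_fsid
# the OSD's own fsid, which is what I believe the error refers to
cat /var/lib/ceph/osd/ceph-0/fsid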
Maybe you have the same issue?
https://tracker.ceph.com/issues/44102#change-167531
In my case an update(?) had disabled the OSD units:
systemctl is-enabled ceph-osd@0
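(If that reports "disabled", re-enabling should survive the next reboot; a
sketch with 0 as a placeholder OSD id:)
systemctl enable ceph-osd@0
systemctl start ceph-osd@0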
-Original Message-
To: ceph-users@ceph.io
Subject: [ceph-users] Re: help with failed osds after reboot
> Hi,
> which ceph
黄明友 wrote:
> Hi, all:
>
> the slave zone shows metadata is caught up with the master; but using
> 'radosgw-admin bucket list | wc', the counts on the master and the slave zone
> are not equal. How can I force a sync?
I too face the same problem. I see new buckets getting created in the master
zone howe
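(What I use to inspect it, as a hedged sketch; the bucket name is a
placeholder, and the re-init is a heavy hammer, use with care:)
# overall multisite sync state, run on the secondary zone
radosgw-admin sync status
# per-bucket sync state
radosgw-admin bucket sync status --bucket=mybucket
# heavier hammer: re-init metadata sync on the secondary, then
# restart the gateways so a full sync starts over
radosgw-admin metadata sync init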
Hi,
I'm new to radosgw (learned more about the MDS than I care to...), and it
seems like the buckets and objects created by one user cannot be accessed
by another user.
Is there a way to make any content created by User A accessible (read-only)
by User B?
From the documentation it looks like thi
Yes, best via bucket policies:
https://docs.ceph.com/docs/mimic/radosgw/bucketpolicy/
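(A minimal sketch of such a policy; "mybucket" and "userB" are placeholders,
and s3cmd is just one way to apply it, run as User A:)
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam:::user/userB"]},
    "Action": ["s3:ListBucket", "s3:GetObject"],
    "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"]
  }]
}
EOF
s3cmd setpolicy policy.json s3://mybucket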
-Original Message-
To: ceph-users@ceph.io
Subject: [ceph-users] radosgw - how to grant read-only access to another
user by default
> Hi,
> I'm new to radosgw (learned more about the MDS than I care t