On Thu, 2022-09-08 at 08:22 +, Frank Schilder wrote:
> My experience with osd bench is not good either
it seems it was recently "fixed" by writing "a"'s instead of zeroes:
https://github.com/ceph/ceph/commit/db045e005fab218f2bb270b7cb60b62abbbe3619
tongue in cheek:
not sure that this is a goo
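(Not from the original message, just a hedged aside: zero-filled buffers can be detected or compressed away by some devices and layers, which is presumably why the commit changed the fill pattern. A fio comparison along these lines, with a hypothetical /dev/sdX target, would show whether a drive treats zeroes specially. Both runs write to the raw device and are destructive.)
# zero-filled writes vs. non-repeating buffers (illustrative only, destroys data on /dev/sdX)
fio --name=zeroes --filename=/dev/sdX --rw=write --bs=4k --direct=1 --runtime=30 --time_based --zero_buffers
fio --name=random --filename=/dev/sdX --rw=write --bs=4k --direct=1 --runtime=30 --time_based --refill_buffers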
Hi all,
My experience with osd bench is not good either. Firstly, its results
fluctuate a lot, and secondly it always returns unrealistically large
numbers. I always get >1000 IOPS for HDDs that can do 100-150 in reality.
My suspicion is that the internal benchmark is using an IO path th
What jumps out to me is:
a. The -13 error code represents permission denied
b. You’ve commented out the keyring configuration in ceph.conf
So do your RGWs have appropriate credentials?
Eric
(he/him)
> On Sep 7, 2022, at 3:04 AM, Rok Jaklič wrote:
>
> Hi,
>
> after upgrading t
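(Hedged sketch of how one might verify the RGW credentials Eric is asking about, assuming cephx is in use; the client name client.rgw.gw1 and the keyring path are placeholders, not taken from the thread.)
# check whether the RGW cephx user exists and what caps it has
ceph auth get client.rgw.gw1

# create it if missing, and write the keyring where the RGW can read it
ceph auth get-or-create client.rgw.gw1 mon 'allow rw' osd 'allow rwx' \
    -o /etc/ceph/ceph.client.rgw.gw1.keyring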
I’m glad you tried it on a test system. I’ve not heard of any issues with
rgw-orphan-list and I’m aware of it being used successfully on very large
clusters.
It is vital that you provide the correct pool for the data objects. The reason
why is that the tool works by:
a. Generating a li
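(A sketch of a typical invocation, not quoted from the thread; confirm the data pool name first rather than guessing, the pool shown is just the common default.)
# confirm which pool actually holds the RGW data objects
radosgw-admin zone get | grep data_pool

# then run the tool against that pool
rgw-orphan-list default.rgw.buckets.data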
Hello
Thanks for your reply
Sorry, I'm a relative newcomer to the Ceph environment.
6+3 or 8+4 with failure domain host is acceptable in regard to usable space.
We have 5 more hosts (actually ZFS NAS boxes, but we will move their data to
Ceph to get these hosts back to expand the cluster)
In our first ceph clus
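(For illustration only, a sketch of creating such a profile and pool; the profile name, pool name and PG counts are placeholders.)
# 6+3 erasure-code profile with host as the failure domain
ceph osd erasure-code-profile set ec-6-3 k=6 m=3 crush-failure-domain=host
ceph osd pool create ecpool 128 128 erasure ec-6-3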
Frank, Sven,
I totally agree with your thoughts on performing a proper benchmark study.
The osd bench results we obtained, when compared with fio results, didn't
vary wildly, and therefore osd bench was adopted considering the minimal
impact on osd startup time. But this apparently is not alw
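(Hedged sketch of the kind of comparison described above; osd.0, the device path and the fio parameters are placeholders, and the fio run writes directly to the device.)
# built-in OSD benchmark: 1 GiB total in 4 MiB writes (the defaults, spelled out)
ceph tell osd.0 bench 1073741824 4194304

# rough fio counterpart against the underlying device (destructive!)
fio --name=osdcheck --filename=/dev/sdX --rw=write --bs=4M --iodepth=1 --direct=1 --size=1G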
FWIW, I'd be for trying to take periodic samples of actual IO happening
on the drive during operation. You can get a much better idea of
latency and throughput characteristics across different IO sizes over
time (though you will need to account for varying levels of concurrency
at the device l
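(One way to take such samples, as a sketch; iostat is from the sysstat package, the interval and device name are placeholders.)
# extended per-device stats every 5 seconds: read/write IOPS, await (latency), queue size
iostat -x -d 5 /dev/sdX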
Hello,
I had to rebuild the data directory of one of my mons, but now I can't get
the new mon to join the cluster. What I did was based on this documentation:
https://docs.ceph.com/en/latest/rados/operations/add-or-rm-mons/
ceph mon remove mon1
ssh root@mon1
mkdir /var/lib/ceph/mon/tmp
mkdi
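(The remaining steps from that documentation, sketched from memory rather than quoted; mon1 and the temporary paths are placeholders matching the mkdir above.)
# fetch the current monmap and mon keyring, then rebuild the mon store
ceph mon getmap -o /var/lib/ceph/mon/tmp/monmap
ceph auth get mon. -o /var/lib/ceph/mon/tmp/keyring
ceph-mon -i mon1 --mkfs --monmap /var/lib/ceph/mon/tmp/monmap \
    --keyring /var/lib/ceph/mon/tmp/keyring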
Jan Kasprzak wrote:
: Hello,
:
: I had to rebuild the data directory of one of my mons, but now I can't get
: the new mon to join the cluster. What I did was based on this documentation:
: https://docs.ceph.com/en/latest/rados/operations/add-or-rm-mons/
:
: ceph mon remove mon1
:
: ssh root@m
What credentials should RGWs have?
I have intentionally
auth cluster required = none
auth service required = none
auth client required = none
and in 16.2.7 it worked.
Kind regards,
Rok
On Thu, Sep 8, 2022, 14:29 J. Eric Ivancich wrote:
> What jumps out to me is:
>
> a. The -13 error
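(Not from the thread, just a sketch of one way to confirm what the running cluster actually enforces, since ceph.conf and the mon config database can disagree:)
# verify the effective auth settings on the cluster side
ceph config get mon auth_cluster_required
ceph config get mon auth_service_required
ceph config get mon auth_client_required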
Hello, Frank,
Frank Schilder wrote:
: Might be a problem I had as well. Try setting
:
: mon_sync_max_payload_size 4096
:
: If you search this list for that you will find the background.
Thanks.
I did
ceph tell mon.* config set mon_sync_max_payload_size 4096
ceph config s
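(The second command is cut off in the preview; presumably it was the persistent counterpart, something along these lines as a sketch:)
# make the setting persistent in the mon config database as well
ceph config set mon mon_sync_max_payload_size 4096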
What can make a "rbd unmap" fail, assuming the device is not mounted and
not (obviously) open by any other processes?
I have multiple XFS on rbd filesystems, and often create rbd snapshots,
map and read-only mount the snapshot, perform some work on the fs, then
unmount and unmap. The unmap reg
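(Hedged sketch of the cycle described above, with pool/image/snapshot names and the mount point as placeholders; read-only XFS snapshot mounts typically need nouuid and norecovery.)
rbd snap create rbd/img@backup
dev=$(rbd map rbd/img@backup)            # snapshots are mapped read-only
mount -o ro,norecovery,nouuid "$dev" /mnt/backup
# ... work on the filesystem ...
umount /mnt/backup
rbd unmap "$dev"                         # the step that occasionally fails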
On Fri, Sep 09, 2022 at 11:14:41AM +1000, Chris Dunlop wrote:
What can make a "rbd unmap" fail, assuming the device is not mounted
and not (obviously) open by any other processes?
I have multiple XFS on rbd filesystems, and often create rbd
snapshots, map and read-only mount the snapshot, perf