Hi all,
we created an RBD image for use in a K8s cluster. We use a dedicated user and
namespace for that RBD image.
If we want to use this RBD image as a volume in K8s, it doesn't work, as K8s can't
find the image; without a namespace for the RBD image it works. Do we have to set
something special here?
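For what it's worth, a minimal sketch of how this is usually wired up (pool,
namespace, client and image names below are placeholders, not taken from the
thread): the image is created inside the namespace, the client caps are
restricted to that namespace, and the K8s side also has to be told about it:

$ rbd namespace create --pool kube --namespace k8s-ns
$ rbd create --size 10G kube/k8s-ns/pvc-image
$ ceph auth get-or-create client.k8s-rbd mon 'profile rbd' \
      osd 'profile rbd pool=kube namespace=k8s-ns'

As far as I know the in-tree K8s rbd volume plugin has no field for a RADOS
namespace, so the image lookup would indeed fail there; with ceph-csi the
namespace is set per StorageClass via the radosNamespace parameter.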
Hi all,
I'm trying to install a new RGW node. After executing this command:
/usr/bin/radosgw -f --cluster ceph --name client.rgw.s3-001 --setuser ceph
--setgroup ceph --keyring=/etc/ceph/ceph.client.admin.keyring --conf
/etc/ceph/ceph.conf -m 10.0.111.13
I get:
2022-11-16T15:37:39.291+01
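One thing that stands out in the command above: --name client.rgw.s3-001 is
combined with the admin keyring, which normally only holds the client.admin key,
so authentication under the rgw name may already fail there. A minimal sketch of
a dedicated keyring for that instance (path and caps are the usual
manual-deployment defaults, adjust as needed):

$ mkdir -p /var/lib/ceph/radosgw/ceph-rgw.s3-001
$ ceph auth get-or-create client.rgw.s3-001 mon 'allow rw' osd 'allow rwx' \
      -o /var/lib/ceph/radosgw/ceph-rgw.s3-001/keyring
$ chown -R ceph:ceph /var/lib/ceph/radosgw/ceph-rgw.s3-001

and then point --keyring (or the [client.rgw.s3-001] section in ceph.conf) at
that file instead of the admin keyring.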
> are you running quincy? it looks like this '/admin/info' API was new
> to that release
>
> https://docs.ceph.com/en/quincy/radosgw/adminops/#info
>
> On Fri, Jul 15, 2022 at 7:04 AM Marcus Müller wrote:
Hi all,
I’ve created a test user on our radosgw to work with the API. I’ve done the
following:
~# radosgw-admin user create --uid=testuser --display-name="testuser"
~# radosgw-admin caps add --uid=testuser --caps={caps}
"caps": [
{
"type": "amz-cache",
"perm": "
$ ceph daemon mon.ceph4 config get osd_scrub_auto_repair
{
"osd_scrub_auto_repair": "true"
}
What does this tell me now? The setting can of course be changed to false, but as
list-inconsistent-obj shows something, I would like to find the reason for that
first.
Regards
Marcus
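In case it helps, a generic way to narrow such inconsistencies down (the pool
name and PG id below are placeholders, not taken from this thread):

$ rados list-inconsistent-pg <pool>
$ rados list-inconsistent-obj <pgid> --format=json-pretty
$ ceph config get osd osd_scrub_auto_repair

The per-object output distinguishes error types (read errors vs. digest
mismatches), which usually hints whether a particular disk is failing or
something else is going on, and the config query shows the cluster-wide value
rather than just what one daemon currently reports.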
Hi all,
we recently upgraded from Ceph Luminous (12.x) to Ceph Octopus (15.x) (of
course with Mimic and Nautilus in between). Since this upgrade we see a
constant number of active+clean+scrubbing+deep+repair PGs. We never had this in
the past; now there are always some (like 10 or 20 PGs at the same time).
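A quick, generic way to see how many PGs are in that state and which OSDs they
map to (nothing here is specific to this cluster):

$ ceph health detail
$ ceph pg dump pgs_brief | grep repair

With osd_scrub_auto_repair set to true, scrubs that find repairable errors
trigger a repair automatically, so a steady stream of repair states usually
means scrubs keep finding errors rather than the repairs themselves being the
problem.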
> ... snapshots, compression, etc?
>
> You might want to consider recordsize / blocksize for the dataset where it
> would live:
>
> https://www.reddit.com/r/zfs/comments/8l20f5/zfs_record_size_is_smaller_really_better/
>
>> On Mar 2, 2022, at 10:59 AM, Marcus Müller wrote:
Hi all,
are there any recommendations for suitable filesystems for Ceph monitors?
In the past we always deployed them on ext4, but would ZFS be possible as well?
Regards,
Marcus
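For completeness, a minimal sketch of a ZFS dataset for the mon store along the
lines of the recordsize advice quoted above; the 16K value and the pool/dataset
names are my assumptions (the mon store is RocksDB, which does many small
writes), not a tested recommendation:

$ zfs create -o recordsize=16K -o compression=lz4 rpool/ceph-mon
$ zfs set mountpoint=/var/lib/ceph/mon rpool/ceph-mon

ext4/XFS remain the widely tested defaults for mon data; ZFS mainly adds
per-dataset tuning (recordsize, compression, snapshots), which is what the
quoted reply is pointing at.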