Hi all,
I think this patch might fix the problem
(https://github.com/ceph/ceph/pull/49954). It hadn't been merged for a long
time; I asked about it a few days ago and it has now been merged, so you can try it.
Best wishes
>
>
> The documentation describes that I could set a device class for an OSD with
> a command like:
>
> `ceph osd crush set-device-class CLASS OSD_ID [OSD_ID ..]`
>
> Class names can be arbitrary strings like 'big_nvme'. Before setting a new
> device class to an OSD that already has an assigned de
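If I read the docs right, the previously assigned class has to be removed
before a new one can be set. A minimal sketch (the OSD ids are made up, adjust
to your own):

  # drop the automatically assigned class (e.g. 'ssd') first
  ceph osd crush rm-device-class osd.12 osd.13
  # then assign the custom class
  ceph osd crush set-device-class big_nvme osd.12 osd.13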
Anthony,
Thank you! This is very helpful information, and thanks for the specific
advice on choosing a 64 KB min_alloc_size for these drive types. I will do
some more review, as I believe they are likely at a 4 KB min_alloc_size if
that is the default for the `ssd` device-class.
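For reference, this is roughly what I plan to run to check and change it
(assuming the stock bluestore option names; corrections welcome):

  # what new SSD-class OSDs would currently be created with
  ceph config get osd bluestore_min_alloc_size_ssd
  # raise it to 64 KiB; min_alloc_size is baked in at OSD creation time,
  # so the OSDs on these drives would need to be redeployed afterwards
  ceph config set osd bluestore_min_alloc_size_ssd 65536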
I will look to t
Ah, our old friend the P5316.
A few things to remember about these:
* 64KB IU means that you'll burn through endurance if you do a lot of writes
smaller than that. The firmware will try to coalesce smaller writes,
especially if they're sequential. You probably want to keep your RGW / CephFS
Correction: it's not so new, but it doesn't seem to be maintained:
https://github.com/ceph/ceph/commits/v17.2.6/src/pybind/mgr/rgw
Regards,
*David CASIER*
On Tue, Oct 24, 2023
Hi Michel,
(I'm just discovering the existence of this module, so it's possible I'm
making mistakes)
The rgw module is new and only seems to be there to configure multisite.
It is present on the v17.2.6 branch but I don't see it in the container for
this version.
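For anyone who wants to double-check, something along these lines should show
it (the module path is the usual packaging default, so treat it as an
assumption):

  # mgr modules shipped inside the container image
  cephadm shell -- ls /usr/share/ceph/mgr/ | grep -i rgw
  # modules the running mgr actually reports
  ceph mgr module ls | grep -i rgw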
In any case, if you're not usin
We use HAProxy in front of the Ceph RadosGWs. The logs are kicked to an ELK
stack where we can filter by those (and many more) values. IP
address/geolocation is most easily pulled from switches.
On Wed, Oct 18, 2023 at 2:07 AM Boris Behrens wrote:
> Hi,
> did someone have a solution ready to monito
I am looking to create a new pool that would be backed by a particular set
of drives that are larger NVMe SSDs (Intel SSDPF2NV153TZ, 15TB drives).
Particularly, I am wondering about what is the best way to move devices
from one pool and to direct them to be used in a new pool to be created. In
this
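My rough understanding of the sequence is that, once the drives carry their
own device class, the rest would be something like the following (names and
PG counts are placeholders, please correct me):

  # a replicated CRUSH rule that only selects OSDs of class big_nvme
  ceph osd crush rule create-replicated big_nvme_rule default host big_nvme
  # a new pool pinned to that rule
  ceph osd pool create big_nvme_pool 128 128 replicated big_nvme_rule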
Hi,
I'm trying to use the rgw mgr module to configure RGWs. Unfortunately it
is not present in the 'ceph mgr module ls' list, and any attempt to enable it
suggests that one mgr doesn't support it and that --force should be
added. Adding --force did enable it.
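For anyone else hitting this, the commands involved are roughly the following
('ceph versions' is just my guess at where the mismatch warning comes from):

  # all mgr daemons should report the same release
  ceph versions
  # enable despite the "one mgr doesn't support it" warning
  ceph mgr module enable rgw --force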
It is strange as it is a bra
Some tests:
If in Pacific 16.2.14 in
/usr/lib/python3.6/site-packages/ceph_volume/util/disk.py I disable
lines 804 and 805
804         if get_file_contents(os.path.join(_sys_block_path, dev, 'removable')) == "1":
805             continue
the command "ceph-volume inventory" works as i
Yes, this could be the reason.
Thanks for your help.
Best Regards,
Mahnoosh
On Tue, Oct 24, 2023 at 5:45 PM Casey Bodley wrote:
> i don't suppose you're using sts roles with AssumeRole?
> https://tracker.ceph.com/issues/59495 tracks a bug where each
> AssumeRole request was writing to the user met
i don't suppose you're using sts roles with AssumeRole?
https://tracker.ceph.com/issues/59495 tracks a bug where each
AssumeRole request was writing to the user metadata unnecessarily,
which would race with your admin api requests
On Tue, Oct 24, 2023 at 9:56 AM mahnoosh shahidi
wrote:
>
> Thanks
I have checked my disks as well;
all devices are hot-swappable HDDs and have the removable flag set.
/Johan
On 2023-10-24 at 13:38, Patrick Begou wrote:
Hi Eugen,
Yes Eugen, all the devices /dev/sd[abc] have the removable flag set to
1. Maybe because they are hot-swappable hard drives.
I
Thanks Casey for your explanation.
Yes, it succeeded eventually, sometimes after about 100 retries. It's odd
that it stays in a race condition for that long.
Best Regards,
Mahnoosh
On Tue, Oct 24, 2023 at 5:17 PM Casey Bodley wrote:
> errno 125 is ECANCELED, which is the code we use when w
errno 125 is ECANCELED, which is the code we use when we detect a
racing write. so it sounds like something else is modifying that user
at the same time. does it eventually succeed if you retry?
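as a blunt test, something like the loop below (assuming radosgw-admin or an
equivalent admin client; the uid is a placeholder):

  # retry a metadata-touching admin call until it stops racing
  for i in $(seq 1 20); do
      radosgw-admin user info --uid=someuser && break
      sleep 1   # small backoff before retrying on ECANCELED (-125)
  done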
On Tue, Oct 24, 2023 at 9:21 AM mahnoosh shahidi
wrote:
>
> Hi all,
>
> I couldn't understand what doe
Hi all,
I couldn't understand what status -125 means from the docs. I'm
getting a 500 response status code when I call rgw admin APIs, and the only
log in the rgw log files is as follows.
s3:get_obj recalculating target
initializing for trans_id =
tx0aa90f570fb8281cf-006537bf9e-84395fa-d
Hi,
Maybe because they are hot-swappable hard drives.
Yes, that's my assumption as well.
Quoting Patrick Begou:
Hi Eugen,
Yes Eugen, all the devices /dev/sd[abc] have the removable flag set
to 1. Maybe because they are hot-swappable hard drives.
I have contacted the commit author
Hi Eugen,
Yes Eugen, all the devices /dev/sd[abc] have the removable flag set to
1. Maybe because they are hot-swappable hard drives.
I have contacted the commit author Zack Cerza, and he asked me for some
additional tests this morning as well. I have added him in copy on this mail.
Patrick
On 24/10/
Hi,
just to confirm, could you check that the disk which is *not*
discovered by 16.2.11 has a "removable" flag?
cat /sys/block/sdX/removable
I could reproduce it as well on a test machine with a USB thumb drive
(live distro) which is excluded in 16.2.11 but is shown in 16.2.10.
Although