Hi,
just gave it a shot on Reef where the commands are available after
enabling the module. It seems to work, but I did just a few tests like
creating a realm, zonegroup and zone. This bootstrapped 2 rgw daemons
and created 3 pools (.log, .control, .meta). But then the MGRs started
to res
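For context, the sequence looks roughly like this (a minimal sketch; realm/zonegroup/zone names and the placement are placeholders, and the exact bootstrap flags may differ between Reef point releases):

  # enable the new rgw manager module
  ceph mgr module enable rgw

  # bootstrap realm, zonegroup and zone in one step and deploy rgw daemons
  ceph rgw realm bootstrap --realm-name myrealm \
      --zonegroup-name myzonegroup --zone-name myzone \
      --placement="2 host1 host2"

  # list the realm tokens created by the bootstrap
  ceph rgw realm tokens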
Hi,
Running git pull this morning I saw the patch on the main branch and tried
to compile it, but it fails with cython on rbd.pyx. I have many similar
errors:
rbd.pyx:760:44: Cannot assign type 'int (*)(uint64_t, uint64_t, void *)
except? -1' to 'librbd_progress_fn_t'. Exception values are
in
Hi all,
I have a case where I want to set options for a set of HDDs under a common
sub-tree with root A. I also have HDDs in another disjoint sub-tree with root
B. Therefore, I would like to do something like
ceph config set osd/class:hdd,datacenter:A option value
The above does not give a syn
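As far as I know, a config mask only takes a single qualifier, so device class and CRUSH location cannot be combined in one command; what does parse is something like this (osd_max_backfills is just a placeholder option):

  # all HDD OSDs, cluster-wide
  ceph config set osd/class:hdd osd_max_backfills 2

  # all OSDs under CRUSH datacenter A, regardless of device class
  ceph config set osd/datacenter:A osd_max_backfills 2

  # check which value a given daemon actually ends up with
  ceph config show osd.0 osd_max_backfills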
Hi,
this setting is not as harmless as I assumed. There seem to be more
ticks/periods/health_checks involved. When I choose a mgr_tick_period
value > 30 seconds, the two MGRs keep respawning. 30 seconds is the
highest value that still seemed to work without MGR respawn, even with
increase
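For reference, the setting in question and how to check it:

  # set the tick period for all MGR daemons (30 was the highest value
  # that worked here without the MGRs respawning)
  ceph config set mgr mgr_tick_period 30

  # verify the active value
  ceph config get mgr mgr_tick_period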
Thanks for the warning, Eugen.
/Z
On Wed, 25 Oct 2023 at 13:04, Eugen Block wrote:
> Hi,
>
> this setting is not as harmless as I assumed. There seem to be more
> ticks/periods/health_checks involved. When I choose a mgr_tick_period
> value > 30 seconds the two MGRs keep respawning. 30 seconds
Hi,
I'm struggling with a problem adding some hosts to our Quincy cluster with
cephadm. "ceph orch host add host addr" fails with the famous "missing 2
required positional arguments: 'hostname' and 'addr'" because of bug
https://tracker.ceph.com/issues/59081 but looking at cephadm messages
with "c
Hi,
We encountered the same kind of error for one of our users.
Ceph version: 16.2.10
2023-10-24T17:57:22.438+0200 7fc27ab44700 0 WARNING: set_req_state_err
err_no=125 resorting to 500
2023-10-24T17:57:22.439+0200 7fc584957700 0 req 12200560481916573577
143.735748291s ERROR: RESTFUL_IO(s)->co
Hi all,
Here are this week's notes from the CLT:
* Collective review of the Reef/Squid "State of Cephalopod" slides.
* Smoke test suite was unscheduled but it's back on now.
* Releases:
* 17.2.7: about to start building last week, delayed by a few
issues (https://tracker.ceph.com/issues/63257,
Hi Hubert,
It's error "125" (ECANCELED), and there may be many reasons for it.
I see a high latency (144 s). Is the object big?
No network problems?
Regards,
*David CASIER*
___
Hi Reto,
Thanks a lot for the instructions. I tried the same, but still couldn't
trigger scrubbing deterministically. The first time I initiated scrubbing,
I saw scrubbing status in ceph -s, but for subsequent times, I didn't see
any scrubbing status. Do you know what might be going on potentially
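For reference, the kind of commands involved (a sketch, assuming the usual per-PG scrub commands; the PG id 2.0 is a placeholder, and a requested scrub can still be deferred, e.g. by osd_max_scrubs or the scrub schedule):

  # request a (deep) scrub for one PG
  ceph pg scrub 2.0
  ceph pg deep-scrub 2.0

  # compare the scrub timestamps before and after
  ceph pg 2.0 query | grep -i scrub_stamp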
I'm fairly new to the community so I figured I'd ask about this here before
creating an issue - I'm not sure how supported this config is.
I am running rook v1.12.6 and ceph 18.2.0. I've enabled the dashboard in the
CRD and it has been working for a while. However, the charts are empty.
I do
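For what it's worth, the settings that usually feed those charts (a sketch, assuming rook-managed service names; the URLs are guesses and need to match the actual deployment):

  # make sure ceph exposes metrics at all
  ceph mgr module enable prometheus

  # point the dashboard at Grafana and Prometheus
  ceph dashboard set-grafana-api-url https://rook-ceph-grafana:3000
  ceph dashboard set-prometheus-api-host http://rook-ceph-prometheus:9090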
That's correct - it's the removable flag that's causing the disks to
be excluded.
I actually just merged this PR last week:
https://github.com/ceph/ceph/pull/49954
One of the changes it made was to enable removable (but not USB)
devices, as there are vendors that report hot-swappable drives as
re
Hi Ceph users,
currently I'm using the lua script feature in radosgw to send "put_obj" and
"get_obj" requests stats to a mongo db.
So far it's working quite well but I miss a field which is very important for
us for traffic stats.
I'm looking for the HTTP_REMOTE-ADDR field which is available in
Answering myself... I hesitated to send this email to the list as the
problem didn't seem to be related to Ceph itself but rather a
configuration problem that Ceph was a victim of. I managed to find the
problem: we are using jumbo frames on all servers but the VLAN shared by
the servers and
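For anyone hitting the same thing, a quick way to check that jumbo frames really pass end to end between two hosts (interface name and peer address are placeholders):

  # check the configured MTU
  ip link show eth0 | grep mtu

  # send a 9000-byte frame without fragmentation; 8972 = 9000 minus
  # 28 bytes of IP/ICMP headers
  ping -M do -s 8972 -c 3 192.168.10.2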
On Mon, Oct 23, 2023 at 5:15 PM Yuri Weinstein wrote:
>
> If no one has anything else left, we have all issues resolved and
> ready for the 17.2.7 release
A last-minute issue with the exporter daemon [1][2] necessitated a revert
[3]. 17.2.7 builds would need to be respinned: since the tag created
by
Another outstanding issue is https://tracker.ceph.com/issues/63305, a
compile-time issue we noticed when building on Debian Bullseye. We have raised
a small PR to fix the issue, which has been merged and is now undergoing
testing.
After this, we will be ready to rebuild 17.2.7.
On Wed, Oct 25, 2023
I have a bucket which got injected with a bucket policy that locks the
bucket even to the bucket owner. The bucket now cannot be accessed (even
getting its info or deleting the bucket policy does not work). I have looked
in the radosgw-admin command for a way to delete a bucket policy but do not
see anything.
If you have an administrative user (created with --admin), you should
be able to use its credentials with awscli to delete or overwrite this
bucket policy.
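Something along these lines should work (endpoint URL and bucket name are placeholders; the credentials have to be those of the admin user):

  # inspect the offending policy first
  aws --endpoint-url http://rgw.example.com:8080 \
      s3api get-bucket-policy --bucket lockedbucket

  # then remove it
  aws --endpoint-url http://rgw.example.com:8080 \
      s3api delete-bucket-policy --bucket lockedbucket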
On Wed, Oct 25, 2023 at 4:11 PM Wesley Dillingham
wrote:
>
> I have a bucket which got injected with bucket policy which locks the
> bucket e
Hi,
Getting an error while adding a new node/OSD with bluestore OSDs to the
cluster. The OSD is added without any host and is down; trying to bring it
up didn't work. The same method of adding works without any issue in other
clusters. Any idea what the problem is?
Ceph Version: ceph version 12.2.11
(
Thank you, I am not sure (inherited cluster). I presume such an admin user
created after-the-fact would work? Is there a good way to discover an admin
user other than iterate over all users and retrieve user information? (I
presume radosgw-admin user info --uid=" would illustrate such
administrativ
On Wed, Oct 25, 2023 at 4:59 PM Wesley Dillingham
wrote:
>
> Thank you, I am not sure (inherited cluster). I presume such an admin user
> created after-the-fact would work?
yes
> Is there a good way to discover an admin user other than iterate over all
> users and retrieve user information? (
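For the record, a sketch of both steps (uid and display name are placeholders):

  # create an administrative user after the fact
  radosgw-admin user create --uid=admin --display-name="Admin User" --admin

  # or promote an existing user
  radosgw-admin user modify --uid=someuser --admin

  # user info should then report the admin capability, e.g.
  radosgw-admin user info --uid=someuser | grep -i admin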