Please use:
https://docs.ceph.com/en/quincy/man/8/ceph-post-file/
to share debug logs from the MDS.
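For example, something along these lines (the log path is a placeholder; point it at your actual MDS log files):

# ceph-post-file -d 'mds crash logs' /var/log/ceph/ceph-mds.*.log

ceph-post-file should print a unique upload tag; include that tag in your reply or in the tracker so the files can be found.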
On Wed, Feb 22, 2023 at 4:56 PM Thomas Widhalm wrote:
>
> Ah, sorry. My bad.
>
> The MDS daemons crashed and I restarted them. And I'm waiting for them to
> crash again.
>
> There's a tracker for this or
And did you already try the other caps? Do those work?
Quote from Thomas Schneider <74cmo...@gmail.com>:
Confirmed.
# ceph versions
{
    "mon": {
        "ceph version 14.2.22 (877fa256043e4743620f4677e72dee5e738d1226) nautilus (stable)": 3
    },
    "mgr": {
        "ceph version 14.2.22 (877fa256043e4743620f4677e72dee5e738d1226) nautilus (stable)": 3
    },
    "osd": {
        "ceph version
And the ceph cluster has the same version? 'ceph versions' shows all
daemons. If the cluster is also 14.2.X the caps should work with
lower-case rbd_id. Can you confirm?
Quote from Thomas Schneider <74cmo...@gmail.com>:
This is
# ceph --version
ceph version 14.2.22 (877fa256043e4743620f4677e72dee5e738d1226) nautilus (stable)
Off the top of my head:
1. The command would take a bucket marker and a bucket name as arguments. It
might also need some additional metadata to fill in gaps.
2. Scan the data pool for head objects that refer to that bucket marker (see the sketch after this list).
3. Based on the number of such objects found, create a bucket in
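As a rough sketch of step 2 (pool name and marker are placeholders, and the naming convention should be double-checked): head objects in the data pool are named <marker>_<object>, while tail objects carry __shadow_ or __multipart_ in their names, so something like this could enumerate them:

# MARKER=<bucket-marker>   # placeholder
# rados -p default.rgw.buckets.data ls | grep "^${MARKER}_" | grep -vE '__(shadow|multipart)_'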
On Thu, Feb 23, 2023 at 3:31 PM Kuhring, Mathias wrote:
>
> Hey Ilya,
>
> I'm not sure if the things I find in the logs are actually related or useful.
> But I'm not really sure if I'm looking in the right places.
>
> I enabled "debug_ms 1" for the OSDs as suggested above.
> But this
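For reference, one way to enable that setting at runtime (a sketch; both forms should work on recent releases):

# ceph config set osd debug_ms 1

or, applied directly to the running daemons:

# ceph tell osd.* config set debug_ms 1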
On 23.02.2023 at 16:47, Eugen Block wrote:
Which ceph version is this? In a Nautilus cluster it works for me with
the lower-case rbd_id, in Pacific it doesn't. I don't have an Octopus
cluster at hand.
Quote from Eugen Block:
I tried to recreate this restrictive client access, one thing is that
the rbd_id is in lower-case. I created a test client named "TEST":
storage01:~ # rados -p pool ls | grep -vE "5473cdeb5c62c|1f553ba0f6222" | grep test
rbd_id.test
But after adding all necessary caps I'm still not allowed
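For context, the per-image caps under discussion look roughly like this sketch (client and image names are from the test above; the exact grants may need adjusting):

# ceph auth caps client.TEST \
    mon 'profile rbd' \
    osd 'allow r object_prefix rbd_id.test, allow rwx object_prefix rbd_header., allow rwx object_prefix rbd_data.'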
I'll delete the existing client "VCT" with its caps and recreate it.
Just to be sure: there's no ingress communication to the client (from the Ceph server)?
This conclusion is certainly incorrect:
# rbd ls -l hdb_backup | grep VCT
VCT 800 GiB 2
On 23.02.2023 at 16:01, Curt wrote:
What does 'rbd ls hdb_backup' return? Or is your pool VCT? If that's the
case, those should be switched: 'rbd map VCT/hdb_backup --id VCT
--keyring /etc/ceph/ceph.client.VCT.keyring'
On Thu, Feb 23, 2023 at 6:54 PM Thomas Schneider <74cmo...@gmail.com> wrote:
> Hm... I'm not sure about the
For rbd commands you don't specify the "client" prefix for the --id
parameter, just the client name, in your case "VCT". Your second
approach shows a different error message, so it can connect with "VCT"
successfully, but the permissions seem not to be sufficient. Those
caps look very restr
Hm... I'm not sure about the correct rbd command syntax, but I thought
mine was correct.
Anyway, using a different ID fails, too:
# rbd map hdb_backup/VCT --id client.VCT --keyring /etc/ceph/ceph.client.VCT.keyring
rbd: couldn't connect to the cluster!
# rbd map hdb_backup/VCT --id VCT --keyring
You don't specify which client in your rbd command:
rbd map hdb_backup/VCT --id client --keyring /etc/ceph/ceph.client.VCT.keyring
Have you tried this (not sure about upper-case client names, haven't
tried that)?
rbd map hdb_backup/VCT --id VCT --keyring /etc/ceph/ceph.client.VCT.keyring
Hello,
I'm trying to map an RBD image using rbd map, but I get this error message:
# rbd map hdb_backup/VCT --id client --keyring /etc/ceph/ceph.client.VCT.keyring
rbd: couldn't connect to the cluster!
Checking on the Ceph server, the required permissions for the relevant keyring exist:
# ceph-authtool -l /et
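A quick server-side sanity check (a sketch): compare the key stored in the local keyring with what the cluster actually has registered for that client:

# ceph auth get client.VCT

A mismatch between the two keys would also produce a connection failure like this.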
After reading a lot about it I still don't understand how this happened and
what I can do to fix it.
This trims the pg log, but not the duplicates:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-41 --op trim-pg-log --pgid 8.664
I also tried to recreate the OSDs (sync out, crush rm, wi
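If the build includes it (I believe an op for the duplicates was added around 17.2.4 specifically for this problem, so please verify against your version), something along these lines may help:

ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-41 --op trim-pg-log-dups --pgid 8.664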
Yes, it's still:
ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x135) [0x7f6bf079e43f]
2: /usr/lib64/ceph/libceph-common.so.2(+0x269605) [0x7f6bf079e605]
3: (interval_set::erase(inoden
Hi,
On 22.02.23 17:45, J. Eric Ivancich wrote:
You also asked why there’s not a command to scan the data pool and
recreate the bucket index. I think the concept would work as all head
objects include the bucket marker in their names. There might be some
corner cases where it’d partially fail,
Hi,
Our cluster runs Pacific on Rocky 8. We have 3 RGW daemons running on port 7480.
I tried to set up an ingress service with a YAML service definition, but no luck:

service_type: ingress
service_id: rgw.myceph.be
placement:
  hosts:
    - ceph001
    - ceph002
    - ceph003
spec:
  backend_service: rgw
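For comparison, a minimal spec along the lines of the cephadm docs (the virtual IP and ports below are placeholders); note that backend_service must name the actual RGW service as reported by 'ceph orch ls' (e.g. rgw.myceph.be), not just 'rgw':

service_type: ingress
service_id: rgw.myceph.be
placement:
  hosts:
    - ceph001
    - ceph002
    - ceph003
spec:
  backend_service: rgw.myceph.be
  virtual_ip: 192.0.2.10/24
  frontend_port: 8080
  monitor_port: 1967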