[ceph-users] RGW is slow after the ops increase

2023-04-14 Thread Louis Koo
Check the ops for RGW:

[root@node06 ceph]# ceph daemon /var/run/ceph/ceph-client.rgw.os.dsglczutvqsgowpz.a.13.93908447458760.asok objecter_requests | jq ".ops" | jq 'length'
8

List a subdirectory with s5cmd:

[root@node01 deeproute]# time ./s5cmd --endpoint-url=http://10.x.x.x:80 ls s3://mlp-data-wareho
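For reference, a minimal sketch of the same two checks; the socket path, endpoint, and bucket below are placeholders, not the poster's values:

```
# Count in-flight objecter requests on an RGW admin socket
# (socket path is a placeholder).
ceph daemon /var/run/ceph/ceph-client.rgw.<name>.asok objecter_requests | jq '.ops | length'

# Time a bucket listing through the RGW endpoint with s5cmd
# (endpoint and bucket are placeholders).
time s5cmd --endpoint-url=http://<rgw-host>:80 ls s3://<bucket>/
```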

[ceph-users] Osd crash, looks like something related to PG recovery.

2023-04-14 Thread Louis Koo
{ "archived": "2023-04-13 02:23:50.948191", "backtrace": [ "/lib64/libpthread.so.0(+0x12ce0) [0x7f2ee8198ce0]", "pthread_kill()", "(ceph::HeartbeatMap::_check(ceph::heartbeat_handle_d const*, char const*, std::chrono::time_point > >)+0x48c) [0x563506e9934c]",

[ceph-users] radosgw crash

2023-04-14 Thread Louis Koo
{ "archived": "2023-04-09 01:22:40.755345", "backtrace": [ "/lib64/libpthread.so.0(+0x12ce0) [0x7f06dc1edce0]", "(boost::asio::detail::reactive_socket_service_base::start_op(boost::asio::detail::reactive_socket_service_base::base_implementation_type&, int, boost::asio::det

[ceph-users] rookcmd: failed to configure devices: failed to generate osd keyring: failed to get or create auth key for client.bootstrap-osd:

2023-04-14 Thread knawaz
I am deploying Ceph via Rook on a K8s cluster with the following versions: ceph-version=17.2.5-0, Ubuntu 20.04, kernel 5.4.0-135-generic. But I am getting the following error. Has ceph-version=17.2.5-0 been tested with Ubuntu 20.04 running kernel 5.4.0-135-generic? Where can I find the compatibility matrix? 20
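One way to check whether the bootstrap key actually exists is to query it from the toolbox pod; a sketch assuming a default Rook install (namespace and deployment names may differ):

```
# Ask the cluster for the bootstrap-osd key via the rook-ceph-tools pod.
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph auth get client.bootstrap-osd
```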

[ceph-users] OSDs remain not in after update to v17

2023-04-14 Thread Alexandre Becholey
Dear Ceph Users, I have a small Ceph cluster for VMs on my local machine. It used to be installed with the system packages, and I migrated it to Docker following the documentation. It worked OK until I migrated from v16 to v17 a few months ago. Now the OSDs remain "not in", as shown in the status
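A sketch of the usual first checks for OSDs stuck out of the cluster; the OSD id is a placeholder:

```
# Show the OSD tree; OSDs marked out appear with REWEIGHT 0.
ceph osd tree
# Manually mark an OSD back in (id is a placeholder).
ceph osd in 0
```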

[ceph-users] v16.2.12 Pacific (hot-fix) released

2023-04-14 Thread Yuri Weinstein
We're happy to announce the 12th hot-fix release in the Pacific series. https://ceph.io/en/news/blog/2023/v16-2-12-pacific-released/

Notable Changes
---
This is a hotfix release that resolves several performance flaws in ceph-volume, particularly during osd activation (https://tracker
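For cephadm-managed clusters, a minimal sketch of picking up the hotfix (assumes a healthy cluster and a cephadm deployment):

```
# Start an orchestrated upgrade to the hotfix release and watch progress.
ceph orch upgrade start --ceph-version 16.2.12
ceph orch upgrade status
```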

[ceph-users] Re: Nothing provides libthrift-0.14.0.so()(64bit)

2023-04-14 Thread Will Nilges
Oops, forgot to mention that I'm installing Ceph 17.2.6, preempting an upgrade of our cluster from 15.2.17 to 17.2.6.

[ceph-users] Nothing provides libthrift-0.14.0.so()(64bit)

2023-04-14 Thread Will Nilges
Hello! I'm trying to install the ceph-common package on a Rocky Linux 9 box so that I can connect to our Ceph cluster and mount user directories. I've added the Ceph repo to yum.repos.d, but when I run `dnf install ceph-common`, I get the following error:

```
[root@jet yum.repos.d]# dnf install ceph
```
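On EL9, libthrift is commonly provided by EPEL rather than the base repos, so one possible fix (an assumption, not confirmed in this thread) is enabling CRB and EPEL before retrying:

```
# Enable the CRB and EPEL repos, then retry the install.
dnf config-manager --set-enabled crb
dnf install -y epel-release
dnf install -y ceph-common
```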

[ceph-users] Re: 17.2.6 Dashboard/RGW Signature Mismatch

2023-04-14 Thread Chris Palmer
I've finally solved this. There has been a change in behaviour in 17.2.6. For cluster 2 (the one that failed):
* When they were built, the hosts were configured with a hostname without a domain (so hostname returned a short name)
* The hosts as reported by ceph all had short hostnames
* In
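A quick sketch for comparing short vs. fully qualified hostnames against what Ceph has recorded (the last command assumes a cephadm-managed cluster):

```
# Short name vs. FQDN on the host itself.
hostname
hostname -f
# Hostnames as registered with the orchestrator.
ceph orch host ls
```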

[ceph-users] Re: Restrict user to an RBD image in a pool

2023-04-14 Thread Eugen Block
Hi, this is a common question; you should be able to find plenty of examples. Here's one [1]. Regards, Eugen [1] https://www.spinics.net/lists/ceph-users/msg76020.html Quoting Work Ceph: Hello guys! Is it possible to restrict user access to a single image in an RBD pool? I know that
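Along the lines of the linked example, a sketch of scoping a user's OSD caps to one namespace (pool, namespace, and user names are placeholders):

```
# Create a namespace and a user whose rbd profile is limited to it.
rbd namespace create rbd/project1
ceph auth get-or-create client.project1 \
    mon 'profile rbd' \
    osd 'profile rbd pool=rbd namespace=project1'
```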

[ceph-users] Re: Cephadm only scheduling, not orchestrating daemons

2023-04-14 Thread Eugen Block
Hi, I would probably stop the upgrade first, as it might be blocking cephadm. Then try again to redeploy a daemon; if it still fails, check the cephadm.log(s) on the respective servers as well as the active mgr log. Regards, Eugen Quoting Thomas Widhalm: Hi, As you might know, I
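A sketch of those two steps (the daemon name is a placeholder):

```
# Cancel a possibly stuck upgrade so cephadm can schedule other work.
ceph orch upgrade stop
# Retry redeploying the failing daemon.
ceph orch daemon redeploy mgr.node1.abcdef
```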

[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-04-14 Thread Lokendra Rathour
Hi Team, there is one additional observation. Mounting as the client works fine from one of the Ceph nodes. Command: sudo mount -t ceph :/ /mnt/imgs -o name=foo,secret=AQABDzRkTaJCEhAAC7rC6E68ofwdfULnx6qX/VDA== We are not passing the monitor address; instead, DNS SRV is configured as per:
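For context, a sketch of the same mount using a secret file instead of an inline key; with no monitor address given, the kernel client falls back to DNS SRV discovery (the _ceph-mon._tcp record, per the default mon_dns_srv_name):

```
# Mount without listing monitors; they are resolved via DNS SRV records.
sudo mount -t ceph :/ /mnt/imgs -o name=foo,secretfile=/etc/ceph/foo.secret
```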

[ceph-users] Restrict user to an RBD image in a pool

2023-04-14 Thread Work Ceph
Hello guys! Is it possible to restrict user access to a single image in an RBD pool? I know that I can use namespaces, so users can only see images within a given namespace. However, these users will still be able to create new RBD images. Is it possible to somehow block users from creating RBD im
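To illustrate the namespace behaviour described above, a sketch of creating and listing an image inside a namespace (names are placeholders; the client.project1 caps follow the earlier sketch in this digest):

```
# Create an image inside the namespace, then list it as the scoped user.
rbd create --size 10G rbd/project1/disk1
rbd --id project1 ls rbd/project1
```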

[ceph-users] Re: ceph pg stuck - missing on 1 osd how to proceed

2023-04-14 Thread Eugen Block
Hi, your cluster is in a backfilling state; maybe just wait for the backfill to finish? What is 'ceph -s' reporting? The PG could be backfilling to a different OSD as well. You could query the PG to see more details ('ceph pg 8.2a6 query'). By the way, the PGs you show are huge (around 174 GB
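A sketch of those checks in order (the PG id is taken from the thread):

```
# Overall cluster and recovery status.
ceph -s
# Detailed state for the PG, including backfill targets.
ceph pg 8.2a6 query
# All PGs currently backfilling.
ceph pg ls backfilling
```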