Check the in-flight ops for RGW:
[root@node06 ceph]# ceph daemon \
  /var/run/ceph/ceph-client.rgw.os.dsglczutvqsgowpz.a.13.93908447458760.asok \
  objecter_requests | jq ".ops" | jq 'length'
8
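For reference, a sketch of how that count can be pulled in a single jq pass and watched over time (same socket path as above; the one-second interval is just an example):

# one jq invocation instead of two
ceph daemon /var/run/ceph/ceph-client.rgw.os.dsglczutvqsgowpz.a.13.93908447458760.asok objecter_requests | jq '.ops | length'
# refresh the count every second
watch -n 1 "ceph daemon /var/run/ceph/ceph-client.rgw.os.dsglczutvqsgowpz.a.13.93908447458760.asok objecter_requests | jq '.ops | length'"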
List a subdirectory with s5cmd:
[root@node01 deeproute]# time ./s5cmd --endpoint-url=http://10.x.x.x:80 ls
s3://mlp-data-wareho
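(The bucket name above is cut off; the sketch below uses a made-up bucket and prefix just to show the shape of the command.)

# hypothetical bucket and prefix; the wildcard asks s5cmd to match keys under the prefix
time ./s5cmd --endpoint-url=http://10.x.x.x:80 ls "s3://example-bucket/some/prefix/*"
# plain prefix listing without a wildcard
time ./s5cmd --endpoint-url=http://10.x.x.x:80 ls s3://example-bucket/some/prefix/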
{
  "archived": "2023-04-13 02:23:50.948191",
  "backtrace": [
    "/lib64/libpthread.so.0(+0x12ce0) [0x7f2ee8198ce0]",
    "pthread_kill()",
    "(ceph::HeartbeatMap::_check(ceph::heartbeat_handle_d const*, char const*, std::chrono::time_point > >)+0x48c) [0x563506e9934c]",
{
  "archived": "2023-04-09 01:22:40.755345",
  "backtrace": [
    "/lib64/libpthread.so.0(+0x12ce0) [0x7f06dc1edce0]",
    "(boost::asio::detail::reactive_socket_service_base::start_op(boost::asio::detail::reactive_socket_service_base::base_implementation_type&, int, boost::asio::det
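Those look like excerpts from the crash module; for reference, a sketch of how such reports are usually listed and acknowledged (crash IDs come from the first command, none are taken from this thread):

ceph crash ls                   # all recorded crashes, archived or not
ceph crash ls-new               # only crashes that have not been acknowledged yet
ceph crash info <crash-id>      # full report, including a backtrace like the above
ceph crash archive <crash-id>   # acknowledge one crash
ceph crash archive-all          # acknowledge everything at once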
I am deploying Ceph via Rook on a K8s cluster with the following version matrix:
ceph-version=17.2.5-0
Ubuntu 20.04
Kernel 5.4.0-135-generic
But I'm getting the following error. Has ceph-version=17.2.5-0 been tested with
Ubuntu 20.04 running kernel 5.4.0-135-generic? Where can I find the compatibility matrix?
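For context, a sketch of how to confirm which Ceph image and daemon versions the Rook cluster is actually running (assumes the default rook-ceph namespace and that the toolbox deployment is installed):

# image requested in the CephCluster CR
kubectl -n rook-ceph get cephcluster -o jsonpath='{.items[0].spec.cephVersion.image}'
# versions the daemons themselves report
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph versions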
Dear Ceph Users,
I have a small ceph cluster for VMs on my local machine. It used to be
installed with the system packages and I migrated it to docker following the
documentation. It worked OK until I migrated from v16 to v17 a few months ago.
Now the OSDs remain "not in", as shown in the status output.
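For context, a sketch of the checks that usually narrow this down (the OSD id is just an example):

ceph -s            # overall cluster state
ceph osd tree      # which OSDs are up/down and in/out
ceph osd df        # whether the "out" OSDs still hold data
ceph osd in 0      # mark a single OSD back in (example id 0)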
We're happy to announce the 12th hot-fix release in the Pacific series.
https://ceph.io/en/news/blog/2023/v16-2-12-pacific-released/
Notable Changes
---
This is a hotfix release that resolves several performance flaws in ceph-volume,
particularly during osd activation (https://tracker
Oops, forgot to mention that I'm installing Ceph 17.2.6, ahead of an
upgrade of our cluster from 15.2.17 to 17.2.6.
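For context, a sketch of the cephadm-managed path for that kind of jump, assuming the cluster is orchestrated by cephadm:

ceph orch upgrade start --ceph-version 17.2.6
ceph orch upgrade status        # current target and progress
ceph -W cephadm                 # follow cephadm's event log while it runs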
Hello!
I'm trying to install the ceph-common package on a Rocky Linux 9 box so
that I can connect to our Ceph cluster and mount user directories. I've
added the Ceph repo to yum.repos.d, but when I run `dnf install
ceph-common`, I get the following error:
```
[root@jet yum.repos.d]# dnf install ceph
```
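For what it's worth, a sketch of a repo definition that generally works on EL9 (the release in the URL is an assumption, adjust it to the version you actually need):

cat > /etc/yum.repos.d/ceph.repo <<'EOF'
[ceph]
name=Ceph packages for x86_64
baseurl=https://download.ceph.com/rpm-quincy/el9/x86_64
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
EOF
dnf install -y epel-release    # ceph-common pulls dependencies from EPEL
dnf install -y ceph-common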
I've finally solved this. There has been a change in behaviour in 17.2.6.
For cluster 2 (the one that failed):
* When they were built, the hosts were configured with a hostname
without a domain (so hostname returned a short name)
* The hosts as reported by ceph all had short hostnames
* In
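A sketch of quick checks for the short-name vs. FQDN mismatch described above:

hostname           # short name as configured on the host
hostname -f        # FQDN, if a domain is configured at all
ceph orch host ls  # hostnames as cephadm knows them; these must match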
Hi,
this is a common question, you should be able to find plenty of
examples, here's one [1].
Regards,
Eugen
[1] https://www.spinics.net/lists/ceph-users/msg76020.html
Quoting Work Ceph:
Hello guys!
Is it possible to restrict user access to a single image in an RBD pool? I
know that
Hi,
I would probably stop the upgrade first, as it might be blocking
cephadm. Then try again to redeploy the daemon; if it still fails, check
the cephadm.log(s) on the respective servers as well as the active mgr
log.
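A sketch of the commands that sequence usually maps to (the daemon name is a placeholder):

ceph orch upgrade stop                               # pause the running upgrade
ceph orch daemon redeploy mds.cephfs.node01.xyzabc   # placeholder daemon name
ceph -W cephadm --watch-debug                        # follow cephadm while it retries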
Regards,
Eugen
Quoting Thomas Widhalm:
Hi,
As you might know, I
Hi Team,
there is one additional observation.
Mounting as the client works fine from one of the Ceph nodes.
Command: sudo mount -t ceph :/ /mnt/imgs -o
name=foo,secret=AQABDzRkTaJCEhAAC7rC6E68ofwdfULnx6qX/VDA==
We are not passing the monitor address; instead, DNS SRV is configured as
per:
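For context, a sketch of what that SRV setup typically looks like (default service name ceph-mon; the zone name below is made up):

# records Ceph clients look up when no monitor address / mon_host is given
dig +short SRV _ceph-mon._tcp.example.internal
# expected answers point at the monitors, e.g.:
#   10 20 6789 mon01.example.internal.
#   10 20 6789 mon02.example.internal.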
Hello guys!
Is it possible to restrict user access to a single image in an RBD pool? I
know that I can use namespaces, so users can only see images with a given
namespace. However, these users will still be able to create new RBD
images.
Is it possible to somehow block users from creating RBD im
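For reference, a sketch of the namespace-based restriction mentioned above (pool and namespace names are made up); whether image creation inside the namespace can also be blocked is exactly the open question here:

# create a namespace inside an existing pool
rbd namespace create rbdpool/tenant1
# cephx user confined to that namespace via the rbd profile
ceph auth get-or-create client.tenant1 mon 'profile rbd' osd 'profile rbd pool=rbdpool namespace=tenant1'
# that user only sees images under rbdpool/tenant1
rbd --id tenant1 ls rbdpool/tenant1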
Hi,
your cluster is in backfilling state, maybe just wait for the backfill
to finish? What is 'ceph -s' reporting? The PG could be backfilling to
a different OSD as well. You could query the PG to see more details
('ceph pg 8.2a6 query').
By the way, the PGs you show are huge (around 174 GB
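A sketch of how the backfill for that PG can be followed (8.2a6 is the PG id from the thread):

ceph -s                                     # overall recovery/backfill progress
ceph pg ls backfilling                      # every PG currently backfilling
ceph pg map 8.2a6                           # current up/acting OSD sets
ceph pg 8.2a6 query | jq '.recovery_state'  # where this PG thinks it is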