Can you send the output of sudo ceph -s and sudo ceph health detail?
- Original Message -
From: nguyenvand...@baoviet.com.vn
To: ceph-users@ceph.io
At: 02/23/24 20:27:53 UTC-05:00
Could you please guide me in more detail? I'm very new to Ceph.
Have just upgraded a cluster from 17.2.7 to 18.2.1.
Everything is working as expected apart from the number of scrubs & deep
scrubs, which is bouncing all over the place every second.
I have the value set to 1 per OSD, but currently the cluster reckons it is
doing 60+ scrubs one minute, and then the count changes again a second later.
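Assuming the per-OSD limit mentioned above is osd_max_scrubs, a quick sanity-check sketch of what is configured versus what is actually being reported:
# what the cluster thinks the per-OSD limit is
ceph config get osd osd_max_scrubs
# count PGs currently reporting a scrub in their state
ceph pg dump pgs_brief 2>/dev/null | grep -c scrubbing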
Look at ALL CephFS kernel clients (this has no effect on RGW).
On Fri, 23 Feb 2024 at 16:38, wrote:
> And we don't have a parameters folder
>
> cd /sys/module/ceph/
> [root@cephgw01 ceph]# ls
> coresize holders initsize initstate notes refcnt rhelversion
> sections srcversion taint uevent
>
> My
EC 2+2 & 4+2, HDD only.
On Tue, 20 Feb 2024, 00:25 Anthony D'Atri, wrote:
> After wrangling with this myself, both with 17.2.7 and to an extent with
> 17.2.5, I'd like to follow up here and ask:
>
> Those who have experienced this, were the affected PGs
>
> * Part of an EC pool?
> * Part of an H
And we don't have a parameters folder:
cd /sys/module/ceph/
[root@cephgw01 ceph]# ls
coresize holders initsize initstate notes refcnt rhelversion sections
srcversion taint uevent
My Ceph is 16.2.4
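For what it's worth, a quick way to check whether the kernel client on a given host supports that parameter at all (my assumption is that older kernels simply don't expose it, hence the missing directory):
# does the loaded ceph module expose any parameters?
ls /sys/module/ceph/parameters/ 2>/dev/null || echo "no parameters directory"
# does the installed module advertise the metrics parameter?
modinfo ceph | grep -i disable_send_metrics || echo "parameter not available in this kernel"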
Thanks Giada, I see that you created
https://tracker.ceph.com/issues/64547 for this.
Unfortunately, this topic metadata doesn't really have a permission
model at all. Topics are shared across the entire tenant, and all
users have access to read/overwrite those topics.
A lot of work was done for htt
Hi David,
Could you please help me understand:
does it affect the RGW service? And if something goes bad, how can I roll back?
Thank you for your time :) Have a good day, sir
Dear Eugen,
We have followed the workaround here:
https://tracker.ceph.com/issues/58082#note-11
And the cluster is healthy again; the K8S workloads are back.
# ceph status
  cluster:
    id:     fcb373ce-7aaa-11eb-984f-e7c6e0038e87
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum rke-sh1-
Hey ceph-users,
I just noticed issues with ceph-crash using the Debian/Ubuntu packages
(package: ceph-base):
While the /var/lib/ceph/crash/posted folder is created by the package
install, it's not properly chowned to ceph:ceph by the postinst script.
This might also affect RPM-based install
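A minimal workaround sketch until the packaging is fixed, assuming ownership is the only problem (paths as described above):
# let ceph-crash (running as ceph:ceph) move reports into 'posted'
chown ceph:ceph /var/lib/ceph/crash /var/lib/ceph/crash/posted
systemctl restart ceph-crash.service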
Hello,
You can use the RGW admin API (enabled_apis=admin,….) and get the usage from
there.
https://docs.ceph.com/en/latest/radosgw/adminops/
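A small sketch of what that looks like in practice (user id is a placeholder; the usage log has to be enabled for this to return data):
# usage per user/bucket via the CLI (same data the admin ops API serves)
radosgw-admin usage show --uid=<user-id> --show-log-entries=false
# over the admin ops API: GET /admin/usage?uid=<user-id>
# (signed with the S3 keys of an admin user that has the "usage=read" capability;
#  requires rgw_enable_usage_log = true)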
Best regards
> On 15 Feb 2024, at 06:48, asad.siddi...@rapidcompute.com wrote:
>
> Hi,
>
> I am currently working on Ceph object storage and would lik
Hello Eugen,
We used to have cache tiering (hdd+ssd) for OpenStack nova/glance in the past,
before we moved to NVMe hardware. But we were not able to evict all objects,
because it required shutting down all virtual instances and then doing the
eviction. So we decided to set the cache mode to "proxy" an
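For reference, the usual tier-removal sequence looks roughly like this (pool names are placeholders; this assumes the cache can actually be flushed, which was exactly the problem described above):
# stop promoting new objects into the cache, then try to drain it
ceph osd tier cache-mode <cache-pool> proxy
rados -p <cache-pool> cache-flush-evict-all
# once empty, detach the tier from the base pool
ceph osd tier remove-overlay <base-pool>
ceph osd tier remove <base-pool> <cache-pool>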
Hi,
A bit of history might help to understand why we have the cache tier.
We have been running OpenStack on top of Ceph for many years now (we started
with Mimic, then upgraded to Nautilus two years ago, and today upgraded to
Pacific). At the beginning of the setup, we used to have a mix of hdd+ssd devices i
Hi ceph-users,
I currently use Ceph Octopus to provide CephFS & S3 Storage for our app
servers, deployed in containers by ceph-ansible. I'm planning to take an
upgrade to get off Ceph Octopus as it's EOL.
I'd love to go straight to reef, but vaguely remember reading a statement that
only two m
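For what it's worth, my understanding (please verify against the release notes) is that an upgrade may only span two major releases, so Octopus (15) can go to Quincy (17) and then Quincy to Reef (18), but not Octopus to Reef directly. A quick check before planning the hops:
# confirm what every daemon is actually running
ceph versions
# and that no legacy setting blocks the next jump
ceph osd dump | grep require_osd_release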
For us, we see this for both EC 3,2 and 3-way replication pools, but all on
HDD. Our SSD usage is very small, though.
On Mon, Feb 19, 2024 at 10:18 PM Anthony D'Atri
wrote:
>
>
> >> After wrangling with this myself, both with 17.2.7 and to an extent
> with 17.2.5, I'd like to follow up here and as
What exactly does the "osd pool repair" function do?
The documentation is not clear.
Kind regards,
AP
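Not an authoritative answer, but as far as I understand it, it is just the pool-wide analogue of per-PG repair: it asks every PG in the pool to scrub and attempt to fix any inconsistencies found. Roughly:
# schedule a repair on every PG in the pool (as I understand it)
ceph osd pool repair <pool-name>
# per-PG equivalent for a single inconsistent PG
ceph pg repair <pg-id>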
Hello everyone,
we are facing a problem with topic operations for sending
notifications, particularly when using the AMQP protocol.
We are using Ceph version 18.2.1. We have created a topic, giving as
attributes all the needed information, including the push-endpoint (in our case
a rabbit endpoint
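For comparison, this is roughly how a topic with an AMQP push-endpoint is created through the SNS-compatible API (endpoint, credentials, exchange and topic names below are placeholders, not taken from your setup):
# create a topic pointing at a RabbitMQ endpoint via the RGW SNS API
aws --endpoint-url http://<rgw-host>:8080 sns create-topic \
    --name rabbit-topic \
    --attributes '{"push-endpoint":"amqp://user:password@<rabbit-host>:5672","amqp-exchange":"ex1","amqp-ack-level":"broker"}'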
Hi,
I have a CephFS cluster:
```
> ceph -s
  cluster:
    id:     e78987f2-ef1c-11ed-897d-cf8c255417f0
    health: HEALTH_WARN
            85 pgs not deep-scrubbed in time
            85 pgs not scrubbed in time
  services:
    mon: 5 daemons, quorum datastone05,datastone06,datastone07,datastone
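A sketch of the usual first steps for the "not (deep-)scrubbed in time" warnings, assuming the cluster is otherwise healthy:
# list exactly which PGs are overdue
ceph health detail
# manually kick a deep scrub on one of the listed PGs
ceph pg deep-scrub <pg-id>
# if scrubbing can't keep up in general, allow more concurrent scrubs per OSD
ceph config set osd osd_max_scrubs 2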
hi vladimir,
thanks for answering ... of course, we will build a 3-DC (tiebreaker or
server) setup.
i'm not sure what to do about "disaster recovery".
is it realistic that a ceph cluster can be completely broken?
kind regards,
ronny
--
Ronny Lippold
System Administrator
--
Spark 5 GmbH
Rheinstr. 9
I have configured Ceph S3 encryption, and the configuration was created
successfully. However, when I try to upload a file to the bucket, the request
fails. Could you please guide me on how to properly configure it?
I followed this link:
https://docs.ceph.com/en/quincy/radosgw/v
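To narrow down whether the failure is specific to encryption, a minimal test sketch (bucket, file and endpoint are placeholders; adjust the SSE flags to whichever mode you actually configured):
# plain upload first, to confirm the bucket itself works
aws --endpoint-url http://<rgw-host>:8080 s3api put-object \
    --bucket test-bucket --key plain.txt --body ./plain.txt
# same upload requesting SSE-S3 style encryption
aws --endpoint-url http://<rgw-host>:8080 s3api put-object \
    --bucket test-bucket --key encrypted.txt --body ./plain.txt \
    --server-side-encryption AES256
# for SSE-KMS instead: --server-side-encryption aws:kms --ssekms-key-id <key-name>
Comparing the two responses (and the RGW log for the failed request) usually shows whether the key backend or the request itself is the problem.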
Hi
I'm currently working with Ceph object storage (version Reef), and I'd like to
know how we can set up alerts/notifications for buckets when they become full.
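As far as I know there is no built-in "bucket is full" notification event, so the usual approach is a quota plus external monitoring of bucket stats; a rough sketch (user/bucket names and the 100 GiB limit are placeholders):
# default per-bucket quota for the user's buckets (100 GiB)
radosgw-admin quota set --quota-scope=bucket --uid=<user-id> --max-size=107374182400
radosgw-admin quota enable --quota-scope=bucket --uid=<user-id>
# poll current usage to drive an alert (e.g. from a cron job or Prometheus exporter)
radosgw-admin bucket stats --bucket=<bucket-name>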
Hi,
I am currently working on Ceph object storage and would like to inquire about
how we can calculate the ingress and egress traffic for buckets/tenant via API.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html
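In practice that typically boils down to canned ACLs, e.g. (endpoint and bucket names are placeholders):
# make a bucket private (owner-only)
aws --endpoint-url http://<rgw-host>:8080 s3api put-bucket-acl --bucket my-bucket --acl private
# make it publicly readable
aws --endpoint-url http://<rgw-host>:8080 s3api put-bucket-acl --bucket my-bucket --acl public-read
# same idea for a single object
aws --endpoint-url http://<rgw-host>:8080 s3api put-object-acl --bucket my-bucket --key file.txt --acl public-read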
From: asad.siddi...@rapidcompute.com At: 02/23/24 09:42:29 UTC-5:00 To:
ceph-users@ceph.io
Subject: [ceph-users] Issue with Setting Public/Private Permissions for Bucket
Hi Team,
I'm currently working with Ceph object stora
Hi Reza,
I know this is an old thread, but I am running into a similar issue with the
same error messages. Were you able to get around the upgrade issue? If so,
what helped resolve it?
Thanks!
Team,
We were facing a CephFS volume mount issue, and ceph status was showing
MDS slow requests and
MDS behind on trimming.
After restarting the MDS pods it was resolved,
but we wanted to know the root cause of this.
It started about 2 hours after one of the active MDSs crashed.
So does that an activ
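For next time it happens, a few hedged starting points for digging into the slow-request / trimming warnings while they are active (MDS name is a placeholder):
# which MDS is complaining and how far behind it is
ceph health detail
ceph fs status
# on the host running the affected MDS, dump the stuck operations
ceph daemon mds.<name> dump_ops_in_flight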
Hi Team,
I'm currently working with Ceph object storage and would like to understand how
to set permissions to private or public on buckets/objects in Ceph object
storage.
It works for me on 17.2.6 as well. Could you be more specific about what
doesn't work for you? Running that command only removes the cluster
configs etc. on that host; it does not orchestrate a removal on all
hosts. Not sure if you're aware of that.
Quoting Vahideh Alinouri:
The version that
2024-02-23T08:15:13.155+ 7fbc145d2700 -1 log_channel(cluster)
log [ERR] : failed to commit dir 0x1 object, errno -22
2024-02-23T08:15:13.155+ 7fbc145d2700 -1 mds.0.12487 unhandled
write error (22) Invalid argument, force readonly...
Was your cephfs metadata pool full? This tracker
(
Hi,
The problem seems to come from the clients (reconnect).
Test by disabling metrics on all clients:
echo Y > /sys/module/ceph/parameters/disable_send_metrics
Best regards,
*David CASIER*
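A rough sketch of rolling that out across many kernel clients, assuming root SSH access and that the running kernels expose the parameter (host names and file paths are placeholders):
# apply at runtime on every client host
for h in client01 client02 client03; do
    ssh root@"$h" 'echo Y > /sys/module/ceph/parameters/disable_send_metrics'
done
# on each client, persist the setting across reboots / module reloads
echo 'options ceph disable_send_metrics=Y' > /etc/modprobe.d/ceph-disable-metrics.conf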
Hi Eugen,
Thanks for the reply, really appreciated.
The first command just hangs with no output:
# cephfs-journal-tool --rank=cephfs:0 --journal=mdlog journal inspect
The second command:
# cephfs-journal-tool --rank=cephfs:0 --journal=purge_queue journal inspect
Overall journal integrity: OK
ro
Hi,
the MDS log should contain information about why it goes into read-only
mode. Just a few weeks ago I helped a user with a broken CephFS (MDS
went into read-only mode because of missing objects in the journal).
Can you check the journal status:
# cephfs-journal-tool --rank=cephfs:0 --journal
Dear Ceph Community,
I am having an issue with my Ceph cluster. Several OSDs were crashing, but
they are now active and recovery has finished. However, the CephFS filesystem
cannot be accessed by clients in RW (K8S workload), as 1 MDS is read-only and
2 are trimming.
The CephFS seems to h
Hi,
Does no one have any comment at all?
I'm not picky, so any speculation, guessing, "I would", "I wouldn't", "should
work" and so on would be highly appreciated.
Since 4 out of 6 in EC 4+2 are OK and ceph pg repair doesn't solve it, I
think the following might work.
pg 404.bc acting [223,297,269,276,
Which ceph version is this? In a small Reef test cluster this works as
expected:
# cephadm rm-cluster --fsid 2851404a-d09a-11ee-9aaa-fa163e2de51a --zap-osds --force
Using recent ceph image
registry.cloud.hh.nde.ag/ebl/ceph-upstream@sha256:057e08bf8d2d20742173a571bc28b65674b055bebe5f4c6cd488
This seems to be the relevant stack trace:
---snip---
Feb 23 15:18:39 cephgw02 conmon[2158052]: debug -1>
2024-02-23T08:18:39.609+ 7fccc03c0700 -1
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic
https://drive.google.com/file/d/1OIN5O2Vj0iWfEMJ2fyHN_xV6fpknBmym/view?usp=sharing
Please check my MDS log, which was generated by the command:
cephadm logs --name mds.cephfs.cephgw02.qqsavr --fsid
258af72a-cff3-11eb-a261-d4f5ef25154c
Hi Guys,
I faced an issue: when I wanted to purge the cluster, it was not purged
using the commands below:
ceph mgr module disable cephadm
cephadm rm-cluster --force --zap-osds --fsid
The OSDs remain. There should be some cleanup method for the
whole cluster, not just the MON nodes. Is there anyt
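A hedged cleanup sketch, based on the point made elsewhere in the thread that rm-cluster only acts on the host it runs on (host, fsid and device names are placeholders):
# repeat the removal on every host that carried daemons, not just the MONs
ssh root@<host> cephadm rm-cluster --force --zap-osds --fsid <fsid>
# wipe any leftover OSD LVs/devices by hand if zapping was skipped or failed
ceph-volume lvm zap --destroy /dev/sdX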
You still haven't provided any details (logs) of what happened. The
short excerpt from yesterday isn't useful as it only shows the startup
of the daemon.
Quoting nguyenvand...@baoviet.com.vn:
Could you please help me explain the status of the volume: recovering?
What is it? And do we need to