[ceph-users] HELP! Cluster usage increased after adding new nodes/osd's

2025-07-07 Thread mhnx
Hello! A few years ago I built a "dc-a:12 + dc-b:12 = 24" node Ceph cluster on Nautilus v14.2.16. A year ago the cluster was upgraded to Octopus and it was running fine. Recently I added 4+4=8 new nodes with identical hardware and SSD drives. When I created the OSDs with Octopus, the cluster usage increas
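
When raw usage jumps after adding OSDs, a first step is to compare pool-level accounting with per-OSD fill; a minimal diagnostic sketch (standard commands only, nothing cluster-specific assumed):

    # overall raw capacity vs. per-pool stored/used
    ceph df detail
    # per-OSD utilization laid out along the CRUSH tree
    ceph osd df tree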

[ceph-users] Help in upgrading CEPH

2025-05-15 Thread Sergio Rabellino
Dear list, we are upgrading our Ceph infrastructure from Mimic to Octopus (please be kind, we know we are working with "old" tools, but these Ceph releases are tied to our OpenStack installation's needs) and _*all*_ the Ceph actors (mon/mgr/osd/rgw - no mds as we do not serve a filesystem)
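
For reference, the usual rolling-upgrade order is mons, then mgrs, then OSDs, then RGWs; a hedged sketch (the package/restart mechanics depend on the distro and deployment tooling):

    ceph osd set noout                      # avoid rebalancing during restarts
    # upgrade and restart each ceph-mon, then each ceph-mgr, then each ceph-osd
    ceph versions                           # confirm every daemon reports octopus
    ceph osd require-osd-release octopus    # only after ALL OSDs are upgraded
    ceph osd unset noout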

[ceph-users] Help with HA NFS

2025-04-22 Thread Devin A. Bougie
Hello, We’ve found that if we lose one of the nfs.cephfs service daemons in our cephadm 19.2.2 cluster, all NFS traffic is blocked until either: - the down nfs.cephfs daemon is restarted - or we reconfigure the placement of the nfs.cephfs service to not use the affected host. After this, the ing
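
For HA, the NFS daemons are normally fronted by a cephadm ingress service (haproxy/keepalived) so clients mount a virtual IP rather than a single host; a minimal sketch, where the virtual IP and port numbers are placeholders, not values from the post:

    service_type: ingress
    service_id: nfs.cephfs
    placement:
      count: 2
    spec:
      backend_service: nfs.cephfs
      frontend_port: 2049
      monitor_port: 9000
      virtual_ip: 192.0.2.10/24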

[ceph-users] Help needed: s3cmd setacl command produces S3 error: 400 (InvalidArgument) in squid ceph version.

2025-01-20 Thread Saif Mohammad
Hello Community, We are trying to set an ACL on one of the objects within a bucket to make it public, using the s3cmd command below, but we are unable to set it on the Squid Ceph version; on Reef the same command worked and we were able to make it public. Please let
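
For comparison, the usual form of the command (bucket and object names here are placeholders):

    s3cmd setacl s3://mybucket/myobject --acl-public
    # verify the resulting grants
    s3cmd info s3://mybucket/myobject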

[ceph-users] Help in recreating a old ceph cluster

2025-01-12 Thread Jayant Dang
Hi, Can someone help me understand what happens in a scenario where the OS disks of all my Ceph nodes get destroyed somehow and I am only left with the OSDs (the physical storage devices)? How can I recreate the same Ceph cluster using those old OSDs without any data loss? Is there something I should reg
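
There is a documented disaster-recovery procedure for rebuilding the monitor store from surviving OSDs; very roughly, as a hedged sketch (paths and OSD ids are placeholders, and the full procedure in the docs has several more steps):

    # accumulate cluster maps from every surviving OSD into a fresh mon store
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
        --op update-mon-db --mon-store-path /tmp/mon-store
    # ...repeat for each OSD, then rebuild the store
    ceph-monstore-tool /tmp/mon-store rebuild -- --keyring /path/to/admin.keyring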

[ceph-users] Help needed, ceph fs down due to large stray dir

2025-01-10 Thread Frank Schilder
Hi all, we seem to have a serious issue with our file system; the Ceph version is latest Pacific. After a large cleanup operation we had an MDS rank with 100M stray entries (yes, one hundred million). Today we restarted this daemon, which cleans up the stray entries. It seems that this leads to a
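
The stray count can be watched while the daemon churns through them; a small sketch (assumes rank 0 and that jq is available):

    ceph tell mds.0 perf dump | jq '.mds_cache.num_strays'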

[ceph-users] Help with "27 osd(s) are not reachable" when also "27 osds: 27 up.. 27 in"

2024-10-14 Thread Harry G Coin
I need help removing a useless "HEALTH_ERR" in 19.2.0 on a fully dual-stack Docker setup, with Ceph using IPv6, public and private networks separated, and a few servers. After upgrading from an error-free v18 release, I can't get rid of the 'health err' owing to the report that all OSDs are unreach
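
In a dual-stack setup, a "not reachable" report usually traces back to how the networks and address families are declared; a hedged checklist of things to compare:

    ceph config get mon public_network
    ceph config get global ms_bind_ipv4
    ceph config get global ms_bind_ipv6
    # what addresses an OSD actually registered
    ceph osd metadata 0 | grep -E 'front_addr|back_addr'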

[ceph-users] Help with cephadm bootstrap and ssh private key location

2024-09-22 Thread Kozakis, Anestis
Hi All, Very new to Ceph and hoping someone can help me out. We are implementing Ceph in our team's environment, and I have been able to manually set up a test cluster using cephadm bootstrap and answering all the prompts. What we want to do is automate the setup and maintenance of the prod
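
cephadm bootstrap can be pointed at pre-generated SSH keys, which makes the run non-interactive enough for automation; a sketch (the IP and key paths are placeholders):

    ssh-keygen -t ed25519 -f /root/.ssh/ceph_automation -N ''
    cephadm bootstrap --mon-ip 192.0.2.1 \
        --ssh-private-key /root/.ssh/ceph_automation \
        --ssh-public-key /root/.ssh/ceph_automation.pub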

[ceph-users] Help with osd spec needed

2024-08-01 Thread Kristaps Cudars
3 nodes, each with: 3 hdd – 21G, 1 ssd – 80G. Create OSDs containing block_data with a 15G block_db located on the SSD - this part works. Create a block_data OSD on the remaining 35G of the SSD - this part is not working. ceph orch apply osd -i /path/to/osd_spec.yml service_type: osd service_id: osd_spec_hdd place
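
A common shape for the HDD-plus-shared-DB part of such a spec is sketched below (an illustration, not the poster's exact file); note that carving the SSD's remaining space into a standalone OSD is exactly the part that tends not to work, because a device already used for DB slots is generally not offered as a data device again:

    service_type: osd
    service_id: osd_spec_hdd
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        rotational: 1
      db_devices:
        rotational: 0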

[ceph-users] Help with Mirroring

2024-07-11 Thread Dave Hall
Hello. I would like to use mirroring to facilitate migrating from an existing Nautilus cluster to a new cluster running Reef. Right now I'm looking at RBD mirroring. I have studied the RBD Mirroring section of the documentation, but it is unclear to me which commands need to be issued on each cl
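
As rough orientation (hedged: the peer-bootstrap commands exist from Octopus onward, so on a Nautilus source the peer may have to be added manually with rbd mirror pool peer add instead):

    # on the source cluster
    rbd mirror pool enable mypool image
    rbd mirror pool peer bootstrap create --site-name site-a mypool > /tmp/token
    # on the destination cluster, which also runs the rbd-mirror daemon
    rbd mirror pool enable mypool image
    rbd mirror pool peer bootstrap import --site-name site-b \
        --direction rx-only mypool /tmp/token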

[ceph-users] Help needed please ! Filesystem became read-only !

2024-06-03 Thread nbarbier
Hello, First of all, thanks for reading my message. I set up a Ceph version 18.2.2 cluster with 4 nodes; everything went fine for a while, but after copying some files, the storage showed a warning status and the following message: "HEALTH_WARN: 1 MDSs are read only mds.PVE-CZ235007SH(mds.0):

[ceph-users] Help needed! First MDs crashing, then MONs. How to recover ?

2024-05-28 Thread Noe P.
Hi, we ran into a bigger problem today with our Ceph cluster (Quincy, Alma 8.9). We have 4 filesystems and a total of 6 MDSs, the largest fs having two ranks assigned (i.e. one standby). Since we often have the problem of MDSs lagging behind, we restart the MDSs occasionally. That usually helps; the stan

[ceph-users] Help with deep scrub warnings

2024-03-04 Thread Nicola Mori
Dear Ceph users, in order to reduce the deep scrub load on my cluster I set the deep scrub interval to 2 weeks, and tuned other parameters as follows: # ceph config get osd osd_deep_scrub_interval 1209600.00 # ceph config get osd osd_scrub_sleep 0.10 # ceph config get osd osd_scrub_loa
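
One frequent gotcha is that the interval must be visible to the mons/mgr too (setting it only in the osd section leaves the health check on the old deadline), and the "not deep-scrubbed in time" warning adds a grace ratio on top of the interval; a hedged sketch:

    # make the interval global rather than osd-only
    ceph config set global osd_deep_scrub_interval 1209600
    # the warning fires a fraction of the interval past the due date
    ceph config get mon mon_warn_pg_not_deep_scrubbed_ratio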

[ceph-users] help me understand ceph snapshot sizes

2024-02-22 Thread garcetto
good morning, I am trying to understand Ceph snapshot sizing. For example, if I have a 2.7 GB volume and I create a snap on it, the sizing says: (BEFORE SNAP) rbd du volumes/volume-d954915c-1dc1-41cb-8bf0-0c67e7b6e080 NAME PROVISIONED USED volume-d954915c-1dc1-41cb-8bf0-0c67e7b6e080 10 GiB 2.7 Gi
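
As far as I understand rbd du semantics: right after a snapshot is taken, the snapshot row pins the extents written so far (the 2.7 GiB), and the head image accrues USED again only as blocks are overwritten or added, so the total stays at 2.7 GiB until new writes land. A sketch (volume name is a placeholder):

    rbd snap create volumes/myvolume@snap1
    rbd du volumes/myvolume     # one row per snapshot, plus the head image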

[ceph-users] Help with setting-up Influx MGR module: ERROR - queue is full

2024-02-13 Thread Fulvio Galeazzi
Hi there! Does anyone have any experience with the Influx Ceph mgr module? I am using 17.2.7 on CentOS 8 Stream. I configured one of my clusters and I test with "ceph influx send" (whereas the official doc https://docs.ceph.com/en/quincy/mgr/influx/ mentions the non-existent "ceph influx self-test"), but no
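
For what it's worth, the module is configured through mgr config keys; a minimal sketch (hostname and credentials are placeholders):

    ceph mgr module enable influx
    ceph config set mgr mgr/influx/hostname influx.example.com
    ceph config set mgr mgr/influx/database ceph
    ceph config set mgr mgr/influx/username ceph
    ceph config set mgr mgr/influx/password secret
    ceph influx send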

[ceph-users] Help: Balancing Ceph OSDs with different capacity

2024-02-07 Thread Jasper Tan
Hi, I have recently onboarded new OSDs into my Ceph cluster. Previously, I had 44 OSDs of 1.7 TiB each and had been using them for about a year. About 1 year ago, we onboarded an additional 20 OSDs of 14 TiB each. However, I observed that much of the data was still being written onto the original 1.7 TiB OS
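
CRUSH weights already scale with capacity, so uneven placement is usually a balancing problem; the standard approach is the upmap balancer, sketched below (upmap requires all clients to be Luminous or newer):

    ceph osd set-require-min-compat-client luminous
    ceph balancer mode upmap
    ceph balancer on
    ceph balancer status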

[ceph-users] Help on rgw metrics (was rgw_user_counters_cache)

2024-01-31 Thread garcetto
good morning, I was struggling to understand why I cannot find this setting on my Reef version; is it because it is only in the latest dev Ceph version and not before? https://docs.ceph.com/en/latest/radosgw/metrics/#user-bucket-counter-caches Reef gives 404 https://docs.ceph.com/en/reef

[ceph-users] Help needed with Grafana password

2023-11-08 Thread Sake Ceph
I configured a password for Grafana because I want to use Loki. I used the spec parameter initial_admin_password and this works fine for a staging environment, where I had never tried to use Grafana with a password for Loki. Using the username admin with the configured password gives a credential
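
One caveat that I believe applies here: initial_admin_password is only honored when Grafana initializes a fresh data directory, so a Grafana first deployed without it keeps its old admin credentials until redeployed; a hedged sketch (if the old data directory survives the redeploy, the password may instead need resetting inside the container with grafana-cli):

    ceph orch apply -i grafana-spec.yaml   # spec carrying initial_admin_password
    ceph orch redeploy grafana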

[ceph-users] help, ceph fs status stuck with no response

2023-08-07 Thread Zhang Bao
Hi, my Ceph cluster is stuck at `ceph --verbose stats fs fsname`. In the monitor log, I can find something like `audit [DBG] from='client.431973 -' entity='client.admin' cmd=[{"prefix": "fs status", "fs": "fsname", "target": ["mon-mgr", ""]}]: dispatch`. What happened and what should I do? --
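
`fs status` is served by the active mgr (the dispatch to "mon-mgr" in the log shows that), so when only this command hangs, a mgr failover is a low-risk first try; a sketch (on older releases, pass the active mgr's name to the fail command):

    ceph mgr fail            # forces a standby mgr to take over
    ceph fs status fsname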

[ceph-users] [Help appreciated] ceph mds damaged

2023-05-23 Thread Justin Li
Dear All, After an unsuccessful upgrade to Pacific, the MDSs went offline and could not come back up. I checked the MDS log and found the below; see cluster info below as well. I'd appreciate it if anyone can point me in the right direction. Thanks. MDS log: 2023-05-24T06:21:36.831+1000 7efe56e7d700 1 m
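
Before attempting any repair, it is worth capturing the journal state read-only; a hedged sketch (the filesystem name is a placeholder, and the actual recovery steps in the disaster-recovery docs are destructive and should not be run blindly):

    ceph fs status
    cephfs-journal-tool --rank=cephfs:0 journal inspect
    cephfs-journal-tool --rank=cephfs:0 journal export /root/journal-backup.bin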

[ceph-users] Help needed to configure erasure coding LRC plugin

2023-04-04 Thread Michel Jouvin
Hi, As discussed in another thread (Crushmap rule for multi-datacenter erasure coding), I'm trying to create an EC pool spanning 3 datacenters (the datacenters are present in the crushmap), with the objective of being resilient to 1 DC down, at least keeping read-only access to the pool and, if po
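
For orientation, an LRC profile with a datacenter locality level is declared roughly like this (the k/m/l values are placeholders to illustrate the shape, not a validated layout for 3 DCs):

    ceph osd erasure-code-profile set lrc_dc plugin=lrc \
        k=4 m=2 l=3 \
        crush-failure-domain=host crush-locality=datacenter
    ceph osd pool create lrcpool 64 64 erasure lrc_dc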

[ceph-users] HELP NEEDED : cephadm adopt osd crash

2022-11-08 Thread Patrick Vranckx
Hi, We've already converted two PRODUCTION storage nodes on Octopus to cephadm without problems. On the third one, we succeeded in converting only one OSD. [root@server4 osd]# cephadm adopt --style legacy --name osd.0 Found online OSD at //var/lib/ceph/osd/ceph-0/fsid objectstore_type is bluestore

[ceph-users] Help - Multiple OSD's Down

2022-01-05 Thread Lee
Looking for some help as this is production-affecting. We run a 3-node cluster with a mix of 5x SSD, 15x SATA and 5x SAS in each node, running 15.2.15, all using DB/WAL on NVMe SSD except the SSDs. Earlier today I increased the PG num from 32 to 128 on one of our pools, due to the status complaining
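
The usual triage for a backfill storm after a PG split is to pause data movement and throttle recovery before restarting the down OSDs; a sketch of the standard knobs:

    ceph osd set nobackfill
    ceph osd set norebalance
    ceph config set osd osd_max_backfills 1
    ceph config set osd osd_recovery_max_active 1
    # once the OSDs are stable again:
    ceph osd unset nobackfill && ceph osd unset norebalance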

[ceph-users] Help needed to recover 3node-cluster

2022-01-03 Thread Mini Serve
Hi, we have a 3-node Ceph cluster, installed a long time ago by another team. We recently had to reinstall the OS (due to a disk failure) on one of them (node 3), so we lost all configs on that node. As all the other hard drives on node 3 are intact, after installing fresh Ceph, "cephadm ceph-volume lvm lis
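
Since the OSD data and their LVM tags survived, reactivation rather than recreation is the goal; a hedged sketch (the osd activate subcommand exists only on newer releases, and the keyrings/ceph.conf must be restored on node3 first):

    cephadm ceph-volume lvm list        # confirm the surviving OSDs are seen
    ceph cephadm osd activate node3     # newer releases
    # or, outside cephadm:
    ceph-volume lvm activate --all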

[ceph-users] Help !!!

2021-11-14 Thread Innocent Onwukanjo
Hi! I need help setting up the domain name for my company's Ceph dashboard. I tried using NGINX, but it would only display the Ceph dashboard over HTTP and logging in doesn't work; using HTTPS returns a 5xx error message. Our domain name is from Cloudflare and also has SSL enabled. Please help. T
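
A reverse proxy in front of the dashboard has to speak TLS to the active mgr and skip verification of its self-signed cert; a hedged NGINX sketch (hostnames, ports and cert paths are placeholders, and the dashboard's standby-to-active redirect can still bounce clients to a non-proxied mgr name):

    server {
        listen 443 ssl;
        server_name ceph.example.com;
        ssl_certificate     /etc/nginx/certs/ceph.pem;
        ssl_certificate_key /etc/nginx/certs/ceph.key;
        location / {
            proxy_pass https://mgr-host:8443;
            proxy_ssl_verify off;
            proxy_set_header Host $host;
        }
    }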

[ceph-users] Help

2020-08-17 Thread Randy Morgan
We are seeking information on configuring Ceph to work with Noobaa and NextCloud. Randy -- Randy Morgan CSR Department of Chemistry/BioChemistry Brigham Young University ran...@chem.byu.edu

[ceph-users] help me enable ceph iscsi gateway in ceph octopus

2020-08-04 Thread David Thuong
Please help me enable the Ceph iSCSI gateway in Ceph Octopus. When I finished installing Ceph, I saw the iSCSI gateway was not enabled. Please help me configure it.
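
Under Octopus cephadm the gateway is deployed through the orchestrator rather than enabled at install time; a hedged sketch (pool name, credentials and placement are placeholders):

    ceph osd pool create iscsi-pool 32
    ceph osd pool application enable iscsi-pool rbd
    ceph orch apply iscsi iscsi-pool admin_user admin_password \
        --placement="host1 host2"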

[ceph-users] help with deleting errant iscsi gateway

2020-08-04 Thread Sharad Mehrotra
Hi: I am using Ceph Nautilus with CentOS 7.6 and working on adding a pair of iSCSI gateways to our cluster, following the documentation here: https://docs.ceph.com/docs/master/rbd/iscsi-target-cli/ I was in the "Configuring" section, step #3, "Create the iSCSI gateways", and ran into problems. Whe

[ceph-users] Help add node to cluster using cephadm

2020-07-21 Thread davidthuong2424
I need help adding a node when installing Ceph with cephadm. When I run `ceph orch add host ceph2` I get: Error ENOENT: New host ceph2 (ceph2) failed check: ['Traceback (most recent call last):', ... Please help me fix it. Thanks & Best Regards David
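
That check usually fails because the new host cannot be reached with the cluster's SSH key; the documented fix is to distribute the key first (run from a node with the admin keyring):

    ceph cephadm get-pub-key > ~/ceph.pub
    ssh-copy-id -f -i ~/ceph.pub root@ceph2
    ceph orch host add ceph2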

[ceph-users] help with failed osds after reboot

2020-06-11 Thread Seth Duncan
I had 5 of 10 OSDs fail on one of my nodes; after a reboot, the other 5 OSDs failed to start. I have tried running ceph-disk activate-all and get back an error message about the cluster fsid not matching in /etc/ceph/ceph.conf. Has anyone experienced an issue such as this?
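
ceph-disk was deprecated and later removed in favor of ceph-volume; if these OSDs were created with ceph-disk, the documented migration path is ceph-volume's "simple" mode, sketched here (run on the affected node, assuming /etc/ceph/ceph.conf carries the correct cluster fsid):

    grep fsid /etc/ceph/ceph.conf        # compare against `ceph fsid`
    ceph-volume simple scan
    ceph-volume simple activate --all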

[ceph-users] Help! ceph-mon is blocked after shutting down and ip address changed

2020-05-29 Thread occj
ceph version 14.2.4 (75f4de193b3ea58512f204623e6c5a16e6c1e1ba) nautilus (stable), OS: CentOS Linux release 7.7.1908 (Core). A single-node Ceph cluster with 1 mon, 1 mgr, 1 mds, 1 rgw and 12 OSDs, but only CephFS is used. ceph -s is blocked after shutting down the machine (192.168.0.104); then the IP add
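
Changing a mon's IP is normally done by editing the monmap offline; the documented sequence, roughly (mon id and addresses are placeholders, run with the mon daemon stopped):

    ceph-mon -i mon-a --extract-monmap /tmp/monmap
    monmaptool --print /tmp/monmap
    monmaptool --rm mon-a /tmp/monmap
    monmaptool --add mon-a 192.168.0.105:6789 /tmp/monmap
    ceph-mon -i mon-a --inject-monmap /tmp/monmap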

[ceph-users] Help

2020-05-29 Thread Sumit Gaur
..." > > Today's Topics: > >1. Help (Sumit Gaur) > > > ------ > > Date: Tue, 29 Oct 2019 07:06:17 +1100 > From: Sumit Gaur > Subject: [ceph-users] Help > To: ceph-users@ceph.io > Message-ID: > udqwc9wbna...@

[ceph-users] Help: corrupt pg

2020-03-25 Thread Jake Grimmett
Dear All, We are "in a bit of a pickle"... No reply to my message (23/03/2020),  subject  "OSD: FAILED ceph_assert(clone_size.count(clone))" So I'm presuming it's not possible to recover the crashed OSD This is bad news, as one pg may be lost, (we are using EC 8+2, pg dump shows [NONE,NONE,

[ceph-users] HELP! Ceph (v14.2.8) bucket notification does not work!

2020-03-12 Thread 曹 海旺
Hi, I upgraded Ceph from 14.2.7 to the new version 14.2.8. Bucket notification does not work: I can't create a TOPIC. I use Postman to send a POST following https://docs.ceph.com/docs/master/radosgw/notifications/#create-a-topic REQUEST: POST http://rgw1:7480/?Action=CreateTopi
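
One thing worth ruling out: topic creation on RGW must be a signed (AWS-auth) request, so a bare Postman POST is rejected; a hedged sketch using the third-party awscurl tool (endpoint and credentials are placeholders, and whether 14.2.8 accepts exactly this form is part of the question):

    awscurl --service s3 --access_key ACCESS --secret_key SECRET \
        -X POST "http://rgw1:7480/?Action=CreateTopic&Name=mytopic"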

[ceph-users] Help

2019-10-28 Thread Sumit Gaur

[ceph-users] Help unsubscribe please

2019-10-22 Thread Sumit Gaur

[ceph-users] help

2019-10-11 Thread Jörg Kastning

[ceph-users] HELP! Way too much space consumption with ceph-fuse using erasure code data pool under highly concurrent writing operations

2019-09-27 Thread daihongbo
Hi, We observed that up to 10 times the space is consumed in a concurrent 200-file iozone write test with an erasure-coded (k=8, m=4) data pool, mounted with ceph-fuse, but disk usage is normal if there is only one writing task. Furthermore, everything is normal using a replicated data pool, no
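
The arithmetic makes the amplification plausible: with k=8, each small write is split into 8 data chunks plus 4 parity chunks, and BlueStore pads every chunk to min_alloc_size; a worked example assuming the older HDD default of 64 KiB (an assumption, not a value from the post):

    64 KiB client write / 8 data chunks  =  8 KiB per chunk
    12 chunks * 64 KiB allocation        =  768 KiB on disk
    768 KiB / 64 KiB                     =  12x amplification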

[ceph-users] help

2019-08-29 Thread Amudhan P
Hi, I am using ceph version 13.2.6 (mimic) on a test setup, trying out CephFS. My ceph health status shows a warning. "ceph health" HEALTH_WARN Degraded data redundancy: 1197023/7723191 objects degraded (15.499%) "ceph health detail" HEALTH_WARN Degraded data redundancy: 1197128/7723191 objects d