[ceph-users] Add nats_adapter

2023-10-30 Thread Vahideh Alinouri
rsion. Best regards, Vahideh Alinouri

[ceph-users] cephadm purge cluster does not work

2024-02-23 Thread Vahideh Alinouri
Hi guys, I faced an issue: the cluster was not purged using the commands below: ceph mgr module disable cephadm, then cephadm rm-cluster --force --zap-osds --fsid. The OSDs remain. There should be a cleanup method for the whole cluster, not just the MON nodes. Is there anyt
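
For context, a minimal teardown sketch, assuming the cluster fsid is known and every host is reachable; cephadm rm-cluster only cleans the host it is run on, so it has to be repeated per host (the fsid and device name below are placeholders):

    # on one node: stop cephadm from redeploying daemons
    ceph mgr module disable cephadm
    # on every host in the cluster, not only the MON nodes
    cephadm rm-cluster --force --zap-osds --fsid <fsid>
    # if LVM volumes survive on an OSD host, zap the device explicitly (destructive)
    ceph-volume lvm zap --destroy /dev/sdX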

[ceph-users] Setup Ceph over RDMA

2024-04-08 Thread Vahideh Alinouri
Hi guys, I need to set up Ceph over RDMA, but I have faced many issues! Info about my cluster: the Ceph version is Reef, and the network cards are Broadcom RDMA NICs. The RDMA connection between OSD nodes is OK. I only found the ms_type = async+rdma setting in the documentation and applied it using ceph config set global ms_type asyn
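
For reference, a hedged sketch of the usual RDMA messenger settings; the option names appear in the Ceph options definitions quoted in the reply below, but the device name is only an example and the exact combination needed for Broadcom RoCE NICs may differ:

    # typically placed in /etc/ceph/ceph.conf on every node, since the messenger
    # type must be known before a daemon can reach the MONs to fetch config
    [global]
    ms_type = async+rdma
    ms_async_rdma_device_name = bnxt_re0   # example device name, not from the thread
    ms_async_rdma_cm = true                # RDMA-CM is commonly needed for RoCE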

[ceph-users] Add node-exporter using ceph orch

2024-04-26 Thread Vahideh Alinouri
Hi guys, I have tried to add node-exporter to a new host in the Ceph cluster using the command mentioned in the documentation: ceph orch apply node-exporter hostname. I think there is a functionality issue, because the cephadm log prints that node-exporter was applied successfully, but it didn't work! I tried the bel
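
A few hedged checks that usually narrow this down (service and daemon names as used by cephadm):

    # was the spec accepted, and is a daemon scheduled on the new host?
    ceph orch ls node-exporter
    ceph orch ps --daemon-type node-exporter
    # force a redeploy if the spec was applied but no daemon ever started
    ceph orch redeploy node-exporter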

[ceph-users] Re: Setup Ceph over RDMA

2024-04-26 Thread Vahideh Alinouri
ent

    - name: ms_async_rdma_cm
      type: bool
      level: advanced
      default: false
      with_legacy: true
    - name: ms_async_rdma_type
      type: str
      level: advanced
      default: ib
      with_legacy: true

This causes confusion, and the RDMA setup needs more detail in the documentation. Regards On Mon, Apr 8, 2024 at 10:06 AM Vahideh Alinouri wro

[ceph-users] header_limit in AsioFrontend class

2023-06-17 Thread Vahideh Alinouri
configurable option introduced to set the header_limit value, and the default value is 16384. I would greatly appreciate it if someone from the Ceph development team could backport this change to the older version. Best regards, Vahideh Alinouri

[ceph-users] Recover pgs from failed osds

2020-08-27 Thread Vahideh Alinouri
The Ceph cluster was updated from Nautilus to Octopus. On the ceph-osd nodes we have high I/O wait. After increasing one pool's pg_num from 64 to 128 in response to the warning message (too many objects per PG), this led to high CPU load and RAM usage on the ceph-osd nodes and finally crashed the whole cluster. Thre
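
The replies below focus on osd_memory_target; a hedged example of lowering it cluster-wide before restarting the failed OSDs (the recovery throttles are an extra precaution added here, not something from the thread; values are illustrative):

    ceph config set osd osd_memory_target 3221225472   # 3 GiB, in bytes
    ceph config set osd osd_max_backfills 1
    ceph config set osd osd_recovery_max_active 1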

[ceph-users] Re: Recover pgs from failed osds

2020-08-28 Thread Vahideh Alinouri
f that helps bring the OSDs back up. Splitting the PGs is a > very heavy operation. > > > Zitat von Vahideh Alinouri : > > > Ceph cluster is updated from nautilus to octopus. On ceph-osd nodes we > have > > high I/O wait. > > > > After increasing one of pool’s

[ceph-users] Re: Recover pgs from failed osds

2020-08-28 Thread Vahideh Alinouri
to reduce the memory_target to 3 > GB and see if they start successfully. > > > Zitat von Vahideh Alinouri : > > > osd_memory_target is 4294967296. > > Cluster setup: > > 3 mon, 3 mgr, 21 osds on 3 ceph-osd nodes in lvm scenario. ceph-osd > nodes > > resourc

[ceph-users] Re: Recover pgs from failed osds

2020-08-31 Thread Vahideh Alinouri
osd_memory_target was changed to 3G; starting the failed OSD causes the ceph-osd nodes to crash, and the failed OSD is still "down". On Fri, Aug 28, 2020 at 1:13 PM Vahideh Alinouri wrote: > Yes, each osd node has 7 osds with 4 GB memory_target. > > > On Fri, Aug 28, 2020, 12:48 PM Eugen B

[ceph-users] Re: Recover pgs from failed osds

2020-08-31 Thread Vahideh Alinouri
the opposite and turn up the memory_target and only try to > start a single OSD? > > > Zitat von Vahideh Alinouri : > > > osd_memory_target is changed to 3G, starting failed osd causes ceph-osd > > nodes crash! and failed osd is still "down" > &

[ceph-users] Re: Recover pgs from failed osds

2020-09-01 Thread Vahideh Alinouri
One of the failed OSDs with 3G RAM started, and dump_mempools shows total RAM usage of 18G, with buffer_anon using 17G! On Mon, Aug 31, 2020 at 6:24 PM Vahideh Alinouri wrote: > osd_memory_target of failed osd in one ceph-osd node changed to 6G but > other osd_memory_target is 3G, starting fail
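
For reference, a hedged example of how such a mempool dump can be taken on the OSD host (the OSD id is a placeholder):

    # via the ceph CLI on the host running the daemon
    ceph daemon osd.7 dump_mempools
    # or directly against the admin socket
    ceph --admin-daemon /var/run/ceph/ceph-osd.7.asok dump_mempools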

[ceph-users] Re: Recover pgs from failed osds

2020-09-01 Thread Vahideh Alinouri
Is there any solution or advice? On Tue, Sep 1, 2020, 11:53 AM Vahideh Alinouri wrote: > One of failed osd with 3G RAM started and dump_mempools shows total RAM > usage is 18G and buff_anon uses 17G RAM! > > On Mon, Aug 31, 2020 at 6:24 PM Vahideh Alinouri < > vahideh.alino..

[ceph-users] Re: Recover pgs from failed osds

2020-09-05 Thread Vahideh Alinouri
ceph.io/thread/EDL7U5EWFHSFK5IIBRBNAIXX7IFWR5QK/ > [2] > > https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/F5MOI47FIVSFHULNNPWEJAY6LLDOVUJQ/ > > > Zitat von Vahideh Alinouri : > > > Is not any solution or advice? > > > > On Tue, Sep 1, 2020,

[ceph-users] Re: RGW sizing in multisite and rgw_run_sync_thread

2024-12-26 Thread Vahideh Alinouri
Hi, You might find this blog post helpful: IBM Storage Ceph Object Storage Multisite - Part 1. The number of RGWs and their configurations depend on the lo
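
As a hedged illustration of the rgw_run_sync_thread split named in the subject, sync work can be confined to dedicated RGW daemons while client-facing daemons opt out (the config section names are examples only):

    ceph config set client.rgw.zone1.sync   rgw_run_sync_thread true
    ceph config set client.rgw.zone1.client rgw_run_sync_thread false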

[ceph-users] RGW multisite metadata sync issue

2024-12-15 Thread Vahideh Alinouri
Hi guys, My Ceph release is Quincy 17.2.5. I need to change the master zone to decommission the old one and upgrade all zones. I have separated the client traffic and sync traffic in RGWs, meaning there are separate RGW daemons handling the sync process. I encountered an issue when trying to sync
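
When debugging this kind of issue, the sync state is usually inspected first; a minimal sketch, run on the zone that is behind:

    radosgw-admin sync status
    radosgw-admin metadata sync status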

[ceph-users] Re: RGW multisite metadata sync issue

2024-12-16 Thread Vahideh Alinouri
og virtual int RGWDataChangesFIFO::list(const DoutPrefixProvider*, int, int, std::vector&, std::optional >, std::string*, bool*): unable to list FIFO: data_log.44: (34) Numerical result out of range On Sun, Dec 15, 2024 at 10:45 PM Vahideh Alinouri < vahideh.alino...@gmail.com> wrote: > H

[ceph-users] Re: RGW multisite metadata sync issue

2024-12-17 Thread Vahideh Alinouri
": 0, "marker": "", "next_step_marker": "1_1730469205.875723_87748.1", "total_entries": 174, "pos": 0, "timestamp": "2024-11-01T13:53:25.875723Z"

[ceph-users] Re: RGW multisite metadata sync issue

2025-01-24 Thread Vahideh Alinouri
The metadata sync issue has been resolved by changing the master zone and re-running the metadata sync. On Mon, Dec 23, 2024 at 2:15 PM Vahideh Alinouri wrote: > When I increased the debug level of the RGW sync client to 20, I get it: > > 2024-12-23T09:42:17.248+ 7f1248
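
A hedged sketch of the recovery described above, with zone names as placeholders; the promotion is committed on the new master, and metadata sync is restarted on the peer zone:

    # on the zone being promoted
    radosgw-admin zone modify --rgw-zone=<new-master> --master --default
    radosgw-admin period update --commit
    # on the non-master zone, restart metadata sync from scratch
    radosgw-admin metadata sync init
    radosgw-admin metadata sync run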

[ceph-users] Re: Error ENOENT: Module not found

2025-01-25 Thread Vahideh Alinouri
Try the following command to fix the osd_remove_queue: ceph config-key set mgr/cephadm/osd_remove_queue [] After that, the orchestrator will be back. On Sat, Jan 25, 2025, 8:47 PM Devender Singh wrote: > +Eugen > Lets follow “No recovery after removing node - > active+undersized+degraded-- rem
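
A hedged sketch of that fix, with the empty list quoted for the shell and a mgr restart added as an assumption so the cephadm module reloads cleanly:

    ceph config-key set mgr/cephadm/osd_remove_queue '[]'
    ceph mgr fail
    ceph orch status   # the orchestrator should respond again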

[ceph-users] Re: RGW multisite metadata sync issue

2024-12-23 Thread Vahideh Alinouri
anted to point out that the .meta pool uses namespaces. > > > - Attempted to list metadata in the pool using rados ls -p > > s3-cdn-dc07.rgw.meta, but got an empty result. > > Try this instead: > > rados -p s3-cdn-dc07.rgw.meta ls --all > > Do you have a specific o
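
To make the namespace point concrete, a hedged example using the pool name from the thread; the namespace shown is an assumption about where RGW keeps realm/zone metadata:

    # list objects across all namespaces of the metadata pool
    rados -p s3-cdn-dc07.rgw.meta ls --all
    # list a single namespace
    rados -p s3-cdn-dc07.rgw.meta --namespace=root ls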