[ceph-users] Re: Problems adding a new host via orchestration.

2024-02-03 Thread Eugen Block
Hi, I found this blog post [1] which reports the same error message. It seems a bit misleading because it appears to be about DNS. Can you check cephadm check-host --expect-hostname Or is that what you already tried? It's not entirely clear how you checked the hostname. Regards, Eugen

[ceph-users] Re: How can I clone data from a faulty bluestore disk?

2024-02-03 Thread Alexander E. Patrakov
Hi, I think that the approach with exporting and importing PGs would be a priori more successful than the one based on pvmove or ddrescue. The reason is that you don't need to export/import all the data that the failed disk holds, but only the PGs that Ceph cannot recover otherwise. The logic here is ...
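
For reference, a rough sketch of the export/import cycle with ceph-objectstore-tool; the OSD ids, PG id, and file path are placeholders, and both OSDs must be stopped while the tool runs:

    # On the failing OSD (stopped): export only the PG that Ceph cannot recover elsewhere
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
        --pgid 2.1a --op export --file /mnt/rescue/pg2.1a.export

    # On a healthy OSD (also stopped): import the PG, then start the OSD again
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-7 \
        --op import --file /mnt/rescue/pg2.1a.export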

[ceph-users] Performance issues with writing files to Ceph via S3 API

2024-02-03 Thread Renann Prado
Hello, I have an issue at my company where we have an underperforming Ceph instance. The issue is that writing files to Ceph via the S3 API (our only option) sometimes takes up to 40 s, which is too long for us. We are a bit limited in what we can do to investigate why it's performing so badly ...
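
One quick way to see where the 40 s goes is to time a single upload directly against the RGW endpoint, bypassing the application; the endpoint, bucket, and file names below are made up:

    # Time one PUT straight at the gateway
    time aws s3 cp ./testfile-16M s3://test-bucket/testfile-16M \
        --endpoint-url http://rgw.example.internal:8080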

[ceph-users] Re: Snapshot automation/scheduling for rbd?

2024-02-03 Thread Jeremy Hansen
Am I just off base here or missing something obvious? Thanks > On Thursday, Feb 01, 2024 at 2:13 AM, Jeremy Hansen (jer...@skidrow.la) wrote: > Can rbd image snapshotting be scheduled like CephFS snapshots? Maybe I missed > it in the documentation but it looked like scheduling snapshots ...

[ceph-users] Re: Performance issues with writing files to Ceph via S3 API

2024-02-03 Thread Anthony D'Atri
The slashes don't mean much, if anything, to Ceph. Buckets are not hierarchical filesystems. You speak of millions of files. How many millions? How big are they? Very small objects stress any object system. Very large objects may be multipart uploads that stage to slow media or otherwise ...
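
To answer the "how many and how big" question, bucket stats from the gateway side are usually enough; the bucket name here is a placeholder:

    # Shows object count and total size per bucket
    radosgw-admin bucket stats --bucket=test-bucket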

[ceph-users] Re: How can I clone data from a faulty bluestore disk?

2024-02-03 Thread Anthony D'Atri
I've done the pg import dance a couple of times. It was very slow but did work ultimately. Depending on the situation, if there is one valid copy available one can enable recovery by temporarily setting min_size on the pool to 1, reverting it once recovery completes. If you run with 1 all ...
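
For completeness, the temporary min_size change looks like this; the pool name is a placeholder, and it should only stay at 1 while recovery runs:

    ceph osd pool set rbd-pool min_size 1     # allow recovery from a single surviving copy
    # ... wait for recovery to finish ...
    ceph osd pool set rbd-pool min_size 2     # revert as soon as the PGs are active+clean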

[ceph-users] Re: Snapshot automation/scheduling for rbd?

2024-02-03 Thread Marc
I have a script that checks on each node which VMs are active, and then it makes a snapshot of their RBDs. It first issues a command to the VM to freeze the filesystem if the VM supports it. > > Am I just off base here or missing something obvious? > > Thanks > > ...
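
A minimal sketch of that approach for libvirt-managed VMs; the pool name "vms" and the assumption that the image name matches the VM name are mine, and domfsfreeze needs the qemu guest agent inside the VM:

    #!/bin/bash
    # Freeze, snapshot, thaw - for each running VM on this node
    for vm in $(virsh list --name); do
        virsh domfsfreeze "$vm" || true                     # skip the freeze if no guest agent
        rbd snap create "vms/${vm}@$(date +%Y%m%d-%H%M)"    # assumes image name == VM name, pool "vms"
        virsh domfsthaw "$vm" || true
    done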

[ceph-users] Re: Snapshot automation/scheduling for rbd?

2024-02-03 Thread Jayanth Reddy
Hi, for CloudStack with RBD you should be able to control the snapshot placement using the global setting "snapshot.backup.to.secondary". Setting this to false causes snapshots to be placed directly on Ceph instead of on secondary storage. See if you can perform recurring snapshots. I know that there ...
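
To confirm where the snapshots actually end up, listing them on the Ceph side should be enough; the pool and image names below are placeholders:

    rbd snap ls cloudstack/volume-1234-abcd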

[ceph-users] RBD Image Returning 'Unknown Filesystem LVM2_member' On Mount - Help Please

2024-02-03 Thread duluxoz
Hi All, All of this is using the latest versions of RL and Ceph Reef. I've got an existing RBD Image (with data on it - not "critical", as I've got a backup, but it's rather large, so I was hoping to avoid the restore scenario). The RBD Image used to be served out via a (Ceph) iSCSI Gateway, but ...
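
For what it's worth, the "LVM2_member" signature usually means the image holds an LVM physical volume rather than a plain filesystem, so the volume group has to be activated and the logical volume mounted instead of the RBD device itself; the pool, image, VG, and LV names below are placeholders:

    rbd map rbd-pool/big-image          # returns e.g. /dev/rbd0
    pvs; vgs; lvs                       # see which VG/LV the PV belongs to
    vgchange -ay data_vg                # activate the volume group
    mount /dev/data_vg/data_lv /mnt     # mount the LV, not /dev/rbd0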