[ceph-users] Re: multisite sync issue with bucket sync

2024-11-20 Thread Christopher Durham
Casey, Or, is there a way to continue with new (incremental) data syncing while the full sync catches up? The full sync will take a long time, and no new incremental data is being replicated. -Chris On Wednesday, November 20, 2024 at 03:30:40 PM MST, Christopher Durham wrote: Cas

[ceph-users] Re: multisite sync issue with bucket sync

2024-11-20 Thread Christopher Durham
Casey, Thanks for your response. So is there a way to abandon a full sync and just move on with an incremental from the time you abandon the full sync? -Chris On Wednesday, November 20, 2024 at 12:29:26 PM MST, Casey Bodley wrote: On Wed, Nov 20, 2024 at 2:10 PM Christopher Durham wr

[ceph-users] Re: CephFS subvolumes not inheriting ephemeral distributed pin

2024-11-20 Thread Rajmohan Ramamoorthy
Hi Patrick, A few other follow-up questions. Is directory fragmentation applicable only when multiple active MDSs are enabled for a Ceph FS? Will directory fragmentation and distribution of fragments among active MDSs happen if we turn off the balancer for a Ceph FS volume `ceph fs set midline-a balance_
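
For context, a rough sketch of how the ephemeral distributed pin from the subject line is usually set, via an xattr on the parent directory (the mount point, subvolume group name, and filesystem name below are made-up placeholders):

  # mark the subvolume group directory for ephemeral distributed pinning
  setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/volumes/csi
  # verify the xattr
  getfattr -n ceph.dir.pin.distributed /mnt/cephfs/volumes/csi
  # distributing subtrees only has an effect with more than one active MDS
  ceph fs set <fsname> max_mds 2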

[ceph-users] Squid: regression in rgw multisite replication from Quincy/Reef clusters

2024-11-20 Thread Casey Bodley
Recent multisite testing has uncovered a regression on Squid that happens when secondary zones are upgraded to Squid before the metadata master zone. User metadata replicates incorrectly in this configuration, such that their access keys are "inactive". As a result, these users are denied access to
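
For anyone checking whether they are affected, a minimal sketch (the user id "alice" is a placeholder, and the exact output fields depend on the release): inspect the replicated user on the secondary zone and look at the keys section for the active flag.

  # run against the secondary zone
  radosgw-admin user info --uid=alice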

[ceph-users] Re: [CephFS] Completely exclude some MDS rank from directory processing

2024-11-20 Thread Eugen Block
Ah, I misunderstood, I thought you wanted an even distribution across both ranks. Just for testing purposes, have you tried pinning rank 1 to some other directory? Does it still break the CephFS if you stop it? I'm not sure if you can prevent rank 1 from participating, I haven't looked into

[ceph-users] Re: Crush rule examples

2024-11-20 Thread Andre Tann
Hi Janne, On 20.11.24 at 11:30, Janne Johansson wrote: This post seems to show that, except they have their root named "nvme" and they split on rack and not dc, but that is not important. https://unix.stackexchange.com/questions/781250/ceph-crush-rules-explanation-for-multiroom-racks-setup Th

[ceph-users] CephFS maximum filename length

2024-11-20 Thread Naumann, Thomas
Hi all, we use a Proxmox cluster (v8.2.8) with Ceph (v18.2.4) and EC pools (all Ceph options at their defaults). One pool is exported via CephFS as backend storage for Nextcloud servers. At the moment data is being migrated from old S3 storage to the CephFS pool. There are many files with very long filename leng
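
Not part of the original message, but a quick way to spot candidate files before migrating, assuming GNU find and the usual 255-byte limit on a single path component (the source path is a placeholder):

  # list basenames longer than 255 bytes under the migration source
  find /path/to/s3-export -printf '%f\n' | LC_ALL=C awk 'length($0) > 255'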

[ceph-users] Re: multisite sync issue with bucket sync

2024-11-20 Thread Christopher Durham
Ok, Source code review reveals that full sync is marker based, and sync errors within a marker group *suggest* that data within the marker is re-checked (I may be wrong about this, but it is consistent with my 304 errors below). I do, however, have the following question: Is there a way to oth

[ceph-users] Re: multisite sync issue with bucket sync

2024-11-20 Thread Casey Bodley
On Wed, Nov 20, 2024 at 2:10 PM Christopher Durham wrote: > > Ok, > Source code review reveals that full sync is marker based, and sync errors > within a marker group *suggest* that data within the marker is re-checked, (I > may be wrong about this, but that is consistent with my 304 errors below
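
For readers following along, a sketch of the commands typically used to inspect per-bucket sync state (the bucket name is a placeholder):

  # per-shard full/incremental sync state for one bucket
  radosgw-admin bucket sync status --bucket=mybucket
  # recent replication errors recorded by the sync process
  radosgw-admin sync error list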

[ceph-users] Re: Crush rule examples

2024-11-20 Thread Joachim Kraftmayer
I have worked with CRUSH and CRUSH rules a lot over the last 12 years. I would always recommend testing the rules with crushtool, for example: https://docs.ceph.com/en/reef/man/8/crushtool/ joachim.kraftma...@clyso.com www.clyso.com Hohenzollernstr. 27, 80801 Munich Utting | HR: Augsbu
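
A small example of the kind of crushtool test referred to above (file names and the rule id are placeholders):

  # extract and decompile the current CRUSH map
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # simulate placements for rule 1 with 4 replicas
  crushtool -i crushmap.bin --test --rule 1 --num-rep 4 --show-mappings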

[ceph-users] [CephFS] Completely exclude some MDS rank from directory processing

2024-11-20 Thread Александр Руденко
Hi, I am trying to distribute all top-level dirs in CephFS across different MDS ranks. I have two active MDSs with ranks *0* and *1*, and I have 2 top-level dirs, */dir1* and */dir2*. After pinning: setfattr -n ceph.dir.pin -v 0 /fs-mountpoint/dir1 setfattr -n ceph.dir.pin -v 0 /fs-mountpoint/dir2 I can see ne
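
For comparison, a sketch of what pinning the two directories to different ranks would look like, using the same mount point as in the message (a value of -1 removes a pin):

  setfattr -n ceph.dir.pin -v 0 /fs-mountpoint/dir1
  setfattr -n ceph.dir.pin -v 1 /fs-mountpoint/dir2
  # check the current pin value
  getfattr -n ceph.dir.pin /fs-mountpoint/dir1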

[ceph-users] Re: Encrypt OSDs on running System. A good Idea?

2024-11-20 Thread Janne Johansson
> What issues should I expect if I take an OSD (15TB) out one at a time, > encrypt it, and put it back into the cluster? I would have a long period > where some OSDs are encrypted and others are not. How dangerous is this? I don't think it would be more dangerous than if you were redoing OSDs for
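
A rough sketch of the per-OSD cycle being discussed, for a cephadm-managed cluster (the OSD id and spec file name are placeholders; non-cephadm deployments would recreate the OSD with ceph-volume and --dmcrypt instead):

  # drain and remove one OSD, keeping its id free for replacement
  ceph orch osd rm 12 --replace --zap
  # once removal and backfill have finished, redeploy it with encryption,
  # e.g. via an OSD service spec that contains "encrypted: true"
  ceph orch apply osd -i osd-spec-encrypted.yaml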

[ceph-users] Encrypt OSDs on running System. A good Idea?

2024-11-20 Thread Giovanna Ratini
Hello all :-), We use Ceph both as storage in Proxmox and as storage in K8S. I would like to encrypt the OSDs. I have backups of the Proxmox machines, but honestly, I would prefer not to have to use them, as it would take two days to rebuild everything from scratch. I ran some tests on a sma

[ceph-users] Re: Crush rule examples

2024-11-20 Thread Andre Tann
Sorry, sent too early. So here we go again: My setup looks like this: DC1: node01, node02, node03, node04, node05; DC2: node06, node07, node08, node09, node10. I want a replicated pool with siz

[ceph-users] Crush rule examples

2024-11-20 Thread Andre Tann
Hi all, I'm trying to understand how crush rules need to be set up, and much to my surprise I cannot find examples and/or good explanations (or I'm too stupid to understand them ;) ) My setup looks like this: DC1 node01 node02 node03

[ceph-users] Re: Crush rule examples

2024-11-20 Thread Andre Tann
How can I describe this in a crush rule? Let me add the point that causes me the most difficulty: I consider both DC and host to be failure domains. I accept that two copies go into one DC, but I do not accept that two copies go to one host. And also, how can I
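
One commonly cited way to express "two copies per DC, at most one per host" for a size-4 replicated pool looks roughly like this (the root and rule names are assumptions, and the caveats about DC failure raised in Frank Schilder's reply below still apply):

  rule replicated_two_dc {
      id 1
      type replicated
      step take default
      step choose firstn 2 type datacenter
      step chooseleaf firstn 2 type host
      step emit
  }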

[ceph-users] Re: Crush rule examples

2024-11-20 Thread Janne Johansson
> Sorry, sent too early. So here we go again: > My setup looks like this: > >DC1 >node01 >node02 >node03 >node04 >node05 >DC2 >node06 >node07 >node08 >node09 >no

[ceph-users] Re: Crush rule examples

2024-11-20 Thread Frank Schilder
Hi Andre, I think what you really want to look at is stretch mode. There have been long discussions on this list about why a crush rule with rep 4 and 2 copies per DC will not handle a DC failure as expected. Stretch mode will make sure writes happen in a way that prevents split-brain scenarios. Ha
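
The stretch mode setup mentioned here is enabled roughly as follows (monitor names, the CRUSH rule name, and datacenter names are placeholders; check the documentation for your release before applying this):

  # tag each monitor with its datacenter and switch to the connectivity election strategy
  ceph mon set_location a datacenter=dc1
  ceph mon set_location b datacenter=dc2
  ceph mon set_location c datacenter=dc3
  ceph mon set election_strategy connectivity
  # enable stretch mode with monitor c as the tiebreaker in a third location
  ceph mon enable_stretch_mode c stretch_rule datacenter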

[ceph-users] Re: [CephFS] Completely exclude some MDS rank from directory processing

2024-11-20 Thread Eugen Block
Hi, After pinning: setfattr -n ceph.dir.pin -v 0 /fs-mountpoint/dir1 setfattr -n ceph.dir.pin -v 0 /fs-mountpoint/dir2 is this a typo? If not, you did pin both directories to the same rank. Quoting Александр Руденко: Hi, I am trying to distribute all top-level dirs in CephFS across different MDS

[ceph-users] Join us for today's User + Developer Monthly Meetup!

2024-11-20 Thread Laura Flores
Hi all, Please join us for today's User + Developer Monthly Meetup at 10:00 AM ET. RSVP here! https://www.meetup.com/ceph-user-group/events/304636936 Thanks, Laura Flores -- Laura Flores She/Her/Hers Software Engineer, Ceph Storage Chicago, IL lflo...@ibm.com | lflo...@re

[ceph-users] Re: [CephFS] Completely exclude some MDS rank from directory processing

2024-11-20 Thread Александр Руденко
No, it's not a typo; it's a misleading example. dir1 and dir2 are pinned to rank 0, but the FS and dir1/dir2 can't work without rank 1. Rank 1 is used for something when I work with these dirs. Ceph 16.2.13; the metadata balancer and policy-based balancing are not used. On Wed, Nov 20, 2024 at 16:33, Eugen Block

[ceph-users] Re: [CephFS] Completely exclude some MDS rank from directory processing

2024-11-20 Thread Александр Руденко
> > Just for testing purposes, have you tried pinning rank 1 to some other > directory? Does it still break the CephFS if you stop it? Yes, nothing changed. It's no problem that the FS hangs when one of the ranks goes down; we will have standby-replay for all ranks. I don't like that the rank which is no