[ceph-users] Re: resharding RocksDB after upgrade to Pacific breaks OSDs

2023-11-03 Thread Denis Polom
Thanks guys, this works! On 11/3/23 17:15, Anthony D'Atri wrote: nm, Adam beat me to it. On Nov 3, 2023, at 11:40, Josh Baergen wrote: The ticket has been updated, but it's probably important enough to state on the list as well: The documentation is currently wrong in a way that running the

[ceph-users] Re: resharding RocksDB after upgrade to Pacific breaks OSDs

2023-11-03 Thread Nelson Hicks
I notice the documentation has been updated to put the L and P at the end of the --sharding option in uppercase, but the O(3,0-13) option is still lowercase: https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#rocksdb-sharding - Nelson On 11/3/23 11:13, Anthony D'Atri

[ceph-users] Re: resharding RocksDB after upgrade to Pacific breaks OSDs

2023-11-03 Thread Anthony D'Atri
nm, Adam beat me to it. > On Nov 3, 2023, at 11:40, Josh Baergen wrote: > > The ticket has been updated, but it's probably important enough to > state on the list as well: The documentation is currently wrong in a > way that running the command as documented will cause this corruption. > The cor

[ceph-users] Re: resharding RocksDB after upgrade to Pacific breaks OSDs

2023-11-03 Thread Anthony D'Atri
If someone can point me at the errant docs locus I'll make it right. > On Nov 3, 2023, at 11:45, Laura Flores wrote: > > Yes, Josh beat me to it- this is an issue of incorrectly documenting the > command. You can try the solution posted in the tracker issue. > > On Fri, Nov 3, 2023 at 10:43 AM

[ceph-users] Re: resharding RocksDB after upgrade to Pacific breaks OSDs

2023-11-03 Thread Laura Flores
Yes, Josh beat me to it- this is an issue of incorrectly documenting the command. You can try the solution posted in the tracker issue. On Fri, Nov 3, 2023 at 10:43 AM Josh Baergen wrote: > The ticket has been updated, but it's probably important enough to > state on the list as well: The docume

[ceph-users] Re: resharding RocksDB after upgrade to Pacific breaks OSDs

2023-11-03 Thread Josh Baergen
The ticket has been updated, but it's probably important enough to state on the list as well: The documentation is currently wrong in a way that running the command as documented will cause this corruption. The correct command to run is: ceph-bluestore-tool \ --path \ --sh
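The preview above is cut off; as a sketch of what the corrected invocation looks like, assuming the default sharding spec from the current BlueStore configuration docs (placeholders, not a verbatim quote of the message — the thread's whole point is that lowercase column-family letters here corrupt the OSD):

    # Run with the OSD stopped; <osd-path> is a placeholder such as /var/lib/ceph/osd/ceph-0
    ceph-bluestore-tool \
        --path <osd-path> \
        --sharding="m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P" \
        reshard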

[ceph-users] Re: resharding RocksDB after upgrade to Pacific breaks OSDs

2023-11-03 Thread Denis Polom
Hi, yes, exactly. I had to recreate the OSD as well because the daemon wasn't able to start. It's obviously a bug and should be fixed either in the documentation or in the code. On 11/3/23 11:45, Eugen Block wrote: Hi, this seems like a dangerous operation to me, I tried the same on two different virtual cl

[ceph-users] Re: ceph orch problem

2023-11-03 Thread Eugen Block
Could you add more info from the mgr log (not only the failing container logs)? Something like this: cephadm logs --name mgr.ceph02-hn02.ofencx And what about the mds daemons? I have seen mgr actions blocked by a HEALTH_ERR state; maybe you're experiencing that here as well. Quoting Da
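A minimal sketch of the checks implied here, assuming a cephadm-managed cluster; the mgr daemon name is the one quoted in the message:

    ceph -s                                     # overall cluster state
    ceph health detail                          # shows whether a HEALTH_ERR is in effect
    ceph orch ps                                # orchestrator's view of all daemons, mds included
    cephadm logs --name mgr.ceph02-hn02.ofencx  # mgr daemon log via cephadm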

[ceph-users] Re: resharding RocksDB after upgrade to Pacific breaks OSDs

2023-11-03 Thread Eugen Block
Hi, this seems like a dangerous operation to me. I tried the same on two different virtual clusters, Reef and Pacific (all upgraded from previous releases). In Reef the reshard fails altogether and the OSD fails to start; I had to recreate it. In Pacific the reshard reports a successful
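A sketch for inspecting an OSD around such a reshard attempt, assuming ceph-bluestore-tool's show-sharding and fsck subcommands; <osd-path> is a placeholder:

    # With the OSD stopped; <osd-path> is a placeholder such as /var/lib/ceph/osd/ceph-2
    ceph-bluestore-tool --path <osd-path> show-sharding   # print the sharding currently in effect
    ceph-bluestore-tool --path <osd-path> fsck            # consistency check before/after the reshard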

[ceph-users] Re: data corruption after rbd migration

2023-11-03 Thread Nikola Ciprich
Hello Jaroslav, thank you for your reply. > I found your info a bit confusing. The first command suggests that the VM > is shut down and later you are talking about live migration. So how are you > migrating data online or offline? Online; the VM is started after the migration prepare command: .. >

[ceph-users] Re: data corruption after rbd migration

2023-11-03 Thread Jaroslav Shejbal
Hi Nikola, I found your info a bit confusing. The first command suggests that the VM is shut down, but later you are talking about live migration. So how are you migrating the data, online or offline? In the case of live migration, I would suggest looking at the fsfreeze command (Proxmox uses it). Hope
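A minimal sketch of the quiescing step being suggested, assuming fsfreeze from util-linux run inside the guest and a placeholder mountpoint:

    # Inside the guest, before snapshotting/migrating the live image
    fsfreeze --freeze /mnt/data      # /mnt/data is a placeholder mountpoint
    # ... take the snapshot / run the migration step on the host ...
    fsfreeze --unfreeze /mnt/data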

[ceph-users] Re: ceph fs (meta) data inconsistent

2023-11-03 Thread Frank Schilder
Hi Gregory and Xiubo, we have a smoking gun. The error shows up when using Python's shutil.copy function. It affects newer versions of python3. Here are some test results (quoted e-mail from our user): > I now have a minimal example that reproduces the error: > > echo test > myfile.txt > ml Python
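The quoted reproducer is cut off; as a rough sketch of the shape it describes (a small file copied with Python's shutil.copy onto a CephFS mount, then compared), with /mnt/cephfs as an assumed mountpoint rather than the user's exact script:

    echo test > myfile.txt
    # copy with a newer python3's shutil.copy, then compare what the CephFS client reports
    python3 -c "import shutil; shutil.copy('myfile.txt', '/mnt/cephfs/myfile.txt')"
    ls -l myfile.txt /mnt/cephfs/myfile.txt
    diff myfile.txt /mnt/cephfs/myfile.txt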

[ceph-users] data corruption after rbd migration

2023-11-03 Thread Nikola Ciprich
Dear ceph users and developers, we're struggling with a strange issue which I think might be a bug causing snapshot data corruption while migrating an RBD image. We've tracked it down to a minimal set of steps to reproduce using a VM with one 32G drive: rbd create --size 32768 sata/D2 virsh create xml_orig.xml
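The reproduction steps are truncated here; for context, a sketch of the general RBD live-migration sequence the report refers to, with sata2/D2 as a hypothetical target image (the VM and snapshot steps from the original message are omitted):

    rbd create --size 32768 sata/D2           # test image, as in the quoted steps
    rbd migration prepare sata/D2 sata2/D2    # hypothetical target; the source becomes a migration source
    rbd migration execute sata2/D2            # copy the data while the image remains usable by the client
    rbd migration commit sata2/D2             # finalize the migration and drop the source linkage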