Thanks guys, this works!
On 11/3/23 17:15, Anthony D'Atri wrote:
nm, Adam beat me to it.
On Nov 3, 2023, at 11:40, Josh Baergen wrote:
The ticket has been updated, but it's probably important enough to
state on the list as well: The documentation is currently wrong in a
way that running the command as documented will cause this corruption.
I notice the documentation has been updated to put the L and P at the
end of the --sharding option in uppercase,
but the O(3,0-13) option is still lowercase:
https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#rocksdb-sharding
- Nelson
On 11/3/23 11:13, Anthony D'Atri wrote:
nm, Adam beat me to it.
> On Nov 3, 2023, at 11:40, Josh Baergen wrote:
>
> The ticket has been updated, but it's probably important enough to
> state on the list as well: The documentation is currently wrong in a
> way that running the command as documented will cause this corruption.
> The correct command to run is:
If someone can point me at the errant docs locus I'll make it right.
> On Nov 3, 2023, at 11:45, Laura Flores wrote:
>
> Yes, Josh beat me to it- this is an issue of incorrectly documenting the
> command. You can try the solution posted in the tracker issue.
>
> On Fri, Nov 3, 2023 at 10:43 AM Josh Baergen wrote:
Yes, Josh beat me to it- this is an issue of incorrectly documenting the
command. You can try the solution posted in the tracker issue.
On Fri, Nov 3, 2023 at 10:43 AM Josh Baergen wrote:
> The ticket has been updated, but it's probably important enough to
> state on the list as well: The documentation is currently wrong in a
> way that running the command as documented will cause this corruption.
The ticket has been updated, but it's probably important enough to
state on the list as well: The documentation is currently wrong in a
way that running the command as documented will cause this corruption.
The correct command to run is:
ceph-bluestore-tool \
--path \
--sh
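For anyone landing on this preview: the command above is truncated. A sketch of
what the corrected invocation looks like, assuming the sharding spec is Ceph's
default rocksdb column-family layout with the trailing L and P shards in
uppercase; the OSD path is a placeholder, and the exact sharding string should
be taken from the tracker issue / updated docs rather than from this sketch:

  # stop the OSD first; ceph-bluestore-tool needs exclusive access to the store
  ceph-bluestore-tool \
      --path /var/lib/ceph/osd/ceph-<ID> \
      --sharding="m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P" \
      reshard

The point of the thread is that running this with a lowercase "l p" at the end,
as the docs previously showed, is what corrupts the OSD.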
Hi,
yes, exactly. I had to recreate the OSD as well because the daemon wasn't
able to start.
It's obviously a bug and should be fixed either in the documentation or in the code.
On 11/3/23 11:45, Eugen Block wrote:
Hi,
this seems like a dangerous operation to me; I tried the same on two
different virtual clusters, Reef and Pacific (all upgraded from previous releases).
Could you add more info from the mgr log (not only the failing
container logs)? Something like this:
cephadm logs --name mgr.ceph02-hn02.ofencx
And what about the MDS daemons? I have seen mgr actions blocked by a
HEALTH_ERR state; maybe you're experiencing that here as well.
Quoting Da
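For readers of this (separate) thread: a sketch of the kind of information being
requested, assuming a cephadm-managed cluster; the mgr daemon name is the one
quoted above, so substitute your own:

  # overall cluster state - a HEALTH_ERR condition here can block mgr/orchestrator actions
  ceph status
  ceph health detail

  # full mgr daemon log, not just the failing container's log
  cephadm logs --name mgr.ceph02-hn02.ofencx

  # list running daemons (mgr, mds, ...) and their current state
  ceph orch ps | grep -E 'mgr|mds'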
Hi,
this seems like a dangerous operation to me; I tried the same on two
different virtual clusters, Reef and Pacific (all upgraded from
previous releases). In Reef the reshard fails altogether and the OSD
fails to start; I had to recreate it. In Pacific the reshard reports a
successful
Hello Jaroslav,
thank you for your reply.
> I found your info a bit confusing. The first command suggests that the VM
> is shut down, but later you are talking about live migration. So how are you
> migrating the data, online or offline?
Online, the VM is started after the migration prepare command:
..
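For context on the migration prepare step mentioned above, here is a minimal
sketch of the RBD live-migration sequence. The source image name (sata/D2) is
taken from the original report; the target name is invented for the example,
and the original poster's exact options are not shown in this preview:

  # link the source image to a new migration target; the source becomes read-only
  rbd migration prepare sata/D2 sata/D2-new

  # the VM is started against the target image, then the data is copied in the background
  rbd migration execute sata/D2-new

  # once the copy has finished, finalize the migration and drop the source
  rbd migration commit sata/D2-new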
Hi Nikola,
I found your info a bit confusing. The first command suggests that the VM
is shut down, but later you are talking about live migration. So how are you
migrating the data, online or offline?
In the case of live migration, I would suggest looking at the fsfreeze
command (Proxmox uses it).
Hope this helps.
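A short sketch of the fsfreeze approach suggested above, assuming a
libvirt-managed VM with the QEMU guest agent running in the guest; the domain
name is hypothetical:

  # quiesce the guest filesystems so the snapshot/migration sees a consistent state
  virsh domfsfreeze D2

  # ... take the RBD snapshot or start the migration here ...

  # thaw the guest filesystems again
  virsh domfsthaw D2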
Hi Gregory and Xiubo,
we have a smoking gun. The error shows up when using Python's shutil.copy
function. It affects newer versions of Python 3. Here are some test results
(quoted e-mail from our user):
> I now have a minimal example that reproduces the error:
>
> echo test > myfile.txt
> ml Python
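The quoted reproduction is cut off after the "ml Python" (module load) step. A
minimal sketch of the kind of test being described, assuming the working
directory is on a CephFS mount and a recent Python 3 whose shutil.copy takes
the fast-copy (sendfile) path; file names are illustrative:

  cd /mnt/cephfs/testdir                 # any directory on the CephFS mount
  echo test > myfile.txt
  python3 -c "import shutil; shutil.copy('myfile.txt', 'myfile_copy.txt')"
  diff myfile.txt myfile_copy.txt        # compare the copy with the original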
Dear ceph users and developers,
we're struggling with a strange issue which I think might be a bug
causing snapshot data corruption while migrating an RBD image.
We've tracked it down to a minimal set of steps to reproduce, using a VM
with one 32G drive:
rbd create --size 32768 sata/D2
virsh create xml_orig.xml
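The remaining reproduction steps are cut off in this preview. As a rough
illustration only (not the original poster's exact steps), verifying a snapshot
across a live migration could look roughly like this, with the snapshot and
target names invented for the example:

  # snapshot the image and record a checksum of the snapshot contents
  rbd snap create sata/D2@before
  rbd export sata/D2@before - | md5sum

  # live-migrate the image (see the migration sketch earlier in the thread)
  rbd migration prepare sata/D2 sata/D2-new
  rbd migration execute sata/D2-new
  rbd migration commit sata/D2-new

  # export the same snapshot from the migrated image and compare checksums
  rbd export sata/D2-new@before - | md5sum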