Hello guys,
in my production cluster I have many objects like this:
"#> rados -p rbd ls | grep 'benchmark'"
... .. .
benchmark_data_inkscope.example.net_32654_object1918
benchmark_data_server_26414_object1990
... .. .
Is it safe to run "rados -p rbd cleanup" or is there any risk for my
images?
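
For reference, a sketch of how such bench leftovers are typically checked and
removed (this assumes the objects really do come from "rados bench" and all
share the benchmark_data prefix shown above):

#> rados -p rbd ls | grep -c '^benchmark_data'
#> rados -p rbd cleanup --prefix benchmark_data

As far as I know, cleanup with --prefix only removes objects whose names start
with that prefix, and RBD image data lives in objects named rbd_data.* (or
rb.0.* for format 1 images), so those should not be touched.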
Dear all,
we're planning a new Ceph cluster, with CephFS as the
main workload, and would like to use erasure coding to
use the disks more efficiently. The access pattern will
probably be more read- than write-heavy, on average.
I don't have any practical experience with erasure-coded
pools so far.
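
For illustration, a minimal sketch of how an EC data pool could be attached to
CephFS (the k/m values, PG count and the pool/filesystem names here are just
assumptions, not recommendations):

#> ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
#> ceph osd pool create cephfs_data_ec 128 128 erasure ec42
#> ceph osd pool set cephfs_data_ec allow_ec_overwrites true
#> ceph fs add_data_pool cephfs cephfs_data_ec

The metadata pool has to stay replicated; the EC pool is added as an extra data
pool and can then be selected per directory via the file layout attributes.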
Hi,
I cloned an NTFS filesystem with bad blocks from a USB HDD onto a Ceph RBD
volume (using ntfsclone, so the copy has sparse regions), and decided to clean
up the bad blocks within the copy. I ran chkdsk /b from Windows and it fails on
free space verification (step 5 of 5).
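
(For context, a sketch of how such a clone can be done; the device names, the
intermediate image file and the use of krbd for the restore are assumptions on
my part, the transfer could just as well go through the iSCSI LUN:)

#> ntfsclone --save-image --rescue -o usb_ntfs.img /dev/sdX1
#> rbd map rbd/ntfs_copy
#> ntfsclone --restore-image --overwrite /dev/rbd0 usb_ntfs.img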
In tcmu-runner.log I see that
Jason Dillaman wrote:
I am doing more experiments with the Ceph iSCSI gateway and I am a bit
confused about how to properly repurpose an RBD image from an iSCSI target
into a QEMU virtual disk and back
This isn't really a use case that we support nor intend to support. Your
best bet would b
Hi all,
I have the same problem here:
* during the upgrade from 12.2.5 to 12.2.6
* I restarted all the OSD servers in turn, which did not trigger any bad
thing
* a few minutes after upgrading the OSDs/MONs/MDSs/MGRs (all on the
same set of servers) and unsetting noout, I upgraded the clients, which
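
(For reference, the OSD restarts followed roughly the usual noout pattern; the
systemd target name below is the stock one and is an assumption on my part:)

#> ceph osd set noout
#> systemctl restart ceph-osd.target   # on each OSD server in turn, waiting for recovery in between
#> ceph osd unset noout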
Check out the message titled "IMPORTANT: broken luminous 12.2.6
release in repo, do not upgrade"
It sounds like 12.2.7 should come *soon* to fix this transparently.
--
Adam
On Sun, Jul 15, 2018 at 10:28 AM, Nicolas Huillard wrote:
> Hi all,
>
> I have the same problem here:
> * during the upgra
On Sunday, 15 July 2018 at 11:01 -0500, Adam Tygart wrote:
> Check out the message titled "IMPORTANT: broken luminous 12.2.6
> release in repo, do not upgrade"
>
> It sounds like 12.2.7 should come *soon* to fix this transparently.
Thanks. I didn't notice this one. I should monitor more clo
Hi Caspar,
Did you find any information regarding the migration from crush-compat to
upmap? I’m facing the same situation.
Thanks!
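
For reference, a sketch of the switch as I understand it from the docs (the
min-compat-client step assumes every client is Luminous or newer):

#> ceph balancer off
#> ceph osd set-require-min-compat-client luminous
#> ceph balancer mode upmap
#> ceph balancer on

The old crush-compat weight-set may also need to be dropped with "ceph osd
crush weight-set rm-compat" so the upmap optimizer starts from plain CRUSH
weights, but I'd double-check that before running it.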
From: ceph-users On behalf of Caspar Smit
Sent: Monday, 25 June 2018 12:25
To: ceph-users
Subject: [ceph-users] Balancer: change from crush-compat to