Zap had an issue back then and never worked properly, so you have to dd manually.
We always played it safe and went 2-4 GB in just to be sure. That should fix your
issue.
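For reference, a hedged sketch of the manual wipe; /dev/sdX is a placeholder for the OSD's data device, not a name from this thread:

$ sudo dd if=/dev/zero of=/dev/sdX bs=1M count=4096 oflag=direct

count=4096 zeroes the first 4 GB, which per the advice above is enough to cover the partition and LVM metadata at the start of the device that zap was leaving behind.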
On Tue, Nov 5, 2019 at 2:21 PM Janne Johansson wrote:
> I seem to recall some ticket where zap would "only" clear 100M of the drive,
> but lvm and all partition info needed more to be cleared, so using dd
> bs=1M count=1024 (or more!) would be needed to make sure no part of the OS
> picks
On Tue, Nov 5, 2019 at 7:10 PM J David wrote:
> On Tue, Nov 5, 2019 at 3:18 AM Paul Emmerich wrote:
> > could be a new feature, I've only realized this exists/works since
> > Nautilus.
> > You seem to be on a relatively old version since you still have ceph-disk
> > installed
>
> The next approach may
On Tue, Nov 5, 2019 at 3:18 AM Paul Emmerich wrote:
> could be a new feature, I've only realized this exists/works since Nautilus.
> You seem to be on a relatively old version since you still have ceph-disk
> installed
None of this is using ceph-disk? It's all done with ceph-volume.
The ceph clus
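One way to double-check how the existing OSDs were deployed, on an OSD host (a sketch, assuming both tools are still installed):

$ sudo ceph-volume lvm list
$ sudo ceph-disk list

The first only knows about OSDs deployed through ceph-volume lvm; the second shows the old GPT/udev-based ceph-disk OSDs, if any are left.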
On Mon, Nov 4, 2019 at 11:04 PM J David wrote:
>
> OK. Is there possibly a more surgical approach? It's going to take a
> really long time to convert the cluster, so we don't want to do
> anything global that might cause weirdness if any of the OSD servers
> with unconverted OSDs need to be rebooted
On Tue, Nov 5, 2019 at 2:43 AM J David wrote:
> $ sudo ceph osd safe-to-destroy 42
> OSD(s) 42 are safe to destroy without reducing data durability.
> $ sudo ceph osd destroy 42
> Error EPERM: Are you SURE? This will mean real, permanent data loss,
> as well as cephx and lockbox keys. Pass --yes-i-really-mean-it
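If you do decide to go ahead at that point, the flag named in the error has to be passed explicitly:

$ sudo ceph osd destroy 42 --yes-i-really-mean-it

That is only sensible once safe-to-destroy reports the OSD as drained, as in the output above.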
On Mon, Nov 4, 2019 at 1:32 PM Paul Emmerich wrote:
> BTW: you can run destroy before stopping the OSD; you won't need the
> --yes-i-really-mean-it if it's drained in this case
This actually does not seem to work:
$ sudo ceph osd safe-to-destroy 42
OSD(s) 42 are safe to destroy without reducing data durability.
On Mon, Nov 4, 2019 at 1:32 PM Paul Emmerich wrote:
> That's probably the ceph-disk udev script being triggered from
> something somewhere (and a lot of things can trigger that script...)
That makes total sense.
> Work-around: convert everything to ceph-volume simple first by running
> "ceph-vol
That's probably the ceph-disk udev script being triggered from
something somewhere (and a lot of things can trigger that script...)
Work-around: convert everything to ceph-volume simple first by running
"ceph-volume simple scan" and "ceph-volume simple activate", that will
disable udev in the inte
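For anyone else hitting this, a rough sketch of that conversion on one OSD host; the path and id are placeholders, not taken from this thread:

$ sudo ceph-volume simple scan /var/lib/ceph/osd/ceph-42
$ sudo ceph-volume simple activate --all

scan captures the running ceph-disk OSD's metadata as JSON under /etc/ceph/osd/, and activate brings the OSD up from that JSON while disabling the ceph-disk systemd/udev units so they can no longer fire.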
While converting a luminous cluster from filestore to bluestore, we
are running into a weird race condition on a fairly regular basis.
We have a master script that writes upgrade scripts for each OSD
server. The script for an OSD looks like this:
ceph osd out 68
while ! ceph osd safe-to-destroy
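For context, a minimal sketch of what such a per-OSD conversion loop can look like; this is not the poster's actual script, OSD 68 is just the id from the message and /dev/sdX is a placeholder device:

# wait until the cluster can tolerate losing this OSD
ceph osd out 68
while ! ceph osd safe-to-destroy 68; do sleep 60; done
# stop it, destroy it (the id is kept for reuse), wipe the device, redeploy as bluestore
systemctl stop ceph-osd@68
ceph osd destroy 68 --yes-i-really-mean-it
dd if=/dev/zero of=/dev/sdX bs=1M count=4096
ceph-volume lvm create --bluestore --data /dev/sdX --osd-id 68
# mark it back in, since we marked it out at the start
ceph osd in 68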