Hey guys,
I ran into a weird issue; hope you can explain what I'm observing. I'm
testing *Ceph 16.2.10* on *Ubuntu 20.04* in *Google Cloud VMs*. I created 3
instances and attached 4 persistent SSD disks to each instance. I can see
these disks attached as `/dev/sdb`, `/dev/sdc`, `/dev/sdd`, and `/dev/sde`.
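For anyone reproducing this setup, the attach step looks roughly like the
sketch below. The disk and instance names (`osd-disk-1`, `ceph-node-1`) and
the zone are hypothetical placeholders, not values from the original post:

```
# Create a persistent SSD and attach it to a VM (names/zone are hypothetical).
gcloud compute disks create osd-disk-1 --type=pd-ssd --size=100GB --zone=us-central1-a
gcloud compute instances attach-disk ceph-node-1 --disk=osd-disk-1 --zone=us-central1-a

# On the VM, the attached disks should then show up as /dev/sdb../dev/sde:
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```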
> …OSDs for you or you disable that (unmanaged=true) and run the manual steps
> again (although it's not really necessary).
>
> Regards,
> Eugen
>
> Quoting Oleksiy Stashok:
>
> > Hey guys,
> >
> > I ran into a weird issue, hope you can explain what I'…
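For context, the `unmanaged=true` suggestion above maps to cephadm's OSD
service specification. A minimal sketch, assuming the OSDs were deployed
through `ceph orch` with the all-available-devices service:

```
# Tell cephadm to stop creating OSDs on free devices automatically;
# existing OSDs keep running, the orchestrator just stops acting on the spec.
ceph orch apply osd --all-available-devices --unmanaged=true
```

With that in place, devices can be consumed manually (e.g. `ceph orch daemon
add osd <host>:<device>`) without the orchestrator racing the manual steps.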
Hey guys,
Could you please point me to the branch that will be used for the upcoming
16.2.11 release? I'd like to see the diff with 16.2.10 to better understand
what was fixed.
Thank you.
Oleksiy
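For anyone with the same question: assuming 16.2.11 is tagged from the
`pacific` branch like earlier 16.2.x point releases (an assumption, not
confirmed in this thread), the pending changes can be inspected locally:

```
# Compare the v16.2.10 tag against the tip of the pacific release branch.
git clone https://github.com/ceph/ceph.git && cd ceph
git log --oneline v16.2.10..origin/pacific
git diff --stat v16.2.10..origin/pacific
```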
Thank you all! It's exactly what I needed.
On Fri, Oct 28, 2022 at 8:51 AM Laura Flores wrote:
> Hi Christian,
>
> > There also is https://tracker.ceph.com/versions/656 which seems to be
> > tracking the open issues tagged for this particular point release.
>
> Yes, thank you for providing…

> …have a clean
> start.
>
> Quoting Oleksiy Stashok:
>
> > Hey Eugen,
> >
> > valid points. I first tried to provision OSDs via ceph-ansible (later
> > excluded), which does run the batch command with all 4 disk devices, but
> > it often failed with the s…
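For reference, the batch command ceph-ansible runs under the hood is
`ceph-volume lvm batch`. A minimal sketch of driving it directly against the
four devices from the original post:

```
# Preview the OSD layout without touching the disks, then create
# bluestore OSDs across all four data devices in one shot.
ceph-volume lvm batch --bluestore --report /dev/sdb /dev/sdc /dev/sdd /dev/sde
ceph-volume lvm batch --bluestore /dev/sdb /dev/sdc /dev/sdd /dev/sde
```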
Hey guys,
Is there a way to disable the legacy msgr v1 protocol for all Ceph services?
Thank you.
Oleksiy
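One possible approach (a sketch, assuming every daemon and client in the
cluster already speaks msgr v2, i.e. Nautilus or newer) is to stop binding
the legacy v1 port via the `ms_bind_msgr1` option:

```
# Stop daemons from binding the legacy msgr v1 port (6789 on the mons);
# each daemon must be restarted for this to take effect.
ceph config set global ms_bind_msgr1 false

# Verify the monitors now advertise v2 addresses only:
ceph mon dump
```

Note that clients which only speak v1 (e.g. very old kernel clients) would
lose access once v1 is disabled.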
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io