Too many blocked OSDs to ever recover this Ceph cluster, I fear...
Thanks,
Paul
*******
Paul Browne
Research Computing Platforms
University Information Services
Roger Needham Building
JJ Thompson Avenue
University of Cambridge
Cambridge
United Kingdom
E-Mail: pf...@cam.ac.uk
ephemeral than it seems to be, and so instantiating new DB devices mapped to the
correct OSD devices (via LUKS key) would allow the down+out OSDs to be restarted.
But from updates on this thread, that's increasingly looking not to be possible.
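For the record, one low-level way of attaching a fresh DB device to an existing
BlueStore OSD would be something like the below; this is only a sketch, the OSD id
and target LV are placeholders, and the OSD has to be stopped first:

  # run from within the OSD's environment (e.g. cephadm shell --name osd.12), OSD stopped
  ceph-bluestore-tool bluefs-bdev-new-db \
      --path /var/lib/ceph/osd/ceph-12 \
      --dev-target /dev/ceph-db-vg/db-osd12
  # the LVM tags/symlinks may still need updating so that activation finds the new DB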
***
Paul Browne
intact HDD OSDs which have links to dead block.DB devices, using
native cephadm tooling rather than getting as low-level as all of the above?
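For reference, the affected OSDs are easy enough to spot, assuming the standard
cephadm layout under /var/lib/ceph/<fsid>, e.g.:

  # on each OSD host; a dangling symlink here means the block.DB device is gone
  ls -l /var/lib/ceph/*/osd.*/block.db

It's re-creating those DB devices through the orchestrator that I can't see an
obvious route for.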
Many thanks for any advice,
***
Paul Browne
Deployment of the RGW container to the host is similarly blocked again.
This seems to be covered by this ceph-volume issue,
https://tracker.ceph.com/issues/48694, but it may not have been addressed as yet...
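As for the dangling container state itself, a manual cleanup along these lines may
be enough in the meantime; this is only a sketch, and the rgw daemon name and fsid
below are placeholders:

  # check what the orchestrator believes is deployed
  ceph orch ps --daemon-type rgw
  # remove the stale daemon record (and its container) via the orchestrator
  ceph orch daemon rm rgw.default.host1.abcdef --force
  # or, directly on the affected host, bypassing the orchestrator
  cephadm rm-daemon --fsid <cluster fsid> --name rgw.default.host1.abcdef --force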
On Mon, 11 Jan 2021 at 10:36, Paul Browne wrote:
> Hello all,
>
> I'...
> ...s to clean up the dangling container state?
--
***
Paul Browne
On Wed, 29 Jan 2020 at 16:52, Matthew Vernon wrote:
> Hi,
>
> On 29/01/2020 16:40, Paul Browne wrote:
>
> > Recently we deployed a brand new Stein cluster, however, and I'm curious
> > whether the idea of pointing the new OpenStack cluster at the same RBD
> >
pool having different feature lists?
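Something like the following should show the feature lists in question on each
side; the pool and image names here are just examples:

  # features on an existing image in the shared pool
  rbd info volumes/test-volume | grep features
  # default features that newer clients will apply to newly created images
  ceph config get client rbd_default_features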
--
***
Paul Browne