Yeah, we'll make sure the container images are built before announcing it.
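
For folks on cephadm/containers, a rough sketch of how one might confirm the fixed image has actually landed on Docker Hub before kicking off the upgrade (this assumes the usual ceph/ceph:v<version> tag naming and a cephadm-managed cluster; adjust for your own registry):

    # pull the tag first -- if it hasn't been published yet, this fails fast
    docker pull docker.io/ceph/ceph:v15.2.3

    # once the image is available, let cephadm drive the upgrade
    ceph orch upgrade start --ceph-version 15.2.3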

On 5/28/20 1:30 PM, David Orman wrote:
Due to the impact/severity of this issue, can we make sure the Docker images are pushed simultaneously for those of us using cephadm/containers (with the last release, there was a significant delay)? I'm glad the temporary fix is being put in place in short order; thank you for the quick turnaround and understanding.

On Thu, May 28, 2020 at 3:03 PM Josh Durgin <jdur...@redhat.com> wrote:

    Hi Paul, we're planning to release 15.2.3 with the workaround [0]
    tomorrow, so folks don't have to worry as we work on a more complete
    fix.

    Josh

    [0] https://github.com/ceph/ceph/pull/35293

    On 5/27/20 6:27 AM, Paul Emmerich wrote:
     > Hi,
     >
     > since this bug may lead to data loss when several OSDs crash at the
     > same time (e.g., after a power outage): can we pull the release from
     > the mirrors and Docker Hub?
     >
     > Paul
     >
     > --
     > Paul Emmerich
     >
     > Looking for help with your Ceph cluster? Contact us at https://croit.io
     >
     > croit GmbH
     > Freseniusstr. 31h
     > 81247 München
     > www.croit.io
     > Tel: +49 89 1896585 90
     >
     >
     > On Wed, May 20, 2020 at 7:18 PM Josh Durgin <jdur...@redhat.com> wrote:
     >
     >     Hi folks, at this time we recommend pausing OSD upgrades to 15.2.2.
     >
     >     There have been a couple of reports of OSDs crashing due to rocksdb
     >     corruption after upgrading to 15.2.2 [1] [2]. It's safe to upgrade
     >     monitors and mgr, but OSDs and everything else should wait.
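     >
     >     For reference, a minimal sketch of how to see where a cluster stands
     >     and hold an in-flight upgrade (the orch commands assume a
     >     cephadm-managed cluster; package-based installs can simply hold off
     >     on upgrading and restarting OSDs):
     >
     >         # how many mons/mgrs/OSDs are on which release?
     >         ceph versions
     >
     >         # cephadm only: check for, and pause, an in-flight upgrade
     >         ceph orch upgrade status
     >         ceph orch upgrade pause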
     >
     >     We're investigating and will get a fix out as soon as we can. You
     >     can follow progress on this tracker:
     >
     >     https://tracker.ceph.com/issues/45613
     >
     >     Josh
     >
     >     [1] https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/CX5PRFGL6UBFMOJC6CLUMLPMT4B2CXVQ/
     >     [2] https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/CWN7BNPGSRBKZHUF2D7MDXCOAE3U2ERU/

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
