On Mon, Aug 5, 2024 at 10:32 PM Yuri Weinstein wrote:
Details of this release are summarized here:
https://tracker.ceph.com/issues/67340#note-1
Release Notes - N/A
LRC upgrade - N/A
Gibba upgrade - TBD
Seeking approvals/reviews for:
rados - Radek, Laura (https://github.com/ceph/ceph/pull/59020 is being
tested and will be cherry-picked when ready)
On 05.08.24 18:38, Nicola Mori wrote:
> docker.io/snack14/ceph-wizard
This is not an official container image.
The images from the Ceph project are on quay.io/ceph/ceph.
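As a rough sketch (the version tag here is assumed from this thread), the running
upgrade could be stopped and re-pointed at the official image:
# ceph orch upgrade stop
# ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.4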
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Dear Ceph users,
during an upgrade from 18.2.2 to 18.2.4 the image pull from Docker Hub
failed on one machine running a monitor daemon, while it had succeeded on
the machines upgraded before it.
# ceph orch upgrade status
{
"target_image":
"snack14/ceph-wizard@sha256:b1994328eb078778abdba0a17a7cf7b371e7d95
Describe your hardware, please, and are you talking about an orderly "shutdown -r"
reboot, or a kernel / system crash or power loss?
Often corruptions like this are a result of:
* Using non-enterprise SSDs that lack power loss protection
* Buggy / defective RAID HBAs
* Enabling volatile write cache on the drives (see the check below)
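For that last point: on SATA/SAS drives the volatile write cache can usually be
inspected and disabled with hdparm (the device name is a placeholder; NVMe drives
and some HBAs need their own tooling):
# hdparm -W /dev/sdX
# hdparm -W 0 /dev/sdX
The first command reports whether the drive's write cache is enabled, the second
disables it.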
Hi Robert,
I saw nothing unusual there.
Regards,
Marianne
On 5 Aug 2024, at 11:42, Robert Sander wrote:
> Hi Marianne,
>
> is there anything in the kernel logs of the VMs and the hosts where the VMs
> are running with regard to the VM storage?
>
> Regards
Hi Alex,
thank you for the script. We will monitor how the queue fills up to see if
this is the issue or not.
Cheers,
Florian
> On 5. Aug 2024, at 14:01, Alex Hussein-Kershaw (HE/HIM)
> wrote:
>
> Hi Florian,
>
> We are also gearing up to use persistent bucket notifications, but have not
The setting can technically be applied at bootstrap. Bootstrap supports a
`--config` flag that takes the path to a ceph config file. That file
could include
[mgr]
mgr/cephadm/use_repo_digest = false
The config file is assimilated into the cluster as part of bootstrap, so
use_repo_digest will be in effect from the start.
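As a minimal sketch (mon IP and file name are placeholders):
# cat initial-ceph.conf
[mgr]
mgr/cephadm/use_repo_digest = false
# cephadm bootstrap --mon-ip 192.0.2.10 --config initial-ceph.conf
Afterwards the value can be verified with "ceph config get mgr
mgr/cephadm/use_repo_digest".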
Hi Florian,
We are also gearing up to use persistent bucket notifications, but have not got
as far as you yet, so I'm quite interested in this. As I understand it, a bunch of
new functionality is coming in Squid on the radosgw-admin command to allow gathering
metrics from the queues, but they are not yet available
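If that lands as described, the queue depth of a persistent topic should become
readable with something along these lines on Squid (the command and flag names are
an assumption based on that description, not verified against the Squid CLI):
# radosgw-admin topic stats --topic <topic-name>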
Hi,
we just set up two new Ceph clusters (using Rook). To process user activity
we configured a topic that sends events to Kafka.
After 5-12 hours this stops working with a 503 SlowDown response:
debug 2024-08-02T09:17:58.205+ 7ff4359ad700 1 req 13681579273117692719 0.005000
Hi Marianne,
is there anything in the kernel logs of the VMs and the hosts where the
VMs are running with regard to the VM storage?
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Hello,
Whenever a node in the cluster reboots I get some corrupted OSDs. Is there
any config I should set, that I am not aware of, to prevent this from
happening?
Here is the error log:
# kubectl logs rook-ceph-osd-1-5dcbd99cc7-2l5g2 -c expand-bluefs
ceph version 18.2.2 (531c0d11a1c5d39fbfe6aa8a5
Thanks! The initial email was from a month ago and I think it only made its way
onto the mailing list recently, as I was having trouble getting signed up.
Unfortunately that means I no longer have a system in this state.
I have since gotten through the bootstrapping issues by running a docker registry