Hello,
Whenever a node in the cluster reboots, I get some corrupted OSDs. Is there
any config I should set to prevent this from happening that I am not aware
of?
Here is the error log:
# kubectl logs rook-ceph-osd-1-5dcbd99cc7-2l5g2 -c expand-bluefs
ceph version 18.2.2 (531c0d11a1c5d39fbfe6aa8a5
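A minimal sketch of the kind of checks involved, assuming a planned reboot and
assuming the OSD's BlueStore data is reachable at the usual
/var/lib/ceph/osd/ceph-<id> path (in a Rook cluster the exact path, and the pod
you have to exec into, will differ):

# ceph osd set noout
    (before a planned reboot, so the cluster does not mark the OSDs out and start rebalancing)
# ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-1
    (read-only consistency check of the BlueStore data for the affected OSD)
# ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-1
    (only if fsck reports repairable errors; run with the OSD stopped)
# ceph osd unset noout
    (once the node and its OSDs are back up)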
Hi all,
Is there any production-ready sample configuration for HAProxy?
Or are there any other suggestions for limiting RGW bandwidth for users?
On Wed, 19 Aug 2020 at 20:11, Anthony D'Atri wrote:
>
>
> > I want to limit the traffic of specific buckets. Can haproxy, nginx or any
> > other proxy software deal with this?
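Not a production example, but a minimal sketch of one approach with HAProxy
stick-tables: track the outbound byte rate per bucket (path-style access
assumed) and refuse new requests once a bucket goes over a threshold. All
names, addresses, ports and limits below are made up, and this throttles by
rejecting requests rather than shaping bandwidth:

frontend rgw_in
    bind *:8080
    default_backend rgw_out
    # one stick-table entry per bucket name, keeping a 1-minute outbound byte rate
    stick-table type string len 128 size 10k expire 10m store bytes_out_rate(1m)
    # track the first path component, i.e. the bucket in path-style requests
    http-request track-sc0 path,field(2,/)
    # crude throttle: reject requests for a bucket once it has pushed more
    # than ~100 MB to clients within the last minute
    http-request deny deny_status 429 if { sc_bytes_out_rate(0) gt 104857600 }

backend rgw_out
    balance roundrobin
    server rgw1 192.168.0.11:7480 check
    server rgw2 192.168.0.12:7480 check

Per-user limits would need a key derived from the request instead of the
bucket name, e.g. the S3 access key parsed out of the Authorization header.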
Hi
I'm using the Pacific version with cephadm. After a failed upgrade from
16.2.7 to 17.2.2, 2/3 MGR nodes stopped working (this is a known upgrade
bug) and the orchestrator also didn't respond to roll back the services, so
I had to remove the daemons and add the correct ones manually by running
this
de to Quincy, which
updated automatically.
If I understand correctly, migration_current is somehow a safety feature in
the upgrade.
If you have more info, please let me know.
Regards,
Reza
On Mon, 29 Aug 2022 at 10:50, Reza Bakhshayeshi wrote:
> Hi
>
> I'm using the Pacific version with cephadm.
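For anyone hitting the same state, a rough sketch of the kind of commands
involved (daemon names and hosts are placeholders, not the exact commands from
the original message):

# ceph orch ps --daemon-type mgr
    (list the mgr daemons and the versions they are running)
# ceph mgr fail
    (fail over to a standby if the active mgr / orchestrator is unresponsive)
# ceph orch daemon rm mgr.host2.xxxxxx --force
    (remove a broken mgr daemon; the name comes from the "ceph orch ps" output)
# ceph orch apply mgr --placement="host1 host2 host3"
    (let the orchestrator recreate the mgr daemons on the intended hosts)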
Hi all,
I have a problem upgrading a Ceph cluster from Pacific to Quincy with
cephadm. I have successfully upgraded the cluster to the latest Pacific
(16.2.11), but when I run the following command to upgrade the cluster to
17.2.5, the upgrade process stops after upgrading 3/4 of the MGRs.
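The command itself is cut off above; for reference, a typical invocation plus
the status checks one might run while the upgrade is stalled (not taken from
the original message):

# ceph orch upgrade start --ceph-version 17.2.5
# ceph orch upgrade status
    (shows the target image, progress, and any error the upgrade is stuck on)
# ceph -W cephadm
    (watch the cephadm module log to see why it stopped after the mgrs)
# ceph orch upgrade stop
    (pause the upgrade while investigating)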
> Cephadm changed the backend SSH library from Pacific to Quincy because the
> one used in Pacific is no longer supported, so it's possible some general
> SSH error has popped up in your env as a result.
>
> On Thu, Apr 6, 2023 at 8:38 AM Reza Bakhshayeshi wrote:
>
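In case it helps to rule out a general SSH problem first, roughly the check
from the cephadm troubleshooting docs, pulling out the SSH config and key the
mgr uses and testing the connection by hand (hostname and user are
placeholders):

# ceph cephadm get-ssh-config > /tmp/cephadm_ssh_config
# ceph config-key get mgr/cephadm/ssh_identity_key > /tmp/cephadm_key
# chmod 0600 /tmp/cephadm_key
# ssh -F /tmp/cephadm_ssh_config -i /tmp/cephadm_key root@host2
    (if this fails, the stalled upgrade is an SSH/host problem rather than cephadm itself)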
> (https://docs.ceph.com/en/pacific/cephadm/install/#further-information-about-cephadm-bootstrap,
> see the point on "--ssh-user"). Did this actually work for you before in
> pacific with a non-root user that doesn't have sudo privileges? I had
> assumed that had never worked.
>
me, we
> might have to work on something in order to handle this case (making the
> sudo optional somehow). As mentioned in the previous email, that setup
> wasn't intended to be supported even in pacific, although if it did work,
> we could bring something in to make it usable in Quincy as well.
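For completeness, a sketch of what the non-root setup is supposed to look like
(the user name "cephadmin" and the host are placeholders): the user needs
passwordless sudo on every host, i.e. a sudoers entry along the lines of
"cephadmin ALL=(ALL) NOPASSWD: ALL", plus the cluster's SSH key:

# ceph cephadm set-user cephadmin
    (tell cephadm which SSH user to use)
# ceph cephadm get-pub-key > ceph.pub
# ssh-copy-id -f -i ceph.pub cephadmin@host2
    (repeat for every host in the cluster)
# ceph cephadm check-host host2
    (verify cephadm can log in and run its checks there)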
Hello,
What is the best strategy regarding failure domain and rack awareness when
there are only 2 physical racks and we need 3 replicas of data?
In this scenario, what is your point of view if we create at least 4
artificial racks to be able to manage deliberate node maintenance in a more
efficient way?
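To make the idea concrete, a sketch of what the artificial racks would look
like in CRUSH (rack, host and pool names are placeholders; note that with only
2 physical racks and size 3, at least two replicas still share a physical
rack, so this mainly helps with host-level maintenance rather than surviving
the loss of a whole rack):

# ceph osd crush add-bucket rack1 rack
# ceph osd crush move rack1 root=default
    (repeat for rack2, rack3 and rack4)
# ceph osd crush move node1 rack=rack1
    (move each host under the CRUSH rack it should count as)
# ceph osd crush rule create-replicated rack_rule default rack
# ceph osd pool set mypool crush_rule rack_rule
    (apply the rack failure-domain rule to each replicated pool)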