Dear Eugen,
We have a DC and DR replication scenario and plan to explore RBD
mirroring with both the journaling and snapshot mechanisms.
I have 5 TB of storage at the primary DC and 5 TB of storage at the DR site, with two
different Ceph clusters configured.
Please clarify the following queries:
1. With One wa
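FWIW, while you explore this, here is a rough sketch of enabling snapshot-based mirroring between two clusters; pool, image and site names are placeholders, not taken from your setup:

# on the primary cluster
rbd mirror pool enable mypool image
rbd mirror image enable mypool/myimage snapshot
rbd mirror snapshot schedule add --pool mypool 1h
rbd mirror pool peer bootstrap create --site-name dc mypool > /tmp/bootstrap_token

# on the DR cluster (--direction rx-only keeps replication one-way)
rbd mirror pool peer bootstrap import --site-name dr --direction rx-only mypool /tmp/bootstrap_token

Journal-based mirroring would instead use "rbd feature enable mypool/myimage journaling" and "rbd mirror image enable mypool/myimage journal"; in both modes an rbd-mirror daemon must be running at the receiving site.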
Hi,
I would suggest wiping the disks first with "wipefs -af /dev/your_disk" or
"sgdisk --zap-all /dev/your_disk" and trying again. Try only one disk first.
Is the host visible when you run "ceph orch host ls"? Is the
FQDN correct? If so, does the following command return any errors?
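For completeness, these are the checks typically worth running at this point (host name and device path are placeholders):

ceph orch host ls
ceph cephadm check-host myhost
ceph orch device ls myhost --refresh
ceph orch device zap myhost /dev/your_disk --force   # only if you are sure; this destroys data on the disk

The zap step is optional and only needed if the device still shows as unavailable after wiping.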
Hi,
Yes, like it always does.
k
Sent from my iPhone
> On 2 May 2024, at 07:09, Nima AbolhassanBeigi
> wrote:
>
> We are trying to upgrade our OS version from ubuntu 18.04 to ubuntu 22.04.
> Our ceph cluster version is 16.2.13 (pacific).
>
> The problem is that the ubuntu packages for the ceph
Hi Mark,
On Thu, May 2, 2024 at 3:18 AM Mark Nelson wrote:
> For our customers we are still disabling mclock and using wpq. Might be
> worth trying.
>
>
Could you please elaborate a bit on the issue(s) preventing the
use of mClock? Is this specific to only the slow backfill rate and/or other
issues?
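FWIW, the switch back to wpq itself is just a config change plus OSD restarts; a rough sketch:

ceph config set osd osd_op_queue wpq
# restart the OSDs for the change to take effect
ceph config show osd.0 osd_op_queue   # verify on a restarted OSD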
Hi Götz,
Please see my response below.
On Tue, Apr 30, 2024 at 7:39 PM Pierre Riteau wrote:
> Hi Götz,
>
> You can change the value of osd_max_backfills (for all OSDs or specific
> ones) using `ceph config`, but you need to
> enable osd_mclock_override_recovery_settings. See
>
> https://docs.ceph.
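Spelled out as commands (a sketch; adjust the value and OSD id to your environment):

ceph config set osd osd_mclock_override_recovery_settings true
ceph config set osd osd_max_backfills 4        # all OSDs
ceph config set osd.12 osd_max_backfills 2     # or a specific OSD
ceph config show osd.12 osd_max_backfills      # check what the daemon actually uses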
Hello All,
I'm hoping I can get some help with an issue in the dashboard after doing a
recent bare-metal Ceph upgrade from
Octopus to Quincy.
** Please note: this document references it only being an issue with the images
tab; shortly after this, I found the same issue on another cluster that wa
Hello David, did you resolve it? I have the same problem for rgw. I upgraded
from N to P.
Regards,
Jie
Hi Stefan ... you are the hero of the month ;)
I don't know why I didn't find your bug report.
I have the exact same problem and resolved the HEALTH warning only with "ceph
osd force_healthy_stretch_mode --yes-i-really-mean-it".
I will comment on the report soon.
Actually, we are thinking about a 4/2 size withou
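Assuming 4/2 here refers to size/min_size, those are just pool-level settings; a sketch with a placeholder pool name:

ceph osd pool set mypool size 4
ceph osd pool set mypool min_size 2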
Attached is a copy of the "Launch of the Ceph User Council" slides.
On Sat, Apr 27, 2024 at 8:12 AM Matt Vandermeulen
wrote:
> Hi folks!
>
> Thanks for a great Ceph Day event in NYC! I wanted to make sure I posted
> my slides before I forget (and encourage others to do the same). Feel
> free to
Hi,
We are trying to upgrade our OS version from ubuntu 18.04 to ubuntu 22.04.
Our ceph cluster version is 16.2.13 (pacific).
The problem is that the ubuntu packages for the ceph pacific release will
not be supported for ubuntu 22.04. We were wondering if the ceph client
(version 18.2, reef) on u
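If the immediate need is just a recent client on 22.04, the upstream repositories do carry jammy builds for newer releases; a sketch assuming the Reef repo:

sudo mkdir -p /etc/apt/keyrings
sudo curl -fsSL https://download.ceph.com/keys/release.asc -o /etc/apt/keyrings/ceph.asc
echo "deb [signed-by=/etc/apt/keyrings/ceph.asc] https://download.ceph.com/debian-reef/ jammy main" | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt update && sudo apt install ceph-common

Whether an 18.2 client against a 16.2 cluster is acceptable for your workloads is the real question; treat this only as the packaging side of it.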
Hello,
I had a problem after I finished the 'cephadm adopt' of the mon and mgr services into Docker
containers. The fsid reported by `ceph -s` is not the same as the one in
/etc/ceph/ceph.conf. The ceph.conf is correct, but `ceph -s` is incorrect. I
followed the https://docs.ceph.com/en/quincy/cephadm/adoption/ guide.
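A few things worth checking, as a sketch (none of this is specific to your setup):

ceph fsid                          # fsid the cluster itself reports
grep fsid /etc/ceph/ceph.conf      # fsid your client config points at
cephadm ls | grep fsid             # fsid(s) of the adopted containers
ls /var/lib/ceph/                  # one directory per fsid present on the host

If `ceph -s` reports a different fsid than ceph.conf, the CLI may be picking up another configuration, e.g. via the CEPH_CONF environment variable or a leftover /etc/ceph/*.conf; `cephadm shell -- ceph -s` runs against the adopted cluster's own config and is a useful cross-check.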
I'm trying to add a new storage host into a Ceph cluster (quincy 17.2.6). The
machine has boot drives, one free SSD and 10 HDDs. The plan is to have each HDD
be an OSD with a DB on an equal-size LVM volume of the SSD. This machine is newer but
otherwise similar to other machines already in the cluster t
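For that layout, a drive group / OSD service spec is usually the easiest route; a sketch with a placeholder host name, to be checked with --dry-run first:

cat > osd_spec.yaml <<'EOF'
service_type: osd
service_id: hdd-with-ssd-db
placement:
  hosts:
    - newhost
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
EOF
ceph orch apply -i osd_spec.yaml --dry-run

Dropping --dry-run applies the spec; by default ceph-volume splits the SSD into one equally sized DB LV per HDD.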
Hi Maged,
On Thu, May 2, 2024 at 5:34 Maged Mokhtar wrote:
>
> On 01/05/2024 16:12, Satoru Takeuchi wrote:
> > I confirmed that incomplete data is left on `rbd import-diff` failure.
> > I guess that this data is part of the snapshot. Could someone answer
> > me the following questions?
> >
> > Q1. Is it safe to
We've run into a problem during the last verification steps before
publishing this release after upgrading the LRC to it =>
https://tracker.ceph.com/issues/65733
After this issue is resolved, we will continue testing and publishing
this point release.
Thanks for your patience!
On Thu, Apr 18, 2
For our customers we are still disabling mclock and using wpq. Might be
worth trying.
Mark
On 4/30/24 09:08, Pierre Riteau wrote:
Hi Götz,
You can change the value of osd_max_backfills (for all OSDs or specific
ones) using `ceph config`, but you need to
enable osd_mclock_override_recovery_settings.
On 01/05/2024 16:12, Satoru Takeuchi wrote:
I confirmed that incomplete data is left on `rbd import-diff` failure.
I guess that this data is part of the snapshot. Could someone answer
me the following questions?
Q1. Is it safe to use the RBD image (e.g. client I/O and snapshot
management) even though incomplete data exists?
Hello Saif,
Unfortunately, I have no other ideas that could help you.
On Wed, May 1, 2024 at 4:48 PM Saif Mohammad wrote:
>
> Hi Alexander,
>
> We have configured the parameters in our infrastructure to fix the issue,
> and despite tuning them or even setting them to higher levels, the issue still persists.
I confirmed that incomplete data is left on `rbd import-diff` failure.
I guess that this data is part of the snapshot. Could someone answer
me the following questions?
Q1. Is it safe to use the RBD image (e.g. client I/O and snapshot
management) even though incomplete data exists?
Q2. Is there any
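Regarding recovery, here is a sketch of one possible retry path, assuming the destination still has the last snapshot that was imported completely (names are placeholders):

rbd snap ls mypool/myimage                       # on the destination: confirm the last common snapshot
rbd snap rollback mypool/myimage@last_good_snap  # discard the partially imported data
rbd export-diff --from-snap last_good_snap mypool/myimage@next_snap - | rbd import-diff - mypool/myimage

This is not an answer to Q1/Q2 as such; it is only one way to get back to a known-good state before re-running the transfer.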
Hi Alexander,
We have configured the parameters in our infrastructure to fix the issue, and
despite tuning them or even setting them to higher levels, the issue still
persists. We have shared the latency between the DC and DR site for your
reference. Please advise on alternative solutions to res