Thanks!
As far as I can see, this is the same problem as mine.
Wed, 15 Dec 2021 at 16:49, Chris Dunlop :
> On Wed, Dec 15, 2021 at 02:05:05PM +1000, Michael Uleysky wrote:
> > I am trying to upgrade a three-node Nautilus cluster to Pacific. I am
> > updating ceph on one node and restarting daemons.
Hello Team,
After testing our cluster we removed and recreated all Ceph pools, which
cleaned up all users and buckets, but we can still see data on the disks.
Is there an easy way to clean up all OSDs without actually removing and
reconfiguring them?
What would be the best way to solve this?
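A minimal sketch of what checking this could look like, assuming a hypothetical pool name "testpool"; pool deletion has to be explicitly allowed, and the space is reclaimed asynchronously as the OSDs remove the deleted PGs, so usage can keep shrinking for a while after the pools are gone:

# ceph config set mon mon_allow_pool_delete true
# ceph osd pool delete testpool testpool --yes-i-really-really-mean-it
# ceph df
# ceph osd df tree

If "ceph df" no longer lists the old pools but "ceph osd df" still shows raw usage, that is usually just the background PG removal still in progress; the OSDs should not need to be zapped or recreated.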
The message is being held because:
Message has implicit destination
Hi
I'm having trouble getting snapshot replication to work. I have 2
clusters, 714-ceph on RHEL/16.2.0-146.el8cp and dcn-ceph on CentOS
Stream 8/16.2.6. I'm trying to enable one-way replication from 714-ceph ->
dcn-ceph.
Adding peer:
"
# rbd mirror pool info
Mode: image
Site Name: dcn-ceph
P
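A couple of status commands are usually the quickest way to see whether the peer and the rbd-mirror daemon are healthy; a hedged sketch, assuming the mirrored pool is named "rbd" and an actual image name is substituted for the placeholder:

# rbd --cluster dcn-ceph mirror pool status rbd --verbose
# rbd --cluster dcn-ceph mirror image status rbd/<image>

The first shows rbd-mirror daemon health plus a per-image summary, the second the last mirror snapshot synced for a single image.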
Hi Torkil,
On 12/15/21 09:45, Torkil Svensgaard wrote:
> I'm having trouble getting snapshot replication to work. I have 2
> clusters, 714-ceph on RHEL/16.2.0-146.el8cp and dcn-ceph on CentOS
> Stream 8/16.2.6. I'm trying to enable one-way replication from 714-ceph ->
> dcn-ceph.
I didn't try th
Hi,
Our total number of HDD OSDs is 40. 40 x 5.5 TB = 220 TB. We are using 3
replicas for every pool. So "MAX AVAIL" should show 220/3 = 73.3 TB. Am I right?
What is the meaning of "variance 1.x"? I think we might have a wrong
configuration, but need to find it.
We have some more SSD OSDs, yeah, total capaci
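For reference, "MAX AVAIL" in "ceph df" already divides by the replication factor, but it is also scaled to the fullest OSD reachable from the pool's CRUSH rule, so 220 TB raw / 3 ≈ 73.3 TB is only what you would see with perfectly balanced OSDs. "VAR" in "ceph osd df" is an OSD's utilization divided by the cluster average, so values well away from 1.00 point at uneven data distribution across the OSDs. A quick way to check both:

# ceph df detail
# ceph osd df tree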
Hi
I'm confused by the direction parameter in the documentation[1]. If I
have my data at site-a and want one-way replication to site-b, should the
mirroring be configured as in the documentation example, direction-wise?
E.g.
rbd --cluster site-a mirror pool peer bootstrap create --site-name
site
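For comparison, a one-way site-a -> site-b setup along the lines of the documentation would roughly be the following, assuming the mirrored pool is called "rbd" on both clusters and the token path is arbitrary:

# rbd --cluster site-a mirror pool peer bootstrap create --site-name site-a rbd > /tmp/bootstrap_token
# rbd --cluster site-b mirror pool peer bootstrap import --site-name site-b --direction rx-only rbd /tmp/bootstrap_token

The direction is picked on the importing (receiving) side; rx-only means site-b only pulls, while rx-tx keeps the door open for a later failback.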
On Wed, 15 Dec 2021 at 09:35, Marc wrote:
> The message is being held because:
>
> Message has implicit destination
Usually stuff like "the mailing list wasn't in the To: field, but only in CC: or BCC:"
--
May the most significant bit of your life be positive.
Hi Torkil,
On 12/15/21 13:24, Torkil Svensgaard wrote:
> I'm confused by the direction parameter in the documentation[1]. If I
> have my data at site-a and want one-way replication to site-b, should the
> mirroring be configured as in the documentation example, direction-wise?
What you are describin
On 15/12/2021 13.44, Arthur Outhenin-Chalandre wrote:
Hi Torkil,
Hi Arthur
On 12/15/21 13:24, Torkil Svensgaard wrote:
I'm confused by the direction parameter in the documentation[1]. If I
have my data at site-a and want one-way replication to site-b, should the
mirroring be configured as the
Hi Torkil,
I would recommend sticking to rx-tx to make a potential failback to
the primary cluster easier. There shouldn't be any issue with running
rbd-mirror daemons at both sites either -- it doesn't start replicating
until it is instructed to, either per-pool or per-image.
Thanks,
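To illustrate the per-pool/per-image part, enabling mirroring plus a snapshot schedule might look like the following sketch, with "rbd" and "vm-disk-1" as assumed pool and image names:

# rbd --cluster site-a mirror pool enable rbd image
# rbd --cluster site-a mirror image enable rbd/vm-disk-1 snapshot
# rbd --cluster site-a mirror snapshot schedule add --pool rbd --image vm-disk-1 1h

Until an image is enabled and a peer is configured on the receiving side, nothing is replicated.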
On 15/12/2021 10.17, Arthur Outhenin-Chalandre wrote:
Hi Torkil,
Hi Arthur
On 12/15/21 09:45, Torkil Svensgaard wrote:
I'm having trouble getting snapshot replication to work. I have 2
clusters, 714-ceph on RHEL/16.2.0-146.el8cp and dcn-ceph on CentOS
Stream 8/16.2.6. I'm trying to enable on
On 15/12/2021 13.58, Ilya Dryomov wrote:
Hi Torkil,
Hi Ilya
I would recommend sticking to rx-tx to make a potential failback to
the primary cluster easier. There shouldn't be any issue with running
rbd-mirror daemons at both sites either -- it doesn't start replicating
until it is instruc
On 12/15/21 13:50, Torkil Svensgaard wrote:
> Ah, so as long as I don't run the mirror daemons on site-a there is no
> risk of overwriting production data there?
To be perfectly clear, there should be no risk whatsoever (as Ilya also
said). I suggested not running rbd-mirror on site-a so that repli
Hmm that ticket came from the slightly unusual scenario where you were
deploying a *new* Pacific monitor against an Octopus cluster.
Michael, is your cluster deployed with cephadm? And is this a new or
previously-existing monitor?
On Wed, Dec 15, 2021 at 12:09 AM Michael Uleysky wrote:
>
> Thank
I created an RBD pool using only two SATA SSDs (one for data, another for
the database/WAL), and set the replica size to 1.
After that, I set up a fio test on the same host where the OSD is placed. I
found the latency is hundreds of microseconds (sixty microseconds for the
raw SATA SSD).
The fio outputs:
m-seqw
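A single-queue-depth test like that can be reproduced with fio's rbd engine roughly as follows; the pool, image and client names here are assumptions, and the job name just mirrors the one in the quoted output:

# fio --name=m-seqwr-004k-001q-001j --ioengine=rbd --clientname=admin \
      --pool=rbd-test --rbdname=fio-test --rw=write --bs=4k --iodepth=1 \
      --direct=1 --time_based --runtime=180

A few hundred microseconds per 4k write at iodepth=1 is in the range commonly reported for the full OSD path, versus tens of microseconds for the bare SATA SSD.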
On 15.12.21 05:59, Linh Vu wrote:
May not be directly related to your error, but they slap a DO NOT UPGRADE
FROM AN OLDER VERSION label on the Pacific release notes for a reason...
https://docs.ceph.com/en/latest/releases/pacific/
This is an unrelated issue (bluestore_fsck_quick_fix_on_mount)
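For anyone tracking that separate issue, the mitigation discussed at the time was to make sure the quick-fix fsck does not run on OSD start before upgrading; a hedged one-liner, not specific to this thread:

# ceph config set osd bluestore_fsck_quick_fix_on_mount false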
Is this not just inherent to SDS? And wait for the new OSD code; I think they
are working on it.
https://yourcmc.ru/wiki/Ceph_performance
>
> m-seqwr-004k-001q-001j: (groupid=0, jobs=1): err= 0: pid=46: Wed Dec 15 14:05:32 2021
>   write: IOPS=794, BW=3177KiB/s (3254kB/s)(559MiB/180002msec)
FWIW, we ran single OSD, iodepth=1 O_DSYNC write tests against classic
and crimson bluestore OSDs in our Q3 crimson slide deck. You can see the
results starting on slide 32 here:
https://docs.google.com/presentation/d/1eydyAFKRea8n-VniQzXKW8qkKM9GLVMJt2uDjipJjQA/edit#slide=id.gf880cf6296_1_73
Thanks Linh Vu, so it sounds like I should be prepared to bounce the OSDs
and/or hosts, but I haven't heard anyone yet say that it won't work, so I
guess there's that...
On Tue, Dec 14, 2021 at 7:48 PM Linh Vu wrote:
> I haven't tested this in Nautilus 14.2.22 (or any nautilus) but in
> Luminous
I've got Ceph running on Ubuntu 20.04 using Ceph-ansible, and I noticed
that the .deb files for NFS-ganesha aren't on download.ceph.com. It seems
the files should be here:
https://download.ceph.com/nfs-ganesha/deb-V3.5-stable/pacific but
"deb-V3.5-stable" doesn't exist. Poking around, I can see the
Hi Xiubo,
Thanks very much for looking into this; that does sound like what might
be happening in our case.
Is this something that can be improved somehow - would disabling pinning or
some config change help? Or could this be addressed in a future release?
It seems somehow excessive to write so
On 12/15/21 14:18, Arthur Outhenin-Chalandre wrote:
On 12/15/21 13:50, Torkil Svensgaard wrote:
Ah, so as long as I don't run the mirror daemons on site-a there is no
risk of overwriting production data there?
To be perfectly clear, there should be no risk whatsoever (as Ilya also
said). I sugg