logs (and if nothing interesting is in there, increase the debug
level and retry).
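In case it helps, a minimal sketch of how the rbd-mirror debug level
could be raised (the daemon name client.rbd-mirror.<id> is an assumption
on my part, adjust it to whatever your deployment actually registers):

  ceph config set client.rbd-mirror.<id> debug_rbd_mirror 20
  ceph config set client.rbd-mirror.<id> debug_rbd 20
  # revert once the logs have been captured
  ceph config rm client.rbd-mirror.<id> debug_rbd_mirror
  ceph config rm client.rbd-mirror.<id> debug_rbd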
Cheers,
--
Arthur Outhenin-Chalandre
Maybe he would know more about that, or whether something else could have
caused your issue.
Cheers,
--
Arthur Outhenin-Chalandre
> effectively a
> 30GB database, and 30GB of extra room for compaction?
I don't use cephadm, but it may be related to this regression:
https://tracker.ceph.com/issues/56031. At least the symptoms look very
similar...
Cheers,
--
Arthur Outhenin-Chalandre
d upon in this tracker
https://tracker.ceph.com/issues/56031. It's a pretty bad regression
IMO... The fix is already available (and I just opened the backports
this morning).
Cheers,
--
Arthur Outhenin-Chalandre
> with snapshots came from a Proxmox bug ...
> we will see
> (https://forum.proxmox.com/threads/possible-bug-after-upgrading-to-7-2-vm-freeze-if-backing-up-large-disks.109272/)
>
> have a great time ...
>
> ronny
>
> Am 2022-05-12 15:29, schrieb Arthur Outhenin-Chalandre:
>> On 5/1
Hi,
Not sure if that's expected, but the tarball for 16.2.9 apparently has
not been uploaded.
Cheers,
--
Arthur Outhenin-Chalandre
On 5/19/22 06:18, David Galloway wrote:
> 16.2.9 is a hotfix release to address a bug in 16.2.8 that can cause the
> MGRs to deadlock.
>
on every image; it would only
be for new volumes if people explicitly want that feature. So we are
probably not going to hit these performance issues that you are suffering
from for quite some time, and the scope of it should be limited...
--
Arthur Outhenin-Chalandre
On 5/12/22 13:25, ronny.lippold wrote:
> hi arthur and thanks for answering,
>
>
> Am 2022-05-12 13:06, schrieb Arthur Outhenin-Chalandre:
>> Hi Ronny
>
>>
>> Yes according to my test we were not able to have a good replication
>> speed on a single
> disabling the snapshot feature on the rbd images, the load goes
> down.
> disabling the rbd mirror processes did not help, load stays up.
If you have no rbd-mirror running while snapshot mirroring is enabled,
to me it means that the load comes from taking/deleting snapshots...
At what interv
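For what it's worth, a rough sketch of how the snapshot interval could be
inspected (the pool name is a placeholder):

  rbd mirror snapshot schedule ls --pool <pool> --recursive   # configured schedules
  rbd mirror snapshot schedule status --pool <pool>           # upcoming snapshot times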
future.
Cheers
--
Arthur Outhenin-Chalandre
On 5/9/22 10:22, Michel Niyoyita wrote:
> Hi Arthur ,
>
> Thanks for your help , I have curiosity of what you use for Sir . CLI ?
> or you have another method you use to manage your cluste
>
> Cheers
>
> Michel
>
> On
Hi Michel,
Sorry, I don't use the dashboard, so I can't help you with that part.
Cheers,
--
Arthur Outhenin-Chalandre
On 5/9/22 10:14, Michel Niyoyita wrote:
> Dear Arthur,
>
> Thanks for the recommandations it works, I changed the download url and
> it works .
> but
Hi Michel,
Well, either update the URL in ceph-ansible or change ceph-ansible's
behavior to use the dashboards shipped in the package, whichever you prefer.
Cheers,
--
Arthur Outhenin-Chalandre
On 5/9/22 08:27, Michel Niyoyita wrote:
> Hello Arthur ,
>
> What can you recommend me to resolve
ontent.com/ceph/ceph/pacific/monitoring/grafana/dashboards/ceph-cluster.json
> [...]
Yes, the dashboard JSON files were moved to
https://github.com/ceph/ceph/tree/master/monitoring/ceph-mixin/dashboards_out.
They are shipped in both the deb and rpm packages, so I don't know why
ceph-ansible
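As an illustration (assuming the ceph-cluster.json file name is unchanged
in the new location), a dashboard can also be fetched manually with
something like:

  curl -LO https://raw.githubusercontent.com/ceph/ceph/master/monitoring/ceph-mixin/dashboards_out/ceph-cluster.json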
for both clusters. So your rbd-mirror
needs to access at least the MONs and the OSDs of both clusters. Maybe
you could try to use a tunnel like WireGuard or OpenVPN?
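As a rough sketch (the addresses are placeholders), the connectivity to
the remote cluster could be checked with something like:

  # MONs listen on 3300 (msgr2) and 6789 (msgr1)
  nc -vz <remote-mon-ip> 3300
  nc -vz <remote-mon-ip> 6789
  # OSDs usually listen in the 6800-7300 range, which must be reachable too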
Cheers,
--
Arthur Outhenin-Chalandre
xx
leader: false
health: OK
service 149710160:
instance_id: 149710166
client_id: barn-rbd-mirror-c
hostname: barn-rbd-mirror-c.cern.ch
version: 15.2.xx
leader: false
health: OK
service 149781483:
instance_id: 149710136
client_id: barn-rbd-mirror-a
hostname: barn-rbd-mirror-a.
0S9#/. Feel free to test the
journal mode in your setup and report back to the list though; it could
be very interesting!
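If you do test it, a minimal sketch (pool/image names are placeholders,
and it assumes pool mirroring is already enabled in image mode):

  rbd feature enable <pool>/<image> journaling
  rbd mirror image enable <pool>/<image> journal
  rbd mirror image status <pool>/<image>   # check the replication state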
Cheers,
--
Arthur Outhenin-Chalandre
On 2/24/22 09:26, Arthur Outhenin-Chalandre wrote:
> On 2/23/22 21:43, Linkriver Technology wrote:
>> Could someone shed some light please? Assuming that snaptrim didn't run to
>> completion, how can I manually delete objects from now-removed snapshots? I
>> bel
ker [1]. You can
probably guess all the PGs that still need snaptrim by checking
snaptrimq_len with the command `ceph pg dump pgs`. Basically, all the PGs
that have a non-zero value need snaptrim, and you can trigger the
snaptrim by re-peering them.
[1]: https://tracker.ceph.com/issues/52
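A rough sketch of that check (requires jq; the JSON layout of
`ceph pg dump` can differ a bit between releases, so the jq path may need
adjusting):

  # list PGs with a non-zero snaptrim queue
  ceph pg dump pgs -f json 2>/dev/null | jq -r '.pg_stats[] | select(.snaptrimq_len > 0) | .pgid'
  # trigger snaptrim on one of them by re-peering it
  ceph pg repeer <pgid>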
ssed that the other mode also has the same behavior.
Cheers,
--
Arthur Outhenin-Chalandre
e-a so that replication
from site-b to site-a wouldn't be a thing at all.
That being said, we also run a setup where we only need one-way
replication, but for the same reasons posted by Ilya we use rx-tx and run
rbd-mirror in both sites.
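For reference, a sketch of how such an rx-tx peering is usually
bootstrapped (the pool name is a placeholder, the site names follow the
site-a/site-b naming above):

  # on site-a
  rbd mirror pool peer bootstrap create --site-name site-a <pool> > token
  # on site-b
  rbd mirror pool peer bootstrap import --site-name site-b --direction rx-tx <pool> token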
Cheers,
--
Arthur Outhenin-Chalandre
on site-a.
Cheers,
--
Arthur Outhenin-Chalandre
-ceph.
I didn't try the one-way replication myself with the snapshot mode, so I
can't say for sure, but there is an issue in 16.2.6 [1]. It has been
fixed and backported into 16.2.7; an update to that version may solve
your problem!
[1]: https://tracker.ceph.com/issues/52675
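After upgrading, something like this can be used to confirm which
versions the daemons are running and that replication resumed (pool name
is a placeholder):

  ceph versions                   # per-daemon-type version summary
  rbd mirror pool status <pool>   # overall mirroring health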
Cheers,
than the
journal mode. We have no practical production experience with it yet, but
we will soon roll this out into a "beta" phase for our users!
Cheers,
--
Arthur Outhenin-Chalandre
performance [1].
Cheers,
[1]:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/FPRB2DW4N427U25LEHYICOKI4C37BKSO/
--
Arthur Outhenin-Chalandre
but this was needed because I was stress-testing snapshots and there
were many, many objects that needed this snaptrim process. You could
probably increase this value for safety reasons; any value between 0.1s
and 3s (that you already tested!) is probably fine!
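For reference, a sketch of how such a value could be applied
cluster-wide; note that the excerpt does not name the option, so
osd_snap_trim_sleep is an assumption on my part:

  # assumption: the knob being discussed is osd_snap_trim_sleep (seconds between snaptrim ops)
  ceph config set osd osd_snap_trim_sleep 1
  ceph config get osd.0 osd_snap_trim_sleep   # verify the effective value on one OSD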
Cheers,
images to replicate among
themselves.
Cheers,
--
Arthur Outhenin-Chalandre
t
for it! We are interested in the possibility of adding a second remote
peer to facilitate the migration of the replication process to a third
cluster without stopping the replication to the existing remote cluster.
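A sketch of how an additional peer could be registered (names are
placeholders; the third cluster's conf/keyring must be reachable by the
local rbd-mirror daemon):

  rbd mirror pool peer add <pool> client.rbd-mirror-peer@<third-cluster>
  rbd mirror pool info <pool>   # lists the configured peers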
Cheers,
--
Arthur Outhenin-Chalandre
Cheers,
--
Arthur Outhenin-Chalandre