On Fri, Jul 1, 2022 at 8:32 AM Ansgar Jazdzewski wrote:
>
> Hi folks,
>
> I did a little testing with the persistent write-back cache (*1). We
> run Ceph Quincy 17.2.1 and QEMU 6.2.0.
>
> rbd.fio works with the cache, but as soon as we start a VM we get something like
>
> error: internal error: process exite
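For anyone trying to reproduce this, a minimal sketch of enabling the cache per
pool, based on the Quincy docs (the pool name "libvirt-pool" and the "ssd" mode
are examples, not taken from the report above):

rbd config pool set libvirt-pool rbd_persistent_cache_mode ssd   # or "rwl" on PMEM
rbd config pool set libvirt-pool rbd_plugins pwl_cache           # load the write-back cache plugin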
I just upgraded a non-cephadm test cluster from Pacific 16.2.9 to Quincy
17.2.1. It all went very smoothly, but just a couple of comments about
the upgrade notes:
* Steps 5.2, 5.3 & 5.5 - the required command is "ceph fs status", not
"ceph status"
* Step 5.1 correctly requires "allow_standb
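For reference, a sketch of the commands those steps boil down to, assuming step
5.1 refers to the allow_standby_replay flag and a filesystem named "cephfs"
(both assumptions on my part):

ceph fs status                                  # steps 5.2, 5.3 & 5.5 (not "ceph status")
ceph fs set cephfs allow_standby_replay false   # step 5.1: disable standby-replay before upgrading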
OK, and when will it be backported to Pacific?
On 6/27/22 18:59, Neha Ojha wrote:
This issue should be addressed by https://github.com/ceph/ceph/pull/46860.
Thanks,
Neha
On Fri, Jun 24, 2022 at 2:53 AM Kenneth Waegeman wrote:
Hi,
I’ve updated the cluster to 17.2.0, but the log is still fill
On Wed, Jun 29, 2022 at 11:22 PM Curt wrote:
>
>
> On Wed, Jun 29, 2022 at 9:55 PM Stefan Kooman wrote:
>
>> On 6/29/22 19:34, Curt wrote:
>> > Hi Stefan,
>> >
>> > Thank you, that definitely helped. I bumped it to 20% for now and that's
>> > giving me around 124 PGs backfilling at 187 MiB/s,
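For anyone following along, a sketch of how such a bump can be applied,
assuming the 20% refers to the mgr's target_max_misplaced_ratio (the option is
not named explicitly in this part of the thread):

ceph config set mgr target_max_misplaced_ratio 0.20   # default is 0.05, i.e. 5%
ceph config get mgr target_max_misplaced_ratio        # verify the new value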
We noticed that our DNS settings were inconsistent and partially wrong.
NetworkManager had somehow set useless nameservers in the
/etc/resolv.conf of our hosts.
But in particular, the DNS settings in the MGR containers needed fixing
as well.
I fixed /etc/resolv.conf on our hosts and in the containers.
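A sketch of the checks involved, assuming a cephadm deployment and a daemon
name of mgr.osd-1 (both are examples, not from our actual setup):

cat /etc/resolv.conf                                     # on each cluster host
cephadm enter --name mgr.osd-1 -- cat /etc/resolv.conf   # the same check inside the MGR container
ceph mgr fail                                            # fail over the active MGR so it restarts with fixed DNS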
Hi,
Since Jun 28 04:05:58 postfix/smtpd[567382]: NOQUEUE: reject: RCPT from
unknown[158.69.70.147]: 450 4.7.25 Client host rejected: cannot find your
hostname, [158.69.70.147]; from=
helo=
The IP address was changed from 158.69.68.89 to 158.69.70.147, but the PTR
record was not moved at the same time.
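A quick way to confirm this from any host with dig installed (a hypothetical
check, not part of the original report):

dig -x 158.69.70.147 +short   # empty: the new address has no PTR record yet
dig -x 158.69.68.89 +short    # the hostname still resolves from the old address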
Dear Ceph community,
After upgrading our cluster to Quincy with cephadm (ceph orch upgrade start
--image quay.io/ceph/ceph:v17.2.1), I am struggling to re-activate the snapshot
schedule module:
0|0[root@osd-1 ~]# ceph mgr module enable snap_schedule
0|1[root@osd-1 ~]# ceph mgr module ls | grep snap
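A sketch of verifying the module once the enable command succeeds (the path "/"
below is only an example schedule root):

ceph mgr module ls | grep snap_schedule   # should list the module as enabled
ceph fs snap-schedule status /            # query schedule state for a path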
We found a fix for our issue of ceph orch reporting wrong/outdated service
information:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/DAFXD46NALFAFUBQEYODRIFWSD6SH2OL/
In our case, the DNS settings were messed up on the cluster hosts AND
also within the MGR daemon containers (ceph
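For anyone hitting the same symptom: once DNS is fixed, two commands that make
cephadm re-gather daemon state (a hedged sketch; the linked message has the
full story):

ceph orch ps --refresh   # ask the orchestrator to refresh its cached daemon inventory
ceph mgr fail            # heavier option: fail over the active MGR entirely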
On Fri, Jul 1, 2022 at 5:48 PM Konstantin Shalygin wrote:
>
> Hi,
>
> Since Jun 28 04:05:58 postfix/smtpd[567382]: NOQUEUE: reject: RCPT from
> unknown[158.69.70.147]: 450 4.7.25 Client host rejected: cannot find your
> hostname, [158.69.70.147]; from=
> helo=
>
> ipaddr was changed from 158.69
On 7/1/22 12:13, Ilya Dryomov wrote:
On Fri, Jul 1, 2022 at 5:48 PM Konstantin Shalygin wrote:
Hi,
Since Jun 28 04:05:58 postfix/smtpd[567382]: NOQUEUE: reject: RCPT from
unknown[158.69.70.147]: 450 4.7.25 Client host rejected: cannot find your hostname,
[158.69.70.147]; from= helo=
ipadd
Dear Ceph community,
over the last few years I have read pros and cons regarding swap for different
workloads and setups.
Recently I came across that question again for Ceph OSD nodes. The folks at croit
disable it entirely on their distribution and suggest it too ;) .... and from the
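If you do go swapless the way croit does, a minimal sketch of disabling swap on
a host; this is plain Linux administration, nothing Ceph-specific:

swapoff -a                              # disable swap immediately
sed -i '/\sswap\s/s/^/#/' /etc/fstab    # comment out swap entries so it stays off after reboot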
v14.2.22 Nautilus released
There was a RocksDB PR that was merged a few days ago that I suspect is
causing a regression. Details in this Tracker:
https://tracker.ceph.com/issues/55636.
It is Igor's PR, and I have asked him to take a look to verify whether it's
related. This is an important issue that I believe should be addressed
before Quincy's last point release.
Interesting thought. Thanks for the reply :)
I have a mgr running on that same node but that’s what happened when I tried to
spin up a monitor. I went back to the node based on this feedback, removed the
mgr instance so it had nothing on it. Deleted all the images and containers,
downloa
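Roughly, on a cephadm/podman host that sequence would look like the following
(the daemon name mgr.node1 is an example, and this is a sketch rather than the
exact commands used):

ceph orch daemon rm mgr.node1 --force   # remove the mgr daemon from the node
podman system prune -a                  # delete all unused containers and images on the host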
>
> This is an important issue that I believe should be addressed before
> Quincy's last point release.
Apologies, this should say *Octopus's* last point release.
- Laura
On Fri, Jul 1, 2022 at 4:30 PM Laura Flores wrote:
> There was a RocksDB PR that was merged a few days ago that I suspect