2023-08-15T18:15:55.356+ 7f7916ef3700 -1 *** Caught signal (Aborted) **
in thread 7f7916ef3700 thread_name:radosgw
ceph version 16.2.10 (45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)
1: /lib64/libpthread.so.0(+0x12ce0) [0x7f79da065ce0]
2: gsignal()
3: abort()
4: /lib64/libs
Hi everyone,
Join us tomorrow at 15:00 UTC to hear from our Google Summer of Code and
Outreachy interns, Devansh Singh and Medhavi Singh, in this next Ceph Tech Talk
on Making Teuthology Friendly.
https://ceph.io/en/community/tech-talks/
If you want to give a technical presentation for Ceph Tech Talks
I did find this question:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/XGLIXRMA5YUVG6P2W6WOQVTN4GJMX3GI/#XGLIXRMA5YUVG6P2W6WOQVTN4GJMX3GI
Seems "ceph mgr fail" worked for me in this case.
Hello,
I’ve had quite an unpleasant experience today that I would like to share.
In our setup we use two sets of RGWs: one that serves only the s3 and admin APIs, and a
second set with the s3website and admin APIs. I was changing the global quota
setting, which meant that I then needed to commit the updated
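For context, the change was along these lines (the values here are
placeholders; the period commit is the step that actually applies a global
quota change):

  radosgw-admin global quota set --quota-scope=bucket --max-objects=10000
  radosgw-admin global quota enable --quota-scope=bucket
  radosgw-admin period update --commit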
Hi everyone,
The User + Dev Monthly Meeting is happening next week on Thursday, August
24th @ 2:00 PM UTC at this link:
https://meet.jit.si/ceph-user-dev-monthly
(Note that the date has been rescheduled from the original date, August
17th.)
Please add any topics you'd like to discuss to the
you could maybe try running "ceph config set global container_image
quay.io/ceph/ceph:v16.2.9" before running the adoption. It seems it still
thinks it should be deploying mons with the default image
(docker.io/ceph/daemon-base:latest-pacific-devel) for some reason, and maybe
that config option is why.
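Something like this, assuming v16.2.9 is the image you actually want the
adopted daemons on (the tag is only an example):

  ceph config set global container_image quay.io/ceph/ceph:v16.2.9
  ceph config dump | grep container_image   # confirm what is set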
With the log-to-cluster level already on debug, if you do a "ceph mgr fail",
what does cephadm log to the cluster before it reports sleeping? It should
at least be doing something if it's responsive at all. Also, in "ceph orch
ps" and "ceph orch device ls", are the REFRESHED columns reporting that t
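For reference, the checks above as a rough sketch (the cephadm channel watch
is optional but usually shows whether the module is doing anything at all):

  ceph mgr fail
  ceph -W cephadm        # watch what cephadm logs to the cluster
  ceph orch ps           # REFRESHED column: when daemon state was last gathered
  ceph orch device ls    # REFRESHED column: when device state was last gathered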
We are happy to announce another release of the go-ceph API library. This is
a regular release following our every-two-months release cadence.
https://github.com/ceph/go-ceph/releases/tag/v0.23.0
The library includes bindings that aim to play a similar role to the "pybind"
python bindings in the
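For module users, picking up the new release is the usual go get (module path
as in the release link above):

  go get github.com/ceph/go-ceph@v0.23.0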
I recently updated one of the hosts (an older Dell PowerEdge R515) in my Ceph
Quincy (17.2.6) cluster. I needed to change the IP address, so I removed the
host from the cluster (gracefully removed OSDs and daemons, then removed the
host). I also took the opportunity to upgrade the host from Rock
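(Roughly the standard sequence, for reference; the hostname is a placeholder:)

  ceph orch host drain <host>    # drains daemons and schedules OSD removal
  ceph orch osd rm status        # wait for the removals to complete
  ceph orch host rm <host>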
Hi Everyone,
We are looking at migrating all our production clusters from ceph-ansible to
cephadm. We are currently experiencing an issue where when reconfiguring a
service through ceph orch, it will change the running container image for that
service which has led to the mgr services running a
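For reference, one way to see which image each daemon is actually running
(a sketch; the field name comes from the orchestrator's YAML output):

  ceph orch ps --format yaml | grep container_image_name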
I'd like to try reef, but we are on debian 11 (bullseye).
In the ceph repos, there is debian-quincy/bullseye and
debian-quincy/focal, but under reef there is only focal & jammy.
Is there a reason why there is no reef/bullseye build? I had thought
that the blocker only affected debian-bookworm
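For comparison, the quincy entry that does exist is published in the usual
layout (a sketch of the apt line in question):

  deb https://download.ceph.com/debian-quincy/ bullseye main

whereas the debian-reef/ directory currently has no bullseye equivalent.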
On 8/15/23 11:16, Curt wrote:
Probably not the issue, but do all your OSD servers have internet
access? I've had a similar experience when one of our OSD servers'
default gateway got changed, so it was just waiting to download and took
a while to time out.
Yes, all nodes can manually pull the im
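(A quick way to check both things on a host, as a sketch; swap in docker and
your actual target tag as appropriate:)

  podman pull quay.io/ceph/ceph:v16.2.13
  ip route show default   # confirm the default gateway is still what you expect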
On 8/15/23 11:02, Eugen Block wrote:
I guess I would start looking on the nodes where it failed to upgrade
OSDs and check out the cephadm.log as well as syslog. Did you see
progress messages in the mgr log for the successfully updated OSDs (or
MON/MGR)?
The issue is that there is no informatio
Hi,
literally minutes before your email popped up in my inbox I had
announced that I would upgrade our cluster from 16.2.10 to 16.2.13
tomorrow. Now I'm hesitating. ;-)
I guess I would start looking on the nodes where it failed to upgrade
OSDs and check out the cephadm.log as well as syslog
Hi,
A healthy 16.2.7 cluster should get an upgrade to 16.2.13.
ceph orch upgrade start --ceph-version 16.2.13
did upgrade MONs, MGRs and 25% of the OSDs and is now stuck.
We tried several "ceph orch upgrade stop" and starts again.
We "failed" the active MGR but no progress.
We set the debug lo
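For reference, roughly the commands we cycled through, plus the status check
we keep re-running:

  ceph orch upgrade status
  ceph orch upgrade stop
  ceph orch upgrade start --ceph-version 16.2.13
  ceph mgr fail
  ceph -W cephadm   # watch for progress messages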