Hi Patrick,
I agree that learning Ceph today with Octopus is not a good idea, but,
as a newbie with this tool, I was not able to solve the HDD detection
problem, and my post about it on this forum did not bring any help
(https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/OPMWHJ4Z
On 9/18/23 11:19, Stefan Kooman wrote:
IIRC, the "enable dual stack" PRs were more or less "accidentally"
merged.
So this looks like a big NO on the dual stack support for Ceph.
I just need an answer, I do not need dual stack support.
It would be nice if the documentation was a little bit cl
Any input from anyone, please?
On Tue, 19 Sept 2023 at 09:01, Zakhar Kirpichenko wrote:
> Hi,
>
> Our Ceph 16.2.x cluster managed by cephadm is logging a lot of very
> detailed messages, Ceph logs alone on hosts with monitors and several OSDs
have already eaten through 50% of the endurance of t
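For context, the config knobs usually involved look something like this (a
sketch only; these are generic options whose defaults differ between
releases, so verify against your own cluster before applying anything):

ceph config set global log_to_file false            # stop daemons writing their own log files under /var/log/ceph
ceph config set mon mon_cluster_log_to_file false   # stop the mons writing the cluster log to disk
ceph config set mon mon_cluster_log_file_level info # or keep the file but lower its verbosity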
Hi,
We are unable to resolve these issues, and OSD restarts have made the Ceph
cluster unusable. We are wondering about downgrading Ceph from 18.2.0 to
17.2.6. Please let us know if this is supported and, if so, point me to
the procedure for doing so.
Thanks.
Thanks!
I hadn't even noticed the problem was in the standby; I just assumed it was
the active MDS.
Hi Ceph users and developers,
We invite you to join us at the User + Dev Relaunch, happening this
Thursday at 10:00 AM EST! See below for more meeting details. Also see this
blog post to read more about the relaunch:
https://ceph.io/en/news/blog/2023/user-dev-meeting-relaunch/
We have two guest s
I'm not sure off-hand. The module did have several changes as recently
as Pacific, so it's possible something is broken. Perhaps you don't
have a file system created yet? I would still expect to see the
commands however...
I suggest you figure out why Ceph Pacific+ can't detect your hard disk
drive
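A few quick checks usually show where the detection fails (assuming a
cephadm-based deployment; adjust if you deploy differently):

ceph orch device ls --wide              # what the orchestrator thinks is usable for OSDs
cephadm shell -- ceph-volume inventory  # the raw ceph-volume view from the host
lsblk -o NAME,SIZE,TYPE,ROTA,MODEL      # what the kernel itself sees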
Hi Patrick,
Sorry for the bad copy/paste. As it was not working, I have also tried
with the module name 😕
[ceph: root@mostha1 /]# ceph fs snap-schedule
no valid command found; 10 closest matches:
fs status []
fs volume ls
fs volume create []
fs volume rm []
fs subvolumegroup ls
fs subvolume
https://docs.ceph.com/en/quincy/cephfs/snap-schedule/#usage
ceph fs snap-schedule
(note the hyphen!)
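With the hyphenated form, usage along the lines of the linked docs looks like
this (path and intervals are just examples):

ceph fs snap-schedule add / 1h               # snapshot the file system root every hour
ceph fs snap-schedule retention add / h 24   # keep 24 of the hourly snapshots
ceph fs snap-schedule status /               # confirm the schedule is active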
On Tue, Sep 19, 2023 at 8:23 AM Patrick Begou
wrote:
>
> Hi,
>
> still some problems with snap_schedule, as the ceph fs snap-schedule
> namespace is not available on my nodes.
>
> [ceph: root@
Hi,
still some problems with snap_schedule, as the ceph fs snap-schedule
namespace is not available on my nodes.
[ceph: root@mostha1 /]# ceph mgr module ls | jq -r '.enabled_modules []'
cephadm
dashboard
iostat
prometheus
restful
snap_schedule
[ceph: root@mostha1 /]# ceph fs snap_schedule
n
Hi,
I've seen this issue mentioned in the past, but with older releases. So
I'm wondering if anybody has any pointers.
The Ceph cluster is running Pacific 16.2.13 on Ubuntu 20.04. Almost all
clients are working fine, with the exception of our backup server. This
is using the kernel CephFS client
Hi Venky,
As I said: There are no laggy OSDs. The maximum ping I have for any OSD
in ceph osd perf is around 60ms (just a handful, probably aging disks).
The vast majority of OSDs have ping times of less than 1ms. Same for the
host machines, yet I'm still seeing this message. It seems that the
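For what it's worth, this is roughly how I pull the slowest OSDs out of that
output (the column order of ceph osd perf is assumed to be id, commit
latency, apply latency):

ceph osd perf | tail -n +2 | sort -k2 -n | tail -10   # ten highest commit latencies (ms)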
Hi Janek,
On Mon, Sep 18, 2023 at 9:52 PM Janek Bevendorff <
janek.bevendo...@uni-weimar.de> wrote:
> Thanks! However, I still don't really understand why I am seeing this.
>
This is due to a change that was merged recently in Pacific:
https://github.com/ceph/ceph/pull/52270
The MDS wo
Hello Sam,
> For Mons and Mgrs, what stuff do I need to retain across the OS upgrade to
have things "just work" [since they're relatively stateless, I assume mostly
the /etc/ceph/ stuff and ceph cluster keys?]
> For the MDS, I assume it's similar to MGRS? The MDS, IIRC, mainly works as
a cachi
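As a rough sketch of what that means in practice on a non-cephadm mon/mgr
host (paths assume the traditional /var/lib/ceph layout; adjust for your
deployment):

# /etc/ceph holds ceph.conf and the client/admin keyrings; /var/lib/ceph/mon
# holds the mon store.db and keyring, /var/lib/ceph/mgr the mgr keyrings.
tar czf ceph-host-backup.tgz /etc/ceph /var/lib/ceph/mon /var/lib/ceph/mgr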
Hello,
In our deployment we are using a mix of s3 and s3website RGW. I’ve noticed
strange behaviour when sending range requests to the s3website RGWs that I’m
not able to replicate on the s3 ones.
I’ve created a simple wrk Lua script to test sending range requests on tiny
ranges so the issue
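A single request is enough to show the difference; something like this (with
placeholder endpoint and object names) reproduces what the script sends:

curl -s -D - -o /dev/null -H "Range: bytes=0-99" http://bucket.s3website.example.com/index.html
curl -s -D - -o /dev/null -H "Range: bytes=0-99" http://s3.example.com/bucket/index.html   # same object via the s3 endpoint, for comparison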
Hi,
bad question, sorry.
I've just run
ceph mgr module enable snap_schedule --force
to solve this problem. I was just afraid to use "--force" 😕 but as I
can break this test configuration
Patrick
Le 19/09/2023 à 09:47, Patrick Begou a écrit :
Hi,
I'm working on a small POC for a ceph
Hi,
I'm working on a small POC for a Ceph setup on 4 old PowerEdge C6100 nodes. I
had to install Octopus since newer versions were unable to detect the
HDDs (too old hardware?). No matter, this is only for training and
understanding the Ceph environment.
My installation is based on
https://downlo
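For reference, pinning the bootstrap to an Octopus container image looked
roughly like this (image tag and mon IP below are placeholders, and the
--image flag is how I understand cephadm selects a release):

cephadm --image quay.io/ceph/ceph:v15.2.17 bootstrap --mon-ip 192.0.2.10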
Hello Pedro,
This is a known bug in standby-replay MDS. Please see the links below and
patiently wait for the resolution. Restarting the standby-replay MDS will
clear the warning with zero client impact, and realistically, that's the
only thing (besides disabling the standby-replay MDS completely)
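On a cephadm-managed cluster the restart amounts to something like this (the
daemon name below is a placeholder):

ceph orch ps --daemon-type mds                    # find the standby-replay daemon
ceph orch daemon restart mds.cephfs.host1.abcdef  # restart just that daemon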