Hey Frank,
I hate to sound like a broken record here, but if you can access any of
the directories handled by rank 2, try running a 'find /path/to/dir/ -ls'
on some of them and see if num_strays decreases. I've seen that help the
last time we had an MDS in that state.
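Roughly what I mean, with the path and daemon name as placeholders:

# walk the tree so the MDS has to touch those dentries again
find /mnt/cephfs/path/to/dir -ls > /dev/null
# then watch the stray counters
ceph daemon mds.X perf dump | grep num_strays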
Frank,
Are you able to share an up-to-date 'ceph config dump' and 'ceph daemon
mds.X perf dump | grep strays' from the cluster?
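If the admin socket isn't handy on that host, 'ceph tell' gives the same
counters from anywhere with cluster admin access:

ceph tell mds.X perf dump | grep strays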
We're just getting through our comically long ceph outage, so I'd like
to be able to share the love here hahahaha
Regards,
Bailey Allison
Service
o mentions such, and some other
helpful configs.
Again, if you can access the directories of the mds rank in question
when it's active, see if you can stat some of them.
Best of luck friend,
Regards,
Bailey Allison
Service Team Lead
45Drives, Ltd.
866-594-7199 x868
On 1/10/25 18:07, Frank Schilder wrote:
If you can access the filesystem, try running a stat on that portion
with something like 'find . -ls' in a directory and see if the strays
decrease.
Regards,
Bailey Allison
Service Team Lead
45Drives, Ltd.
866-594-7199 x868
On 1/10/25 17:18, Frank Schilder wrote:
Hi Bailey,
thank
which
doing so everything returned to normal.
ceph tell mds.X perf dump | jq .mds_cache
Bailey Allison
Service Team Lead
45Drives, Ltd.
866-594-7199 x868
On 1/10/25 16:42, Frank Schilder wrote:
Hi all,
I got the MDS up. However, after quite some time it's sitting with almost no CPU
load:
top
Hi Frank,
What is the state of the mds currently? We are probably at a point where
we do a bit of hoping and waiting for it to come back up.
Regards,
Bailey Allison
Service Team Lead
45Drives, Ltd.
866-594-7199 x868
On 1/10/25 15:51, Frank Schilder wrote:
Hi all,
I seem to have gotten the
y brain of
similar issues we've seen.
Is there much swap space available to the node as well? In the event the
daemon is actually making progress but simply lacks resources, you may be
able to extend the time it can stay up by adding swap.
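If you do need to bolt some swap on temporarily, something along these
lines usually does it (size and path here are just examples):

fallocate -l 64G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile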
Bailey Allison
Service Team Lead
45Drives, Ltd.
866-594-7199 x868
ing the shadow_copy2 module.
Bailey Allison
Service Team Lead
45Drives, Ltd.
866-594-7199 x868
On 9/3/24 18:19, John Mulligan wrote:
On Tuesday, September 3, 2024 5:00:20 PM EDT Robert W. Eckert wrote:
When I try to create the .smb pool, I get an error message:
# ceph osd pool create .smb
Hey,
We at 45drives also offer ceph support. We have no specific requirements
either; we can work with bare metal or containerized, and do not require a
specific version of ceph.
Happy to provide more details if needed.
Bailey Allison
Service Team Lead
45Drives, Ltd.
866-594-7199 x868
On 8
+1 to this, also ran into this in our lab testing. Thanks for sharing this
information!
Regards,
Bailey
> -Original Message-
> From: Eugen Block
> Sent: July 18, 2024 3:55 AM
> To: ceph-users@ceph.io
> Subject: [ceph-users] Re: Heads up: New Ceph images require x86-64-v2 and
> possibly
Igor, it was your post on here mentioning this a few weeks ago
that actually let me know to even check this stuff.
Regards,
Bailey
> -Original Message-
> From: Igor Fedotov
> Sent: June 10, 2024 7:08 AM
> To: Bailey Allison ; 'ceph-users' us...@ceph.io>
&
I have a question regarding bluestore labels, specifically for a block.db
partition.
To make a long story short, we are currently in a position where, on
checking the label of a block.db partition, it appears to be corrupted.
I have seen another thread on here suggesting to copy the label from a
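For anyone following along, dumping a label looks roughly like this (the
device path is just a placeholder):

ceph-bluestore-tool show-label --dev /dev/ceph-db-vg/db-lv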
Hey Nicola,
Try mounting CephFS with fuse instead of the kernel client; we have seen before that
the kernel mount sometimes does not properly support that option while the fuse mount does.
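A quick way to test, assuming a standard ceph.conf and client keyring on the
box (the mountpoint and client name are placeholders):

mkdir -p /mnt/cephfs-fuse
ceph-fuse -n client.admin /mnt/cephfs-fuse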
Regards,
Bailey
> -Original Message-
> From: Nicola Mori
> Sent: May 15, 2024 7:55 AM
> To: ceph-users
> Subject:
Hey Peter,
A simple 'ceph-volume lvm activate' should get all of the OSDs back up and
running once you install the proper packages/restore the ceph config
file/etc.
If the node was also a mon/mgr you can simply re-add those services.
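Roughly, once the packages and /etc/ceph are back in place (a sketch, not a
full runbook):

ceph-volume lvm activate --all    # scans the LVs and starts the matching ceph-osd units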
Regards,
Bailey
> -Original Message-
> From: Peter va
Hey,
We make use of the ctdb_mutex_ceph_rados_helper, so the lock just gets
stored as an object in the CephFS metadata pool rather than as a file on a
shared CephFS mount.
We don't recommend storing it directly on CephFS, as if the mount hosting the lock
file goes down we have seen the mds mark as stale
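As a rough sketch, the helper gets wired into CTDB something like this (the
helper path, pool and object names here are examples and vary by distro/version):

# /etc/ctdb/ctdb.conf
[cluster]
    recovery lock = !/usr/libexec/ctdb/ctdb_mutex_ceph_rados_helper ceph client.ctdb cephfs_metadata ctdb_reclock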
I think this is fantastic. Looking forward to the sambaxp talk too!
CephFS + SMB is something we make a lot of use of, and have had a lot of
success working with. It is nice to see it getting some more integration.
Regards,
Bailey
> -Original Message-
> From: John Mulligan
> Sent:
Hey All,
It might be easier to check using the CephFS directory stats via getfattr, e.g.
getfattr -n ceph.dir.rentries /path/to/dir
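The other recursive stats can be handy too, for example (path is a placeholder):

getfattr -n ceph.dir.rbytes /path/to/dir    # recursive byte count
getfattr -n ceph.dir.rfiles /path/to/dir    # recursive file count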
Regards,
Bailey
> -Original Message-
> From: Igor Fedotov
> Sent: March 14, 2024 1:37 PM
> To: Thorne Lawler ; ceph-users@ceph.io;
> etienne.men...@ubisoft.com; v
+1 on this if you need iSCSI,
Maged & team have built a great iSCSI ceph solution with PetaSAN especially
if integrating directly into VMware.
Regards,
Bailey
> -Original Message-
> From: Maged Mokhtar
> Sent: February 27, 2024 5:40 AM
> To: ceph-users@ceph.io
> Subject: [ceph-users] R
Holy! I have no questions, just wanted to say thanks for emailing this; as
much as it does suck to know that's been an issue, I really appreciate you
sharing the information about it on here.
We've got a fair share of Ubuntu clusters, so if there's a way to validate I
would love to know, but it als
+1 to this, great article and great research. Something we've been keeping a
very close eye on ourselves.
Overall we've mostly settled on the old keep-it-simple-stupid methodology with
good results, especially as the benefits have gotten smaller the more
recent your ceph version, and h
Hey Frank,
+1 to this, we've seen it a few times now. I've attached an output of ceph
df from an internal cluster we have with the same issue.
[root@Cluster1 ~]# ceph df
--- RAW STORAGE ---
CLASS      SIZE     AVAIL    USED    RAW USED  %RAW USED
fast_nvme  596 GiB  595 GiB  50 MiB  1.0 GiB
Hi Götz,
We’ve done a similar process, which involves starting at CentOS 7 Nautilus
and upgrading to Rocky 8/Ubuntu 20.04 on Octopus+.
What we do is start on CentOS 7 Nautilus and upgrade to Octopus on CentOS 7
(we’ve built python packages and have them on our repo to satisfy some c
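A sanity check that helps along the way is confirming every daemon is on the
release you expect before touching the OS, for example:

ceph versions
ceph osd require-osd-release octopus    # once everything reports Octopus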
Hi,
It appears you have quite a low PG count on your cluster (approx. 20 PGs per
OSD).
It is usually recommended to have about 100-150 per OSD. With a lower PG
count you can have issues balancing data, which can cause errors such as large
OMAP objects.
Might not be the fix in this case
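If you do want to raise it, it's along the lines of (pool name and target are
placeholders; the autoscaler can also handle this for you):

ceph osd pool set POOL pg_num 256
# or let the autoscaler manage it
ceph osd pool set POOL pg_autoscale_mode on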
Hi,
Did you restart all of the ceph services just on node 1 so far? Or did you
restart the mons on each node first, then the managers on each node, etc.? I have
seen a similar issue occur during ceph upgrades when services are restarted out
of order (i.e. restarting all ceph services on a single node at a time).
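For comparison, the usual per-role order looks something like this (unit names
assume a package-based systemd deployment, not cephadm):

systemctl restart ceph-mon@$(hostname -s)   # each mon node, one at a time
systemctl restart ceph-mgr@$(hostname -s)   # then each mgr node
systemctl restart ceph-osd@OSDID            # then OSDs, node by node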
Regards,
ens
>Sent: April 29, 2023 11:21 PM
>To: Bailey Allison ; ceph-users@ceph.io
>Subject: [ceph-users] Re: architecture help (iscsi, rbd, backups?)
>
>Bailey,
>
>Thanks for your extensive reply, you got me down the wormhole of CephFS and
>SMB (and looking at a lot of 45drives
Hey Angelo,
Just to make sure I'm understanding correctly, the main idea for the use
case is to be able to present Ceph storage to Windows clients as SMB?
If so, you can absolutely use CephFS to get that done. This is something we
do all the time with our cluster configurations, if we're looking
Hey Jeff,
As long as you set the maintenance flags (noout/norebalance) you should be good
to take the node down with a reboot.
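i.e. something like:

ceph osd set noout
ceph osd set norebalance
# ... reboot the node ...
ceph osd unset noout
ceph osd unset norebalance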
Regards,
Bailey
>From: Jeffrey Turmelle
>Sent: March 1, 2023 2:47 PM
>To: ceph-users@ceph.io
>Subject: [ceph-users] Interruption of rebalancing
>
>I ha
Hi,
That is most likely possible, but the difference in performance between
CephFS + Samba and RBD + Ceph iSCSI + Windows SMB would probably be
extremely noticeable, in a not very good way.
As Wyll mentioned, the recommended way is to just share out SMB on top of an
existing CephFS mount (
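A minimal smb.conf share on top of a CephFS mount might look something like
this (the share name and path are made up; the vfs_ceph module is another option):

[cephfs-share]
    path = /mnt/cephfs/share
    read only = no
    browseable = yes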
Hi Reed,
Just taking a quick glance at the Pastebin provided, I have to say your cluster
balance is already pretty damn good, all things considered.
We've seen the upmap balancer at its best in practice provide a deviation of
about 10-20% across OSDs, which seems to be matching up on yo
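For anyone wanting to do the same quick check on their own cluster:

ceph balancer status
ceph osd df    # compare the %USE and VAR columns across OSDs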