Comments inline.
On Thu, Feb 1, 2024 at 4:51 AM Matthew Melendy wrote:
> In our department we're getting started with Ceph 'reef', using the Ceph FUSE
> client for our Ubuntu workstations.
>
> So far so good, except I can't quite figure out one aspect of subvolumes.
>
> When I do the commands:
>
>
I think the client should reconnect when it comes out of sleep. Could you
please share the client logs so we can check what's happening?
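If needed, client-side debug logging can usually be turned up with something
like the following (this assumes the ceph-fuse client; the log levels are only
suggestions):

$ ceph config set client debug_client 20
$ ceph config set client debug_ms 1

The resulting logs typically land under /var/log/ceph/ on the client host.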
On Tue, Mar 26, 2024 at 4:16 AM wrote:
> Hi All,
>
> So I've got a Ceph Reef Cluster (latest version) with a CephFS system set
> up with a number of directories on it.
On Tue, Mar 26, 2024 at 7:30 PM Yongseok Oh
wrote:
> Hi,
>
> CephFS is provided as a shared file system service in a private cloud
> environment of our company, LINE. The number of sessions is more than
> 5,000, and session evictions occur several times a day. When session
> eviction
On Sat, May 11, 2024 at 6:04 AM Adiga, Anantha
wrote:
> Hi,
>
> Under the circumstance that a ceph fs subvolume has to be recreated,
> the uuid will change and we have to change all sources that
> reference the volume path.
>
> Is there a way to provide a label/tag to the volume path
I think you can do the following.
NOTE: If you know the objects that are recently created, you can skip to
step 5
1. List the objects in the metadata pool and copy it to a file
rados -p <metadata_pool> ls > /tmp/metadata_obj_list
2. Prepare a bulk stat script for each object. Unfortunately xargs didn't
work
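As a rough sketch of step 2, a plain shell loop also does the job (the
metadata pool name and output path below are placeholders):

$ while read obj; do
      rados -p <metadata_pool> stat "$obj"
  done < /tmp/metadata_obj_list > /tmp/metadata_obj_stat 2>&1

The stat output includes each object's mtime, which is what lets you pick out
the recently created objects mentioned in the NOTE above.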
Hi Nicola,
Yes, this issue is already fixed in main [1] and the quincy backport [2] is
still pending to be merged. Hopefully it will be available
in the next Quincy release.
[1] https://github.com/ceph/ceph/pull/48027
[2] https://github.com/ceph/ceph/pull/54469
Thanks and Regards,
Kotresh H R
On We
Hi,
~6K log segments to be trimmed, that's huge.
1. Are there any custom configs configured on this setup?
2. Is subtree pinning enabled?
3. Are there any warnings w.r.t. RADOS slowness?
4. Please share the mds perf dump to check for latencies and other stats.
$ ceph tell mds.<mds-id> perf dump
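For example, to pull out just the journal/log counters and request latencies
from the dump (this assumes jq is installed; exact counter names can vary a
bit between releases):

$ ceph tell mds.<mds-id> perf dump | jq '.mds_log'
$ ceph tell mds.<mds-id> perf dump | jq '.mds.reply_latency'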
Thanks and Regards,
Kotresh H R
On Fri, May 17, 2024 at 11:52 AM Nicola Mori wrote:
> Thank you Kotresh! My cluster is currently on Reef 18.2.2, which should
> be the current version and which is affected. Will the fix be included
> in the next Reef release?
>
Yes, it's already merged to the reef branch and should be available in the next Reef release.
Please share the mds perf dump as requested. We need to understand what's
happening before suggesting anything.
Thanks & Regards,
Kotresh H R
On Fri, May 17, 2024 at 5:35 PM Akash Warkhade
wrote:
> @Kotresh Hiremath Ravishankar
>
> Can you please help on above
>
>
>
On Wed, Jul 27, 2022 at 5:02 AM Gregory Farnum wrote:
> On Tue, Jul 26, 2022 at 3:41 PM Yuri Weinstein
> wrote:
>
>> Greg, I started testing this PR.
>> What do you want to rerun for it? Are fs, kcephfs, multimds suites
>> sufficient?
>
>
> We just need to run the mgr/volumes tests — I think th
You can find the upstream fix here https://github.com/ceph/ceph/pull/46833
Thanks,
Kotresh HR
On Mon, Sep 26, 2022 at 3:17 PM Dhairya Parmar wrote:
> Patch for this has already been merged and backported to quincy as well. It
> will be there in the next Quincy release.
>
> On Thu, Sep 22, 2022
The MDS requests the clients to release caps to trim its cache when there is
cache pressure, or it might proactively request the client to release caps in
some cases. In your case the client is failing to release the caps soon enough.
A few questions:
1. Have you tuned the MDS cache configurations? If so, please share them.
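For reference, a few commands that help confirm whether the cache or cap
recall settings have been tuned, and how many caps each client currently
holds (option names assume a reasonably recent release):

$ ceph config get mds mds_cache_memory_limit
$ ceph config get mds mds_recall_max_caps
$ ceph tell mds.<mds-id> session ls    # shows num_caps per client session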
Hi Mathias,
I am glad that you could find out it's a client-related issue and figured a
way around it.
I could also reproduce the issue locally, i.e. a client which was initially
copying the snapshot still has access to it even after it has been deleted
from the other client. I think this needs further investigation.
Created a tracker to investigate this further.
https://tracker.ceph.com/issues/58376
On Wed, Jan 4, 2023 at 3:18 PM Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:
> Hi Mathias,
>
> I am glad that you could find it's a client related issue and figured a
> way a
Hi Thomas,
As the documentation says, the MDS enters up:resolve from up:replay if the
Ceph file system has multiple ranks (including this one), i.e. it's not a
single active MDS cluster.
The MDS is resolving any uncommitted inter-MDS operations. All ranks in the
file system must be in this state or later for progress to be made.
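To check which state each rank is currently in, something like the following
is usually enough (the file system name and mds id are placeholders):

$ ceph fs status <fs_name>
$ ceph tell mds.<mds-id> status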
> ceph-mds[1311]: mds.mds01.ceph04.cvdhsx Updating
> MDS map to version 143927 from mon.1
> Jan 16 10:05:05 ceph04 ceph-mds[1311]: mds.mds01.ceph04.cvdhsx Updating
> MDS map to version 143929 from mon.1
> Jan 16 10:05:09 ceph04 ceph-mds[1311]: mds.mds01.ceph04.cvdhsx Updating
> MDS map to ve
> >> ceph05 ceph-mds[1209]: mds.0.cache releasing free memory
> >> Jan 17 10:08:26 ceph05 ceph-mds[1209]: mds.0.cache upkeep thread waiting
> >> interval 1.0s
> >> Jan 17 10:08:27 ceph05 ceph-mds[1209]: mds.0.cache Memory usage: total
> >> 372640, rss 57272, heap 20
Hi Thomas,
I have created the tracker https://tracker.ceph.com/issues/58489 to track
this. Please upload the debug mds logs here.
Thanks,
Kotresh H R
On Wed, Jan 18, 2023 at 4:56 PM Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:
> Hi Thomas,
>
> This looks like i
Hi,
First of all, I would suggest upgrading your cluster to one of the supported
releases.
I think a full recovery is recommended to get back the mds.
1. Stop the MDSes and all the clients.
2. Fail the fs.
    a. ceph fs fail <fs_name>
3. Backup the journal (if the below command fails, make a RADOS-level copy of
   the journal objects instead).
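As a sketch, the journal export from the standard disaster-recovery procedure
looks like this (the rank and output path are placeholders):

$ cephfs-journal-tool --rank=<fs_name>:0 journal export /root/mds0-journal-backup.bin

Keep the backup somewhere outside the cluster before running any recovery
tools against the journal.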
On Tue, Jun 6, 2023 at 4:30 PM Dario Graña wrote:
> Hi,
>
> I'm installing a new instance (my first) of Ceph. Our cluster runs
> AlmaLinux9 + Quincy. Now I'm dealing with CephFS and quotas. I read
> documentation about setting up quotas with virtual attributes (xattr) and
> creating volumes and s
fs approved.
On Fri, Jun 2, 2023 at 2:54 AM Yuri Weinstein wrote:
> Still awaiting approvals:
>
> rados - Radek
> fs - Kotresh and Patrick
>
> upgrade/pacific-x - good as is, Laura?
> upgrade/quincy-x - good as is, Laura?
> upgrade/reef-p2p - N/A
> powercycle - Brad
>
> On Tue, May 30, 2023
Hi Jakub,
Comments inline.
On Tue, Jul 25, 2023 at 11:03 PM Jakub Petrzilka
wrote:
> Hello everyone!
>
> Recently we had a very nasty incident with one of our CEPH storages.
>
> During a basic backfill recovery operation due to a faulty disk, CephFS
> metadata started growing exponentially until the
On Sat, Apr 9, 2022 at 12:33 AM Vladimir Brik <
vladimir.b...@icecube.wisc.edu> wrote:
> Hello
>
> What speed networking is recommended for active-active MDS
> configurations?
>
I think there is no specific recommendation for active-active MDS. Though
2*1G is sufficient,
it is recommended to use 2
On Tue, Jul 5, 2022 at 1:06 AM Austin Axworthy
wrote:
> When syncing using mirroring, the source has extended acls. On the
> destination they are not preserved. Is this intended behavior? Unable to
> find any information in the docs.
>
Yes, this is the intended behavior.
>
>
>
> Is it possible
Is this the kernel client? If so, could you try dropping the cache as
below?
# sync; echo 3 > /proc/sys/vm/drop_caches
Mimic has been EOL for a long time. Please upgrade to the latest supported version.
Thanks,
Kotresh HR
On Tue, Jul 5, 2022 at 10:21 PM Frank Schilder wrote:
> Hi all,
>
> I se