Problem:
CephFS fallocate implementation does not actually reserve data blocks
when mode is 0.
It only truncates the file to the given size by setting the file size
in the inode.
So, there is no guarantee that subsequent writes to the file will succeed.
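A minimal shell sketch of the symptom (the mount point /mnt/cephfs and the
file name are just examples):

$ fallocate -l 1G /mnt/cephfs/prealloc.bin
$ dd if=/dev/zero of=/mnt/cephfs/prealloc.bin bs=4M count=256 conv=notrunc

The fallocate call (mode 0) succeeds and the file size becomes 1G, but since
no data blocks were reserved, the dd into the "preallocated" range can still
fail with ENOSPC if the data pool fills up in the meantime.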
Solution:
Since an immediate remediation of this problem
On Tue, Sep 10, 2024 at 5:36 PM Ilya Dryomov wrote:
>
> On Tue, Sep 10, 2024 at 1:23 PM Milind Changire wrote:
> >
> > Problem:
> > CephFS fallocate implementation does not actually reserve data blocks
> > when mode is 0.
> > It only truncates the file to the
Hi Paul,
Could you create a ceph tracker issue (tracker.ceph.com) and list the things
that are suboptimal according to your investigation?
We'd like to hear more on this.
Alternatively, you could list the issues with the MDS here.
Thanks,
Milind
On Sun, Jan 7, 2024 at 4:37 PM Paul Mezzanini wrote:
>
> We
All paths mentioned while configuring cephfs snapshot mirroring start at
the respective cephfs file-system root.
E.g., if you typically mount the cephfs file-system at /mnt/folderfs, then the
path "/mnt/folderfs" is meaningless to cephfs snapshot mirroring unless you
indeed have a folder hierarchy /mn
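As a sketch (the volume name "folderfs" and the directory are hypothetical):
if the directory you care about shows up as /mnt/folderfs/projects on the
client, the path handed to the mirroring commands is /projects, i.e. relative
to the file-system root:

$ ceph fs snapshot mirror enable folderfs
$ ceph fs snapshot mirror add folderfs /projects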
On Sat, Oct 8, 2022 at 7:27 PM Frank Schilder wrote:
> Hi all,
>
> I believe I enabled ephemeral pinning on a home dir, but I can't figure
> out how to check that its working. Here is my attempt:
>
> Set the flag:
> # setfattr -n ceph.dir.pin.distributed -v 1 /mnt/admin/cephfs/hpc/home
>
> Try to
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
>
> From: Milind Changire
> Sent: 09 October 2022 09:24:20
> To: Frank Schilder
> Cc: ceph-users@ceph.io
> Subject: Re: [ceph-users] How to check which directory has
Maybe (see the command sketch after this list):
- use the top program to look at a threaded listing of the ceph-mds
process and see which thread(s) are consuming the most CPU
- use gstack to attach to the ceph-mds process and dump the backtrace
into a file; we can then map the thread with the highest CPU consumption to the
gst
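Roughly (assuming a single ceph-mds process on the node; pidof is just for
illustration):

$ top -H -p $(pidof ceph-mds)
$ gstack $(pidof ceph-mds) > /tmp/ceph-mds-backtrace.txt

Note the TID of the busiest thread in top, then look that thread up in the
gstack output.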
Christian,
Some obvious questions ...
1. What Linux distribution have you deployed Ceph on?
2. The snap_schedule DB has indeed been moved to an SQLite DB in RADOS
in Quincy.
So, is there ample storage space in your metadata pool to move this DB
to? (a quick way to check is sketched below)
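To check the pool space, something like this is enough; look at the STORED
and MAX AVAIL columns for the metadata pool:

$ ceph df detail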
On Thu, Nov 17, 2022 at 2:53
On Thu, Nov 17, 2022 at 6:02 PM phandaal wrote:
> On 2022-11-17 12:58, Milind Changire wrote:
> > Christian,
> > Some obvious questions ...
> >
> >1. What Linux distribution have you deployed Ceph on ?
>
> Gentoo Linux, using python 3.10.
> Ceph is only us
You could try creating Subvolumes as well:
https://docs.ceph.com/en/latest/cephfs/fs-volumes/
As usual, Ceph caps and data layout semantics apply to Subvolumes too.
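For example (volume, group, and client names are made up):

$ ceph fs subvolume create cephfs sub1 --group_name research
$ ceph fs subvolume getpath cephfs sub1 --group_name research
$ ceph fs subvolume authorize cephfs sub1 alice --group_name research --access_level=rw

The last command creates/updates client.alice with caps restricted to that
subvolume's path.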
On Thu, Dec 22, 2022 at 8:19 PM Jonas Schwab <
jonas.sch...@physik.uni-wuerzburg.de> wrote:
> Hello everyone,
>
> I would like
What ceph version are you using?
$ ceph versions
On Wed, Dec 28, 2022 at 3:17 AM Daniel Kovacs
wrote:
> Hello!
>
> I'd like to create a CephFS subvol, with these command: ceph fs
> subvolume create cephfs_ssd subvol_1
> I got this error: Error EINVAL: invalid value specified for
> ceph.dir.sub
Also, please list the volumes available on your system:
$ ceph fs volume ls
On Wed, Dec 28, 2022 at 9:09 AM Milind Changire wrote:
> What ceph version are you using?
>
> $ ceph versions
>
>
> On Wed, Dec 28, 2022 at 3:17 AM Daniel Kovacs
> wrote:
>
>> Hello!
>
Isaiah,
I'm trying to understand your requirements for a CephFS Active-Active setup.
What do you want to achieve with a CephFS Active-Active setup?
Once you list the exact requirements, we can discuss how to achieve them.
There's also something called *CephFS Snapshot Mirroring*:
https
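If geo-replication of snapshots is what you're after, the basic setup is
roughly as follows (the fs name is a placeholder; peers still need to be
bootstrapped afterwards):

$ ceph mgr module enable mirroring
$ ceph fs snapshot mirror enable cephfs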
reate a subvol in inclust_ssd volume. I can create
> subvolume with same name in inclust without any problems.
>
>
> Best regards,
>
> Daniel
>
> On 2022. 12. 28. 4:42, Milind Changire wrote:
> > Also, please list the volumes available on your system:
> >
>
(for archival purposes)
On Thu, Mar 2, 2023 at 6:04 PM Milind Changire wrote:
> The docs for the ceph kernel module will be updated appropriately in the
> kernel documentation.
> Thanks for pointing out your pain point.
>
> --
> Milind
>
>
> On Thu, Mar 2, 2023 at 1
There's a default/hard limit of 50 snaps that's maintained for any dir via
the definition MAX_SNAPS_PER_PATH = 50 in the source file
src/pybind/mgr/snap_schedule/fs/schedule_client.py.
Every time the snapshot names are read for pruning, the last thing done is
to check the length of the list and kee
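So, as a rough example (path and counts are illustrative), a retention spec
whose total stays below that limit is honored as-is, while anything beyond it
gets pruned back down:

$ ceph fs snap-schedule add /some/dir 1h
$ ceph fs snap-schedule retention add /some/dir h 24
$ ceph fs snap-schedule retention add /some/dir d 7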
FYI, PR - https://github.com/ceph/ceph/pull/51278
On Fri, Apr 28, 2023 at 8:49 AM Milind Changire wrote:
> There's a default/hard limit of 50 snaps that's maintained for any dir via
> the definition MAX_SNAPS_PER_PATH = 50 in the source file
> src/pybind/mgr/snap_schedule/fs
If a dir doesn't exist at the moment of snapshot creation, then the
schedule is deactivated for that dir.
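Once the dir exists again, you can check and re-activate it by hand, e.g.
(the path is an example):

$ ceph fs snap-schedule status /some/dir
$ ceph fs snap-schedule activate /some/dir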
On Fri, Apr 28, 2023 at 8:39 PM Jakob Haufe wrote:
> On Thu, 27 Apr 2023 11:10:07 +0200
> Tobias Hachmer wrote:
>
> > > Given the limitation is per directory, I'm currently trying this:
>
On Sun, Apr 30, 2023 at 9:02 PM William Edwards
wrote:
> Angelo Höngens schreef op 2023-04-30 15:03:
> > How do you guys backup CephFS? (if at all?)
> >
> > I'm building 2 ceph clusters, a primary one and a backup one, and I'm
> > looking into CephFS as the primary store for research files. CephF
Emmanuel,
You probably missed the "daemon" keyword after the "ceph" command name.
Here are the docs for Pacific:
https://docs.ceph.com/en/pacific/cephfs/troubleshooting/
So, your command should've been:
# ceph daemon mds.icadmin011 dump cache /tmp/dump.txt
You could also dump the ops in flight with
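As a sketch (daemon name taken from your command above), e.g. something along
the lines of:

$ ceph daemon mds.icadmin011 ops
$ ceph daemon mds.icadmin011 dump_ops_in_flight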
]
> }
> }
> ],
> "num_ops": 1
> }
>
> However, the dump cache does not seem to produce an output:
> root@icadmin011:~# ceph --cluster floki daemon mds.icadmin011 dump cache
> /tmp/dump.txt
> root@icadmin011:~# ls /tmp
>
Sandip,
What type of client are you using?
Kernel client or fuse client?
If it's the kernel client, then it's a bug.
FYI - Pacific and Quincy fuse clients do the right thing.
On Wed, May 24, 2023 at 9:24 PM Sandip Divekar <
sandip.dive...@hitachivantara.com> wrote:
> Hi Team,
>
> I'm writing
GAi/
> drwx-- 3 root root 4096 May 4 12:43
>
> systemd-private-18c17b770fc24c48a0507b8faa1c0ec2-systemd-resolved.service-KYHd7f/
> drwx-- 3 root root 4096 May 4 12:43
>
> systemd-private-18c17b770fc24c48a0507b8faa1c0ec2-systemd-timesyncd.service-1Qtj5i/
>
> On Wed, May 24,
nd.service-uU1GAi/
>>> drwx-- 3 root root 4096 May 4 12:43
>>>
>>> systemd-private-18c17b770fc24c48a0507b8faa1c0ec2-systemd-resolved.service-KYHd7f/
>>> drwx-- 3 root root 4096 May 4 12:43
>>>
>>> systemd-private-18c17b770fc24c4
If the crash is easily reproducible at your end, could you set debug_client
to 20 in the client-side conf file and then reattempt the operation?
You could then send over the collected logs and we could take a look at
them.
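For example (one of several ways to do it), in the client host's ceph.conf:

[client]
    debug client = 20

or centrally via the monitors:

$ ceph config set client debug_client 20

and remove the override once the logs are collected:

$ ceph config rm client debug_client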
FYI - there's also a bug tracker that has identified a similar problem:
ht
If possible, could you share the MDS logs at debug level 20?
You'll need to set debug_mds = 20 in the conf file until the crash occurs, and
revert the level to the default after the MDS crash.
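Roughly, if you prefer doing it centrally rather than editing the conf file
(the conf-file route mentioned above works just as well):

$ ceph config set mds debug_mds 20

then, after the crash, revert to the default:

$ ceph config rm mds debug_mds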
On Tue, Jul 18, 2023 at 9:12 PM wrote:
> hello.
> I am using ROK CEPH and have 20 MDSs in use. 10 are in rank 0-9 an
On Fri, Jul 21, 2023 at 9:03 PM Patrick Donnelly wrote:
>
> Hello karon,
>
> On Fri, Jun 23, 2023 at 4:55 AM karon karon wrote:
> >
> > Hello,
> >
> > I recently use cephfs in version 17.2.6
> > I have a pool named "*data*" and a fs "*kube*"
> > it was working fine until a few days ago, now i can
On Mon, Aug 7, 2023 at 8:23 AM Szabo, Istvan (Agoda)
wrote:
>
> Hi,
>
> I have an octopus cluster on the latest octopus version with mgr/mon/rgw/osds
> on centos 8.
> Is it safe to add an ubuntu osd host with the same octopus version?
>
> Thank you
Well, the ceph source bits surely remain the sa
You might want to read up on https://docs.ceph.com/en/pacific/cephfs/multimds/
The page contains info on dir pinning and related policies.
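For a quick sketch (paths and rank numbers are examples): an explicit export
pin sends a whole subtree to one rank, while distributed ephemeral pinning
spreads the immediate children across ranks:

$ setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/projects
$ setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/home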
On Thu, Aug 10, 2023 at 12:11 PM Eugen Block wrote:
>
> Okay, you didn't mention that in your initial question. There was an
> interesting talk [3] at the Cephal
Well, you should've used the ceph command to create the subvol;
it's much simpler that way.
$ ceph fs subvolume create mycephfs subvol2
The above command creates a new subvol (subvol2) in the default subvolume group.
So, in your case the actual path to the subvolume would be
/mnt/volumes/_nogroup/
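Rather than guessing the full path, you can ask for it (sketch):

$ ceph fs subvolume getpath mycephfs subvol2

which prints something like /volumes/_nogroup/subvol2/<uuid>.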
xattr here.
>
> Thanks,
> Eugen
>
> Zitat von Milind Changire :
>
> > well, you should've used the ceph command to create the subvol
> > it's much simpler that way
> >
> > $ ceph fs subvolume create mycephfs subvol2
> >
> > The above comm
Hello Kushagr,
Snap-schedule no longer accepts a --subvol argument, so it's not
easily possible to schedule snapshots for subvolumes.
Could you tell us the commands you used to schedule snapshots for subvolumes?
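For reference, manual (unscheduled) snapshots of a subvolume are still
possible, e.g. (volume, subvolume, and snapshot names are made up):

$ ceph fs subvolume snapshot create cephfs subvol_1 snap_20230928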
--
Milind
On Wed, Sep 27, 2023 at 11:13 PM Kushagr Gupta
wrote:
>
> Hi Teams,
>
> *Ceph-v
me instances, the scheduler created the scheduled snapshot.
> But on a fresh setup when we executed the same commands as per the setup, the
> scheduler did not create the scheduled snapshots.
> We have observed this behavior multiple times.
>
> Could you please help us out?
> Kindly let
On Wed, Oct 4, 2023 at 3:40 PM Kushagr Gupta
wrote:
>
> Hi Team,Milind
>
> Ceph-version: Quincy, Reef
> OS: Almalinux 8
>
> Issue: snap_schedule works after 1 hour of schedule
>
> Description:
>
> We are currently working in a 3-node ceph cluster.
> We are currently exploring the scheduled snapsho
was one more instance where we waited for 2 hours and then re-started
> and in the third hour the schedule started working.
>
> Could you please guide us if we are doing anything wrong.
> Kindly let us know if any logs are required.
>
> Thanks and Regards,
> Kushagra Gupta
>
>
you for your response @Milind Changire
>
> >>The only thing I can think of is a stale mgr that wasn't restarted
> >>after an upgrade.
> >>Was an upgrade performed lately ?
>
> Yes an upgrade was performed after which we faced this. But we were facing
> th
em
On Thu, Oct 5, 2023 at 1:44 PM Kushagr Gupta
wrote:
>
> Hi Milind,
>
> Thank you for your response.
> Please find the logs attached, as instructed.
>
> Thanks and Regards,
> Kushagra Gupta
>
>
> On Thu, Oct 5, 2023 at 12:09 PM Milind Changire wrote:
>>
Here are some answers to your questions:
On Sun, Mar 6, 2022 at 3:57 AM Arnaud M wrote:
> Hello to everyone :)
>
> Just some question about filesystem scrubbing
>
> In this documentation it is said that scrub will help admin check
> consistency of filesystem:
>
> https://docs.ceph.com/en/latest/ce
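For reference, kicking off and watching a forward scrub usually looks like
this (fs name and flags are illustrative):

$ ceph tell mds.cephfs:0 scrub start / recursive
$ ceph tell mds.cephfs:0 scrub status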
700 0 log_channel(cluster) log [INF] :
> scrub summary: idle+waiting paths [~mds0]
> 2022-03-12T18:13:55.317+ 7f61cf8ba700 0 log_channel(cluster) log [INF] :
> scrub summary: idle
>
> 2022-03-12T18:14:12.608+ 7f61d30c1700 1 mds.1 asok_command: scrub
> start {path=~mds
18 375=364+11)","memory_value.dirstat":"f(v0
> > 10=0+10)","memory_value.rstat":"n(v1815
> > rc2022-03-12T16:01:44.218294+ b1017620718
> > 375=364+11)","error_str":""},"return_code":-61}
> > 2022
You could set an xattr on the dir of your choice to convert it to a
subvolume.
e.g.
# setfattr -n ceph.dir.subvolume -v 1 my/favorite/dir/is/now/a/subvol1
You can also disable the subvolume feature by setting the xattr value to 0
(zero).
But there are constraints on a subvolume dir, namely:
* you c