On Sat, Oct 15, 2016 at 1:36 AM, Heller, Chris wrote:
> Just a thought, but since a directory tree is a first class item in cephfs,
> could the wire protocol be extended with a “recursive delete” operation,
> specifically for cases like this?
In principle yes, but the problem is that the POSIX …
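
To make the suggestion concrete: without a wire-level recursive delete, a
client such as the cephfs-hadoop plugin has to walk the tree itself and issue
one remove per entry. A minimal sketch of that walk, written against the stock
org.apache.hadoop.fs.FileSystem API (the class and structure here are
illustrative, not the plugin's actual code):

import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Depth-first removal: every file and every directory is its own request,
// so a tree with millions of entries means millions of round trips.
class RecursiveDelete {
    static void deleteTree(FileSystem fs, Path dir) throws IOException {
        for (FileStatus entry : fs.listStatus(dir)) {
            if (entry.isDirectory()) {
                deleteTree(fs, entry.getPath());
            } else {
                fs.delete(entry.getPath(), false); // one unlink per file
            }
        }
        fs.delete(dir, false); // remove the now-empty directory itself
    }
}

A server-side recursive delete would collapse that whole loop into a single
request, which is why the idea is attractive for trees like this one.
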
…have size of 16EB. Those are impossible to delete as they are ‘non empty’.
-Mykola
From: Gregory Farnum
Sent: Saturday, 15 October 2016 05:02
To: Mykola Dvornik
Cc: Heller, Chris; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] cephfs slow delete
On Fri, Oct 14, 2016 at 6:26 PM, Mykola Dvornik wrote:
> If you are running 10.2.3 on your cluster, then I would strongly recommend
> NOT deleting files in parallel, as you might hit
> http://tracker.ceph.com/issues/17177
I don't think these have anything to do with each other. What gave you the
idea simultaneous …
Just a thought, but since a directory tree is a first class item in cephfs,
could the wire protocol be extended with a “recursive delete” operation,
specifically for cases like this?
On 10/14/16, 4:16 PM, "Gregory Farnum" wrote:
On Fri, Oct 14, 2016 at 1:11 PM, Heller, Chris wrote:
> Ok. Since I’m running through the Hadoop/ceph api, there is no syscall
> boundary so there is a simple place to improve the throughput here. Good to
> know, I’ll work on a patch…
Ah yeah, if you're in whatever they call the recursive tree …
Ok. Since I’m running through the Hadoop/ceph api, there is no syscall boundary
so there is a simple place to improve the throughput here. Good to know, I’ll
work on a patch…
On 10/14/16, 3:58 PM, "Gregory Farnum" wrote:
On Fri, Oct 14, 2016 at 11:41 AM, Heller, Chris wrote:
Unfortunately, it was all in the unlink operation. Looks as if it took nearly
20 hours to remove the dir, roundtrip is a killer there. What can be done to
reduce RTT to the MDS? Does the client really have to sequentially delete
directories or can it have internal batching or parallelization?
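
To make the batching/parallelization idea concrete: the walk itself can stay
the same, but the per-file unlinks can be kept in flight concurrently so the
cost is no longer one full MDS round trip per file. A rough sketch against the
Hadoop FileSystem API with a bounded thread pool; the pool size, class names,
and the assumption that the underlying FileSystem tolerates concurrent calls
are all illustrative, not the actual cephfs-hadoop code:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Same depth-first walk, but with several unlinks outstanding at once.
class ParallelDelete {
    static void deleteTree(FileSystem fs, Path dir, ExecutorService pool)
            throws IOException, InterruptedException {
        List<Future<Boolean>> inFlight = new ArrayList<>();
        for (FileStatus entry : fs.listStatus(dir)) {
            Path p = entry.getPath();
            if (entry.isDirectory()) {
                deleteTree(fs, p, pool);          // recurse, sharing the pool
            } else {
                inFlight.add(pool.submit(() -> fs.delete(p, false)));
            }
        }
        for (Future<Boolean> f : inFlight) {
            try {
                f.get();                          // surface any failed unlink
            } catch (ExecutionException e) {
                throw new IOException("unlink failed under " + dir, e.getCause());
            }
        }
        fs.delete(dir, false);                    // directory is empty now
    }

    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new org.apache.hadoop.conf.Configuration());
        ExecutorService pool = Executors.newFixedThreadPool(16); // size is a guess
        try {
            deleteTree(fs, new Path(args[0]), pool);
        } finally {
            pool.shutdown();
        }
    }
}

For a rough sense of scale (assuming, say, 3 million files for the "few
million" mentioned above): 20 hours is about 72,000 seconds, i.e. roughly
24 ms per unlink, consistent with paying one synchronous MDS round trip per
file. With 16 unlinks in flight, the same walk would take on the order of
1.25 hours, until the MDS itself becomes the bottleneck.
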
On Thu, Oct 13, 2016 at 12:44 PM, Heller, Chris wrote:
I have a directory I’ve been trying to remove from cephfs (via cephfs-hadoop);
the directory is a few hundred gigabytes in size and contains a few million
files, but not in a single subdirectory. I started the delete yesterday at
around 6:30 EST, and it’s still progressing. I can see from (ceph …
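
For reference, the removal itself is normally one recursive call at the
Hadoop API level; it is the filesystem layer underneath that turns it into one
unlink per file. An illustrative invocation (the path is made up):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class KickOffDelete {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // One call from the application's point of view; underneath, the
        // cephfs client still removes every entry individually.
        fs.delete(new Path("/user/example/big-dir"), true /* recursive */);
    }
}
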