Hello everyone,
something very strange is driving me crazy with CephFS (kernel driver).
I copy a large directory on the CephFS from one node. If I try to perform a
'time ls -alR' on that directory it gets executed in less than one second.
If I try to do the same 'time ls -alR' from another node it
need to
access a large number of files, and when we set them to work on CephFS,
latency skyrockets.
Thanks again and regards.
On Tue, Jun 16, 2015 at 10:59 AM, Gregory Farnum wrote:
> On Mon, Jun 15, 2015 at 11:34 AM, negillen negillen
> wrote:
> > Hello everyone,
> >
> >
re?
Yes, I could drop the Linux cache as a 'fix', but that would drop the entire
system's cache, which sounds a bit extreme! :P
Unless there is a way to drop the cache only for that single dir...?
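For what it's worth, the page cache can be evicted per-file with
posix_fadvise(POSIX_FADV_DONTNEED), so the global drop_caches knob isn't the
only option. A rough Python sketch (untested against CephFS specifically;
the call is advisory, so the kernel may still keep some pages resident):

```python
import os

def drop_dir_cache(path):
    """Advise the kernel to evict cached pages for every regular file
    under 'path', leaving the rest of the system's page cache alone.
    POSIX_FADV_DONTNEED is advisory; dirty pages are flushed first so
    they are actually eligible for eviction."""
    for dirpath, _dirnames, filenames in os.walk(path):
        for name in filenames:
            fpath = os.path.join(dirpath, name)
            try:
                fd = os.open(fpath, os.O_RDONLY)
            except OSError:
                continue  # unreadable or vanished mid-walk; skip it
            try:
                os.fsync(fd)  # flush dirty pages so DONTNEED can drop them
                os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
            finally:
                os.close(fd)
```

Note this only touches the local page cache; it does nothing about the
CephFS client's own metadata caps.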
On Tue, Jun 16, 2015 at 12:15 PM, Gregory Farnum wrote:
> On Tue, Jun 16, 2015 at 12:11
uld/should).
>
> Jan
>
>
> On 16 Jun 2015, at 13:37, negillen negillen wrote:
>
> Thanks again,
>
> even 'du' performance is terrible on node B (testing on a directory taken
> from Phoronix):
>
> # time du -hs /storage/test9/installed-tests/pts/pgbenc
r cannot be used on OSD nodes, or conflicts may arise):
# time tar c /storage/test10/installed-tests/pts/pgbench-1.5.1/ > /dev/null

real    0m26.934s
user    0m0.067s
sys     0m1.336s
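For comparison, the cost of 'ls -alR' is essentially one lstat() per entry,
so the same latency can be reproduced (and counted) with a small script; a
rough Python sketch, with the path being just an example:

```python
import os
import time

def time_recursive_stat(path):
    """Roughly what 'ls -alR' does: walk the tree and lstat() every entry.
    Returns (entry_count, elapsed_seconds). On CephFS, the per-entry stat
    is where a client with cold metadata pays the MDS round-trips."""
    start = time.monotonic()
    count = 0
    for dirpath, dirnames, filenames in os.walk(path):
        for name in dirnames + filenames:
            try:
                os.lstat(os.path.join(dirpath, name))
                count += 1
            except OSError:
                pass  # entry disappeared mid-walk
    return count, time.monotonic() - start
```

Running it on the same tree from node A and node B should show whether the
slowdown scales with the number of entries.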
On Tue, Jun 16, 2015 at 1:42 PM, John Spray wrote:
>
>
> On 16/06/2015 12:11, negillen negillen wrote:
, negillen negillen
wrote:
> Thanks everyone,
>
> update: I tried running on "node A":
> # vmtouch -ev /storage/
> # sync; sync
>
> The problem persisted; one minute needed to 'ls -Ral' the dir (from node
> B).
>
> After that I ran on node A:
> #
56 AM, Francois Lafont wrote:
> Hi,
>
> On 16/06/2015 18:46, negillen negillen wrote:
>
> > Fixed! At least looks like fixed.
>
> That's cool for you. ;)
>
> > It seems that after migrating every node (both servers and clients) from
> > kernel 3.10.80-1 to 4.