Sounds like we are going to restart with 20 threads on each storage node.
On Sat, Nov 3, 2018 at 8:26 PM Sergey Malinin wrote:
> scan_extents using 8 threads took 82 hours for my cluster holding 120M
> files on 12 OSDs with 1 Gbps between nodes. I would have gone with a lot more
> threads if I had
scan_extents using 8 threads took 82 hours for my cluster holding 120M files on
12 OSDs with 1 Gbps between nodes. I would have gone with a lot more threads if I
had known it only operated on the data pool and the only problem was network
latency. If I recall correctly, each worker used up to 800 MB of RAM.
For a 150 TB file system with 40 million files, how many cephfs-data-scan
threads should be used? And what is the expected run time? (We have 160 OSDs
with 4 TB disks.)
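For reference, parallelism in cephfs-data-scan comes from starting several worker processes against the data pool, each given its own worker_n out of worker_m. A minimal sketch with 20 workers spread over the storage nodes (the pool name is a placeholder and flag handling may differ slightly between releases, so check the disaster-recovery docs for your version):

# one process per worker, e.g. in separate screen/tmux sessions
$ cephfs-data-scan scan_extents --worker_n 0 --worker_m 20 <data pool>
$ cephfs-data-scan scan_extents --worker_n 1 --worker_m 20 <data pool>
...
$ cephfs-data-scan scan_extents --worker_n 19 --worker_m 20 <data pool>

# once every scan_extents worker has finished, scan_inodes is run the same way
$ cephfs-data-scan scan_inodes --worker_n 0 --worker_m 20 <data pool>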
I had a filesystem rank get damaged when the MDS had an error writing the log
to the OSD. Is damage expected when a log write fails?
According to the log messages, an OSD write failed because the MDS attempted
to write a chunk bigger than the OSD's maximum write size. I can probably
figure out why t
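In case it helps anyone searching later: the limit being hit here is presumably osd_max_write_size (90 MB by default). A hedged sketch for inspecting and, if appropriate, raising it on a running Luminous-era cluster:

# check the current value on one OSD via its admin socket
$ ceph daemon osd.0 config get osd_max_write_size

# raise it at runtime on all OSDs (value is in MB); persist it in
# ceph.conf under [osd] if it turns out to be the fix
$ ceph tell osd.* injectargs '--osd_max_write_size 256'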
Not exactly - this feature has been supported in Jewel starting with 10.2.11, ref
https://github.com/ceph/ceph/pull/18010
I thought you mentioned you were using Luminous 12.2.4.
From: David Turner
Date: Friday, November 2, 2018 at 5:21 PM
To: Pavan Rallabhandi
Cc: ceph-users
Subject: EXT: Re: [ceph-user
is it possible to snapshot the cephfs data pool?
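Not an authoritative answer, but for context: as far as I know, RADOS pool snapshots cannot be mixed with the self-managed snapshots CephFS uses on its data pool, and the usual alternative is CephFS directory snapshots. A sketch, assuming a Luminous-era cluster with a client mounted at /mnt/cephfs (path and fs name are placeholders):

# snapshots are disabled by default on Luminous; older releases may also
# require --yes-i-really-mean-it
$ ceph fs set <fs_name> allow_new_snaps true

# create / remove a snapshot of a directory from any client mount
$ mkdir /mnt/cephfs/somedir/.snap/snap-2018-11-03
$ rmdir /mnt/cephfs/somedir/.snap/snap-2018-11-03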
Having attempted to recover using the journal tool, and having that fail, we
are going to rebuild our metadata using a separate metadata pool.
We have the following procedure we are going to use. The issue I haven't
found yet (likely lack of sleep) is how to replace the original metadata
pool in th
Morning,
Having attempted to recover using the journal tool, and having that fail, we are
going to rebuild our metadata using a separate metadata pool.
We have the following procedure we are going to use. The issue I haven't found
yet (likely lack of sleep) is how to replace the original metad
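For anyone following along, the rough shape of that procedure, condensed from the "alternate metadata pool for recovery" section of the disaster-recovery docs. Pool and filesystem names are placeholders and the exact flags vary between releases, so treat this as a sketch rather than the poster's actual runbook:

# create a fresh metadata pool and a recovery filesystem on the existing data pool
$ ceph fs flag set enable_multiple true --yes-i-really-mean-it
$ ceph osd pool create recovery-meta 64
$ ceph fs new recovery-fs recovery-meta <original data pool> --allow-dangerous-metadata-overlay

# initialise the new metadata pool and reset the recovery filesystem
$ cephfs-data-scan init --force-init --filesystem recovery-fs --alternate-pool recovery-meta
$ ceph fs reset recovery-fs --yes-i-really-mean-it
$ cephfs-table-tool recovery-fs:all reset session
$ cephfs-table-tool recovery-fs:all reset snap
$ cephfs-table-tool recovery-fs:all reset inode

# rebuild metadata from the data pool (the long-running, parallelisable steps)
$ cephfs-data-scan scan_extents --alternate-pool recovery-meta --filesystem <original fs> <original data pool>
$ cephfs-data-scan scan_inodes --alternate-pool recovery-meta --filesystem <original fs> --force-corrupt --force-init <original data pool>
$ cephfs-data-scan scan_links --filesystem recovery-fs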
Hi,
On 03.11.18 10:31, jes...@krogh.cc wrote:
I suspect that the MDS asked the client to trim its cache. Please run the
following commands on an idle client.
In the meantime - we migrated to the RH Ceph version and gave the MDS
both SSDs and more memory, and the problem went away.
It still puzzles my
Hi.
I tried to enable the "new smart balancing" - the backend is on RH Luminous,
the clients are Ubuntu with a 4.15 kernel.
As per: http://docs.ceph.com/docs/mimic/rados/operations/upmap/
$ sudo ceph osd set-require-min-compat-client luminous
Error EPERM: cannot set require_min_compat_client to luminous: 1 conn
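Not a definitive answer, but the usual next step is to check which feature bits the connected clients actually report, since kernel clients often show up as "jewel" in that check even when they do support upmap (4.13+ kernels do, as far as I know). A sketch; only use the override once you are sure every client really understands upmap:

# show the release/features reported by every connected client and daemon
$ sudo ceph features

# override the check if the only "jewel" entries are recent kernel clients
$ sudo ceph osd set-require-min-compat-client luminous --yes-i-really-mean-it

# then turn on the upmap balancer
$ sudo ceph balancer mode upmap
$ sudo ceph balancer on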
> I suspect that the MDS asked the client to trim its cache. Please run the
> following commands on an idle client.
In the meantime - we migrated to the RH Ceph version and gave the MDS
both SSDs and more memory, and the problem went away.
It still puzzles my mind a bit - why is there a connection betwe
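For anyone debugging similar MDS cache pressure later, a couple of read-only checks that tend to be useful here (a sketch; the exact commands being asked for are not shown in this excerpt, and the debugfs path assumes a kernel client with debugfs mounted):

# on the MDS host: per-client session info, including how many caps each holds
$ sudo ceph daemon mds.<name> session ls

# on a kernel client: rough count of caps currently held
$ sudo wc -l /sys/kernel/debug/ceph/*/caps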
On Sat, Nov 3, 2018 at 09:10, Ashley Merrick wrote:
>
> Hello,
>
> Tried to do some reading online but was unable to find much.
>
> I can imagine a higher K + M size with EC requires more CPU to reassemble the
> shards into the required object.
>
> But is there any benefit or negative going with a
Hello,
Tried to do some reading online but was unable to find much.
I can imagine a higher K + M size with EC requires more CPU to reassemble
the shards into the required object.
But is there any benefit or negative going with a larger K + M? Obviously
there is the size benefit, but technically c
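To make the trade-off concrete: a larger k+m lowers the space overhead (roughly m/k) but means every read, write and recovery touches more OSDs, and the k+m chunks must each land in a separate failure domain. A hedged sketch comparing two profiles (names and values are arbitrary examples, not a recommendation):

# 4+2 = 50% overhead vs 8+3 = 37.5% overhead, failure domain = host
$ ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
$ ceph osd erasure-code-profile set ec-8-3 k=8 m=3 crush-failure-domain=host
$ ceph osd erasure-code-profile get ec-8-3

# a pool built from a profile; 8+3 needs at least 11 hosts to place a full stripe
$ ceph osd pool create ecpool-8-3 128 128 erasure ec-8-3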