Keep in mind that, in order for the workers not to overlap each other, you need
to set the total number of workers (worker_m) to nodes*20 and assign each node
its own processing range (worker_n).
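For concreteness, a sketch of what that might look like with two storage nodes
and 20 workers per node (worker_m = 40); the pool name "cephfs_data" and the
ranges are placeholders -- check cephfs-data-scan --help on your release:

    # on node 1: worker ranges 0-19 (run the same loop on node 2 with seq 20 39)
    for n in $(seq 0 19); do
        cephfs-data-scan scan_extents --worker_n $n --worker_m 40 cephfs_data &
    done
    wait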
On Nov 4, 2018, 03:43 +0300, Rhian Resnick wrote:
Sounds like we are going to restart with 20 threads on each storage node.
On Sat, Nov 3, 2018 at 8:26 PM Sergey Malinin wrote:
scan_extents using 8 threads took 82 hours for my cluster holding 120M files on
12 OSDs with 1 Gbps between nodes. I would have gone with a lot more threads if
I had known it only operates on the data pool and the only problem was network
latency. If I recall correctly, each worker used up to 800 MB of RAM.
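Extrapolating from those figures to the 150 TB / 40M-file question below is only
a back-of-envelope sketch (it assumes scan time scales linearly with file count
and worker count, and ignores network and OSD limits):

    # ~120M files / 82 h / 8 workers ~= 183k files per worker-hour;
    # at that rate, 40M files across 160 workers would be on the order of 1-2 hours
    awk 'BEGIN { r = 120e6/82/8; printf "%.0f files/worker-hour, %.1f hours\n", r, 40e6/(r*160) }'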
For a 150 TB file system with 40 million files, how many cephfs-data-scan
threads should be used? Or what is the expected run time? (We have 160 OSDs
with 4 TB disks.)
On 27.09.2018, at 15:04, John Spray wrote:
On Thu, Sep 27, 2018 at 11:34 AM Sergey Malinin wrote:
>
> Can such behaviour be related to data pool cache tiering?
Yes -- if there's a cache tier in use then deletions in the base pool
can be delayed and then happen later when the cache entries get
expired.
You may find that for a full scan of […]
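If you need the base pool's object count to settle before trusting it, one
option is to flush and evict the cache tier first -- a sketch, where "cachepool"
is a placeholder and forcing a flush/evict on a busy tier has its own cost:

    rados -p cachepool cache-flush-evict-all   # flush dirty entries and evict everything from the cache tier
    rados df                                   # then re-check per-pool object counts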
Can such behaviour be related to data pool cache tiering?
On 27.09.2018, at 13:14, Sergey Malinin wrote:
I'm trying alternate metadata pool approach. I double checked that MDS servers
are down and both original and recovery fs are set not joinable.
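For anyone following along, a minimal sketch of that pre-flight state on Mimic
and later ("cephfs" and "recovery-fs" are example filesystem names):

    ceph fs set cephfs joinable false        # keep MDS daemons from joining the original fs
    ceph fs set recovery-fs joinable false   # same for the recovery fs
    ceph fs status                           # confirm no rank is active before running the scan tools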
On 27.09.2018, at 13:10, John Spray wrote:
On Thu, Sep 27, 2018 at 11:03 AM Sergey Malinin wrote:
Hello,
Does anybody have experience with using the cephfs-data-scan tool?
Questions I have are: how long would it take to scan extents on a filesystem
with 120M relatively small files? While running the extents scan I noticed that
the number of objects in the data pool is decreasing over time. Is that normal?
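One way to keep an eye on that while the scan runs (the pool names below are
just examples):

    watch -n 60 'rados df | grep -E "cephfs_data|cephfs_metadata"'   # per-pool object counts
    ceph df detail                                                   # cluster-wide usage view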
On Tue, May 8, 2018 at 8:49 PM, Ryan Leimenstoll wrote:
Hi Gregg, John,
Thanks for the warning. It was definitely conveyed that they are dangerous. I
thought the online part was implied to be a bad idea, but just wanted to verify.
John,
We were mostly operating off of what the mds logs reported. After bringing the
mds back online and active, we […]
Absolutely not. Please don't do this. None of the CephFS disaster recovery
tooling in any way plays nicely with a live filesystem.
I haven't looked at these docs in a while -- are they not crystal clear about
all these operations being offline and in every way dangerous? :/
-Greg
On Mon, May 7, 2018 at 8:50 PM, Ryan Leimenstoll wrote:
Hi All,
We recently experienced a failure with our 12.2.4 cluster running a CephFS
instance that resulted in some data loss due to a seemingly problematic OSD
blocking IO on its PGs. We restarted the (single active) mds daemon during
this, which caused damage due to the journal not having the […]
When running the cephfs-data-scan tool to discover what files are affected
by my incomplete PGs, I get paths returned as expected. But I also receive
two different kinds of errors in the output.
2018-01-05 10:49:01.217218 7fc274fbb140 -1 pgeffects.hit_dir: Failed to stat path /homefolders/bdeetz-2/
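For reference, the subcommand for this kind of query is cephfs-data-scan
pg_files; a sketch with made-up PG IDs (take the real ones from "ceph health
detail" or "ceph pg dump_stuck"):

    # 2.1f and 2.3a are hypothetical placement group IDs
    cephfs-data-scan pg_files /homefolders 2.1f 2.3a > affected_files.txt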
On Tue, Jun 20, 2017 at 4:06 PM, Mazzystr wrote:
I'm on Red Hat Storage 2.2 (ceph-10.2.7-0.el7.x86_64) and I see this...
# cephfs-data-scan
Usage:
cephfs-data-scan init [--force-init]
cephfs-data-scan scan_extents [--force-pool]
cephfs-data-scan scan_inodes [--force-pool] [--force-corrupt]
--force-corrupt: overrite apparently corrupt […]
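For context, on newer releases the documented recovery sequence is roughly the
following (the data pool name is a placeholder; scan_links is not in jewel,
which is what the exchange below is about):

    cephfs-data-scan init
    cephfs-data-scan scan_extents cephfs_data
    cephfs-data-scan scan_inodes cephfs_data
    cephfs-data-scan scan_links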
Almost certainly. […]
On Thu, Jan 12, 2017 at 4:10 PM, Kjetil Jørgensen wrote:
Hi,
I want/need cephfs-data-scan scan_links; it's in master, although we're
currently on jewel (10.2.5). Am I better off cherry-picking the relevant
commit onto the jewel branch rather than just using master?
Cheers,
--
Kjetil Joergensen
SRE, Medallia Inc
Phone: +1 (650) 739-6580
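A minimal sketch of the cherry-pick route (the exact commit hash is not given
in the thread; locate it in the master history for the data-scan tool):

    git clone https://github.com/ceph/ceph.git && cd ceph
    git checkout -b jewel-scan-links origin/jewel
    git log --oneline origin/master -- src/tools/cephfs/DataScan.cc   # find the scan_links commit(s)
    git cherry-pick -x <commit-sha>   # placeholder; may need follow-up fixes to build on jewel
    # then rebuild and use the resulting cephfs-data-scan binary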