Thank you very much; with "ceph-volume lvm activate --all" we now have a
working solution in the test environment.
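A minimal sketch of that recovery step, assuming package-based (non-cephadm)
OSDs created with ceph-volume lvm; the verification line is illustrative:

    ceph-volume lvm activate --all        # scan LVM-backed OSDs, mount them and start their units
    systemctl list-units 'ceph-osd@*'     # confirm the ceph-osd@<id> units came up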
On Fri, 10 Jun 2022 at 11:21, Burkhard Linke <burkhard.li...@computational.bio.uni-giessen.de> wrote:
> Hi,
>
> On 10.06.22 10:23, Flemming Frandsen wrote:
Hmm, does that also create the mon, mgr and mds units?
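(ceph-volume only deals with OSDs; a possible sketch for re-enabling the other
daemons' systemd units on a package-based install with intact data directories
follows. The instance names are illustrative: mon and mgr usually use the
short hostname, mds uses the MDS name.)

    systemctl enable --now ceph-mon@$(hostname -s)   # monitor unit
    systemctl enable --now ceph-mgr@$(hostname -s)   # manager unit
    systemctl enable --now ceph-mds@$(hostname -s)   # mds unit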
On Fri, 10 Jun 2022 at 09:06, 胡 玮文 wrote:
> I think “ceph-volume lvm activate --all” should do it.
>
> Weiwen Hu
>
> > On 2022-06-10, at 14:34, Flemming Frandsen wrote:
> >
> > Hi, this is somewhat embarrassing
/systemd/system ?
--
Flemming Frandsen - YAPH - http://osaa.dk - http://dren.dk/
rbd: flatten error: (22) Invalid argument
2022-01-06 13:52:08.746 7f093568b0c0 -1 librbd::Operations: image has no parent
I'm running Nautilus.
Any ideas?
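(For context, the failing command was presumably along these lines; the
pool/image names are illustrative, and given the "image has no parent" message
the parent check is the first thing to look at:)

    rbd info mypool/myimage | grep parent   # a clone lists its parent here; no parent means nothing to flatten
    rbd flatten mypool/myimage              # only valid for cloned images that still have a parent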
--
Flemming Frandsen - YAPH - http://osaa.dk - http://dren.dk/
> Normally, lots of md io would indicate that the cache size is too
> small for the workload; but since you said the clients are pretty
> idle, this might not be the case for you.
>
> Cheers, Dan
>
> On Thu, Jul 8, 2021 at 9:36 AM Flemming Frandsen wrote:
> >
> > We have
We have a Nautilus cluster where any metadata write operation is very slow.
We're seeing very light load from clients, as reported by dumping ops in
flight; often it's zero.
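(The ops-in-flight dump meant here looks roughly like this; the MDS name is
illustrative:)

    ceph daemon mds.dalmore dump_ops_in_flight   # list client requests the MDS is currently processing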
We're also seeing about 100 MB/s writes to the metadata pool, constantly,
for weeks on end, which seems excessive, as only
is balancing happening actively now. If you don't pin,
> then it's likely.
>
> Try the debug logs. And check the exports using something like :
>
> ceph daemon mds.b get subtrees | jq '.[] | [.dir.path, .auth_first,
> .export_pin]'
>
> Dan
>
>
>
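A rough sketch of pinning a subtree to a rank, in case the balancer turns out
to be the culprit; the path and rank are illustrative:

    setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/some/dir   # pin this subtree to mds rank 0
    getfattr -n ceph.dir.pin /mnt/cephfs/some/dir        # read the pin back to confirm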
; particularly bad behavior in the md balancer.
> Increase debug_mds gradually on both mds's; hopefully that gives a hint as
> to what it's doing.
>
> .. dan
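A hedged example of raising the debug level on a running MDS; the daemon name
and level are illustrative, and the level should be turned back down afterwards:

    ceph daemon mds.dalmore config set debug_mds 10    # verbose mds logging via the admin socket
    ceph daemon mds.dalmore config set debug_mds 1/5   # restore the default when done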
>
>
> On Wed, Apr 21, 2021, 8:48 PM Flemming Frandsen wrote:
>
>> Not as of yet, it's steadil
kept growing into a few 10k, iirc. As soon as the
> exports completed the md log trimmed quickly.
>
> .. Dan
>
>
>
> On Wed, Apr 21, 2021, 7:38 PM Flemming Frandsen wrote:
>
>> I've gone through the clients mentioned by the ops in flight and none of
>> th
mds.dalmore(mds.0): Behind on trimming (4515/128) max_segments: 128, num_segments: 4515
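(The warning above looks like the MDS_TRIM detail from "ceph health detail";
for reference:)

    ceph health detail    # shows MDS_TRIM with num_segments vs max_segments per MDS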
On Wed, 21 Apr 2021 at 19:09, Flemming Frandsen wrote:
> I've just spent a couple of hours waiting for an MDS server to replay a
> journal that it was behind on and it seems to be getting worse.
>
},
{
    "description": "client_request(client.10439445:7 lookup #0x10004cd0675/jul 2021-04-21 16:26:17.334272 caller_uid=1000, caller_gid=1000{})",
    "initiated_at": "2021-04-21 16:26:17.352448",
    "age": 1633.813581226,
    "d
seq 1507, after about two hours of downtime.
I'm worried that restarting an MDS server takes the fs down for so long;
it makes upgrading a bit hard.
--
Flemming Frandsen - YAPH - http://osaa.dk - http://dren.dk/
utilus (stable)": 69,
"ceph version 14.2.19 (bb796b9b5bab9463106022eef406373182465d11)
nautilus (stable)": 6,
"ceph version 14.2.20 (36274af6eb7f2a5055f2d53ad448f2694e9046a0)
nautilus (stable)": 12
}
}
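(That output looks like the tail of a "ceph versions" dump; for reference:)

    ceph versions    # JSON summary of how many daemons run each Ceph version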
Any hints will be greatly appreciated.
--
Flemming Frandsen - YAPH - http://osaa.dk - http://dren.dk/
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io