Hi Alexander,
it might be that you are expecting too much from ceph. The design of the
filesystem was not some grand plan with every detail worked out. It was more
the classic evolutionary approach: something working was screwed on top of
rados, and things evolved from there on.
It is possible
Hi Dan,
thanks for the link, I've been reading it over and over again but
haven't come to a conclusion yet.
IIRC, the maintenance windows are one hour long, currently every week.
But it's not entirely clear if the maintenance will even have an
impact, because apparently, last time nobo
I'm not aware of any hard limit for the number of filesystems, but
that doesn't really mean very much. IIRC, last week during a Clyso
talk at Eventbrite I heard someone say that they deployed around 200
filesystems or so; I don't remember whether it was a production environment
or just a lab env
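For anyone who wants to try this, adding a second filesystem is a small
operation; a minimal sketch with placeholder names (on older releases you may
also need to enable the multiple-filesystems flag first):
ceph fs volume create fs2
ceph fs ls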
Hi Frank,
thanks a lot for the hint, and I have read the documentation about this.
What is not clear to me is this:
== snip
The first category of these failures that we will discuss involves
inconsistent networks -- if there is a netsplit (a disconnection between
two servers that splits the
On Thu, 21 Nov 2024 at 09:45, Andre Tann wrote:
> Hi Frank,
> thanks a lot for the hint, and I have read the documentation about this.
> What is not clear to me is this:
>
> == snip
> The first category of these failures that we will discuss involves
> inconsistent networks -- if there is a netsp
On 21.11.24 at 10:56, Janne Johansson wrote:
== snip
The first category of these failures that we will discuss involves
inconsistent networks -- if there is a netsplit (a disconnection between
two servers that splits the network into two pieces), Ceph might be
unable to mark OSDs down and remov
They actually did have problems after standby-replay daemons took over
as active daemons. After each failover they had to clean up some stale
processes (or something like that). I'm not sure who recommended it,
probably someone from SUSE engineering, but we switched off
standby-replay and t
The octopus repos disappeared a couple of days ago - no argument with that
given it's marked as out of support. However, I see from
https://docs.ceph.com/en/latest/releases/ that quincy is also marked as out
of support, but currently the repos are still there.
Is there any guesstimate of when the qu
I concur with that observation. Standby-replay seems like a useless mode of
operation. The replay daemons used a lot more RAM than the active ones, and
fail-over took ages. After switching to standby-only, fail-over is usually 5-20s,
with the lower end being more common.
We have 8 active and 4 stand
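For reference, the switch described above is just a per-filesystem flag; a
minimal sketch, assuming a filesystem named fs1 as elsewhere in the thread:
ceph fs set fs1 allow_standby_replay false
ceph fs status fs1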
Hi,
can anyone share some experience with these two configs?
ceph config get mds mds_session_blocklist_on_timeout
true
ceph config get mds mds_session_blocklist_on_evict
true
If there's some network maintenance going on and the client connection
is interrupted, could it help to disable evicti
On Wed, Nov 20, 2024 at 2:05 PM Rajmohan Ramamoorthy wrote:
>
> Hi Patrick,
>
> Few other follow up questions.
>
> Is directory fragmentation applicable only when multiple active MDS is
> enabled for a Ceph FS?
It has no effect when applied with only one rank (active). It can be
useful to have i
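To make the multiple-active-MDS case concrete, a minimal sketch (the filesystem
name fs1 and the mount path are placeholders; fragmentation and the balancer
only come into play once more than one rank is active):
ceph fs set fs1 max_mds 2
setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/project_a   # optional: pin a subtree to rank 1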
Hi Eugen,
During the talk you mentioned, Dan said there's a hard-coded limit of 256
MDSs per cluster. So with one active and one standby-ish MDS per filesystem,
that would be at most 128 filesystems per cluster.
Mark said he got 120, but things start to get wacky by 80. :-)
More fun to come
Hi folks, the perf meeting will be cancelled today, as Mark is flying from
a conference!
Thanks,
Matt
>
> IIRC, last week during a Clyso
> talk at Eventbrite I heard someone say that they deployed around 200
> filesystems or so; I don't remember whether it was a production environment
> or just a lab environment
Interesting, thanks!
I assume that you would probably be limited
> by the number of OSDs/P
Hi Frank, thanks!
> it might be that you are expecting too much from ceph. The design of the
> filesystem was not some grand plan with every detail worked out. It was
> more the classic evolutionary approach: something working was screwed on
> top of rados, and things evolved from there on.
There
>
> Can you show the entire 'ceph fs status' output? And maybe also 'ceph
> fs dump'?
Nothing special, just a small test cluster.
fs1 - 10 clients
===
RANK  STATE   MDS   ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active   a    Reqs: 0 /s   18.7k  18.4k   351    513
 1    active   b    Reqs:
Hi Eugene,
Disabling blocklisting on eviction is a pretty standard config. In my
experience it allows clients to resume their sessions cleanly without needing a
remount.
There are docs about this here:
https://docs.ceph.com/en/latest/cephfs/eviction/#advanced-configuring-blocklisting
I don't have a go
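For completeness, what the linked page describes boils down to a few settings;
a minimal sketch (weigh the trade-offs discussed in the docs first, and note
that the client-side option only reaches ceph-fuse/libcephfs clients via the
central config store):
ceph config set mds mds_session_blocklist_on_timeout false
ceph config set mds mds_session_blocklist_on_evict false
ceph config set client client_reconnect_stale true   # let a stale client session reconnect without a remount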
Hey Chris,
On Wed, Nov 20, 2024 at 6:02 PM Christopher Durham wrote:
>
> Casey,
>
> OR, is there a way to continue on with new data syncing (incremental) while the
> full sync catches up, as the full sync will take a long time, and no new
> incremental data is being replicated?
full sync walks th
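While that runs, the progress of the full sync can be watched from the receiving
zone; a quick sketch, with the zone name as a placeholder:
radosgw-admin sync status
radosgw-admin data sync status --source-zone=primary-zone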