Hello,
We have been trying to use Ceph-Dokan to mount CephFS on Windows. When
transferring any data below ~1GB, the transfer is as quick as desired and
works perfectly. However, once more than ~1GB has been transferred, the
connection stops being able to send data and everything seems to ju
Have you tried this with the native client under Linux? Could it just be
slow CephFS?
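If it helps, a rough probe along these lines (a sketch in Python, with an assumed mount point of /mnt/cephfs and arbitrary sizes, nothing taken from the thread) could show whether the same ~1GB stall appears with the kernel client; running the same script against the ceph-dokan mount on Windows would make the comparison direct:

#!/usr/bin/env python3
# Write a few GiB in fixed-size chunks and print per-chunk throughput,
# so a stall after ~1 GB shows up as a sudden drop in the numbers.
import os, time

MOUNT = "/mnt/cephfs"                    # assumed kernel-client mount point
TARGET = os.path.join(MOUNT, "throughput_probe.bin")
CHUNK = 64 * 1024 * 1024                 # 64 MiB per write
TOTAL = 4 * 1024 * 1024 * 1024           # 4 GiB, well past the ~1 GB mark

buf = os.urandom(CHUNK)
written = 0
with open(TARGET, "wb") as f:
    while written < TOTAL:
        t0 = time.monotonic()
        f.write(buf)
        f.flush()
        os.fsync(f.fileno())             # force the data out per chunk
        written += CHUNK
        dt = time.monotonic() - t0
        print(f"{written / 2**20:6.0f} MiB total, "
              f"{CHUNK / 2**20 / dt:7.1f} MiB/s this chunk")
os.remove(TARGET)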
On 1.11.2021 at 06:40, Mason-Williams, Gabryel (RFI,RAL,-) wrote:
Hello,
We have been trying to use Ceph-Dokan to mount cephfs on Windows. When
transferring any data below ~1GB the transfer speed is as
You can fail over from one running Ganesha to another using something like
ctdb or pacemaker/corosync. This is how some other clustered
filesystems (e.g. Gluster) use Ganesha. This is not how the Ceph
community has decided to implement HA with Ganesha, so it will be a more
manual setup for you, b
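For reference, the pacemaker side of such a manual setup often boils down to a floating IP plus the Ganesha service, colocated and ordered. A minimal sketch follows (Python driving pcs; the resource names, address and netmask are placeholders, and the pcs syntax should be checked against your pacemaker version):

#!/usr/bin/env python3
# Sketch only: create a virtual IP and an nfs-ganesha resource and keep
# them on the same node, so clients always mount from whichever node
# currently runs Ganesha.
import subprocess

def pcs(*args):
    subprocess.run(["pcs", *args], check=True)

# Floating IP the NFS clients mount from (placeholder address).
pcs("resource", "create", "ganesha_vip", "ocf:heartbeat:IPaddr2",
    "ip=192.0.2.10", "cidr_netmask=24", "op", "monitor", "interval=10s")

# The Ganesha daemon itself, managed via its systemd unit.
pcs("resource", "create", "ganesha", "systemd:nfs-ganesha",
    "op", "monitor", "interval=30s")

# Keep the daemon with the VIP, and bring the VIP up first.
pcs("constraint", "colocation", "add", "ganesha", "with", "ganesha_vip", "INFINITY")
pcs("constraint", "order", "ganesha_vip", "then", "ganesha")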
Hi Thilo,
theoretically this is a recoverable case - due to the bug, a new prefix was
inserted at the beginning of every OMAP record instead of replacing the old
one. So one just has to remove the old prefix to fix that (the to-be-removed
prefix starts after the first '.' char and ends with the second one
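In other words, the repair is essentially a key rewrite of the form below (a toy Python illustration of the transformation only, with made-up key names; the real fix has to be applied to the OMAP data with the appropriate tooling, not to plain strings):

def strip_stale_prefix(key: str) -> str:
    # Drop the stale prefix sitting between the first and second '.',
    # e.g. "new.old.rest_of_key" -> "new.rest_of_key"
    first = key.find('.')
    second = key.find('.', first + 1)
    if first == -1 or second == -1:
        return key            # key is not in the doubled-prefix form
    return key[:first + 1] + key[second + 1:]

print(strip_stale_prefix("new.old.some_omap_key"))   # -> new.some_omap_key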
On 10/29/2021 1:06 PM, Elias Abacioglu wrote:
I don't have any data yet.
I set up a k8s cluster and set up CephFS, RGW and RBD for k8s. So it's
hard to tell beforehand what we will store or what the compression ratios
will be. That makes it hard to know how to benchmark, but I guess a mix of
everything f
Hey Dustin,
what Pacific version have you got?
Thanks,
Igor
On 11/1/2021 7:08 PM, Dustin Lagoy wrote:
Hi everyone,
This is my first time posting here, so it's nice to meet you all!
I have a Ceph cluster that was recently upgraded from Octopus to Pacific and
now the write performance is no
Then it's highly likely you're bitten by https://tracker.ceph.com/issues/52089
This has been fixed starting 16.2.6. So please update, or wait for a bit
till 16.2.7 is released, which is going to happen shortly.
Thanks,
Igor
On 11/1/2021 7:25 PM, Dustin Lagoy wrote:
I am running a cephadm base cl
Hello,
I’m evaluating Ceph as a storage option, using ceph version 16.2.6, Pacific
stable installed using cephadm. I was hoping to use PG autoscaling to reduce
ops efforts. I’m standing this up on a cluster with 96 OSDs across 9 hosts.
The device_health_metrics pool was created by Ceph automati
Hi Alex,
Switch the autoscaler to the 'scale-up' profile; it will keep PGs at a minimum
and increase them as required. The default profile is 'scale-down'.
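In case it saves a lookup, switching and then sanity-checking the plan could look like this (a sketch using the standard CLI via Python; the autoscale-profile command is the Pacific-era one, so verify it exists on your release):

import subprocess

def ceph(*args):
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

# Keep pools at minimal PG counts and let the autoscaler grow them.
ceph("osd", "pool", "autoscale-profile", "scale-up")

# Review per-pool current vs. target pg_num before trusting the result.
print(ceph("osd", "pool", "autoscale-status"))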
Regards,
Yury.
On Tue, Nov 2, 2021 at 3:31 AM Alex Petty wrote:
> Hello,
>
> I’m evaluating Ceph as a storage option, using ceph version 16.2.6,
> Pacific st
Hi,
Why do you think it’s used at 91%?
Ceph reports 47.51% usage for this pool.
-
Etienne Menguy
etienne.men...@croit.io
> On 1 Nov 2021, at 18:03, Szabo, Istvan (Agoda) wrote:
>
> Hi,
>
> Theoretically my data pool is at 91% used but the fullest OSD is at 60%,
> should I worry?
>
>
Max available = free space actually usable now based on OSD usage, not
including already-used space.
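As a back-of-the-envelope illustration of why MAX AVAIL tracks the most-loaded OSD rather than the pool's percent-used figure (toy numbers in Python, not Ceph's exact formula and not the values from this thread):

# 10 TB OSDs, fullest one already at 60%, full_ratio 0.95, 3x replication
osd_size_tb = 10.0
full_ratio = 0.95
fullest_used_frac = 0.60
replicas = 3
n_osds = 30

# Headroom on the most-constrained OSD bounds the whole pool, because
# new data is assumed to keep spreading in the same proportions.
headroom_per_osd = osd_size_tb * (full_ratio - fullest_used_frac)
raw_headroom = headroom_per_osd * n_osds        # raw bytes still writable
max_avail_tb = raw_headroom / replicas          # divide out the replicas
print(f"MAX AVAIL ~= {max_avail_tb:.0f} TB of user data")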
-Alex
MIT CSAIL
On 11/1/21, 2:18 PM, "Szabo, Istvan (Agoda)" wrote:
It says max available: 115TB and current use is 104TB; what I don't
understand is where the max available comes from becaus
I can add another 2 positive datapoints for the balancer, my personal and work
clusters are both happily balancing.
Good luck :)
-Alex
On 11/1/21, 3:05 PM, "Josh Baergen" wrote:
Well, those who have negative reviews are often the most vocal. :)
We've had few, if any, problems with the
The balancer does a pretty good job. It's the PG autoscaler that has bitten
us frequently enough that we always ensure it is disabled for all pools.
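For anyone wanting to do the same, "disabled for all pools" can be done along these lines (a sketch with standard Ceph CLI calls driven from Python; adjust to taste):

import subprocess

def ceph(*args):
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

# Make 'off' the default for pools created later.
ceph("config", "set", "global", "osd_pool_default_pg_autoscale_mode", "off")

# Turn the autoscaler off for every existing pool.
for pool in ceph("osd", "pool", "ls").splitlines():
    ceph("osd", "pool", "set", pool, "pg_autoscale_mode", "off")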
David
On Mon, Nov 1, 2021 at 2:08 PM Alexander Closs wrote:
> I can add another 2 positive datapoints for the balancer, my personal and
> work clu
Hi Manuel,
I'm looking at the ticket for this issue (
https://tracker.ceph.com/issues/51463) and tried to reproduce. This was
initially trivial to do with vstart (rados bench paused for many seconds
after stopping an OSD) but it turns out that was because the vstart
ceph.conf includes `osd_fast_
I think this thread has inadvertently conflated the two.
Balancer: ceph-mgr module that uses pg-upmap to balance OSD utilization /
fullness
Autoscaler: attempts to set pg_num / pgp_num for each pool adaptively
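To keep the two apart operationally, here is a sketch with standard CLI calls (note that upmap mode also requires all clients to be at least Luminous):

import subprocess

def ceph(*args):
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

# Balancer: enable pg-upmap based balancing and check what it is doing.
ceph("osd", "set-require-min-compat-client", "luminous")
ceph("balancer", "mode", "upmap")
ceph("balancer", "on")
print(ceph("balancer", "status"))

# Autoscaler: reports per-pool current vs. target pg_num; it never
# touches OSD fullness, only PG counts.
print(ceph("osd", "pool", "autoscale-status"))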
>
> The balancer does a pretty good job. It's the PG autoscaler that has bitten
Hi!
I have a 3-node 16.2.6 cluster with 33 OSDs, and plan to add another 3
nodes of the same configuration to it. What is the best way to add the new
nodes and OSDs so that I can avoid a massive rebalance and performance hit
until all new nodes and OSDs are in place and operational?
I would very
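One commonly suggested pattern for this kind of staged expansion, sketched below with standard CLI calls (this is not from the thread; the OSD IDs and weights are placeholders, and the final CRUSH weight is conventionally the OSD size in TiB): add the new OSDs with zero CRUSH weight so nothing moves at deployment time, then raise the weights in steps once everything is in place.

import subprocess

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

# Before adding the new hosts: have new OSDs come up with no data mapped.
ceph("config", "set", "osd", "osd_crush_initial_weight", "0")

# ... deploy the 3 new nodes and their OSDs here ...

# Afterwards, ramp each new OSD toward its real weight in steps, letting
# the cluster settle in between.
new_osds = ["osd.33", "osd.34", "osd.35"]     # example IDs only
for weight in ("0.5", "1.0", "1.7"):          # last value = OSD size in TiB
    for osd in new_osds:
        ceph("osd", "crush", "reweight", osd, weight)
    input(f"Weights set to {weight}; wait for HEALTH_OK, then press Enter...")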