Hello,
what is the currently preferred method, in terms of stability and
performance, for exporting a CephFS directory with Samba?
- locally mount the CephFS directory and export it via Samba?
- using the "vfs_ceph" module of Samba?
Best,
Martin
We noticed this degraded write performance too recently when the nearfull flag
is present (cephfs kernel client, kernel 4.19.154).
Appears to be due to forced synchronous writes when nearfull.
https://github.com/ceph/ceph-client/blob/558b4510f622a3d96cf9d95050a04e7793d343c7/fs/ceph/file.c#L1837-L1
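(For what it's worth, the flag and the ratio it is derived from can be
checked quickly with standard commands:

    ceph health detail
    ceph osd dump | grep -i ratio

the latter should show full_ratio, backfillfull_ratio and nearfull_ratio.)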
Hey folks!
We have a user with ~1900 buckets in our RGW service and running this stat
command results in a timeout for them:
swift -A https://:443/auth/1.0 -U -K stat
Running the same command, but specifying one of their buckets, returns
promptly. Running the command for a different user wi
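As a point of comparison, the account-level totals should also be
retrievable server-side with radosgw-admin (the uid here is a placeholder):

    radosgw-admin user stats --uid=<uid> --sync-stats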
It might be more accurate to say that the default nearfull is 85% for
that reason, among others. Raising it will probably not get you enough
storage to be worth the hassle.
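If you really want to raise it temporarily, that would be along the lines of

    ceph osd set-nearfull-ratio 0.87

(the value is only an example), but again, the handful of extra percent is
rarely worth it.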
On Tue, Apr 13, 2021 at 7:18 AM zp_8483 wrote:
>
> Backend:
>
> XFS for the filestore back-end.
>
>
> In our testing, we fou
After much digging, I figured out this was due to Nagle's Algorithm and to the
fact that I had the COSBench services on the same host as the RGW daemons.
The fix was to disable Nagle's Algorithm by using the option tcp_nodelay=1 in
the rgw_frontends configuration. This option works for both Civetweb and Be
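For reference, the setting ends up in the rgw_frontends line roughly like this
(frontend type, port and section name are only illustrative):

    [client.rgw.gateway1]
        rgw_frontends = beast port=8080 tcp_nodelay=1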
Hello,
When bootstrapping a new Ceph Octopus cluster with "cephadm bootstrap", how can
I tell cephadm bootstrap NOT to install the ceph-grafana container?
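The only related option I have found so far is --skip-monitoring-stack, which
(if I read the docs correctly) skips Prometheus, Grafana, Alertmanager and
node-exporter all at once, e.g.:

    cephadm bootstrap --mon-ip <ip> --skip-monitoring-stack

Is there a way to leave out only the ceph-grafana container?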
Thank you very much in advance for your answer.
Best regards,
Mabi
I'm happy to announce another release of the go-ceph API
bindings. This is a regular release following our every-two-months release
cadence.
https://github.com/ceph/go-ceph/releases/tag/v0.9.0
Changes in the release are detailed in the link above.
The bindings aim to play a similar role to the
All;
We run 2 Nautilus clusters, with RADOSGW replication (14.2.11 --> 14.2.16).
Initially our bucket grew very quickly, as I was loading old data into it and
we quickly ran into Large OMAP Object warnings.
I have since done a couple of manual reshards, which has fixed the warning on the
primary
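For reference, a manual reshard is typically along these lines (uid, bucket
name and shard count are placeholders):

    radosgw-admin bucket limit check --uid=<uid>
    radosgw-admin bucket reshard --bucket=<bucket> --num-shards=<N>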
Following up to myself, some notes on what helped:
- Finding OSDs with excessive "bad authorizer" log messages, killing them, and restarting them.
In many cases this cleared the unknown PGs and restored more normal
I/O. However, some OSDs continued to log a high number of these messages for some
more hours even after resta
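In concrete terms that amounts to something like the following, with log path
and OSD id as placeholders:

    grep -c 'bad authorizer' /var/log/ceph/ceph-osd.*.log
    systemctl restart ceph-osd@<id>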
On Tue, Apr 13, 2021 at 12:35 PM Mark Nelson wrote:
>
> On 4/13/21 4:07 AM, Dan van der Ster wrote:
>
> > On Tue, Apr 13, 2021 at 9:00 AM Wido den Hollander wrote:
> >>
> >>
> >> On 4/12/21 5:46 PM, Dan van der Ster wrote:
> >>> Hi all,
> >>>
> >>> bdev_enable_discard has been in ceph for several
Backend:
XFS for the filestore back-end.
In our testing, we found that performance decreases when cluster usage exceeds
the default nearfull ratio (85%). Is this by design?
On 4/13/21 4:07 AM, Dan van der Ster wrote:
On Tue, Apr 13, 2021 at 9:00 AM Wido den Hollander wrote:
On 4/12/21 5:46 PM, Dan van der Ster wrote:
Hi all,
bdev_enable_discard has been in ceph for several major releases now
but it is still off by default.
Did anyone try it recently -- is it
Hi,
this is documented with many links to other documents, which
unfortunately only confused me. In our 6-node Ceph cluster (Pacific),
the Dashboard tells me that I should "provide the URL to the API of
Prometheus' Alertmanager". We only use Grafana and Prometheus, which
are deployed by cephadm. We
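From what I can tell, the URL itself would be set with something like

    ceph dashboard set-alertmanager-api-host 'http://<host>:9093'

but since we have not deployed Alertmanager at all, I am not sure whether that
is the right direction or whether the warning can simply be ignored.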
On Tue, Apr 13, 2021 at 9:00 AM Wido den Hollander wrote:
>
>
>
> On 4/12/21 5:46 PM, Dan van der Ster wrote:
> > Hi all,
> >
> > bdev_enable_discard has been in ceph for several major releases now
> > but it is still off by default.
> > Did anyone try it recently -- is it safe to use? And do you
On 4/12/21 5:46 PM, Dan van der Ster wrote:
> Hi all,
>
> bdev_enable_discard has been in ceph for several major releases now
> but it is still off by default.
> Did anyone try it recently -- is it safe to use? And do you have perf
> numbers before and after enabling?
>
I have done so on SATA
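(For anyone wanting to try it, enabling it should just be something like

    ceph config set osd bdev_enable_discard true

plus an OSD restart, if I remember correctly; whether it is actually safe and
worthwhile is exactly the open question here.)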