I have a self-compiled Ceph cluster based on v14.2.9. I tested writing to a
pool until it was full; after that the OSDs started to panic and can no
longer be restarted.
osd config:
"mon_osd_nearfull_ratio": "0.85",
"mon_osd_full_ratio": "0.95"
"osd_failsafe_full_ratio": "0.97",
Although
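If it is just the full thresholds blocking things, I assume the ratios can be
raised temporarily to free space. As far as I know mon_osd_full_ratio and
mon_osd_nearfull_ratio are only read when the cluster is first created, so the
runtime OSDMap ratios are what need changing; a rough sketch with example
values only:

# temporarily raise the cluster-wide thresholds; lower them back once
# space has been freed
ceph osd set-nearfull-ratio 0.90
ceph osd set-full-ratio 0.96
# the per-OSD failsafe can be raised through the config system as well
ceph config set osd osd_failsafe_full_ratio 0.98
# then free space (e.g. delete the test pool) and watch usage drop
ceph df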
Hi
My colleagues want to use Ceph RGW to store ES backups and Nexus blobs.
But the services cannot connect to RGW with the S3 protocol when I
provide them with the frontend nginx address (virtual IP). Only when
they use the backend RGW address (real IP) do ES and Nexus work
well with RGW.
Has anyone
Hi,
Could someone help me figure out what the issue is with our deployment steps, please?
Initial RGW Cluster_1
=== ADD_RGW_TO_CLUSTER ===
Create Default Realm
- sudo radosgw-admin realm create --rgw-realm=default --default
Create De
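(For context, the usual sequence for a fresh master zone looks roughly like
the sketch below; the realm/zonegroup/zone names and the endpoint are
placeholders, not necessarily what the deployment scripts intend.)

sudo radosgw-admin realm create --rgw-realm=default --default
sudo radosgw-admin zonegroup create --rgw-zonegroup=default \
    --endpoints=http://rgw1.example.com:7480 --master --default
sudo radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=default \
    --endpoints=http://rgw1.example.com:7480 --master --default
# commit the period so the running RGW daemons pick up the new configuration
sudo radosgw-admin period update --commit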
Zhenshi Zhou wrote:
My colleagues want to use Ceph RGW to store ES backups and Nexus blobs.
But the services cannot connect to RGW with the S3 protocol when I
provide them with the frontend nginx address (virtual IP). Only when
they use the backend RGW address (real IP) do ES and Nexus work
well wit
Hi,
I'd like to gain a better understanding of which operations emit which
of these performance counters, in particular: when is 'op_rw' incremented
instead of 'op_r' + 'op_w'?
I've done a little bit of investigation (v12.2.13), running various
workloads and operations against an RBD volume
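These counters can be read per OSD over the admin socket, e.g. (osd.0 is a
placeholder and jq is only used to filter the output):

# sample the op counters of one OSD
ceph daemon osd.0 perf dump | jq '.osd | {op_r, op_w, op_rw}'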
This is the ES error log:
{
  "error": {
    "root_cause": [
      {
        "type": "repository_verification_exception",
        "reason": "[test] path is not accessible on master node"
      }
    ],
    "type": "repository_verification_exception",
    "reason": "[test] path is not accessible on ma
I did say I'd test using librbd - and this changes my observations.
Using fio configured with the rbd driver:
- a random write workload emits roughly equal 'op_w' and 'op_rw'
initially, then just 'op_w' (until the sparse allocation is filled in, maybe)?
So this certainly does help me understand why I'm
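The fio job was along these lines (pool, image and client names are
placeholders rather than the exact job file used here):

; fio job for the rbd ioengine
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio-test
rw=randwrite
bs=4k
iodepth=1
runtime=60
time_based=1

[rbd-randwrite]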
Digging a bit deeper: in my first tests, in order to mount via the kernel
client I had to disable a number of features on the RBD volume - in
particular 'object-map'. So, redoing my librbd testing but disabling various
features immediately after creating the volume, I find:
- disabling 'object-map' elimin
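The feature toggling itself was along these lines (placeholder pool/image
names):

# fast-diff depends on object-map, so both are disabled together
rbd feature disable rbd/fio-test fast-diff object-map
# confirm which features are still enabled on the image
rbd info rbd/fio-test | grep features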
Wow, 34K IOPS at 4k, iodepth 1 😊
How many nodes, how many SSDs, and what network?
I can't find any firmware for the LSI card anymore...
-Original Message-
From: Marc Roos
Sent: Tuesday, 01 September 2020 23:33
To: VELARTIS Philipp Dürhammer ; reed.dier
Cc: ceph-users
Subject: RE: [ceph-
>> I assume you are referencing this parameter?
>> storcli /c0/v0 set ssdcaching=
>> If so, this is for CacheCade, which is LSI's cache tiering solution, which
>> should both be off and not in use for ceph.
No, 'storcli /cx/vx set pdcache=off' is denied because of the LSI setting "Block
SSD Writ
Hi,
How do I set the correct URL for Grafana in a
newly cephadm-bootstrapped cluster?
When I try to access the performance parts of the
Ceph dashboard my browser tells me that it cannot
resolve the short hostname that is presented in the
URL to Grafana.
cephadm seems to use only the hostname and no
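The setting I assume I need is the dashboard's Grafana API URL, something
like the sketch below (FQDN and port are placeholders), followed by a
dashboard module restart:

ceph dashboard get-grafana-api-url
ceph dashboard set-grafana-api-url https://mon1.example.com:3000
# restart the dashboard so it picks up the new URL
ceph mgr module disable dashboard
ceph mgr module enable dashboard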
On Wed, Sep 2, 2020 at 12:41 AM Szabo, Istvan (Agoda)
wrote:
>
> Hi,
>
>
> Could someone help me figure out what the issue is with our deployment steps, please?
>
>
> Initial RGW Cluster_1
> === ADD_RGW_TO_CLUSTER ===
> Create Default R
On 2020-09-01 10:51, Marcel Kuiper wrote:
> As a matter of fact we did. We doubled the storage nodes from 25 to 50.
> Total osds now 460.
>
> You want to share your thoughts on that?
Yes. We observed the same thing with expansions. The OSDs will be very
busy (with multiple threads per OSD) on hou
:) This is just native disk performance with a regular SATA adapter,
nothing fancy; on the Ceph hosts I have the SAS2308.
-Original Message-
Cc: 'ceph-users'
Subject: AW: [ceph-users] Re: Can 16 server grade ssd's be slower then
60 hdds? (no extra journals)
Wow, 34K IOPS at 4k, iodepth 1 😊
H
Just out of curiosity, if you do a 'show all' on /cX/vX, what is shown
for the VD properties?
> VD0 Properties :
> ==
> Strip Size = 256 KB
> Number of Blocks = 1953374208
> VD has Emulated PD = No
> Span Depth = 1
> Number of Drives Per Span = 1
> Write Cache(initial setting) =
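For anyone following along, that output comes from something like the
following (controller and virtual-drive ids are placeholders):

storcli /c0/v0 show all
# the per-drive cache state can be checked across all slots as well
storcli /c0/eall/sall show all | grep -i cache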
With that first command, I get this error:
Error EINVAL: pool 'cephfs_metadata' already contains some objects. Use an
empty pool instead.
What can I do?
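If the existing objects really are metadata meant to be reused (for example
when recreating a filesystem from surviving pools), that check can be
overridden; a hedged sketch with placeholder names, only safe if the pool
contents belong to this filesystem:

# --force overrides the "already contains some objects" check
ceph fs new cephfs cephfs_metadata cephfs_data --force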
Nevermind, it works now. Thanks for the help.
Good day,
I am having an issue with some multipart uploads to radosgw. I
recently upgraded my cluster from Mimic to Nautilus and began having
problems with multipart uploads from clients using the Java AWS SDK
(specifically 1.11.219). I do NOT have issues with multipart uploads
with other clients
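A couple of cross-checks from outside the Java SDK can help narrow it down
(endpoint, bucket and file names are placeholders):

# awscli switches to multipart automatically for files above ~8 MB, so a
# large upload exercises the same code path
aws --endpoint-url http://rgw.example.com:7480 s3 cp ./bigfile.bin s3://test-bucket/
# list multipart uploads that were started but never completed
aws --endpoint-url http://rgw.example.com:7480 s3api list-multipart-uploads --bucket test-bucket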
Did you try to restart the dashboard mgr module after your change?
# ceph mgr module disable dashboard
# ceph mgr module enable dashboard
Regards,
On 02/09/2020 12:07, Stefan Kooman wrote:
On 2020-09-01 10:51, Marcel Kuiper wrote:
As a matter of fact we did. We doubled the storage nodes from 25 to 50.
Total osds now 460.
You want to share your thoughts on that?
Yes. We observed the same thing with expansions. The OSDs will be very
bu
It seems like your nginx has the wrong configuration for reverse proxying S3.
Thanks.
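For comparison, a minimal reverse-proxy block for RGW S3 looks roughly like
the sketch below (server name, upstream addresses and ports are placeholders,
not your actual config). The usual culprits are a rewritten Host header, which
breaks the S3 signature check, and a small client_max_body_size:

upstream rgw_backend {
    server 10.0.0.11:7480;
    server 10.0.0.12:7480;
}

server {
    listen 80;
    server_name s3.example.com;

    # S3 objects can be large; do not let nginx cap the upload size
    client_max_body_size 0;

    location / {
        proxy_pass http://rgw_backend;
        # pass the original Host header through so RGW sees the same host
        # the client signed the request with
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # avoid buffering whole uploads on the proxy
        proxy_request_buffering off;
        proxy_http_version 1.1;
    }
}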
Zhenshi Zhou wrote:
This is the ES error log:
{
  "error": {
    "root_cause": [
      {
        "type": "repository_verification_exception",
        "reason": "[test] path is not accessible on master node"
I just came across SUSE documentation stating that RBD features are not iSCSI
compatible. Since I have had 2 cases of image corruption in this scenario in 10
days, I'm wondering if my setup is to blame.
So the question is whether it is possible to provide disks to a Windows Server
2019 via iSCSI while using
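For what it's worth, checking and stripping the features called out in that
documentation is straightforward (pool and image names below are placeholders):

# list the features currently enabled on the exported image
rbd info rbd/win2019-disk01 | grep features
# individual features can be switched off; fast-diff depends on object-map,
# so they are disabled together
rbd feature disable rbd/win2019-disk01 fast-diff object-map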
BTW, the documentation can be found here:
https://documentation.suse.com/ses/6/html/ses-all/ceph-rbd.html
--
Salsa
‐‐‐ Original Message ‐‐‐
On Wednesday, September 2, 2020 7:08 PM, Salsa wrote:
> I just came across SUSE documentation stating that RBD features are not
> iSCSI compat
Hi Tom,
Thanks for the reply. Here is my nginx configuration.
Did I miss something, or is there some special option to set?
What's more, our Flink jobs work well when connecting to the frontend.
[image: image.png]
Tom Black wrote on Thursday, September 3, 2020 at 8:13 AM:
> It seems like your nginx has the wrong configurati
Hi,
The cluster I'm writing about has a long history (months) of instability,
mainly related to large RocksDB databases and high memory consumption.
The use case is RGW with an EC 8+3 pool for data.
In the last months this cluster has been suffering from OSDs using much
more memory than osd_mem
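For reference, a few hedged starting points for inspecting this (the OSD id
and values are examples only):

# compare the configured memory target with what the OSD actually tracks
ceph config get osd.0 osd_memory_target
ceph daemon osd.0 dump_mempools
# trigger a manual RocksDB compaction on one OSD to see if the DB shrinks
ceph daemon osd.0 compact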
Zhenshi Zhou wrote:
Thanks for the reply. Here is my nginx configuration.
Did I miss something, or is there some special option to set?
What's more, our Flink jobs work well when connecting to the frontend.
Can you check nginx's error log when the connection error happens in the
Java client?
re