Hi, all.
Does anyone know what the endpoint for CREATE TOPIC is (for bucket
notifications)?
https://docs.ceph.com/docs/master/radosgw/notifications/#create-a-topic
Is it the same as the normal S3 API endpoint? I tried that, but it failed.
Thanks.
My question apparently requires too complex an answer.
So let me ask a simpler one:
What does the SIZE column of "osd pool autoscale-status" mean, and where does it come from?
Thanks
Lars
Wed, 23 Oct 2019 14:28:10 +0200
Lars Täuber ==> ceph-users@ceph.io :
> Hello everybody!
>
> What does this mean?
>
> health: HEA
This question is answered here:
https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/
But it tells me that more data is stored in the pool than the raw
capacity provides (taking the replication factor RATE into account), hence the
RATIO being above 1.0.
How come this is the ca
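For reference, the SIZE shown by the autoscaler is the amount of data stored
in the pool, so it should roughly correspond to the STORED values from ceph
df; a plain sketch for comparing the two:
---snip---
# SIZE per pool, as the autoscaler sees it
ceph osd pool autoscale-status
# STORED vs. USED (raw, i.e. roughly STORED x RATE) per pool
ceph df detail
---snip---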
Hello,
this is understood.
I needed to start reweighting specific OSDs because rebalancing was not
working and I got a warning in Ceph that some OSDs were running out of space.
KR
Am 24.10.2019 um 05:58 schrieb Konstantin Shalygin:
> On 10/23/19 2:46 PM, Thomas Schneider wrote:
>> Sure, here's th
Hi Frank,
just a short note on changing EC profiles. If you try to change only a
single value you'll end up with a mess. See this example (Nautilus):
---snip---
# Created new profile
mon1:~ # ceph osd erasure-code-profile get ec-k2m4
crush-device-class=
crush-failure-domain=host
crush-root=de
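# A sketch of the safer approach (values are examples): when overwriting an
# existing profile, re-specify every value, not just the one to change, and
# pass --force; otherwise the unspecified keys fall back to their defaults.
ceph osd erasure-code-profile set ec-k2m4 k=2 m=4 \
    crush-failure-domain=host crush-device-class=hdd --force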
The formatting is mangled on my phone, but if I am reading it correctly,
you have set Target Ratio to 4.0. This means you have told the balancer
that this pool will occupy 4x the space of your whole cluster, and to
optimize accordingly. This is naturally a problem. Setting it to 0 will
clear the se
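A sketch of what that would look like (the pool name is an example; setting
the ratio to 0 removes the overcommit assumption):
---snip---
ceph osd pool set cephfs_data target_size_ratio 0
---snip---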
Thanks Nathan for your answer,
but I set the Target Ratio to 0.9. It is the cephfs_data pool that causes
the trouble.
The 4.0 is the BIAS of the cephfs_metadata pool. This "BIAS" is not explained
on the page linked below, so I don't know its meaning.
How can a pool be overcommitted when i
Ah, I see! The BIAS is a multiplier on the number of placement groups the
autoscaler will create for a pool. Since cephfs metadata pools are usually
very small, but have many objects and high IO, the autoscaler gives them 4x
the number of placement groups that it would normally give for that amount
of data.
So, your cephfs_data
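For reference, this multiplier shows up as the BIAS column, and on releases
that expose the option it can be tuned per pool (pool name is an example;
4.0 is what the autoscaler applies to cephfs metadata pools):
---snip---
ceph osd pool autoscale-status
ceph osd pool set cephfs_metadata pg_autoscale_bias 4.0
---snip---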
Hi all,
We have a FIPS-enabled cluster running ceph-12.2.12; after
upgrading to Mimic 13.2.6 it can't serve any requests, and we are not able
to get/put objects or buckets.
Does Mimic support FIPS?
Thanks,
Amit G
Hi all,
After an RGW upgrade from 12.2.7 to 12.2.12 for RGW multisite a few days
ago, the "sync status" has constantly shown a few "recovering shards", i.e.:
-
# radosgw-admin sync status
realm 8f7fd3fd-f72d-411d-b06b-7b4b579f5f2f (prod)
zonegroup 60a2cb75-6978-46a3-b830-061c8b
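A couple of commands that may help narrow down which shards are stuck
(a sketch; the zone name and shard id are placeholders):
# radosgw-admin sync error list
# radosgw-admin data sync status --source-zone=<zone> --shard-id=31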
The endpoint is not the RGW endpoint; it is the server to which you
want to send the bucket notifications.
E.g. if you have a rabbitmq server running at address 1.2.3.4, you should use:
push-endpoint=amqp://1.2.3.4
Note that in such a case the amqp-exchange parameter must be set as well.
assu
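To come back to the original question: the CreateTopic request itself is sent
to the RGW, while the push-endpoint inside it points at the broker. A minimal
sketch assuming the SNS-compatible interface (addresses, topic and exchange
names are examples):
---snip---
aws --endpoint-url http://rgw.example.com:8000 sns create-topic \
    --name mytopic \
    --attributes '{"push-endpoint": "amqp://1.2.3.4", "amqp-exchange": "ex1", "amqp-ack-level": "broker"}'
---snip---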
I'm in the process of testing the iscsi target feature of ceph. The cluster
is running ceph 14.2.4 and ceph-iscsi 3.3. It consists of 5 hosts with 12
SSD OSDs per host. Some basic testing moving VMs to a ceph backed datastore
is only showing 60MB/s transfers. However moving these back off the
datas
Are you using Erasure Coding or replication? What is your crush rule?
What SSDs and CPUs? Does each OSD use 100% of a core or more when
writing?
On Thu, Oct 24, 2019 at 1:22 PM Ryan wrote:
>
> I'm in the process of testing the iscsi target feature of ceph. The cluster
> is running ceph 14.2.4 an
I was told by someone at Red Hat that iSCSI performance is still several
orders of magnitude behind using the client/driver directly.
Thanks,
-Drew
-Original Message-
From: Nathan Fish
Sent: Thursday, October 24, 2019 1:27 PM
To: Ryan
Cc: ceph-users
Subject: [ceph-users] Re: iSCSI write performance
Hi Eugen,
thanks for that comment. I did save the command line I used to create the EC
profile. To force an update, I would just re-execute the same line with the
device class set to SSD this time.
I would also expect that the pool only continues using k, m and algorithmic
settings, which must
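If the goal is to move the pool onto SSDs without touching k/m, one hedged
alternative is to leave the existing profile alone and instead create a new
profile plus a new crush rule from it, then point the pool at that rule
(names are examples):
---snip---
ceph osd erasure-code-profile set ec-k2m4-ssd k=2 m=4 \
    crush-failure-domain=host crush-device-class=ssd
ceph osd crush rule create-erasure ec-k2m4-ssd-rule ec-k2m4-ssd
ceph osd pool set <pool> crush_rule ec-k2m4-ssd-rule
---snip---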
Hello,
we did some local testing a few days ago on a new installation of a small
cluster.
Our iSCSI implementation showed a performance drop of 20-30%
compared to krbd.
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinV
Hi,
I am running a nice ceph (proxmox 4 / debian-8 / ceph 0.94.3) cluster on
3 nodes (supermicro X8DTT-HIBQF), 2 OSD each (2TB SATA harddisks),
interconnected via Infiniband 40.
The problem is that the ceph performance is quite bad (approx. 30 MiB/s
reading, 3-4 MiB/s writing), so I thought about plug
Hello,
think about migrating to a much faster and better Ceph version and to
BlueStore to increase performance with the existing hardware.
If you want to go with a PCIe card, the Samsung PM1725b can provide quite
good speeds, but at a much higher cost than the EVO. If you want to check
drives
Dear Hermann,
try your tests again with the volatile write cache disabled ([s/h]dparm -W 0
DEVICE). If your disks have supercapacitors, you should then see spec
performance (possibly starting with iodepth=2 or 4) with your fio test. A good
article is this one:
https://yourcmc.ru/wiki/index.p
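For example (the device name is a placeholder; note that a raw-device fio
write test is destructive, so only run it on a scratch disk):
---snip---
# disable the volatile write cache (sdparm --clear WCE for SAS drives)
hdparm -W 0 /dev/sdX
# single-threaded sync write test; raise --iodepth to 2 or 4 to see scaling
fio --name=writetest --filename=/dev/sdX --ioengine=libaio --direct=1 \
    --fsync=1 --rw=randwrite --bs=4k --iodepth=1 --runtime=60 --time_based
---snip---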
It's easy:
https://yourcmc.ru/wiki/Ceph_performance
Hi,
I am running a nice ceph (proxmox 4 / debian-8 / ceph 0.94.3) cluster on
3 nodes (supermicro X8DTT-HIBQF), 2 OSD each (2TB SATA harddisks),
interconnected via Infiniband 40.
Problem is that the ceph performance is quite bad (approx. 30MiB
Especially https://yourcmc.ru/wiki/Ceph_performance#CAPACITORS.21 but I
recommend you to read the whole article
--
With best regards,
Vitaliy Filippov
On 10/24/2019 12:22 PM, Ryan wrote:
> I'm in the process of testing the iscsi target feature of ceph. The
> cluster is running ceph 14.2.4 and ceph-iscsi 3.3. It consists of 5
What kernel are you using?
> hosts with 12 SSD OSDs per host. Some basic testing moving VMs to a ceph
> backed datastore
Dear Cephers,
I have a question concerning static websites with RGW.
To my understanding, it is best to run >=1 RGW client for "classic" S3 and in
addition operate >=1 RGW client for website serving
(potentially with HAProxy or its friends in front) to prevent mix-ups of
requests via the differe
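For the website-serving instance, a sketch of the usual knobs (the instance
name, hostname and use of the centralized config are assumptions; the same
options can also be set in ceph.conf):
---snip---
ceph config set client.rgw.website rgw_enable_static_website true
ceph config set client.rgw.website rgw_enable_apis s3website
ceph config set client.rgw.website rgw_dns_s3website_name website.example.com
---snip---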
They are Samsung 860 EVO 2TB SSDs. The Dell R740xd servers have dual Intel
Gold 6130 CPUs and dual SAS controllers with 6 SSDs each. Top shows around
20-25% of a core being used by each OSD daemon. I am using erasure coding
with crush-failure-domain=host k=3 m=2.
On Thu, Oct 24, 2019 at 1:37 PM Dr
I'm using CentOS 7.7.1908 with kernel 3.10.0-1062.1.2.el7.x86_64. The
workload was a VMware Storage Motion from a local SSD backed datastore to
the ceph backed datastore. Performance was measured using dstat on the
iSCSI gateway for network traffic, and with ceph status, as this cluster is
basically idle.
On 10/24/19 11:00 PM, Frank R wrote:
After an RGW upgrade from 12.2.7 to 12.2.12 for RGW multisite a few
days ago the "sync status" has constantly shown a few "recovering
shards", ie:
-
# radosgw-admin sync status
realm 8f7fd3fd-f72d-411d-b06b-7b4b579f5f2f (prod)
zonegro
On 10/24/19 6:54 PM, Thomas Schneider wrote:
this is understood.
I needed to start reweighting specific OSD because rebalancing was not
working and I got a warning in Ceph that some OSDs are running out of space.
Still, your main issue is that your buckets are uneven, 350TB vs
79TB, more t
Hi Nathan,
Thu, 24 Oct 2019 10:59:55 -0400
Nathan Fish ==> Lars Täuber :
> Ah, I see! The BIAS reflects the number of placement groups it should
> create. Since cephfs metadata pools are usually very small, but have
> many objects and high IO, the autoscaler gives them 4x the number of
> placeme