Hi guys,
I am testing Ceph on bcache devices, and I found the performance is not as
good as expected. Does anyone have any best practices for it? Thanks.
On 2021/3/1 6:32 PM, Matthias Ferdinand wrote:
> On Mon, Mar 01, 2021 at 12:37:38PM +0800, Norman.Kern wrote:
>> Hi, guys
>>
>> I am testing ceph on bcache devices, I found the performance is not
>> good as expected. Does anyone have any best practices for it? Thanks.
On 2021/3/2 5:09 AM, Andreas John wrote:
> Hallo,
>
> do you expect that to be better (faster), than having the OSD's Journal
> on a different disk (ssd, nvme) ?
No, I created the OSD storage devices using bcache devices.
>
>
> rgds,
>
> derjohn
>
>
On 2021/3/2 4:49 PM, James Page wrote:
> Hi Norman
>
> On Mon, Mar 1, 2021 at 4:38 AM Norman.Kern wrote:
>
>> Hi, guys
>>
>> I am testing ceph on bcache devices, I found the performance is not good
>> as expected. Does anyone have any best practices for it? Thanks.
James,
Can you tell me the hardware configuration of your bcache setup? I use a 400G
SATA SSD as the cache device and a 10T HDD as the backing device. Could this
be hardware-related?
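In case it helps others compare, this is roughly how I would check and tune
the bcache caching behaviour through sysfs (the bcache0 device name is an
assumption; adjust it to your own layout):

    # Show the current caching mode of the bcache device
    cat /sys/block/bcache0/bcache/cache_mode

    # Writeback caching usually helps small random writes (e.g. BlueStore metadata)
    echo writeback > /sys/block/bcache0/bcache/cache_mode

    # bcache bypasses the cache for large sequential I/O by default;
    # setting the cutoff to 0 sends everything through the SSD
    echo 0 > /sys/block/bcache0/bcache/sequential_cutoff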
I met the same problem; I set reweight values to keep it from getting worse.
Did you solve it by enabling the balancer? My Ceph version is 14.2.5.
Thank you,
Norman
On 2021/3/7 12:40 PM, Anthony D'Atri wrote:
> ceph balancer status
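For anyone following along, a minimal sketch of checking and enabling the
upmap balancer (module and mode names as in the upstream docs; upmap requires
all clients to be Luminous or newer):

    ceph balancer status
    ceph osd set-require-min-compat-client luminous
    ceph balancer mode upmap
    ceph balancer on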
Hi guys,
I have used Ceph RBD with OpenStack for some time, and I met a problem while
destroying a VM: OpenStack tried to delete the RBD image but failed. I tested
deleting an image with the rbd command, and it takes a lot of time (image size
512G or more).
Has anyone met the same problem?
Thanks,
Norman
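One thing worth checking (this is only a guess on my side, not a confirmed
diagnosis) is whether the image has the object-map feature; without it,
rbd rm has to probe every backing object of the image:

    rbd info <pool>/<image>                 # look for object-map under "features"
    rbd feature enable <pool>/<image> object-map fast-diff
    rbd object-map rebuild <pool>/<image>
    rbd rm <pool>/<image>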
On 2021/3/10 3:05 PM, Konstantin Shalygin wrote:
>> On 10 Mar 2021, at 09:50, Norman.Kern wrote:
>>
>> I have used Ceph rbd for Openstack for sometime, I met a problem while
>> destroying a VM. The Openstack tried to
>>
>> delete rbd image but failed. I
Hi,
Does anyone know how to find out which client holds the lock on a file in
CephFS? I ran into a deadlock where a client hangs waiting to acquire the
lock, but I don't know which client currently holds it.
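This is where I would start looking, assuming a reasonably recent MDS (exact
command availability may differ by release; mds.<name>, <rank> and
<inode_number> are placeholders):

    # Requests the MDS is still working on, including the issuing client
    ceph daemon mds.<name> dump_ops_in_flight
    ceph daemon mds.<name> dump_blocked_ops

    # Dump the inode to see which client sessions hold caps on it
    ceph daemon mds.<name> dump inode <inode_number>

    # Map session/client IDs back to hostnames and mounts
    ceph tell mds.<rank> session ls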
Hi David,
The web page is missing:
https://docs.ceph.com/en/latest/docs/master/install/get-packages/
(The page only shows the documentation 404 placeholder: "SORRY / This page
does not exist yet.")
Hi guys,
I have been on a long holiday since this summer. I came back to set up a new
Ceph server, and I want to know which stable version of Ceph you're using in
production?
Hi guys,
I have some SATA SSDs (400G) and HDDs (8T). How many HDDs (data) should share
one SSD (DB and WAL) for the best results?
And if the SSD breaks down, will it take down all the OSDs that share it?
Waiting for your replies.
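As an illustration of one layout, this is roughly how four HDDs sharing one
DB/WAL SSD could be deployed with ceph-volume, which carves the SSD into one
DB LV per OSD (device paths are placeholders):

    # Dry run: show how ceph-volume would split the devices
    ceph-volume lvm batch /dev/sdb /dev/sdc /dev/sdd /dev/sde \
        --db-devices /dev/sdf --report

    # Same command without --report actually creates the OSDs
    ceph-volume lvm batch /dev/sdb /dev/sdc /dev/sdd /dev/sde \
        --db-devices /dev/sdf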
Hi Anthony,
Thanks for your reply. If the SSD goes down, do I have to rebuild the 3-4 OSDs
that used it and rebalance the data?
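My understanding is yes: every OSD whose DB/WAL lived on that SSD has to be
redeployed. A rough sketch of the usual sequence (OSD IDs and device paths are
placeholders):

    ceph osd out <id>                          # let data migrate off the OSD
    # wait for recovery/backfill to finish, then:
    ceph osd purge <id> --yes-i-really-mean-it
    ceph-volume lvm zap --destroy /dev/sdX     # wipe the old data disk
    # recreate the OSD(s) with ceph-volume once a replacement SSD is in place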
On 2021/11/20 2:27 PM, Anthony D'Atri wrote:
On Nov 19, 2021, at 10:25 PM, norman.kern wrote:
Hi guys,
Nowadays, I have some SATA SSDs(400G) and HDDs(8T),How many HHDs
I created an RBD pool using only two SATA SSDs (one for data, the other for
the DB/WAL), and set the replica size to 1.
After that, I ran a fio test on the same host where the OSD is placed. I found
the latency is hundreds of microseconds (about sixty microseconds for the raw
SATA SSD).
The fio output:
m-seqw
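For reference, this is the kind of fio job I would use to measure single-depth
write latency directly against an RBD image (pool, image and client names are
placeholders):

    fio --name=rbd-latency --ioengine=rbd \
        --clientname=admin --pool=testpool --rbdname=testimg \
        --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
        --runtime=60 --time_based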
Marc,
Thanks for your reply. The wiki page is very helpful to me. I have analyzed
the I/O flow and intend to optimize the librbd client.
I also found that RBD supports a persistent write-back cache
(https://docs.ceph.com/en/pacific/rbd/rbd-persistent-write-back-cache/),
and I will give it a try.
P.S. Anyone
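From that documentation page, enabling the SSD-mode write-back cache looks
roughly like the following client-side settings (the cache path and size here
are my own placeholders; check the linked page for your release):

    ceph config set client rbd_plugins pwl_cache
    ceph config set client rbd_persistent_cache_mode ssd
    ceph config set client rbd_persistent_cache_path /mnt/pwl-cache
    ceph config set client rbd_persistent_cache_size 10G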
Mark,
Thanks for your reply. I ran the test on the local host with no replica PGs.
Crimson may help me a lot, and I will do more tests.
I will also try the RBD persistent cache feature, since the client is
sensitive to latency.
P.S. Can Crimson be used in production now, or not?
On 12/
Joshua,
Quincy should be released in March 2022. You can find the release cycle and
standards at https://docs.ceph.com/en/latest/releases/general/
Norman
Best regards
On 12/22/21 9:37 PM, Joshua West wrote:
Where do I find information on the release timeline for quincy?
I learned a lesson some
Chad,
As the documentation notes, min_size is the minimum number of replicas
required to serve a request, so a PG stops serving I/O once the number of
available replicas falls below min_size.
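For reference, min_size can be checked and adjusted per pool like this (the
pool name is a placeholder):

    ceph osd pool get <pool> size
    ceph osd pool get <pool> min_size
    ceph osd pool set <pool> min_size 2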
Norman
Best regards
On 12/17/21 10:59 PM, Chad William Seys wrote:
ill open an issue to h
Dear Ceph folks,
Is anyone using cephadm in production (version: Pacific)? I found several bugs
in it, and I have real doubts about it.
Ray,
Can you provide more information about your cluster (hardware and software
configs)?
On 3/10/22 7:40 AM, Ray Cunningham wrote:
make any difference. Do
From: Ray Cunningham
Sent: Thursday, March 10, 2022 7:59 AM
To: norman.kern
Cc: ceph-users@ceph.io
Subject: RE: [ceph-users] Scrubbing
We have 16 storage servers, each with 16TB HDDs and 2TB SSDs for DB/WAL, so we
are using BlueStore. The system is running Nautilus 14.2.19 at the moment, with
an upgrade scheduled
r specific
questions.
Off the top of my head, we have set:
osd_max_scrubs 20
osd_scrub_auto_repair true
osd_scrub_load_threshold 0.6
We do not limit scrub hours.
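For anyone wanting to apply similar values through the centralized config
store (the numbers are simply the ones above, not a recommendation), a quick
sketch:

    ceph config set osd osd_max_scrubs 20
    ceph config set osd osd_scrub_auto_repair true
    ceph config set osd osd_scrub_load_threshold 0.6
    ceph config dump | grep scrub      # verify what is actually in effect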
Thank you,
Ray
-----Original Message-----
From: norman.kern
Sent: Wednesday, March 9, 2022 7:28 PM
To: Ray Cunningham
Cc:
Hi guys,
I want to put a CDN service in front of my RGWs and provide URLs without
authentication.
I have tested OpenResty, but I'm not sure it is suitable for production. Which
tool do you use in production?
Thanks.
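Whatever ends up in front, the bucket itself needs anonymous read access for
unauthenticated URLs to work; a sketch with s3cmd or awscli pointed at the RGW
endpoint (bucket name and endpoint are placeholders):

    s3cmd setacl s3://public-bucket --acl-public --recursive

    aws --endpoint-url http://rgw.example.com:8080 s3api put-bucket-acl \
        --bucket public-bucket --acl public-read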
I used Rook for a year. It's really easy to manage a Ceph cluster with it, but
I stopped using it, because a Ceph cluster is complicated enough and I don't
want to make it more complicated with Kubernetes.
If you go that route, you have to know Ceph + k8s + Rook, and each layer can
cause problems.
On 7/4/2