[ceph-users] Best practices for OSD on bcache

2021-02-28 Thread Norman.Kern
Hi guys, I am testing Ceph on bcache devices, and I found the performance is not as good as expected. Does anyone have any best practices for it? Thanks.

[ceph-users] Re: Best practices for OSD on bcache

2021-03-02 Thread Norman.Kern
On 2021/3/1 6:32 PM, Matthias Ferdinand wrote: > On Mon, Mar 01, 2021 at 12:37:38PM +0800, Norman.Kern wrote: >> Hi guys, I am testing Ceph on bcache devices, and I found the performance is not as good as expected. Does anyone have any best practices for it? Thanks.

[ceph-users] Re: Best practices for OSD on bcache

2021-03-02 Thread Norman.Kern
On 2021/3/2 5:09 AM, Andreas John wrote: > Hello, do you expect that to be better (faster) than having the OSD's journal on a different disk (SSD, NVMe)? No, I created the OSD storage devices on top of bcache devices. > rgds, derjohn
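For reference, a minimal sketch of how an OSD can be created directly on top of an existing bcache device with ceph-volume (the device name /dev/bcache0 is illustrative):

    # Create a BlueStore OSD whose data device is the bcache block device
    ceph-volume lvm create --bluestore --data /dev/bcache0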

[ceph-users] Re: Best practices for OSD on bcache

2021-03-02 Thread Norman.Kern
On 2021/3/2 4:49 PM, James Page wrote: > Hi Norman > > On Mon, Mar 1, 2021 at 4:38 AM Norman.Kern wrote: >> Hi guys, I am testing Ceph on bcache devices, and I found the performance is not as good as expected. Does anyone have any best practices for it? Thanks.

[ceph-users] Re: Best practices for OSD on bcache

2021-03-02 Thread Norman.Kern
James, can you tell me the hardware config of your bcache? I use a 400G SATA SSD as the cache device and a 10T HDD as the backing device. Is it hardware-related? On 2021/3/2 4:49 PM, James Page wrote: > Hi Norman > > On Mon, Mar 1, 2021 at 4:38 AM Norman.Kern wrote:
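For comparison, a typical bcache layout of this kind is assembled roughly as follows (a sketch with illustrative device names; writeback mode is what usually helps OSD write latency, at the cost of making the HDD depend on the SSD):

    # Format the 10T HDD as the backing device and the 400G SSD as the cache device
    make-bcache -B /dev/sdb
    make-bcache -C /dev/sdc
    # Attach the cache set to the backing device (UUID printed by make-bcache -C)
    echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
    # Switch from the default writethrough to writeback caching
    echo writeback > /sys/block/bcache0/bcache/cache_mode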

[ceph-users] Re: balance OSD usage.

2021-03-07 Thread Norman.Kern
I met the same problem; I set the reweight value to keep it from getting worse. Did you solve it by enabling the balancer? My Ceph version is 14.2.5. Thank you, Norman. On 2021/3/7 12:40 PM, Anthony D'Atri wrote: > ceph balancer status
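For anyone hitting the same imbalance: on Luminous and later, the built-in balancer in upmap mode usually works better than manual reweights. A sketch (requires that all clients speak at least the Luminous protocol):

    # Allow upmap entries, then enable the balancer
    ceph osd set-require-min-compat-client luminous
    ceph balancer mode upmap
    ceph balancer on
    ceph balancer status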

[ceph-users] OpenStack RBD image deletion problem

2021-03-09 Thread Norman.Kern
Hi guys, I have used Ceph RBD with OpenStack for some time, and I hit a problem while destroying a VM: OpenStack tried to delete the RBD image but failed. I tested deleting an image with the rbd command, and it takes a lot of time (image size 512G or more). Has anyone met the same problem? Thanks, Norman
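Slow deletes on large images are often caused by rbd rm having to probe every backing object; the object-map feature lets it skip objects that were never written. A hedged sketch (pool and image names are illustrative):

    # object-map requires exclusive-lock; enable both, then rebuild the map
    rbd feature enable mypool/myimage exclusive-lock object-map
    rbd object-map rebuild mypool/myimage
    # Deletion now only touches objects that actually exist
    rbd rm mypool/myimage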

[ceph-users] Re: OpenStack RBD image deletion problem

2021-03-10 Thread Norman.Kern
On 2021/3/10 3:05 PM, Konstantin Shalygin wrote: >> On 10 Mar 2021, at 09:50, Norman.Kern wrote: >> I have used Ceph RBD with OpenStack for some time, and I hit a problem while destroying a VM: OpenStack tried to delete the RBD image but failed. I

[ceph-users] How to know which client holds the lock on a file

2021-03-22 Thread Norman.Kern
Hi, does anyone know how to find out which client holds the lock on a file in CephFS? I hit a deadlock problem where a client hangs waiting to get the lock, but I don't know which client holds it.
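A possible starting point, sketched from the MDS admin interfaces (mds.a is an illustrative daemon name; exact commands may vary by release):

    # List client sessions to map client IDs to hosts and mounts
    ceph tell mds.a session ls
    # Show requests that have been blocked for a long time
    ceph daemon mds.a dump_blocked_ops
    # Dump the MDS cache to a file to inspect caps and lock state offline
    ceph daemon mds.a dump cache /tmp/mds-cache.txt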

[ceph-users] Re: v16.2.2 Pacific released

2021-05-07 Thread Norman.Kern
Hi David, this web page is missing: https://docs.ceph.com/en/latest/docs/master/install/get-packages/ returns the docs.ceph.com "SORRY / This page does not exist yet." 404 page.

[ceph-users] Which version of Ceph is better

2021-10-18 Thread norman.kern
Hi guys, I have been on a long holiday since this summer. I came back to set up a new Ceph server, and I want to know which stable version of Ceph you're using in production.

[ceph-users] How many data disks should share one metadata disk

2021-11-19 Thread norman.kern
Hi guys, I have some SATA SSDs (400G) and HDDs (8T). How many HDDs (data) should share one SSD (DB/WAL)? And if the SSD breaks down, will it take down all the OSDs that share it? Waiting for your replies.
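A common rule of thumb is 4-5 HDDs per SATA SSD, bounded by the SSD's endurance and the BlueStore DB sizing guidance. As a sketch, ceph-volume can slice the shared DB device automatically (device names illustrative):

    # Create 4 BlueStore OSDs; DB/WAL volumes for all of them are carved from one SSD
    ceph-volume lvm batch --bluestore /dev/sdb /dev/sdc /dev/sdd /dev/sde --db-devices /dev/sdf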

[ceph-users] Re: How many data disks should share one metadata disk

2021-11-19 Thread norman.kern
Hi Anthony, thanks for your reply. If the SSD goes down, do I have to rebuild the 3-4 OSDs and rebalance the data? On 2021/11/20 2:27 PM, Anthony D'Atri wrote: On Nov 19, 2021, at 10:25 PM, norman.kern wrote: Hi guys, I have some SATA SSDs (400G) and HDDs (8T). How many HDDs
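Yes: when a shared DB/WAL SSD dies, every OSD backed by it is lost and has to be redeployed, after which backfill restores the data from the surviving replicas. A sketch of the usual flow for each affected OSD (IDs and devices illustrative):

    ceph osd out osd.12
    systemctl stop ceph-osd@12
    ceph osd purge osd.12 --yes-i-really-mean-it
    # Wipe the old data disk, then recreate the OSD against the replacement SSD
    ceph-volume lvm zap /dev/sdb --destroy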

[ceph-users] Large latency for single thread

2021-12-15 Thread norman.kern
I created an RBD pool using only two SATA SSDs (one for data, the other for DB/WAL) and set the replica size to 1. After that, I ran a fio test on the same host where the OSD is placed. I found the latency is hundreds of microseconds (versus sixty microseconds for the raw SATA SSD). The fio output: m-seqw
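For reference, a single-thread latency test of this shape could be reproduced with fio's rbd engine (assuming fio was built with rbd support; pool, image, and client names are illustrative):

    fio --name=rbd-lat --ioengine=rbd --clientname=admin --pool=rbd --rbdname=test \
        --rw=write --bs=4k --iodepth=1 --numjobs=1 --direct=1 --runtime=60 --time_based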

[ceph-users] Re: Large latency for single thread

2021-12-21 Thread norman.kern
Marc, thanks for your reply. The wiki page is very helpful to me. I have analyzed the I/O flow and intend to optimize the librbd client. I also found that RBD supports a persistent write-back cache (https://docs.ceph.com/en/pacific/rbd/rbd-persistent-write-back-cache/), and I will give it a try. P.S. Anyone
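Per that documentation page, the write-back cache is enabled through a client-side plugin. A hedged sketch of the relevant ceph.conf client section (cache path and size are illustrative):

    [client]
    rbd_plugins = pwl_cache
    rbd_persistent_cache_mode = ssd
    rbd_persistent_cache_path = /mnt/nvme/rbd-pwl
    rbd_persistent_cache_size = 10G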

[ceph-users] Re: Large latency for single thread

2021-12-21 Thread norman.kern
Mark, thanks for your reply. I ran the test on the local host with no replication (size 1). Crimson may help me a lot, and I will do more tests. I will also try the RBD persistent cache feature, since the client is sensitive to latency. P.S. Can crimson be used in production now or not? On 12/

[ceph-users] Re: Where do I find information on the release timeline for quincy?

2021-12-22 Thread norman.kern
Joshua, Quincy should be released in March 2022. You can find the release cycle and standards at https://docs.ceph.com/en/latest/releases/general/ Best regards, Norman. On 12/22/21 9:37 PM, Joshua West wrote: Where do I find information on the release timeline for Quincy? I learned a lesson some

[ceph-users] Re: min_size ambiguity

2021-12-22 Thread norman.kern
Chad, as the documentation notes, min_size means "Minimum number of replicas to serve the request", so a PG stops serving I/O when its number of active replicas drops below min_size. Best regards, Norman. On 12/17/21 10:59 PM, Chad William Seys wrote: ill open an issue to h
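For reference, the threshold can be checked and adjusted per pool (pool name illustrative):

    ceph osd pool get mypool size
    ceph osd pool get mypool min_size
    # With size=3, min_size=2 keeps I/O running after a single replica failure
    ceph osd pool set mypool min_size 2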

[ceph-users] Is cephadm stable in production?

2022-03-07 Thread norman.kern
Dear Ceph folks, is anyone using cephadm in production (version: Pacific)? I found several bugs in it and I really have doubts about it.

[ceph-users] Re: Is cephadm stable in production?

2022-03-09 Thread norman.kern
On Tue, 8 Mar 2022 at 05:18, norman.kern wrote: Dear Ceph folks, is anyone using cephadm in production (version: Pacific)? I found several bugs in it and I really have doubts about it.

[ceph-users] Re: Scrubbing

2022-03-09 Thread norman.kern
Ray, can you provide more information about your cluster (hardware and software configs)? On 3/10/22 7:40 AM, Ray Cunningham wrote: make any difference. Do

[ceph-users] Re: Scrubbing

2022-03-10 Thread norman.kern
From: Ray Cunningham. Sent: Thursday, March 10, 2022 7:59 AM. To: norman.kern. Cc: ceph-users@ceph.io. Subject: RE: [ceph-users] Scrubbing. We have 16 storage servers, each with 16TB HDDs and 2TB SSDs for DB/WAL, so we are using BlueStore. The system is running Nautilus 14.2.19 at the moment, with an upgrade scheduled

[ceph-users] Re: Scrubbing

2022-03-10 Thread norman.kern
r specific questions. Off the top of my head, we have set: osd_max_scrubs 20, osd_scrub_auto_repair true, osd_scrub_load_threshold 0.6. We do not limit scrub hours. Thank you, Ray. -----Original Message----- From: norman.kern Sent: Wednesday, March 9, 2022 7:28 PM To: Ray Cunningham Cc:
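For anyone tuning the same knobs: these are ordinary OSD options and can be applied cluster-wide at runtime through the central config store (a sketch; note that osd_max_scrubs 20 is far above the default of 1 and can hurt client latency):

    ceph config set osd osd_max_scrubs 2
    ceph config set osd osd_scrub_auto_repair true
    ceph config set osd osd_scrub_load_threshold 0.6
    # Optionally confine scrubbing to off-peak hours
    ceph config set osd osd_scrub_begin_hour 22
    ceph config set osd osd_scrub_end_hour 6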

[ceph-users] Which CDN tool for RGW in production

2022-04-18 Thread norman.kern
Hi guys, I want to put a CDN service in front of my RGWs and provide URLs without authentication. I have tested OpenResty, but I'm not sure it is suitable for production. Which tool do you use in production? Thanks.
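Independent of the CDN choice, anonymous reads can be granted on the bucket itself, so the CDN needs no signing logic. A hedged sketch using a standard S3 bucket policy applied with s3cmd (bucket name illustrative):

    # Allow anonymous GET on every object in the bucket, then apply via the S3 API
    cat > policy.json <<'EOF'
    { "Version": "2012-10-17",
      "Statement": [ { "Effect": "Allow", "Principal": "*",
                       "Action": "s3:GetObject",
                       "Resource": "arn:aws:s3:::mybucket/*" } ] }
    EOF
    s3cmd setpolicy policy.json s3://mybucket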

[ceph-users] Re: Is Ceph with rook ready for production?

2022-07-04 Thread norman.kern
I used Rook for a year. It's really easy to manage the Ceph cluster with it, but I stopped using it, because a Ceph cluster is complicated enough and I didn't want to make it more complicated with k8s. If you want to use Ceph this way, you have to know Ceph + k8s + Rook, and each module can cause problems. On 7/4/2