[ceph-users] Re: HDD cache

2023-11-09 Thread quag...@bol.com.br

[ceph-users] Performance improvement suggestion

2024-01-31 Thread quag...@bol.com.br

[ceph-users] Re: Performance improvement suggestion

2024-01-31 Thread quag...@bol.com.br
Hello everybody, I would like to make a suggestion for improving performance in the Ceph architecture. I don't know if this group is the best place for it or if my proposal is correct. My suggestion concerns https://docs.ceph.com/en/latest/architecture/, at the end of the top

[ceph-users] Re: Performance improvement suggestion

2024-02-01 Thread quag...@bol.com.br

[ceph-users] Re: Performance improvement suggestion

2024-02-01 Thread quag...@bol.com.br

[ceph-users] Re: Performance improvement suggestion

2024-02-01 Thread quag...@bol.com.br
with NVMe. However, I don't think it's worthwhile to lose the functionality of the replicas. I'm just suggesting another way to increase performance without losing replica functionality. Rafael.   From: "Anthony D'Atri" Sent: 2024/01/31 17:04:08 To: qu

[ceph-users] Re: Performance improvement suggestion

2024-02-01 Thread quag...@bol.com.br
  De: "Janne Johansson" Enviada: 2024/02/01 04:08:05 Para: anthony.da...@gmail.com Cc: acozy...@gmail.com, quag...@bol.com.br, ceph-users@ceph.io Assunto: Re: [ceph-users] Re: Performance improvement suggestion   > I’ve heard conflicting asserts on whether the write returns wi

[ceph-users] Re: Performance improvement suggestion

2024-02-01 Thread quag...@bol.com.br

[ceph-users] Re: Performance improvement suggestion

2024-02-01 Thread quag...@bol.com.br
It's just a suggestion. If this type of functionality is not interesting, that's ok. Rafael.   From: "Anthony D'Atri" Sent: 2024/02/01 12:10:30 To: quag...@bol.com.br Cc: ceph-users@ceph.io Subject: [ceph-users] Re: Performance improvement suggestion   > I didn't

[ceph-users] Re: Performance improvement suggestion

2024-02-20 Thread quag...@bol.com.br

[ceph-users] Re: Reef (18.2): Some PG not scrubbed/deep scrubbed for 1 month

2024-03-20 Thread quag...@bol.com.br
Hi, I upgraded a cluster here 2 weeks ago. The situation is the same as Michel's: a lot of PGs have not been scrubbed/deep-scrubbed. Rafael.

[ceph-users] node-exporter error

2024-03-20 Thread quag...@bol.com.br
Hello, After some time, I'm adding some more disks on a new machine in the Ceph cluster. However, there is a container that is not coming up: the "node-exporter". Below is an excerpt from the log that reports the error: Mar 20 15:51:08 adafn02 ceph-da43a27a-eee8-11eb-9c87-525

[ceph-users] Multi-MDS

2024-04-02 Thread quag...@bol.com.br
Hello, I did the configuration to activate multi-MDS in Ceph. The parameters I entered were: 3 active, 1 standby. I also applied the distributed pinning configuration at the root of the mounted storage directory: setfattr -n ceph.dir.pin.distributed -v 1 / This configura
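For context, a minimal sketch of that kind of multi-MDS setup, assuming a filesystem named "cephfs" mounted at /mnt/cephfs (both names are illustrative, not taken from the message):

  ceph fs set cephfs max_mds 3                            # three active MDS daemons
  ceph fs set cephfs standby_count_wanted 1               # keep one standby available
  setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs   # hash the root's immediate subdirectories across the active ranks

Distributed ephemeral pinning on the root spreads its child directories over the active ranks instead of leaving everything on rank 0.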

[ceph-users] Re: Multi-MDS

2024-04-03 Thread quag...@bol.com.br

[ceph-users] CephFS performance

2022-10-20 Thread quag...@bol.com.br
Hello everyone, I have some considerations and doubts to raise... I work at an HPC center and my doubts stem from performance in this environment. All clusters here were suffering from NFS performance problems and also from the single point of failure it has. We were suffering from the performanc

[ceph-users] Re: CephFS performance

2022-11-23 Thread quag...@bol.com.br
already sent that the cluster is configured with size=2 and min_size=1 for the data and metadata pools. If I have any more information to contribute, please let me know! Thank you, Rafael   From: "David C" Sent: 2022/11/22 12:27:24 To: quag...@bol.com.br Cc: ceph-users@cep

[ceph-users] Re: Question about speeding hdd based cluster

2024-10-02 Thread quag...@bol.com.br
Hi Kyriazis, I work with a cluster similar to yours: 142 HDDs and 18 SSDs. I had a lot of performance gains when I made the following settings: 1) For the pool that is configured on the HDDs (here, home directories are on HDDs), reduce the following replica settings (I don't know wha
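The exact values are cut off in the preview above; purely as an illustration, per-pool replica settings of this kind are adjusted with commands like the following (the pool name and values are hypothetical, and size=2/min_size=1 trades durability for speed, as Joachim Kraftmayer's comment on replication size 2 further down points out):

  ceph osd pool set cephfs_data_hdd size 2       # number of replicas kept (hypothetical pool name)
  ceph osd pool set cephfs_data_hdd min_size 1   # replicas required for I/O to continue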

[ceph-users] Re: Question about speeding hdd based cluster

2024-10-02 Thread quag...@bol.com.br

[ceph-users] Re: Question about speeding hdd based cluster

2024-11-19 Thread quag...@bol.com.br

[ceph-users] Re: Question about speeding hdd based cluster

2024-11-19 Thread quag...@bol.com.br
I need. Rafael.   From: "Joachim Kraftmayer" Sent: 2024/11/16 06:59:39 To: christopher.col...@dotdashmdp.com Cc: quag...@bol.com.br, ceph-users@ceph.io Subject: [ceph-users] Re: Question about speeding hdd based cluster   Short comment on replication size 2: it is not the question if you

[ceph-users] Re: Ceph squid fresh install

2025-04-08 Thread quag...@bol.com.br
These 2 IPs are from the storage servers. There are no user processes running on them; they only have the operating system and Ceph installed. Rafael. From: "Eugen Block" Sent: 2025/04/08 09:35:35 To: quag...@bol.com.br Cc: ceph-users@ceph.io Subject: Re: [ceph-users] Ceph s

[ceph-users] Re: Ceph squid fresh install

2025-04-08 Thread quag...@bol.com.br

[ceph-users] Re: Ceph squid fresh install

2025-04-08 Thread quag...@bol.com.br

[ceph-users] Re: Ceph squid fresh install

2025-04-08 Thread quag...@bol.com.br
What is a “storage server”?   These are machines that only have the operating system and Ceph installed.   From: "Anthony D'Atri" Sent: 2025/04/08 10:19:08 To: quag...@bol.com.br Cc: ebl...@nde.ag, ceph-users@ceph.io Subject: Re: [ceph-users] Ceph squid fresh insta

[ceph-users] Re: Ceph squid fresh install

2025-04-08 Thread quag...@bol.com.br

[ceph-users] Re: Ceph squid fresh install

2025-04-08 Thread quag...@bol.com.br

[ceph-users] Re: Ceph squid fresh install

2025-04-09 Thread quag...@bol.com.br

[ceph-users] Re: Ceph squid fresh install

2025-04-10 Thread quag...@bol.com.br
bootstrap --mon-ip 172.27.254.6 --cluster-network 172.28.254.0/24 --log-to-file
cephadm install ceph-common
From: "Anthony D'Atri" Sent: 2025/04/08 10:35:22 To: quag...@bol.com.br Cc: ebl...@nde.ag, ceph-users@ceph.io Subject: Re: [ceph-users] Ceph squid fresh install

[ceph-users] Ceph squid fresh install

2025-04-10 Thread quag...@bol.com.br
Hi, I just did a new Ceph installation and would like to enable the "read balancer". However, the documentation requires that the minimum client version be Reef. I checked this through "ceph features" and found that I have 2 luminous clients. # ceph featur
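As a sketch of the check described above (commands only, output omitted; raising the compatibility level is only safe once no pre-Reef clients remain connected):

  ceph features                                   # shows which release each connected client reports
  ceph osd set-require-min-compat-client reef     # prerequisite for enabling the read balancer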

[ceph-users] Re: Ceph squid fresh install

2025-04-10 Thread quag...@bol.com.br

[ceph-users] Re: POSIX question

2025-05-07 Thread quag...@bol.com.br

[ceph-users] Re: Free space

2025-02-27 Thread quag...@bol.com.br

[ceph-users] Re: Free space

2025-02-27 Thread quag...@bol.com.br

[ceph-users] Free space

2025-02-27 Thread quag...@bol.com.br
Hello, I recently installed a new cluster. After the first node was working, I started transferring the files I needed. As I was in a hurry to run rsync, I set size=1 for the CephFS data pool. After a few days, when I managed to add a new node, I set size=2 for that
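For reference, the size change described there is applied per pool; a minimal sketch assuming a data pool named "cephfs_data" (the actual pool name is not given in the message):

  ceph osd pool set cephfs_data size 2      # add a second replica; triggers backfill of the existing data
  ceph osd df                               # raw usage grows while the extra replicas are created

Until that backfill finishes, reported free space can look wrong, which matches the resolution in the follow-up below.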

[ceph-users] Re: Free space

2025-03-06 Thread quag...@bol.com.br
Hi Janne, That was it. When rebalancing finished, the space looked as expected. Thank you.   From: "Janne Johansson" Sent: 2025/02/27 16:52:20 To: quag...@bol.com.br Cc: ceph-users@ceph.io Subject: [ceph-users] Re: Free space   On Thu, 27 Feb 2025 at 1

[ceph-users] Re: Free space

2025-02-27 Thread quag...@bol.com.br

[ceph-users] POSIX question

2025-05-07 Thread quag...@bol.com.br
Hello everyone, I am facing a situation that I have not yet been able to find a solution for. There is a client machine that is running a parallel job that generates an output file over time. The problem is that a file with the same name is being generated. There are two files wit

[ceph-users] Re: Where are you running Ceph?

2025-07-03 Thread quag...@bol.com.br
Hi, We use CephFS in an HPC center. Rafael.     From: "Anthony Fecarotta" Sent: 2025/07/02 13:17:54 To: ceph-users@ceph.io Subject: [ceph-users] Where are you running Ceph?   Hello, I was wondering if there are any statistics on which platforms users/organizations are running Ceph on?