Hi,
We use CephFS in an HPC center.
Rafael.
De: "Anthony Fecarotta"
Enviada: 2025/07/02 13:17:54
Para: ceph-users@ceph.io
Assunto: [ceph-users] Where are you running Ceph?
Hello, I was wondering if there are any statistics on which platforms users/organizations are running Ceph on?
Hello everyone,
I am facing a situation that I have not yet been able to find a solution for.
There is a client machine running a parallel job that generates an output file over time.
The problem is that a file with the same name is being generated. There are two files wit
Hi,
I just did a new Ceph installation and would like to enable the "read balancer".
However, the documentation requires a minimum client version of reef. I checked this through "ceph features" and found 2 luminous clients.
# ceph featur
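For reference, a minimal sketch of the checks this involves, assuming the luminous clients are first upgraded or disconnected (the release names are real knob values; everything else is generic):

    # List connected clients grouped by feature/release; luminous entries block the read balancer
    ceph features

    # Once no pre-reef clients remain, raise the compatibility floor
    ceph osd set-require-min-compat-client reef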
cephadm bootstrap --mon-ip 172.27.254.6 --cluster-network 172.28.254.0/24 --log-to-file
cephadm install ceph-common
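If useful, the usual next step after a bootstrap like the one above is registering the remaining hosts; a sketch, with host2 and its IP as placeholders:

    # Install the cluster's SSH key on the new host, then add it to the orchestrator
    ssh-copy-id -f -i /etc/ceph/ceph.pub root@host2
    ceph orch host add host2 172.27.254.7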
De: "Anthony D'Atri"
Enviada: 2025/04/08 10:35:22
Para: quag...@bol.com.br
Cc: ebl...@nde.ag, ceph-users@ceph.io
Assunto: Re: [ceph-users] Ceph squid fresh install
What is a “storage server”?
These are machines that have only the operating system and Ceph installed.
De: "Anthony D'Atri"
Enviada: 2025/04/08 10:19:08
Para: quag...@bol.com.br
Cc: ebl...@nde.ag, ceph-users@ceph.io
Assunto: Re: [ceph-users] Ceph squid fresh insta
These 2 IPs are from the storage servers.
There are no user processes running on them; they have only the operating system and Ceph installed.
Rafael.
De: "Eugen Block"
Enviada: 2025/04/08 09:35:35
Para: quag...@bol.com.br
Cc: ceph-users@ceph.io
Assunto: Re: [ceph-users] Ceph s
Hi Janne,
That's it.
When the rebalancing finished, the free space looked as expected.
Thank you.
De: "Janne Johansson"
Enviada: 2025/02/27 16:52:20
Para: quag...@bol.com.br
Cc: ceph-users@ceph.io
Assunto: [ceph-users] Re: Free space
On Thu, 27 Feb 2025 at 1
Hello,
I recently installed a new cluster.
After the first node was working, I started transferring the files I needed. As I was in a hurry to run rsync, I set size=1 for the CephFS data pool.
After a few days, when I managed to add a new node, I set size=2 for that
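A sketch of the pool changes described above, assuming the data pool is named cephfs_data (size=1 keeps a single copy with no redundancy, hence the urgency to move back to size=2):

    # Temporary single-copy setting used during the initial rsync
    # (newer releases also require mon_allow_pool_size_one and --yes-i-really-mean-it)
    ceph osd pool set cephfs_data size 1

    # Redundancy restored once the second node was in place
    ceph osd pool set cephfs_data size 2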
I need.
Rafael.
De: "Joachim Kraftmayer"
Enviada: 2024/11/16 06:59:39
Para: christopher.col...@dotdashmdp.com
Cc: quag...@bol.com.br, ceph-users@ceph.io
Assunto: [ceph-users] Re: Question about speeding hdd based cluster
Short comment on Replikation size 2: is not the question if you
Hi Kyriazis,
I work with a cluster similar to yours: 142 HDDs and 18 SSDs.
I saw significant performance gains after making the following changes:
1) For the pool that is configured on the HDDs (here, home directories are on HDDs), reduce the following replica settings (I don't know wha
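The list is cut off, but based on the size=2/min_size=1 configuration mentioned elsewhere on this page, the replica settings in question presumably look something like this (pool name is a placeholder; min_size=1 trades safety for availability):

    ceph osd pool set hdd_home size 2
    ceph osd pool set hdd_home min_size 1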
Hello,
I configured multi-MDS in Ceph. The parameters I entered looked like this:
3 active
1 standby
I also applied distributed pinning at the root of the storage's mounted directory (see the sketch below):
setfattr -n ceph.dir.pin.distributed -v 1 /
This configura
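A sketch of the corresponding commands, assuming the filesystem is named cephfs and mounted at /mnt/cephfs (both placeholders):

    # Three active MDS daemons, one wanted standby
    ceph fs set cephfs max_mds 3
    ceph fs set cephfs standby_count_wanted 1

    # Distributed ephemeral pinning at the root of the mounted filesystem
    setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs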
Hello,
After some time, I'm adding some more disks on a new machine in the Ceph cluster.
However, there is a container that does not come up: the "node-exporter".
Below is an excerpt from the log that reports the error:
Mar 20 15:51:08 adafn02
ceph-da43a27a-eee8-11eb-9c87-525
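The log excerpt is cut off, but typical first steps for a cephadm-managed daemon that fails to start would be along these lines (the daemon name is inferred from the host shown in the log):

    # Check the daemon's state as the orchestrator sees it
    ceph orch ps | grep node-exporter

    # Pull the container logs for the failing daemon (run against its host)
    cephadm logs --name node-exporter.adafn02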
Hi,
I upgraded a cluster 2 weeks ago here. The situation is the same as Michel's:
a lot of PGs not scrubbed/deep-scrubbed.
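For anyone checking the same thing, the backlog can be inspected and, if needed, scrub throughput raised, roughly like this:

    # Summarize the scrub-related health warnings
    ceph health detail | grep -i scrub

    # One knob commonly raised to let scrubs catch up (the default is conservative)
    ceph config set osd osd_max_scrubs 2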
Rafael.
It's just a suggestion.
If this kind of functionality is not of interest, that's OK.
Rafael.
De: "Anthony D'Atri"
Enviada: 2024/02/01 12:10:30
Para: quag...@bol.com.br
Cc: ceph-users@ceph.io
Assunto: [ceph-users] Re: Performance improvement suggestion
> I didn't
De: "Janne Johansson"
Enviada: 2024/02/01 04:08:05
Para: anthony.da...@gmail.com
Cc: acozy...@gmail.com, quag...@bol.com.br, ceph-users@ceph.io
Assunto: Re: [ceph-users] Re: Performance improvement suggestion
> I’ve heard conflicting asserts on whether the write returns wi
with NVMe. However, I don't think it is worth losing the functionality of the replicas.
I'm just suggesting another way to increase performance without losing replica functionality.
Rafael.
De: "Anthony D'Atri"
Enviada: 2024/01/31 17:04:08
Para: qu
Hello everybody,
I would like to make a suggestion for improving performance in the Ceph architecture.
I don't know if this group is the best place for it, or whether my proposal is correct.
My suggestion concerns https://docs.ceph.com/en/latest/architecture/, at the end of the top
I have already mentioned that the cluster is configured with size=2 and min_size=1 for the data and metadata pools.
If I have any more information to contribute, please let me know!
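For reference, those per-pool settings can be confirmed with:

    # Shows size, min_size, and other settings for every pool
    ceph osd pool ls detail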
Thank you,
Rafael
De: "David C"
Enviada: 2022/11/22 12:27:24
Para: quag...@bol.com.br
Cc: ceph-users@cep
Hello everyone,
I have some considerations and doubts to share...
I work at an HPC center, and my doubts stem from performance in this environment. All the clusters here were suffering from NFS performance problems, as well as from the single point of failure it introduces. We were suffering from the performanc