[ceph-users] ceph deployment best practice

2022-09-13 Thread gagan tiwari
Hi Guys, I am new to Ceph and storage. We have a requirement of managing around 40T of data which will be accessed by around 100 clients, all running RockyLinux9. We have an HP storage server with 12 SDD of 5T each and have set up hardware RAID6 on these disks. The HP storage server has …
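
As a rough sizing sketch for the drives quoted above (not advice taken from the thread): Ceph normally wants each drive exposed as its own OSD rather than hidden behind hardware RAID6, and the usable capacity then depends on the protection scheme. The arithmetic below assumes the default 3x replication and, for comparison, a hypothetical 4+2 erasure-coded pool:

    # Rough capacity arithmetic for the setup described above
    # (assumed: 12 drives of 5T each, one Ceph OSD per raw drive, no RAID6).
    drives = 12
    drive_tb = 5.0
    raw_tb = drives * drive_tb                 # 60 TB raw

    # Default 3x replication keeps three full copies of every object.
    usable_replica3_tb = raw_tb / 3            # ~20 TB usable

    # A hypothetical 4+2 erasure-coded pool stores 4 data + 2 coding chunks.
    usable_ec42_tb = raw_tb * 4 / (4 + 2)      # ~40 TB usable

    print(f"raw capacity:       {raw_tb:.0f} TB")
    print(f"usable with size=3: {usable_replica3_tb:.0f} TB")
    print(f"usable with EC 4+2: {usable_ec42_tb:.0f} TB")

With all 12 OSDs in a single server, the CRUSH failure domain would also have to be relaxed from host to OSD, which is one reason Ceph deployments usually spread OSDs across several nodes.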

[ceph-users] Re: ceph deployment best practice

2022-09-14 Thread gagan tiwari
Sorry. I meant SSD, solid state disks. Thanks, Gagan On Wed, Sep 14, 2022 at 12:49 PM Janne Johansson wrote: > On Wed, 14 Sep 2022 at 08:54, gagan tiwari wrote: > > Hi Guys, > > I am new to Ceph and storage. We have a requirement of > > managing around …

[ceph-users] Re: ceph deployment best practice

2022-09-14 Thread gagan tiwari
… / NFS or not. And yes, the clients have a local 1T SSD disk. How can I set up local caching on the NFS clients? Thanks, Gagan On Wed, Sep 14, 2022 at 2:20 PM Janne Johansson wrote: > On Wed, 14 Sep 2022 at 10:14, gagan tiwari wrote: > > Sorry. I meant SSD, solid state disks. > …
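
One common way to use that local SSD for client-side NFS caching (a sketch, not advice given in this thread) is the kernel FS-Cache facility: run cachefilesd with its cache directory on the SSD and mount the export with the fsc option. A minimal illustration, with a hypothetical server name and paths:

    # Sketch: client-side NFS caching with FS-Cache / cachefilesd.
    # Assumes cachefilesd is installed and /var/cache/fscache sits on the
    # local 1T SSD; the export and mount point below are placeholders.
    import subprocess

    export = "nfs-server:/export/data"   # hypothetical NFS export
    mountpoint = "/mnt/data"

    # Start the cache daemon, then mount with the 'fsc' option so the NFS
    # client caches file data on the local disk backing /var/cache/fscache.
    subprocess.run(["systemctl", "enable", "--now", "cachefilesd"], check=True)
    subprocess.run(["mount", "-t", "nfs", "-o", "fsc,vers=4.2",
                    export, mountpoint], check=True)

Note that this mainly speeds up repeated reads of data that does not change between accesses; writes and first reads still go to the server.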

[ceph-users] ceph deployment best practice

2025-04-09 Thread gagan tiwari
Hi Guys, We have an HPC environment which currently has a single master host that stores all the data, around 100T, and exports it via NFS to the clients. We are using OpenZFS on the single master host. But now we need to store much more data, around 500T, and we are …

[ceph-users] Re: ceph deployment best practice

2025-04-11 Thread gagan tiwari
Hi Anthony, Thanks for the reply! We will be using CephFS to access the Ceph storage from the clients, so this will also need an MDS daemon. Based on your advice, I am thinking of having 4 Dell PowerEdge servers: 3 of them will run the 3 monitor daemons and one of them will run …
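
For reference (not part of the thread), on a cephadm-managed cluster the mon/MDS layout described above could be written as service specs. The sketch below only builds and prints such a spec with placeholder hostnames; it would be applied with `ceph orch apply -i <file>`:

    # Sketch: cephadm service specs for the layout above: 3 mons on three
    # hosts, one MDS on a fourth. Hostnames are placeholders.
    mon_hosts = ["node1", "node2", "node3"]    # hypothetical mon hosts
    mds_host = "node4"                         # hypothetical dedicated MDS host

    spec = (
        "service_type: mon\n"
        "placement:\n"
        "  hosts: [" + ", ".join(mon_hosts) + "]\n"
        "---\n"
        "service_type: mds\n"
        "service_id: cephfs\n"
        "placement:\n"
        "  hosts: [" + mds_host + "]\n"
    )
    print(spec)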

[ceph-users] Re: ceph deployment best practice

2025-04-11 Thread gagan tiwari
… node will not be used for anything other than the MDS daemon. Thanks, Gagan On Fri, Apr 11, 2025 at 8:45 PM Anthony D'Atri wrote: > On Apr 11, 2025, at 4:04 AM, gagan tiwari <gagan.tiw...@mathisys-india.com> wrote: > > Hi Anthony, …