Hi guys,

We have an HPC environment which currently has a single master host that stores the entire dataset (around 100 TB) and exports that data via NFS to the clients.

We are using OpenZFS on the single master host.

We now need to store much more data, around 500 TB, and we are facing scalability issues with ZFS since it cannot be scaled out.

So, we are considering replacing OpenZFS with Ceph.

I have gone through the online docs at https://docs.ceph.com and have come up with the following Ceph deployment plan:

We would start by deploying Ceph on 4 hosts (HP ProLiant servers), each running Rocky Linux 9.

One of the hosts, called ceph-adm, will be a smaller one with the following hardware:

2 x 4 TB SSD in RAID 1 for the OS

8 cores at 3.6 GHz

64 GB RAM

We plan to run all Ceph daemons except the OSDs (monitor, metadata server, etc.) on this host.
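
A rough sketch of how I understand this would look with cephadm is below; the hostname ceph-adm is from our plan, but the monitor IP and the filesystem name cephfs are placeholders, so please correct me if this is the wrong approach:

  # Bootstrap the cluster from ceph-adm (monitor IP is a placeholder)
  cephadm bootstrap --mon-ip 10.0.0.10

  # Pin the non-OSD daemons (monitor, manager, metadata server) to ceph-adm
  ceph orch apply mon --placement="ceph-adm"
  ceph orch apply mgr --placement="ceph-adm"
  ceph orch apply mds cephfs --placement="ceph-adm"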

We will have 3 hosts running OSDs, which will store the actual data.

Each OSD host will have the following hardware:

2 x 4 TB SSD in RAID 1 for the OS

22 x 8 TB SSD to store data (OSDs); we will use the entire disks without partitions

128 GB RAM (no swap space)

16 cores

All 4 hosts will connect to each other via 10G NICs.
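
For the whole-disk OSDs, I am assuming the cephadm steps would be roughly the following (hostnames and IPs are placeholders):

  # Install the cluster's SSH key on each OSD host, then add the hosts
  ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd1
  # (repeat the ssh-copy-id for ceph-osd2 and ceph-osd3)
  ceph orch host add ceph-osd1 10.0.0.11
  ceph orch host add ceph-osd2 10.0.0.12
  ceph orch host add ceph-osd3 10.0.0.13

  # Turn every empty, unpartitioned disk on those hosts into an OSD
  ceph orch apply osd --all-available-devices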

The 500 TB of data will be accessed by the clients, and we need read performance to be as fast as possible.
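
To measure that, I assume we could get a baseline for sequential read throughput with rados bench on a test pool (the pool name bench-pool is just an example):

  # Create a throwaway pool and write test objects (kept via --no-cleanup)
  ceph osd pool create bench-pool
  rados bench -p bench-pool 60 write --no-cleanup

  # Sequential read benchmark against those objects
  rados bench -p bench-pool 60 seq

  # Remove the benchmark objects afterwards
  rados -p bench-pool cleanup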

We cannot afford data loss or downtime, so we want a Ceph deployment that serves this purpose.
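
My assumption is that with 3 OSD hosts we would use 3-way replication with host as the failure domain (which I understand is the default), and a CephFS filesystem in place of the current NFS export; the filesystem name cephfs is a placeholder:

  # Keep the default of 3 replicas, with writes acknowledged once 2 copies exist
  ceph config set global osd_pool_default_size 3
  ceph config set global osd_pool_default_min_size 2

  # Create the CephFS filesystem that will replace the NFS export
  ceph fs volume create cephfs
  ceph fs status cephfs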

Please advise whether the plan I have designed will serve our purpose, or whether there is a better way to do this.

Thanks,
Gagan






We have an HP storage server with 12 SSDs of 5 TB each and have set up hardware RAID 6 on these disks.

The HP storage server has 64 GB RAM and 18 cores.

Please advise how I should go about setting up Ceph on it to get the best read performance; we need the fastest read performance possible.


Thanks,
Gagan