>
>> I am thinking of designing a cephfs or S3 cluster, with a target to achieve
>> a minimum of 50GB/s (write) bandwidth. For each node, I prefer 4U 36x 3.5"
>> Supermicro server with 36x 12TB 7200 RPM HDDs, 2x Intel P4610 1.6TB NVMe
>> SSD as DB/WAL, a single CPU socket with AMD 7302, and
It shows up as two 7 TiB drives (sda & sdb).
> On 22 Oct 2021, at 17:12, huxia...@horebdata.cn wrote:
>
> Seagate MACH.2 2X14 drive looks very interesting. Does this disk appear as two
> (logical) disks under Linux, thus two OSDs per drive, or still as a single
> disk but with more IOPS and bandwidth?
Dear Martin,
Thanks a lot for the insightful suggestions and comments.
Seagate MACH.2 2X14 drive looks very interesting. Does this disk appear as two
(logical) disks under Linux, thus two OSDs per drive, or still as a single disk
but with more IOPS and bandwidth?
Samuel
huxia...@horebdata.cn
Thanks a lot for the insightful comments.
My replies are inline below.
From: Christian Wuerdig
Date: 2021-10-22 02:13
To: huxia...@horebdata.cn
CC: ceph-users
Subject: Re: [ceph-users] Open discussing: Designing 50GB/s CephFS or S3 ceph
cluster
What is the expected file/object size distribution and count?
Hi,
> On 21 Oct 2021, at 19:23, huxia...@horebdata.cn wrote:
>
> I am thinking of designing a cephfs or S3 cluster, with a target to achieve a
> minimum of 50GB/s (write) bandwidth. For each node, I prefer 4U 36x 3.5"
> Supermicro server with 36x 12TB 7200 RPM HDDs, 2x Intel P4610 1.6TB NVMe SSD
>
> > How many nodes should be deployed in order to achieve a minimum of
> 50GB/s, if possible, with the above hardware setting?
> About 50 nodes should be able to deliver it, but it strongly depends on many
> more factors.
>
Hi Martin, just curious, but how did you deduce this so quickly/easily?
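For what it's worth, here is a rough back-of-envelope sketch of how such a
figure might be reached. The per-disk throughput, replication factor and
efficiency numbers are assumptions for illustration, not Martin's actual inputs:

    # Rough node count for a 50 GB/s aggregate client write target.
    # All figures below are assumptions, picked only to illustrate the arithmetic.
    target_write_gbs   = 50.0   # GB/s of client-visible writes
    hdd_write_mbs      = 120.0  # assumed sustained MB/s per 7200 RPM HDD
    hdds_per_node      = 36
    replication_factor = 3      # each client byte is written three times
    efficiency         = 0.7    # assumed loss to seeks, metadata, uneven load

    node_raw_mbs    = hdds_per_node * hdd_write_mbs                   # ~4320 MB/s
    node_client_mbs = node_raw_mbs * efficiency / replication_factor  # ~1000 MB/s

    nodes_needed = target_write_gbs * 1000 / node_client_mbs
    print(f"per-node client write: {node_client_mbs / 1000:.2f} GB/s")
    print(f"nodes needed: {nodes_needed:.0f}")

With these assumptions the result lands near 50 nodes, but it swings widely
with the efficiency and replication choices, which is presumably the
"strongly depends on many more factors" part.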
Hello,
If you chose Seagate MACH.2 2X14 drives, you would get much better
throughput as well as density. Your RAM is already a bit on the low end,
and for the MACH.2 it would definitely be too low.
You also need dedicated metadata drives for S3 or the MDS. Choose blazing
fast NVMe with low latency.
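To put the RAM remark into numbers, here is a minimal sketch. The default
osd_memory_target is roughly 4 GiB per OSD; the two-OSDs-per-drive layout and
the overhead figure are assumptions:

    # Per-node memory estimate for an all-MACH.2 chassis (sketch, assumptions marked).
    drives_per_node       = 36
    osds_per_drive        = 2    # assumed: one OSD per 7 TiB LUN of each MACH.2 2X14
    osd_memory_target_gib = 4    # Ceph's default osd_memory_target is ~4 GiB per OSD
    overhead_gib          = 32   # assumed headroom for OS, MDS/RGW, page cache

    osd_count = drives_per_node * osds_per_drive
    ram_gib   = osd_count * osd_memory_target_gib + overhead_gib
    print(f"{osd_count} OSDs -> roughly {ram_gib} GiB RAM per node")

That comes out around 320 GiB per node before any extra caching headroom,
which is why the RAM comment above matters.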
- What is the expected file/object size distribution and count?
- Is it write-once or modify-often data?
- What's your overall required storage capacity?
- 18 OSDs per WAL/DB drive seems a lot - recommended is ~6-8
- With 12TB OSDs the recommended WAL/DB size is 120-480GB (1-4%) per OSD (see the rough check below)
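Worked out for the proposed node, as a quick sanity check (the 1-4% rule of
thumb is the one quoted above; the hardware figures are taken from the
original plan):

    # DB/WAL sizing check: 36x 12 TB OSDs sharing 2x 1.6 TB NVMe per node.
    osds_per_node = 36
    nvme_per_node = 2
    nvme_size_gb  = 1600
    osd_size_tb   = 12

    osds_per_nvme = osds_per_node / nvme_per_node                 # 18, vs ~6-8 recommended
    db_per_osd_gb = nvme_per_node * nvme_size_gb / osds_per_node  # ~89 GB available
    rec_min_gb    = osd_size_tb * 1000 * 0.01                     # 120 GB (1%)
    rec_max_gb    = osd_size_tb * 1000 * 0.04                     # 480 GB (4%)

    print(f"OSDs per NVMe: {osds_per_nvme:.0f}")
    print(f"DB/WAL space per OSD: {db_per_osd_gb:.0f} GB "
          f"(recommended {rec_min_gb:.0f}-{rec_max_gb:.0f} GB)")

So with the proposed layout each OSD only gets roughly 89 GB of DB/WAL space
and each NVMe carries 18 OSDs, both short of the recommendations above.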