I appreciate your help. I will use ceph-deploy and start with a 10GB journal size.

Thanks,
Sunday Olutayo
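[For reference, a minimal sketch of that ceph-deploy workflow. The hostname node1 and the devices sdb (data disk) and sdc (journal SSD) are placeholders, and the colon-separated HOST:DATA:JOURNAL form is the ceph-deploy syntax of this era; later releases moved to separate --data/--journal flags:

    # ceph.conf on the admin node, before preparing the OSDs
    # (osd journal size is given in MB, so 10000 is roughly 10GB)
    [osd]
    osd journal size = 10000

    # create one OSD on sdb with its journal on the shared SSD;
    # ceph-deploy carves a fresh journal partition out of sdc for each OSD
    ceph-deploy osd create node1:sdb:sdc

Repeating the last command with sdd, sde, and so on (keeping sdc as the journal device) adds further OSDs, each receiving its own journal partition on the SSD.]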
----- Original Message -----
From: "Dominik Zalewski" <dzalew...@optlink.co.uk>
To: "SUNDAY A. OLUTAYO" <olut...@sadeeb.com>, "ceph-users" <ceph-users@lists.ceph.com>
Sent: Wednesday, August 5, 2015 4:48:24 PM
Subject: Re: [ceph-users] Ceph Design

Yes, there should be a separate partition per OSD. You are probably looking at a 10-20GB journal partition per OSD. If you are creating your cluster using ceph-deploy, it can create the journal partitions for you.

"The expected throughput number should include the expected disk throughput (i.e., sustained data transfer rate), and network throughput. For example, a 7200 RPM disk will likely have approximately 100 MB/s. Taking the min() of the disk and network throughput should provide a reasonable expected throughput. Some users just start off with a 10GB journal size."

For example:

    osd journal size = 10000

On Wed, Aug 5, 2015 at 4:38 PM, SUNDAY A. OLUTAYO <olut...@sadeeb.com> wrote:

> I intend to have 5-8 OSDs per 400GB SSD. Should there be a different partition for each OSD on the SSD?
>
> Thanks,
> Sunday Olutayo
>
> From: "Dominik Zalewski" <dzalew...@optlink.co.uk>
> To: "SUNDAY A. OLUTAYO" <olut...@sadeeb.com>, "ceph-users" <ceph-users@lists.ceph.com>
> Sent: Wednesday, August 5, 2015 3:38:20 PM
> Subject: Re: [ceph-users] Ceph Design
>
> I would suggest splitting the OSDs across two or more SSD journals (depending on OSD write speed and SSD sustained-write limits), e.g. 2x Intel S3700 400GB for 8-10 OSDs, or 4x Intel S3500 300GB for 8-10 OSDs (it may vary depending on the setup).
>
> If you RAID-1 the SSD journals, they will potentially "wear out" at the same time, since the same writes happen on both of them. With RAID-1 you are only going to get a journal write performance penalty.
>
> Dominik
>
> On Tue, Aug 4, 2015 at 10:54 PM, SUNDAY A. OLUTAYO <olut...@sadeeb.com> wrote:
>
>> I am thinking of having the Ceph journal on a RAID-1 SSD. Kindly advise me on this: does a RAID-1 SSD for the journal make sense?
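[The passage quoted above comes from the Ceph documentation on journal settings, which pairs it with a sizing rule. A worked example, using the 100 MB/s 7200 RPM disk from the quote and the documented default filestore max sync interval of 5 seconds; the ~117 MB/s figure for gigabit network throughput is an assumption for the network side:

    # osd journal size = 2 * (expected throughput * filestore max sync interval)
    # expected throughput = min(disk, network) = min(100 MB/s, ~117 MB/s) = 100 MB/s
    # 2 * 100 MB/s * 5 s = 1000 MB
    osd journal size = 1000

So the 10GB starting point suggested above is roughly ten times the minimum the formula yields for this hardware, which is why it is a comfortable default.]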
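[On the separate-partition-per-OSD point: ceph-deploy (via ceph-disk) creates and labels these partitions automatically, but the equivalent manual layout is a useful sketch. Assuming the 400GB SSD is /dev/sdc, five OSDs, and 10GB journals; the type GUID below is the Ceph journal partition type code that ceph-disk applies:

    # one 10GB journal partition per OSD on the shared SSD
    for i in 1 2 3 4 5; do
        sgdisk --new=${i}:0:+10G \
               --change-name=${i}:"ceph journal" \
               --typecode=${i}:45b0969e-9b03-4f30-b4c6-b4b80ceff106 \
               /dev/sdc
    done

Five to eight such journals consume only 50-80GB of the 400GB device; leaving the remainder unpartitioned also gives the SSD extra headroom for wear leveling.]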