Ok, answering my own question about "sync"/"async" and the default
rsize/wsize: these come from the server after negotiation.
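
If I understand correctly, the effective options can be checked on the
client with:

  nfsstat -m
  grep nfs /proc/mounts

On Ubuntu 14.04 the output of plain "mount" comes from /etc/mtab, which
only records the options passed at mount time, while /proc/mounts and
nfsstat -m show what was actually negotiated. That would explain why my
"mount" output in the quoted message below is so much shorter than Eric's.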

But the question remains how to override rsize/wsize for the cloudstack
agent mount...
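
In the meantime, a manual test mount with explicit values should at least
show whether rsize/wsize make a measurable difference before touching the
agent (the sizes and the /mnt/nfs-test path below are just placeholders
for illustration):

  mount -t nfs4 -o rsize=131072,wsize=131072 \
      192.168.3.218:/export/primary /mnt/nfs-test
  dd if=/dev/zero of=/mnt/nfs-test/testfile bs=1M count=1024 oflag=direct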

2017-10-20 14:57 GMT+07:00 Ivan Kudryavtsev <[email protected]>:

> Hello, Eric.
>
> Thanks for your very comprehensive answer. It's great that it will be
> in the archives for future users. I haven't fully decided how to
> proceed, but I have several questions.
>
> You mentioned:
>
> > 10.100.255.3:/storage/primary3 on /mnt/0ab13de9-2310-334c-b438-94dfb0b8ec84
> > type nfs4 (rw,relatime,sync,vers=4.0,rsize=1048576,wsize=1048576,
> > namlen=255,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,noac,
> > proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.100.255.2,
> > local_lock=none,addr=10.100.255.3)
>
> But on my Ubuntu 14.04 client with ACS 4.9 I see:
>
> 192.168.3.218:/export/primary on /mnt/4e5a8e18-a425-32cc-a78b-30535229efef
> type nfs (rw,vers=4,addr=192.168.3.218,clientaddr=192.168.3.220)
>
> Any idea what's wrong? It shows just what I copied above... The other
> thing is that the question isn't whether it works out of the box, but
> how to tweak rsize/wsize to make it work better. Also, you have "sync",
> but our other ACS (4.3 with Ubuntu 14.04) uses "async". Where does that
> come from?
>
>
>
> 2017-10-20 14:12 GMT+07:00 Eric Green <[email protected]>:
>
>> Okay. So:
>>
>> 1) Don't use EXT4 with LVM/RAID; it performs terribly with QCOW2. Use XFS.
>> 2) I didn't do anything to my NFS mount options and they came out fine:
>>
>> 10.100.255.3:/storage/primary3 on /mnt/0ab13de9-2310-334c-b438-94dfb0b8ec84
>> type nfs4 (rw,relatime,sync,vers=4.0,rsize=1048576,wsize=1048576,
>> namlen=255,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,
>> noac,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,
>> clientaddr=10.100.255.2,local_lock=none,addr=10.100.255.3)
>>
>> 3) ZFS works relatively well, but if you are using it with SSD's you
>> *must* set the pool alignment to match the SSD alignment, and *must*
>> set the ZFS block size to match the SSD internal block size, or
>> performance will be terrible. For raw performance, use LVM/XFS/RAID,
>> *except* that it is really hard to make that perform on SSD's too: with
>> a block size or alignment mismatch, performance will be *terrible* with
>> LVM/XFS/RAID as well. (A rough sketch of the ZFS settings follows after
>> this list.)
>>
>> 4) Hardware RAID with a battery backup tends to give much faster
>> writes *if writing to rotating storage*, and only when using Linux
>> LVM/XFS. ZFS does its own redundancy better, so don't use a BBU RAID
>> controller with ZFS. If writing to SSD's, you will get better
>> performance with Linux MD RAID or ZFS, but again you must be *very*
>> careful about block sizes and alignment. Don't use BBU hardware RAID
>> with SSD's; performance will be terrible compared to Linux MD RAID or
>> ZFS for a number of reasons.
>>
>> 5) Needless to say, ZFS NFS shares work fine *if* you've done your
>> homework and set them up with proper alignment and block size for your
>> hardware. For rotational storage, however, LVM/XFS/RAID will be faster,
>> especially with hardware RAID and a BBU.
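>>
>> As a rough sketch of what (3) and (5) mean in practice, assuming
>> 4K-sector SSD's (hence ashift=12) and QCOW2's default 64K cluster size;
>> the pool name and device names are placeholders, so check your own
>> hardware before copying anything:
>>
>>   zpool create -o ashift=12 tank mirror sda sdb mirror sdc sdd
>>   zfs set recordsize=64k tank
>>   zfs set atime=off tank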
>>
>> My own CloudStack implementation has two storage servers that use
>> LVM/XFS/RAID for storage and one storage server that uses ZFS for storage.
>> The ZFS server has two 12-disk RAID groups, one made up of SSD's for a
>> database, the other made up of large rotational storage drives. It also
>> has an NVMe card that is used for log and cache for the rotational
>> storage. I
>> spent a *lot* of time trying to get the SSD's to perform under RAID/LVM/XFS
>> and just couldn't get everything to agree on alignment. That was when I
>> said foo on that and put ZFS on there, and since I was using ZFS for one
>> RAID group, it made sense to use it on the other too. I'm using RAID10 on
>> the SSD RAID group, and RAIDZ2 on the rotational storage (which is there
>> for bulk storage where performance isn't a priority).
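>>
>> For what it's worth, a layout along those lines might look something
>> like this (device names and the NVMe partition split are hypothetical,
>> not my actual config):
>>
>>   zpool create -o ashift=12 fast \
>>       mirror sda sdb mirror sdc sdd mirror sde sdf \
>>       mirror sdg sdh mirror sdi sdj mirror sdk sdl
>>   zpool create bulk raidz2 sdm sdn sdo sdp sdq sdr sds sdt sdu sdv sdw sdx
>>   zpool add bulk log nvme0n1p1
>>   zpool add bulk cache nvme0n1p2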
>>
>> Storage is not a limitation on my cloud, especially since I have four
>> other storage servers that I can throw at it if necessary. RAM and CPU are,
>> so that's my next task -- get more compute servers into the Cloudstack
>> cluster.
>>
>>
>> > On Oct 19, 2017, at 22:16, Ivan Kudryavtsev <[email protected]>
>> > wrote:
>> >
>> > Hello, friends.
>> >
>> > I'm planning to deploy a new cluster with KVM and shared NFS storage
>> > soon. I already have such deployments operating fine. Currently I'm
>> > comparing different storage options for the new cluster:
>> > - LVM+EXT4+HWRAID+BBU,
>> > - LVM+EXT4+MDADM, and
>> > - ZFS.
>> >
>> > HW setup is:
>> > - 5 x Micron 5100PRO 1.9TB
>> > - LSI9300-8i HBA or LSI9266-8i+BBU
>> > - Cisco UCS server w/ 24x2.5" / 2xE5-2650 / 64GB RAM
>> >
>> > I have already got competitive performance results between them
>> > locally, but now I need to test over NFS. I'm pretty sure the first
>> > two options will work fine with the ACS default NFS mount args (I
>> > already have such cases in production), but ZFS is quite a smart
>> > thing, so I started investigating how to change the NFS client mount
>> > options and unfortunately haven't succeeded in finding the place
>> > where the cloudstack agent determines how to mount the share and
>> > which args to use. I have read a lot of ZFS-related articles, and
>> > people write that rsize/wsize matter quite a lot, so I wonder how to
>> > instruct the cloudstack agent to use specific rsize/wsize args when
>> > mounting primary storage.
>> >
>> > Also, I haven't found any mention of ZFS NFS shares in the ACS
>> > archives, so maybe it's a bad fit for ACS because of the QCOW2 image
>> > format? But I think it could be a good one, so I want to test it
>> > personally.
>> >
>> > Any suggestions are welcome.
>> >
>>
>>
>
>


-- 
With best regards, Ivan Kudryavtsev
Bitworks Software, Ltd.
Cell: +7-923-414-1515
WWW: http://bitworks.software/
