I don't think NFS is to blame, but the hardware may simply be slow. Could
you tell us your IOPS usage, your network setup, and the drive types and
RAID level you are using?

Have you used tools like ioping?

90% of the time it's the hardware that performs poorly. I have been using
NFS with CloudStack for over a year now and have never had an issue with
performance.
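To put rough numbers on it, here is a minimal sketch of the kind of measurement I mean, using plain dd (ioping or fio give cleaner numbers if they are installed). The mount point /mnt/primary is a placeholder; point it at your NFS-backed primary storage.

```shell
#!/bin/sh
# Placeholder mount point -- change to your NFS primary storage path.
MNT="${MNT:-/mnt/primary}"

# Small synchronous write: a crude stand-in for a latency probe.
dd if=/dev/zero of="$MNT/io-test" bs=4k count=1 conv=fsync 2>&1 | tail -n 1

# Sequential throughput: 256 MB written and forced to disk with fsync.
dd if=/dev/zero of="$MNT/io-test" bs=1M count=256 conv=fsync 2>&1 | tail -n 1

rm -f "$MNT/io-test"
```

Run the same commands on server B directly against the local disks as well: if the numbers are already poor there, the bottleneck is the hardware rather than NFS.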




On Tue, Aug 6, 2013 at 10:03 AM, Kirk Jantzer <[email protected]> wrote:

> I doubt NFS is the issue. What are the specs of the VMs? What is the
> network like? What are the disks? Etc. There are too many variables to say
> "NFS is a CloudStack bottleneck". I just implemented a 35TB storage cluster
> that is capable of 10's of thousands of IOPS and >1GB (yes, gigabyte, not
> gigabit) network bandwidth via NFS.
>
>
> On Tue, Aug 6, 2013 at 9:48 AM, WXR <[email protected]> wrote:
>
> > I use KVM as the hypervisor and NFS (NFS server on CentOS 6.4) as primary
> > and secondary storage.
> >
> > I use server A as the host node and server B, with 1 HDD, as primary
> > storage. When I create 20 VMs, I find the disk I/O performance is very low.
> > At first I thought the bottleneck was the hard disk, because there were
> > 20 VMs on a single HDD. So I attached another 4 HDDs to server B and
> > increased the number of primary storages from 1 to 5. Now the 20 VMs are
> > spread evenly across the 5 primary storages (4 VMs per storage), but the
> > VM disk I/O performance is the same as before.
> >
> > I think NFS may be the bottleneck, but I don't know if that is true. Does
> > anyone have a good idea to help me find the real reason?
>
>
>
>
> --
> Regards,
>
> Kirk Jantzer
> c: (678) 561-5475
> http://about.met/kirkjantzer
>
