On Fri, Oct 11, 2013 at 4:12 PM, Stan Hoeppner <s...@hardwarefreak.com> wrote:

> On 10/11/2013 2:42 AM, Muhammad Yousuf Khan wrote:
> > [Cut].....
> > Are dual and quad port Intel NICs available in your country?
> >
> > Not very easily, but yes, we can arrange it. I personally have a 4-port
> > PCIe Intel NIC,
> > so this can be arranged.
>
> I recommend Intel NICs because they simply work, every time, full
> bandwidth, full Linux kernel support, great feature set, etc.  Very high
> quality, long lasting.  I had an Intel Pro/100 in service in an MX mail
> server for over 10 years.  Still works.
>

That's great, thanks for the advice. BTW, have you ever hosted any VMs on this
100Mb LAN :)?



> ...
> > Just a very basic question: I have been into virtualization for a few years on
> > Debian boxes.
> > I have never hosted a VM on an external box. I have more than 10 nodes and all VMs
> > are hosted on local mdadm RAID drives.
> > Just to get an idea, if you'd like to suggest: how many VMs can be hosted on a
> > 1G link? I know your next statement will be "it depends on the
> > utilization of your VMs, and the decision would be made on the basis of I/O stats"
>
> Yes, it does depend on exactly that.
>
> > but just asking in general: how many typical VMs, more or less untouched
> > throughout the day, can be hosted on a 1G LAN?
>
> If idle?  As many as you can fit in memory up to the hypervisor limit,
> or virtual IP address limit, if there is a limit on these.  It's
> possible to create VMs that have no network stack at all.  In that case
> there is literally no limit WRT the shared GbE link.
>
> > And my really big confusion is backing up the VMs from the virtualization
> > host. Let's say for a while that 2 VMs are running on a 1Gb link and I am
> > taking a backup of a VM from the virtual server. As the server is connected
> > to external storage on a 1Gb link, the virtual server will first bring all
> > the virtual drive data from the external box into local RAM via the same
> > 1Gb link on which the VMs are hosted.
> > Does that mean that when the backup starts, all the other VMs have to suffer?
> > So even if only 1 VM is running and we are creating a backup, how can
> > we avoid choking the link or creating a bottleneck?
>
> Ok, so apparently I misunderstood previously.  I was under the
> impression that you had an NFS storage server box, a backup server box,
> and many physical boxes on which you were running virtual machines.
> I.e. 6 or more computers connected to a GbE switch.
>

> If I understand correctly now, all of your VMs are on one PC, and there
> is an NFS server somewhere on the network where you store the files.  Is
> there a switch between the PC with all of the VMs, and the NFS server?
> If so...
>
>


> There are a couple of ways to address this:
>
> 1.  Add another GbE interface on the PC and dedicate it to NFS
>     traffic.  You should be able to bind the NFS client to a specific
>     IP address.  This will require setting up source based routing
>     so NFS traffic only uses the new interface.  Without source based
>     routing Linux will always use the first bound adapter for all
>     outbound traffic.  This dedicates the current NIC to everything
>     other than NFS traffic, so the VMs have 1Gb/s for non-NFS traffic,
>     and 1Gb/s for NFS traffic, 2Gb/s aggregate.  This would be my
>     preferred method.  It's low cost, just a NIC and a cable.  But
>     you have a steep learning curve ahead WRT Linux routing (a sketch
>     follows below the quoted list).  A bonus
>     is you'll learn a lot about Linux networking in the process.
>
> 2.  Implement QOS features in the switch, if it has them, to limit
>     the amount of bandwidth used by NFS traffic.  The problem with
>     this method is that most switches don't allow this on a per port
>     basis, but on a VLAN basis.  Which means you'd be limiting NFS
>     bandwidth everywhere, network wide, not just to the VM PC.
>
>
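For reference, a minimal sketch of the source based routing described in
option 1 above, assuming the new NIC is eth1 and using placeholder addresses
(192.168.2.10/24 for eth1, 192.168.2.20 for the NFS server on that subnet):

    # give NFS traffic its own routing table (name and number are arbitrary)
    echo "100 nfs" >> /etc/iproute2/rt_tables
    ip addr add 192.168.2.10/24 dev eth1
    ip link set eth1 up
    # route for the dedicated table, sourced from eth1's address
    ip route add 192.168.2.0/24 dev eth1 src 192.168.2.10 table nfs
    # any packet with that source address uses the nfs table
    ip rule add from 192.168.2.10 lookup nfs
    ip route flush cache
    # then mount the export over that subnet, e.g.
    mount -t nfs 192.168.2.20:/export/vmstore /srv/vmstore

All of the addresses, the table name and the paths are examples only and would
have to be adapted (and made persistent, e.g. in /etc/network/interfaces) for a
real setup.
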
Thanks for the advice, but I have found a feature to limit the bandwidth
during backup in QEMU :) Thanks for making me think that way.
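
Assuming the feature in question is Proxmox's vzdump bandwidth limit (just a
guess, since the Proxmox wiki is referenced below), it looks roughly like this;
the VM ID, storage name and value are examples, and the value is in KB/s:

    # one-off backup of VM 101, capped at roughly 50 MB/s
    vzdump 101 --storage backup-nfs --bwlimit 51200

    # or set a global default in /etc/vzdump.conf
    bwlimit: 51200

Plain QEMU's drive-backup (QMP) takes a similar "speed" argument, in bytes per
second.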



> ...
> > Any howto document on DRBD and GFS2 on Debian? As I am using Debian, and
> > only Debian, in my overall environment.
> > DRBD+GFS2 has native support on Red Hat (as GFS2 is owned by Red Hat).
> > I do not have experience with, nor confidence in, the stability of either.
> > I will be glad if you can share one that is specific to Debian.
>
> DRBD and GFS2 are both kernel modules.  Their configuration on Debian
> should be little different from that on any other Linux distro.
>
> > I found this:
> > http://pve.proxmox.com/wiki/DRBD
>
> http://www.drbd.org/users-guide/ch-gfs.html
>
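
For reference, the dual-primary part of the DRBD configuration in that guide
is not RHEL specific. A minimal sketch of the resource file, with placeholder
host names, backing disks and addresses:

    # /etc/drbd.d/r0.res -- example only; adapt to the real nodes
    resource r0 {
        net {
            allow-two-primaries;               # required for Primary/Primary
            after-sb-0pri discard-zero-changes;
            after-sb-1pri discard-secondary;
            after-sb-2pri disconnect;
        }
        on node1 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.3.1:7788;
            meta-disk internal;
        }
        on node2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.3.2:7788;
            meta-disk internal;
        }
    }

The syntax is the same on Debian; the distro mainly changes which packages
provide the userland tools and the cluster stack.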


OK, I will go through this; however, it is written for RHEL, which is quite
different from Debian. Anyway, I will try to understand it and get things
running on Debian.

>
> > The above is a Primary/Primary installation, which means both DRBD devices
> > can be mounted. But there is a question:
> > if I can mount in Primary/Primary mode on both nodes, then what is the
> > need for GFS?
> > Just asking for my own learning.
>
> The key word here is "mount".  Linux cannot mount a block device.  DRBD
> is a block device.  Linux mounts filesystems.  Filesystems reside on top
> of block devices.  No two hosts can mount the same filesystem on a
> shared block device unless it is a cluster filesystem.  Cluster
> filesystems are designed specifically for this purpose.  However, in the
> real world, the block device under GFS2 and OCFS2 filesystems is most
> often a LUN on a fiber channel or iSCSI SAN storage array, not DRBD.
>
>
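
To make the "mount" point concrete: putting a cluster filesystem such as GFS2
on top of a dual-primary DRBD device looks roughly like this. The cluster
name, filesystem name and mount point are placeholders, and a working
cluster/DLM stack must already be running:

    # one journal per node (-j 2), cluster-wide locking via lock_dlm
    mkfs.gfs2 -p lock_dlm -t mycluster:vmstore -j 2 /dev/drbd0
    # can then be mounted on both nodes at the same time
    mount -t gfs2 /dev/drbd0 /srv/vmstore

With a non-cluster filesystem such as ext4, mounting the same device
read-write on both nodes would corrupt it, which is exactly the point above.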
...
> > Thanks for sharing such a detailed and very helpful email.
>
> You're welcome.
>
> --
> Stan
>
>
>
