----- Original Message -----
| On Tue, Oct 12, 2010 at 8:07 PM, John R Pierce <pie...@hogranch.com>
| wrote:
| >  On 10/12/10 10:39 AM, Rudi Ahlers wrote:
| >> Hi all,
| >>
| >> I hope someone can shed some light on this for me. Has anyone
| >> tried,
| >> or have experience with, setting up a Linux server to manage a few
| >> NAS
| >> devices and thus make them all visible to the clients as one large
| >> SAN?
| >>
| >> Basically, I'm thinking it would be a good idea to combine the
| >> current NASes we have into one large system (essentially a SAN?)
| >> and then let
| >> the clients all connect to one server (for authentication, LUN
| >> control, etc), but then when they need to access their drives /
| >> devices / LUN's, they get redirected to the specific server
| >> directly.
| >> I'm also thinking it could be a good way to save some IP addresses,
| >> i.e. instead of giving each NAS a public IP, they could all have
| >> private IPs and then everyone just connects to the one public IP if
| >> needed. The servers which will access the NASes will be on the same
| >> physical LAN and will also be on the same private IP subnet to make
| >> it easier.
| >>
| >> Does anyone know what I'm talking about?
| >
| >
| > how do you plan to implement redundancy on this system? there's a
| > -huge- single point of failure in the middle of what you're talking
| > about.
| >
| 
| True, but then one could set up an HA pair for the management server,
| and probably some load balancer(s) to cater for high availability.
| 
| 
| 
| >
| > would this be SAN storage (e.g. block storage, like iSCSI or FC) or
| > would it be NAS storage (file storage, like NFS or SMB/CIFS)?
| >
| 
| both? Many NAS devices offer both iSCSI & NFS / SMB / CIFS.
| 
| 
| Ideally, I want to set up a scalable SAN, built on a RAID 6 / RAID 10
| concept with reliability in mind, but I need each client to connect
| only to the server where his data is stored.
| 
| --
| Kind Regards
| Rudi Ahlers
| SoftDux
| 
| Website: http://www.SoftDux.com
| Technical Blog: http://Blog.SoftDux.com
| Office: 087 805 9573
| Cell: 082 554 7532


So your issues to address are going to be:

A) How do I make it reliable? (an H/A solution of some sort; see the keepalived sketch below)
B) How do I make it scalable? (load balancing of some sort)
C) How do I make it perform well? (iSCSI / NFS / SMB / CIFS all in kernel)
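
For (A), one common approach on CentOS is a keepalived/VRRP pair that floats 
a virtual IP between two management heads. A minimal, untested sketch (the 
interface name, router id and VIP below are placeholders, not from your 
setup):

    # /etc/keepalived/keepalived.conf on the primary head
    vrrp_instance MGMT {
        state MASTER            # the peer node runs as BACKUP
        interface eth0          # NIC facing the clients
        virtual_router_id 51    # must match on both nodes
        priority 100            # give the peer a lower value, e.g. 90
        advert_int 1
        virtual_ipaddress {
            192.168.0.100       # the single address the clients connect to
        }
    }

If the MASTER dies, the BACKUP claims the VIP within a couple of seconds, so 
clients keep a single address for authentication and LUN lookups.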

You need to know that in your current setup you have more aggregate 
performance across the 3 SANs than you would have with a single centralized 
general-purpose solution, such as a GNU/Linux head in front of the devices.  
Are these devices block-only, or do the SAN heads provide the file protocols 
as well?
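
If they are block-only and you do put a GNU/Linux head in front, you would be 
re-exporting both protocol families yourself. Roughly (illustrative only; the 
IQN, device paths and subnet are invented, and tgtadm comes from 
scsi-target-utils):

    # block side: export a backing device as an iSCSI LUN through tgtd
    tgtadm --lld iscsi --op new --mode target --tid 1 \
           --targetname iqn.2010-10.com.example:head.lun0
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
           --backing-store /dev/vg0/lun0
    tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL

    # file side: the same storage over NFS, via a line in /etc/exports
    #   /export/lun0  192.168.0.0/24(rw,sync,no_root_squash)

Keep in mind that tgtd itself runs largely in user space, which is exactly 
the kind of bottleneck I describe below with the Solaris target.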

I tried a similar setup with Solaris 10 ZFS and iSCSI and the performance was 
abysmal, because the iSCSI server (target) ran in user space.  Very busy file 
systems performed very poorly, reclamation of space after deleting snapshots 
on a file system with many snapshots killed disk performance, and so on.

The GNU/Linux solution had its own issues around performance, associated with 
the various file systems.  For example, XFS requires that a new UUID be 
generated for snapshotted volumes, and the H/A setups are difficult, each 
with its own warts.  This is just one example of many!
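
For reference, the XFS/UUID dance looks like this in practice (volume names 
invented; this assumes an LVM snapshot of an XFS volume):

    # snapshot the LV backing the XFS file system
    lvcreate --snapshot --size 5G --name data_snap /dev/vg0/data

    # XFS refuses to mount a second file system with a duplicate UUID,
    # so either skip the check when mounting the snapshot...
    mount -o nouuid,ro /dev/vg0/data_snap /mnt/snap

    # ...or stamp a fresh UUID onto it first (needs a clean log; mount
    # and unmount it once with nouuid if the snapshot was taken live)
    xfs_admin -U generate /dev/vg0/data_snap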

What you are looking for is something like FalconStor or Panasas PanFS.  
These products are designed specifically to provide this "director"-like 
functionality.
 
--
James A. Peltier
Systems Analyst (FASNet), VIVARIUM Technical Director
Simon Fraser University - Burnaby Campus
Phone   : 778-782-6573
Fax     : 778-782-3045
E-Mail  : jpelt...@sfu.ca
Website : http://www.fas.sfu.ca | http://vivarium.cs.sfu.ca
MSN     : subatomic_s...@hotmail.com

Does your OS have a man 8 lart?
http://www.xinu.nl/unix/humour/asr-manpages/lart.html

