We have Dell Compellent in house (we had them before the Dell acquisition), as
well as Isilon.

The Compellent hardware is more like a standard two-controller system
(though they do virtual ports so it's a slight step up from
active/passive).  It's comparable to the EMC Clariion type of array in that
sense, but the hardware is where the comparison stops.  The current
hardware is standard Dell systems for controllers and standard Dell disk
shelves.

The system works very well.  If you are not familiar with it, they
virtualize the blocks in the array and can then migrate data around
different spindles to provide the best price/performance.  If you need high
IO, they'll move the blocks up to SSD.  If you don't, and most blocks
aren't being hit, they'll move them down to NL-SAS.  Otherwise, there's a
beefy middle tier of fast SAS (10k or 15k).  They also re-RAID blocks on
the fly: writes go to mirrored blocks (no parity IO hit), and the blocks
are moved to RAID5 or RAID6 later.  You can specify policies for how blocks
move around.  If the system needs more RAID5, it'll just create more out of
the RAID1 blocks it already has.
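
To make that concrete, here is a rough sketch in Python of what that kind
of policy boils down to.  This is purely illustrative -- the tier names,
thresholds, and page structure are things I made up, not Compellent's
actual algorithm:

    # Illustrative sketch only -- not Compellent's actual algorithm.
    # Each "page" of a LUN carries a recent-IO counter; a background job
    # periodically promotes hot pages toward SSD, demotes cold ones toward
    # NL-SAS, and rewrites demoted pages from RAID1 to RAID5.

    TIERS = ["SSD", "SAS_15K", "NL_SAS"]   # fast -> cheap (names made up)
    PROMOTE_IOPS = 100                     # thresholds made up, too
    DEMOTE_IOPS = 5

    def retier(pages):
        """pages: list of dicts with 'tier' (index into TIERS), 'recent_iops', 'raid'."""
        for page in pages:
            if page["recent_iops"] >= PROMOTE_IOPS and page["tier"] > 0:
                page["tier"] -= 1                    # promote toward SSD
            elif page["recent_iops"] <= DEMOTE_IOPS and page["tier"] < len(TIERS) - 1:
                page["tier"] += 1                    # demote toward NL-SAS
                if page["raid"] == "RAID1":
                    page["raid"] = "RAID5"           # re-RAID cold blocks, drop the mirror
        return pages

    pages = [{"tier": 1, "recent_iops": 500, "raid": "RAID1"},
             {"tier": 1, "recent_iops": 0,   "raid": "RAID1"}]
    print(retier(pages))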

Essentially, this type of system removes a lot of management work from the
storage admins -- you just provision a LUN and no longer have to worry
about RAID performance or disk tier performance.

It's similar to FAST-VP on your VMAX, if you're familiar with that.  We
have a VMAX as well, and I like the Compellent better in almost every
respect, except that it doesn't scale as big.  Compellent virtual
provisioning is much more robust and mature than EMC's, simpler to
understand, and works better.  I can go into more details if you'd like,
perhaps off-list unless others are interested.

For NAS, Isilon is a great system, again depending on your needs.  Be
aware, though, that a smaller Isilon (and 200TB is a *small* Isilon) is
going to be expensive.  Isilon gets affordable when it gets big, and the
max size currently is 15 PB.  The more nodes you buy, the better the deal.
You *must* start with 3 nodes of a given type.  You can mix node types (NL
nodes with lots of cheap, big drives, X nodes for balanced performance, and
S nodes for high IOPS).  However, as I said, you must have at least three
of each node type you use (so if you have 3 NL nodes and want high IOPS,
you have to buy 3 S nodes, and from there you can add them one at a time).
The filesystem is powerful and has a lot of neat features, including a lot
of options for resiliency (up to surviving multiple disk failures and full
node failures).
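
If it helps, the "three of a type" rule is simple enough to sketch.  This
little Python check is just my illustration of the constraint, not anything
Isilon ships:

    # Illustrative only -- encodes the "at least 3 nodes of any type" rule
    # described above; it is not an Isilon sizing or ordering tool.

    MIN_NODES_PER_TYPE = 3

    def validate_cluster(node_counts):
        """node_counts: dict like {'NL': 3, 'S': 3}.  Raise if any node type
        that is present has fewer than three members."""
        for node_type, count in node_counts.items():
            if 0 < count < MIN_NODES_PER_TYPE:
                raise ValueError("%s: need at least %d nodes of a type to start "
                                 "(have %d); after that you can add them one at "
                                 "a time" % (node_type, MIN_NODES_PER_TYPE, count))
        return True

    validate_cluster({"NL": 3, "S": 3})        # OK -- S nodes can now grow singly
    try:
        validate_cluster({"NL": 3, "S": 1})    # rejected -- must buy 3 S nodes up front
    except ValueError as err:
        print(err)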

If you really prefer NAS, you could always look into connecting a decent
but inexpensive array directly to a Sun box and running ZFS, too.
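
To give a feel for how little there is to that, here is roughly what the
provisioning might look like, driven from Python.  The pool name, dataset
name, and device names below are placeholders for whatever the attached
array actually presents; the zpool/zfs commands themselves are standard:

    # Sketch only: carve a double-parity (roughly RAID6-like) pool out of a
    # directly attached array and share it over NFS.  Device names are
    # placeholders -- use whatever disks/LUNs the array presents to the host.
    import subprocess

    DISKS = ["c0t1d0", "c0t2d0", "c0t3d0", "c0t4d0", "c0t5d0", "c0t6d0"]

    def run(cmd):
        print("+ " + " ".join(cmd))
        subprocess.check_call(cmd)

    run(["zpool", "create", "tank", "raidz2"] + DISKS)   # pool with 2-disk parity
    run(["zfs", "create", "tank/devqa"])                 # one filesystem per export
    run(["zfs", "set", "compression=on", "tank/devqa"])
    run(["zfs", "set", "sharenfs=on", "tank/devqa"])     # export over NFS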

-Adam


On Fri, Jan 25, 2013 at 11:34 AM, Andrew Hume <and...@research.att.com> wrote:

> have you looked at isilon?
>
> On Jan 25, 2013, at 7:07 AM, Craig Cook wrote:
>
> We have around 200TB of dev/QA storage spread across 3 SANs (EMC VMAX
> and IBM XIVs).
>
> Servers are AIX, Solaris and VMware.
>
> We are looking to replace it with cheaper storage.
>
> I would be interested in build-your-own-NAS solutions, but I am not sure
> what to start looking at. (Netapp is too expensive)
>
> Failing that, what does a cheap SAN look like?
>
> We don't need SAN replication, or many fancy features.
>
> Any ideas?
>
> Thanks
>
> Craig
>
>
>
> -----------------------
> Andrew Hume
> 623-551-2845 (VO and best)
> 973-236-2014 (NJ)
> and...@research.att.com
>
>
>
>
_______________________________________________
Tech mailing list
Tech@lists.lopsa.org
https://lists.lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/
