Re: [zfs-discuss] LUN expansion choices

2012-11-13 Thread Karl Wagner
 

On 2012-11-13 17:42, Peter Tribble wrote:

> Given storage provisioned off a SAN (I know, but sometimes that's
> what you have to work with), what's the best way to expand a pool?
>
> Specifically, I can either grow existing LUNs, and/or add new LUNs.
>
> As an example, if I have 24x 2TB LUNs, and wish to double the
> size of the pool, is it better to add 24 additional 2TB LUNs, or
> get each of the existing LUNs expanded to 4TB each?

This is only my opinion, but I would say you'd be better off expanding
your current LUNs.

The reason for this is balance. Currently, your data should be spread
fairly evenly over the LUNs. If you add more, those will start out empty,
which will affect how data is written (new data will tend to go to them
first).

If you just expand your current LUNs, the data will remain balanced, and
ZFS will simply use the additional space.
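
For reference, the grow-in-place route usually comes down to two knobs -
a minimal sketch, assuming an illumos-era ZFS; the pool name and device
name here are made up:

  # after the SAN-side resize, let ZFS pick up grown LUNs automatically
  zpool set autoexpand=on tank
  # or nudge a single already-grown LUN by hand if autoexpand is off
  zpool online -e tank c0t600A0B800012345Ad0
  zpool list tank    # confirm the new capacity

Either way there is no data migration: the existing allocations stay
where they are and ZFS just starts using the new space.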

I _think_ that's how it would work. Others here will be able to give a
more definitive answer.


Re: [zfs-discuss] LUN expansion choices

2012-11-13 Thread Brian Wilson
Not sure if this will make it to the list, but I'll try...

On 11/13/12, Peter Tribble wrote:

> As an example, if I have 24x 2TB LUNs, and wish to double the
> size of the pool, is it better to add 24 additional 2TB LUNs, or
> get each of the existing LUNs expanded to 4TB each?

The thing I've found about growing the LUN is to evaluate whether it saves you
time and effort in your setup. So, first I figure out if it's possible. Then I
figure out how big a PITA it is compared to the PITA of allocating new LUNs.
For example, I've got one SAN backend here where it's easy as pie (lun resize
blah, then the OS steps) and another where it's a major undertaking (create
temporary horcm files, destroy mirrors, run a special command to resize the
mirror, wait, run a special command to resize the source, wait, recreate the
mirrors, delete the temporary horcm files, etc., and *then* the OS steps).

What I love about ZFS is that it can handle either approach.

So it depends on your setup. In your case, if it's at all painful to grow the
LUNs, what I'd probably do is allocate new 4TB LUNs and replace your 2TB LUNs
with them one at a time with zpool replace, waiting for the resilver to finish
each time. With autoexpansion on, you should get the additional capacity as
soon as the resilver for each one is done, and each old 2TB LUN should be
reclaimable as soon as it has been resilvered out.
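
To make the swap sequence concrete, a sketch of the one-at-a-time replace
(pool and device names are placeholders; each step waits on the previous
resilver):

  zpool set autoexpand=on tank       # so the extra space appears automatically
  zpool replace tank <old-2TB-lun> <new-4TB-lun>
  zpool status tank                  # watch until the resilver completes
  # then repeat for the next 2TB LUN, one at a time
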
That being said, I'm not aware of any deeper implications of doing that -
Karl's point about keeping the data balanced being one example.

Cheers,
Brian

-- 
Brian Wilson, Solaris SE, UW-Madison DoIT
Room 3114 CS&S, 608-263-8047
brian.wilson(a)doit.wisc.edu
'I try to save a life a day. Usually it's my own.' - John Crichton


[zfs-discuss] Hardware Recommendations: SAS2 JBODs

2012-11-13 Thread Peter Tripp
Hi folks,

I'm in the market for a couple of JBODs.  Up until now I've been relatively 
lucky with finding hardware that plays very nicely with ZFS.  All my gear 
currently in production uses LSI SAS controllers (3801e, 9200-16e, 9211-8i) 
with backplanes powered by LSI SAS expanders (Sun x4250, Sun J4400, etc.).  But 
now I'm in the market for SAS2 JBODs to support a large number of 3.5-inch SAS 
disks (60+ 3TB disks to start).

I'm aware of potential issues with SATA drives/interposers and the whole SATA 
Tunneling Protocol (STP) nonsense, so I'm going to stick to a pure SAS setup.  
Also, since I've had trouble in the past with daisy-chained SAS JBODs, I'll 
probably stick with one SAS 4x cable (SFF8088) per JBOD, and unless there were 
a compelling reason for multi-pathing I'd probably stick to a single controller.  
If possible I'd rather buy 20-packs of enterprise SAS disks with 5yr warranties 
and have the JBOD come with empty trays, but I would also consider buying disks 
with the JBOD if the price wasn't too crazy.

Does anyone have any positive/negative experiences with any of the following 
with ZFS: 
 * SuperMicro SC826E16-R500LPB (2U 12 drives, dual 500w PS, single LSI SAS2X28 
expander)
 * SuperMicro SC846BE16-R920B (4U 24 drives, dual 920w PS, single unknown 
expander)
 * Dell PowerVault MD 1200 (2U 12 drives, dual 600w PS, dual unknown expanders)
 * HP StorageWorks D2600 (2U 12 drives, dual 460w PS, single/dual unknown 
expanders)

I'm leaning towards the SuperMicro stuff, but every time I order SuperMicro 
gear there's always something missing or wrongly configured so some of the cost 
savings gets eaten up with my time figuring out where things went wrong and 
returning/ordering replacements.  The Dell/HP gear I'm sure is fine, but buying 
disks from them gets pricey quick. The last time I looked they charged $150 
extra per disk when the only added value was a proprietary sled and a shorter 
warranty (3yr vs 5yr).

I'm open to other JBOD vendors too; I was really just curious what folks 
were using when they needed more than two dozen 3.5" SAS disks for use with ZFS.

Thanks
-Peter


Re: [zfs-discuss] [discuss] Hardware Recommendations: SAS2 JBODs

2012-11-13 Thread Rocky Shek
Peter,

You may consider DataON JBODs. Lots of users are using DataON JBODs for ZFS
storage. And yes, an LSI SAS HBA is the best choice for ZFS.

DataON DNS-1600 4U 24 Bay 6G SAS JBOD
http://dataonstorage.com/dns-1600

DataON DNS-1640 2U 24 Bay 6G SAS JBOD
http://dataonstorage.com/dns-1640

DataON DNS-1660 4U 60 Bay 6G SAS JBOD
http://dataonstorage.com/dns-1660



Rocky


-Original Message-
From: Peter Tripp [mailto:pe...@psych.columbia.edu] 
Sent: Tuesday, November 13, 2012 12:08 PM
To: disc...@lists.illumos.org; zfs-discuss@opensolaris.org
Subject: [discuss] Hardware Recommendations: SAS2 JBODs

[original message quoted in full; snipped]



Re: [zfs-discuss] [discuss] Hardware Recommendations: SAS2 JBODs

2012-11-13 Thread Sašo Kiselkov
We've got a SC847E26-RJBOD1. It takes a bit of getting used to that you
have to wire it yourself (plus you need to buy a pair of internal
SFF-8087 cables to connect the back and front backplanes - incredibly,
SuperMicro doesn't provide those out of the box), but other than that,
we've never had a problem with it. Dirt cheap. Just works. No nonsense.

Cheers,
--
Saso

On 11/13/2012 09:08 PM, Peter Tripp wrote:

> [original message quoted in full; snipped]



Re: [zfs-discuss] [discuss] Hardware Recommendations: SAS2 JBODs

2012-11-13 Thread Schweiss, Chip
I've had perfect success with the SuperMicro SC847E26-RJBOD1. It has two
backplanes and holds 45 3.5-inch drives.  It's built as a JBOD, so it has
everything that is needed.



On Tue, Nov 13, 2012 at 2:08 PM, Peter Tripp wrote:

> [original message quoted in full; snipped]


Re: [zfs-discuss] [discuss] Hardware Recommendations: SAS2 JBODs

2012-11-13 Thread Ray Van Dolson
On Tue, Nov 13, 2012 at 03:08:04PM -0500, Peter Tripp wrote:
> [original message quoted in full; snipped]

We've had good experiences with the Dell MD line.  It's been MD1200 up
until now, but we're keeping our eyes on their MD3260 (60-bay).

You're right in that their costs are higher for disks and such, but
since we are a big Dell shop it simplifies support significantly for us
and we have quick turnaround on parts anywhere in the world.

If that weren't a significant issue I'd go SuperMicro or DataON.  We
used SuperMicro for quite a while with mixed experiences.  Best bet was
to find a chassis that works and stick with it as long as possible. :)

Even if you're not using Nexenta, their HCL is valuable for finding HW
that is likely to work for you.

Ray


Re: [zfs-discuss] Hardware Recommendations: SAS2 JBODs

2012-11-13 Thread Cedric Tineo
Peter,

Could you please give some info or links to clarify the "SATA Tunneling 
Protocol nonsense" issues?

Also, have you considered SuperMicro's 45-drive 4U enclosure, the 
SC847E26-RJBOD1? It's cheap too, at around $2,000, and based on LSI backplanes 
anyway.

SGI has a nice, crazy JBOD enclosure that packs 81(!) 3.5" disks into 4U, but 
they refused to sell it to us unless they supplied the disks as well. 
http://www.sgi.com/products/storage/modular/jbod.html

Is anyone aware of a 4U enclosure with 60 or more disks, beyond those mentioned 
in this thread or the DataON? We are trying to build super-high-density storage 
racks.

Cedric Tineo


On 13 Nov 2012, at 21:08, Peter Tripp wrote:

> [original message quoted in full; snipped]


Re: [zfs-discuss] Hardware Recommendations: SAS2 JBODs

2012-11-13 Thread Edmund White
I'm quite happy with my HP D2700 and D2600 enclosures. I'm using them with
LSI 9205-8e controllers and NexentaStor, but MPxIO definitely works. You
will need to find HP drive trays... They're available on eBay.

-- 
Edmund White

On 11/13/12 2:08 PM, "Peter Tripp"  wrote:

>[original message quoted in full; snipped]



Re: [zfs-discuss] Hardware Recommendations: SAS2 JBODs

2012-11-13 Thread Edmund White
Also consider looking at the HP MDS600 enclosure.
http://h10010.www1.hp.com/wwpc/us/en/sm/WF05a/12169-304616-3930445-3930445-3930445-3936271.html

They're available for <$1,000 US on eBay, fully ZFS-friendly, and hold 70 x
3.5" disks. The only downside is that they were introduced as 3G SAS
units; I cannot tell if 6G is possible now (via firmware or otherwise).

-- 
Edmund White

On 11/13/12 2:08 PM, "Peter Tripp"  wrote:

>[original message quoted in full; snipped]



[zfs-discuss] Intel DC S3700

2012-11-13 Thread Mauricio Tavares



Re: [zfs-discuss] Intel DC S3700

2012-11-13 Thread Mauricio Tavares
Trying again:

Intel just released those drives. Any thoughts on how nicely they will
play in a ZFS/hardware RAID setup?


Re: [zfs-discuss] Intel DC S3700

2012-11-13 Thread Jim Klimov

On 2012-11-13 22:56, Mauricio Tavares wrote:

> Trying again:
>
> Intel just released those drives. Any thoughts on how nicely they will
> play in a ZFS/hardware RAID setup?


Seems interesting - fast, assumed reliable and consistent in its IOPS
(according to the marketing talk), and it addresses power-loss reliability
(according to the datasheet):

* Endurance rating: 10 drive writes/day over 5 years while running the
JESD218 standard workload

* The Intel SSD DC S3700 supports testing of the power loss capacitor,
which can be monitored using the following SMART attribute: (175, AFh).
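
To put the endurance rating in perspective: for a hypothetical 200GB model,
10 drive writes/day over 5 years works out to roughly 200GB x 10 x 365 x 5,
or about 3.65PB written. And if smartmontools runs on your platform,
peeking at that capacitor attribute might look like this (the device path
is made up):

  # dump the vendor SMART attributes and look for ID 175 (AFh)
  smartctl -A /dev/rdsk/c5t1d0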

Somewhat affordably priced, too (at least in the volume market, for shops
that buy hardware by the cubic meter ;)

http://newsroom.intel.com/community/intel_newsroom/blog/2012/11/05/intel-announces-intel-ssd-dc-s3700-series--next-generation-data-center-solid-state-drive-ssd

http://download.intel.com/newsroom/kits/ssd/pdfs/Intel_SSD_DC_S3700_Product_Specification.pdf

All in all, I can't come up with anything offensive against it quickly ;)
One possible nit regards the ratings being geared towards 4KB blocks
(which is not unusual with SSDs), so it may be further from the announced
performance with other block sizes - i.e. when caching ZFS metadata.

Thanks for bringing it into the spotlight; I hope the more savvy posters
here will give it a better review.

//Jim


Re: [zfs-discuss] Hardware Recommendations: SAS2 JBODs

2012-11-13 Thread Richard Elling
On Nov 13, 2012, at 12:08 PM, Peter Tripp  wrote:

> [original message quoted in full; snipped]

I've used all of the above and all of the DataON systems, too (hi Rocky!).
No real complaints, though as others have noted, the SuperMicro gear
tends to require more work to get going.
 -- richard


--
richard.ell...@richardelling.com
+1-760-896-4422


Re: [zfs-discuss] Intel DC S3700

2012-11-13 Thread Freddie Cash
Anandtech.com has a thorough review of it. Performance is consistent
(within 10-15% IOPS) across the lifetime of the drive, it has capacitors to
flush the RAM cache to disk, and it doesn't store user data in the cache.
It's also cheaper per GB than the 710 it replaces.

On 2012-11-13 3:32 PM, "Jim Klimov"  wrote:

> [Jim's message quoted in full; snipped]


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-13 Thread Dan Swartzendruber

Well, I think I give up for now.  I spent quite a few hours over the last
couple of days trying to get the GNOME desktop working on bare-metal OI,
followed by VirtualBox.  Supposedly that works in headless mode with RDP for
management, but it was nothing but fail for me.  I found quite a few posts
on various forums from people complaining that RDP with external auth
doesn't work (or not reliably), and that was my experience.  The final straw
was when I rebooted the OI server as part of cleaning things up, and... it
hung.  The last line in the verbose boot log is 'ucode0 is /pseudo/ucode@0'.
I power-cycled it to no avail.  I even tried a backup BE from hours earlier,
to no avail; likely whatever was bunged happened prior to that.  If I could
get something like Xen or KVM running reliably for a headless setup, I'd be
willing to give it a try, but for now, no...



Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-13 Thread Ian Collins

On 11/14/12 15:20, Dan Swartzendruber wrote:

> [Dan's message quoted in full; snipped]


SmartOS.

--
Ian.



Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-13 Thread Jim Klimov

On 2012-11-14 03:20, Dan Swartzendruber wrote:

> Well, I think I give up for now.  I spent quite a few hours over the last
> couple of days trying to get the GNOME desktop working on bare-metal OI,
> followed by VirtualBox.  Supposedly that works in headless mode with RDP
> for management, but it was nothing but fail for me.  I found quite a few
> posts on various forums from people complaining that RDP with external
> auth doesn't work (or not reliably), and that was my experience.



I can't say I've used VirtualBox RDP extensively, certainly not in the
newer 4.x series yet. For my tasks it sufficed to switch the VM from
headless to GUI mode and back via savestate, as automated by my script
from vboxsvc ("vbox.sh -s vmname startgui" for a VM already configured
as a vboxsvc SMF service).
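
For reference, the bare VirtualBox CLI route (without vboxsvc) is roughly
this - a sketch assuming a 4.x install with the VRDE extension pack, and a
hypothetical VM named "myvm":

  VBoxManage modifyvm myvm --vrde on --vrdeport 3389  # enable built-in RDP
  VBoxManage startvm myvm --type headless             # run with no local GUI
  VBoxManage controlvm myvm savestate                 # park it; can restart in GUI mode later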

> The final straw was when I rebooted the OI server as part of cleaning
> things up, and... it hung.  The last line in the verbose boot log is
> 'ucode0 is /pseudo/ucode@0'.  I power-cycled it to no avail.  I even
> tried a backup BE from hours earlier, to no avail; likely whatever was
> bunged happened prior to that.  If I could get something like Xen or KVM
> running reliably for a headless setup, I'd be willing to give it a try,
> but for now, no...

I can't say much about OI desktop problems either - it works for me
(along with the VBox 4.2.0 release), suboptimally due to a lack of
drivers, but reliably.

Try booting with the "-k" option to load the kmdb debugger as well -
maybe the system will enter it upon getting stuck (it does so instead of
rebooting when it panics), and you can find some more details there...
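
Concretely, on OI's legacy GRUB that means pressing 'e' on the boot entry
and appending the flags to the kernel line - a sketch; the exact lines can
vary by install:

  kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS -k -v
  module$ /platform/i86pc/$ISADIR/boot_archive

With -k, a panic drops into kmdb instead of silently rebooting, and on a
hang you may be able to break into the debugger from the console; -v gives
the verbose boot messages Dan already captured.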

//Jim


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-13 Thread Edmund White
What was wrong with the suggestion to use VMware ESXi and Nexenta or
OpenIndiana to do this?

-- 
Edmund White

On 11/13/12 8:20 PM, "Dan Swartzendruber"  wrote:

> [Dan's message quoted in full; snipped]



Re: [zfs-discuss] LUN expansion choices

2012-11-13 Thread Fajar A. Nugraha
On Wed, Nov 14, 2012 at 1:35 AM, Brian Wilson
 wrote:
> So it depends on your setup. In your case, if it's at all painful to grow the
> LUNs, what I'd probably do is allocate new 4TB LUNs and replace your 2TB
> LUNs with them one at a time with zpool replace, waiting for the resilver to
> finish each time. With autoexpansion on,

Yup, that's the gotcha. AFAIK autoexpand is off by default. You should
be able to use "zpool online -e" to force the expansion, though.

> you should get the additional capacity as soon as the resilver for each one
> is done, and each old 2TB LUN should be reclaimable as soon as it has been
> resilvered out.

Minor correction: the additional capacity is only usable after a top-level
vdev is completely replaced. In the case of a stripe of mirrors, that's as
soon as every disk in one mirror has been replaced. In the case of raidzN,
it's when every disk in the raidz vdev has been replaced.
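
A quick sketch of what that means for a stripe of mirrors (device names
made up): the extra space only shows up once *both* sides of a mirror sit
on 4TB LUNs:

  zpool replace tank c1t0d0 c3t0d0   # first half of mirror-0 -> 4TB LUN
  # capacity unchanged when this resilver finishes...
  zpool replace tank c1t1d0 c3t1d0   # second half of mirror-0 -> 4TB LUN
  # ...but once this one completes (with autoexpand on), mirror-0
  # contributes 4TB instead of 2TB
  zpool list tank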

-- 
Fajar