On Wed, Nov 14, 2012 at 1:35 AM, Brian Wilson wrote:
> So it depends on your setup. In your case, if it's at all painful to grow the
> LUNs, what I'd probably do is allocate new 4TB LUNs - and replace your 2TB
> LUNs with them one at a time with zpool replace, and wait for the resilver to
> finish.
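For reference, that replace-and-resilver cycle is only a couple of zpool
commands. A minimal sketch, assuming a pool named "tank" and placeholder
device names (your SAN's LUN names will differ):

  # Let the pool grow on its own once every vdev has been upgraded
  zpool set autoexpand=on tank

  # Swap one 2TB LUN for a new 4TB LUN
  zpool replace tank c0t2000000000000001d0 c0t4000000000000001d0

  # Watch the resilver; wait for it to finish before the next replace
  zpool status tank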
What was wrong with the suggestion to use VMware ESXi and Nexenta or
OpenIndiana to do this?
--
Edmund White
Well, I think I give up for now. I spent quite a few hours over the last
couple of days trying to get the GNOME desktop working on bare-metal OI,
followed by VirtualBox. Supposedly that works in headless mode with RDP for
management, but nothing but fail for me. Found quite a few posts on various
forums
Anandtech.com has a thorough review of it. Performance is consistent
(within 10-15% on IOPS) across the lifetime of the drive, it has capacitors
to flush the RAM cache to disk on power loss, and it doesn't store user data
in the cache. It's also cheaper per GB than the 710 it replaces.
On 2012-11-13 3:32 PM, "Jim Klimov" wrote:
On 2012-11-13 22:56, Mauricio Tavares wrote:
> Trying again:
> Intel just released those drives. Any thoughts on how nicely they will
> play in a ZFS/hardware RAID setup?
Seems interesting - fast, assumed reliable and consistent in its IOPS
(according to the marketing talk), and it addresses power-loss reliability.
Also consider looking at the HP MDS600 enclosure.
http://h10010.www1.hp.com/wwpc/us/en/sm/WF05a/12169-304616-3930445-3930445-3930445-3936271.html
They're available for under $1,000 US on eBay, fully ZFS-friendly, and hold
70 x 3.5" disks. The only downside is that they were introduced as 3G SAS
units. I
I'm quite happy with my HP D2700 and D2600 enclosures. I'm using them with
LSI 9205-8e controllers and NexentaStor, but MPxIO definitely works. You
will need to find HP drive trays… They're available on eBay.
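For anyone trying to reproduce the multipath part: on illumos-derived
systems such as NexentaStor, MPxIO is typically enabled with stmsboot and
verified with mpathadm. A sketch only - the driver name below assumes LSI
SAS HBAs on mpt_sas:

  # Enable MPxIO for mpt_sas HBAs (stmsboot will ask to reboot)
  stmsboot -D mpt_sas -e

  # After the reboot, each dual-pathed disk should list two paths
  mpathadm list lu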
--
Edmund White
Peter,
Could you please give some info or links to clarify the "SATA tunneling
protocol nonsense" issues?
Also, have you considered Supermicro's 45-drive 4U enclosure, the
SC847E26-RJBOD1? It's cheap too, at around $2,000, and based on LSI
backplanes anyway.
SGI has a nice and crazy 81(!) 3.5" disks
I've had perfect success with the SuperMicro SC847E26-RJBOD1. It has two
backplanes and holds 45 3.5" drives. It's built as a JBOD, so it has
everything that is needed.
We've got a SC847E26-RJBOD1. It takes a bit of getting used to that you
have to wire it yourself (plus you need to buy a pair of internal
SFF-8087 cables to connect the back and front backplanes - it's incredible
that SuperMicro doesn't provide those out of the box), but other than that,
we've never had a problem with it.
Peter,
You may consider a DataON JBOD. Lots of users are using DataON JBODs for
ZFS storage. Yes, an LSI SAS HBA is the best choice for ZFS.
DataON DNS-1600 4U 24 Bay 6G SAS JBOD
http://dataonstorage.com/dns-1600
DataON DNS-1640 2U 24 Bay 6G SAS JBOD
http://dataonstorage.com/dns-1640
DataON DNS-1660
Hi folks,
I'm in the market for a couple of JBODs. Up until now I've been relatively
lucky with finding hardware that plays very nicely with ZFS. All my gear
currently in production uses LSI SAS controllers (3801e, 9200-16e, 9211-8i)
with backplanes powered by LSI SAS expanders (Sun x4250, Su
Not sure if this will make it to the list, but I'll try...
On 11/13/12, Peter Tribble wrote:
> Given storage provisioned off a SAN (I know, but sometimes that's
> what you have to work with), what's the best way to expand a pool?
>
> Specifically, I can either grow existing LUNs, or add new LUNs.
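Both routes end up as a single zpool command; a rough sketch with a
hypothetical pool "tank" and placeholder LUN names:

  # Route 1: grow an existing LUN on the array, then expand it in place
  zpool online -e tank c0t6000000000000018d0

  # Route 2: leave the old LUNs alone and add new ones as another vdev
  zpool add tank mirror c0t6000000000000019d0 c0t600000000000001Ad0

One practical difference: growing LUNs keeps the existing vdev layout,
while adding LUNs creates a new top-level vdev that ZFS will favor for new
writes until space usage evens out.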