[zfs-discuss] PowerEdge R510 with PERC H200/H700 with ZFS

2010-08-07 Thread Geoff Nordli
Anyone have any experience with a R510 with the PERC H200/H700 controller
with ZFS?

My perception is that Dell doesn't play well with OpenSolaris. 

Thanks,

Geoff 



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] PowerEdge R510 with PERC H200/H700 with ZFS

2010-08-07 Thread Geoff Nordli


>-Original Message-
>From: Brian Hechinger [mailto:wo...@4amlunch.net]
>Sent: Saturday, August 07, 2010 8:10 AM
>To: Geoff Nordli
>Subject: Re: [zfs-discuss] PowerEdge R510 with PERC H200/H700 with ZFS
>
>On Sat, Aug 07, 2010 at 08:00:11AM -0700, Geoff Nordli wrote:
>> Anyone have any experience with a R510 with the PERC H200/H700
>> controller with ZFS?
>
>Not that particular setup, but I do run Solaris on a Precision 690 with PERC 6i
>controllers.
>
>> My perception is that Dell doesn't play well with OpenSolaris.
>
>What makes you say that?  I've run Solaris on quite a few Dell boxes and have
>never had any issues.
>
>-brian
>--
 
Hi Brian.

I am glad to hear that, because I would prefer to use a Dell box.

Is there a JBOD mode with the PERC 6i?

It is funny how one forms these views as one gathers information.

Geoff   
 





Re: [zfs-discuss] PowerEdge R510 with PERC H200/H700 with ZFS

2010-08-07 Thread Terry Hull

> From: Geoff Nordli 
> Date: Sat, 7 Aug 2010 08:39:46 -0700
> To: 
> Subject: Re: [zfs-discuss] PowerEdge R510 with PERC H200/H700 with ZFS
> 
> 
> 
>> -Original Message-
>> From: Brian Hechinger [mailto:wo...@4amlunch.net]
>> Sent: Saturday, August 07, 2010 8:10 AM
>> To: Geoff Nordli
>> Subject: Re: [zfs-discuss] PowerEdge R510 with PERC H200/H700 with ZFS
>> 
>> On Sat, Aug 07, 2010 at 08:00:11AM -0700, Geoff Nordli wrote:
>>> Anyone have any experience with a R510 with the PERC H200/H700
>>> controller with ZFS?
>> 
>> Not that particular setup, but I do run Solaris on a Precision 690 with PERC 6i
>> controllers.
>> 
>>> My perception is that Dell doesn't play well with OpenSolaris.
>> 
>> What makes you say that?  I've run Solaris on quite a few Dell boxes and have
>> never had any issues.
>> 
>> -brian
>> --
>  
> Hi Brian.
> 
> I am glad to hear that, because I would prefer to use a dell box.
> 
> Is there a JBOD mode with the PERC 6i?
> 
> It is funny how sometimes one forms these views as you gather information.
> 
> Geoff   

It is just that lots of the PERC controllers do not do JBOD very well.  I've
worked around it several times by making a single-drive RAID 0 for each disk.
Unfortunately, that means the server carries a lot of RAID hardware that is not
utilized very well.  Also, ZFS loves to see lots of spindles, and Dell boxes
tend not to have many drive bays compared with what you can build at a given
price point.  Of course, then you have warranty / service issues to consider.
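The per-drive RAID 0 workaround above can be scripted.  A minimal sketch,
assuming an LSI-based PERC managed with MegaCli; the enclosure ID (32), slot
range (0-11), and adapter (-a0) are placeholders for illustration (check yours
with `MegaCli -EncInfo -aAll`).  It only prints the commands, so nothing
destructive runs until you review and execute them yourself:

```shell
# Emit one single-drive RAID-0 virtual-disk command per slot.
# Enclosure/slot/adapter IDs are assumptions, not real values.
emit_raid0_cmds() {
  for slot in 0 1 2 3 4 5 6 7 8 9 10 11; do
    echo "MegaCli -CfgLdAdd -r0 [32:${slot}] WB Direct -a0"
  done
}
emit_raid0_cmds
```

Once the twelve virtual disks show up as c#t#d# devices, a plain `zpool
create` across them gives ZFS one "spindle" per physical drive, though the
controller's cache still sits in the data path.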

--
Terry Hull
Network Resource Group, Inc.




Re: [zfs-discuss] PowerEdge R510 with PERC H200/H700 with ZFS

2010-08-07 Thread Geoff Nordli


>-Original Message-
>From: Terry Hull [mailto:t...@nrg-inc.com]
>Sent: Saturday, August 07, 2010 1:12 PM
>
>> From: Geoff Nordli 
>> Date: Sat, 7 Aug 2010 08:39:46 -0700
>>
>>> From: Brian Hechinger [mailto:wo...@4amlunch.net]
>>> Sent: Saturday, August 07, 2010 8:10 AM
>>>
>>> On Sat, Aug 07, 2010 at 08:00:11AM -0700, Geoff Nordli wrote:
 Anyone have any experience with a R510 with the PERC H200/H700
 controller with ZFS?
>>>
>>> Not that particular setup, but I do run Solaris on a Precision 690 with PERC 6i
>>> controllers.
>>>
 My perception is that Dell doesn't play well with OpenSolaris.
>>>
>>> What makes you say that?  I've run Solaris on quite a few Dell boxes and have
>>> never had any issues.
>>>
>>> -brian
>>
>> Hi Brian.
>>
>> I am glad to hear that, because I would prefer to use a dell box.
>>
>> Is there a JBOD mode with the PERC 6i?
>>
>> It is funny how sometimes one forms these views as you gather
>> information.
>>
>> Geoff
>
>It is just that lots of the PERC controllers do not do JBOD very well.  I've
>done it several times making a RAID 0 for each drive.  Unfortunately, that
>means the server has lots of RAID hardware that is not utilized very well.
>Also, ZFS loves to see lots of spindles, and Dell boxes tend not to have
>lots of drive bays in comparison to what you can build at a given price
>point.  Of course then you have warranty / service issues to consider.
>
>--
>Terry Hull

Terry, you are right, the part that was really missing with the Dell was the
lack of spindles.  It seems the R510 can have up to 12 spindles.  

The online configurator only lets you select SLC SSDs, which are a lot more
expensive than the MLC versions.  It would be nice to use MLC, since that
works fine for L2ARC.
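If the configurator won't sell an MLC drive, adding one after purchase is a
one-liner; losing a cache device is harmless to the pool, which is exactly
what makes cheap MLC acceptable for L2ARC.  A sketch (the pool name and
device name are placeholders):

```shell
# Attach an SSD as an L2ARC (cache) device; c2t1d0 is a placeholder.
zpool add tank cache c2t1d0
# Verify it appears under the "cache" heading.
zpool status tank
```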

I believe they have an onboard SD flash connector too.  It would be great to
be able to install the base OS onto a flash card and not waste two drives.  

Are you using the Broadcom or Intel NICs?  
 
For sure, the benefit of buying name brand is the warranty/service side of
things, which is important to me.  I don't want to spend any time worrying
about or fixing boxes.

Thanks,

Geoff 



Re: [zfs-discuss] zpool 'stuck' after failed zvol destroy and reboot

2010-08-07 Thread Roy Sigurd Karlsbakk
- Original Message -
> Greetings, all!
> 
> I've recently jumped into OpenSolaris after years of using Gentoo for
> my primary server OS, and I've run into a bit of trouble on my main
> storage zpool. Reading through the archives, it seems like the
> symptoms I'm seeing are fairly common though the causes seem to vary a
> bit. Also, it seems like many of the issues were fixed at or around
> snv99 while I'm running dev-134.
> 
> The trouble started when I created and subsequently tried to destroy a
> 2TB zvol. The zpool hosting it has compression & dedup enabled on the
> root.

The current dedup code is said to be good with a truckload of memory and 
sufficient L2ARC, but at 4GB of RAM, it sucks quite badly. I've done some 
testing with 134 and dedup on a 12TB box, and even removing small datasets 
(<1TB) may take a long time. If this happens to you, and you get an 
(unexpected?) reboot, let ZFS spend hours (or days) mounting the filesystems, 
and it'll probably be ok after some time. Last time this happened to me, the 
box hung for some seven hours. I've heard of others talking about days.
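To put rough numbers on why 4GB falls over: each unique block in the dedup
table is commonly quoted at roughly 320 bytes of core.  The figure below is a
back-of-envelope sketch (the 320-byte entry size and 128K average block size
are assumptions; actual overhead varies by build):

```python
# Worst-case DDT sizing: assume nothing actually dedups, so every
# block in the pool needs its own table entry resident somewhere.
def ddt_ram_gib(pool_bytes, avg_block=128 * 1024, entry_bytes=320):
    unique_blocks = pool_bytes // avg_block
    return unique_blocks * entry_bytes / 1024 ** 3

TIB = 1024 ** 4
print(ddt_ram_gib(12 * TIB))  # -> 30.0 GiB for a full 12 TiB pool
```

So a full 12TB pool wants on the order of 30GB of ARC/L2ARC just for the
dedup table, which is why a 4GB box ends up grinding for hours.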

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
[Translated from Norwegian:] In all pedagogy it is essential that the
curriculum be presented intelligibly.  It is an elementary imperative for all
pedagogues to avoid excessive use of idioms of foreign origin.  In most
cases, adequate and relevant synonyms exist in Norwegian.


Re: [zfs-discuss] PowerEdge R510 with PERC H200/H700 with ZFS

2010-08-07 Thread Terry Hull



> From: Geoff Nordli 
> Date: Sat, 7 Aug 2010 14:11:37 -0700
> To: Terry Hull , 
> Subject: RE: [zfs-discuss] PowerEdge R510 with PERC H200/H700 with ZFS
[stuff deleted]
> 
> Terry, you are right, the part that was really missing with the Dell was the
> lack of spindles.  It seems the R510 can have up to 12 spindles.
> 
> The online configurator only allows you to select SLC SSDs, which are a lot
> more expensive than the MLC versions.  It would be nice to do MLC since that
> works fine for L2ARC.
> 
> I believe they have an onboard SD flash connector too.  It would be great to
> be able to install the base OS onto a flash card and not waste two drives.
> 
> Are you using the Broadcom or Intel NICs?
>  
> For sure the benefit of buying name brand is the warranty/service side of
> things, which is important to me.  I don't want to spend any time
> worrying/fixing boxes.

I understand that one.

I have been using both Intel and Broadcom NICs successfully.   My gut tells
me I like the Intel better, but I can't say that is because I have had
trouble with the Broadcom.  It is just a personal preference that I probably
can't justify.   

--
Terry 




Re: [zfs-discuss] zpool 'stuck' after failed zvol destroy and reboot

2010-08-07 Thread Zachary Bedell
It's alive!!  Some 20 hours after starting, the zpool import finished, and all 
data is alive & well.  

I ordered the extra RAM, so that'll no doubt help in the future.  I'm also in 
the process of de-Xen'ing the server I had running under xVM, so Solaris will 
get the whole 8GB to itself.  Finally, I turned off compression & dedup on all 
the datasets and used zfs send to dump them to a clean pool that hasn't and 
won't see dedup.

Lesson learned on dedup...  Toy home servers need not apply. =)

I need to do a bit of benchmarking on compression on the new drives, as 
decompressing everything expanded several of the datasets a bit more than I 
would have liked.  I might turn it back on selectively, as long as it's only 
dedup that causes `zfs destroy` to take an eternity.

Thanks all for the calm words.  Turned out I just needed to wait it out, but 
I'm not very good at waiting when sick storage arrays are involved. =)

-Zac
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] zpool 'stuck' after failed zvol destroy and reboot

2010-08-07 Thread Roy Sigurd Karlsbakk
- Original Message -
> It's alive!! Some 20 hours after starting, the zpool import finished,
> and all data is alive & well.
> 
> I ordered the extra RAM, so that'll no doubt help in the future. I'm
> also in the process of de-Xen'ing the server I had running under xVM,
> so Solaris will get the whole 8GB to itself. Finally, I turned off
> compression & dedup on all the datasets, used zfs send to dump them to
> a clean pool that hasn't and won't see dedup.
> 
> Lesson learned on dedup... Toy home servers need not apply. =)

Compression is safe, though

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/


Re: [zfs-discuss] problem with zpool import - zil and cache drive are not displayed?

2010-08-07 Thread Darren Taylor
>
> zpool import -fF -R / tank
> 

As above, Victor managed to mock up a log device and get the pool to a state 
where it could be imported again.  Awesome stuff.  I'm also interested in how, 
if it's not too complex. :)

Unfortunately, though, it looks like something is still not quite right.  Even 
though running "zpool import -fR / tank" suggests it is going to recover and 
lose 91 seconds, when I actually run the command "zpool import -fF -R / tank" 
the machine locks up and reboots.  I've tried this multiple times with the 
same results.  Hmm.
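One step worth trying before the full rewind is the documented dry-run form
of recovery mode, which reports what -F would discard without modifying the
pool (a sketch; pool name and altroot as in the messages above):

```shell
# Dry run: show what a -F rewind would discard, without touching the pool.
zpool import -fFn -R / tank
# If the dry-run output looks sane, attempt the real rewind.
zpool import -fF -R / tank
```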


Re: [zfs-discuss] PowerEdge R510 with PERC H200/H700 with ZFS

2010-08-07 Thread valrh...@gmail.com
I've been running OpenSolaris on my Dell Precision Workstation T7500 for 9 
months, and it works great. It's my default desktop operating system, and I've 
had zero problems with hardware compatibility.

I also have installed EON 0.600 on my Dell PowerEdge T410 (not so different 
from your R510). A few words of caution:

1. Beware of the onboard controllers. The "RAID" controller on that motherboard 
only works in Windows; neither Linux nor OpenSolaris can recognize drives 
attached to it at all. So I was stuck running in "ATA" mode at the beginning, 
which is awful in terms of performance.

2. I'd also recommend avoiding the PERC cards, in particular because they make 
drives attached to them impossible to transport to another system.  Instead, I 
use the SAS 6i/R controller.  That's built into the motherboard on the PW 
T7500, and I got one separately for the PE T410.  That works well, and is 
completely fine with OpenSolaris.  I'd recommend those, because then you can 
be sure to get the cabling from Dell (which, in the case of the PowerEdge, was 
completely nonstandard).  And if the card fails, they'll replace it ASAP, 
which isn't necessarily the case with other vendors' cards.

So aside from the RAID controller and cabling issues on the PE T410, I've had 
nothing but good experiences in terms of Dell Precision workstations and 
PowerEdge servers, running OpenSolaris.