Re: [zfs-discuss] Zpool Import Hanging

2011-01-18 Thread Richard Elling
On Jan 17, 2011, at 8:22 PM, Repetski, Stephen wrote:
> On Mon, Jan 17, 2011 at 22:08, Ian Collins wrote:
>> On 01/18/11 04:00 PM, Repetski, Stephen wrote:
>>> Hi All, I believe this has been asked before, but I wasn’t able to find too much information about the subject. Long story sho

[zfs-discuss] incorrect vdev added to pool

2011-01-18 Thread Gal Buki
Hi, I have a pool with a raidz2 vdev. Today I accidentally added a single drive to the pool, so I now have a pool that partially has no redundancy, as this new vdev is a single drive. Is there a way to remove the vdev and replace it with a new raidz2 vdev? If not, what can I do to do damage control and a
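
For context, a hedged reconstruction of how this kind of accident usually happens (pool and device names hypothetical); zpool normally warns about the mismatched replication level and requires -f, and the -n flag would have shown the resulting layout without committing it:

  zpool add -n tank c0t5d0   # dry run: prints the would-be layout, changes nothing
  zpool add -f tank c0t5d0   # the accident: -f overrides the replication-level warning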

Re: [zfs-discuss] configuration

2011-01-18 Thread Gal Buki
With two drives it makes more sense to use a mirror than a raidz configuration. You will have the same amount of usable space, and mirroring gives you better performance, AFAIK.
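
A minimal sketch of the two-drive mirror (pool and device names hypothetical):

  zpool create tank mirror c0t0d0 c0t1d0   # two-way mirror: roughly the same usable
                                           # capacity as a 2-disk raidz1, better reads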

Re: [zfs-discuss] Status of zpool remove in raidz and non-redundant stripes

2011-01-18 Thread Gal Buki
I second that. This is exactly what happened to me. There is a bug (ID 4852783) that is in state "6-Fix Understood", but it has been unchanged since February 2010.

Re: [zfs-discuss] configuration

2011-01-18 Thread Piotr Tarnowski
You can also make 250 GB slices (partitions) and create a raidz of 3x250 GB and a mirror of 2x1750 GB (one or more). A mirror has better performance for write operations; raidz should be faster for reads. Regards -- Piotr Tarnowski /DrFugazi/ http://www.drfugazi.eu.org/
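
A hedged illustration of this slicing scheme (pool names and slice numbers hypothetical); each 2 TB disk would carry a 250 GB slice and a ~1750 GB slice:

  zpool create fast raidz c0t0d0s3 c0t1d0s3 c0t2d0s0   # 3 x 250 GB raidz
  zpool create big mirror c0t0d0s4 c0t1d0s4            # 2 x ~1750 GB mirror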

[zfs-discuss] Surprise Thread Preemptions

2011-01-18 Thread Kishore Kumar Pusukuri
Hi, I would like to know which threads will be preempted by which on my OpenSolaris machine. Therefore, I ran a multithreaded program "myprogram" with 32 threads on my 24-core Solaris machine. I made sure that each thread of my program has the same priority (priority zero), so that we can red

[zfs-discuss] Is my bottleneck RAM?

2011-01-18 Thread Michael Armstrong
Hi guys, sorry in advance if this is somewhat of a lowly question. I've recently built a ZFS test box based on NexentaStor with 4x Samsung 2 TB drives connected via SATA-II in a raidz1 configuration, with dedup enabled, compression off, and pool version 23. From running bonnie++ I get the following res

[zfs-discuss] kernel panic on USB disk power loss

2011-01-18 Thread Reginald Beardsley
I was copying a filesystem using "zfs send | zfs receive" and inadvertently unplugged the power to the USB disk that was the destination. Much to my horror, this caused the system to panic. I recovered fine on rebooting, but it *really* unnerved me. I don't find anything about this online. I
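
For reference, a minimal sketch of the kind of copy involved (dataset, snapshot, and pool names hypothetical):

  zfs snapshot tank/data@backup
  zfs send tank/data@backup | zfs receive usbpool/data   # destination pool on the USB disk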

[zfs-discuss] configuration

2011-01-18 Thread Trusty Twelve
Hello, I'm going to build a home server. The system is deployed on an 8 GB USB flash drive. I have two identical 2 TB HDDs and a 250 GB one. Could you please recommend a ZFS configuration for my set of hard drives? 1) pool1: mirror 2 TB x 2; pool2: 250 GB (or maybe add this drive to pool1?) 2) pool1: mi
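
A hedged sketch of option 1 as described (pool and device names hypothetical):

  zpool create tank mirror c0t0d0 c0t1d0   # the two 2 TB drives, mirrored
  zpool create scratch c0t2d0              # the 250 GB drive as a separate, non-redundant pool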

Re: [zfs-discuss] HP ProLiant N36L

2011-01-18 Thread Trusty Twelve
I've successfully installed NexentaStor 3.0.4 on this microserver using PXE. Works like a charm.

Re: [zfs-discuss] incorrect vdev added to pool

2011-01-18 Thread Andrew Gabriel
On 01/15/11 11:32 PM, Gal Buki wrote:
> Hi, I have a pool with a raidz2 vdev. Today I accidentally added a single
> drive to the pool. I now have a pool that partially has no redundancy as
> this vdev is a single drive.
> Is there a way to remove the vdev
Not at the moment, as far as I know.
> and
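
One common mitigation, since the vdev can't be removed, is to turn the stray single-disk vdev into a mirror. A hedged sketch (pool and device names hypothetical):

  zpool status tank                 # identify the accidentally added disk
  zpool attach tank c0t5d0 c0t6d0   # attach a second disk to mirror it,
                                    # restoring redundancy for that vdev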

[zfs-discuss] Request for comments: L2ARC, ZIL, RAM, and slow storage

2011-01-18 Thread Karl Wagner
Hi all, this is just an off-the-cuff idea at the moment, but I would like to sound it out. Consider the situation where someone has a large amount of off-site data storage (on the order of hundreds of TB or more). They have a slow network link to this storage. My idea is that this could be used to bu

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-18 Thread Orvar Korvar
"...If this is a general rule, maybe it will be worth considering using SHA512 truncated to 256 bits to get more speed..." Doesn't it need more investigation if truncating 512bit to 256bit gives equivalent security as a plain 256bit hash? Maybe truncation will introduce some bias? -- This messa

Re: [zfs-discuss] Is my bottleneck RAM?

2011-01-18 Thread Richard Elling
On Jan 15, 2011, at 4:21 PM, Michael Armstrong wrote:
> Hi guys, sorry in advance if this is somewhat a lowly question, I've recently
> built a zfs test box based on nexentastor with 4x samsung 2tb drives
> connected via SATA-II in a raidz1 configuration with dedup enabled
> compression off and

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-18 Thread Orvar Korvar
Totally off topic: very interesting. Did you produce some papers on this? Where do you work? Seems like a very fun place to work! BTW, I thought about this. What do you say? Assume I want to compress data and I succeed in doing so. And then I transfer the compressed data. So all the information I

Re: [zfs-discuss] Surprise Thread Preemptions

2011-01-18 Thread Phil Harman
Big subject! You haven't said what your 32 threads are doing, or how you gave them the same priority, or what scheduler class they are running in. However, you only have 24 VCPUs, and (I assume) 32 active threads, so Solaris will try to share resources evenly, and yes, it will preempt one of
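
One way to watch this sharing from userland (a hedged illustration; Solaris microstate accounting assumed): prstat's ICX column counts involuntary context switches, i.e. preemptions, per thread.

  prstat -mL -p $(pgrep myprogram) 5   # per-LWP microstates, 5-second interval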

Re: [zfs-discuss] Is my bottleneck RAM?

2011-01-18 Thread Michael Armstrong
I've since turned off dedup, added another 3 drives, and results have improved to around 148388K/sec on average. Would turning on compression make things more CPU-bound and improve performance further?

On 18 Jan 2011, at 15:07, Richard Elling wrote:
> On Jan 15, 2011, at 4:21 PM, Michael Armstro

Re: [zfs-discuss] Is my bottleneck RAM?

2011-01-18 Thread Torrey McMahon
I've seen a lot of cases where enabling compression helps with systems that are disk-bound. If you've got extra CPU ... give it a shot.

On 1/18/2011 10:11 AM, Michael Armstrong wrote:
> I've since turned off dedup, added another 3 drives and results have
> improved to around 148388K/sec on average
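
Turning it on is a one-liner; note it only affects newly written blocks, and existing data stays uncompressed. A minimal sketch (dataset name hypothetical):

  zfs set compression=on tank/data   # compression=on means lzjb at this pool version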

Re: [zfs-discuss] Surprise Thread Preemptions

2011-01-18 Thread Jim Mauro
Hi Kishore - If memory serves, the kernel uses the preemption mechanism when a thread uses up its time quantum and thus must be forced to give up the CPU. If your "myprogram" threads are compute-bound, I would suspect they are being preempted by other myprogram threads of the same priority due to tim
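
If DTrace is available, the sched provider can confirm this directly (a hedged sketch; assumes the sched:::preempt probe exists on your build):

  dtrace -n 'sched:::preempt { @[execname] = count(); }'   # counts preemptions by executable name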

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-18 Thread Nicolas Williams
On Tue, Jan 18, 2011 at 07:16:04AM -0800, Orvar Korvar wrote:
> BTW, I thought about this. What do you say?
>
> Assume I want to compress data and I succeed in doing so. And then I
> transfer the compressed data. So all the information I transferred is
> the compressed data.
But, then you don't co

Re: [zfs-discuss] HP ProLiant N36L

2011-01-18 Thread Eugen Leitl
On Mon, Jan 17, 2011 at 02:19:23AM -0800, Trusty Twelve wrote:
> I've successfully installed NexentaStor 3.0.4 on this microserver using PXE.
> Works like a charm.
I've got 5 of them today, and for some reason NexentaCore 3.0.1 b134 was unable to write to disks (whether internal USB or the 4x SAT

Re: [zfs-discuss] Is my bottleneck RAM?

2011-01-18 Thread Erik Trimble
On Tue, 2011-01-18 at 15:11 +0000, Michael Armstrong wrote:
> I've since turned off dedup, added another 3 drives and results have improved
> to around 148388K/sec on average, would turning on compression make things
> more CPU bound and improve performance further?
>
> On 18 Jan 2011, at 15:07,

Re: [zfs-discuss] Is my bottleneck RAM?

2011-01-18 Thread Michael Armstrong
Thanks everyone, I think over time I'm gonna update the system to include an SSD for sure. Memory may come later though. Thanks for everyone's responses.

Erik Trimble wrote:
> On Tue, 2011-01-18 at 15:11 +0000, Michael Armstrong wrote:
>> I've since turned off dedup, added another 3 drives and res

Re: [zfs-discuss] Is my bottleneck RAM?

2011-01-18 Thread Erik Trimble
You can't really do that. Adding an SSD for L2ARC will help a bit, but L2ARC storage also consumes RAM to maintain a cache table of what's in the L2ARC. Using 2GB of RAM with an SSD-based L2ARC (even without Dedup) likely won't help you too much vs not having the SSD. If you're going to turn on
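
For scale: adding the cache device itself is trivial; the RAM cost is the catch, since each L2ARC record keeps a header (on the order of a couple hundred bytes) in ARC. A hedged sketch, device name hypothetical:

  zpool add tank cache c2t0d0   # SSD as L2ARC; the headers for every cached
                                # block live in RAM, so a big L2ARC shrinks ARC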

Re: [zfs-discuss] configuration

2011-01-18 Thread Brandon High
On Mon, Jan 17, 2011 at 6:19 AM, Piotr Tarnowski wrote:
> You can also make 250 GB slices (partitions) and create RAIDZ 3x250GB and
> mirror 2x1750GB (one or more).
This configuration doesn't make a lot of sense for redundancy, since it doesn't provide any. It will have poor performance caused b

Re: [zfs-discuss] Is my bottleneck RAM?

2011-01-18 Thread Michael Armstrong
Ah ok, I won't be using dedup anyway, just wanted to try. I'll be adding more RAM though, I guess you can't have too much. Thanks

Erik Trimble wrote:
> You can't really do that.
>
> Adding an SSD for L2ARC will help a bit, but L2ARC storage also consumes
> RAM to maintain a cache table of what's in t

[zfs-discuss] How well does zfs mirror handle temporary disk offlines?

2011-01-18 Thread Philip Brown
Sorry if this is well known.. I tried a bunch of Google searches, but didn't get anywhere useful. The closest I came was http://mail.opensolaris.org/pipermail/zfs-discuss/2009-April/028090.html but that doesn't answer my question, below, regarding zfs mirror recovery. Details of our needs follow. We norm

Re: [zfs-discuss] How well does zfs mirror handle temporary disk offlines?

2011-01-18 Thread Torrey McMahon
On 1/18/2011 2:46 PM, Philip Brown wrote:
> My specific question is, how easily does ZFS handle *temporary* SAN
> disconnects, to one side of the mirror?
> What if the outage is only 60 seconds? 3 minutes? 10 minutes? an hour?
Depends on the multipath drivers and the failure mode. For example, if

Re: [zfs-discuss] How well does zfs mirror handle temporary disk offlines?

2011-01-18 Thread Erik Trimble
On Tue, 2011-01-18 at 14:51 -0500, Torrey McMahon wrote:
> On 1/18/2011 2:46 PM, Philip Brown wrote:
>> My specific question is, how easily does ZFS handle *temporary* SAN
>> disconnects, to one side of the mirror?
>> What if the outage is only 60 seconds?
>> 3 minutes?
>> 10 minutes?

Re: [zfs-discuss] How well does zfs mirror handle temporary disk offlines?

2011-01-18 Thread Chris Banal
Erik Trimble wrote:
> On Tue, 2011-01-18 at 14:51 -0500, Torrey McMahon wrote:
>> On 1/18/2011 2:46 PM, Philip Brown wrote:
>>> My specific question is, how easily does ZFS handle *temporary* SAN
>>> disconnects, to one side of the mirror? What if the outage is only 60
>>> seconds? 3 minutes? 10 minutes? an ho

Re: [zfs-discuss] How well does zfs mirror handle temporary disk offlines?

2011-01-18 Thread Philip Brown
> On Tue, 2011-01-18 at 14:51 -0500, Torrey McMahon wrote:
>
> ZFS's ability to handle "short-term" interruptions depends heavily on the
> underlying device driver.
>
> If the device driver reports the device as "dead/missing/etc" at any
> point, then ZFS is going to require a "zpool replace"

Re: [zfs-discuss] HP ProLiant N36L

2011-01-18 Thread Trusty Twelve
I've installed NexentaStor on an 8 GB USB stick without any problems, so try NexentaStor instead of NexentaCore...

Re: [zfs-discuss] How well does zfs mirror handle temporary disk offlines?

2011-01-18 Thread Erik Trimble
On Tue, 2011-01-18 at 13:34 -0800, Philip Brown wrote:
> > On Tue, 2011-01-18 at 14:51 -0500, Torrey McMahon wrote:
> >
> > ZFS's ability to handle "short-term" interruptions depends heavily on the
> > underlying device driver.
> >
> > If the device driver reports the device as "dead/mi
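
For the case where the driver does ride out the outage, recovery is typically just re-onlining the device and letting the resilver catch up. A hedged sketch (pool and device names hypothetical):

  zpool status tank          # check whether the mirror side shows UNAVAIL/FAULTED
  zpool online tank c1t0d0   # bring the device back into the pool
  zpool clear tank           # clear error counters; the resilver copies only the delta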

Re: [zfs-discuss] configuration

2011-01-18 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Trusty Twelve
>
> Hello, I'm going to build home server. System is deployed on 8 GB USB flash
> drive. I have two identical 2 TB HDD and 250 GB one. Could you please
> recommend me ZFS configur

Re: [zfs-discuss] Request for comments: L2ARC, ZIL, RAM, and slow storage

2011-01-18 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Karl Wagner
>
> Consider the situation where someone has a large amount of off-site data
> storage (of the order of 100s of TB or more). They have a slow network link
> to this storage.
>
> My

Re: [zfs-discuss] How well does zfs mirror handle temporary disk offlines?

2011-01-18 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Erik Trimble
>
> As far as what the resync does: ZFS does "smart" resilvering, in that
> it compares what the "good" side of the mirror has against what the
> "bad" side has, and only copies t