Re: [zfs-discuss] ZFS RAID-Z1 Degraded Array won't import

2010-04-12 Thread Richard Elling
On Apr 12, 2010, at 8:01 PM, Peter Tripp wrote: > Hi folks, > > At home I run OpenSolaris x86 with a 4 drive Raid-Z (4x1TB) zpool and it's > not in great shape. A fan stopped spinning and soon after the top disk > failed (cause you know, heat rises). Naturally, OpenSolaris and ZFS didn't > s

[zfs-discuss] Fileserver help.

2010-04-12 Thread Daniel
Hi all. I'm pretty new to the whole OpenSolaris thing; I've been doing a bit of research but can't find anything on what I need. I am thinking of making myself a home file server running OpenSolaris with ZFS, utilizing RAID-Z. I was wondering if there is anything I can get that will allow Wind
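For anyone landing on this thread: the usual way to let Windows machines reach a ZFS dataset is the in-kernel CIFS service. A minimal sketch, assuming 2009.06-era package names and an illustrative pool/dataset called tank/share:

    # install and start the in-kernel CIFS server (package names from 2009.06; may differ on other builds)
    pkg install SUNWsmbskr SUNWsmbs
    svcadm enable -r smb/server

    # publish a dataset as an SMB share named "media" (dataset and share names are illustrative)
    zfs set sharesmb=name=media tank/share

Workgroup or domain membership, and enabling SMB password hashes via pam_smb_passwd, are separate steps not shown here.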

Re: [zfs-discuss] snapshots taking too much space

2010-04-12 Thread Peter Tripp
Though the rsync switch is probably the answer to your problem... You might want to consider upgrading to Nexenta 3.0, switching checksums from fletcher to sha256, and then enabling block-level deduplication. You'd probably use fewer GB per snapshot even with rsync running inefficiently.
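A minimal sketch of what that change would look like, assuming an illustrative dataset name and a pool version new enough for dedup:

    # dataset name illustrative; dedup needs a recent pool version (see zpool upgrade -v)
    zfs set checksum=sha256 tank/backups
    zfs set dedup=on tank/backups

    # only blocks written after the change are checksummed/deduped; existing data must be rewritten
    zpool list tank        # the DEDUP column shows the pool-wide dedup ratio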

[zfs-discuss] problem in recovering back data

2010-04-12 Thread MstAsg
Hello, I have a problem regarding ZFS. After installing Solaris 10 x86, it worked for a while, and then a problem occurred in Solaris and it could not be loaded! Even failsafe mode didn't resolve the problem. I put in an OpenSolaris CD, booted from it, and ran the below

Re: [zfs-discuss] ZFS RAID-Z1 Degraded Array won't import

2010-04-12 Thread Daniel Carosone
On Mon, Apr 12, 2010 at 08:01:27PM -0700, Peter Tripp wrote: > So I decided I would attach the disks to 2nd system (with working fans) where > I could backup the data to tape. So here's where I got dumb...I ran 'zpool > export'. Of course, I never actually ended up attaching the disks to another
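For reference, re-importing an exported pool (on the original box or a second one) comes down to zpool import; the pool name below is illustrative:

    # list importable pools visible on the attached disks
    zpool import

    # import by name; -d points at a different device directory if the disks moved
    zpool import -d /dev/dsk tank

    # a raidz1 pool with one disk missing should still import in a DEGRADED state;
    # -f overrides the "last accessed by another system" check after moving disks between hosts
    zpool import -f tank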

Re: [zfs-discuss] How to Catch ZFS error with syslog?

2010-04-12 Thread matthew patton
> > Please can this be on by default? Please? > > There are some situations where many reports may be sent > per second so > it is not necessarily a wise idea for this to be enabled by > default. Every implementation of Syslog worth a damn has automatic message throttling and coalescing. This

[zfs-discuss] ZFS RAID-Z1 Degraded Array won't import

2010-04-12 Thread Peter Tripp
Hi folks, At home I run OpenSolaris x86 with a 4 drive Raid-Z (4x1TB) zpool and it's not in great shape. A fan stopped spinning and soon after the top disk failed (cause you know, heat rises). Naturally, OpenSolaris and ZFS didn't skip a beat; I didn't even notice it was dead until I saw the

Re: [zfs-discuss] Create 1 pool from 3 exising pools in mirror configuration

2010-04-12 Thread Harry Putnam
Daniel Carosone writes: [...] snipped welcome info... thanks. > Oh, another thing, just to make sure before you start, since this is > evidently older hardware: are you running a 32-bit or 64-bit kernel? > The 32-bit kernel won't use drives larger than 1TB. It's an Athlon 64, so I'm good there.
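A quick way to check which kernel is actually running, for anyone unsure:

    isainfo -kv    # reports e.g. "64-bit amd64 kernel modules" or "32-bit i386 kernel modules"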

Re: [zfs-discuss] Create 1 pool from 3 exising pools in mirror configuration

2010-04-12 Thread Brandon High
On Mon, Apr 12, 2010 at 4:17 PM, Harry Putnam wrote: > If it's possible to add a mirrored set as a vdev to a zpool like what > seems to be happening in (3) above, why wouldn't I just add the two > new disks as a mirrored vdev to z2 to start off, rather than additional > mirrors, and never remove the
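The distinction being discussed, sketched with illustrative device names:

    # "zpool add ... mirror" grows z2 with a second top-level mirrored vdev (more capacity)
    zpool add z2 mirror c2t0d0 c2t1d0

    # "zpool attach" instead adds another side to an existing vdev (more redundancy, same capacity)
    zpool attach z2 c1t0d0 c2t0d0

    zpool status z2

The choice matters because a top-level vdev cannot be removed from a pool once added (as of the builds current at the time).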

Re: [zfs-discuss] Secure delete?

2010-04-12 Thread Eric D. Mudama
On Mon, Apr 12 at 10:50, Bob Friesenhahn wrote: On Mon, 12 Apr 2010, Tomas Ögren wrote: For flash to overwrite a block, it needs to clear it first.. so yes, clearing it out in the background (after erasing) instead of just before the timing critical write(), you can make stuff go faster. Yes

Re: [zfs-discuss] Create 1 pool from 3 exising pools in mirror configuration

2010-04-12 Thread Daniel Carosone
On Mon, Apr 12, 2010 at 06:17:47PM -0500, Harry Putnam wrote: > But, I'm too unskilled in solaris and zfs admin to be risking a total > melt down if I try that before gaining a more thorough understanding. Grab virtualbox or something similar and set yourself up a test environment. In general, an
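Besides a VirtualBox guest, file-backed vdevs are a cheap way to rehearse pool surgery without touching real disks; paths and sizes below are arbitrary:

    # four 256 MB backing files make a throwaway raidz pool
    mkfile 256m /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4
    zpool create testpool raidz /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4

    # practice the export/import, attach/detach or send/receive steps here, then throw it away
    zpool destroy testpool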

Re: [zfs-discuss] ZFS Send/Receive Question

2010-04-12 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > > I am trying to duplicate a filesystem from one zpool to another zpool. > I don't care so much about snapshots on the destination side...I am > more trying to duplicate how RSYNC would copy a filesystem, and then > only copy incre

Re: [zfs-discuss] Create 1 pool from 3 exising pools in mirror configuration

2010-04-12 Thread Harry Putnam
Brandon High writes: > On Mon, Apr 12, 2010 at 7:38 AM, Harry Putnam wrote: >> But as someone suggested it might be better to get two more bigger >> drives. 1TB or 1.5TB would handle all my data on one pair. >> >> Then I guess after moving all the data to a single zpool made up of >> those 2 new

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-04-12 Thread Edward Ned Harvey
> Carson Gaspar wrote: > > Does anyone who understands the internals better than care to take a > stab at what happens if: > > - ZFS writes data to /dev/foo > - /dev/foo looses power and the data from the above write, not yet > flushed to rust (say a field tech pulls the wrong drive...) > - /dev/

Re: [zfs-discuss] Secure delete?

2010-04-12 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Eric D. Mudama > > I believe the reason strings of bits "leak" on rotating drives you've > overwritten (other than grown defects) is because of minute off-track > occurrences while writing (vibr

Re: [zfs-discuss] How to Catch ZFS error with syslog ?

2010-04-12 Thread Bob Friesenhahn
On Tue, 13 Apr 2010, Daniel Carosone wrote: On Mon, Apr 12, 2010 at 09:32:50AM -0600, Tim Haley wrote: Try explicitly enabling fmd to send to syslog in /usr/lib/fm/fmd/plugins/syslog-msgs.conf Wow, so useful, yet so well hidden I never even knew to look for it. Please can this be on by defau

Re: [zfs-discuss] How to Catch ZFS error with syslog ?

2010-04-12 Thread Daniel Carosone
On Mon, Apr 12, 2010 at 09:32:50AM -0600, Tim Haley wrote: > Try explicitly enabling fmd to send to syslog in > /usr/lib/fm/fmd/plugins/syslog-msgs.conf Wow, so useful, yet so well hidden I never even knew to look for it. Please can this be on by default? Please? -- Dan.
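For anyone looking for the knob: the fmd syslog plugin is configured with setprop lines in that file. A sketch from memory; check the exact property names against the comments shipped in the file itself:

    # /usr/lib/fm/fmd/plugins/syslog-msgs.conf  (property names from memory, verify locally)
    setprop syslogd true
    setprop facility LOG_DAEMON

    # make fmd reread its plugin configuration
    svcadm restart svc:/system/fmd:default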

Re: [zfs-discuss] Create 1 pool from 3 exising pools in mirror configuration

2010-04-12 Thread Brandon High
On Mon, Apr 12, 2010 at 7:38 AM, Harry Putnam wrote: > But as someone suggested it might be better to get two more bigger > drives. 1TB or 1.5TB would handle all my data on one pair. > > Then I guess after moving all the data to a single zpool made up of > those 2 new disks, I could then add the fr

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-04-12 Thread Carson Gaspar
Carson Gaspar wrote: Miles Nordin wrote: "re" == Richard Elling writes: How do you handle the case when a hotplug SATA drive is powered off unexpectedly with data in its write cache? Do you replay the writes, or do they go down the ZFS hotplug write hole? If zfs never got a positive resp

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-04-12 Thread Carson Gaspar
Miles Nordin wrote: "re" == Richard Elling writes: How do you handle the case when a hotplug SATA drive is powered off unexpectedly with data in its write cache? Do you replay the writes, or do they go down the ZFS hotplug write hole? If zfs never got a positive response to a cache flush,

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-04-12 Thread Miles Nordin
> "re" == Richard Elling writes: > "dc" == Daniel Carosone writes: re> In general, I agree. How would you propose handling nested re> mounts? force-unmount them. (so that they can be manually mounted elsewhere, if desired, or even in the same place with the middle filesystem mi

Re: [zfs-discuss] Secure delete?

2010-04-12 Thread Mattias Pantzare
On Mon, Apr 12, 2010 at 19:19, David Magda wrote: > On Mon, April 12, 2010 12:28, Tomas Ögren wrote: >> On 12 April, 2010 - David Magda sent me these 0,7K bytes: >> >>> On Mon, April 12, 2010 10:48, Tomas Ögren wrote: >>> >>> > For flash to overwrite a block, it needs to clear it first.. so yes, >

[zfs-discuss] ZFS Send/Receive Question

2010-04-12 Thread Robert Loper
I am trying to duplicate a filesystem from one zpool to another zpool. I don't care so much about snapshots on the destination side...I am more trying to duplicate how RSYNC would copy a filesystem, and then only copy incrementals from the source side to the destination side in subsequent runs unt
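The pattern usually suggested for this, sketched with illustrative pool and snapshot names:

    # first run: full stream
    zfs snapshot srcpool/data@base
    zfs send srcpool/data@base | zfs receive -F dstpool/data

    # subsequent runs: send only what changed since the last snapshot both sides have
    zfs snapshot srcpool/data@run2
    zfs send -i srcpool/data@base srcpool/data@run2 | zfs receive dstpool/data

The incremental only applies cleanly if the destination still has @base unmodified; zfs receive -F will roll back local changes first.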

Re: [zfs-discuss] snapshots taking too much space

2010-04-12 Thread Arne Jansen
> NAME                                          USED  AVAIL  REFER  MOUNTPOINT
> bpool/backups/oracle_bac...@20100411-023130   479G      -   681G  -
> bpool/backups/oracle_bac...@20100411-104428   515G      -   721G  -
> bpool/backups/oracle_bac...@20100412-144700

Re: [zfs-discuss] Secure delete?

2010-04-12 Thread Bob Friesenhahn
On Mon, 12 Apr 2010, James Van Artsdalen wrote: TRIM is not a Windows 7 command but rather a device command. I only called it the "Windows 7 TRIM command" since that is how almost all of the original reports in the media described it. It seems best to preserve this original name (as used by

Re: [zfs-discuss] Secure delete?

2010-04-12 Thread James Van Artsdalen
My point is not to advocate the TRIM command - those issues are already well-known - but rather to suggest that the code that sends TRIM is also a good place to securely erase data on other media, such as a hard disk. TRIM is not a Windows 7 command but rather a device command. FreeBSD's CAM layer a

[zfs-discuss] snapshots taking too much space

2010-04-12 Thread Paul Archer
NAME                                          USED  AVAIL  REFER  MOUNTPOINT
bpool/backups/oracle_bac...@20100411-023130   479G      -   681G  -
bpool/backups/oracle_bac...@20100411-104428   515G      -   721G  -
bpool/backups/oracle_bac...@20100412-144700      0      -   734G  -

Thanks for any help, Paul
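For anyone debugging similar growth: the per-snapshot USED column only counts space unique to that snapshot, so the dataset-level breakdown is often more telling. The commands below use an illustrative (unobfuscated) dataset name:

    # space unique to each snapshot
    zfs list -t snapshot -r -o name,used,refer bpool/backups

    # on builds with the usedby* properties, this splits out where the dataset's space goes
    zfs list -o space bpool/backups/oracle_backups    # dataset name illustrative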

Re: [zfs-discuss] Secure delete?

2010-04-12 Thread David Magda
On Mon, April 12, 2010 12:28, Tomas Ögren wrote: > On 12 April, 2010 - David Magda sent me these 0,7K bytes: > >> On Mon, April 12, 2010 10:48, Tomas Ögren wrote: >> >> > For flash to overwrite a block, it needs to clear it first.. so yes, >> > clearing it out in the background (after erasing) inst

Re: [zfs-discuss] Why would zfs have too many errors when underlying raid array is fine?

2010-04-12 Thread Kyle McDonald
On 4/12/2010 9:10 AM, Willard Korfhage wrote: > I upgraded to the latest firmware. When I rebooted the machine, the pool was > back, with no errors. I was surprised. > > I will work with it more, and see if it stays good. I've done a scrub, so now > I'll put more data on it and stress it some mor

Re: [zfs-discuss] Secure delete?

2010-04-12 Thread Mattias Pantzare
>> OpenSolaris needs support for the TRIM command for SSDs.  This command is >> issued to an SSD to indicate that a block is no longer in use and the SSD >> may erase it in preparation for future writes. > > There does not seem to be very much `need' since there are other ways that a > SSD can know

Re: [zfs-discuss] Secure delete?

2010-04-12 Thread Tomas Ögren
On 12 April, 2010 - David Magda sent me these 0,7K bytes: > On Mon, April 12, 2010 10:48, Tomas Ögren wrote: > > On 12 April, 2010 - Bob Friesenhahn sent me these 0,9K bytes: > > > >> Zfs is designed for high thoughput, and TRIM does not seem to improve > >> throughput. Perhaps it is most useful

Re: [zfs-discuss] Areca ARC-1680 on OpenSolaris 2009.06?

2010-04-12 Thread Dave Pooser
> What do you mean by overpromised and underdelivered? Well, when I did a quick Google search this was one of the first results I got. (I know, a different card-- but the same company, and if they f

Re: [zfs-discuss] Secure delete?

2010-04-12 Thread Bob Friesenhahn
On Mon, 12 Apr 2010, David Magda wrote: Except that ZFS does not overwrite blocks because it is copy-on-write. At some time in the (possibly distant) future the ZFS block might become free and then the Windows 7 TRIM command could be used to try to pre-erase it. This might help an intermitt

Re: [zfs-discuss] Secure delete?

2010-04-12 Thread Bob Friesenhahn
On Mon, 12 Apr 2010, Tomas Ögren wrote: For flash to overwrite a block, it needs to clear it first.. so yes, clearing it out in the background (after erasing) instead of just before the timing critical write(), you can make stuff go faster. Yes of course. Properly built SSDs include considera

Re: [zfs-discuss] Secure delete?

2010-04-12 Thread David Magda
On Mon, April 12, 2010 10:48, Tomas Ögren wrote: > On 12 April, 2010 - Bob Friesenhahn sent me these 0,9K bytes: > >> Zfs is designed for high thoughput, and TRIM does not seem to improve >> throughput. Perhaps it is most useful for low-grade devices like USB >> dongles and compact flash. > > For

Re: [zfs-discuss] How to Catch ZFS error with syslog ?

2010-04-12 Thread Tim Haley
On 04/12/10 09:05 AM, J James wrote: I have a simple mirror pool with 2 disks. I pulled out one disk to simulate a failed drive. zpool status shows that the pool is in DEGRADED state. I want syslog to log these types of ZFS errors. I have syslog running and logging all sorts of errors to a log s

[zfs-discuss] How to Catch ZFS error with syslog ?

2010-04-12 Thread J James
I have a simple mirror pool with 2 disks. I pulled out one disk to simulate a failed drive. zpool status shows that the pool is in DEGRADED state. I want syslog to log these types of ZFS errors. I have syslog running and logging all sorts of errors to a log server. But this failed disk in ZFS pool

Re: [zfs-discuss] Secure delete?

2010-04-12 Thread Tomas Ögren
On 12 April, 2010 - Bob Friesenhahn sent me these 0,9K bytes: > On Sun, 11 Apr 2010, James Van Artsdalen wrote: > >> OpenSolaris needs support for the TRIM command for SSDs. This command >> is issued to an SSD to indicate that a block is no longer in use and >> the SSD may erase it in preparati

Re: [zfs-discuss] Secure delete?

2010-04-12 Thread Bob Friesenhahn
On Sun, 11 Apr 2010, James Van Artsdalen wrote: OpenSolaris needs support for the TRIM command for SSDs. This command is issued to an SSD to indicate that a block is no longer in use and the SSD may erase it in preparation for future writes. There does not seem to be very much `need' since t

Re: [zfs-discuss] Create 1 pool from 3 exising pools in mirror configuration

2010-04-12 Thread Harry Putnam
Daniel Carosone writes: > For Harry's benefit, the recipe we're talking about here is roughly as > follows. Your pools z2 and z3, we will merge into z2. and > are the current members of z3. [...] snipped very handy outline Thank you. That kind of walk through is really helpful here. I have
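The snipped recipe boils down to something like the following; pool, snapshot, and device names are purely illustrative, and this is a compressed sketch rather than the exact steps from the original message:

    # 1. copy everything from z3 into z2
    zfs snapshot -r z3@move
    zfs send -R z3@move | zfs receive -d z2

    # 2. once verified, free z3's disks
    zpool destroy z3

    # 3. add the freed disks to z2 as another mirrored top-level vdev
    zpool add z2 mirror c3t0d0 c3t1d0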

Re: [zfs-discuss] Secure delete?

2010-04-12 Thread Pablo Méndez Hernández
Hi James: On Mon, Apr 12, 2010 at 06:45, James Van Artsdalen wrote: > OpenSolaris needs support for the TRIM command for SSDs.  This command is > issued to an SSD to indicate that a block is no longer in use and the SSD may > erase it in preparation for future writes. That's what this RFE is a

Re: [zfs-discuss] Secure delete?

2010-04-12 Thread Eric D. Mudama
On Sun, Apr 11 at 22:45, James Van Artsdalen wrote: PS. It is faster for an SSD to write a block of 0xFF than 0 and it's possible some might make that optimization. That's why I suggest erase-to-ones rather than erase-to-zero. Do you have any data to back this up? While I understand the unde

Re: [zfs-discuss] Why would zfs have too many errors when underlying raid array is fine?

2010-04-12 Thread David Magda
On Mon, April 12, 2010 09:10, Willard Korfhage wrote: > If the firmware upgrade fixed everything, then I've got a question about > which I am better off doing: keep it as-is, with the raid card providing > redundancy, or turn it all back into pass-through drives and let ZFS > handle it, making th

Re: [zfs-discuss] Why would zfs have too many errors when underlying raid array is fine?

2010-04-12 Thread Willard Korfhage
I upgraded to the latest firmware. When I rebooted the machine, the pool was back, with no errors. I was surprised. I will work with it more, and see if it stays good. I've done a scrub, so now I'll put more data on it and stress it some more. If the firmware upgrade fixed everything, then I've
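Useful commands for that kind of stress-and-verify cycle, with an illustrative pool name:

    zpool scrub tank           # walk and verify every block's checksum
    zpool status -v tank       # per-device error counters and any files with unrecoverable errors
    fmdump -eV | tail -40      # low-level error telemetry FMA has recorded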

Re: [zfs-discuss] Why would zfs have too many errors when underlying raid array is fine?

2010-04-12 Thread Tonmaus
Upgrading the firmware is a good idea, as there are other issues with Areca controllers that have only been solved recently; i.e. 1.42 is probably still affected by a problem with SCSI labels that may cause trouble importing a pool. -Tonmaus

Re: [zfs-discuss] Why would zfs have too many errors when underlying raid array is fine?

2010-04-12 Thread Willard Korfhage
I was wondering if the controller itself has problems. My card's firmware is version 1.42, and the firmware on the website is up to 1.48. I see the firmware released last September says "Fix Opensolaris+ZFS to add device to mirror set in JBOD or passthrough mode" and Fix SATA raid controller

[zfs-discuss] CIFS in production and my experience so far, advice needed

2010-04-12 Thread charles
I am looking at opensolaris with ZFS and CIFS shares as an option for large scale production use with Active Directory. I have successfully joined the opensolaris CIFS server to our Windows AD test domain and created an SMB share that the Windows server 2003 can see. I have also created test us
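For others setting this up, the usual sequence for AD-backed CIFS shares looks roughly like this; domain, account, and dataset names are illustrative:

    # join the Active Directory domain (prompts for the account's password)
    smbadm join -u Administrator windomain.example.com

    # publish a dataset as an SMB share
    zfs set sharesmb=name=projects tank/projects

    # map a Windows identity to a local user for file ownership (one example rule)
    idmap add winuser:jsmith@windomain.example.com unixuser:jsmith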

Re: [zfs-discuss] Why would zfs have too many errors when underlying raid array is fine?

2010-04-12 Thread Willard Korfhage
Just a message 7 hours earlier that an IRQ being shared by drivers with different interrupt levels might result in reduced performance.

Re: [zfs-discuss] Why would zfs have too many errors when underlying raid array is fine?

2010-04-12 Thread Tonmaus
Hi, > I started off by setting up all the disks to be > pass-through disks, and tried to make a raidz2 array > using all the disks. It would work for a while, then > suddenly every disk in the array would have too many > errors and the system would fail. I had exactly the same experience with my