Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-10 Thread Tim Cook
On Tue, Nov 10, 2009 at 5:15 PM, Tim Cook wrote: > > > On Tue, Nov 10, 2009 at 10:55 AM, Richard Elling > wrote: > >> >> On Nov 10, 2009, at 1:25 AM, Orvar Korvar wrote: >> >> Does this mean that there are no driver changes in marvell88sx2, between >>> b125 and b126? If no driver changes, then

Re: [zfs-discuss] CIFS crashes when accessed with Adobe Photoshop Elements 6.0 via Vista

2009-11-10 Thread scott smallie
Upgrading to the latest dev release fixed the problem for me.

Re: [zfs-discuss] zfs eradication

2009-11-10 Thread Trevor Pretty
Excuse me for mentioning it, but why not just use the format command? From format(1M): analyze - Run read, write, compare tests, and data purge. The data purge function implements the National Computer Security Center Guide to Understanding Data Remanence (NCSC-TG-025 version 2) Overwriting
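
For anyone who hasn't driven it before, a purge run from format(1M) is an interactive session along these lines (the disk number here is hypothetical):

    # format
    Specify disk (enter its number): 2
    format> analyze
    analyze> purge      (runs the NCSC-TG-025 overwrite passes)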

Re: [zfs-discuss] zfs eradication

2009-11-10 Thread David Magda
On Nov 10, 2009, at 20:55, Mark A. Carlson wrote: Typically this is called "Sanitization" and could be done as part of an evacuation of data from the disk in preparation for removal. You would want to specify the patterns to write and the number of passes. See also "remanence": http:

[zfs-discuss] Zpool hosed during testing

2009-11-10 Thread Ron Mexico
This didn't occur on a production server, but I thought I'd post this anyway because it might be interesting. I'm currently testing a ZFS NAS machine consisting of a Dell R710 with two Dell 5/E SAS HBAs. Right now I'm in the middle of torture testing the system, simulating drive failures, expor

Re: [zfs-discuss] zfs eradication

2009-11-10 Thread Mark A. Carlson
Typically this is called "Sanitization" and could be done as part of an evacuation of data from the disk in preparation for removal. You would want to specify the patterns to write and the number of passes. -- mark Brian Kolaci wrote: Hi, I was discussing the common practice of disk eradicati

Re: [zfs-discuss] Fwd: [ilugb] Does ZFS support Hole Punching/Discard

2009-11-10 Thread Tim Cook
On Tue, Nov 10, 2009 at 6:51 PM, George Janczuk <geor...@objectconsulting.com.au> wrote: > I've been following the use of SSDs with ZFS and HSPs for some time now, and > I am working (in an architectural capacity) with one of our IT guys to set > up our own ZFS HSP (using a J4200 connected to an X

[zfs-discuss] zfs eradication

2009-11-10 Thread Brian Kolaci
Hi, I was discussing the common practice of disk eradication used by many firms for security. I was thinking this may be a useful feature of ZFS: an option to eradicate data as it's removed, meaning after the last reference/snapshot is done and a block is freed, then write the eradicati
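
To make the proposal concrete, a sketch of how such a knob might be exposed; note that 'eradicate' is not a real ZFS property, it is purely illustrative:

    # zfs set eradicate=on tank/secure     (hypothetical property)
    # zfs get eradicate tank/secure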

Re: [zfs-discuss] Fwd: [ilugb] Does ZFS support Hole Punching/Discard

2009-11-10 Thread George Janczuk
I've been following the use of SSD with ZFS and HSPs for some time now, and I am working (in an architectural capacity) with one of our IT guys to set up our own ZFS HSP (using a J4200 connected to an X2270). The best practice seems to be to use an Intel X25-M for the L2ARC (Readzilla) and an I

Re: [zfs-discuss] PSARC recover files?

2009-11-10 Thread BJ Quinn
I believe it was physical corruption of the media. The strange thing is, the last time it happened to me it also managed to replicate the bad blocks over to my backup server via SNDR... And yes, it IS read-only, and a scrub will NOT actively clean up corruption in snapshots. It will DETECT

Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-10 Thread Tim Cook
On Tue, Nov 10, 2009 at 10:55 AM, Richard Elling wrote: > > On Nov 10, 2009, at 1:25 AM, Orvar Korvar wrote: > > Does this mean that there are no driver changes in marvell88sx2, between >> b125 and b126? If no driver changes, then it means that we both had extreme >> bad luck with our drives, becau

Re: [zfs-discuss] Odd sparing problem

2009-11-10 Thread Tim Cook
On Tue, Nov 10, 2009 at 4:38 PM, Cindy Swearingen wrote: > Hi Tim, > > I'm not sure I understand this output completely, but have you > tried detaching the spare? > > Cindy > > Hey Cindy, Detaching did in fact solve the issue. During my previous issues when the spare kicked in, it actually autom
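
For anyone hitting the same thing, returning a stuck spare to the pool's spare list is a one-liner (pool and device names here are hypothetical):

    # zpool detach tank c8t5d0
    # zpool status tank      (the spare should show as AVAIL again)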

Re: [zfs-discuss] Odd sparing problem

2009-11-10 Thread Cindy Swearingen
Hi Tim, I'm not sure I understand this output completely, but have you tried detaching the spare? Cindy On 11/10/09 09:21, Tim Cook wrote: So, I currently have a pool with 12 disks raid-z2 (12+2). As you may have seen in the other thread, I've been having on and off issues with b126 randomly

Re: [zfs-discuss] This is the scrub that never ends...

2009-11-10 Thread Bill Sommerfeld
On Fri, 2009-09-11 at 13:51 -0400, Will Murnane wrote: > On Thu, Sep 10, 2009 at 13:06, Will Murnane wrote: > > On Wed, Sep 9, 2009 at 21:29, Bill Sommerfeld wrote: > >>> Any suggestions? > >> > >> Let it run for another day. > > I'll let it keep running as long as it wants this time. > scrub: s

[zfs-discuss] zle compression ?

2009-11-10 Thread roland
By some posting on the zfs-fuse mailing list, I came across "zle" compression, which seems to be part of the dedup commit from some days ago: http://hg.genunix.org/onnv-gate.hg/diff/e2081f502306/usr/src/uts/common/fs/zfs/zle.c --snip-- 31 + * Zero-length encoding. This is a fast and simple algorithm to el
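
A minimal sketch of trying it out, assuming you are on a build that includes the dedup putback (zle is not present in b126 or earlier; names hypothetical):

    # zfs set compression=zle tank/data
    # zfs get compression,compressratio tank/data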

[zfs-discuss] Rebooting while Scrubing in snv_126

2009-11-10 Thread Francois Laagel
Greetings folks, Something funny happened to my amd64 box last night. I shut it down while a scrub was running on rpool. This was not a fast reboot or anything like that. Since then, the system does not come up any more. I can still boot in single user mode but "/sbin/zfs mount -va" hangs whil

Re: [zfs-discuss] PSARC recover files?

2009-11-10 Thread Nicolas Williams
On Tue, Nov 10, 2009 at 03:33:22PM -0600, Tim Cook wrote: > You're telling me a scrub won't actively clean up corruption in snapshots? > That sounds absolutely absurd to me. Depends on how much redundancy you have in your pool. If you have no mirrors, no RAID-Z, and no ditto blocks for data, well
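
Ditto blocks for data can be requested per dataset through the copies property; a minimal sketch (dataset name hypothetical):

    # zfs set copies=2 tank/important

Note that only blocks written after the property is set get the extra copies; existing data is not rewritten.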

Re: [zfs-discuss] PSARC recover files?

2009-11-10 Thread Tim Cook
On Tue, Nov 10, 2009 at 3:19 PM, A Darren Dunham wrote: > On Tue, Nov 10, 2009 at 03:04:24PM -0600, Tim Cook wrote: > > No. The whole point of a snapshot is to keep a consistent on-disk state > > from a certain point in time. I'm not entirely sure how you managed to > > corrupt blocks that are

Re: [zfs-discuss] PSARC recover files?

2009-11-10 Thread A Darren Dunham
On Tue, Nov 10, 2009 at 03:04:24PM -0600, Tim Cook wrote: > No. The whole point of a snapshot is to keep a consistent on-disk state > from a certain point in time. I'm not entirely sure how you managed to > corrupt blocks that are part of an existing snapshot though, as they'd be > read-only. Ph

Re: [zfs-discuss] PSARC recover files?

2009-11-10 Thread Tim Cook
On Tue, Nov 10, 2009 at 2:40 PM, BJ Quinn wrote: > Say I end up with a handful of unrecoverable bad blocks that just so happen > to be referenced by ALL of my snapshots (in some file that's been around > forever). Say I don't care about the file or two in which the bad blocks > exist. Is there

Re: [zfs-discuss] PSARC recover files?

2009-11-10 Thread BJ Quinn
Say I end up with a handful of unrecoverable bad blocks that just so happen to be referenced by ALL of my snapshots (in some file that's been around forever). Say I don't care about the file or two in which the bad blocks exist. Is there any way to purge those blocks from the pool (and all sna
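
Short of a block-level purge, which ZFS does not offer, the only way to free such blocks is to remove the affected file and destroy every snapshot that still references it; a sketch with hypothetical names:

    # rm /tank/fs/damaged-file
    # zfs list -t snapshot -r tank/fs      (find the referencing snapshots)
    # zfs destroy tank/fs@2009-10-01
    # zfs destroy tank/fs@2009-11-01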

Re: [zfs-discuss] zfs inotify?

2009-11-10 Thread Jeremy Kitchen
On Nov 10, 2009, at 10:23 AM, Andrew Daugherity wrote: For example: rsync -avn --delete-before /export/ims/.zfs/snapshot/zfs-auto-snap_daily-2009-11-09-1900/ /export/ims/.zfs/snapshot/zfs-auto-snap_daily-2009-11-08-1900/ [...] If you cared to see changes within files (I don't), toss t

Re: [zfs-discuss] zfs inotify?

2009-11-10 Thread Andrew Daugherity
Thanks for the info, although the audit system seems a lot more complex than what I need. It would still be nice if they fixed bart to work on large filesystems, though. Turns out the solution was right under my nose -- rsync in dry-run mode works quite well as a "snapshot diff" tool. I'll share thi
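
The command, as quoted in Jeremy Kitchen's reply above, compares two snapshot directories with a dry run:

    rsync -avn --delete-before \
        /export/ims/.zfs/snapshot/zfs-auto-snap_daily-2009-11-09-1900/ \
        /export/ims/.zfs/snapshot/zfs-auto-snap_daily-2009-11-08-1900/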

Re: [zfs-discuss] ZFS + fsck

2009-11-10 Thread Joerg Moellenkamp
Hi, >> *everybody* is interested in the flag days page. Including me. >> Asking me to "raise the priority" is not helpful. > From my perspective, it's a surprise that 'everybody' is interested, as I'm > not seeing a lot of people complaining that the flag day page is not updating. > Only a co

Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-10 Thread Richard Elling
On Nov 10, 2009, at 1:25 AM, Orvar Korvar wrote: Does this mean that there are no driver changes in marvell88sx2, between b125 and b126? If no driver changes, then it means that we both had extreme bad luck with our drives, because we both had checksum errors? And my disks were brand new.

Re: [zfs-discuss] ZFS and oracle on SAN disks

2009-11-10 Thread Richard Elling
On Nov 10, 2009, at 5:32 AM, Ian Garbutt wrote: I believe the best practice is to use separate disks/zpool for oracle database files as the record size needs to be set the same as the db block size - when using a jbod or internal disks. recordsize is not a pool property, it is a dataset (zv
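
Since recordsize is per dataset, a single pool can host the database files and everything else side by side; a minimal sketch (pool and dataset names hypothetical):

    # zfs create -o recordsize=8k santank/oradata    (match db_block_size)
    # zfs create santank/export                      (keeps the 128k default)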

Re: [zfs-discuss] RAID-Z and virtualization

2009-11-10 Thread Joe Auty
Toby Thain wrote: > On 8-Nov-09, at 12:20 PM, Joe Auty wrote: >> Tim Cook wrote: >>> On Sun, Nov 8, 2009 at 2:03 AM, besson3c wrote: >>> ... >>> Why not just convert the VM's to run in virtualbox and run Solaris >>> directly on the hardware?

Re: [zfs-discuss] ..and now ZFS send dedupe

2009-11-10 Thread Roman Naumenko
James C. McPherson wrote, On 09-11-09 04:40 PM: Roman Naumenko wrote: Interesting stuff. By the way, is there a place to watch the latest news like this on zfs/opensolaris? RSS maybe? You could subscribe to onnv-not...@opensolaris.org... James C. McPherson -- Senior Kernel Software Engine

[zfs-discuss] Odd sparing problem

2009-11-10 Thread Tim Cook
So, I currently have a pool with 12 disks raid-z2 (12+2). As you may have seen in the other thread, I've been having on and off issues with b126 randomly dropping drives. Well, I think after changing several cables, and doing about 20 reboots plugging one drive in at a time (I only booted to the

Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-10 Thread Cindy Swearingen
Hi Orvar, Correct, I don't see any marvell88sx2 driver changes between b125 and b126. So far, only you and Tim are reporting these issues. Generally, we see bugs filed by the internal test teams if they see similar problems. I will try to reproduce the RAIDZ checksum errors separately from the marve

Re: [zfs-discuss] Finding SATA cards for ZFS; was Lundman home NAS

2009-11-10 Thread Robin Bowes
On 01/09/09 08:26, James Andrewartha wrote: > Jorgen Lundman wrote: >>> The mv8 is a marvell based chipset, and it appears there are no >>> Solaris drivers for it. There doesn't appear to be any movement from >>> Sun or marvell to provide any either. >> >> Do you mean specifically Marvell 6480 dri

[zfs-discuss] FreeNAS 0.7 with zfs out

2009-11-10 Thread Eugen Leitl
Apparently went live on 6th November. This isn't FreeBSD 8.x zfs, but at least raidz2 is there. http://www.freenas.org/ FreeNAS 0.7 (Khasadar) Sunday, 21 June 2009 Major changes: * Add ability to configure the login shell for a user. * Upgrade Samba to 3.0.37. * Upgrade transmissio
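
For reference, creating a raidz2 pool on FreeBSD-style device names would look something like this (device names hypothetical):

    # zpool create tank raidz2 ad4 ad6 ad8 ad10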

Re: [zfs-discuss] ZFS + fsck

2009-11-10 Thread Nigel Smith
Hi James, James C. McPherson wrote: > *everybody* is interested in the flag days page. Including me. > Asking me to "raise the priority" is not helpful. From my perspective, it's a surprise that 'everybody' is interested, as I'm not seeing a lot of people complaining that the flag day page is not

[zfs-discuss] ZFS and oracle on SAN disks

2009-11-10 Thread Ian Garbutt
I believe the best practice is to use separate disks/zpool for oracle database files as the record size needs to be set the same as the db block size - when using a jbod or internal disks. If the server is using a large SAN LUN, can anybody see any issues if there is only one zpool and the datas

Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-10 Thread Orvar Korvar
Does this mean that there are no driver changes in marvell88sx2, between b125 and b126? If no driver changes, then it means that we both had extreme bad luck with our drives, because we both had checksum errors? And my disks were brand new. How probable is this? Something is weird here. What is
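
For anyone comparing notes, the per-device checksum counters and any affected files are visible with (pool name hypothetical):

    # zpool status -v tank      (see the CKSUM column)
    # fmdump -eV                (the FMA error telemetry behind it)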

Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-10 Thread Tim Cook
On Mon, Nov 9, 2009 at 2:51 PM, Cindy Swearingen wrote: > Hi, > > I can't find any bug-related issues with marvell88sx2 in b126. > > I looked over Dave Hollister's shoulder while he searched for > marvell in his webrevs of this putback and nothing came up: > > > driver change with build 126? > not