[zfs-discuss] scrub percentage complete decreasing, but without snaps.

2007-12-14 Thread Ian Collins
I've seen the problems with bug 6343667, but not the problem I have at the moment. I started a scrub of a b72 system that doesn't have any recent snapshots (none since the last scrub) and the % complete is cycling:

scrub: scrub in progress, 69.08% done, 0h13m to go
scrub: scrub i
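A minimal way to watch the cycling from a shell, assuming the pool is named "tank" (substitute your own pool name):

  # poll the scrub status line once a minute
  while true; do
      zpool status tank | grep scrub
      sleep 60
  done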

Re: [zfs-discuss] Nice chassis for ZFS server

2007-12-14 Thread can you guess?
> On Dec 14, 2007 1:12 AM, can you guess? <[EMAIL PROTECTED]> wrote:
> > > yes. far rarer and yet home users still see them.
> >
> > I'd need to see evidence of that for current hardware.
>
> What would constitute "evidence"? Do anecdotal tales from home users
> qualify? I have two disks (

Re: [zfs-discuss] Nice chassis for ZFS server

2007-12-14 Thread Casper . Dik
> ... though I'm not familiar with any recent examples in normal desktop
> environments

One example found during early use of zfs in Solaris engineering was a system with a flaky power supply. It seemed to work just fine with ufs but when zfs was installed the sata drives started to show many

[zfs-discuss] zfs snapshot leaking data ?

2007-12-14 Thread Guy
Hello ZFS gurus, I've been using a ZFS server for about a year now (for rsync-based disk backups). The process is quite simple: I back up each fs using rsync. After each filesystem backup, I take a zfs snapshot to freeze the saved data read-only. So I end up with a zfs snapshot f
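For reference, a sketch of the backup loop described above; the host, filesystem name, and snapshot naming scheme are illustrative assumptions:

  #!/bin/sh
  # rsync a client into its ZFS filesystem, then freeze the result
  # with a dated snapshot (all names are placeholders, and the pool
  # is assumed to be mounted at /tank)
  FS=tank/backup/clienta
  rsync -a --delete clienta:/export/home/ /$FS/ && \
      zfs snapshot $FS@`date +%Y%m%d`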

Re: [zfs-discuss] Trial x4500, zfs with NFS and quotas.

2007-12-14 Thread Shawn Ferry
On Dec 14, 2007, at 12:27 AM, Jorgen Lundman wrote:
>
> Shawn Ferry wrote:
>> Jorgen,
>>
>> You may want to try running 'bootadm update-archive'
>>
>> Assuming that your boot-archive problem is an out of date boot-archive
>> message at boot and/or doing a clean reboot to let the system try
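The suggestion boils down to two commands run as root:

  bootadm update-archive   # rebuild the out-of-date boot archive
  init 6                   # then a clean reboot so it gets used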

[zfs-discuss] JBOD performance

2007-12-14 Thread Frank Penczek
Hi all,

we are using the following setup as file server:
---
# uname -a
SunOS troubadix 5.10 Generic_120011-14 sun4u sparc SUNW,Sun-Fire-280R
# prtconf -D
System Configuration: Sun Microsystems sun4u
Memory size: 2048 Megabytes
System Peripherals (Software Nodes):
SUNW,Sun-Fire-280R (driver n
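Not from the original post, but one quick way to put a comparable number on sequential write throughput (path and size are arbitrary):

  # time a 4 GB streaming write through the filesystem
  ptime dd if=/dev/zero of=/tank/testfile bs=1024k count=4096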

Re: [zfs-discuss] ZFS with array-level block replication (TrueCopy, SRDF, etc.)

2007-12-14 Thread Jim Dunham
Steve,

> I have a couple of questions and concerns about using ZFS in an
> environment where the underlying LUNs are replicated at a block
> level using products like HDS TrueCopy or EMC SRDF. Apologies in
> advance for the length, but I wanted the explanation to be clear.
>
> (I do realise

[zfs-discuss] LUN configuration for disk-based backups

2007-12-14 Thread Andrew Chace
Hello, We have a StorageTek FLX280 (very similar to a 6140) with 16 750 GB SATA drives that we would like to use for disk-based backups. I am trying to make an (educated) guess at what the best configuration for the LUNs on the FLX280 might be. I've read, or at least skimmed, most of the "ZF
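Purely as an illustration (the device names are invented, and this assumes the array can present the drives more or less individually), one 16-drive layout that favours resilience is two 7-disk raidz2 sets plus two spares:

  zpool create backup \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 \
      raidz2 c2t8d0 c2t9d0 c2t10d0 c2t11d0 c2t12d0 c2t13d0 c2t14d0 \
      spare c2t7d0 c2t15d0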

Re: [zfs-discuss] JBOD performance

2007-12-14 Thread Richard Elling
Frank Penczek wrote:
> The performance is slightly disappointing. Does anyone have
> a similar setup and can anyone share some figures?
> Any pointers to possible improvements are greatly appreciated.

Use a faster processor or change to a mirrored configuration.
raidz2 can become processor bound in the Reed-Solomon calculations
for the 2nd parity set. You should be able to see this in mpstat, and to
a coarser grain in vmstat.
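To check for that, watch both tools during a sustained write (the interval is arbitrary):

  mpstat 5   # per-CPU: high usr/sys with little idle time suggests CPU bound
  vmstat 5   # coarser: an "id" column near zero tells the same story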

Re: [zfs-discuss] JBOD performance

2007-12-14 Thread Louwtjie Burger
> The throughput when writing from a local disk to the
> zpool is around 30MB/s, when writing from a client

Err.. sorry, the internal storage would be good old 1Gbit FCAL disks @ 10K rpm. Still, not the fastest around ;)

[zfs-discuss] Bugid 6535160

2007-12-14 Thread Vincent Fox
So does anyone have any insight on BugID 6535160?

We have verified on a similar system that ZFS shows big latency in the filebench varmail test.

We formatted the same LUN with UFS and latency went down from 300 ms to 1-2 ms.

http://sunsolve.sun.com/search/document.do?assetkey=1-1-6535160-1

We run
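For anyone wanting to reproduce the comparison, the run looks roughly like this inside filebench (the target directory is a placeholder):

  filebench> load varmail
  filebench> set $dir=/pool/test
  filebench> run 60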

Re: [zfs-discuss] LUN configuration for disk-based backups

2007-12-14 Thread Al Hopper
On Fri, 14 Dec 2007, Andrew Chace wrote: [ reformatted ]

> Hello,
>
> We have a StorageTek FLX280 (very similar to a 6140) with 16 750 GB
> SATA drives that we would like to use for disk-based backups. I am
> trying to make an (educated) guess at what the best configuration
> for the L

Re: [zfs-discuss] Bugid 6535160

2007-12-14 Thread Neil Perrin
Vincent Fox wrote:
> So does anyone have any insight on BugID 6535160?
>
> We have verified on a similar system that ZFS shows big latency in the
> filebench varmail test.
>
> We formatted the same LUN with UFS and latency went down from 300 ms
> to 1-2 ms.

This is such a big difference it makes me

Re: [zfs-discuss] Nice chassis for ZFS server

2007-12-14 Thread can you guess?
> > ... though I'm not familiar with any recent examples in normal
> > desktop environments
>
> One example found during early use of zfs in Solaris engineering was
> a system with a flaky power supply.
>
> It seemed to work just fine with ufs but when zfs was installed the
> sata d

Re: [zfs-discuss] Bugid 6535160

2007-12-14 Thread Vincent Fox
> ) The write cache is non volatile, but ZFS hasn't been configured
> to stop flushing it (set zfs:zfs_nocacheflush = 1).

These are a pair of 2540 with dual controllers, definitely non-volatile cache. We set zfs_nocacheflush=1 and that improved things considerably. ZFS filesystem (2540
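The setting in question goes in /etc/system and takes effect after a reboot; it is only safe when, as here, every pool sits on battery-backed, non-volatile cache:

  * /etc/system: tell ZFS to stop issuing cache-flush requests
  set zfs:zfs_nocacheflush = 1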

Re: [zfs-discuss] Nice chassis for ZFS server

2007-12-14 Thread Will Murnane
On Dec 14, 2007 4:23 AM, can you guess? <[EMAIL PROTECTED]> wrote:
> I assume that you're referring to ZFS checksum errors rather than to
> transfer errors caught by the CRC resulting in retries.

Correct.

> If so, then the next obvious question is, what is causing the ZFS
> checksum errors? An
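One way to see the first class of error directly (pool name assumed):

  zpool status -v tank   # per-device READ/WRITE/CKSUM error counters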

[zfs-discuss] Is round-robin I/O correct for ZFS?

2007-12-14 Thread Gary Mills
I'm testing an iSCSI multipath configuration on a T2000 with two disk devices provided by a Netapp filer. Both the T2000 and the Netapp have two Ethernet interfaces for iSCSI, going to separate switches on separate private networks. The scsi_vhci devices look like this in `format':

1. c4t
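On Solaris 10, mpathadm will report the paths and the current load-balance policy for a scsi_vhci LUN (the device name below is only a placeholder):

  mpathadm list lu
  mpathadm show lu /dev/rdsk/cXtYdZs2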

[zfs-discuss] Update: zpool kernel panics.

2007-12-14 Thread Edward Irvine
Hi Folks,

Begin forwarded message:
> From: Edward Irvine <[EMAIL PROTECTED]>
> Date: 12 December 2007 8:44:57 AM
> To: [EMAIL PROTECTED]
> Subject: Fwd: [zfs-discuss] zpool kernel panics.
>
> FYI ...
>
> Begin forwarded message:
>
>> From: "James C. McPherson" <[EMAIL PROTECTED]>
>> Date: 12 Dece

Re: [zfs-discuss] Nice chassis for ZFS server

2007-12-14 Thread can you guess?
> > the next obvious question is, what is causing the ZFS checksum
> > errors? And (possibly of some help in answering that question) is the
> > disk seeing CRC transfer errors (which show up in its SMART data)?
>
> The memory is ECC in this machine, and Memtest passed it for five
> days. The disk
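If smartmontools is installed and supports the controller, the drive's own CRC counter can answer that directly (the device name is a placeholder; on most ATA disks the relevant attribute is 199, UDMA_CRC_Error_Count):

  smartctl -a /dev/rdsk/c1t0d0 | grep -i crc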

Re: [zfs-discuss] Is round-robin I/O correct for ZFS?

2007-12-14 Thread Jonathan Loran
This is the same configuration we use on 4 separate servers (a T2000, two X4100s, and a V215). We do use a different iSCSI solution, but we have the same multipath config with scsi_vhci. Dual GigE switches on separate NICs on both the server and iSCSI node side. We suffered from the e1000g int

Re: [zfs-discuss] JBOD performance

2007-12-14 Thread Peter Schuller
> Use a faster processor or change to a mirrored configuration.
> raidz2 can become processor bound in the Reed-Solomon calculations
> for the 2nd parity set. You should be able to see this in mpstat,
> and to a coarser grain in vmstat.

Hmm. Is the OP's hardware *that* slow? (I don't know enough