Re: [zfs-discuss] ZFS on Ubuntu

2010-06-27 Thread Joe Little
Of course, Nexenta OS is a build of Ubuntu on an OpenSolaris kernel. On Jun 26, 2010, at 12:27 AM, Freddie Cash wrote: > On Sat, Jun 26, 2010 at 12:20 AM, Ben Miles wrote: >> What supporting applications are there on Ubuntu for RAIDZ? > > None. Ubuntu doesn't officially support ZFS. > > Yo

Re: [zfs-discuss] ZFS on Ubuntu

2010-06-28 Thread Joe Little
All true. I just saw too many "need Ubuntu and ZFS" posts and thought to state the obvious, in case the patch set for Nexenta happens to differ enough to provide a working set. I've had Nexenta succeed where OpenSolaris quarterly releases failed, and vice versa. On Jun 27, 2010, at 9:54 PM, Erik Trimble w

Re: [zfs-discuss] Extremely bad performance - hw failure?

2009-12-27 Thread Joe Little
I've had this happen to me too. I found some dtrace scripts at the time that showed that the file system was spending too much time finding available 128k blocks or the like, as I was near full on each disk, even though combined I still had 140GB left of my 3TB pool. The SPA code I believe it was w

Re: [zfs-discuss] cluster features

2011-05-10 Thread Joe Little
Well, here's my previous summary off list to different Solaris folk (regarding NFS serving via ZFS and iSCSI): I want to use ZFS as a NAS with no bounds on the backing hardware (not restricted to one box's capacity). Thus, there are two options: FC SAN or iSCSI. In my case, I have multi-building c

Re: [zfs-discuss] Re: Thumper Origins Q

2007-01-24 Thread Joe Little
On 1/24/07, Jonathan Edwards <[EMAIL PROTECTED]> wrote: On Jan 24, 2007, at 09:25, Peter Eriksson wrote: >> too much of our future roadmap, suffice it to say that one should >> expect >> much, much more from Sun in this vein: innovative software and >> innovative >> hardware working together to

Re: [zfs-discuss] Re: What SATA controllers are people using for ZFS?

2007-02-01 Thread Joe Little
On 2/1/07, Al Hopper <[EMAIL PROTECTED]> wrote: On Thu, 1 Feb 2007, Tom Buskey wrote: > I got an Addonics eSATA card. SATA 3.0. PCI *or* PCI-X. Works right off the bat w/ 10u3. No firmware update needed. It was $130. But I don't pull out my hair and I can use it if I upgrade my server fo

Re: Re[2]: [zfs-discuss] 118855-36 & ZFS

2007-02-05 Thread Joe Little
On 2/5/07, Robert Milkowski <[EMAIL PROTECTED]> wrote: Hello Casper, Monday, February 5, 2007, 2:32:49 PM, you wrote: >>Hello zfs-discuss, >> >> I've patched a U2 system to 118855-36. Several ZFS-related bug IDs >> should be covered between -19 and -36, like HotSpare support. >> >> However desp

[zfs-discuss] zfs corruption -- odd inum?

2007-02-10 Thread Joe Little
So, I'm attempting to find the inode from the result of a "zpool status -v": errors: The following persistent errors have been detected: DATASET OBJECT RANGE cc 21e382 lvl=0 blkid=0 Well, 21e382 appears not to be a valid number for "find . -inum blah". Any suggestions?
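
The OBJECT column of "zpool status -v" is printed in hexadecimal, which is why it looks invalid to find(1); a minimal sketch of the conversion, assuming the affected dataset "cc" is mounted at /cc (a hypothetical mountpoint) and a shell whose printf accepts hex input:

  # OBJECT is hex; convert it to the decimal inode number first
  printf '%d\n' 0x21e382        # prints 2220930
  find /cc -xdev -inum 2220930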

Re: [zfs-discuss] zfs corruption -- odd inum?

2007-02-11 Thread Joe Little
integrated into Nevada build 57. Jeff On Sat, Feb 10, 2007 at 05:18:05PM -0800, Joe Little wrote: > So, I attempting to find the inode from the result of a "zpool status -v": > > errors: The following persistent errors have been detected: > > DATASET OBJECT RANGE

Re: [zfs-discuss] Why doesn't Solaris remove a faulty disk from operation?

2007-02-11 Thread Joe Little
On 2/11/07, Matty <[EMAIL PROTECTED]> wrote: Howdy, On one of my Solaris 10 11/06 servers, I am getting numerous errors similar to the following: Feb 11 09:30:23 rx scsi: WARNING: /[EMAIL PROTECTED],2000/[EMAIL PROTECTED],1/[EMAIL PROTECTED],0 (sd1): Feb 11 09:30:23 rx Error for Command:

Re: [zfs-discuss] Re: Re: .zfs snapshot directory in all directories

2007-02-28 Thread Joe Little
On 2/27/07, Eric Haycraft <[EMAIL PROTECTED]> wrote: I am no scripting pro, but I would imagine it would be fairly simple to create a script and batch it to make symlinks in all subdirectories. I've done something similar using NFS aggregation products. The real problem is when you export, e

Re: [zfs-discuss] Announcing NexentaCP(b65) with ZFS/Boot integrated installer

2007-06-07 Thread Joe Little
On 6/7/07, Al Hopper <[EMAIL PROTECTED]> wrote: On Wed, 6 Jun 2007, Erast Benson wrote: > Announcing new direction of Open Source NexentaOS development: > NexentaCP (Nexenta Core Platform). > > NexentaCP is Dapper/LTS-based core Operating System Platform distributed > as a single-CD ISO, integra

[zfs-discuss] first public offering of NexentaStor

2007-11-01 Thread Joe Little
I consider myself an early adopter of ZFS and pushed it hard on this list and in real life with regards to iSCSI integration, ZFS performance issues with the latency thereof, and how best to use it with NFS. Well, I finally get to talk more about the ZFS-based product I've been beta testing for quite

Re: [zfs-discuss] first public offering of NexentaStor

2007-11-02 Thread Joe Little
On 11/2/07, MC <[EMAIL PROTECTED]> wrote: > > I consider myself an early adopter of ZFS and pushed > > it hard on this > > list and in real life with regards to iSCSI > > integration, zfs > > performance issues with latency thereof, and how > > best to use it with > > NFS. Well, I finally get to t

Re: [zfs-discuss] Backport of vfs_zfsacl.c to samba 3.0.26a, [and NexentaStor]

2007-11-02 Thread Joe Little
On 11/2/07, Rob Logan <[EMAIL PROTECTED]> wrote: > > I'm confused by this and NexentaStor... wouldn't it be better > to use b77? with: > > Heads Up: File system framework changes (supplement to CIFS' "head's up") > Heads Up: Flag Day (Addendum) (CIFS Service) > Heads Up: Flag Day (CIFS Service) > c

Re: [zfs-discuss] first public offering of NexentaStor

2007-11-07 Thread Joe Little
Not for NexentaStor as yet to my knowledge. I'd like to caution that the target of the initial product release is digital archiving/tiering/etc and is not necessarily primary NAS usage, though it can be used as such for those so inclined. However, interested parties should contact them as they fles

[zfs-discuss] slog tests on read throughput exhaustion (NFS)

2007-11-16 Thread Joe Little
I have historically noticed that in ZFS, whenever there is a heavy writer to a pool via NFS, the reads can be held back (basically paused). An example is a RAID10 pool of 6 disks, whereby a directory of files including some large 100+MB in size being written can cause other clients over NFS to pause

Re: [zfs-discuss] slog tests on read throughput exhaustion (NFS)

2007-11-16 Thread Joe Little
NFS. We may have 16, 32 or whatever threads, but if a single writer keeps the ZIL pegged, prohibiting reads, it's all for nought. Is there any way to tune/configure the ZFS/NFS combination to balance reads and writes so as not to starve one for the other? It's either feast or famine, or so tests have shown.
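
A hedged way to confirm the starvation pattern is to watch per-vdev activity while the heavy writer runs; the pool name "tank" and the layout are placeholders:

  # 1-second samples; a saturated log/data vdev alongside idle readers
  # is the feast-or-famine pattern described above
  zpool iostat -v tank 1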

Re: [zfs-discuss] slog tests on read throughput exhaustion (NFS)

2007-11-16 Thread Joe Little
On Nov 16, 2007 9:17 PM, Joe Little <[EMAIL PROTECTED]> wrote: > On Nov 16, 2007 9:13 PM, Neil Perrin <[EMAIL PROTECTED]> wrote: > > Joe, > > > > I don't think adding a slog helped in this case. In fact I > > believe it made performance worse. Previou

Re: [zfs-discuss] slog tests on read throughput exhaustion (NFS)

2007-11-17 Thread Joe Little
On Nov 16, 2007 10:41 PM, Neil Perrin <[EMAIL PROTECTED]> wrote: > > > Joe Little wrote: > > On Nov 16, 2007 9:13 PM, Neil Perrin <[EMAIL PROTECTED]> wrote: > >> Joe, > >> > >> I don't think adding a slog helped in this case. In fact I

Re: [zfs-discuss] slog tests on read throughput exhaustion (NFS)

2007-11-18 Thread Joe Little
On Nov 18, 2007 1:44 PM, Richard Elling <[EMAIL PROTECTED]> wrote: > one more thing... > > > Joe Little wrote: > > I have historically noticed that in ZFS, when ever there is a heavy > > writer to a pool via NFS, the reads can held back (basically paused). > &

Re: [zfs-discuss] slog tests on read throughput exhaustion (NFS)

2007-11-19 Thread Joe Little
On Nov 19, 2007 9:41 AM, Roch - PAE <[EMAIL PROTECTED]> wrote: > > Neil Perrin writes: > > > > > > Joe Little wrote: > > > On Nov 16, 2007 9:13 PM, Neil Perrin <[EMAIL PROTECTED]> wrote: > > >> Joe, > > >> > > >&

Re: [zfs-discuss] raidz DEGRADED state

2007-11-20 Thread Joe Little
On Nov 20, 2007 6:34 AM, MC <[EMAIL PROTECTED]> wrote: > > So there is no current way to specify the creation of > > a 3 disk raid-z > > array with a known missing disk? > > Can someone answer that? Or does the zpool command NOT accommodate the > creation of a degraded raidz array? > can't start
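
The workaround that circulated for this was to stand a sparse file in for the missing disk and then offline it; a sketch under assumed device names:

  # sparse file sized like a real member; it occupies no space up front
  mkfile -n 500g /var/tmp/fakedisk
  zpool create tank raidz c1t0d0 c1t1d0 /var/tmp/fakedisk
  zpool offline tank /var/tmp/fakedisk    # pool now runs DEGRADED
  # later, when the real disk arrives:
  # zpool replace tank /var/tmp/fakedisk c1t2d0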

[zfs-discuss] odd slog behavior on B70

2007-11-26 Thread Joe Little
I was playing with a Gigabyte i-RAM card and found out it works great to improve overall performance when there are a lot of writes of small files over NFS to such a ZFS pool. However, I noted a frequent situation in periods of long writes over NFS of small files. Here's a snippet of iostat during

Re: [zfs-discuss] odd slog behavior on B70

2007-11-26 Thread Joe Little
r answer explains why it's 60 seconds or so. What's sad is that this is a ramdisk so to speak, albeit connected via SATA-I to the sil3124. Any way to isolate this further? Any way to limit I/O timeouts to a drive? This is just two sticks of RAM.. ms would be fine :) > -- richard > > >
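
The 60-second figure matches the sd driver's default command timeout; one historical knob for it is sketched below for /etc/system (it applies to every sd device on the box, so the value here is illustrative only, not a recommendation):

  * lower the sd(7D) command timeout from the 60s default; reboot required
  set sd:sd_io_time = 10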

Re: [zfs-discuss] odd slog behavior on B70

2007-11-26 Thread Joe Little
On Nov 26, 2007 7:57 PM, Richard Elling <[EMAIL PROTECTED]> wrote: > Joe Little wrote: > > On Nov 26, 2007 7:00 PM, Richard Elling <[EMAIL PROTECTED]> wrote: > > > >> I would expect such iostat output from a device which can handle > >> only a single

Re: [zfs-discuss] How many ZFS pools is it sensible to use on a single server?

2008-04-12 Thread Joe Little
On Tue, Apr 8, 2008 at 9:55 AM, <[EMAIL PROTECTED]> wrote: > [EMAIL PROTECTED] wrote on 04/08/2008 11:22:53 AM: > > > > In our environment, the politically and administratively simplest > > approach to managing our storage is to give each separate group at > > least one ZFS pool of their own (

[zfs-discuss] zfs mount i/o error and workarounds

2008-04-17 Thread Joe Little
Hello list, We discovered a failed disk with checksum errors. Took out the disk and resilvered, which reported many errors. A few of my sub-volumes in the pool won't mount anymore, with "zpool import poolname" reporting "cannot mount 'poolname/proj': I/O error". Ok, we have a problem. I can succ
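
A hedged first step once the pool is imported is to scrub and then enumerate the persistent errors, to see whether the unmountable datasets are recoverable:

  zpool scrub poolname
  zpool status -v poolname   # lists objects/files with unrecoverable errors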

[zfs-discuss] slog devices don't resilver correctly

2008-05-27 Thread Joe Little
This past weekend, my holiday was ruined due to a log device "replacement" gone awry. I posted all about it here: http://jmlittle.blogspot.com/2008/05/problem-with-slogs-how-i-lost.html In a nutshell, a resilver of a single log device with itself, due to the fact one can't remove a log device
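
For later readers: log-device removal did eventually land (pool version 19), at which point evacuating a slog is a single command; pool and device names below are placeholders:

  zpool remove tank c3t0d0    # evacuates and detaches the log vdev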

Re: [zfs-discuss] indiana as nfs server: crash due to zfs

2008-05-27 Thread Joe Little
On Mon, May 26, 2008 at 6:10 AM, Gerard Henry <[EMAIL PROTECTED]> wrote: > hello all, > i have indiana freshly installed on a sun ultra 20 machine. It only does nfs > server. During one night, the kernel had crashed, and i got this messages: > " > May 22 02:18:57 ultra20 unix: [ID 836849 kern.noti

Re: [zfs-discuss] slog devices don't resilver correctly

2008-05-27 Thread Joe Little
log evacuation would make logs useful now instead of waiting. > - Eric > > On Tue, May 27, 2008 at 01:13:47PM -0700, Joe Little wrote: >> This past weekend, my holiday was ruined due to a log device >> "replacement" gone awry. >> >> I posted all about

Re: [zfs-discuss] slog devices don't resilver correctly

2008-05-27 Thread Joe Little
ced). At one point there were plans to do this as a separate >> piece of work (since the vdev changes are needed for the general case >> anyway), but I don't know whether this is still the case. >> >> - Eric >> >> On Tue, May 27, 2008 at 01:13:47PM -0700, Joe

Re: [zfs-discuss] slog devices don't resilver correctly

2008-05-27 Thread Joe Little
On Tue, May 27, 2008 at 5:04 PM, Neil Perrin <[EMAIL PROTECTED]> wrote: > Joe Little wrote: >> >> On Tue, May 27, 2008 at 4:50 PM, Eric Schrock <[EMAIL PROTECTED]> >> wrote: >>> >>> Joe - >>> >>> We definitely don't do

Re: [zfs-discuss] slog failure ... *ANY* way to recover?

2008-05-29 Thread Joe Little
On Thu, May 29, 2008 at 7:25 PM, Jeb Campbell <[EMAIL PROTECTED]> wrote: > Meant to add that zpool import -f pool doesn't work b/c of the missing log > vdev. > > All the other disks are there and show up with "zpool import", but it won't > import. > > Is there anyway a util could clear the log de

Re: [zfs-discuss] slog failure ... *ANY* way to recover?

2008-05-29 Thread Joe Little
On Thu, May 29, 2008 at 8:59 PM, Joe Little <[EMAIL PROTECTED]> wrote: > On Thu, May 29, 2008 at 7:25 PM, Jeb Campbell <[EMAIL PROTECTED]> wrote: >> Meant to add that zpool import -f pool doesn't work b/c of the missing log >> vdev. >> >> All the o

Re: [zfs-discuss] cannot delete file when fs 100% full

2008-05-30 Thread Joe Little
On Fri, May 30, 2008 at 7:43 AM, Paul Raines <[EMAIL PROTECTED]> wrote: > > It seems when a ZFS filesystem with a reservation/quota is 100% full, users can no > longer even delete files to fix the situation, getting errors like these: > > $ rm rh.pm6895.medial.V2.tif > rm: cannot remove `rh.pm6895.medial.V2
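
The usual workaround from this era was to free blocks by truncating in place before unlinking, since rm on a full copy-on-write filesystem needs space to write new metadata; a sketch using the file above:

  # truncate to zero length first, then remove
  cat /dev/null > rh.pm6895.medial.V2.tif
  rm rh.pm6895.medial.V2.tif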

Re: [zfs-discuss] slog failure ... *ANY* way to recover?

2008-05-30 Thread Joe Little
On Fri, May 30, 2008 at 6:30 AM, Jeb Campbell <[EMAIL PROTECTED]> wrote: > Ok, here is where I'm at: > > My install of OS 2008.05 (snv_86?) will not even come up in single user. > > The OS 2008.05 live cd comes up fine, but I can't import my old pool b/c of > the missing log (and I have to import

Re: [zfs-discuss] [osol-help] >1TB ZFS thin provisioned partition prevents Opensolaris from booting.

2008-05-30 Thread Joe Little
On Fri, May 30, 2008 at 7:07 AM, Hugh Saunders <[EMAIL PROTECTED]> wrote: > On Fri, May 30, 2008 at 10:37 AM, Akhilesh Mritunjai > <[EMAIL PROTECTED]> wrote: >> I think it's right. You'd have to move to a 64 bit kernel. Any reasons to >> stick to a 32 bit >> kernel ? > > My reason would be lack of

Re: [zfs-discuss] SATA controller suggestion

2008-06-05 Thread Joe Little
On Thu, Jun 5, 2008 at 8:16 PM, Tim <[EMAIL PROTECTED]> wrote: > > > On Thu, Jun 5, 2008 at 9:17 PM, Peeyush Singh <[EMAIL PROTECTED]> > wrote: >> >> Hey guys, please excuse me in advance if I say or ask anything stupid :) >> >> Anyway, Solaris newbie here. I've built for myself a new file server

Re: [zfs-discuss] SATA controller suggestion

2008-06-06 Thread Joe Little
On Thu, Jun 5, 2008 at 9:26 PM, Tim <[EMAIL PROTECTED]> wrote: > > > On Thu, Jun 5, 2008 at 11:12 PM, Joe Little <[EMAIL PROTECTED]> wrote: >> >> On Thu, Jun 5, 2008 at 8:16 PM, Tim <[EMAIL PROTECTED]> wrote: >> > >> > >> &g

Re: [zfs-discuss] cluster features

2006-05-30 Thread Joe Little
Well, I would caution at this point against the iSCSI backend if you are planning on using NFS. We took a long-winded conversation offline and have yet to return to this list, but the gist of it is that the latency of iSCSI, along with the tendency for NFS to fsync 3 times per write, causes performan

Re: Re[2]: [zfs-discuss] cluster features

2006-05-31 Thread Joe Little
Well, here's my previous summary off list to different Solaris folk (regarding NFS serving via ZFS and iSCSI): I want to use ZFS as a NAS with no bounds on the backing hardware (not restricted to one box's capacity). Thus, there are two options: FC SAN or iSCSI. In my case, I have multi-building

Re: [zfs-discuss] ZFS performance metric/cookbook/whitepaper

2006-06-01 Thread Joe Little
Please add to the list the differences of locally or remotely attached vdevs: FC, SCSI/SATA, or iSCSI. This is the part that is troubling me most, as there are wildly different performance characteristics when you use NFS with any of these backends with the various configs of ZFS. Another thing is w

[zfs-discuss] zfs going out to lunch

2006-06-02 Thread Joe Little
I've been writing via tar to a pool some stuff from backup, around 500GB. It's taken quite a while, as the tar is being read from NFS. My ZFS partition in this case is a RAIDZ 3-disk job using 3 400GB SATA drives (sil3124 card). Every once in a while, a "df" stalls, and during that time my I/Os go fla

[zfs-discuss] status question regarding sol10u2

2006-06-19 Thread Joe Little
So, if I recall from this list, a mid-June release to the web was expected for S10U2. I'm about to do some final production testing, and I was wondering if S10U2 was near term or more of a July thing now. This may not be the perfect venue for the question, but the subject was previously covered wi

Re: [zfs-discuss] ZFS on 32bit x86

2006-06-22 Thread Joe Little
What if your 32-bit system is just a NAS -- ZFS and NFS, nothing else? I think it would still be ideal to allow tweaking of things at runtime to make 32-bit systems more usable. On 6/21/06, Mark Maybee <[EMAIL PROTECTED]> wrote: Yup, you're probably running up against the limitations of 32-bit kern
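
On a 32-bit kernel the ARC competes for a small kernel address space, so the usual tweak is to cap it; a hedged /etc/system sketch, with the 256MB figure purely illustrative:

  * cap the ARC so it leaves room in the 32-bit kernel address space
  set zfs:zfs_arc_max = 0x10000000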

Re: [zfs-discuss] 15 minute fdsync problem and ZFS: Solved

2006-06-22 Thread Joe Little
Well, I should weigh in here. I have been using ZFS with an iSCSI backend and an NFS front end to my clients. Until B41 (not sure what fixed this) I was getting 20KB/sec for RAIDZ and 200KB/sec for just ZFS on large iSCSI LUNs (non-RAIDZ) when I was receiving many small writes, such as untarrin

Re: Re: [zfs-discuss] ZFS on 32bit x86

2006-06-22 Thread Joe Little
On 6/22/06, Darren J Moffat <[EMAIL PROTECTED]> wrote: Rich Teer wrote: > On Thu, 22 Jun 2006, Joe Little wrote: > > Please don't top post. > >> What if your 32bit system is just a NAS -- ZFS and NFS, nothing else? >> I think it would still be ideal to allow

Re: Re: [zfs-discuss] 15 minute fdsync problem and ZFS: Solved

2006-06-22 Thread Joe Little
On 6/22/06, Jeff Bonwick <[EMAIL PROTECTED]> wrote: > a test against the same iSCSI targets using Linux and XFS and the > NFS server implementation there gave me 1.25MB/sec writes. I was about > to throw in the towel and deem ZFS/NFS as unusable until B41 came > along and at least gave me 1.25MB

Re: [zfs-discuss] ZFS on 32bit x86

2006-06-22 Thread Joe Little
I guess the only hope is to find pin-compatible Xeons that are 64-bit to replace what is a large chassis with 24 slots of disks that has a specific motherboard form factor, etc. We have 6 of these things from a government grant that must be used for the stated purpose. So, yes, we can buy product, bu

Re: Re: [zfs-discuss] 15 minute fdsync problem and ZFS: Solved

2006-06-22 Thread Joe Little
order for the change to take effect. If you don't have time, no big deal. --Bill On Thu, Jun 22, 2006 at 04:22:22PM -0700, Joe Little wrote: > On 6/22/06, Jeff Bonwick <[EMAIL PROTECTED]> wrote: > >> a test against the same iscsi targets using linux and XFS and the >

Re: Re: [zfs-discuss] 15 minute fdsync problem and ZFS: Solved

2006-06-23 Thread Joe Little
On 6/23/06, Roch <[EMAIL PROTECTED]> wrote: Joe Little writes: > On 6/22/06, Bill Moore <[EMAIL PROTECTED]> wrote: > > Hey Joe. We're working on some ZFS changes in this area, and if you > > could run an experiment for us, that would be great. Just do this:

Re: Re: [zfs-discuss] 15 minute fdsync problem and ZFS: Solved

2006-06-23 Thread Joe Little
On 6/23/06, Roch <[EMAIL PROTECTED]> wrote: Joe Little writes: > On 6/22/06, Bill Moore <[EMAIL PROTECTED]> wrote: > > Hey Joe. We're working on some ZFS changes in this area, and if you > > could run an experiment for us, that would be great. Just do this:

Re: Re: [zfs-discuss] 15 minute fdsync problem and ZFS: Solved

2006-06-23 Thread Joe Little
To clarify what has just been stated: with the ZIL disabled I got 4MB/sec; with the ZIL enabled I get 1.25MB/sec. On 6/23/06, Tao Chen <[EMAIL PROTECTED]> wrote: On 6/23/06, Roch <[EMAIL PROTECTED]> wrote: > > > > On Thu, Jun 22, 2006 at 04:22:22PM -0700, Joe Little wrote:
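
For context, the "zil disabled" case above refers to the historical zil_disable tunable, roughly as below; it breaks synchronous NFS semantics, so it is a diagnostic, not a fix:

  * /etc/system on builds of that era (later ZFS replaces this with the
  * per-dataset property: zfs set sync=disabled dataset)
  set zfs:zil_disable = 1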

Re: Re: [zfs-discuss] Re: ZFS and Storage

2006-06-27 Thread Joe Little
On 6/27/06, Erik Trimble <[EMAIL PROTECTED]> wrote: Darren J Moffat wrote: > Peter Rival wrote: > >> storage arrays with the same arguments over and over without >> providing an answer to the customer problem doesn't do anyone any >> good. So. I'll restate the question. I have a 10TB database

Re: Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Joe Little
On 6/28/06, Nathan Kroenert <[EMAIL PROTECTED]> wrote: On Thu, 2006-06-29 at 03:40, Nicolas Williams wrote: > But Joe makes a good point about RAID-Z and iSCSI. > > It'd be nice if RAID HW could assist RAID-Z, and it wouldn't take much > to do that: parity computation on write, checksum verificat

Re: [zfs-discuss] The ZFS Read / Write roundabout

2006-06-30 Thread Joe Little
I've always seen this curve in my tests (local disk or iSCSI) and just think it's ZFS as designed. I haven't seen much parallelism when I have multiple I/O jobs going; the filesystem seems to go mostly into one or the other mode. Perhaps per vdev (in iSCSI I'm only exposing one or two), there is on

Re: Re: [zfs-discuss] ZFS vs. Apple XRaid

2006-07-31 Thread Joe Little
On 7/31/06, Dale Ghent <[EMAIL PROTECTED]> wrote: On Jul 31, 2006, at 8:07 PM, eric kustarz wrote: > > The 2.6.x Linux client is much nicer... one thing fixed was the > client doing too many commits (which translates to fsyncs on the > server). I would still recommend the Solaris client but i'm

Re: Re: [zfs-discuss] ZFS vs. Apple XRaid

2006-08-01 Thread Joe Little
y and some major penalties for streaming writes of various sizes with the NFS implementation and its fsync happiness (3 fsyncs per write from an NFS client). It's all very true that it's stable/safe, but it's also very slow in various use cases! On 8/1/06, eric kustarz <[EMAIL PROTECTED]>

[zfs-discuss] multi-layer ZFS filesystems and exporting: my stupid question for the day

2006-08-16 Thread Joe Little
One of the things espoused on this list again and again is that quotas for users are not ideal, and that one should just make a filesystem per user. Ok.. I did that. I now have, under just one "volume" within my pool, some 380-odd users. By way of example, let's say I have /pool/common/users/user1 ...
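
Sharing at least is not the hard part, since the sharenfs property is inherited by every descendant filesystem; a sketch, with names following the example above:

  zfs set sharenfs=rw pool/common/users
  zfs get -r sharenfs pool/common/users   # all 380 user filesystems inherit it

The catch raised in the replies is that each child is a distinct filesystem to NFS, so a single client mount of the parent does not traverse into the children.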

Re: Re: [zfs-discuss] multi-layer ZFS filesystems and exporting: my stupid question for the day

2006-08-16 Thread Joe Little
On 8/16/06, Frank Cusack <[EMAIL PROTECTED]> wrote: On August 16, 2006 10:25:18 AM -0700 Joe Little <[EMAIL PROTECTED]> wrote: > Is there a way to allow simple export commands the traverse multiple > ZFS filesystems for exporting? I'd hate to have to have hundreds of >

Re: Re: Re: [zfs-discuss] multi-layer ZFS filesystems and exporting: my stupid question for the day

2006-08-16 Thread Joe Little
On 8/16/06, Frank Cusack <[EMAIL PROTECTED]> wrote: On August 16, 2006 10:34:31 AM -0700 Joe Little <[EMAIL PROTECTED]> wrote: > On 8/16/06, Frank Cusack <[EMAIL PROTECTED]> wrote: >> On August 16, 2006 10:25:18 AM -0700 Joe Little <[EMAIL PROTECTED]> wrote: &g

[zfs-discuss] unaccounted for daily growth in ZFS disk space usage

2006-08-24 Thread Joe Little
We finally flipped the switch on one of our ZFS-based servers, with approximately 1TB of 2.8TB used (3 stripes of 950GB or so, each of which is a RAID5 volume on the Adaptec card). We have snapshots every 4 hours for the first few days. If you add up the snapshot references it appears somewhat high ver
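
A hedged way to see where the growth sits; note a snapshot's "used" column counts only blocks unique to that snapshot, so space shared between overlapping 4-hourly snapshots will not show up against any one of them:

  zfs list -r -t snapshot -o name,used,referenced pool
  zfs get usedbysnapshots pool/somefs   # property from builds newer than this 2006 one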

Re: Re: [zfs-discuss] unaccounted for daily growth in ZFS disk space usage

2006-08-24 Thread Joe Little
On 8/24/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote: On Thu, Aug 24, 2006 at 07:07:45AM -0700, Joe Little wrote: > We finally flipped the switch on one of our ZFS-based servers, with > approximately 1TB of 2.8TB (3 stripes of 950GB or so, each of which is > a RAID5 volume on t

Re: Re: Re: [zfs-discuss] unaccounted for daily growth in ZFS disk space usage

2006-08-24 Thread Joe Little
On 8/24/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote: On Thu, Aug 24, 2006 at 02:21:33PM -0700, Joe Little wrote: > well, by deleting my 4-hourlies I reclaimed most of the data. To > answer some of the questions, its about 15 filesystems (decendents > included). I'm aware

[zfs-discuss] marvel cards.. as recommended

2006-09-12 Thread Joe Little
So, people here recommended the Marvell cards, and one even provided a link to acquire them for SATA jbod support. Well, this is what the latest bits (B47) say: Sep 12 13:51:54 vram marvell88sx: [ID 679681 kern.warning] WARNING: marvell88sx0: Could not attach, unsupported chip stepping or unable

Re: Re: [zfs-discuss] marvel cards.. as recommended

2006-09-13 Thread Joe Little
On 9/12/06, James C. McPherson <[EMAIL PROTECTED]> wrote: Joe Little wrote: > So, people here recommended the Marvell cards, and one even provided a > link to acquire them for SATA jbod support. Well, this is what the > latest bits (B47) say: > > Sep 12 13:51:54 vram ma

Re: [zfs-discuss] Re: Re: marvel cards.. as recommended

2006-09-13 Thread Joe Little
Yeah. I got the message from a few others, and we are hoping to return/buy the newer one. I'm sort of surprised by the limited set of SATA RAID or JBOD cards that one can actually use. Even the ones linked to on this list sometimes aren't supported :). I need to get up and running like yesterday

Re: [zfs-discuss] Best version of Solaris 10 fro ZFS ?

2006-10-27 Thread Joe Little
The latest OpenSolaris release? Perhaps Nexenta in the end is the way to best deliver/maintain that. On 10/27/06, David Blacklock <[EMAIL PROTECTED]> wrote: What is the current recommended version of Solaris 10 for ZFS? -thanks, -Dave

Re: [zfs-discuss] poor NFS/ZFS performance

2006-11-22 Thread Joe Little
On 11/22/06, Chad Leigh -- Shire.Net LLC <[EMAIL PROTECTED]> wrote: On Nov 22, 2006, at 4:11 PM, Al Hopper wrote: > No problem there! ZFS rocks. NFS/ZFS is a bad combination. Has anyone tried sharing a ZFS fs using Samba or AFS or something else besides NFS? Do we have the same issues? I

Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-12 Thread Joe Little
On 12/12/06, James F. Hranicky <[EMAIL PROTECTED]> wrote: Jim Davis wrote: >> Have you tried using the automounter as suggested by the linux faq?: >> http://nfs.sourceforge.net/#section_b > > Yes. On our undergrad timesharing system (~1300 logins) we actually hit > that limit with a standard au

[zfs-discuss] B54 and marvell cards

2006-12-20 Thread Joe Little
We just put together a new system for ZFS use at a company, and twice in one week we've had the system wedge. You can log on, but the zpools are hosed, and a reboot never occurs if requested since it can't unmount the zfs volumes. So, only a power cycle works. In both cases, we get this: Dec 20

[zfs-discuss] Re: B54 and marvell cards

2006-12-20 Thread Joe Little
On 12/20/06, Joe Little <[EMAIL PROTECTED]> wrote: We just put together a new system for ZFS use at a company, and twice in one week we've had the system wedge. You can log on, but the zpools are hosed, and a reboot never occurs if requested since it can't unmount the zfs vol

[zfs-discuss] Re: B54 and marvell cards

2006-12-20 Thread Joe Little
Some further joy: http://bugs.opensolaris.org/view_bug.do?bug_id=6504404 On 12/20/06, Joe Little <[EMAIL PROTECTED]> wrote: On 12/20/06, Joe Little <[EMAIL PROTECTED]> wrote: > We just put together a new system for ZFS use at a company, and twice > in one week we've ha

Re: [zfs-discuss] What SATA controllers are people using for ZFS?

2006-12-21 Thread Joe Little
and specific models, and the driver used? Looks like there may be stability issues with the Marvell, which appear to go unanswered. On 12/21/06, Jason J. W. Williams <[EMAIL PROTECTED]> wrote: Hi Naveen, I believe the newer LSI cards work pretty well with Solaris. Best Regards, Jason On 12/

Re: [zfs-discuss] What SATA controllers are people using for ZFS?

2006-12-21 Thread Joe Little
On 12/21/06, Al Hopper <[EMAIL PROTECTED]> wrote: On Thu, 21 Dec 2006, Joe Little wrote: > and specific models, and the driver used? Looks like there may be > stability issues with the marvell, which appear to go unanswered.. I've tested a box running two Marvell based 8

[zfs-discuss] Poor directory traversal or small file performance?

2006-05-04 Thread Joe Little
I've been writing to the Solaris NFS list since I was getting some bad performance copying a large set of small files via NFS (noticeably so). We have various source trees, including a tree with many Linux versions, that I was copying to my ZFS NAS-to-be. On large files, it flies pretty well, an

Re: [zfs-discuss] Poor directory traversal or small file performance?

2006-05-04 Thread Joe Little
nd ." times on a local zfs. Neil Perrin wrote On 05/04/06 21:01,: > Was this a 32 bit intel system by chance? > If so this is quite likely caused by: > > 6413731 pathologically slower fsync on 32 bit systems > > This was fixed in snv_39. > > Joe Little wrote On 0

Re: [zfs-discuss] Poor directory traversal or small file performance?

2006-05-04 Thread Joe Little
1 pathologically slower fsync on 32 bit systems > > This was fixed in snv_39. > > Joe Little wrote On 05/04/06 15:47,: > >> I've been writing to the Solaris NFS list since I was getting some bad >> performance copying via NFS (noticeably there) a large set of small &

[zfs-discuss] Re: [dtrace-discuss] Re: [nfs-discuss] Script to trace NFSv3 client operations

2006-05-05 Thread Joe Little
well, it was already an NFS-discuss list message. Someone else added dtrace-discuss to it. I have already noted this to a degree on zfs-discuss, but it seems to be mainly a NFS specific issue at this stage. On 5/5/06, Spencer Shepler <[EMAIL PROTECTED]> wrote: On Fri, Joe Little wrote:>

[zfs-discuss] Re: [dtrace-discuss] Re: [nfs-discuss] Script to trace NFSv3 client operations

2006-05-05 Thread Joe Little
   188 RFS3_COMMIT         306 On 5/5/06, Joe Little <[EMAIL PROTECTED]> wrote: well, it was already an NFS-discuss list message. Someone else added dtrace-discuss to it. I have already noted this to a degree on zfs-discuss, but it seems to be mainly a NFS specific issue at this s

Re: [zfs-discuss] Re: [dtrace-discuss] Re: [nfs-discuss] Script to trace NFSv3 client operations

2006-05-05 Thread Joe Little
Thanks for the tip. In the local case, I could send to the iSCSI-backed ZFS RAIDZ at even faster rates, with a total elapsed time of 50 seconds (17 seconds better than UFS). However, I didn't even bother finishing the NFS client test, since it was taking a few seconds between multiple 27K files. So,

Re: [zfs-discuss] Re: [dtrace-discuss] Re: [nfs-discuss] Script to trace NFSv3 client operations

2006-05-05 Thread Joe Little
On 5/5/06, Eric Schrock <[EMAIL PROTECTED]> wrote: On Fri, May 05, 2006 at 03:46:08PM -0700, Joe Little wrote: > Thanks for the tip. In the local case, I could send to the > iSCSI-backed ZFS RAIDZ at even faster rates, with a total elapsed time > of 50seconds (17 seconds

Re: [zfs-discuss] Re: [dtrace-discuss] Re: [nfs-discuss] Script to trace NFSv3 client operations

2006-05-05 Thread Joe Little
And of course, just to circle back, an rsync via ssh from the client to the Solaris ZFS/iSCSI server came in at 17.5MB/sec, taking 1 minute 16 seconds, or about 20% longer. So, NFS (over TCP) is 1.4k/s, and encrypted ssh is 17.5MB/sec following the same network path. On 5/5/06, Joe Little <[EM
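
For reproduction's sake, the rsync leg of that comparison looks roughly like the following, with paths and hostname hypothetical:

  rsync -a -e ssh /data/tree/ zfsserver:/pool/tree/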

Re: [zfs-discuss] Re: [dtrace-discuss] Re: [nfs-discuss] Script to trace NFSv3 client operations

2006-05-05 Thread Joe Little
Are there known I/O or iSCSI DTrace scripts available? On 5/5/06, Spencer Shepler <[EMAIL PROTECTED]> wrote: On Fri, Joe Little wrote: > On 5/5/06, Eric Schrock <[EMAIL PROTECTED]> wrote: > >On Fri, May 05, 2006 at 03:46:08PM -0700, Joe Little wrote: > >> Thanks

Re: [zfs-discuss] Re: [dtrace-discuss] Re: [nfs-discuss] Script to trace NFSv3 client operations

2006-05-05 Thread Joe Little
such a different code path? On 5/5/06, Lisa Week <[EMAIL PROTECTED]> wrote: These may help: http://opensolaris.org/os/community/dtrace/scripts/ Check out iosnoop.d http://www.solarisinternals.com/si/dtrace/index.php Check out iotrace.d - Lisa Joe Little wrote On 05/05/06 18:

Re: [zfs-discuss] Re: [dtrace-discuss] Re: [nfs-discuss] Script to trace NFSv3 client operations

2006-05-05 Thread Joe Little
uling. Is this tuneable for either ZFS or NFS and/or can be set? On 5/5/06, Lisa Week <[EMAIL PROTECTED]> wrote: These may help: http://opensolaris.org/os/community/dtrace/scripts/ Check out iosnoop.d http://www.solarisinternals.com/si/dtrace/index.php Check out iotrace.d - Lisa

Re: [zfs-discuss] Re: [dtrace-discuss] Re: [nfs-discuss] Script to trace NFSv3 client operations

2006-05-06 Thread Joe Little
ing R2T: 1 Max Receive Data Segment Length: 8192 Max Connections: 1 Header Digest: NONE Data Digest: NONE On 5/6/06, Nicolas Williams <[EMAIL PROTECTED]> wrote: On Fri, May 05, 2006 at 09:48:00PM -0700, Joe Little

Re: [zfs-discuss] Re: [dtrace-discuss] Re: [nfs-discuss] Script to trace NFSv3 client operations

2006-05-08 Thread Joe Little
amely, XFS, JFS, etc which I've tested before) On 5/8/06, Nicolas Williams <[EMAIL PROTECTED]> wrote: On Fri, May 05, 2006 at 11:55:17PM -0500, Spencer Shepler wrote: > On Fri, Joe Little wrote: > > Thanks. I'm playing with it now, trying to get the most succinct te

Re: [zfs-discuss] Re: [dtrace-discuss] Re: [nfs-discuss] Script to trace NFSv3 client operations

2006-05-09 Thread Joe Little
ee with NFS. I definitely think the bug is on the NFS server end, even considering that the SMB protocol is different. On 5/8/06, Joe Little <[EMAIL PROTECTED]> wrote: I was asked to also snoop the iscsi end of things, trying to findsomething different between the two. iscsi being relativ

Re: [zfs-discuss] Re: [dtrace-discuss] Re: [nfs-discuss] Script to trace NFSv3 client operations

2006-05-10 Thread Joe Little
mance testing and benchmarking. I will hand off my configuration for the Sun NFS teams internally to check out. -David Joe Little wrote: > Well, I tried some suggested iscsi tunings to no avail. I did try > something else though: I brought up samba. My linux 2.2 source tree > copying i

Re: [zfs-discuss] Re: [dtrace-discuss] Re: [nfs-discuss] Script to trace NFSv3 client operations

2006-05-11 Thread Joe Little
How did you get the average time for async writes? My client (lacking ptime; it's Linux) comes in at 50 minutes, not 50 seconds. I'm running again right now for a more accurate number. I'm untarring from a local file into the directory on the NFS share. On 5/11/06, Roch Bourbonnais - Performance En

Re: [zfs-discuss] Re: [dtrace-discuss] Re: [nfs-discuss] Script to trace NFSv3 client operations

2006-05-11 Thread Joe Little
well, here's my first pass result: [EMAIL PROTECTED] loges1]# time tar xf /root/linux-2.2.26.tar real 114m6.662s user 0m0.049s sys 0m1.354s On 5/11/06, Roch Bourbonnais - Performance Engineering <[EMAIL PROTECTED]> wrote: Joe Little writes: > How did you get the av