Re: [zfs-discuss] ZFS very slow under xVM

2007-11-03 Thread Erblichs
Martin, This is a shot in the dark, but this seems to be an IO scheduling issue. Since I am late to this thread, what are the characteristics of the IO: mostly reads, appending writes, read-modify-write, sequential, random, single large file, multiple files?

Re: [zfs-discuss] [docs-discuss] Introduction to Operating Systems

2007-08-02 Thread Erblichs
http://www.sun.com/software/solaris/ds/zfs.jsp Solaris ZFS—The Most Advanced File System on the Planet Anyone who has ever lost important files, run out of space on a partition, spent weekends adding new storage to servers, tried to grow or shrink a file system, or experienced data corruption kno

Re: [zfs-discuss] Re: Re: Mac OS X "Leopard" to use ZFS

2007-06-13 Thread Erblichs
Toby Thain, et al, I am guessing here, but to just be able to access the FS data locally without the headaches of verifying FS consistency, write caches, etc. Mitchell Erblich Toby Thain wrote: > > On 13-Jun-07, at 1:14 PM, Rick Mann wrot

Re: [zfs-discuss] Re: ZFS Apple WWDC Keynote Absence

2007-06-12 Thread Erblichs
Group, Isn't Apple's strength really in the non-compute-intensive personal computer / small business environment? I.e., plug and play. Thus, even though ZFS is able to work as the default FS, should it be the default FS for the small-system environment

Re: [zfs-discuss] Optimal strategy (add or replace disks) to build a cheap and raidz?

2007-05-08 Thread Erblichs
Group, MOST people want a system to work without doing ANYTHING when they turn on the system. So yes, the thought of people buying another drive and installing it in a brand new system would be insane for this group of buyers. Mitchell Erblich

Re: [zfs-discuss] gzip compression throttles system?

2007-05-04 Thread Erblichs
Mitchell Erblich -- Darren J Moffat wrote: > > Erblichs wrote: > > So, my first order would be to take 1GB or 10GB .wav files > > AND time both the kernel implementation of Gzip and the > > user application. Approx the

Re: [zfs-discuss] gzip compression throttles system?

2007-05-03 Thread Erblichs
Ian Collins, My two free cents: if the gzip were in application space, most gzip implementations support (perhaps via a recompile) a less extensive/expensive "deflation" that would consume fewer CPU cycles. Secondly, if the file objects are being written lo
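The level-versus-CPU tradeoff alluded to here can be sketched in user space with Python's `gzip` module (a stand-in analogue, not the in-kernel ZFS gzip path; the payload is made up):

```python
import gzip

# Highly repetitive payload, standing in for the large .wav data
# discussed elsewhere in this thread.
data = b"sample audio frame " * 50_000

# Level 1 is the cheap "deflation" the post mentions; level 9 is the
# expensive extreme. Both must round-trip losslessly.
fast = gzip.compress(data, compresslevel=1)
best = gzip.compress(data, compresslevel=9)

assert gzip.decompress(fast) == data
assert gzip.decompress(best) == data

# Level 1 trades compression ratio for CPU: its output is usually larger.
print(len(fast), len(best))
```

Timing the two calls (e.g. with `time.perf_counter()`) shows the CPU-cycle difference the post is getting at; level 1 typically finishes several times faster on compressible data.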

Re: [zfs-discuss] Very Large Filesystems

2007-04-28 Thread Erblichs
Joerg, Do you really think that ANY FS actually needs to support more FS objects? If that were an issue, why not create more FSs? A multi-TB FS SHOULD support 100MB+/GB-size FS objects, which IMO is the more common use. I have seen this a lot in video

Re: [zfs-discuss] COW performance penalty

2007-04-26 Thread Erblichs
Ming, Let's take a pro example with a minimal performance tradeoff. All FSs that modify a disk block, IMO, do a full disk block read before anything. If doing an extended write and moving to a larger block size with COW, you give yourself the
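The full-block read-before-write behavior described here can be sketched as a user-space analogue (hypothetical 4K block size and a temp file; a real COW filesystem would write the patched block to a new location rather than in place):

```python
import os
import tempfile

BLOCK = 4096  # hypothetical FS block size for illustration

def write_at(path, offset, payload):
    """Modify bytes inside a block: read the FULL block first,
    patch it in memory, then write the whole block back."""
    start = (offset // BLOCK) * BLOCK
    with open(path, "r+b") as f:
        f.seek(start)
        block = bytearray(f.read(BLOCK).ljust(BLOCK, b"\0"))
        rel = offset - start
        block[rel:rel + len(payload)] = payload
        f.seek(start)
        f.write(block)  # a COW FS would direct this to a *new* block

fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"\0" * BLOCK)

write_at(path, 100, b"hello")
with open(path, "rb") as f:
    assert f.read(BLOCK)[100:105] == b"hello"
os.unlink(path)
```

The point of the sketch is that even a 5-byte update costs a full-block read plus a full-block write, which is the baseline cost COW shares with update-in-place filesystems.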

Re: [zfs-discuss] Re: ZFS+NFS on storedge 6120 (sun t4)

2007-04-21 Thread Erblichs
or only Suns? > > > > > > > > -Original Message- > > From: [EMAIL PROTECTED] on behalf of Erblichs > > Sent: Sun 4/22/2007 4:50 AM > > To: Leon Koll > > Cc: zfs-discuss@opensolaris.org > > Subject: Re: [zfs-discuss] Re: ZFS+NFS on storedge 6

Re: [zfs-discuss] Re: ZFS+NFS on storedge 6120 (sun t4)

2007-04-21 Thread Erblichs
Leon Koll, As a knowledgeable outsider I can say something. The benchmark (SFS) page specifies NFSv3/v2 support, so I question whether you ran NFSv4. I would expect a major change in performance just from the move to NFS version 4 with ZFS. The benchmark seems

Re: [zfs-discuss] ZFS and Linux

2007-04-18 Thread Erblichs
cause more problems. Mitchell Erblich Sr Software Engineer - Joerg Schilling wrote: > > Erblichs <[EMAIL PROTECTED]> wrote: > > > Joerg Shilling, > > > > Putting the license issues aside for a moment. &g

Re: [zfs-discuss] ZFS on the desktop

2007-04-17 Thread Erblichs
Rich Teer, I have a perfect app for the masses: a hi-def video/audio server for the hi-def TV and audio setup. I would think the average person would want to have access to 1000s of DVDs / CDs within a small box versus taking up the full

Re: [zfs-discuss] ZFS and Linux

2007-04-17 Thread Erblichs
Joerg Schilling, Putting the license issues aside for a moment. If there is "INTEREST" in ZFS within Linux, should a small Linux group be formed to break ZFS down into easily portable sections and non-portable sections, and get a real-time/effort assessment

Re: [zfs-discuss] Re: ZFS for Linux (NO LISCENCE talk, please)

2007-04-17 Thread Erblichs
Toby Thain, I am sure someone will devise a method of subdividing the FS and running a background fsck and/or checksums on the different file objects, or ..., before this becomes an issue. :) Mitchell Erblich - Toby Thain wrote: > > >

Re: [zfs-discuss] Re: ZFS for Linux (NO LISCENCE talk, please)

2007-04-17 Thread Erblichs
Group, Did Joerg Schilling bring up a bigger issue within this discussion thread? > And it seems that you missunderstand the way the Linux kernel is developed. > If _you_ started a ZFS project for Linux, _you_ would need to maintain it too > or otherwise it would not be kept up to

Re: [zfs-discuss] Gzip compression for ZFS

2007-04-05 Thread Erblichs
My two cents: assuming that you may pick a specific compression algorithm, most algorithms have different levels/percentages of deflation/inflation, which affects the time to compress and/or inflate wrt the CPU capacity. Secondly, if I can add an ad

Re: [zfs-discuss] Re: Layout for multiple large streaming writes.

2007-03-13 Thread Erblichs
To the original poster, FYI, accessing RAID drives at a constant "~70-75%" probably does not leave enough headroom for degraded mode. A normal rule of thumb is 50 to 60% constant utilization, to allow excess capacity to absorb the load in degraded mode. An "

Re: [zfs-discuss] writes lost with zfs !

2007-03-11 Thread Erblichs
Ayaz Anjum and others, I think once you move to NFS over TCP in a client-server env, the chance of lost data is significantly higher than with just disconnecting a cable. Scenario: before a client generates a delayed write from its volatile DRAM client cac

Re: [zfs-discuss] Re: Re: Efficiency when reading the same file blocks

2007-02-28 Thread Erblichs
ll Erblich - Toby Thain wrote: > > On 28-Feb-07, at 6:43 PM, Erblichs wrote: > > > ZFS Group, > > > > My two cents.. > > > > Currently, in my experience, it is a waste of time to try to > > guarantee "exact&quo

Re: [zfs-discuss] Re: Re: Efficiency when reading the same file blocks

2007-02-28 Thread Erblichs
ZFS Group, My two cents.. Currently, in my experience, it is a waste of time to try to guarantee the "exact" location of disk blocks with any FS. A simple exception is bad blocks, where a neighboring block will suffice. Second, current disk controll

Re: [zfs-discuss] Implementing fbarrier() on ZFS

2007-02-12 Thread Erblichs
Jeff Bonwick, Do you agree that there is a major tradeoff in "builds up a wad of transactions in memory"? We lose the changes if we have an unstable environment. Thus, I don't quite understand why a 2-phase approach to commits isn't done. First, t

Re: [zfs-discuss] Re: Heavy writes freezing system

2007-01-16 Thread Erblichs
Rainer Heilke, You have 1/4 of the amount of memory that the 2900 system is capable of (192GBs, I think). Secondly, output from fsstat(1M) could be helpful. Run this command over time and check whether the values change over time. Mitchell Erb

Re: [zfs-discuss] Limit ZFS Memory Utilization

2007-01-10 Thread Erblichs
Hey guys, Due to long URL lookups, the DNLC was pushed to variable-sized entries. The hit rate was dropping because of "name too long" misses. This was done long ago, while I was at Sun, under a bug reported by me. I don't know your usage, but you should at

Re: [zfs-discuss] I/O patterns during a "zpool replace": whywritetothe disk being replaced?

2006-11-09 Thread Erblichs
field is updated. Remember that unless you are just touching a FS low-level (file) object, all writes are preceded by at least 1 read. Mitchell Erblich Bill Sommerfeld wrote: > > On Thu, 2006-11-09 at 19:18 -0800, Erblichs wrote:

Re: [zfs-discuss] I/O patterns during a "zpool replace": why writetothe disk being replaced?

2006-11-09 Thread Erblichs
08 at 01:54 -0800, Erblichs wrote: > > > > Bill Sommerfield, > > that's not how my name is spelled > > > > Are their any existing snaps? > no. why do you think this would matter? > > > > Can you have any scripts that may be > >

Re: [zfs-discuss] I/O patterns during a "zpool replace": why write tothe disk being replaced?

2006-11-08 Thread Erblichs
Bill Sommerfield, Are their any existing snaps? Can you have any scripts that may be removing aged files? Mitchell Erblich -- Bill Sommerfeld wrote: > > On a v40z running snv_51, I'm doing a "zpool replace z c1t4d0 c1t5d0". > > (so, w

Re: [zfs-discuss] thousands of ZFS file systems

2006-10-30 Thread Erblichs
Hi, My suggestion is to direct any command output that may print thousands of lines to a file. I have not tried that number of FSs, so my first suggestion is to have a lot of physical memory installed. The second item I would be concerned with is path tran

Re: [zfs-discuss] copying a large file..

2006-10-29 Thread Erblichs
Hi, How much time is a "long time"? Second, had a snapshot been taken after the file was created? Are the src and dst directories in the same slice? What other work was being done at the time of the move? Were there numerous fil

Re: [zfs-discuss] ENOSPC : No space on file deletion

2006-10-20 Thread Erblichs
file within the snapshot and remove it? Mitchell Erblich Matthew Ahrens wrote: > > Erblichs wrote: > > Now the stupid question.. > > If the snapshot is identical to the FS, I can't > > remove files from the FS because

[zfs-discuss] ENOSPC : No space on file deletion

2006-10-19 Thread Erblichs
Hey guys, I think I know what is going on. A set of files was attempted to be deleted on a FS that had almost consumed its reservation. It failed because one or more snapshots hold references to these files, and the snaps needed to allocate FS space

Re: [zfs-discuss] Self-tuning recordsize

2006-10-17 Thread Erblichs
Group, et al, I don't understand: if the problem is systemic, based on the number of continually dirty pages and the stress of cleaning those pages, then why . If the problem is FS independent, because any number of different installed FSs can equally consum

Re: [zfs-discuss] Self-tuning recordsize

2006-10-14 Thread Erblichs
Nicolas Williams wrote: > > On Fri, Oct 13, 2006 at 09:22:53PM -0700, Erblichs wrote: > > For extremely large files (25 to 100GBs), that are accessed > > sequentially for both read & write, I would expect 64k or 128k. > > Larger files accessed sequentially d

Re: [zfs-discuss] Self-tuning recordsize

2006-10-13 Thread Erblichs
Group, I am not sure I agree with the 8k size. Since "recordsize" is based on the size of filesystem blocks for large files, my first consideration is what the max size of the file object will be. For extremely large files (25 to 100GBs) that are accessed
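The heuristic argued here can be written as a tiny decision helper (the thresholds below are this post's opinion rendered as code, not ZFS defaults; the function name is made up for illustration):

```python
def suggest_recordsize(file_size_bytes, sequential=True):
    """Suggest a recordsize per the post's heuristic: extremely large
    sequentially-accessed files want the 128K maximum; moderately large
    ones 64K; otherwise fall back to the 8K size under discussion."""
    GB = 1 << 30
    if sequential and file_size_bytes >= 25 * GB:
        return 128 * 1024   # 25-100GB+ sequential files: max record
    if file_size_bytes >= 1 * GB:
        return 64 * 1024
    return 8 * 1024         # the 8K size the post questions

assert suggest_recordsize(100 * (1 << 30)) == 128 * 1024
assert suggest_recordsize(10 * (1 << 20)) == 8 * 1024
```

On a real system the property is set per dataset (e.g. `zfs set recordsize=128K pool/fs`); the helper only encodes the reasoning in the paragraph above.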

[zfs-discuss] zfs_vfsops.c : zfs_vfsinit() : line 1179: Src inspection

2006-10-13 Thread Erblichs
Group, If there is a bad vfs ops template, why wouldn't you just return(error) versus trying to create the vnode ops template? My suggestion: after the cmn_err(), then return(error); Mitchell Erblich -

Re: [zfs-discuss] single memory allocation in the ZFS intent log

2006-10-06 Thread Erblichs
or steal it from another's cache. Mitchell Erblich - Frank Hofmann wrote: > > On Thu, 5 Oct 2006, Erblichs wrote: > > > Casper Dik, > > > > After my posting, I assumed that a code question should be > >

Re: [zfs-discuss] single memory allocation in the ZFS intent log

2006-10-05 Thread Erblichs
Casper Dik, After my posting, I assumed that a code question should be directed to the ZFS code alias, so I apologize to the people who don't read code. However, since the discussion is here, I will post a code proof here. Just use "time program" to get a g

Re: [zfs-discuss] single memory allocation in the ZFS intent log

2006-10-04 Thread Erblichs
Casper Dik, Yes, I am familiar with Bonwick's slab allocators and tried them for wirespeed tests of 64-byte pieces, first on 1Gb, then 100Mb, and lastly 10Mb Ethernet. My results were not encouraging. I assume it has improved over time. First, let me ask what ha

[zfs-discuss] single memory allocation in the ZFS intent log

2006-10-03 Thread Erblichs
Group, at least one location: when adding a new dva node into the tree, a kmem_alloc is done with a KM_SLEEP argument; thus, this process thread could block waiting for memory. I would suggest adding a pre-allocated pool of dva nodes. When a new
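The suggested fix can be sketched in user space: keep a free list of pre-allocated nodes so the hot path never touches the allocator while reserves remain (a Python analogue of a kmem-style reserve pool; the class and field names are illustrative, not from the ZFS source):

```python
from collections import deque

class NodePool:
    """Pre-allocated pool: take() does no allocation on the hot path
    while reserves remain, mirroring the suggested dva-node pool."""

    def __init__(self, size):
        # All nodes are allocated up front, before the hot path runs.
        self._free = deque({"dva": None, "next": None} for _ in range(size))

    def take(self):
        if self._free:
            return self._free.popleft()       # fast path: no allocation
        return {"dva": None, "next": None}    # slow path: allocate (the
                                              # kernel analogue could block)

    def give_back(self, node):
        node["dva"] = None                    # scrub before reuse
        self._free.append(node)

pool = NodePool(1)
a = pool.take()
pool.give_back(a)
assert pool.take() is a   # the same pre-allocated node is reused
```

In the kernel the equivalent design choice is to refill the reserve from a low-priority context, so the insert path itself can never sleep in the allocator.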