Martin,
This is a shot in the dark, but this seems to be an I/O scheduling
issue.
Since I am late on this thread, what are the characteristics of
the I/O: read-mostly, appending writes, read-modify-write,
sequential, random, single large file, multiple files?
http://www.sun.com/software/solaris/ds/zfs.jsp
Solaris ZFS: The Most Advanced File System on the Planet
Anyone who has ever lost important files, run out of space on a
partition, spent weekends adding new storage to servers, tried to grow
or shrink a file system, or experienced data corruption knows...
Toby Thain, et al,
I am guessing here, but the point is just to be able to access
the FS data locally without the headaches of
verifying FS consistency, write caches, etc.
Mitchell Erblich
Toby Thain wrote:
>
> On 13-Jun-07, at 1:14 PM, Rick Mann wrote:
Group,
Isn't Apple's strength really in the non-compute-intensive
personal computer / small business environment?
I.e., plug and play.
Thus, even though ZFS is able to work as the default
FS, should it be the default FS for the small-system
environment?
Group,
MOST people want a system to work without doing
ANYTHING when they turn on the system.
So yes, the thought of people buying another
drive and installing it in a brand new system
would be insane for this group of buyers.
Mitchell Erblich
--
Darren J Moffat wrote:
>
> Erblichs wrote:
> > So, my first order would be to take 1GB or 10GB .wav files
> > AND time both the kernel implementation of Gzip and the
> > user application. Approx the
Ian Collins,
My two free cents..
If gzip were run in application space, most gzip implementations
support (perhaps with a recompile) a less extensive/expensive "deflation"
level that would consume fewer CPU cycles.
Secondly, if the file objects are being written lo
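For illustration, a minimal user-space sketch using zlib, whose compress2()
takes an explicit compression level; the input buffer and sizes below are
placeholders, not anything measured in this thread (build with -lz):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <zlib.h>

    int
    main(void)
    {
        /* Placeholder input; a real test would read a large .wav file. */
        uLong srclen = 1UL << 20;
        Bytef *src = malloc(srclen);
        if (src == NULL)
            return (1);
        memset(src, 'A', srclen);

        uLongf dstlen = compressBound(srclen);
        Bytef *dst = malloc(dstlen);
        if (dst == NULL)
            return (1);

        /* Level 1 is the cheap, fast deflation; Z_BEST_COMPRESSION (9)
           burns far more CPU for a somewhat smaller output. */
        int level = 1;
        if (compress2(dst, &dstlen, src, srclen, level) != Z_OK) {
            fprintf(stderr, "compress2 failed\n");
            return (1);
        }
        printf("level %d: %lu -> %lu bytes\n",
            level, (unsigned long)srclen, (unsigned long)dstlen);

        free(src);
        free(dst);
        return (0);
    }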
Jorg,
Do you really think that ANY FS actually needs to support
more FS objects? If that were an issue, why not create
more FSs?
A multi-TB FS SHOULD support 100MB+ / GB-sized FS objects, which
IMO is the more common use. I have seen this a lot in video
Ming,
Let's take a pro example with a minimal performance
tradeoff.
All FSs that modify a disk block, IMO, do a full
disk-block read before anything.
If doing an extended write and moving to a
larger block size with COW, you give yourself
the
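As a rough illustration of that read-modify-write pattern (not ZFS code;
the file name, block size, and offsets below are made up): a copy-on-write
style update reads the whole block, modifies it in memory, and writes the
result to a new block location.

    #include <sys/types.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define BLKSZ 4096    /* placeholder disk-block size */

    /* To change a few bytes in a block: read it all, modify in memory,
       and (copy-on-write) write the whole block to a new location. */
    static int
    cow_update(int fd, off_t blkno, off_t newblkno, const char *patch, size_t off)
    {
        char blk[BLKSZ];

        /* The full disk-block read happens before anything else. */
        if (pread(fd, blk, BLKSZ, blkno * BLKSZ) != BLKSZ)
            return (-1);

        memcpy(blk + off, patch, strlen(patch));    /* modify in memory */

        /* COW: the updated copy goes to a new block; the old one is untouched. */
        if (pwrite(fd, blk, BLKSZ, newblkno * BLKSZ) != BLKSZ)
            return (-1);
        return (0);
    }

    int
    main(void)
    {
        int fd = open("demo.dat", O_RDWR | O_CREAT, 0644);
        if (fd < 0 || ftruncate(fd, 4 * BLKSZ) != 0)
            return (1);
        int rc = cow_update(fd, 0, 2, "hello", 16);
        close(fd);
        return (rc == 0 ? 0 : 1);
    }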
or only Suns?
> >
> >
> >
> > -Original Message-
> > From: [EMAIL PROTECTED] on behalf of Erblichs
> > Sent: Sun 4/22/2007 4:50 AM
> > To: Leon Koll
> > Cc: zfs-discuss@opensolaris.org
> > Subject: Re: [zfs-discuss] Re: ZFS+NFS on storedge 6
Leon Koll,
As a knowledgeable outsider I can say something.
The benchmark (SFS) page specifies NFSv3/v2 support, so I question
whether you ran NFSv4. I would expect a major change in
performance just moving to NFS version 4 with ZFS.
The benchmark seems
cause more problems.
Mitchell Erblich
Sr Software Engineer
-
Joerg Schilling wrote:
>
> Erblichs <[EMAIL PROTECTED]> wrote:
>
> > Joerg Shilling,
> >
> > Putting the license issues aside for a moment.
>
Rich Teer,
I have a perfect app for the masses.
A hi-def video / audio server for the hi-def TV
and audio setup.
I would think the average person would want
to have access to 1000s of DVDs / CDs within
a small box versus taking up the full
Joerg Schilling,
Putting the license issues aside for a moment.
If there is "INTEREST" in ZFS within Linux, should
a small Linux group be formed to break ZFS down into
easily portable sections and non-portable sections?
And get a real-time/effort assessment
Toby Thain,
I am sure someone will devise a method of subdividing
the FS and running a background fsck and/or checksums on the
different file objects or ... before this becomes an issue. :)
Mitchell Erblich
-
Toby Thain wrote:
>
> >
Group,
Did Joerg Schilling bring up a bigger issue within this
discussion thread?
> And it seems that you misunderstand the way the Linux kernel is developed.
> If _you_ started a ZFS project for Linux, _you_ would need to maintain it too
> or otherwise it would not be kept up to
My two cents,
Assuming that you may pick a specific compression algorithm,
most algorithms can run at different levels/percentages of
deflation/inflation, which affects the time to compress
and/or inflate w.r.t. the CPU capacity.
Secondly, if I can add an ad
To the original poster,
FYI,
Accessing RAID drives at a constant "~70-75%" probably does not
leave enough excess for degraded mode.
A normal rule of thumb is 50 to 60% constant, to
allow excess capacity to be absorbed in degraded
mode.
An "
Ayaz Anjum and others,
I think once you move to NFS over TCP in a client/server
env, the chance of lost data is significantly
higher than just disconnecting a cable.
Scenario: before a client generates a delayed write
from its volatile DRAM client cache
Mitchell Erblich
-
Toby Thain wrote:
>
> On 28-Feb-07, at 6:43 PM, Erblichs wrote:
>
> > ZFS Group,
> >
> > My two cents..
> >
> > Currently, in my experience, it is a waste of time to try to
> > guarantee "exact&quo
ZFS Group,
My two cents..
Currently, in my experience, it is a waste of time to try to
guarantee "exact" location of disk blocks with any FS.
A simple exception is bad blocks: a neighboring block
will suffice.
Second, current disk controllers
Jeff Bonwick,
Do you agree that there is a major tradeoff in
"builds up a wad of transactions in memory"?
We lose the changes if we have an unstable
environment.
Thus, I don't quite understand why a 2-phase
approach to commits isn't done. First, t
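For what it's worth, a minimal user-space sketch of the two-phase idea
gestured at here: synchronously persist a small intent record first, then
batch the real changes and commit them later. This is only an illustration
under those assumptions, not a description of how ZFS's log/transaction-group
code actually works; the file names are made up.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Phase 1: persist a small intent record before acknowledging the write. */
    static int
    log_intent(int logfd, const char *record)
    {
        if (write(logfd, record, strlen(record)) < 0)
            return (-1);
        return (fsync(logfd));      /* stable before we say "done" */
    }

    /* Phase 2: the batched, in-memory changes are flushed later. */
    static int
    commit_batch(const char *path, const char *data)
    {
        int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0)
            return (-1);
        int rc = (write(fd, data, strlen(data)) < 0 || fsync(fd) < 0) ? -1 : 0;
        close(fd);
        return (rc);
    }

    int
    main(void)
    {
        int logfd = open("intent.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (logfd < 0)
            return (1);

        /* Phase 1: cheap synchronous intent; survives a crash. */
        if (log_intent(logfd, "append 12 bytes to data.txt\n") < 0)
            return (1);

        /* ... changes accumulate in memory; after a crash they would be
           replayed from intent.log instead of being lost ... */

        /* Phase 2: the large batched commit, amortized over many intents. */
        if (commit_batch("data.txt", "hello world\n") < 0)
            return (1);

        /* After a successful commit the intent log can be truncated. */
        close(logfd);
        return (truncate("intent.log", 0) == 0 ? 0 : 1);
    }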
Rainer Heilke,
You have 1/4 of the amount of memory that the 2900 system
is capable of (192 GB, I think).
Secondly, output from fsstat(1M) could be helpful.
Run this command over time and check whether the
values change.
Mitchell Erblich
Hey guys,
Due to long URL lookups, the DNLC was pushed to
variable-sized entries. The hit rate was dropping because of
"name too long" misses. This was done long ago while I
was at Sun, under a bug reported by me.
I don't know your usage, but you should at
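As an illustration only (not the Solaris DNLC code), the difference between
a fixed-size name field and a variable-sized entry; the name-length limit
and example name below are made up:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define NC_NAMLEN 31    /* old fixed-size name field (made-up limit) */

    /* Variable-sized entry: allocated to fit the name, so long names still cache. */
    struct var_entry {
        size_t namelen;
        char name[];        /* flexible array member */
    };

    static struct var_entry *
    var_enter(const char *name)
    {
        size_t len = strlen(name);
        struct var_entry *e = malloc(sizeof (*e) + len + 1);
        if (e == NULL)
            return (NULL);
        e->namelen = len;
        memcpy(e->name, name, len + 1);
        return (e);
    }

    int
    main(void)
    {
        const char *url = "a-very-long-url-style-name-well-past-a-31-char-limit";

        /* With a fixed-size field this is a "name too long" miss... */
        printf("fixed-size entry: %s\n",
            strlen(url) > NC_NAMLEN ? "name too long, not cached" : "cached");

        /* ...with a variable-sized entry it caches fine. */
        struct var_entry *e = var_enter(url);
        printf("variable-sized entry: cached %zu chars\n",
            e ? e->namelen : (size_t)0);
        free(e);
        return (0);
    }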
field is updated. Remember that unless you are just
touching an FS low-level (file) object, all writes are
preceded by at least 1 read.
Mitchell Erblich
Bill Sommerfeld wrote:
>
> On Thu, 2006-11-09 at 19:18 -0800, Erblichs wrote:
08 at 01:54 -0800, Erblichs wrote:
> >
> > Bill Sommerfield,
>
> that's not how my name is spelled
> >
> > Are there any existing snaps?
> no. why do you think this would matter?
> >
> > Can you have any scripts that may be
> >
Bill Sommerfield,
Are there any existing snaps?
Can you have any scripts that may be
removing aged files?
Mitchell Erblich
--
Bill Sommerfeld wrote:
>
> On a v40z running snv_51, I'm doing a "zpool replace z c1t4d0 c1t5d0".
>
> (so, w
Hi,
My suggestion is to direct any command output
that may print thousands of lines to a file.
I have not tried that number of FSs. So, my first
suggestion is to have a lot of phys mem installed.
The second item that I would be concerned with is
path tran
Hi,
How much time is a "long time"?
Second, had a snapshot been taken after the file
was created?
Are the src and dst directories in the
same slice?
What other work was being done at the time of
the move?
Were there numerous files
file within the snapshot and remove it?
Mitchell Erblich
Matthew Ahrens wrote:
>
> Erblichs wrote:
> > Now the stupid question..
> > If the snapshot is identical to the FS, I can't
> > remove files from the FS because
Hey guys,
I think I know what is going on.
A set of files was attempted to be deleted on an FS
that had almost consumed its reservation.
It failed because one or more snapshots hold
references to these files, and the snaps needed
to allocate FS space
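A toy model of the reasoning, not ZFS's on-disk structures: a block's space
comes back only when nothing, including a snapshot, still references it, so
deleting files on a nearly full, snapshotted FS may free no space at all.
The block counts and refcounts below are made up.

    #include <stdio.h>

    #define NBLOCKS 4

    /* Toy model: each "block" is referenced by the live FS and/or snapshots. */
    static int refcnt[NBLOCKS] = { 2, 2, 1, 1 };  /* 2 = live FS + one snapshot */

    /* Deleting a file drops the live FS reference on its blocks. */
    static int
    delete_blocks(int first, int last)
    {
        int freed = 0;
        for (int b = first; b <= last; b++) {
            refcnt[b]--;                    /* drop the live reference */
            if (refcnt[b] == 0)
                freed++;                    /* only now is space reclaimed */
        }
        return (freed);
    }

    int
    main(void)
    {
        /* Blocks 0-1 are also held by a snapshot; deleting frees nothing. */
        printf("freed %d of 2 snapshotted blocks\n", delete_blocks(0, 1));
        /* Blocks 2-3 are only in the live FS; deleting frees them. */
        printf("freed %d of 2 unsnapshotted blocks\n", delete_blocks(2, 3));
        return (0);
    }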
Group, et al,
I don't understand: if the problem is systemic, based on
the number of continually dirty pages and the stress of cleaning
those pages, then why ...
If the problem is FS-independent, because any number of
different installed FSs can equally consume
Nicolas Williams wrote:
>
> On Fri, Oct 13, 2006 at 09:22:53PM -0700, Erblichs wrote:
> > For extremely large files (25 to 100GBs), that are accessed
> > sequentially for both read & write, I would expect 64k or 128k.
>
> Larger files accessed sequentially d
Group,
I am not sure I agree with the 8k size.
Since "recordsize" is based on the size of filesystem blocks
for large files, my first consideration is what will be
the max size of the file object.
For extremely large files (25 to 100GBs), that are accessed
Group,
If there is a bad vfs ops template, why
wouldn't you just return(error) versus
trying to create the vnode ops template?
My suggestion is: after the cmn_err(),
then return(error);
Mitchell Erblich
-
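In other words, something along these lines; a sketch only, where the
template-check and init function names are placeholders and fprintf()
stands in for the kernel's cmn_err():

    #include <errno.h>
    #include <stdio.h>

    /* Placeholder for validating the vfs ops template. */
    static int
    check_vfs_ops_template(int bad_template)
    {
        return (bad_template ? EINVAL : 0);
    }

    static int
    init_fs_ops(int bad_template)
    {
        int error = check_vfs_ops_template(bad_template);

        if (error != 0) {
            /* In the kernel: cmn_err(CE_WARN, "bad vfs ops template"); */
            fprintf(stderr, "bad vfs ops template\n");
            return (error);   /* bail out instead of building the vnode ops */
        }

        /* ... only create the vnode ops template when the vfs template is good ... */
        return (0);
    }

    int
    main(void)
    {
        return (init_fs_ops(1) != 0);
    }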
or steal it from another's cache.
Mitchell Erblich
-
Frank Hofmann wrote:
>
> On Thu, 5 Oct 2006, Erblichs wrote:
>
> > Casper Dik,
> >
> > After my posting, I assumed that a code question should be
> >
Casper Dik,
After my posting, I assumed that a code question should be
directed to the ZFS code alias, so I apologize to the people
who don't read code. However, since the discussion is here,
I will post a code proof here. Just use "time program" to get
a g
Casper Dik,
Yes, I am familiar with Bonwick's slab allocators and tried
them for a wire-speed test of 64-byte pieces on 1Gb, then
100Mb, and lastly 10Mb Ethernet. My results were not
encouraging. I assume it has improved over time.
First, let me ask what ha
Group,
At least one location:
When adding a new dva node into the tree, a kmem_alloc is done with
a KM_SLEEP argument.
Thus, this process thread could block waiting for memory.
I would suggest adding a pre-allocated pool of dva nodes.
When a new
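Roughly what is being suggested, as a user-space sketch under those
assumptions: malloc() stands in for kmem_alloc(), and the node type and
pool size are made up. The hot path pulls from a pre-allocated free list
and never sleeps; the refill, which may sleep, happens elsewhere.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for a dva tree node; the real structure is ZFS-specific. */
    struct dva_node {
        struct dva_node *next;
        /* ... payload ... */
    };

    #define POOL_TARGET 64

    static struct dva_node *pool_head;
    static int pool_count;
    static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Hot path: never blocks for memory if the pool has been kept topped up. */
    static struct dva_node *
    dva_node_get(void)
    {
        pthread_mutex_lock(&pool_lock);
        struct dva_node *n = pool_head;
        if (n != NULL) {
            pool_head = n->next;
            pool_count--;
        }
        pthread_mutex_unlock(&pool_lock);
        return (n);     /* NULL means the caller must fall back (or fail) */
    }

    /* Background/refill path: this is where any sleeping allocation belongs. */
    static void
    dva_pool_refill(void)
    {
        pthread_mutex_lock(&pool_lock);
        while (pool_count < POOL_TARGET) {
            struct dva_node *n = malloc(sizeof (*n));   /* kmem_alloc(KM_SLEEP) */
            if (n == NULL)
                break;
            n->next = pool_head;
            pool_head = n;
            pool_count++;
        }
        pthread_mutex_unlock(&pool_lock);
    }

    int
    main(void)
    {
        dva_pool_refill();
        struct dva_node *n = dva_node_get();
        printf("got node %p, %d left in pool\n", (void *)n, pool_count);
        free(n);
        return (0);
    }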