Bob Friesenhahn writes:
> On Fri, 15 Feb 2008, Roch Bourbonnais wrote:
> >>> What was the interlace on the LUN ?
> >
> > The question was about LUN interlace not interface.
> > 128K to 1M works better.
>
> The "segment size"
On 20 Feb 2008, at 23:03, Robert Milkowski wrote:
> Hello Roch,
>
> Friday, February 15, 2008, 10:51:50 AM, you wrote:
>
> RB> On 10 Feb 2008, at 12:51, Robert Milkowski wrote:
>
>>> Hello Nathan,
>>>
>>> Thursday, February 7, 2008, 6:54:39 AM,
I would imagine that Linux behaves more like a ZFS that does not flush
caches (google "Evil zfs_nocacheflush").
If you can NFS tar-extract files on Linux faster than one file per
rotation latency, that is suspicious.
-r
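For reference, a minimal sketch of how that tunable is usually set on releases
that have it (illustration only; skipping cache flushes trades data integrity
for speed and is only sane behind battery/flash-protected write caches):

   # /etc/system -- takes effect at the next boot
   set zfs:zfs_nocacheflush = 1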
On 26 Feb 2008, at 13:16, msl wrote:
>> For Linux NFS service, it's a
On 28 Feb 2008, at 20:14, Jonathan Loran wrote:
>
> Quick question:
>
> If I create a ZFS mirrored pool, will the read performance get a
> boost?
> In other words, will the data/parity be read round robin between the
> disks, or do both mirrored sets of data and parity get read off of
> both
On 28 Feb 2008, at 21:00, Jonathan Loran wrote:
>
>
> Roch Bourbonnais wrote:
>>
>> On 28 Feb 2008, at 20:14, Jonathan Loran wrote:
>>
>>>
>>> Quick question:
>>>
>>> If I create a ZFS mirrored pool, will the read performance get
On 1 Mar 2008, at 22:14, Bill Shannon wrote:
> Jonathan Edwards wrote:
>>
>> On Mar 1, 2008, at 3:41 AM, Bill Shannon wrote:
>>> Running just plain "iosnoop" shows accesses to lots of files, but
>>> none
>>> on my zfs disk. Using "iosnoop -d c1t1d0" or "iosnoop -m
>>> /export/home/shannon"
>>>
On 3 Mar 2008, at 09:58, Robert Milkowski wrote:
> Hello zfs-discuss,
>
>
> I had a zfs file system with recordsize=8k and a couple of large
> files. While doing zfs send | zfs recv I noticed it's doing
> about 1500 IOPS but with block size 8K so total throughput
> wasn't impr
On 30 Mar 2008, at 15:57, Kyle McDonald wrote:
> Fred Oliver wrote:
>>
>> Marion Hakanson wrote:
>>> [EMAIL PROTECTED] said:
I am having trouble destroying a zfs file system (device busy) and
fuser
isn't telling me who has the file open: . . .
This situation appears to occur e
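A rough sketch of the usual first steps for that "device busy" case (dataset
name hypothetical); fuser -c reports processes holding the mount point itself,
which a plain fuser on a single file can miss:

   fuser -c /tank/home/user1       # who holds the mount point (cwd, open files)
   zfs unmount -f tank/home/user1  # force the unmount if nothing critical shows up
   zfs destroy tank/home/user1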
Bob Friesenhahn writes:
> On Tue, 15 Apr 2008, Mark Maybee wrote:
> > going to take 12sec to get this data onto the disk. This "impedance
> > mis-match" is going to manifest as pauses: the application fills
> > the pipe, then waits for the pipe to empty, then starts writing again.
> > Note t
On 28 Jun 2008, at 05:14, Robert Milkowski wrote:
> Hello Mark,
>
> Tuesday, April 15, 2008, 8:32:32 PM, you wrote:
>
> MM> The new write throttle code put back into build 87 attempts to
> MM> smooth out the process. We now measure the amount of time it
> takes
> MM> to sync each transaction g
Robert Milkowski writes:
> Hello Roch,
>
> Saturday, June 28, 2008, 11:25:17 AM, you wrote:
>
>
> RB> I suspect, a single dd is cpu bound.
>
> I don't think so.
>
We're nearly so, as you show. More below.
> See below one with a stri
Wee Yeh Tan writes:
> On 9/5/06, Torrey McMahon <[EMAIL PROTECTED]> wrote:
> > This is simply not true. ZFS would protect against the same type of
> > errors seen on an individual drive as it would on a pool made of HW raid
> > LUN(s). It might be overkill to layer ZFS on top of a LUN that is
Torrey McMahon writes:
> Nicolas Dorfsman wrote:
> >> The hard part is getting a set of simple
> >> requirements. As you go into
> >> more complex data center environments you get hit
> >> with older Solaris
> >> revs, other OSs, SOX compliance issues, etc. etc.
> >> etc. The world where
zfs "hogs all the ram" under a sustained heavy write load. This is
being tracked by:
6429205 each zpool needs to monitor its throughput and throttle heavy
writers
-r
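Until that is fixed, the usual workaround is to cap the ARC; a minimal
/etc/system sketch (the 2 GB value is only an example):

   # /etc/system -- limit the ZFS ARC to 2 GB (bytes), effective at boot
   set zfs:zfs_arc_max = 0x80000000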
Jim Sloey writes:
> > Roch - PAE wrote:
> > The hard part is getting a set of simple requirements. As you go into
> > more complex data center environments you get hit with older Solaris
> > revs, other OSs, SOX compliance issues, etc. etc. etc. The world where
Anton B. Rang writes:
> The bigger problem with system utilization for software
RAID is the cache, not the CPU cycles proper. Simply
preparing to write 1 MB of data will flush half of a 2 MB L2
cache. This hurts overall system performance far more than
the few microseconds
Jill Manfield writes:
> My customer is running java on a ZFS file system. His platform is Solaris
> 10 x86 SF X4200. When he enabled ZFS, his memory of 18 gigs drops to 2 gigs
> rather quickly. I had him do a # ps -e -o pid,vsz,comm | sort -n +1 and it
> came back:
>
> The culprit app
workloads.
Performance, Availability & Architecture Engineering
Roch Bourbonnais                Sun Microsystems, Icnc-Grenoble
Senior Performance Analyst      180, Avenue De L'Europe, 38330,
Montbonnot Saint Martin, France
Hi Gino,
Can you post the 'zpool status' for each pool and 'zfs get all'
for each fs? Any interesting data in the dmesg output?
-r
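For completeness, the requested commands in one sketch (pool and fs names
hypothetical):

   zpool status -v tank
   zfs get all tank/a
   dmesg | tail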
Gino Ruopolo writes:
> Other test, same setup.
>
>
> SOLARIS10:
>
> zpool/a filesystem containing over 10Millions subdirs each containing
> 10 fil
I've discussed this with some guys I know, and we decided that your
admin must have given you an incorrect description.
BTW, that config falls outside of best practice; the current thinking
is to use raid-z groups of not much more than 10 disks.
You may stripe multiple such groups into a pool
Jürgen Keil writes:
> > ZFS 11.0 on Solaris release 06/06, hangs systems when
> > trying to copy files from my VXFS 4.1 file system.
> > any ideas what this problem could be?.
>
> What kind of system is that? How much memory is installed?
>
> I'm able to hang an Ultra 60 with 256 MByte
As an alternative, I thought this would be relevant to the
discussion:
Bug ID: 6478980
Synopsis: zfs should support automount property
In other words, do we really need to mount 1 FS in a
snap, or do we just need the system to be up quickly and then
mount on demand?
-r
Ch
Erblichs writes:
> Hi,
>
> My suggestion is to direct any command output that may
> print thousands of lines to a file.
>
> I have not tried that number of FSs. So, my first
> suggestion is to have a lot of phys mem installed.
I seem to recall 64K per FS and being worked on t
Luke Lonergan writes:
> Robert,
>
> > I belive it's not solved yet but you may want to try with
> > latest nevada and see if there's a difference.
>
> It's fixed in the upcoming Solaris 10 U3 and also in Solaris Express
> post build 47 I think.
>
> - Luke
>
This one is not yet fi
Chris Gerhard writes:
> One question that keeps coming up in my discussions about ZFS is the lack of
> user quotas.
>
> Typically this comes from people who have many tens of thousands
> (30,000 - 100,000) of users where they feel that having a file system
> per user will not be manageab
How much memory in the V210 ?
UFS will recycle its own pages while creating files that
are big. ZFS working against a large heap of free memory will
cache the data (why not?). The problem is that ZFS does not
know when to stop. During the subsequent memory/cache
reclaim, ZFS is potentially not
Here is my take on this
http://blogs.sun.com/roch/entry/zfs_and_directio
-r
Marlanne DeLaSource writes:
> I had a look at various topics covering ZFS direct I/O, and this topic is
> sometimes mentioned, and it was not really clear to me.
>
> Correct me if I'm
Tomas Ögren writes:
> On 13 November, 2006 - Sanjeev Bagewadi sent me these 7,1K bytes:
>
> > Tomas,
> >
> > comments inline...
> >
> >
> > >>arc::print struct arc
> > >>
> > >>
> > >{
> > > anon = ARC_anon
> > > mru = ARC_mru
> > > mru_ghost = ARC_mru_gh
prone (from the client side point of view).
-r
Joe Little writes:
> On 11/21/06, Matthew B Sweeney - Sun Microsystems Inc.
> <[EMAIL PROTECTED]> wrote:
> >
> > Roch,
> >
> > Am I barking up the wrong tree? Or is ZFS over NFS not the right
> >
Change ZFS to
anything else and it won't change the conclusion.
NFS/AnyFS is a bad combination for single threaded tar x.
-r
Al Hopper writes:
> On Tue, 21 Nov 2006, Joe Little wrote:
>
> > On 11/21/06, Roch - PAE <[EMAIL PROTECTED]> wrote:
> > >
> >
Nope, wrong conclusion again.
This large performance degradation has nothing whatsoever to
do with ZFS. I have not seen data that would show a possible
slowness on the part of ZFS vs AnyFS on the
backend; there may well be, and that would be an entirely
different discussion to the large slowdo
Al Hopper writes:
> On Thu, 23 Nov 2006, Roch - PAE wrote:
>
> >
> > Hi Al, You conclude:
> >
> >No problem there! ZFS rocks. NFS/ZFS is a bad combination.
> >
> > But my reading of your data leads to:
> >
> >single thread
B/s, the best run for ZFS was approx 58 MB/s. Not a
> huge difference for sure, but enough to make you think about switching.
> This was single stream over a 10GE link. (x4600 mounting vols from an x4500)
>
> Matt
>
> Bill Moore wrote:
> > On Thu, Nov 23, 2006 at
How about attaching the slow storage and kicking off a
scrub during the night? Then detaching in the morning?
Downside: you are running an unreplicated pool during the
day. Storage side errors won't be recoverable.
-r
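A minimal sketch of that attach/scrub/detach cycle (pool and device names
hypothetical):

   zpool attach tank c1t0d0 c2t0d0   # evening: mirror onto the slow disk
   zpool scrub tank                  # verify overnight, after the resilver
   zpool status -v tank              # check progress and repaired errors
   zpool detach tank c2t0d0          # morning: back to the unreplicated pool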
Albert Shih writes:
> On 04/12/2006 at 21:24:26 -0800, Anton B. Rang wrote
> Why is everyone strongly recommending using a whole disk (not part
> of a disk) when creating zpools / ZFS file systems?
One thing is performance; ZFS can enable/disable the write cache in the disk
at will if it has full control over the entire disk.
ZFS will also flush the WC when nec
I got around that some time ago with a little hack:
maintain a directory with soft links to disks of interest:
ls -l .../mydsklist
total 50
lrwxrwxrwx 1 cx158393 staff 17 Apr 29 2006 c1t0d0s1 ->
/dev/dsk/c1t0d0s1
lrwxrwxrwx 1 cx158393 staff 18 Apr 29 2006 c1t16d0s1 ->
/de
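A rough shell sketch of building such a link directory (paths follow the
listing above; the disk list itself is just an example):

   mkdir -p $HOME/mydsklist
   for d in c1t0d0s1 c1t16d0s1; do
           ln -s /dev/dsk/$d $HOME/mydsklist/$d
   done
   ls -l $HOME/mydsklist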
Anton B. Rang writes:
> If your database performance is dominated by sequential reads, ZFS may
> not be the best solution from a performance perspective. Because ZFS
> uses a write-anywhere layout, any database table which is being
> updated will quickly become scattered on the disk, so that s
Maybe this will help:
http://blogs.sun.com/roch/entry/zfs_and_directio
-r
dudekula mastan writes:
> Hi All,
>
> We have the directio() system call to do DIRECT IO on UFS file systems. Does
> anyone know how to do DIRECT IO on a ZFS file system?
>
> Re
The latency issue might improve with this rfe
6471212 need reserved I/O scheduler slots to improve I/O latency of critical
ops
-r
Tom Duell writes:
> Group,
>
> We are running a benchmark with 4000 users
> simulating a hospital management system
> running on Solaris 10 6/06 on USIV+ bas
Anton B. Rang writes:
> It took manufacturers of SCSI drives some years to get this
> right. Around 1997 or so we were still seeing drives at my former
> employer that didn't properly flush their caches under all
> circumstances (and had other "interesting" behaviours WRT caching).
>
> Lot
Right on. And you might want to capture this in a blog for
reference. The permalink will be quite useful.
We did have a use case for zil synchronicity which was a
big user-controlled transaction:
turn zil off
do tons of things to the filesystem.
big sync
turn zil
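A rough sketch of that sequence using the Nevada-era global zil_disable
tunable (illustration only; it applied to file systems mounted after the
change and predates any per-dataset control):

   echo 'zil_disable/W0t1' | mdb -kw   # turn zil off (new mounts only)
   # ... remount, do tons of things to the filesystem ...
   sync                                # big sync
   echo 'zil_disable/W0t0' | mdb -kw   # turn zil back on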
Was it over NFS ?
Was zil_disable set on the server ?
If it's yes/yes, I still don't know for sure if that would
be grounds for a causal relationship, but I would certainly
be looking into it.
-r
Trevor Watson writes:
> Anton B. Rang wrote:
> > Were there any errors reported in /var/adm/messa
Jason J. W. Williams writes:
> Hi Jeremy,
>
> It would be nice if you could tell ZFS to turn off fsync() for ZIL
> writes on a per-zpool basis. That being said, I'm not sure there's a
> consensus on that...and I'm sure not smart enough to be a ZFS
> contributor. :-)
>
> The behavior is
Shouldn't there be a big warning when configuring a pool
with no redundancy, and/or should that not require a -f flag?
-r
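To illustrate the point, a plain stripe is accepted silently today, no -f
required (device names hypothetical):

   zpool create tank c1t0d0 c1t1d0          # non-redundant stripe, no warning
   zpool create tank mirror c1t0d0 c1t1d0   # the redundant alternative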
Al Hopper writes:
> On Sun, 17 Dec 2006, Ricardo Correia wrote:
>
> > On Friday 15 December 2006 20:02, Dave Burleson wrote:
> > > Does anyone have a document that descr
Jonathan Edwards writes:
> On Dec 19, 2006, at 07:17, Roch - PAE wrote:
>
> >
> > Shouldn't there be a big warning when configuring a pool
> > with no redundancy and/or should that not require a -f flag ?
>
> why? what if the redundancy is below the
Robert Milkowski writes:
> Hello przemolicc,
>
> Friday, December 22, 2006, 10:02:44 AM, you wrote:
>
> ppf> On Thu, Dec 21, 2006 at 04:45:34PM +0100, Robert Milkowski wrote:
> >> Hello Shawn,
> >>
> >> Thursday, December 21, 2006, 4:28:39 PM, you wrote:
> >>
> >> SJ> All,
> >>
>
It seems though that the critical feature we need was optional in the
SBC-2 spec.
So we still need some development to happen on the storage end.
But we'll get there...
On 19 Dec 2006, at 20:59, Jason J. W. Williams wrote:
Hi Roch,
That sounds like a most excellent resolution
I've just generated some data for an upcoming blog entry on
the subject. This is about a small-file tar extract:
All times are elapsed (single 72GB SAS disk)
Local and memory based filesystems
tmpfs : 0.077 sec
ufs : 0.25 sec
zfs : 0.12 sec
NFS service th
Anton B. Rang writes:
> >> In our recent experience RAID-5 due to the 2 reads, a XOR calc and a
> >> write op per write instruction is usually much slower than RAID-10
> >> (two write ops). Any advice is greatly appreciated.
> >
> > RAIDZ and RAIDZ2 does not suffer from this malady (the RAID
DIRECT IO is a set of performance optimisations to circumvent
shortcomings of a given filesystem.
Check out
http://blogs.sun.com/roch/entry/zfs_and_directio
Then I would be interested to know what the expectation is for ZFS/DIO.
On 5 Jan 2007, at 06:39, dudekula mastan wrote:
Hi
Just posted:
http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
Performance, Availability & Architecture Engineering
Roch Bourbonnais                Sun Microsystems,
Hans-Juergen Schnitzer writes:
> Roch - PAE wrote:
> >
> > Just posted:
> >
> > http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
> >
> >
>
> What role does network latency play? If I understand you right,
> even a low-late
Dennis Clarke writes:
>
> > On Mon, Jan 08, 2007 at 03:47:31PM +0100, Peter Schuller wrote:
> >> > http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
> >>
> >> So just to confirm; disabling the zil *ONLY* breaks the semantics of
> >>
Jonathan Edwards writes:
>
> On Jan 5, 2007, at 11:10, Anton B. Rang wrote:
>
> >> DIRECT IO is a set of performance optimisations to circumvent
> >> shortcomings of a given filesystem.
> >
> > Direct I/O as generally understood (i.e. not UFS-specific) is an
> > optimization which al
If some aspect of the load is writing a large amount of data
into the pool (through the memory cache, as opposed to the
zil) and that leads to a frozen system, I think that a
possible contributor should be:
6429205 each zpool needs to monitor its throughput and throttle heavy
wri
Jason J. W. Williams writes:
> Hi Anantha,
>
> I was curious why segregating at the FS level would provide adequate
> I/O isolation? Since all FS are on the same pool, I assumed flogging a
> FS would flog the pool and negatively affect all the other FS on that
> pool?
>
> Best Regards,
From: Gregory Shaw <[EMAIL PROTECTED]>
Sender: [EMAIL PROTECTED]
To: Mike Gerdts <[EMAIL PROTECTED]>
Cc: ZFS filesystem discussion list ,
[EMAIL PROTECTED]
Subject: Re: [zfs-discuss] ZFS and databases
Date: Thu, 11 May 2006 13:15:48 -0600
Regarding directio and quickio, is there
Jeff Bonwick writes:
> > Are you saying that copy-on-write doesn't apply for mmap changes, but
> > only file re-writes? I don't think that gels with anything else I
> > know about ZFS.
>
> No, you're correct -- everything is copy-on-write.
>
Maybe the confusion comes from:
mma
ce it never overwrites the same
> pre-allocated block, i.e. the tablespace becomes fragmented in that
> case no matter what.
Is "fragmented" the right word here?
Anyway: random writes can be turned into sequential ones.
>
> Also, in order to write a partial update to a new block, zfs needs the
&
Tao Chen writes:
> On 5/12/06, Roch Bourbonnais - Performance Engineering
> <[EMAIL PROTECTED]> wrote:
> >
> > From: Gregory Shaw <[EMAIL PROTECTED]>
> > Regarding directio and quickio, is there a way with ZFS to skip the
> > system buffe
Hi Robert,
Could you try 35 concurrent dd each issuing 128K I/O ?
That would be closer to how ZFS would behave.
-r
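A sketch of that suggestion in ksh/bash syntax (device path taken from the
test quoted below; oseek staggers the writers so they don't overlap, and this
is of course destructive to whatever is on that slice):

   i=1
   while [ $i -le 35 ]; do
           dd if=/dev/zero of=/dev/rdsk/c5t50E0119495A0d0s0 bs=128k \
              count=8192 oseek=$((i * 8192)) &
           i=$((i + 1))
   done
   wait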
Robert Milkowski writes:
> Well I have just tested UFS on the same disk.
>
> bash-3.00# newfs -v /dev/rdsk/c5t50E0119495A0d0s0
> newfs: construct a new file system /dev/rd
'ZFS optimizes random writes versus potential sequential reads.'
Now I don't think the current readahead code is where we
want it to be yet but, in the same way that enough
concurrent 128K I/O can saturate a disk (I sure hope that
Milkowski's data will confirm this, ot
You could start with the ARC paper, Megiddo/Modha FAST'03
conference. ZFS uses a variation of that. It's an interesting
read.
-r
Franz Haberhauer writes:
> Gregory Shaw wrote On 05/11/06 21:15,:
> > Regarding directio and quickio, is there a way with ZFS to skip the
> > system buffer cache?
Peter Rival writes:
> Roch Bourbonnais - Performance Engineering wrote:
> > Tao Chen writes:
> > > On 5/12/06, Roch Bourbonnais - Performance Engineering
> > > <[EMAIL PROTECTED]> wrote:
> > > >
> > > > From: Gregory Shaw <[
Franz Haberhauer writes:
> > 'ZFS optimizes random writes versus potential sequential reads.'
>
> This remark focused on the allocation policy during writes,
> not the readahead that occurs during reads.
> Data that are rewritten randomly but in place in a sequential,
> contiguous file (l
Anton B. Rang writes:
> >Were the benefits coming from extra concurrency (no
> >single writer lock) or avoiding the extra copy to page cache or
> >from too much readahead that is not used before pages need to
> >be recycled.
>
> With QFS, a major benefit we see for databases and direct I/
Robert Milkowski writes:
> Hello Roch,
>
> Friday, May 12, 2006, 2:28:59 PM, you wrote:
>
> RBPE> Hi Robert,
>
> RBPE> Could you try 35 concurrent dd each issuing 128K I/O ?
> RBPE> That would be closer to how ZFS would behave.
>
> You mean
Nicolas Williams writes:
> On Fri, May 12, 2006 at 05:23:53PM +0200, Roch Bourbonnais - Performance
> Engineering wrote:
> > For read it is an interesting concept. Since
> >
> >Reading into cache
> >Then copy into user space
> >th
that issue spa_sync, but only one of them actually becomes
activated. So the script will print out some spurious lines of output
at times. I measure I/O with the script while this runs:
dd if=/dev/zero of=/zfs2/roch/f1 bs=1024k count=8000
And I see:
1431 MB; 23723 ms of spa
Gregory Shaw writes:
> I really like the below idea:
> - the ability to defragment a file 'live'.
>
> I can see instances where that could be very useful. For instance,
> if you have multiple LUNs (or spindles, whatever) using ZFS, you
> could re-optimize large files to spre
Anton B. Rang writes:
> One issue is what we mean by "saturation." It's easy to
bring a disk to 100% busy. We need to keep this discussion
in the context of a workload. Generally when people care
about streaming throughput of a disk, it's because they are
reading or writing a single large file
Robert Milkowski writes:
> Hello Roch,
>
> Monday, May 15, 2006, 3:23:14 PM, you wrote:
>
> RBPE> The question put forth is whether the ZFS 128K blocksize is sufficient
> RBPE> to saturate a regular disk. There is great body of evidence that shows
> RBPE> t
disk block will be.
So everything is a tradeoff and at this point 128K appears
sufficiently large ... at least for a while.
-r
____
Roch Bourbonnais                Sun Microsystems, Icnc-Grenoble
Se
Gregory Shaw writes:
> Rich, correct me if I'm wrong, but here's the scenario I was thinking
> of:
>
> - A large file is created.
> - Over time, the file grows and shrinks.
>
> The anticipated layout on disk due to this is that extents are
> allocated as the file changes. The extent
Cool, I'll try the tool, and for good measure the data I
posted was sequential access (from a logical point of view).
As for the physical layout, I don't know; it's quite
possible that ZFS has laid out all blocks sequentially on
the physical side; so certainly this is not a good way
Scott Dickson writes:
> How does (or does) ZFS maintain sequentiality of the blocks of a file.
> If I mkfile on a clean UFS, I likely will get contiguous blocks for my
> file, right? A customer I talked to recently has a desire to access
you would get up to maxcontig worth of sequential b
Chris Csanady writes:
> On 5/26/06, Bart Smaalders <[EMAIL PROTECTED]> wrote:
> >
> > There are two failure modes associated with disk write caches:
>
> Failure modes aside, is there any benefit to a write cache when command
> queueing is available? It seems that the primary advantage is i
Hi Grant, this may provide some guidance for your setup;
it's somewhat theoretical (take it for what it's worth) but
it spells out some of the tradeoffs in the RAID-Z vs Mirror
battle:
http://blogs.sun.com/roller/page/roch?entry=when_to_and_not_to
As for serving NFS,
Anton wrote:
(For what it's worth, the current 128K-per-I/O policy of ZFS really
hurts its performance for large writes. I imagine this would not be
too difficult to fix if we allowed multiple 128K blocks to be
allocated as a group.)
I'm not taking a stance on this, but if I keep a co
> I think ZFS should do fine in streaming mode also, though there are
> currently some shortcomings, such as the mentioned 128K I/O size.
It may eventually. The lack of direct I/O may also be an issue, since
some of our systems don't have enough main memory bandwidth to support
data be
Robert Milkowski writes:
>
>
>
> btw: just a quick thought - why not write one block only on 2 disks
> (+ checksum on one disk) instead of spreading one fs block to N-1
> disks? That way zfs could read many fs blocks at the same time in case
> of larger raid-z pools?
That's what y
You propose ((2-way mirrored) x RAID-Z (3+1)). That gives
you 3 data disks' worth, and you'd have to lose 2 disks in
each mirror (4 total) to lose data.
For the random read load you describe, I would expect the
per-device cache to work nicely; that is, file blocks stored
at some given
Tao Chen writes:
> Hello Robert,
>
> On 6/1/06, Robert Milkowski <[EMAIL PROTECTED]> wrote:
> > Hello Anton,
> >
> > Thursday, June 1, 2006, 5:27:24 PM, you wrote:
> >
> > ABR> What about small random writes? Won't those also require reading
> > ABR> from all disks in RAID-Z to read the
I'm puzzled by 2 things.
Naively I'd think a write_cache should not help a throughput
test, since the cache should fill up, after which you should still be
throttled by the physical drain rate. You clearly show that
it helps; does anyone know why/how a cache helps throughput?
And the second thing...q
Gehr, Chuck R writes:
> One word of caution about random writes. From my experience, they are
> not nearly as fast as sequential writes (like 10 to 20 times slower)
> unless they are carefully aligned on the same boundary as the file
> system record size. Otherwise, there is a heavy read pena
# ptime tar xf linux-2.2.22.tar
ptime tar xf linux-2.2.22.tar
real       50.292
user        1.019
sys        11.417
# ptime tar xf linux-2.2.22.tar
ptime tar xf linux-2.2.22.tar
real       56.833
user        1.056
sys        11.581
#
avg time waiting for async writes is
- Description of why I don't need directio, quickio, or ODM.
The 2 main benefits that came out of using directio were
reducing memory consumption by avoiding the page cache AND
bypassing the UFS single writer behavior.
ZFS does not have the single writer lock.
As for memory, the UFS code
1.033
sys        11.405
-r
>
> On 5/11/06, Roch Bourbonnais - Performance Engineering
> <[EMAIL PROTECTED]> wrote:
> >
> >
> > # ptime tar xf linux-2.2.22.tar
> > ptime tar xf linux-2.2.22.tar
> >
> > real 50.292
>
Certainly something we'll have to tackle. How about a zpool
memstat (or zpool -m iostat) variation that would report at
least freemem and the amount of evictable cached data?
Would that work for you ?
-r
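In the meantime the raw numbers are visible through kstat; a sketch (this
shows the current ARC size and target plus free memory, though not the
evictable fraction such a report would add):

   kstat -p zfs:0:arcstats:size zfs:0:arcstats:c   # ARC size and target, bytes
   kstat -p unix:0:system_pages:freemem            # free memory, in pages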
Philip Beevers writes:
> Roch Bourbonnais - Performance Engineeri
the required memory pressure on ZFS.
Sounds like another bug we'd need to track;
-r
Daniel Rock writes:
> Roch Bourbonnais - Performance Engineering wrote:
> > As already noted, this need not be different from other FSes
> > but is still an interesting question. I