Robert Milkowski writes:
> On 01/04/2010 20:58, Jeroen Roodhart wrote:
> >
> >> I'm happy to see that it is now the default and I hope this will cause the
> >> Linux NFS client implementation to be faster for conforming NFS servers.
> >>
> > Interesting thing is that apparently default
When we use one vmod, both machines are finished in about 6min45,
zilstat maxes out at about 4200 IOPS.
Using four vmods it takes about 6min55, zilstat maxes out at 2200
IOPS.
Can you try 4 concurrent tar to four different ZFS filesystems (same
pool).
-r
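Something along these lines should do it (a minimal sketch; the pool name,
filesystem names and tar archive below are hypothetical, adjust to your setup):

  # one tar per filesystem, all four running in parallel against the same pool
  for i in 1 2 3 4; do
      ( cd /tank/fs$i && tar xf /var/tmp/data.tar ) &
  done
  wait    # then compare elapsed time and zilstat against the single-fs run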
v writes:
> Hi,
> A basic question regarding how the ZIL works:
> For asynchronous writes, will the ZIL be used?
> For synchronous writes, if the I/O is small, will the whole I/O be placed
> in the ZIL, or just a pointer saved into the ZIL? What about large I/Os?
>
Let me try.
ZIL : code and data stru
Ross Walker writes:
> On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais
> wrote:
>
> >
> > On 27 May 2010, at 07:03, Brent Jones wrote:
> >
> >> On Wed, May 26, 2010 at 5:08 AM, Matt Connolly
> >> wrote:
> >>> I've set u
uestion earlier, but got no answer: while an
iSCSI target is presented with WCE enabled, does it respect the flush
command?
Yes. I would like to say "obviously" but it's been anything
but.
-r
Ross Walker writes:
> On Aug 4, 2010, at 3:52 AM, Roch wrote:
>
> >
> > Ro
Ross Walker writes:
> On Aug 4, 2010, at 9:20 AM, Roch wrote:
>
> >
> >
> > Ross Asks:
> > So on that note, ZFS should disable the disks' write cache,
> > not enable them despite ZFS's COW properties because it
> > sho
Ross Walker writes:
> On Aug 4, 2010, at 12:04 PM, Roch wrote:
>
> >
> > Ross Walker writes:
> >> On Aug 4, 2010, at 9:20 AM, Roch wrote:
> >>
> >>>
> >>>
> >>> Ross Asks:
> >>> So on that note,
Tim Cook writes:
> On Sun, Dec 27, 2009 at 6:43 PM, Bob Friesenhahn <
> bfrie...@simple.dallas.tx.us> wrote:
>
> > On Sun, 27 Dec 2009, Tim Cook wrote:
> >
> >>
> >> That is ONLY true when there's significant free space available/a fresh
> >> pool. Once those files have been deleted and
If bit rot occurs in X and the disk
holding Y dies, resilvering would generate garbage for Y.
This seems to force us to chunk up disks with every unit
checksummed even if freed. Secure deletion becomes a problem
as well. And you can end up madly searching for free
stripes, repositioning old blocks in p
ic
> to the IMAP server (called skiplist), and some are small flat files
> that are just rewritten. All they have in common is activity and
> frequent locking. They can be relocated as a whole.
>
> > > The second one is from:
> > >
> > >http://blo
Sašo Kiselkov writes:
> On 06/12/2012 05:37 PM, Roch Bourbonnais wrote:
> >
> > So the xcalls are a necessary part of memory reclaiming, when one needs to
> > tear down the TLB entry mapping the physical memory (which can from here
> > on be repurposed).
> &
Brandon High writes:
> On Tue, Nov 23, 2010 at 9:55 AM, Krunal Desai wrote:
> > What is the "upgrade path" like from this? For example, currently I
>
> The ashift is set in the pool when it's created and will persist
> through the life of that pool. If you set it at pool creation, it will
On 7 Feb 2011, at 06:25, Richard Elling wrote:
> On Feb 5, 2011, at 8:10 AM, Yi Zhang wrote:
>
>> Hi all,
>>
>> I'm trying to achieve the same effect of UFS directio on ZFS and here
>> is what I did:
>
> Solaris UFS directio has three functions:
> 1. improved async code path
> 2
On 7 Feb 2011, at 17:08, Yi Zhang wrote:
> On Mon, Feb 7, 2011 at 10:26 AM, Roch wrote:
>>
>> On 7 Feb 2011, at 06:25, Richard Elling wrote:
>>
>>> On Feb 5, 2011, at 8:10 AM, Yi Zhang wrote:
>>>
>>>> Hi all,
>>>>
>&g
Edward Ned Harvey writes:
> Based on observed behavior measuring performance of dedup, I would say, some
> chunk of data and its associated metadata seem have approximately the same
> "warmness" in the cache. So when the data gets evicted, the associated
> metadata tends to be evicted too. S
Thomas, for long latency fat links, it should be quite
beneficial to set the socket buffer on the receive side
(instead of having users tune tcp_recv_hiwat).
Throughput of a TCP connection is gated by
"receive socket buffer / round-trip time".
Could that be Ross' problem?
-r
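As a back-of-the-envelope check (the numbers below are only an example,
not taken from Ross' setup):

  # ceiling ~= receive socket buffer / round-trip time
  # e.g. a 64 KB buffer over a 100 ms RTT link:
  echo "64 / 0.100" | bc -l        # ~640 KB/s, i.e. roughly 5 Mbit/s
  # on the Solaris receive side, the system-wide default can be raised with:
  ndd -set /dev/tcp tcp_recv_hiwat 1048576

Setting the buffer per-socket on the receive side (SO_RCVBUF) avoids having
to change the system-wide tcp_recv_hiwat default.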
Ross Smith wr
Tim writes:
> On Sat, Nov 29, 2008 at 11:06 AM, Ray Clark <[EMAIL PROTECTED]>wrote:
>
> > Please help me understand what you mean. There is a big difference between
> > being unacceptably slow and not working correctly, or between being
> > unacceptably slow and having an implementation pro
Bill Sommerfeld writes:
> On Wed, 2008-10-22 at 10:30 +0100, Darren J Moffat wrote:
> > I'm assuming this is local filesystem rather than ZFS backed NFS (which
> > is what I have).
>
> Correct, on a laptop.
>
> > What has setting the 32KB recordsize done for the rest of your home
> > di
Scott Laird writes:
> On Fri, Jan 2, 2009 at 8:54 PM, Richard Elling
> wrote:
> > Scott Laird wrote:
> >>
> >> On Fri, Jan 2, 2009 at 4:52 PM, Akhilesh Mritunjai
> >> wrote:
> >>
> >>>
> >>> As for source, here you go :)
> >>>
> >>>
> >>> http://cvs.opensolaris.org/source/xref/onnv/o
Alastair Neil writes:
> I am attempting to create approx 10600 zfs file systems across two
> pools. The devices underlying the pools are mirrored iscsi volumes
> shared over a dedicated gigabit Ethernet with jumbo frames enabled
> (MTU 9000) from a Linux Openfiler 2.3 system. I have added a co
Marcelo Leal writes:
> Hello all,
> Somedays ago i was looking at the code and did see some variable that
> seems to make a correlation between the size of the data, and if the
> data is written to the slog or directly to the pool. But i did not
> find it anymore, and i think is way more com
18.4  403.3  1.2  2.9  1.1  0.2  2.5  0.6  15  24  c6t5d0
>19.3  402.7  1.2  2.9  1.1  0.3  2.5  0.6  15  25  c6t6d0
>18.8  406.1  1.2  2.9  1.0  0.2  2.4  0.6  15  25  c6t7d0
>
>
> Any experts here to say if that's just because bonnie
Ahmed Kamal writes:
> Hi,
>
> I have been doing some basic performance tests, and I am getting a big hit
> when I run UFS over a zvol, instead of directly using zfs. Any hints or
> explanations is very welcome. Here's the scenario. The machine has 30G RAM,
> and two IDE disks attached. The
milosz writes:
> iperf test coming out fine, actually...
>
> iperf -s -w 64k
>
> iperf -c -w 64k -t 900 -i 5
>
> [ ID] Interval Transfer Bandwidth
> [ 5] 0.0-899.9 sec  81.1 GBytes  774 Mbits/sec
>
> totally steady. i could probably implement some tweaks to improve it,
Tim writes:
> On Tue, Jan 13, 2009 at 6:26 AM, Brian Wilson wrote:
>
> >
> > Does creating ZFS pools on multiple partitions on the same physical drive
> > still run into the performance and other issues that putting pools in
> > slices
> > does?
> >
>
>
> Is zfs going to own the whol
Chookiex writes:
> Hi all,
>
> I have 2 questions about ZFS.
>
> 1. I have created a snapshot in my pool1/data1, and zfs send/recv it to
> pool2/data2. But I found the USED in zfs list is different:
> NAME USED AVAIL REFER MOUNTPOINT
> pool2/data2 160G 1.44T
ain level of
> performance, and what we've got with the ZIL on the pool is completely
> unacceptable.
>
> Thanks for any pointers you may have...
>
I think you found out from the replies that this NFS issue is not
related to ZFS nor a ZIL malfunction in any way.
http:/
Nicholas Lee writes:
> Another option to look at is:
> set zfs:zfs_nocacheflush=1
> http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
>
> Best option is to get a a fast ZIL log device.
>
>
> Depends on your pool as well. NFS+ZFS means zfs will wait for write
> comple
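Concretely, the two options above look something like this (the device name
is hypothetical; the /etc/system line is the documented form of the tunable
and is only safe when every device in the pool has non-volatile cache):

  # preferred: a fast, power-protected device as a separate intent log
  zpool add tank log c1t2d0
  # last resort; a crash or power loss can then corrupt client-visible state
  echo "set zfs:zfs_nocacheflush=1" >> /etc/system    # takes effect after reboot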
Eric D. Mudama writes:
> On Mon, Jan 19 at 23:14, Greg Mason wrote:
> >So, what we're looking for is a way to improve performance, without
> >disabling the ZIL, as it's my understanding that disabling the ZIL
> >isn't exactly a safe thing to do.
> >
> >We're looking for the best way to
Eric D. Mudama writes:
> On Tue, Jan 20 at 21:35, Eric D. Mudama wrote:
> > On Tue, Jan 20 at 9:04, Richard Elling wrote:
> >>
> >> Yes. And I think there are many more use cases which are not
> >> yet characterized. What we do know is that using an SSD for
> >> the separate ZIL log works
Hi Noel.
zpool iostat -v
for a working pool and for a problem pool would help to see
the type of pool and its capacity.
I assume the problem is not the source of the data.
To read a large number of small files typically requires lots
and lots of threads (say 100 per source disk).
Is da
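A crude way to get that kind of concurrency from the shell (the directory
layout is hypothetical; the point is simply to keep many readers in flight
per source disk):

  # one background reader per top-level subdirectory; scale the count up
  # until the source disks stay busy
  for d in /tank/src/*; do
      ( tar cf - "$d" > /dev/null ) &
  done
  wait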
p://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
If you do, then be prepared to unmount or reboot all clients of
the server in case of a crash in order to clear their
corrupted caches.
This is in no way a ZIL problem nor a ZFS problem.
http://blogs.sun.com/roch/entry/nfs_and_zfs
tester writes:
> Hello,
>
> Trying to understand the ZFS I/O scheduler; because of its async nature
> it is not very apparent. Can someone give a short explanation for each
> of these stack traces and for their frequency?
>
> this is the command
>
> dd if=/dev/zero of=/test/test1/tras
Stuart Anderson writes:
>
> On Jun 21, 2009, at 10:21 PM, Nicholas Lee wrote:
>
> >
> >
> > On Mon, Jun 22, 2009 at 4:24 PM, Stuart Anderson
> > > > wrote:
> >
> > However, it is a bit disconcerting to have to run with reduced data
> > protection for an entire week. While I am certai
zio_assess went away with SPA 3.0 :
6754011 SPA 3.0: lock breakup, i/o pipeline refactoring, device failure
handling
You now have :
zio_vdev_io_assess(zio_t *zio)
Yes it's one of the last stages of the I/O pipeline (see zio_impl.h).
-r
tester writes:
> Hi,
>
> What does zio
Bob Friesenhahn writes:
> On Wed, 29 Jul 2009, Jorgen Lundman wrote:
> >
> > For example, I know rsync and tar do not use fdsync (but dovecot does)
> > on
> > its close(), but does NFS make it fdsync anyway?
>
> NFS is required to do synchronous writes. This is what allows NFS
> cli
"C. Bergström" writes:
> James C. McPherson wrote:
> > An introduction to btrfs, from somebody who used to work on ZFS:
> >
> > http://www.osnews.com/story/21920/A_Short_History_of_btrfs
> >
> *very* interesting article.. Not sure why James didn't directly link to
> it, but courteous of
Henk Langeveld writes:
> Mario Goebbels wrote:
> >>> An introduction to btrfs, from somebody who used to work on ZFS:
> >>>
> >>> http://www.osnews.com/story/21920/A_Short_History_of_btrfs
> >> *very* interesting article.. Not sure why James didn't directly link to
> >> it, but courteous o
Tim Cook writes:
> On Tue, Aug 4, 2009 at 7:33 AM, Roch Bourbonnais
> wrote:
>
> >
> > On 4 Aug 09, at 13:42, Joseph L. Casale wrote:
> >
> > does anybody have some numbers on speed on sata vs 15k sas?
> >>>
> >>
> >>
roland writes:
> >SSDs with capacitor-backed write caches
> >seem to be fastest.
>
> how to distinguish them from SSDs without one?
> I never saw this explicitly mentioned in the specs.
They probably don't have one then (or they should fire their
entire marketing dept).
Capacitors allow
Scott Lawson writes:
> Also you may wish to look at the output of 'iostat -xnce 1' as well.
>
> You can post those to the list if you have a specific problem.
>
> You want to be looking for error counts increasing and specifically 'asvc_t'
> for the service times on the disks. A higher num
Do you have the zfs primarycache property on this release ?
If so, you could set it to 'metadata' or 'none'.
primarycache=all | none | metadata
Controls what is cached in the primary cache (ARC). If
this property is set to "all", then both user data and
metadat
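For example (the dataset name is hypothetical):

  zfs set primarycache=metadata tank/backup    # cache only metadata in the ARC
  zfs get primarycache tank/backup             # verify the setting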
"100% random writes produce around 200 IOPS with a 4-6 second pause
around every 10 seconds. "
This indicates that the bandwidth you're able to transfer
through the protocol is about 50% greater than the bandwidth
the pool can offer to ZFS. Since this is not sustainable, you
s
stuart anderson writes:
> > > > Question :
> > > >
> > > > Is there a way to change the volume blocksize, say
> > > > via 'zfs snapshot send/receive'?
> > > >
> > > > As I see things, this isn't possible as the target
> > > > volume (including property values) gets overwritten
>
I wonder if a taskq pool does not suffer from a similar
effect observed for the nfsd pool :
6467988 Minimize the working set of nfsd threads
Created threads round-robin out of the taskq loop, doing little
work, but wake up at least once every 5 minutes and so are never
reaped.
-r
Nils Goroll
Bob Friesenhahn writes:
> On Wed, 23 Sep 2009, Ray Clark wrote:
>
> > My understanding is that if I "zfs set checksum=" to
> > change the algorithm that this will change the checksum algorithm
> > for all FUTURE data blocks written, but does not in any way change
> > the checksum for prev
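That understanding is correct: only blocks written after the property change
pick up the new algorithm. A minimal sketch with a hypothetical dataset name:

  zfs set checksum=sha256 tank/data
  # blocks written from now on carry sha256 checksums; existing blocks keep
  # the checksum they were written with until they are rewritten (e.g. by
  # copying the files or by send/receive into a fresh dataset)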
Anton Rang writes:
> On May 31, 2006, at 8:56 AM, Roch Bourbonnais - Performance
> Engineering wrote:
>
> > I'm not taking a stance on this, but if I keep a controller
> > full of 128K I/Os and assuming they are targeting
> > contiguous physic
a comparison, a single disk's dd write performance is around 6MB/sec no
> cache, and 30MB/sec with write cache enabled.
>
> So the 40-50MB/sec result is kind of disappointing, with a **10** disk pool.
>
I don't think RAID-Z is your problem in the above, but if the
perf
I carelessly let it run until ... it made my system crash.
Is that the expected behaviour?
Not funny ;-)
Could be (based solely on the presence of
zio_write_allocate_gang_members; no deep analysis)
6411261 busy intent log runs out of space on small pools.
-r
veryone waiting on I/O for 10s of
seconds.
And while I hold the floor, I posted this entry last week
which could be interesting to try in general-purpose
(small file updates) NFS serving:
http://blogs.sun.com/roller/page/roch?entry=tuning_zf
For output ops, ZFS could set up a 10MB I/O transfer to disk
starting at sector X, or chunk that up into 128K I/Os while still
assigning the same range of disk blocks for the
operations. Yes there will be more control information going
around, a little more CPU consumed, but the disk w
drives
from the write_cache; and I guess the other bug explains why
ZFS is still not able to benefit from it.
-r
Jonathan Edwards writes:
>
> On Jun 15, 2006, at 06:23, Roch Bourbonnais - Performance Engineering
> wrote:
>
> > Naively I'd think a write_cache sho
Check here:
http://cvs.opensolaris.org/source/xref/on/usr/src/uts/common/fs/zfs/vdev_disk.c#157
-r
Phil Brown writes:
> Roch Bourbonnais - Performance Engineering wrote:
> > I'm puzzled by 2 things.
> >
> > Naively I'd think a write_cache should not help
Robert Milkowski writes:
> Hi.
>
>All filesystems have compression set to off.
>
>
> bash-3.00# zfs list -o compression|grep -i on
> bash-3.00#
>
> But still lzjb_compress() is used by ZFS - is it for metadata or what?
>
Yes, for metadata.
-r
What does vmstat look like ?
Also zpool iostat 1.
Do you have any disk-based swap ?
One best practice we probably will be coming out with is to
configure at least physmem of swap with ZFS (at least as of
this release).
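To check and adjust (the zvol name and size below are hypothetical; size the
volume to at least physmem):

  swap -l                               # current swap devices
  prtconf | grep Memory                 # physical memory size
  zfs create -V 16G tank/swap
  swap -a /dev/zvol/dsk/tank/swap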
The partly hung system could be this :
http://bugs.opensolaris.org/
This just published:
http://blogs.sun.com/roller/trackback/roch/Weblog/the_dynamics_of_zfs
-r
This just published:
http://blogs.sun.com/roller/page/roch?entry=the_dynamics_of_zfs
-r
15 minutes to do an fdsync is way outside the slowdown usually seen.
The footprint for 6413510 is that when a huge amount of
data is being written non-synchronously and an fsync comes in for the
same filesystem, then all the non-synchronous data is also forced out
synchronously. So is there
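A quick way to see how long each call actually takes on a live system,
assuming DTrace and the fdsync syscall probes are available on this build:

  dtrace -n '
  syscall::fdsync:entry  { self->ts = timestamp; }
  syscall::fdsync:return /self->ts/ {
          @["fdsync (ns)"] = quantize(timestamp - self->ts);
          self->ts = 0;
  }'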
Sean Meighan writes:
> The vi we were doing was a 2-line file. If you just vi a new file, add
> one line and exit, it would take 15 minutes in fdsync. On the recommendation
> of a workaround we set
>
> set zfs:zil_disable=1
>
> after the reboot the fdsync is now < 0.1 seconds. Now I have n
Martin, Marcia R writes:
> Did I miss something on this thread? Was the root cause of the
> 15-minute fsync <> actually determined?
>
I think so ;-)
-r
How about the 'deferred' option being on a leased basis, with a
deadline to revert to normal behavior; at most 24hrs at a
time. Console output every time the option is enabled.
-r
Torrey McMahon writes:
> Neil Perrin wrote:
> >
> > Of course we would need to stress the dangers of setting 'd
ing
> AB> for another.
> AB> (http://blogs.sun.com/roller/page/roch?entry=the_dynamics_of_zfs)
> AB> Is it possible to throttle only selected processes (e.g. nfsd) ?
>
NFS usually needs to sync a lot, so that has a throttling
effect on its own. I'm not sure this is
Bill Sommerfeld writes:
> On Thu, 2006-06-22 at 03:55, Roch wrote:
> > How about the 'deferred' option be on a leased basis with a
> > deadline to revert to normal behavior; at most 24hrs at a
> > time.
> why?
I'll trust your judgement over m
As I recall, the zfs sync is, unlike UFS, synchronous.
-r
Joe Little writes:
> On 6/22/06, Bill Moore <[EMAIL PROTECTED]> wrote:
> > Hey Joe. We're working on some ZFS changes in this area, and if you
> > could run an experiment for us, that would be great. Just do this:
> >
> > echo 'zil_disable/W1' | mdb -kw
> >
> > We're working on some f
So if you have a single thread doing open/write/close of 8K
files and get 1.25MB/sec, that tells me you have something
like a 6ms I/O latency, which looks reasonable also.
What does iostat -x svc_t (client side) say?
400ms seems high for the workload _and_ doesn't match my
formula, so I don't li
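The arithmetic behind the 6ms estimate, for reference:

  # 1.25 MB/s with one thread of 8 KB open/write/close files
  # => ~160 files/s => ~6.25 ms per file
  echo "scale=2; 1000 / (1.25 * 1024 / 8)" | bc    # ms per file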
About:
-I've read the threads about zfs and databases. Still I'm not 100%
convinced about read performance. Doesn't the fragmentation of the
large database files (because of the concept of COW) impact
read-performance?
I do need to get back to this thread. The way I am currently
loo
Chris Csanady writes:
> On 6/26/06, Neil Perrin <[EMAIL PROTECTED]> wrote:
> >
> >
> > Robert Milkowski wrote On 06/25/06 04:12,:
> > > Hello Neil,
> > >
> > > Saturday, June 24, 2006, 3:46:34 PM, you wrote:
> > >
> > > NP> Chris,
> > >
> > > NP> The data will be written twice on ZFS us
Philip Brown writes:
> Roch wrote:
> > And, if the load can accommodate a
> > reorder, to get top per-spindle read-streaming performance,
> > a cp(1) of the file should do wonders on the layout.
> >
>
> but there may not be filesystem space for doub
Mika Borner writes:
> >RAID5 is not a "nice" feature when it breaks.
>
> Let me correct myself... RAID5 is a "nice" feature for systems without
> ZFS...
>
> >Are huge write caches really an advantage? Or are you talking about
> huge
> >write caches with non-volatile storage?
>
> Yes,
d, which may answer some of your questions:
> >
> >http://www.opensolaris.org/jive/thread.jspa?messageID=40617
> >
> >sounds like your workload is very similar to mine. is all public
> >access via NFS?
> >
> >also, check out this blog entry from R
Patrick writes:
> Hi,
>
> > sounds like your workload is very similar to mine. is all public
> > access via NFS?
>
> Well it's not 'public directly', courier-imap/pop3/postfix/etc... but
> the maildirs are accessed directly by some programs for certain
> things.
>
> > for small file wo
grant beattie writes:
> On Tue, Jun 27, 2006 at 12:07:47PM +0200, Roch wrote:
>
> > > > for small file workloads, setting recordsize to a value lower than the
> > > > default (128k) may prove useful.
> > >
> > > When changing thi
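For example (the names and value are hypothetical; recordsize only affects
blocks written after the property is set, so set it before populating the
dataset):

  zfs set recordsize=8k tank/maildir
  zfs get recordsize tank/maildir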
Mika Borner writes:
> >given that zfs always does copy-on-write for any updates, it's not
> clear
> >why this would necessarily degrade performance..
>
> Writing should be no problem, as it is serialized... but when both
> database instances are reading a lot of different blocks at the sa
Hi there.
I have a telco customer who has a home-grown application which deals with
inter-carrier SMS. Basically, all this application does is read an SMS request
and write it to a queue.
However, that's lots and lots of very small reads and writes.
They've done performance testing on a
Darren J Moffat writes:
> Steven Sim wrote:
> > Casper;
> >
> > Does this mean it would be a good practice to say increase the amount of
> > memory and/or swap space we usually recommend if the customer intends to
> > use ZFS very heavily?
>
> ZFS doesn't necessarily use more memory (p
Hi Sean, You suffer from an extreme bout of
6429205 each zpool needs to monitor its throughput and throttle heavy writers
When this is fixed, your responsiveness will be better.
Note to Mark, Sean is more than willing to test any fix we
would have for this...
-r
Sorry to plug my own blog but have you had a look at these ?
http://blogs.sun.com/roller/page/roch?entry=when_to_and_not_to (raidz)
http://blogs.sun.com/roller/page/roch?entry=the_dynamics_of_zfs
Also, my thinking is that raid-z is probably more friendly
when the config contains
Robert Milkowski writes:
> Hello zfs-discuss,
>
> What would you rather propose for ZFS+ORACLE - zvols or just files
> from the performance standpoint?
>
>
> --
> Best regards,
> Robert mailto:[EMAIL PROTECTED]
> ht
I just ran:
[EMAIL PROTECTED](129): mkfile 5000M f3
Could not set length of f3: No space left on device
Which fails in anon_resvmem:
dtrace -n 'fbt::anon_resvmem:return/arg1==0/{@[stack(20)]=count()}'
tmpfs`tmp_resv+0x50
tmpfs`wrtmp+0x28c
eric kustarz writes:
>
> >ES> Second, you may be able to get more performance from the ZFS filesystem
> >ES> on the HW lun by tweaking the max pending # of reqeusts. One thing
> >ES> we've found is that ZFS currently has a hardcoded limit of how many
> >ES> outstanding requests to send to t
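On builds of that vintage the limit was exposed as the zfs_vdev_max_pending
tunable (the value and exact mechanism below are assumptions, verify against
your build before relying on them):

  echo "zfs_vdev_max_pending/W0t10" | mdb -kw             # live change to 10
  echo "set zfs:zfs_vdev_max_pending=10" >> /etc/system   # persists across reboot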
So while I'm feeling optimistic :-) we really ought to be
able to do this in two I/O operations. If we have, say, 500K
of data to write (including all of the metadata), we should
be able to allocate a contiguous 500K block on disk and
write that with a single operation. Th
mario heimel writes:
> Hi.
>
> I am very interested in ZFS compression on vs. off tests; maybe you can run
> another one with the 3510.
>
> I have seen a slight benefit with compression on in the following test
> (also with high system load):
> S10U2
> v880 8xcore 16 GB ram
> (only s
RM:
> I do not understand - why in some cases with smaller block writing
> block twice could be actually faster than doing it once every time?
> I definitely am missing something here...
In addition to what Neil said, I want to add that
when an application O_DSYNC write covers only parts o
Robert Milkowski writes:
> Hello Neil,
>
> Thursday, August 10, 2006, 7:02:58 PM, you wrote:
>
> NP> Robert Milkowski wrote:
> >> Hello Matthew,
> >>
> >> Thursday, August 10, 2006, 6:55:41 PM, you wrote:
> >>
> >> MA> On Thu, Aug 10, 2006 at 06:50:45PM +0200, Robert Milkowski wrote:
Darren:
> > With all of the talk about performance problems due to
> > ZFS doing a sync to force the drives to commit to data
> > being on disk, how much of a benefit is this - especially
> > for NFS?
I would not call those things problems, more like setting
proper expectations.
My unde
The test case was build 38, Solaris 11, a 2 GB file, initially created
with 1 MB SW, and a recsize of 8 KB, on a pool with two raid-z 5+1,
accessed with 24 threads of 8 KB RW, for 500,000 ops or 40 seconds,
whichever came first. The result at the pool level was 78% of the operations
Hi Bob,
Looks like : 6415647 Sequential writing is jumping
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6415647
-r
Roch Bourbonnais        Sun Microsystems, Icnc-Grenoble
Senior
Neil Perrin writes:
> Yes James is right this is normal behaviour. Unless the writes are
> synchronous (O_DSYNC) or explicitly flushed (fsync()) then they
> are batched up, written out and committed as a transaction
> every txg_time (5 seconds).
>
> Neil.
>
> James C. McPherson wrote:
Incidentally, this is part of how QFS gets its performance
for streaming I/O. We use an "allocate forward" policy,
allow very large allocation blocks, and separate the
metadata from data. This allows us to write (or read) data
in fairly large I/O requests, without unne
Bob Evans writes:
> I'm starting simple, there is no app.
>
> I have a 10GB file (called foo) on the internal FC drive, I did a zfs create
> raidz bar
> then ran "cp foo /bar/", so there is no cpu activity due to an app.
>
> As a test case, this took 7 min 30 sec to copy to the zfs
Bob Evans writes:
> One last tidbit, for what it is worth. Rather than watch top, I ran
> xcpustate. It seems that just as the writes pause, the cpu looks like
> it hits 100% (or very close), then it falls back down to its lower
> level.
>
> I'm still getting used to Solaris 10 as well,
ise, tracking releases is very important.
-r
Performance, Availability & Architecture Engineering
Roch Bourbonnais        Sun Microsystems, Icnc-Grenoble
Senior Performance An
Anantha N. Srirama writes:
> Therein lies my dilemma:
>
> - We know the I/O sub-system is capable of much higher I/O rates
> - Under the test setup I have SAS datasets which are lending
> themselves to compression. This should manifest itself as lots of read
> I/O resulting in much sma
Robert Milkowski writes:
> Hello Roch,
>
> Thursday, August 17, 2006, 11:08:37 AM, you wrote:
> R> My general principles are:
>
> R> If you can, to improve your 'Availability' metrics,
> R> let ZFS handle one level of redun
Hi Robert, maybe this RFE would help alleviate your
problem:
6417135 need generic way to dissociate disk or slice from it's
filesystem
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6417135
-r
Robert Milkowski writes:
> Hello zfs-discuss,
>
> I've got many y
Eric Schrock writes:
> Following up on a string of related proposals, here is another draft
> proposal for user-defined properties. As usual, all feedback and
> comments are welcome.
>
> The prototype is finished, and I would expect the code to be integrated
> sometime within the next mon
g got in the way and made compression single-threaded
per zpool.
-r
Performance, Availability & Architecture Engineering
Roch Bourbonnais        Sun Microsystems, Icnc-Grenobl
Dick Davies writes:
> On 22/08/06, Bill Moore <[EMAIL PROTECTED]> wrote:
> > On Mon, Aug 21, 2006 at 02:40:40PM -0700, Anton B. Rang wrote:
> > > Yes, ZFS uses this command very frequently. However, it only does this
> > > if the whole disk is under the control of ZFS, I believe; so a
> > > w
Michael Schuster writes:
> IHAC who is using a very similar test (cp -pr /zpool1/Studio11
> /zpool1/Studio11.copy) and is seeing behaviour similar to what we've
> seen described here; BUT since he's using a single-CPU box (SunBlade
> 1500) and has a single disk in his pool, every time the CPU