JS writes:
> The big problem is that if you don't do your redundancy in the zpool,
> then the loss of a single device flatlines the system. This occurs in
> single device pools or stripes or concats. Sun support has said in
> support calls and Sunsolve docs that this is by design, but I've nev
Thomas Nau writes:
> Dear all.
> I've setup the following scenario:
>
> Galaxy 4200 running OpenSolaris build 59 as iSCSI target; remaining
> diskspace of the two internal drives with a total of 90GB is used as zpool
> for the two 32GB volumes "exported" via iSCSI
>
> The initiator is
With latest Nevada, setting zfs_arc_max in /etc/system is
sufficient. Playing with mdb on a live system is trickier
and is what caused the problem here.
-r
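For reference, a minimal /etc/system sketch (the 512MB cap is only an
illustrative value, not a recommendation):

  set zfs:zfs_arc_max = 0x20000000

A reboot is needed for the change to take effect; mdb alters the live
system immediately but, as noted above, is trickier.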
[EMAIL PROTECTED] writes:
> Jim Mauro wrote:
>
> > All righty...I set c_max to 512MB, c to 512MB, and p to 256MB...
> >
> > > arc::
Richard L. Hamilton writes:
> _FIOSATIME - why doesn't zfs support this (assuming I didn't just miss it)?
> Might be handy for backups.
>
Are these syscalls sufficient?
int utimes(const char *path, const struct timeval times[2]);
int futimesat(int fildes, const char *path, const str
See
Kernel Statistics Library Functions kstat(3KSTAT)
-r
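If a quick check from the shell is enough before writing code, the
kstat(1M) utility exposes the same counters as the libkstat routines;
a hedged example (the module/name chosen here are only an illustration):

  kstat -p zfs:0:arcstats

prints the ZFS ARC statistics in parseable form, while programmatic
access goes through kstat_open(), kstat_lookup() and kstat_read().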
Atul Vidwansa writes:
> Peter,
> How do I get those stats programmatically? Any clues?
> Regards,
> _Atul
>
Robert Milkowski writes:
> Hello Selim,
>
> Wednesday, March 28, 2007, 5:45:42 AM, you wrote:
>
> SD> talking of which,
> SD> what's the effort and consequences to increase the max allowed block
> SD> size in zfs to higher figures like 1M...
>
> Think what would happen then if you try to
On 30 Mar 07, at 08:36, Anton B. Rang wrote:
However, even with sequential writes, a large I/O size makes a huge
difference in throughput. Ask the QFS folks about data capture
applications. ;-)
I quantified the 'huge' as follows:
60MB/s and 5ms per seek means that for a FS that requ
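As a hedged back-of-the-envelope illustration of those two numbers (the
128K and 1MB I/O sizes are my assumption, not from the original post):

  128K I/O: 128K / 60MB/s ~ 2.1ms transfer + 5ms seek -> ~18 MB/s effective
  1MB  I/O: 1MB  / 60MB/s ~ 16.7ms transfer + 5ms seek -> ~46 MB/s effective

so larger I/Os amortize the seek cost and recover most of the raw bandwidth.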
On 30 Mar 07, at 20:32, Anton Rang wrote:
Perhaps you should read the QFS documentation and/or source. :-)
I probably should...
QFS, like
other write-forward and/or delayed-allocation file systems, does
not incur a
seek per I/O. For sequential writes in a typical data capture
applic
Total 4167561 16279
> Physical 4078747 15932
>
>
> On 3/23/07, Roch - PAE <[EMAIL PROTECTED]> wrote:
> >
> > With latest Nevada setting zfs_arc_max in /etc/system is
> > sufficient. Playing with mdb on a
On 5 Apr 07, at 08:28, Robert Milkowski wrote:
Hello Matthew,
Thursday, April 5, 2007, 1:08:25 AM, you wrote:
MA> Lori Alt wrote:
Can write-cache not be turned on manually as the user is sure
that it is
only ZFS that is using the entire disk?
Yes, it can be turned on. But I don't know
Now, given proper I/O concurrency (like recently improved NCQ in our
drivers) or SCSI CTQ,
I don't expect the write caches to provide much performance
gains, if any, over the situation
with write caches off.
Write caches can be extremely effective when dealing with drives
that do not
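For what it's worth, a hedged sketch of toggling a disk's write cache by
hand with format(1M) in expert mode (the device name is made up, menu
names are from memory):

  format -e c1t0d0
  format> cache
  cache> write_cache
  write_cache> display
  write_cache> enable

ZFS enables the cache itself when given a whole disk, so this mostly
matters for the slice/LUN case discussed above.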
On 4 Apr 07, at 10:01, Paul Boven wrote:
Hi everyone,
Swap would probably have to go on a zvol - would that be best
placed on
the n-way mirror, or on the raidz?
From the book of Richard Elling,
Shouldn't matter. The 'existence' of a swap device is sometimes
required.
If the devic
Annie Li writes:
> Can anyone help explain what does "out-of-order issue" mean in the
> following segment?
>
> ZFS has a pipelined I/O engine, similar in concept to CPU pipelines. The
> pipeline operates on I/O dependency graphs and provides scoreboarding,
> priority, deadline scheduling, o
Gino writes:
> > 6322646 ZFS should gracefully handle all devices
> > failing (when writing)
> >
> > Which is being worked on. Using a redundant
> > configuration prevents this
> > from happening.
>
> What do you mean by "redundant"? All our servers have 2 or 4 HBAs, 2 or 4
> fc swi
I lost track if this rfe was posted yet:
5097228 provide 'zpool split' to create new pool...
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=5097228
-r
Mark J Musante writes:
> On Wed, 11 Apr 2007, Constantin Gonzalez Schmitz wrote:
>
> > So, instead of detaching,
Dan Mick writes:
> Robert Milkowski wrote:
> > Hello Dan,
> >
> > Tuesday, April 17, 2007, 9:44:45 PM, you wrote:
> >
> How can this work? With compressed data, it's hard to predict its
> final size before compression.
> >>> Because you are NOT compressing the file only compr
Richard L. Hamilton writes:
> Well, no; his quote did say "software or hardware". The theory is apparently
> that ZFS can do better at detecting (and with redundancy, correcting) errors
> if it's dealing with raw hardware, or as nearly so as possible. Most SANs
> _can_ hand out raw LUNs as we
tester writes:
> Hi,
>
> quoting from zfs docs
>
> "The SPA allocates blocks in a round-robin fashion from the top-level
> vdevs. A storage pool with multiple top-level vdevs allows the SPA to
> use dynamic striping to increase disk bandwidth. Since a new block may
> be allocated from an
Tony Galway writes:
> I have a few questions regarding ZFS, and would appreciate if someone
> could enlighten me as I work my way through.
>
> First write cache.
>
We often use write cache to designate the cache present at
the disk level. Let's call this "disk write cache".
Most FS will c
Tony Galway writes:
> Anton & Roch,
>
> Thank you for helping me understand this. I didn't want
to make too many assumptions that were unfounded and then
incorrectly relay that information back to clients.
>
> So if I might just repeat your statements, so
Tony Galway writes:
> Let me elaborate slightly on the reason I ask these questions.
>
> I am performing some simple benchmarking, and during this a file is
> created by sequentially writing 64k blocks until the 100Gb file is
> created. I am seeing, and this is the exact same as VxFS, large p
Albert Chin writes:
> On Sat, Apr 21, 2007 at 09:05:01AM +0200, Selim Daoud wrote:
> > isn't there another flag in /etc/system to force zfs not to send flush
> > requests to NVRAM?
>
> I think it's zfs_nocacheflush=1, according to Matthew Ahrens in
> http://blogs.digitar.com/jjww/?itemid=44.
Leon Koll writes:
> Welcome to the club, Andy...
>
> I tried several times to attract the attention of the community to the
> dramatic performance degradation (about 3 times) of NFS/ZFS vs. NFS/UFS
> combination - without any result: http://www.opensolaris.org/jive/thread.jspa?messa
Robert Milkowski writes:
> Hello Brian,
>
> Thursday, April 26, 2007, 3:55:16 AM, you wrote:
>
> BG> If I recall, the dump partition needed to be at least as large as RAM.
>
> BG> In Solaris 8(?) this changed, in that crashdump streams were
> BG> compressed as they were written out to
You might set zil_disable to 1 (_then_ mount the fs to be
shared). But you're still exposed to OS crashes; those would
still corrupt your nfs clients.
-r
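A minimal sketch of the zil_disable knob mentioned above (the dataset name
is made up; this trades client data safety for speed, so it is for
experiments only):

  echo zil_disable/W0t1 | mdb -kw                     # live system
  zfs unmount tank/export && zfs mount tank/export    # remount so the fs picks it up

or, persistently, add 'set zfs:zil_disable = 1' to /etc/system and reboot.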
cedric briner writes:
> Hello,
>
> I wonder if the subject of this email is not self-explanatory?
>
>
> okay let's say that it is no
cedric briner writes:
> > You might set zil_disable to 1 (_then_ mount the fs to be
> > shared). But you're still exposed to OS crashes; those would
> > still corrupt your nfs clients.
> >
> > -r
>
> hello Roch,
>
> I've few que
Robert Milkowski writes:
> Hello Wee,
>
> Thursday, April 26, 2007, 4:21:00 PM, you wrote:
>
> WYT> On 4/26/07, cedric briner <[EMAIL PROTECTED]> wrote:
> >> okay let's say that it is not. :)
> >> Imagine that I setup a box:
> >> - with Solaris
> >> - with many HDs (directly attached)
Wee Yeh Tan writes:
> Robert,
>
> On 4/27/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
> > Hello Wee,
> >
> > Thursday, April 26, 2007, 4:21:00 PM, you wrote:
> >
> > WYT> On 4/26/07, cedric briner <[EMAIL PROTECTED]> wrote:
> > >> okay let's say that it is not. :)
> > >> Imagine that
Chad Mynhier writes:
> On 4/27/07, Erblichs <[EMAIL PROTECTED]> wrote:
> > Ming Zhang wrote:
> > >
> > > Hi All
> > >
> > > I wonder if anyone has an idea about the performance loss caused by COW
> > > in ZFS? If you have to read old data out before writing it to some other
> > > place, it in
ZFS Storage Pools Recommendations
> -http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#ZFS_Storage_Pools_Recommendations
> where I read :
> - For production systems, consider using whole disks for storage pools
> rather than slices for the following reasons:
>+
with recent bits ZFS compression is now handled concurrently with
many CPUs working on different records.
So this load will burn more CPUs and achieve its results
(compression) faster.
So the observed pauses should be consistent with that of a load
generating high system time.
The assump
Ian Collins writes:
> Roch Bourbonnais wrote:
> >
> > with recent bits ZFS compression is now handled concurrently with many
> > CPUs working on different records.
> > So this load will burn more CPUs and achieve its results
> > (compression) faste
Manoj Joseph writes:
> Hi,
>
> I was wondering about the ARC and its interaction with the VM
> pagecache... When a file on a ZFS filesystem is mmaped, does the ARC
> cache get mapped to the process' virtual memory? Or is there another copy?
>
My understanding is,
The ARC does not get m
This looks like another instance of
6429205 each zpool needs to monitor its throughput and throttle
heavy writers|
or at least it is a contributing factor.
Note that your /etc/system is misspelled (maybe just in the e-mail)
Didn't you get a console message ?
-r
On 24 May 07, at 09:50, Amer
Hi Shweta,
The first thing is to look for all kernel functions returning that errno (25
I think) during your test.
dtrace -n 'fbt:::return/arg1 == 25/{@[probefunc] = count()}'
More verbose but also useful :
dtrace -n 'fbt:::return/arg1 == 25/{@[stack(20)]=count()}'
It's a cat
On 22 May 07, at 01:11, Nicolas Williams wrote:
On Mon, May 21, 2007 at 06:09:46PM -0500, Albert Chin wrote:
But still, how is tar/SSH any more multi-threaded than tar/NFS?
It's not that it is, but that NFS sync semantics and ZFS sync
semantics
conspire against single-threaded performanc
On 22 May 07, at 01:21, Albert Chin wrote:
On Mon, May 21, 2007 at 06:11:36PM -0500, Nicolas Williams wrote:
On Mon, May 21, 2007 at 06:09:46PM -0500, Albert Chin wrote:
But still, how is tar/SSH any more multi-threaded than tar/NFS?
It's not that it is, but that NFS sync semantics and ZFS
On 22 May 07, at 03:18, Frank Cusack wrote:
On May 21, 2007 6:30:42 PM -0500 Nicolas Williams
<[EMAIL PROTECTED]> wrote:
On Mon, May 21, 2007 at 06:21:40PM -0500, Albert Chin wrote:
On Mon, May 21, 2007 at 06:11:36PM -0500, Nicolas Williams wrote:
> On Mon, May 21, 2007 at 06:09:46PM -0500,
On 22 May 07, at 16:23, Dick Davies wrote:
Take off every ZIL!
http://number9.hellooperator.net/articles/2007/02/12/zil-
communication
It causes not only client corruption but also database corruption and breaks
just about anything that carefully manages data.
Yes the zpool will survive, but it may be t
On 29 May 07, at 22:59, [EMAIL PROTECTED] wrote:
When sequential I/O is done to the disk directly there is no
performance
degradation at all.
All filesystems impose some overhead compared to the rate of raw disk
I/O. It's going to be hard to store data on a disk unless some
kind of
fil
Torrey McMahon writes:
> Toby Thain wrote:
> >
> > On 25-May-07, at 1:22 AM, Torrey McMahon wrote:
> >
> >> Toby Thain wrote:
> >>>
> >>> On 22-May-07, at 11:01 AM, Louwtjie Burger wrote:
> >>>
> On 5/22/07, Pål Baltzersen <[EMAIL PROTECTED]> wrote:
> > What if your HW-RAID-cont
Hi Seigfried, just making sure you had seen this:
http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
You have very fast NFS to non-ZFS runs.
That seems only possible if the hosting OS did not sync the
data when NFS required it or the drive in question had some
fast write caches. If
Joe S writes:
> After researching this further, I found that there are some known
> performance issues with NFS + ZFS. I tried transferring files via SMB, and
> got write speeds on average of 25MB/s.
>
> So I will have my UNIX systems use SMB to write files to my Solaris server.
> This seem
On 20 Jun 07, at 04:59, Ian Collins wrote:
I'm not sure why, but when I was testing various configurations with
bonnie++, 3 pairs of mirrors did give about 3x the random read
performance of a 6 disk raidz, but with 4 pairs, the random read
performance dropped by 50%:
3x2
Blockread: 22
Sorry about that; looks like you've hit this:
6546683 marvell88sx driver misses wakeup for mv_empty_cv
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6546683
Fixed in snv_64.
-r
Thomas Garner writes:
> > We have seen this behavior, but it appears to be entirely re
Dedicate some CPU to the task. Create a psrset and bind the ftp
daemon to it.
If that works, then also bind a few of the read threads, as many as fit
within the requirements.
-r
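A hedged sketch of that approach (the CPU ids, pset id and pgrep pattern
are all made up):

  psrset -c 2 3              # create a processor set from CPUs 2 and 3; prints its id
  psrset -b 1 `pgrep ftpd`   # bind the ftp daemon's processes to pset 1
  psrset -q `pgrep ftpd`     # verify the bindings

Processes bound to the set get those CPUs to themselves, which is the
"dedicate some CPU" part above.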
On 25 Jun 07, at 15:00, Paul van der Zwan wrote:
On 25 Jun 2007, at 14:37, [EMAIL PROTECTED] wrote:
On 25 Ju
Regarding the bold statement
"There is no NFS over ZFS issue":
what I mean here is that, if you _do_ encounter a
performance pathology not linked to the NVRAM storage/cache
flush issue, then you _should_ complain, or better, get someone
to do an analysis of the situation.
One
Possibly the storage is flushing the write caches when it
should not. Until we get a fix, cache flushing could be
disabled in the storage (ask the vendor for the magic
incantation). If that's not forthcoming and if all pools are
attached to NVRAM-protected devices, then these /etc/
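As mentioned earlier in the thread, the host-side tunable is
zfs_nocacheflush; a minimal /etc/system sketch, only safe when every pool
sits on NVRAM-protected storage:

  set zfs:zfs_nocacheflush = 1

followed by a reboot. Getting the array itself to ignore the cache-flush
requests remains the preferable route when the vendor provides it.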
Brandorr wrote:
> Is ZFS efficient at handling huge populations of tiny-to-small files -
> for example, 20 million TIFF images in a collection, each between 5
> and 500k in size?
>
> I am asking because I could have sworn that I read somewhere that it
> isn't, but I can't find the re
Łukasz K writes:
> > Is ZFS efficient at handling huge populations of tiny-to-small files -
> > for example, 20 million TIFF images in a collection, each between 5
> > and 500k in size?
> >
> > I am asking because I could have sworn that I read somewhere that it
> > isn't, but I can't find t
Łukasz K writes:
> > Is ZFS efficient at handling huge populations of tiny-to-small files -
> > for example, 20 million TIFF images in a collection, each between 5
> > and 500k in size?
> >
> > I am asking because I could have sworn that I read somewhere that it
> > isn't, but I can't find t
Matty writes:
> Are there any plans to support record sizes larger than 128k? We use
> ZFS file systems for disk staging on our backup servers (compression
> is a nice feature here), and we typically configure the disk staging
> process to read and write large blocks (typically 1MB or so). This
In general, tuning should not be done, and best practices
should be followed.
So get very much acquainted with this first :
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Then if you must, this could soothe or sting :
http://www.solarisinternals.com/wiki
Pawel Jakub Dawidek writes:
> On Mon, Sep 17, 2007 at 03:40:05PM +0200, Roch - PAE wrote:
> >
> > Tuning should not be done in general and Best practices
> > should be followed.
> >
> > So get very much acquainted with this first :
> >
>
Simple answer : yes.
-r
Robert Milkowski writes:
> Hello zfs-discuss,
>
> I wonder if ZFS will be able to take any advantage of Niagara's
> built-in crypto?
>
>
> --
> Best regards,
> Robert Milkowski  mailto:[EMAIL PROTECTED]
>
[EMAIL PROTECTED] writes:
> Jim Mauro wrote:
> >
> > Hey Max - Check out the on-disk specification document at
> > http://opensolaris.org/os/community/zfs/docs/.
> >
> > Page 32 illustration shows the rootbp pointing to a dnode_phys_t
> > object (the first member of a objset_phys_t data str
[EMAIL PROTECTED] writes:
> Roch - PAE wrote:
> > [EMAIL PROTECTED] writes:
> > > Jim Mauro wrote:
> > > >
> > > > Hey Max - Check out the on-disk specification document at
> > > > http://opensolaris.org/os/community/zfs/docs/.
>
Here is a different twist on your interesting scheme. First
start with writing 3 blocks and parity in a full stripe.
Disk0 Disk1 Disk2 Disk3
D0 D1 D2 P0,1,2
Next, the application modifies D0 -> D0' and also writes other
data D3, D4. Now you have
D
Claus Guttesen writes:
> > > I have many small - mostly jpg - files where the original file is
> > > approx. 1 MB and the thumbnail generated is approx. 4 KB. The files
> > > are currently on vxfs. I have copied all files from one partition onto
> > > a zfs-ditto. The vxfs-partition occupies 4
Claus Guttesen writes:
> > So the 1 MB files are stored as ~8 x 128K recordsize.
> >
> > Because of
> > 5003563 use smaller "tail block" for last block of object
> >
> > The last block of your file is partially used. It will depend
> > on your filesize distribution but without that in
Pawel Jakub Dawidek writes:
> I'm CCing zfs-discuss@opensolaris.org, as this doesn't look like
> FreeBSD-specific problem.
>
> It looks there is a problem with block allocation(?) when we are near
> quota limit. tank/foo dataset has quota set to 10m:
>
> Without quota:
>
> FreeBSD
Hi Jason, This should have helped.
6542676 ARC needs to track meta-data memory overhead
Some of the lines from arc.c:
  1551        if (arc_meta_used >= arc_meta_limit) {
  1552                /*
  1553                 * We are exceeding our meta-data cache l
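A hedged way to check whether the metadata limit is in play on a running
system (the counter names assume a build recent enough to export them):

  kstat -p zfs:0:arcstats | grep meta

should show arc_meta_used, arc_meta_limit and arc_meta_max; on such builds
the ::arc dcmd under mdb -k prints the same figures.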
Vincent Fox writes:
> I don't understand. How do you
>
> "setup one LUN that has all of the NVRAM on the array dedicated to it"
>
> I'm pretty familiar with 3510 and 3310. Forgive me for being a bit
> thick here, but can you be more specific for the n00b?
>
> Do you mean from firmware
Neelakanth Nadgir writes:
> io:::start probe does not seem to get zfs filenames in
> args[2]->fi_pathname. Any ideas how to get this info?
> -neel
>
Who says an I/O is doing work for a single pathname/vnode
or for a single process? There is no longer that one-to-one
correspondence. Not in
The theory I am going by is that 10 seconds worth of your synchronous
writes is sufficient
for the slog. That breaks down if the main pool is the bottleneck.
-r
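A hedged sizing illustration (the write rate is an assumed figure, not a
measurement from this thread): if synchronous writes arrive at roughly
100 MB/s, then 10 s x 100 MB/s = about 1 GB of slog should suffice; scale
linearly with the synchronous write rate you actually observe.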
Le 26 sept. 07 à 20:10, Torrey McMahon a écrit :
> Albert Chin wrote:
>> On Tue, Sep 25, 2007 at 06:01:00PM -0700, Vincent Fox wrot
can be quickly
recycled by ZFS for subsequent operations. It means the ZFS
memory footprint is more likely to contain useful ZFS
metadata and not cached data blocks we know are not likely to
be used again anytime soon.
We would also operate better in mixed DIO/non-DIO workloads.
See also:
Matty writes:
> On 10/3/07, Roch - PAE <[EMAIL PROTECTED]> wrote:
> > Rayson Ho writes:
> >
> > > 1) Modern DBMSs cache database pages in their own buffer pool because
> > > it is less expensive than to access data from the OS. (IIRC, MySQL's
Jim Mauro writes:
>
> > Where does the win come from with "directI/O"? Is it 1), 2), or some
> > combination? If its a combination, what's the percentage of each
> > towards the win?
> >
> That will vary based on workload (I know, you already knew that ... :^).
> Decomposing the pe
eric kustarz writes:
> >
> > Anyhow, in the case of DBs, ARC indeed becomes a vestigial organ. I'm
> > surprised that this is being met with skepticism considering that
> > Oracle highly recommends direct IO be used, and, IIRC, Oracle
> > performance was the main motivation to adding DIO to
Nicolas Williams writes:
> On Thu, Oct 04, 2007 at 03:49:12PM +0200, Roch - PAE wrote:
> > ...memory utilisation... OK so we should implement the 'lost cause' rfe.
> >
> > In all cases, ZFS must not steal pages from other memory consumers :
> >
>
Nicolas Williams writes:
> On Wed, Oct 03, 2007 at 04:31:01PM +0200, Roch - PAE wrote:
> > > It does, which leads to the core problem. Why do we have to store the
> > > exact same data twice in memory (i.e., once in the ARC, and once in
> > > the shared m
Le 21 oct. 07 à 02:40, Vincent Fox a écrit :
> We had a Sun Engineer on-site recently who said this:
>
> We should set our array controllers to sequential I/O *even* if we
> are doing random I/O if we are using ZFS.
> This is because the Arc cache is already grouping requests up
> sequentiall
>
> This should work. It shouldn't even lose the in-flight transactions.
> ZFS reverts to using the main pool if a slog write fails or the
> slog fills up.
So, the only way to lose transactions would be a crash or power loss,
leaving outstanding transactions in the log, followed by th
I would suspect the checksum part of this (I do believe it's being
actively worked on) :
6533726 single-threaded checksum & raidz2 parity calculations limit
write bandwidth on thumper
-r
Robert Milkowski writes:
> Hi,
>
> snv_74, x4500, 48x 500GB, 16GB RAM, 2x dual core
>
> # zp
Original Message
Subject: [zfs-discuss] MySQL benchmark
Date: Tue, 30 Oct 2007 00:32:43 +
From: Robert Milkowski <[EMAIL PROTECTED]>
Reply-To: Robert Milkowski <[EMAIL PROTECTED]>
Organization: CI TASK http://www.task.gda.pl
To:
Was that with compression enabled ?
Got "zpool status" output ?
-r
Louwtjie Burger writes:
> Hi
>
> What is the impact of not aligning the DB blocksize (16K) with ZFS,
> especially when it comes to random reads on single HW RAID LUN.
>
> How would one go about measuring the impact (if any) on the workload?
>
The DB will have a bigger in-memory footprint
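A hedged example of the usual mitigation (the pool/dataset name is made
up): give the dataset a recordsize matching the DB block size before the
data is loaded,

  zfs set recordsize=16k tank/db

so that a random 16K read or write touches one ZFS record rather than part
of a 128K one.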
Louwtjie Burger writes:
> Hi
>
> After a clean database load a database would (should?) look like this,
> if a random stab at the data is taken...
>
> [8KB-m][8KB-n][8KB-o][8KB-p]...
>
> The data should be fairly (100%) sequential in layout ... after some
> days though that same spot (
Anton B. Rang writes:
> > When you have a striped storage device under a
> > file system, then the database or file system's view
> > of contiguous data is not contiguous on the media.
>
> Right. That's a good reason to use fairly large stripes. (The
> primary limiting factor for stripe s
Neil Perrin writes:
>
>
> Joe Little wrote:
> > On Nov 16, 2007 9:13 PM, Neil Perrin <[EMAIL PROTECTED]> wrote:
> >> Joe,
> >>
> >> I don't think adding a slog helped in this case. In fact I
> >> believe it made performance worse. Previously the ZIL would be
> >> spread out over all devi
Moore, Joe writes:
> Louwtjie Burger wrote:
> > Richard Elling wrote:
> > >
> > > >- COW probably makes that conflict worse
> > > >
> > > >
> > >
> > > This needs to be proven with a reproducible, real-world
> > workload before it
> > > makes sense to try to solve it. After all, if
Xcalls are sometimes the signature of a problem; in
themselves they should be cheap. Below, one sees that the sys
time is rather small, so pending further analysis I'm
inclined to think they are not a problem here. We see
that all your CPUs are making what appears to be progress
Dmitry Degrave writes:
> In pre-ZFS era, we had observable parameters like scan rate and
> anonymous page-in/-out counters to discover situations when a system
> experiences a lack of physical memory. With ZFS, it's difficult to use
> mentioned parameters to figure out situations like that. Ha
No need to tune recordsize when the filesizes are small. Each file is
stored as a single record.
-r
On 29 Nov 07, at 08:20, Kam Lane wrote:
> I'm getting ready to test a thumper (500gig drives/ 16GB) as a
> backup store for small (avg 2kb) encrypted text files. I'm
> considering a zpool
Dickon Hood writes:
> On Fri, Dec 07, 2007 at 13:14:56 +, I wrote:
> : On Fri, Dec 07, 2007 at 12:58:17 +, Darren J Moffat wrote:
> : : Dickon Hood wrote:
> : : >On Fri, Dec 07, 2007 at 12:38:11 +, Darren J Moffat wrote:
> : : >: Dickon Hood wrote:
>
> : : >: >We're seeing the w
dd uses a default block size of 512B. Does this map to your
expected usage? When I quickly tested the CPU cost of small
reads from cache, I did see that ZFS was more costly than UFS
up to a crossover between 8K and 16K. We might need a more
comprehensive study of that (data in/out of cache, di
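A hedged illustration of making dd's I/O size explicit (the path and count
are made up):

  dd if=/tank/fs/bigfile of=/dev/null bs=128k count=8192

reads 1 GB in 128K chunks instead of the default 512-byte reads, which
changes the per-call CPU cost being measured here.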
Frank Penczek writes:
> Hi,
>
> On Dec 17, 2007 10:37 AM, Roch - PAE <[EMAIL PROTECTED]> wrote:
> >
> >
> > dd uses a default block size of 512B. Does this map to your
> > expected usage ? When I quickly tested the CPU cost of small
> > read f
Frank Penczek writes:
> Hi,
>
> On Dec 17, 2007 4:18 PM, Roch - PAE <[EMAIL PROTECTED]> wrote:
> > >
> > > The pool holds home directories so small sequential writes to one
> > > large file present one of a few interesting use case
Why do you want greater than 128K records?
Do check out:
http://blogs.sun.com/roch/entry/128k_suffice
-r
Manoj Nayak writes:
> Hi All,
>
> Is it not poosible to increase zfs record size beyond 128k.I am using
> Solaris 10 Update 4.
>
> I get the following error
Manoj Nayak writes:
> Roch - PAE wrote:
> > Why do you want greater than 128K records.
> >
> A single-parity RAID-Z pool on thumper is created & it consists of four
> disks. Solaris 10 update 4 runs on thumper. Then a zfs filesystem is created in
> the pool. 1
Manoj Nayak writes:
> Hi All,
>
> If any dtrace script is available to figure out the vdev_cache (or
> software track buffer) reads in kiloBytes ?
>
> The document says the default size of the read is 128k; however, the
> vdev_cache source code implementation says the default size is 64k
Manoj Nayak writes:
> Hi All.
>
> The ZFS documentation says ZFS schedules its I/O in such a way that it manages to
> saturate a single disk's bandwidth using enough concurrent 128K I/Os.
> The number of concurrent I/Os is decided by vq_max_pending. The default value
> for vq_max_pending is 35.
>
> We
Jonathan Loran writes:
>
> Is it true that Solaris 10 u4 does not have any of the nice ZIL controls
> that exist in the various recent Open Solaris flavors? I would like to
> move my ZIL to solid state storage, but I fear I can't do it until I
> have another update. Heck, I would be hap
Andrew Robb writes:
> The big problem that I have with non-directio is that buffering delays
> program execution. When reading/writing files that are many times
> larger than RAM without directio, it is very apparent that system
> response drops through the floor- it can take several minutes f
Priming the cache for ZFS should work, at least right after boot:
when freemem is large, any read block will make it into the
cache. Post-boot, when memory is already primed with something else
(what?), it gets more difficult for both UFS and ZFS to
guess what to keep in their caches.
Did you try priming ZFS after boot?
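A hedged example of priming after boot (the path is made up): just read
the working set once, e.g.

  cat /tank/home/project/datafile > /dev/null

for each hot file (or dd it to /dev/null); with plenty of freemem those
blocks should then sit in the ARC.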
On 14 Feb 08, at 02:22, Marion Hakanson wrote:
> [EMAIL PROTECTED] said:
>> It's not that old. It's a Supermicro system with a 3ware 9650SE-8LP.
>> Open-E iSCSI-R3 DOM module. The system is plenty fast. I can pretty
>> handily pull 120MB/sec from it, and write at over 100MB/sec. It
>> f
On 15 Feb 08, at 03:34, Bob Friesenhahn wrote:
> On Thu, 14 Feb 2008, Tim wrote:
>>
>> If you're going for best single file write performance, why are you
>> doing
>> mirrors of the LUNs? Perhaps I'm misunderstanding why you went
>> from one
>> giant raid-0 to what is essentially a raid-1
On 10 Feb 08, at 12:51, Robert Milkowski wrote:
> Hello Nathan,
>
> Thursday, February 7, 2008, 6:54:39 AM, you wrote:
>
> NK> For kicks, I disabled the ZIL: zil_disable/W0t1, and that made
> not a
> NK> pinch of difference. :)
>
> Have you exported and then imported the pool to get zil_disable
On 15 Feb 08, at 11:38, Philip Beevers wrote:
> Hi everyone,
>
> This is my first post to zfs-discuss, so be gentle with me :-)
>
> I've been doing some testing with ZFS - in particular, in
> checkpointing
> the large, proprietary in-memory database which is a key part of the
> application I
On 15 Feb 08, at 18:24, Bob Friesenhahn wrote:
> On Fri, 15 Feb 2008, Roch Bourbonnais wrote:
>>>
>>> As mentioned before, the write rate peaked at 200MB/second using
>>> RAID-0 across 12 disks exported as one big LUN.
>>
>> What was the interlace