Cross-posted to perf-discuss.
You can't change the write behavior of the app without
changing the app itself. The code would need to be modified
to issue fsync() calls on the file(s), or open the files for
synchronous writes (with the O_SYNC or O_DSYNC flag).
fsflush will run, by default, once per second
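The two approaches described above can be sketched as follows (in Python rather than C, purely as an illustration; the flags map directly onto open(2), and the file path is a hypothetical placeholder):

```python
import os

path = "/tmp/sync_demo.dat"  # hypothetical test file
data = b"x" * 4096

# Approach 1: ordinary write, then force it to stable storage with fsync().
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, data)
os.fsync(fd)          # blocks until the data (and metadata) reach disk
os.close(fd)

# Approach 2: open the file for synchronous writes, so every write()
# is durable before it returns (O_DSYNC syncs data; O_SYNC also metadata).
fd = os.open(path, os.O_WRONLY | os.O_DSYNC)
os.write(fd, data)    # returns only once the data is on stable storage
os.close(fd)

print(os.path.getsize(path))  # → 4096
```

Without either of these, dirty pages sit in memory until fsflush (or memory pressure) pushes them out, which is why the app itself has to change.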
hello jim,
I also fiddled around with zfs_vdev_max_pending; maybe I made a mistake and did
not revert it correctly, or maybe both tunables play a role here and I
didn't recognize it. I will recheck tomorrow and report.
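For reference, the usual way to set (or verify you have reverted) such a tunable persistently is an /etc/system entry; a sketch per the ZFS Evil Tuning Guide (the value shown is only an example, not a recommendation, and the era-appropriate default should be confirmed for the release in question):

```
* /etc/system fragment (illustrative; takes effect at next reboot)
set zfs:zfs_vdev_max_pending = 35
```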
regards
roland
--
This message posted from opensolaris.org
I don't understand why disabling ZFS prefetch solved this
problem. The test case was a single threaded sequential write, followed
by a single threaded sequential read.
Anyone listening on ZFS have an explanation as to why disabling
prefetch solved Roland's very poor bandwidth problem?
My only th
Cross-posting to zfs-discuss.
By my math, here's what you're getting:
4.6MB/sec on writes to ZFS.
2.2MB/sec on reads from ZFS.
90MB/sec on read from block device.
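Figures like these are just bytes transferred divided by elapsed time; a quick sketch (the 1 GiB file size and 222 s timing are hypothetical placeholders, not Roland's actual numbers, chosen only to land near the 4.6MB/sec write figure above):

```python
def mb_per_sec(total_bytes: int, elapsed_sec: float) -> float:
    """Throughput in MB/s (1 MB = 2**20 bytes)."""
    return total_bytes / (2**20) / elapsed_sec

# Hypothetical example: writing a 1 GiB file in 222 seconds.
print(round(mb_per_sec(1 << 30, 222.0), 1))  # → 4.6
```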
What is c0t1d0 - I assume it's a hardware RAID LUN,
but how many disks, and what type of LUN?
What version of Solaris (cat /etc/re
Hello Ben,
>If you want to put this to the test, consider disabling prefetch and
>trying again. See
>http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
I should have read and followed the advice better - this was the essential hint.
thanks very much.
after issuing
echo zfs_prefetc
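For completeness, the tunable the Evil Tuning Guide describes is zfs_prefetch_disable; a hedged sketch of the persistent /etc/system form (the live mdb command is truncated above, so it is not reproduced here):

```
* /etc/system fragment to disable ZFS file-level prefetch
* (per the ZFS Evil Tuning Guide; takes effect after reboot)
set zfs:zfs_prefetch_disable = 1
```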
Hmm - I found this in dmesg:
Mar 30 16:49:20 s-zfs01 genunix: [ID 923486 kern.warning] WARNING: Page83 data
not standards compliant MegaRAID LD 1 RAID5 572G 516O
I don't have a clue what this means.
Monday, 30 March 2009, 17:00:54 CEST
Mar 30 16:48:06 s-zfs01 pcplusmp: [ID 805372 kern.
thanks so far!
so - here are some numbers:
I booted into Linux, and streaming writes (1M block size) are 6-8MB/s; streaming
READS are >100MB/s (tested with file size >> RAM size).
With Solaris, I'm getting similar values for WRITES, but READS are painfully
slow.
>1) Have you disabled atime o
Responding to myself...
m...@bruningsystems.com wrote:
Hi Jim,
Jim Mauro wrote:
mdb's memstat is cool in how it summarizes things, but it takes a very
long time to run on large systems. memstat is walking page lists, so
it should be quite accurate.
If you can live with the run time of ::memstat, it's currently your
best bet for memory accounting.
I like acctcom since it gives me a brief synopsis of process performance.
However, I can't figure out what "Blocks Read" means. A few quick tests show it
has nothing to do with blocks as reported by ls -ls. I copied old files, which
would not have been referenced for a long time, to a new filename.
Hi Jim,
Jim Mauro wrote:
mdb's memstat is cool in how it summarizes things, but it takes a very
long time to run on large systems. memstat is walking page lists, so
it should be quite accurate.
If you can live with the run time of ::memstat, it's currently your
best bet for memory accounting.
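For anyone following along, ::memstat can also be invoked against the live kernel from a shell; a standard mdb invocation on Solaris (requires appropriate privileges, and as noted above it can take a long time on large systems):

```
echo ::memstat | mdb -k
```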
I