I have the same issue on two Dell 2850s with the same controller, so I think it
is safe to assume it is a "feature" of the RAID controller or its driver.
--
Robert Milkowski
http://milek.blogspot.com
Another thing: if I run 4 dd processes in parallel (but from /dev/rdsk, to avoid
caching), I see exactly the same issue, since many I/Os are then issued in
parallel. That is expected, but it is probably worth pointing out.
A single dd works fine.
--
Robert Milkowski
http://milek.blogspot.com
Hi,
I have the same issue I think.
If I do 'dd if=/dev/dsk/c0t0d0s0 of=/dev/null bs=128k' I get:
# iostat -xnz 1
[...]
                    extended device statistics
    r/s    w/s    kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
 1676.1    0.0 93860.4    0.0  0.0  0.9    0.0    0.5   1  91 c0t0d0
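For reference, that sample works out to roughly 93860.4 KB/s ÷ 1024 ≈ 91.7 MB/s of reads at about 1676 reads/s and an average service time of 0.5 ms, so the raw block device itself is delivering healthy throughput.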
Mhh, I received the last answer via mail and did a reply via mail, but it seems
that it did not get to this list.
So I repost via web:
--
> are you still having an issue with this?
Yes, the issue still exists.
Even worse,
On Wed, Jun 10, 2009 at 01:17:39AM -0700, roland wrote:
> and sun has closed the ticket without leaving a comment why this
> is not an issue. (11-Closed:Not a Defect (Closed))
There was a comment placed in a Sun-private section of the bug: sorry
about this, it's against the policy. As the rea
And Sun has closed the ticket without leaving a comment on why this is not an
issue. (11-Closed: Not a Defect (Closed))
This IS an issue for me. Is this the way to handle users who spend hours of
their spare time reporting it and trying to dig into it?
Sorry, but I never saw issues handled l
I have created bug ticket no. 6833814
(http://bugs.opensolaris.org/view_bug.do?bug_id=6833814) for this issue.
Maybe this one is related:
http://opensolaris.org/jive/thread.jspa?threadID=100675&tstart=0
Yes, after setting
echo zfs_vdev_max_pending/W0t1 | mdb -kw
the problem also went away.
So is my SCSI driver flaky?
I'm using an LSI controller and this driver:
bash-3.2# modinfo | grep -i lsi
 28 fbbf74e8   1ec8   -   1  scsi_vhci_f_asym_lsi (f_asym_lsi)
 53 f79a4000   8688 216
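For reference, a minimal sketch of how this tunable can be inspected and reverted with mdb (the read-back and restore commands below are standard mdb usage rather than quotes from the thread; 0t35 is only an example, so restore whatever value the first command reports on your build):

echo zfs_vdev_max_pending/D | mdb -k        (print the current per-vdev queue depth in decimal)
echo zfs_vdev_max_pending/W0t1 | mdb -kw    (throttle to 1 outstanding I/O per vdev, as in the test above)
echo zfs_vdev_max_pending/W0t35 | mdb -kw   (restore the previously reported value, e.g. 35)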
Thanks for the pointer.
I also think that this is controller related, as I'm unable to reproduce the
behaviour on another box.
Unfortunately I cannot verify this before next week.
Thank you Roland. I will try and get an nv110 build in-house
and reproduce this. Your dd test after reboot is a single threaded
sequential read, so I still don't get how disabling prefetch yields
a 15X bandwidth increase.
I appreciate the update.
Thanks,
/jim
Sorry, but I cannot test 2008.11 for now.
I'm running snv_110; is that the equivalent of 110b?
The problem also existed with 109 and with NexentaOS.
Anyway, disabling file-level prefetching made the problem go away, so I don't
see what 2008.11 has to do with this.
Regards,
roland
--
Hello devzero,
It would be nice to see whether that throughput in your configuration would be
possible with OS 2008.11, or whether it comes from enhancements in builds after 105b.
You are running 110b, right?
Leal
[ http://www.eall.com.br/blog ]
james.ma...@sun.com said:
> I'm not yet sure what's broken here, but there's something pathologically
> wrong with the IO rates to the device during the ZFS tests. In both cases,
> the wait queue is getting backed up, with horrific wait queue latency
> numbers. On the read side, I don't understand
>Please send "zpool status" output.
bash-3.2# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c0t0d0s0  ONLINE       0     0     0

errors: No known data errors
Hello Jim,
I double-checked again, but it's like I told you:
echo zfs_prefetch_disable/W0t1 | mdb -kw
fixes my problem.
I did a reboot and set only this single parameter, which immediately makes the
read throughput go up from ~2 MB/s to ~30 MB/s.
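For reference, a minimal sketch of how to check and revert this tunable (the read-back and re-enable commands are standard mdb usage, not quotes from the thread):

echo zfs_prefetch_disable/D | mdb -k      (print the current value in decimal)
echo zfs_prefetch_disable/W0t1 | mdb -kw  (disable file-level prefetch, as above)
echo zfs_prefetch_disable/W0t0 | mdb -kw  (re-enable prefetch)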
Hello Jim,
I also fiddled around with zfs_vdev_max_pending; maybe I made a mistake and did
not revert it correctly, or maybe they both play a role in this game and I
didn't notice correctly. I will recheck tomorrow and report.
Regards,
roland
I don't understand why disabling ZFS prefetch solved this
problem. The test case was a single threaded sequential write, followed
by a single threaded sequential read.
Anyone listening on ZFS have an explanation as to why disabling
prefetch solved Roland's very poor bandwidth problem?
My only th
Cross-posting to zfs-discuss.
By my math, here's what you're getting:
4.6 MB/sec on writes to ZFS.
2.2 MB/sec on reads from ZFS.
90 MB/sec on reads from the block device.
What is c0t1d0? I assume it's a hardware RAID LUN,
but how many disks, and what type of LUN?
What version of Solaris (cat /etc/re
Hello Ben,
> If you want to put this to the test, consider disabling prefetch and
> trying again. See
> http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
I should have read and followed the advice better; this was the essential hint.
Thanks very much.
After issuing
echo zfs_prefetc
Mhhh, I found this in dmesg:
Mar 30 16:49:20 s-zfs01 genunix: [ID 923486 kern.warning] WARNING: Page83 data
not standards compliant MegaRAID LD 1 RAID5 572G 516O
I don't have a clue what this means.
Monday, 30 March 2009, 17:00:54 CEST
Mar 30 16:48:06 s-zfs01 pcplusmp: [ID 805372 kern.
Thanks so far!
So, here are some numbers:
I booted into Linux, and streaming writes (1 M block size) are 6-8 MB/s; streaming
READS are >100 MB/s (tested with a file size much larger than RAM).
With Solaris I'm getting similar values for WRITES, but READS are painfully
slow.
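As a minimal sketch of that kind of streaming test (the dataset and file names and the count are placeholders, not the ones used in the thread):

ptime dd if=/dev/zero of=/tank/testfile bs=1024k count=16384   (streaming write, ~16 GB)
ptime dd if=/tank/testfile of=/dev/null bs=1024k               (streaming read back)

The file should be well over the machine's RAM size so the read is not served from the ARC.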
I agree with Jim, we need some numbers to help. I would recommend also
looking not just at 'iostat' but also 'fsstat' to get a better idea of
what the IO load is like on an op basis.
Some questions and suggestions come to mind:
1) Have you disabled atime on the dataset(s)? (zfs set atime=off po
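As a minimal sketch of those checks (the dataset name tank is only a placeholder):

zfs get atime tank      (check whether atime updates are enabled)
zfs set atime=off tank  (disable atime updates if they are not needed)
fsstat zfs 1            (per-second ZFS operation counts, to see the load on an op basis)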
Can you give us some numbers?
ptime dd ..
iostat -zxnd while the dd is running, so we can see what
kind of IO rates you're getting from the drives...
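For example (a minimal sketch; the file path is a placeholder, not taken from the thread):

ptime dd if=/tank/testfile of=/dev/null bs=128k   (time the sequential read)
iostat -zxnd 1                                    (per-device rates each second, skipping idle devices; run alongside the dd)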
Thanks
roland wrote:
Hello,
I have a very weird problem on an FSC RX300 server machine (LSI RAID controller).
I first came across this wh