Richard Elling wrote:
> I think a picture is emerging that if you have enough RAM, the
> ARC is working very well. Which means that the ARC management
> is suspect.
>
> I propose the hypothesis that ARC misses are not prefetched. The
> first time through, prefetching works. For the second pass, the
> misses are not prefetched, so the reads slow right down.
You can't replace it because this disk is still a valid member of the pool,
although it is marked faulty.
Put in a replacement disk, then replace the faulty one with the new
disk in the pool.
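Roughly like this (just a sketch; the pool and device names here are
made up, take the real ones from 'zpool status'):

  # zpool status mypool                    (note the FAULTED device, say c1t3d0)
  # zpool replace mypool c1t3d0 c1t5d0     (swap it for the new disk c1t5d0)
  # zpool status mypool                    (watch the resilver complete)

If the new disk goes into the same physical slot and gets the same
device name, 'zpool replace mypool c1t3d0' on its own is enough.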
Regards,
Tom
Yes, that makes sense. For the first run, the pool has only just been mounted,
so the ARC will be empty, with plenty of space for prefetching.
On the second run however, the ARC is already full of the data that we just
read, and I'm guessing that the prefetch code is less aggressive when
there is so little free space left in the cache.
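One way to test that guess (my own suggestion, not part of the test
script, and the usual caveats about poking a live kernel with mdb
apply):

  # echo zfs_prefetch_disable/D | mdb -k       (0 means prefetch is enabled)
  # echo zfs_prefetch_disable/W0t1 | mdb -kw   (turn prefetch off)
  ... run both passes again ...
  # echo zfs_prefetch_disable/W0t0 | mdb -kw   (turn prefetch back on)

If prefetch is the difference, the first pass should slow down to
something like the second pass.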
> It would be good to see results from a few OpenSolaris users running
> a recent 64-bit kernel, and with fast storage to see if this is an
> OpenSolaris issue as well.
Bob,
Here's an example of an OpenSolaris machine, 2008.11 upgraded to the 117 devel
release. X4540, 32GB RAM. The file count was bumped up to 9000 to be
a little over double the RAM.
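For anyone who wants to try the same thing by hand, the shape of the
test is roughly this (my own sketch, not the actual script; sizes are
made up to land a little over 2x RAM, and /dev/zero is fine as long as
compression is off on the filesystem):

  # cd /tank/cachetest            (tank is a placeholder pool/filesystem)
  # i=0
  # while [ $i -lt 9000 ]; do
  >   dd if=/dev/zero of=file.$i bs=1024k count=8 2>/dev/null
  >   i=$((i+1))
  > done
  # time cat file.* > /dev/null   (first pass, cold ARC)
  # time cat file.* > /dev/null   (second pass, ARC already full)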
I don't have a replacement, but I don't want the disk to be used right now by
the volume: how do I do that?
This is exactly the point of the offline command as explained in the
documentation: disabling unreliable hardware, or removing it temporarily.
So is this a huge bug in the documentation?
W
On Wed, 15 Jul 2009, Ross wrote:
Yes, that makes sense. For the first run, the pool has only just
been mounted, so the ARC will be empty, with plenty of space for
prefetching.
I don't think that this hypothesis is quite correct. If you use
'zpool iostat' to monitor the read rate while reading the files, you
can see what the pool is actually doing during each pass.
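For example (pool name is a placeholder):

  # zpool iostat mypool 5        (pool-wide bandwidth every 5 seconds)
  # zpool iostat -v mypool 5     (the same, broken down per vdev/device)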
On Wed, 15 Jul 2009, My D. Truong wrote:
Here's an example of an OpenSolaris machine, 2008.11 upgraded to the
117 devel release. X4540, 32GB RAM. The file count was bumped up
to 9000 to be a little over double the RAM.
Your timings show a 3.1X hit so it appears that the OpenSolaris
improvements have not made this problem go away.
You could offline the disk if *this* disk (not the pool) had a replica.
Nothing wrong with the documentation. Hmm, maybe it is a little
misleading here. I walked into the same "trap".
The pool is not using the disk anymore anyway, so (from the zfs point
of view) there is no need to offline the disk.
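To put it in command terms (a sketch with made-up names): offline only
succeeds while the rest of the vdev can carry the data, otherwise
zpool refuses:

  # zpool offline tank c1t3d0
  cannot offline c1t3d0: no valid replicas    (non-redundant config)

In a mirror or raidz the same command just succeeds, and
'zpool online tank c1t3d0' brings the disk back afterwards.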
Bob Friesenhahn wrote:
On Wed, 15 Jul 2009, Ross wrote:
Yes, that makes sense. For the first run, the pool has only just
been mounted, so the ARC will be empty, with plenty of space for
prefetching.
I don't think that this hypothesis is quite correct. If you use
'zpool iostat' to monitor the read rate while reading the files, you
can see what the pool is actually doing during each pass.
On Wed, 15 Jul 2009, Richard Elling wrote:
Unfortunately, "zpool iostat" doesn't really tell you anything about
performance. All it shows is bandwidth. Latency is what you need
to understand performance, so use iostat.
You are still thinking about this as if it was a hardware-related
problem rather than a software one.
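For reference, the view Richard means is something like this (the
interval is arbitrary, and only the data disks are interesting):

  # iostat -xnz 5
      r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device

The actv column shows how many I/Os are outstanding per device and
asvc_t the average service time, which together say a lot more than a
pool-wide bandwidth number.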
Bob Friesenhahn wrote:
On Wed, 15 Jul 2009, Richard Elling wrote:
Unfortunately, "zpool iostat" doesn't really tell you anything about
performance. All it shows is bandwidth. Latency is what you need
to understand performance, so use iostat.
You are still thinking about this as if it was a hardware-related
problem rather than a software one.
On Wed, 15 Jul 2009, Richard Elling wrote:
heh. What you would be looking for is evidence of prefetching. If
there is a lot of prefetching, the actv will tend to be high and
latencies relatively low. If there is no prefetching, actv will be
low and latencies may be higher. This also implies that you can tell
the difference between the two runs just by watching iostat.
Aaah, ok, I think I understand now. Thanks Richard.
I'll grab the updated test and have a look at the ARC ghost results when I get
back to work tomorrow.
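(For the archives: the ghost list counters can also be read straight
from kstat, at least on recent builds, e.g.

  # kstat -p zfs:0:arcstats | grep ghost

which prints mru_ghost_hits and mfu_ghost_hits among the rest.)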
I want to express my thanks. My gratitude. I am not easily impressed
by technology anymore and ZFS impressed me this morning.
Sometime late last night a primary server of mine had a critical
fault. One of the PCI cards in a V480 was the cause and for whatever
reasons this destroyed the DC-DC power converters.
I recently installed opensolaris with the intention of creating a home
fileserver. The machine I installed on has two 1TB drives, and I wanted to
create a raidz config. Unfortunately, I am very, very new to solaris and
installed the OS on a single 100GB partition on the first disk, with
the rest of the space unused.
I found a guide that explains how to accomplish what I was looking to do:
http://www.kamiogi.net/Kamiogi/Frame_Dragging/Entries/2009/5/10_OpenSolaris_Disk_Partitioning_and_the_Free_Hog.html
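For anyone else who hits this, what I was after boils down to
something like the following (slice names are made up; check yours
with 'format'): carve the unused space on each disk into a slice and
build the data pool out of those:

  # format                                  (create/verify a big slice, say s7, on each disk)
  # zpool create tank mirror c7d0s7 c8d0s7  (mirrored data pool from the leftover space)

With only two disks a mirror is the usual choice rather than raidz.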
Today, I ran a scrub on my rootFS pool.
I received the following lovely output:
# zpool status larger_root
  pool: larger_root
 state: ONLINE
 scrub: scrub completed after 307445734561825856h29m with 0 errors on
        Wed Jul 15 21:49:02 2009
config:

        NAME            STATE     READ WRITE CKSUM
On Wed, Jul 15, 2009 at 9:19 PM, Rich wrote:
> Today, I ran a scrub on my rootFS pool.
>
> I received the following lovely output:
> # zpool status larger_root
> pool: larger_root
> state: ONLINE
> scrub: scrub completed after 307445734561825856h29m with 0 errors on
> Wed Jul 15 21:49:02 2009
>