On 08 June, 2011 - Donald Stahl sent me these 0,6K bytes:
> >> One day, the write performance of ZFS degraded.
> >> The write performance decreased from 60MB/s to about 6MB/s for sequential
> >> writes.
> >>
> >> Command:
> >> date;dd if=/dev/zero of=block bs=1024*128 count=1;date
>
> See this thread:
Hi, also see:
http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg45408.html
We hit this with Sol11, though; not sure if it's possible with Sol10.
Yours
Markus Kovero
On 06/08/2011 12:12 PM, Donald Stahl wrote:
> One day, the write performance of ZFS degraded.
> The write performance decreased from 60MB/s to about 6MB/s for sequential
> writes.
> Command:
> date;dd if=/dev/zero of=block bs=1024*128 count=1;date
> See this thread:
> http://www.opensolaris.org/jive/thread.
For now, I find that it takes a long time in the function
metaslab_block_picker() in metaslab.c.
I guess there may be many AVL-tree searches.
I am still not sure what causes the AVL searches, or whether there are any
parameters to tune for them.
Any suggestions?
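(Not from the original posts; just a rough way to confirm where the time is
going. It assumes metaslab_block_picker is not inlined on your build, so the
DTrace fbt provider can still see it.)

# dtrace -n '
    fbt::metaslab_block_picker:entry { self->ts = timestamp; }
    fbt::metaslab_block_picker:return /self->ts/ {
        @["ns per call"] = quantize(timestamp - self->ts);
        self->ts = 0;
    }'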
On 06/08/2011 05:57 PM, Markus Kovero wrote:
Hi, also see:
On 06/08/2011 04:05 PM, Tomas Ögren wrote:
On 08 June, 2011 - Donald Stahl sent me these 0,6K bytes:
One day, the write performance of ZFS degraded.
The write performance decreased from 60MB/s to about 6MB/s for sequential
writes.
Command:
date;dd if=/dev/zero of=block bs=1024*128 count=1;date
Hi,
I have the following problem: during a controller (LSI MegaRAID 9261-8i)
outage, a Solaris Express 11 zpool got corrupted.
It is a whole 1.3 TB rpool, on a RAID5 volume built by the controller.
After replacing the damaged controller, the new one reports the volume as
OPTIMAL.
Importantly, the zpool has dedup enabled.
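(Not from the original message - a rough sketch of the usual first steps,
assuming the pool can be reached from a rescue or alternate boot environment
and that "rpool" is the pool in question.)

# zpool import                   # see whether the pool is visible at all
# zpool import -f rpool          # force the import if it claims to be in use
# zpool import -F rpool          # recovery mode: roll back the last few
                                 # transactions; may or may not help after
                                 # controller-level corruption
# zpool status -v rpool          # list any datasets/files with errors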
Ok,
I tested it. I ran two scrubs with the encrypted folders open, and there are no
issues any more. Thanks for the hint. I hope that will be fixed for everyone
soon.
Cheers
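(Illustrative only; "tank" is a placeholder pool name.) Running and checking a
scrub looks like:

# zpool scrub tank            # start a scrub of the whole pool
# zpool status -v tank        # shows scrub progress and any errors found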
On 06.06.2011, 11:54, Darren J Moffat wrote:
On 06/04/11 13:52, Thomas Hobbes wrote:
I am testing Solaris Express 11 with napp-it
> > Are some of the reads sequential? Sequential reads
> > don't go to L2ARC.
>
> That'll be it. I assume the L2ARC is just taking
> metadata. In situations
> such as mine, I would quite like the option of
> routing sequential read
> data to the L2ARC also.
The good news is that it is almost a c
On 08/06/2011 14:35, Marty Scholes wrote:
Are some of the reads sequential? Sequential reads
don't go to L2ARC.
That'll be it. I assume the L2ARC is just taking
metadata. In situations
such as mine, I would quite like the option of
routing sequential read
data to the L2ARC also.
The good news
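(Illustrative, not from the thread; "tank/data" is a placeholder dataset.) The
per-dataset knob controlling what is eligible for L2ARC is the secondarycache
property; it selects data vs. metadata caching, but does not by itself change
the prefetch behaviour discussed here:

# zfs get secondarycache tank/data          # current policy: all, metadata or none
# zfs set secondarycache=all tank/data      # cache both data and metadata (default)
# zfs set secondarycache=metadata tank/data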
On 06/08/2011 09:15 PM, Donald Stahl wrote:
"metaslab_min_alloc_size" is not in use when block allocator isDynamic block
allocator[1].
So it is not tunable parameter in my case.
May I ask where it says this is not a tunable in that case? I've read
through the code and I don't see what you are ta
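(Not from the original posts, and the example value is an assumption; the
symbol only exists on some builds, as the mdb output later in this thread
shows.) Where it does exist, the usual way to change it at runtime was mdb:

# echo "metaslab_min_alloc_size/K" | mdb -k           # print the current value
# echo "metaslab_min_alloc_size/Z 0x1000" | mdb -kw   # example: lower it to 4 KB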
Anyone running a Crucial CT064M4SSD2? Any good, or should
I try getting a RealSSD C300, as long as these are still
available?
--
Eugen* Leitl leitl http://leitl.org
ICBM: 48.07100, 11.36820 http://www.ativel.com
> In Solaris 10u8:
> root@nas-hz-01:~# uname -a
> SunOS nas-hz-01 5.10 Generic_141445-09 i86pc i386 i86pc
> root@nas-hz-01:~# echo "metaslab_min_alloc_size/K" | mdb -kw
> mdb: failed to dereference symbol: unknown symbol name
Fair enough. I don't have anything older than b147 at this point so I
was
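(Illustrative.) A quick, read-only way to check whether a given kernel exposes
the symbol at all before trying to tune it:

# echo "metaslab_min_alloc_size/K" | mdb -k   # prints the value if the symbol
                                              # exists; otherwise mdb reports
                                              # "unknown symbol name", as above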
On 06/08/11 01:05, Tomas Ögren wrote:
And if pool usage is >90%, then there's another problem (the algorithm for
finding free space changes).
Another (less satisfying) workaround is to increase the amount of free
space in the pool, either by reducing usage or adding more storage.
Observed behavior is that allocation is fast until usage crosses a threshold,
then performance hits a wall.
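(Illustrative, not from the thread; "tank" is a placeholder and the tunable
name and default are assumptions for builds of this era.) Checking how full the
pool is, and the threshold at which the allocator switches strategy:

# zpool list tank                          # the CAP column shows pool usage
# echo "metaslab_df_free_pct/D" | mdb -k   # assumed tunable: percent free below
                                           # which allocation switches from
                                           # first-fit to best-fit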
On Jun 7, 2011, at 9:12 AM, Phil Harman wrote:
> OK, here's the thing ...
>
> A customer has some big tier 1 storage, and has presented 24 LUNs (from four
> RAID6 groups) to an OI148 box which is acting as a kind of iSCSI/FC bridge
> (using some of the cool features of ZFS along the way). The OI
> Another (less satisfying) workaround is to increase the amount of free space
> in the pool, either by reducing usage or adding more storage. Observed
> behavior is that allocation is fast until usage crosses a threshold, then
> performance hits a wall.
We actually tried this solution. We were at
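(Illustrative; pool and device names are placeholders.) Growing the pool to get
back above the free-space threshold looks like:

# zpool list tank                        # check current capacity
# zpool add tank mirror c2t3d0 c2t4d0    # add another mirrored vdev; note that
                                         # vdevs cannot be removed again later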
> This is not a true statement. If the primarycache
> policy is set to the default, all data will
> be cached in the ARC.
Richard, you know this stuff so well that I am hesitant to disagree with you.
At the same time, I have seen this myself, trying to load video files into
L2ARC without success.
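(Illustrative; "tank/videos" is a placeholder dataset.) To see what the caching
policy actually is, and whether the L2ARC is being fed at all:

# zfs get primarycache,secondarycache tank/videos
# kstat -m zfs -n arcstats | grep l2_     # l2_size, l2_hits, l2_misses show
                                          # whether the cache device is used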
On 08 June, 2011 - Eugen Leitl sent me these 0,5K bytes:
>
> Anyone running a Crucial CT064M4SSD2? Any good, or should
> I try getting a RealSSD C300, as long as these are still
> available?
Haven't tried any of those, but how about one of these:
OCZ Vertex 3 (SandForce SF-2281, SATA III, MLC, t
I am running 4 of the 128GB version in our DR environment as L2ARC. I don't
have anything bad to say about them. They run quite well.
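(Illustrative; pool and device names are placeholders.) For reference, SSDs are
attached as L2ARC like this:

# zpool add tank cache c1t2d0 c1t3d0    # add the SSDs as cache devices
# zpool iostat -v tank 5                # watch per-device activity, including
                                        # the cache vdevs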
On Wed, Jun 08, 2011 at 11:44:16AM -0700, Marty Scholes wrote:
> And I looked in the source. My C is a little rusty, yet it appears
> that prefetch items are not stored in L2ARC by default. Prefetches
> will satisfy a good portion of sequential reads but won't go to
> L2ARC.
Won't go to L2ARC
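(Illustrative; the tunable name is an assumption for this era's arc.c.) The
behaviour described above is governed by l2arc_noprefetch, which can be
inspected and, carefully, flipped at runtime:

# echo "l2arc_noprefetch/D" | mdb -k      # 1 = prefetched buffers are not
                                          # eligible for L2ARC (the default)
# echo "l2arc_noprefetch/W 0" | mdb -kw   # allow prefetched (sequential) reads
                                          # into L2ARC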
On 06/09/2011 12:23 AM, Donald Stahl wrote:
Another (less satisfying) workaround is to increase the amount of free space
in the pool, either by reducing usage or adding more storage. Observed
behavior is that allocation is fast until usage crosses a threshold, then
performance hits a wall.
We
On 06/09/2011 10:14 AM, Ding Honghui wrote:
On 06/09/2011 12:23 AM, Donald Stahl wrote:
Another (less satisfying) workaround is to increase the amount of
free space
in the pool, either by reducing usage or adding more storage. Observed
behavior is that allocation is fast until usage crosses a
> There is a snapshot of the metaslab layout; the last 51 metaslabs have 64G of
> free space.
After we added all the disks to our system we had lots of free
metaslabs, but that didn't seem to matter. I don't know if perhaps the
system was attempting to balance the writes across more of our devices
but whate
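(Illustrative; "tank" is a placeholder pool name.) The metaslab layout being
discussed can be dumped, read-only, with zdb:

# zdb -m tank     # per-vdev metaslab summary: offsets, space maps, free space
# zdb -mm tank    # also prints the space-map segments for each metaslab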