>> One day, the write performance of zfs degraded.
>> The write performance decreased from 60MB/s to about 6MB/s in sequential
>> writes.
>>
>> Command:
>> date;dd if=/dev/zero of=block bs=1024*128 count=1;date
See this thread:
http://www.opensolaris.org/jive/thread.jspa?threadID=139317&tstart=45
And one comment:
When we do a write operation (with the dd command), heavy read activity
increases from zero to 3M on each disk,
and the write bandwidth is poor.
The disk I/O %b increases from 0 to about 60.
I don't understand why this happens.
capacity o…
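To see where the unexpected reads come from, it helps to watch per-disk
activity while the dd runs. A minimal sketch (the pool name 'tank' is an
assumption; substitute your own):

  # per-vdev read/write bandwidth, sampled every 5 seconds
  zpool iostat -v tank 5
  # in another terminal: per-disk utilisation (%b) and service times
  iostat -xn 5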
Hi,
I got weird write performance and need your help.
One day, the write performance of zfs degraded.
The write performance decreased from 60MB/s to about 6MB/s in sequential writes.
Command:
date;dd if=/dev/zero of=block bs=1024*128 count=1;date
The hardware configuration is 1 Dell MD3000 an…
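A longer timed run gives a more meaningful number than a single 128KB
block. A minimal sketch, assuming the pool is mounted at /tank (the mount
point and run size are assumptions) and a ~1GB write is acceptable:

  cd /tank
  # 8192 x 128KB = 1GB; sync flushes buffered writes before timing stops
  time ( dd if=/dev/zero of=block bs=131072 count=8192 && sync )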
On 07/06/2011 22:57, LaoTsao wrote:
You have an unbalanced setup:
FC 4Gbps vs 10Gbps NIC
It's actually 2x 4Gbps (using MPxIO) vs 1x 10Gbps.
After 8b/10b encoding it is even worse, but this does not impact your
benchmark yet.
Sent from my iPad
Hung-Sheng Tsao (LaoTsao) Ph.D
On Jun 7, 2011, at 5:…
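For rough context, the encoding overhead can be sketched with nominal
figures (back-of-envelope assumptions, not measurements): 4GFC signals at
4.25Gbaud with 8b/10b encoding, while 10GbE uses the lighter 64b/66b:

  echo "2x 4GFC : $(echo '2 * 4.25 * 8 / 10' | bc -l) Gbit/s payload"  # ~6.8
  echo "1x 10GbE: $(echo '10 * 64 / 66' | bc -l) Gbit/s payload"       # ~9.7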
You have an unbalanced setup:
FC 4Gbps vs 10Gbps NIC
After 8b/10b encoding it is even worse, but this does not impact your
benchmark yet.
Sent from my iPad
Hung-Sheng Tsao (LaoTsao) Ph.D
On Jun 7, 2011, at 5:46 PM, Phil Harman wrote:
> On 07/06/2011 20:34, Marty Scholes wrote:
>> I'll throw out som…
On 07/06/2011 20:34, Marty Scholes wrote:
I'll throw out some (possibly bad) ideas.
Thanks for taking the time.
Is ARC satisfying the caching needs? 32 GB for ARC should almost cover the
40GB of total reads, suggesting that the L2ARC doesn't add any value for this
test.
Are the SSD device…
I'll throw out some (possibly bad) ideas.
Is ARC satisfying the caching needs? 32 GB for ARC should almost cover the
40GB of total reads, suggesting that the L2ARC doesn't add any value for this
test.
Are the SSD devices saturated from an I/O standpoint? Put another way, can ZFS
put data to…
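One way to sanity-check whether the ARC is already absorbing the reads is
to look at its size and hit/miss counters; a sketch using the standard
Solaris arcstats kstats:

  kstat -m zfs -n arcstats | egrep 'size|hits|misses'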
> The guide suggests that the ZIL be sized to 1/2 the amount of RAM in the
> server, which would be 1GB.
The ZFS Best Practices Guide does detail the absolute maximum
size the ZIL can grow in theory, which as you stated is 1/2 the size
of the host's physical memory. But in practice, the very next…
Ok here's the thing ...
A customer has some big tier 1 storage, and has presented 24 LUNs (from
four RAID6 groups) to an OI148 box which is acting as a kind of iSCSI/FC
bridge (using some of the cool features of ZFS along the way). The OI
box currently has 32GB configured for the ARC, and 4x 2…
The server I currently have only has 2GB of RAM. At some point, I
will be adding more RAM to the server, but I'm not sure when.
I want to add a mirrored ZIL. I have 2 Intel 32GB SSDSA2SH032G1GN drives.
As such, I have been reading the ZFS Best Practices Guide
http://www.solarisinternals.com/
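For reference, attaching the pair as a mirrored log device is a one-liner;
a sketch with placeholder names (the pool name 'tank' and the device names
are assumptions):

  # add the two SSDs as a mirrored slog
  zpool add tank log mirror c1t2d0 c1t3d0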
I am running zpool 22 (Solaris 10U9) and I am looking for a way to
determine how much more work has to be done to complete a resilver
operation (it is already at 100%, but I know that is not a really
accurate number).
From my understanding of how the resilver operation works, it
walks the…
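One blunt way to watch progress is simply to poll zpool status; a sketch,
assuming the pool is called tank:

  # print the resilver progress line once a minute
  while :; do zpool status tank | grep -i resilver; sleep 60; done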