Quoting David Magda :
On Wed, September 16, 2009 10:31, Edward Ned Harvey wrote:
Hi. If I am using slightly more reliable SAS drives versus SATA, SSDs
for both L2ARC and ZIL, and lots of RAM, will a mirrored pool of, say,
24 disks hold any significant advantages over a RAIDZ pool?
Generally speaking, …
Hi. If I am using slightly more reliable SAS drives versus SATA, SSDs
for both L2ARC and ZIL, and lots of RAM, will a mirrored pool of, say,
24 disks hold any significant advantages over a RAIDZ pool?
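The tradeoff being asked about can be sketched as two `zpool create` layouts for the same 24 disks. This is a sketch only: the device names (c0tXd0) and pool name (tank) are hypothetical, and only one of the two layouts can exist at a time.

```shell
# (a) 12 two-way mirrors: roughly half the raw capacity, but each mirror
#     vdev contributes independent IOPS, so small random I/O scales well
#     and resilver after a disk failure touches only one small vdev.
zpool create tank \
    mirror c0t0d0  c0t1d0   mirror c0t2d0  c0t3d0   mirror c0t4d0  c0t5d0 \
    mirror c0t6d0  c0t7d0   mirror c0t8d0  c0t9d0   mirror c0t10d0 c0t11d0 \
    mirror c0t12d0 c0t13d0  mirror c0t14d0 c0t15d0  mirror c0t16d0 c0t17d0 \
    mirror c0t18d0 c0t19d0  mirror c0t20d0 c0t21d0  mirror c0t22d0 c0t23d0

# (b) 4 x 6-disk raidz2: more usable capacity and two-disk redundancy per
#     vdev, but each raidz vdev delivers roughly one disk's worth of
#     random IOPS, so random-read-heavy workloads favor layout (a).
zpool create tank \
    raidz2 c0t0d0  c0t1d0  c0t2d0  c0t3d0  c0t4d0  c0t5d0 \
    raidz2 c0t6d0  c0t7d0  c0t8d0  c0t9d0  c0t10d0 c0t11d0 \
    raidz2 c0t12d0 c0t13d0 c0t14d0 c0t15d0 c0t16d0 c0t17d0 \
    raidz2 c0t18d0 c0t19d0 c0t20d0 c0t21d0 c0t22d0 c0t23d0
```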
Quoting Brian Hechinger :
If you need to ask, you can't afford it? :-D
-brian
--
We can all dream can't we?
Quoting Roman Naumenko :
http://www.plianttechnology.com/lightning_ls.php
Write Endurance Unlimited
:)
Does anyone know list prices?
Quoting Bob Friesenhahn :
On Thu, 10 Sep 2009, Rich Morris wrote:
On 07/28/09 17:13, Rich Morris wrote:
On Mon, Jul 20, 2009 at 7:52 PM, Bob Friesenhahn wrote:
Sun has opened internal CR 6859997. It is now in Dispatched
state at High priority.
CR 6859997 has recently been fixed in Nevada …
Quoting en...@businessgrade.com:
Hi. As the subject indicates, I'm trying to understand the impact of
the ZFS prefetch "issues" and whether they only affect a local ZFS
filesystem, versus say a zvol whose LUN is accessed remotely via iSCSI
or Fibre Channel.
Can anyone comment on this?
The specific "issues" are consolidated here: …
Hi. As the subject indicates, I'm trying to understand the impact of
the ZFS prefetch "issues" and whether they only affect a local ZFS
filesystem, versus say a zvol whose LUN is accessed remotely via iSCSI
or Fibre Channel.
Can anyone comment on this?
The specific "issues" are consolidated here:
ht
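One way to test whether file-level prefetch is implicated at all, on OpenSolaris/Solaris-era ZFS, is to toggle the `zfs_prefetch_disable` kernel tunable and re-run the workload. This is a diagnostic sketch, not a recommendation; `mdb -kw` writes live kernel memory, so use it with care.

```shell
# Temporarily disable ZFS file-level prefetch on the running kernel
# (W = write 32-bit word, 0t1 = decimal 1):
echo 'zfs_prefetch_disable/W0t1' | mdb -kw

# Or persistently, by adding this line to /etc/system (effective after reboot):
# set zfs:zfs_prefetch_disable = 1
```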
Quoting Mertol Ozyoney :
Hi,
You may be hitting a bottleneck at your HBA. Try using multiple HBAs or
drive channels.
Mertol
I'm pretty sure it's not an HBA issue. As I commented, my per-disk
write throughput stayed pretty consistent for 4-, 8- and 12-disk pools
and varied between 80 and 90 …
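The reasoning above can be made concrete with a little arithmetic: if per-disk write throughput stays flat as disks are added (say ~80 MB/s, the low end quoted), aggregate bandwidth scales linearly with disk count, which would not happen behind a saturated shared HBA or channel. A minimal sketch, assuming the 80 MB/s per-disk figure:

```shell
# Expected aggregate write bandwidth if each disk sustains ~80 MB/s
# independently (i.e. no shared-path bottleneck):
for n in 4 8 12; do
    echo "$n disks: ~$((n * 80)) MB/s aggregate"
done
# -> ~320, ~640 and ~960 MB/s
```

While the benchmark runs, `iostat -xnz 5` on Solaris shows per-device kr/s, kw/s and %b, which is the direct way to see whether individual disks or a shared path are saturating.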
Quoting Bob Friesenhahn :
On Mon, 31 Aug 2009, en...@businessgrade.com wrote:
Hi. I've been doing some simple read/write tests using filebench on
a mirrored pool. Essentially, I've been scaling up the number of
disks in the pool before each test between 4, 8 and 12. I've
noticed that for individual disks, ZFS write performance scales very
well …
Hi. I've been doing some simple read/write tests using filebench on a
mirrored pool. Essentially, I've been scaling up the number of disks
in the pool before each test between 4, 8 and 12. I've noticed that
for individual disks, ZFS write performance scales very well between
4, 8 and 12 disks …
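A scaling test like the one described can be sketched with filebench's interactive mode. The workload name (`filemicro_seqwrite`, one of the stock micro-workloads shipped with typical filebench installs) and the target path `/tank/fbtest` are assumptions; `$dir` should point at a dataset on the pool under test, and the pool would be destroyed and recreated with 4, 8 and 12 disks between runs.

```shell
# Sketch: run a sequential-write micro-workload for 60 seconds against
# a dataset on the pool under test, then repeat after resizing the pool.
filebench <<'EOF'
load filemicro_seqwrite
set $dir=/tank/fbtest
run 60
EOF
```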