On Jan 29, 2010, at 9:12 AM, Scott Meilicke wrote:
> Link aggregation can use different algorithms to load balance. Using L4 (IP
> plus originating port, I think), a single client computer with the same
> protocol (NFS) but different origination ports has allowed me to saturate
> both NICs in my LAG. So yes, you just need more than one 'conversation'.
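On recent OpenSolaris builds, the hashing policy Scott describes is chosen when the aggregation is created; a minimal sketch, assuming two links named e1000g0 and e1000g1 (substitute your own):

    # LACP aggregation hashing on L4 (IP + port), so multiple TCP streams
    # from a single client can spread across both links
    dladm create-aggr -L active -P L4 -l e1000g0 -l e1000g1 aggr0
    dladm show-aggr aggr0    # verify the policy and member links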
Thomas Burgess wrote:
On Fri, Jan 29, 2010 at 5:54 AM, Edward Ned Harvey
<mailto:sola...@nedharvey.com> wrote:
> Thanks for the responses guys. It looks like I'll probably use RaidZ2
> with 8 drives. The write bandwidth isn't that great as it'll be a
> hundred gigs every couple weeks but in a bulk load type of environment.
> So, not a major issue. Testing with 8 drives in a raidz2 easily
> saturated a GigE connection.
On Thu, Jan 28, 2010 at 09:33:19PM -0800, Ed Fang wrote:
> We considered an SSD ZIL as well, but from my understanding it won't
> help much on sequential bulk writes but really helps on random
> writes (to sequence writes going to disk better).
A slog will only help if your write load involves lots of sync writes.
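Concretely, a slog only absorbs synchronous writes (NFS COMMITs, O_DSYNC, databases); a streaming bulk load is mostly asynchronous and goes straight to the main vdevs. A sketch, with a made-up device name:

    # add an SSD as a separate intent log; only sync writes will use it
    zpool add tank log c4t0d0
    # check whether the log device actually sees traffic under your workload
    zpool iostat -v tank 5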
Thanks for the responses guys. It looks like I'll probably use RaidZ2 with 8
drives. The write bandwidth isn't that great as it'll be a hundred gigs every
couple weeks but in a bulk load type of environment. So, not a major issue.
Testing with 8 drives in a raidz2 easily saturated a GigE connection.
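A quick way to confirm that the pool rather than the network is the ceiling is to time a large local write; a rough sketch (path and sizes arbitrary, and turn compression off on the test dataset or the zeros will compress away). GigE tops out around 110-120 MB/s in practice:

    ptime dd if=/dev/zero of=/tank/ddtest bs=1024k count=10240    # ~10 GB
    rm /tank/ddtest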
On Thu, Jan 28, 2010 at 07:26:42AM -0800, Ed Fang wrote:
> 4 x 6-drive vdevs in RaidZ1 configuration
> 3 x 8-drive vdevs in RaidZ2 configuration
Another choice might be
2 x 12-drive vdevs in raidz2 configuration
This gets you the space of the first, with the recovery properties of
the second, at a cost in potential performance.
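For illustration, that 2 x 12 layout is a single command; the device names here are made up:

    # two 12-disk raidz2 top-level vdevs in one pool
    zpool create tank \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0  c1t5d0 \
               c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0  c2t5d0 \
               c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0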
> Replacing my current media server with another larger capacity media
> server. Also switching over to solaris/zfs.
>
> Anyhow we have 24 drive capacity. These are for large sequential
> access (large media files) used by no more than 3 to 5 users at a time.
What type of disks are you using?
Personally, I'd go with 4x raidz2 vdevs, each with 6 drives. You may not get
as much raw storage space, but you can lose up to 2 drives per vdev, and you'll
get more IOPS than with a 3x vdev setup.
Our current 24-drive storage servers use 3x raidz2 vdevs with 8 drives in
each. Performance...
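Rough space math for the 24-bay options, counting data disks (parity excluded):

    4 x 6-drive raidz1:  4 * (6 - 1)  = 20 data disks
    4 x 6-drive raidz2:  4 * (6 - 2)  = 16 data disks
    3 x 8-drive raidz2:  3 * (8 - 2)  = 18 data disks
    2 x 12-drive raidz2: 2 * (12 - 2) = 20 data disks

Random IOPS scale roughly with the number of top-level vdevs, since each raidz vdev delivers about one disk's worth of small random reads; that is why 4 vdevs beat 3, which beat 2, on that axis.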
It looks like there isn't a free slot for a hot spare? If that is the case,
then it is one more factor pushing towards raidz2, as you will need time to
remove the failed disk and insert a new one. During that time you don't want
to be left unprotected.
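If a bay can be freed up, designating a spare is one command; the device name below is made up:

    # add a hot spare; FMA pulls it in automatically when a disk faults
    zpool add tank spare c5t0d0
    zpool status tank    # the spare appears under its own 'spares' section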
Some very interesting insights on the availability calculations:
http://blogs.sun.com/relling/entry/raid_recommendations_space_vs_mttdl
For streaming also look at:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6732803
Regards,
Robert
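The back-of-envelope model behind those availability numbers (Relling's posts work through more refined versions; treat this as a sketch) uses the per-disk mean time to failure (MTTF), the resilver time (MTTR), and N disks per vdev:

    raidz1: MTTDL ~ MTTF^2 / (N * (N-1) * MTTR)
    raidz2: MTTDL ~ MTTF^3 / (N * (N-1) * (N-2) * MTTR^2)

The extra factor of roughly MTTF/MTTR is why double parity wins so decisively once resilver times stretch into days.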
If a vdev fails, you lose the pool.
If you go with raidz1 and two of the RIGHT drives fail (2 in the same vdev),
your pool is lost.
I was faced with a similar situation recently and decided that raidz2 was
the better option.
It comes down to resilver times. If you look at how long it will take...
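Rough arithmetic: a resilver rewrites one disk's worth of allocated data, so a full 1 TB drive at an optimistic 50 MB/s sustained is 10^12 / (50 * 10^6) = 20,000 seconds, about 5.5 hours, and raidz resilvers on a busy or fragmented pool are often much slower. Progress can be watched while it runs:

    zpool status tank    # shows 'resilver in progress' with % done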
Replacing my current media server with another larger capacity media server.
Also switching over to solaris/zfs.
Anyhow we have 24 drive capacity. These are for large sequential access (large
media files) used by no more than 3 to 5 users at a time. I'm inquiring as to
what the best configuration would be.