raidz2 is recommended. As disks get large, it can take a long time to resilver
a raidz vdev, maybe several days. With raidz1, if another disk dies during the
resilver, you are screwed.
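The resilver-time worry above is easy to put in rough numbers. A back-of-envelope Python sketch (the disk size and the sustained rebuild rate are assumptions for illustration, not measurements from this thread):

```python
# Rough lower bound on resilver time: whole-disk bytes / sustained rebuild rate.
# Real raidz resilvers walk the block tree and are often much slower than this.
def resilver_hours(disk_tb, rate_mb_s):
    """Hours to rewrite one whole disk at a given sustained rate."""
    return disk_tb * 1e12 / (rate_mb_s * 1e6) / 3600

# A 2 TB disk at an assumed 50 MB/s sustained rebuild rate:
print(round(resilver_hours(2, 50), 1))  # → 11.1 hours, and that's the optimistic case
```

That window is the whole argument for raidz2: during those hours (or days) a raidz1 vdev has no remaining redundancy.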
--
This message posted from opensolaris.org
Thomas Burgess wrote:
For the OS, I'd drop the adapter/compact-flash combo and use the
"stripped down" Kingston version of the Intel X25-M MLC SSD. If you're
not familiar with it, the basic scoop is that this drive contains half
the flash memory (40 GB) *and* half the controller
>
> For the OS, I'd drop the adapter/compact-flash combo and use the
> "stripped down" Kingston version of the Intel X25-M MLC SSD. If you're
> not familiar with it, the basic scoop is that this drive contains half
> the flash memory (40 GB) *and* half the controller channels (5 versus
> 10) of the
Rather than hacking something like that, he could use a Disk on Module
(http://en.wikipedia.org/wiki/Disk_on_module) or something like
http://www.tomshardware.com/news/nanoSSD-Drive-Elecom-Japan-SATA,8538.html
(which I suspect may be a DOM but I've not poked around sufficiently to see).
Paul
--
On Wed, Dec 30, 2009 at 7:08 AM, Thomas Burgess wrote:
>
> I'm about to build a ZFS based NAS and i'd like some suggestions about how to
> set up my drives.
>
> The case i'm using holds 20 hot swap drives, so i plan to use either 4 vdevs
> with 5 drives or 5 vdevs with 4 drives each (and a hot s
On Wed, 30 Dec 2009, Richard Elling wrote:
Disagree. Scrubs and resilvers are IOPS bound.
This is a case of "it depends". On both of my Solaris systems, scrubs
seem to be bandwidth-limited. However, I am not using raidz or SATA
and the drives are faster than the total connectivity.
Bob
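Whether a scrub ends up IOPS-bound or bandwidth-bound comes down to the average block size being read versus the drive's two limits. A small illustrative sketch (the per-drive throughput and IOPS figures are assumed, typical-HDD-order-of-magnitude numbers):

```python
# A drive can deliver at most `drive_iops` reads/s, and at most
# drive_mb_s of bandwidth. Whichever ceiling is hit first limits the scrub.
def scrub_limit(avg_block_kb, drive_mb_s=100, drive_iops=150):
    """Return which drive limit a scrub hits first for a given block size."""
    bw_blocks = drive_mb_s * 1000 / avg_block_kb  # blocks/s at full bandwidth
    return "bandwidth" if bw_blocks < drive_iops else "iops"

print(scrub_limit(1024))  # large sequential blocks → "bandwidth"
print(scrub_limit(8))     # small scattered blocks  → "iops"
```

Which is exactly the "it depends" above: big streaming datasets scrub at the platter rate, fragmented pools of small blocks seek themselves to death.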
On Dec 30, 2009, at 11:01 AM, Bob Friesenhahn wrote:
On Wed, 30 Dec 2009, Thomas Burgess wrote:
Just curious, but in your "ideal" situation, is it considered best
to use 1 controller for each vdev or use a different controller for
each device in the vdev (i'd guess the latter but i've been wr
On Dec 30, 2009, at 10:56 AM, Bob Friesenhahn wrote:
On Wed, 30 Dec 2009, Richard Elling wrote:
He's limited by GbE, which can only do 100 MB/s or so...
the PCI busses, bridges, memory, controllers, and disks will
be mostly loafing, from a bandwidth perspective. In other
words, don't worry abo
On Wed, Dec 30, 2009 at 2:01 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Wed, 30 Dec 2009, Thomas Burgess wrote:
>
>>
>> Just curious, but in your "ideal" situation, is it considered best to use
>> 1 controller for each vdev or use a different controller for each device in
>> t
On Wed, 30 Dec 2009, Thomas Burgess wrote:
Just curious, but in your "ideal" situation, is it considered best
to use 1 controller for each vdev or use a different controller for
each device in the vdev (i'd guess the latter but i've been wrong
before)
From both a fault-tolerance standpoint,
On Wed, 30 Dec 2009, Richard Elling wrote:
He's limited by GbE, which can only do 100 MB/s or so...
the PCI busses, bridges, memory, controllers, and disks will
be mostly loafing, from a bandwidth perspective. In other
words, don't worry about it.
Except that cases like 'zfs scrub' and resilve
On Wed, Dec 30, 2009 at 1:17 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Wed, 30 Dec 2009, Thomas Burgess wrote:
>
>>
>> and, onboard with 6 sata ports... so what would be the best method of
>> connecting the drives if i go with 4 raidz vdevs or 5 raidz vdevs?
>>
>
> Try to
On Dec 30, 2009, at 10:17 AM, Bob Friesenhahn wrote:
On Wed, 30 Dec 2009, Thomas Burgess wrote:
and, onboard with 6 sata ports... so what would be the best
method of connecting the drives if i go with 4 raidz vdevs or 5
raidz vdevs?
Try to distribute the raidz vdevs as evenly as possib
On Wed, 30 Dec 2009, Thomas Burgess wrote:
and, onboard with 6 sata ports... so what would be the best
method of connecting the drives if i go with 4 raidz vdevs or 5
raidz vdevs?
Try to distribute the raidz vdevs as evenly as possible across the
available SATA controllers. In other wo
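One way to implement "spread each vdev across the controllers" is simply to deal drives round-robin. A Python sketch modeled on the hardware described in this thread (three 8-port cards plus 6 onboard ports); the `cXtYd0`-style device names are purely illustrative, not real device paths:

```python
# Port inventory: 3 PCI-X cards with 8 SATA ports each, plus 6 onboard
# ports (controller 4). Names are made up for illustration.
ports = [f"c{c}t{t}d0" for c, n in ((1, 8), (2, 8), (3, 8), (4, 6)) for t in range(n)]

def spread(devices, nvdevs, width):
    """Deal consecutive ports to different vdevs, round-robin, so each
    vdev ends up striped across several controllers instead of one."""
    groups = [[] for _ in range(nvdevs)]
    for i, dev in enumerate(devices[: nvdevs * width]):
        groups[i % nvdevs].append(dev)
    return groups

for vdev in spread(ports, 4, 5):   # 4 vdevs of 5 drives, 20 drives total
    print(vdev)
```

With this dealing order each 5-wide vdev spans three of the four controllers, so a single dead controller degrades every vdev a little rather than killing one vdev outright.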
On Dec 30, 2009, at 7:50 AM, Thomas Burgess wrote:
ok, but how should i connect the drives across the controllers?
Don't worry about the controllers. They are at least an order of
magnitude more reliable than the disks and if you are using HDDs,
then you will have plenty of performance.
-- ri
ok, but how should i connect the drives across the controllers?
i'll have 3 pci-x cards, each with 8 sata ports,
2 pci-x buses at 133 MHz and 2 at 100 MHz,
and, onboard, 6 sata ports... so what would be the best method of
connecting the drives if i go with 4 raidz vdevs or 5 raidz vdevs?
Hello,
On Dec 30, 2009, at 2:08 PM, Thomas Burgess wrote:
> I'm about to build a ZFS based NAS and i'd like some suggestions about how to
> set up my drives.
>
> The case i'm using holds 20 hot swap drives, so i plan to use either 4 vdevs
> with 5 drives or 5 vdevs with 4 drives each (and a ho
I can't answer your question - but I would like to see more details about the
system you are building (sorry if off topic here). What motherboard and what
compact flash adapters are you using?
I'm about to build a ZFS based NAS and i'd like some suggestions about how
to set up my drives.
The case i'm using holds 20 hot swap drives, so i plan to use either 4 vdevs
with 5 drives or 5 vdevs with 4 drives each (and a hot spare inside the
machine)
The motherboard i'm getting has 4 pci-x sl
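The trade-off between the two layouts being asked about is mostly usable space versus failures tolerated. The arithmetic, with a placeholder per-disk size (raidz2 gives up two disks per vdev to parity, raidz1 one):

```python
# Usable capacity = data disks per vdev x number of vdevs.
def usable_tb(nvdevs, width, parity, disk_tb=1.0):
    """Usable space for nvdevs raidz vdevs of `width` disks each."""
    return nvdevs * (width - parity) * disk_tb

print(usable_tb(4, 5, 2))  # 4 x 5-wide raidz2 → 12.0 (survives 2 failures per vdev)
print(usable_tb(5, 4, 1))  # 5 x 4-wide raidz1 → 15.0 (survives only 1 per vdev)
```

With 20 bays and 1 TB disks assumed, the raidz1 layout buys 3 TB of extra space at the cost of the resilver-window risk discussed earlier in the thread.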