Hi,
*The PCIE 8x port gives me 4GBps, which is 32Gbps. No problem there. Each
ESata port guarantees 3Gbps, therefore a 12Gbps limit on the controller.*
I was simply listing the bandwidth available at the different stages of the
data cycle. The PCIE port gives me 32Gbps. The Sata card gives me a pos
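For concreteness, those per-stage ceilings line up like this (a rough sketch in
Python; the 4GBps PCIe and 3Gbps-per-port figures are the ones quoted above,
the rest is plain arithmetic):

    # Bandwidth ceiling at each stage of the chain described above
    pcie_gbit = 4 * 8              # PCIe 8x slot: 4 GB/s ~= 32 Gbit/s
    esata_gbit = 4 * 3             # four ESata ports at 3 Gbit/s each
    print("PCIe slot ceiling :", pcie_gbit, "Gbit/s")
    print("ESata controller  :", esata_gbit, "Gbit/s")
    print("binding limit     :", min(pcie_gbit, esata_gbit), "Gbit/s")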
Ah, I see. But I think your math is a bit out:
62.5e6 IOs @ 100 IOPS
= 625,000 seconds
= 10,416 minutes
= 173 hours
= 7d 6h.
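The same conversion in Python, for anyone who wants to plug in their own I/O
count or IOPS figure (just unit arithmetic on the numbers above):

    # 62.5e6 I/Os at 100 IOPS, converted to days and hours
    ios, iops = 62.5e6, 100
    seconds = ios / iops                              # 625,000 s
    days, rem = divmod(seconds, 86400)
    print("%d days %.1f hours" % (days, rem / 3600))  # -> 7 days 5.6 hours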
So 7 days & 6 hours. That's long, but I can live with it. This isn't for an
enterprise environment. While the length of time is of worry in terms of
increasing the chance another drive wi
Mattias, what you say makes a lot of sense. When I saw *Both of the above
situations resilver in equal time*, I was like "no way!" But like you said,
assuming no bus bottlenecks.
This is my exact breakdown (cheap disks on cheap bus :P) :
PCI-E 8X 4-port ESata Raid Controller.
4 x ESata to 5Sata Port multipliers (each connected to an ESata port on the controller).
Makes sense. My understanding is not good enough to confidently make my own
decisions, and I'm learning as I'm going. The BPG says:
- The recommended number of disks per group is between 3 and 9. If you
have more disks, use multiple groups
If there was a reason leading up to this statement,
On Sep 9, 2010, at 6:39 AM, Marty Scholes wrote:
> Erik wrote:
>> Actually, your biggest bottleneck will be the IOPS
>> limits of the
>> drives. A 7200RPM SATA drive tops out at 100 IOPS.
>> Yup. That's it.
>> So, if you need to do 62.5e6 IOPS, and the rebuild
>> drive can do just 100
>> IOPS,
Ahhh! So that's how the formula works. That makes perfect sense.
Let's take my case as a scenario:
Each of my vdevs is a 10-disk RaidZ2 (8 data + 2 parity). Using a 128K stripe, I'll
have 128K/8 = 16K per data drive & 16K per parity drive. That
fits both 512B & 4KB sectors.
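A quick check of that split (sketch only; 128K recordsize and 8 data drives as
above):

    # How a 128K record splits across the 8 data drives of a 10-disk RaidZ2
    recordsize = 128 * 1024
    per_drive = recordsize // 8                                  # 16K per data drive
    print(per_drive, "bytes per data drive")
    print("multiple of 512B sectors:", per_drive % 512 == 0)     # True
    print("multiple of 4KB sectors :", per_drive % 4096 == 0)    # True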
It works in my favo
> From: Haudy Kazemi [mailto:kaze0...@umn.edu]
>
> There is another optimization in the Best Practices Guide that says the
> number of devices in a vdev should be (N+P) with P = 1 (raidz), 2
> (raidz2), or 3 (raidz3) and N equals 2, 4, or 8.
> I.e. 2^N + P where N is 1, 2, or 3 and P is the RAIDZ level.
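Spelled out, that rule gives the following vdev widths (just an enumeration of
the quoted formula, nothing ZFS itself enforces):

    # Enumerate the 2^N + P widths from the BPG rule quoted above
    for p, name in ((1, "raidz1"), (2, "raidz2"), (3, "raidz3")):
        widths = [2 ** n + p for n in (1, 2, 3)]   # 2, 4 or 8 data disks
        print(name, "->", widths, "disks per vdev")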
Erik Trimble wrote:
On 9/9/2010 2:15 AM, taemun wrote:
Erik: does that mean that keeping the number of data drives in a
raidz(n) to a power of two is better? In the example you gave, you
mentioned 14kb being written to each drive. That doesn't sound very
efficient to me.
(when I say the above, I mean a five disk raidz or a ten disk raidz2, etc)
Comment at end...
Mattias Pantzare wrote:
On Wed, Sep 8, 2010 at 15:27, Edward Ned Harvey wrote:
From: pantz...@gmail.com [mailto:pantz...@gmail.com] On Behalf Of
Mattias Pantzare
It
is about 1 vdev with 12 disks or 2 vdevs with 6 disks. If you have 2
vdevs you have to read half the data compared to 1 vdev to resilver a disk.
On 9/9/2010 6:19 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Erik Trimble
the thing that folks tend to forget is that RaidZ is IOPS limited. For
the most part, if I want to reconstruct a single slab (stripe)
Erik wrote:
> Actually, your biggest bottleneck will be the IOPS
> limits of the
> drives. A 7200RPM SATA drive tops out at 100 IOPS.
> Yup. That's it.
> So, if you need to do 62.5e6 IOPS, and the rebuild
> drive can do just 100
> IOPS, that means you will finish (best case) in
> 62.5e4 seconds
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> The characteristic that *really* makes a big difference is the number
> of
> slabs in the pool. i.e. if your filesystem is composed of mostly small
> files or fragments,
> From: Hatish Narotam [mailto:hat...@gmail.com]
>
> PCI-E 8X 4-port ESata Raid Controller.
> 4 x ESata to 5Sata Port multipliers (each connected to an ESata port on
> the controller).
> 20 x Samsung 1TB HDD's. (each connected to a Port Multiplier).
Assuming your disks can all sustain 500Mbit/sec,
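Under that 500Mbit/sec-per-disk assumption, the demand at each stage of the
layout quoted above works out roughly as follows (a sketch; the
5-disks-per-multiplier and 3Gbps-per-port figures come from the breakdown):

    # Does 500 Mbit/s per disk fit through the port multipliers and controller?
    per_disk_gbit = 0.5
    per_pm_demand = 5 * per_disk_gbit    # 5 disks behind each 3 Gbit/s ESata link
    total_demand = 4 * per_pm_demand     # four port multipliers
    print("per-port demand :", per_pm_demand, "Gbit/s (link limit 3)")
    print("total demand    :", total_demand, "Gbit/s (controller limit 12)")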
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Erik Trimble
>
> the thing that folks tend to forget is that RaidZ is IOPS limited. For
> the most part, if I want to reconstruct a single slab (stripe) of data,
> I have to issue a read to EA
On Thu, Sep 9, 2010 at 09:03, Erik Trimble wrote:
> Actually, your biggest bottleneck will be the IOPS limits of the drives. A
> 7200RPM SATA drive tops out at 100 IOPS. Yup. That's it.
>
> So, if you need to do 62.5e6 IOPS, and the rebuild drive can do just 100
> IOPS, that means you will finish (best case) in 62.5e4 seconds
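A small sketch of that model, with the slab count parameterised. The 62.5e6
figure would correspond to, say, 8TB of data at an average 128KB per slab
(that derivation is my assumption, not something stated in the quote):

    # Resilver time model: roughly one I/O per slab on the rebuilt disk
    def resilver_hours(slabs, iops=100):
        return slabs / iops / 3600.0

    slabs = 8e12 / 128e3     # e.g. 8 TB of data in 128 KB slabs -> 62.5e6
    print(slabs, "slabs ->", round(resilver_hours(slabs), 1), "hours")   # ~173.6 h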
On 9/9/2010 5:49 AM, hatish wrote:
Very interesting...
Well, let's see if we can do the numbers for my setup.
From a previous post of mine:
[i]This is my exact breakdown (cheap disks on cheap bus :P) :
PCI-E 8X 4-port ESata Raid Controller.
4 x ESata to 5Sata Port multipliers (each connected to an ESata port on the controller).
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Freddie Cash
>
> No, it (21-disk raidz3 vdev) most certainly will not resilver in the
> same amount of time. In fact, I highly doubt it would resilver at
> all.
>
> My first foray into ZFS re
Very interesting...
Well, let's see if we can do the numbers for my setup.
From a previous post of mine:
[i]This is my exact breakdown (cheap disks on cheap bus :P) :
PCI-E 8X 4-port ESata Raid Controller.
4 x ESata to 5Sata Port multipliers (each connected to an ESata port on the
controller).
On 9/9/2010 2:15 AM, taemun wrote:
Erik: does that mean that keeping the number of data drives in a
raidz(n) to a power of two is better? In the example you gave, you
mentioned 14kb being written to each drive. That doesn't sound very
efficient to me.
(when I say the above, I mean a five disk raidz or a ten disk raidz2, etc)
Erik: does that mean that keeping the number of data drives in a raidz(n) to
a power of two is better? In the example you gave, you mentioned 14kb being
written to each drive. That doesn't sound very efficient to me.
(when I say the above, I mean a five disk raidz or a ten disk raidz2, etc)
Cheers
On 9/8/2010 10:08 PM, Freddie Cash wrote:
On Wed, Sep 8, 2010 at 6:27 AM, Edward Ned Harvey wrote:
Both of the above situations resilver in equal time, unless there is a bus
bottleneck. 21 disks in a single raidz3 will resilver just as fast as 7
disks in a raidz1, as long as you are avoiding the bus bottleneck.
On Wed, Sep 8, 2010 at 6:27 AM, Edward Ned Harvey wrote:
> Both of the above situations resilver in equal time, unless there is a bus
> bottleneck. 21 disks in a single raidz3 will resilver just as fast as 7
> disks in a raidz1, as long as you are avoiding the bus bottleneck. But 21
> disks in a
On Wed, Sep 8, 2010 at 15:27, Edward Ned Harvey wrote:
>> From: pantz...@gmail.com [mailto:pantz...@gmail.com] On Behalf Of
>> Mattias Pantzare
>>
>> It
>> is about 1 vdev with 12 disks or 2 vdevs with 6 disks. If you have 2
>> vdevs you have to read half the data compared to 1 vdev to resilver a
>> disk.
> From: pantz...@gmail.com [mailto:pantz...@gmail.com] On Behalf Of
> Mattias Pantzare
>
> It
> is about 1 vdev with 12 disks or 2 vdevs with 6 disks. If you have 2
> vdevs you have to read half the data compared to 1 vdev to resilver a
> disk.
Let's suppose you have 1T of data. You have 12-disk r
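Sketching that point with numbers (my own illustration, assuming the 1T of
data is spread evenly and ignoring parity overhead):

    # Data to be read when resilvering one disk: 1 x 12-disk vdev vs 2 x 6-disk vdevs
    pool_data_tb = 1.0
    read_single_vdev = pool_data_tb       # all data lives in the one degraded vdev
    read_split_vdevs = pool_data_tb / 2   # only the degraded vdev's half is read
    print("1 x 12-disk vdev : read ~%.1f TB" % read_single_vdev)
    print("2 x 6-disk vdevs : read ~%.1f TB" % read_split_vdevs)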
Rebuild time is not a concern for me. The concern with rebuilding was the
stress it puts on the disks for an extended period of time (increasing the
chances of another disk failure). The % of data used doesn't matter, as the
system will try to get it done at max speed, thus creating the mentioned
On Wed, Sep 8, 2010 at 06:59, Edward Ned Harvey wrote:
>> On Tue, Sep 7, 2010 at 4:59 PM, Edward Ned Harvey
>> wrote:
>>
>> I think the value you can take from this is:
>> Why does the BPG say that? What is the reasoning behind it?
>>
>> Anything that is a "rule of thumb" either has reasoning be
> On Tue, Sep 7, 2010 at 4:59 PM, Edward Ned Harvey
> wrote:
>
> I think the value you can take from this is:
> Why does the BPG say that? What is the reasoning behind it?
>
> Anything that is a "rule of thumb" either has reasoning behind it (you
> should know the reasoning) or it doesn't (you s
Maybe 5x(3+1), using one disk from each controller: 15TB usable space, and
3+1 raidz rebuild time should be reasonable.
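For comparison, the raw usable capacity of the layouts floated in this thread,
with 1TB drives (parity deducted only; actual usable space will be somewhat
less):

    # Usable capacity of the 20-disk layouts discussed, in 1 TB drives
    layouts = {
        "2 x 10-disk raidz2":           2 * (10 - 2),
        "5 x 4-disk raidz1 (3+1)":      5 * (4 - 1),
        "3 x 6-disk raidz2 + 2 spares": 3 * (6 - 2),
        "3 x 7-disk raidz2 (needs 21)": 3 * (7 - 2),
    }
    for name, tb in layouts.items():
        print("%-29s %2d TB usable" % (name, tb))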
On 9/7/2010 4:40 AM, hatish wrote:
Thanks for all the replies :)
My mindset is split in two now...
Some detail - I'm using 4 1-to-5 Sata Port multipliers connected to a 4-port
SATA raid card.
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of hatish
>
> I have just
> read the Best Practices guide, and it says your group shouldn't have > 9
> disks.
I think the value you can take from this is:
Why does the BPG say that? What is the reasoning behind it?
Thanks for all the replies :)
My mindset is split in two now...
Some detail - I'm using 4 1-to-5 Sata Port multipliers connected to a 4-port
SATA raid card.
I only need reliability and size, as long as my performance is the equivalent
of one drive, I'm happy.
I'm assuming all the data used in t
On Mon, Sep 6, 2010 at 2:36 PM, Roy Sigurd Karlsbakk wrote:
> a 7k2 drive for l2arc?
It wouldn't be great, but you could put an SSD in the bay instead.
-B
--
Brandon High : bh...@freaks.com
- Original Message -
> On Mon, Sep 6, 2010 at 8:53 AM, hatish wrote:
> > I'm setting up a server with 20x1TB disks. Initially I had thought to
> > setup the disks using 2 RaidZ2 groups of 10 disks. However, I have
> > just read the Best Practices guide, and it says your group shouldn't
> > have > 9 disks.
On Mon, Sep 6, 2010 at 8:53 AM, hatish wrote:
> I'm setting up a server with 20x1TB disks. Initially I had thought to setup
> the disks using 2 RaidZ2 groups of 10 disks. However, I have just read the
> Best Practices guide, and it says your group shouldn't have > 9 disks. So I'm
> thinking a bett
Otherwise you can have 2 disks as hot spares: three 6-disk vdevs.
Can you add another disk? Then you have three 7-disk vdevs. (Always use raidz2.)
Hi
On Monday 06 September 2010 17:53:44 hatish wrote:
> I'm setting up a server with 20x1TB disks. Initially I had thought to setup
> the disks using 2 RaidZ2 groups of 10 disks. However, I have just read the
> Best Practices guide, and it says your group shouldn't have > 9 disks. So
> I'm thinking a
I'm setting up a server with 20x1TB disks. Initially I had thought to setup the
disks using 2 RaidZ2 groups of 10 disks. However, I have just read the Best
Practices guide, and it says your group shouldn't have > 9 disks. So I'm thinking
a better configuration would be 2 x 7disk RaidZ2 + 1 x 6disk