freebsd-sta...@freebsd.org] On Behalf Of Markiyan Kushnir
Sent: Friday, January 07, 2011 8:10 AM
To: Jeremy Chadwick
Cc: Chris Forgeron; freebsd-stable@freebsd.org; Artem Belevich; Jean-Yves
Avenard
Subject: Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks
2011/1/7 Jeremy Chadwick
On 09/01/2011 10:14, Patrick M. Hausen wrote:
> I assume you are familiar with these papers?
>
> http://queue.acm.org/detail.cfm?id=1317403
> http://queue.acm.org/detail.cfm?id=1670144
>
> Short version: as hard disk sizes increase to 2 TB and beyond while the URE
> rate stays in the order of
On 09/01/2011 10:24, Jean-Yves Avenard wrote:
> On 9 January 2011 21:03, Matthew Seaman wrote:
>
>>
>> So you sacrifice performance 100% of the time based on the very unlikely
>> possibility of drives 1+2 or 3+4 failing simultaneously, compared to the
>> similarly unlikely possibility of drive
Hi, all,
Am 09.01.2011 um 11:03 schrieb Matthew Seaman:
> [*] All of this mathematics is pretty suspect, because if two drives
> fail simultaneously in a machine, the chances are the failures are not
> independent, but due to some external cause [eg. like the case fan
> breaking and the box toast
On 9 January 2011 21:03, Matthew Seaman wrote:
>
> So you sacrifice performance 100% of the time based on the very unlikely
> possibility of drives 1+2 or 3+4 failing simultaneously, compared to the
> similarly unlikely possibility of drives 1+3 or 1+4 or 2+3 or 2+4
But this is not what you firs
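The trade-off being argued over can be counted out explicitly. With four disks arranged as two mirror pairs (assumed here to be 1+2 and 3+4), only 2 of the 6 possible two-disk failures are fatal, while raidz2 survives all 6; a small sketch:

```shell
# Count which two-disk failures are fatal for 2x mirror pairs vs raidz2.
# Disks are numbered 1-4; the mirrors are assumed to be (1,2) and (3,4).
total=0
fatal_mirror=0
for a in 1 2 3; do
  b=$((a + 1))
  while [ "$b" -le 4 ]; do
    total=$((total + 1))
    # A mirrored layout only dies when both halves of one mirror die.
    if [ "$a$b" = "12" ] || [ "$a$b" = "34" ]; then
      fatal_mirror=$((fatal_mirror + 1))
    fi
    b=$((b + 1))
  done
done
echo "two-disk combinations: $total"        # 6
echo "fatal for 2x mirror:   $fatal_mirror" # 2
echo "fatal for raidz2:      0"             # raidz2 tolerates any two failures
```

That 2-in-6 chance is what is being weighed against the mirror layout's performance advantage, and as noted below, simultaneous failures in one box are often not independent anyway.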
On 09/01/2011 09:01, Jean-Yves Avenard wrote:
> Hi
>
> On 9 January 2011 19:44, Matthew Seaman wrote:
>> Not without backing up your current data, destroying the existing
>> zpool(s) and rebuilding from scratch.
>>
>> Note: raidz2 on 4 disks doesn't really win you anything over 2 x mirror
>> pairs of disks, and the RAID10 mirror is going to be rather m
Hi
On 9 January 2011 19:44, Matthew Seaman wrote:
> Not without backing up your current data, destroying the existing
> zpool(s) and rebuilding from scratch.
>
> Note: raidz2 on 4 disks doesn't really win you anything over 2 x mirror
> pairs of disks, and the RAID10 mirror is going to be rather m
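Spelling out "backing up, destroying, rebuilding from scratch" as commands, a rough sketch (pool, snapshot, file and disk names are all placeholders; verify the backup is readable BEFORE destroying anything):

```shell
# 1. Take a recursive snapshot and stream it to separate media.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate > /backup/tank-migrate.zfs

# 2. Destroy the old raidz1 pool -- the point of no return.
zpool destroy tank

# 3. Recreate the pool as raidz2 on the same (or new) disks and restore.
zpool create tank raidz2 ad4 ad6 ad8 ad10
zfs receive -F -d tank < /backup/tank-migrate.zfs
```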
Brill! Thanks :)
Joe
On 8 Jan 2011, at 09:50, Jeremy Chadwick wrote:
> On Sat, Jan 08, 2011 at 09:14:19AM +, Josef Karthauser wrote:
>> On 7 Jan 2011, at 17:30, Artem Belevich wrote:
>>> One way to get specific ratio for *your* pool would be to collect
>>> record size statistics from your
On 09/01/2011 05:50, Randy Bush wrote:
> given i have raid or raidz1, can i move to raidz2?
>
> # zpool status
> pool: tank
> state: ONLINE
> scrub: none requested
> config:
>
> NAME        STATE     READ WRITE CKSUM
> tank        ONLINE       0     0     0
>   raidz1
given i have raid or raidz1, can i move to raidz2?
# zpool status
pool: tank
state: ONLINE
scrub: none requested
config:
  NAME        STATE     READ WRITE CKSUM
  tank        ONLINE       0     0     0
    raidz1    ONLINE       0     0     0
      ad4s2   ONLINE
On Sat, Jan 08, 2011 at 09:14:19AM +, Josef Karthauser wrote:
> On 7 Jan 2011, at 17:30, Artem Belevich wrote:
> > One way to get specific ratio for *your* pool would be to collect
> > record size statistics from your pool using "zdb -L -b " and
> > then calculate L2ARC:ARC ratio based on aver
On 7 Jan 2011, at 17:30, Artem Belevich wrote:
> One way to get specific ratio for *your* pool would be to collect
> record size statistics from your pool using "zdb -L -b " and
> then calculate L2ARC:ARC ratio based on average record size. I'm not
> sure, though whether L2ARC stores records in co
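The zdb run being described, sketched with a hypothetical pool name ("tank"):

```shell
# -b walks every block pointer and prints block-size statistics; -L skips
# the (slow) leak checking.  Expect heavy I/O; run it while the pool is idle.
zdb -L -b tank

# From the totals it prints, the average record size is roughly
#   (total logical bytes) / (block count)
# which is the number to feed into any L2ARC:ARC sizing estimate.
```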
On Fri, Jan 7, 2011 at 3:16 AM, Matthew D. Fuller wrote:
> On Thu, Jan 06, 2011 at 03:45:04PM +0200 I heard the voice of
> Daniel Kalchev, and lo! it spake thus:
>>
>> You should also know that having large L2ARC requires that you also
>> have larger ARC, because there are data pointers in the ARC
On 1/7/11 1:10 PM, Markiyan Kushnir wrote:
> 2011/1/7 Jeremy Chadwick :
>> On Fri, Jan 07, 2011 at 12:29:17PM +1100, Jean-Yves Avenard wrote:
>>> On 6 January 2011 22:26, Chris Forgeron wrote:
>>>> You know, these days I'm not as happy with SSD's for ZIL. I may blog about
>>>> some of the speed
2011/1/7 Jeremy Chadwick :
> On Fri, Jan 07, 2011 at 12:29:17PM +1100, Jean-Yves Avenard wrote:
>> On 6 January 2011 22:26, Chris Forgeron wrote:
>> > You know, these days I'm not as happy with SSD's for ZIL. I may blog about
>> > some of the speed results I've been getting over the last 6mo-1yr
On Thu, Jan 06, 2011 at 03:45:04PM +0200 I heard the voice of
Daniel Kalchev, and lo! it spake thus:
>
> You should also know that having large L2ARC requires that you also
> have larger ARC, because there are data pointers in the ARC that
> point to the L2ARC data. Someone will do good to the com
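A rough way to see why this head-room matters: each record cached in L2ARC pins a small header in the ARC. The ~200-byte figure below is an assumption (a ballpark for ZFS of this era, not an exact number), but it shows why a small average record size makes a big L2ARC expensive:

```shell
l2arc_gb=80    # hypothetical L2ARC device size
hdr_bytes=200  # assumed ARC header per L2ARC record (approximation)

overhead_mb() {
  # $1 = average record size in KB
  recs=$(( l2arc_gb * 1024 * 1024 / $1 ))     # records that fit in L2ARC
  echo $(( recs * hdr_bytes / 1024 / 1024 ))  # MB of ARC pinned by headers
}

echo "avg   8 KB records: $(overhead_mb 8) MB of ARC"   # 2000 MB
echo "avg 128 KB records: $(overhead_mb 128) MB of ARC" # 125 MB
```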
On Thu, Jan 06, 2011 at 08:20:00PM -0800, Jeremy Chadwick wrote:
> HyperDrive 5M (DDR2-based; US$299)
>
> 1) Product documentation claims that "the drive has built-in ECC so you
> can use non-ECC DDR2 DIMMs" -- this doesn't make sense to me from a
> technical pe
On Fri, Jan 07, 2011 at 01:40:52PM +1100, Jean-Yves Avenard wrote:
> On 7 January 2011 12:42, Jeremy Chadwick wrote:
>
> > DDRdrive:
> > http://www.ddrdrive.com/
> > http://www.engadget.com/2009/05/05/ddrdrives-ram-based-ssd-is-snappy-costly/
> >
> > ACard ANS-9010:
> > http://techreport.com/a
On 7 January 2011 12:42, Jeremy Chadwick wrote:
> DDRdrive:
> http://www.ddrdrive.com/
> http://www.engadget.com/2009/05/05/ddrdrives-ram-based-ssd-is-snappy-costly/
>
> ACard ANS-9010:
> http://techreport.com/articles.x/16255
>
> GC-RAMDISK (i-RAM) products:
> http://us.test.giga-byte.com/Pr
On Thu, Jan 06, 2011 at 05:42:49PM -0800, Jeremy Chadwick wrote:
> On Fri, Jan 07, 2011 at 12:29:17PM +1100, Jean-Yves Avenard wrote:
> > On 6 January 2011 22:26, Chris Forgeron wrote:
> > > You know, these days I'm not as happy with SSD's for ZIL. I may blog
> > > about some of the speed results
On Fri, Jan 07, 2011 at 12:29:17PM +1100, Jean-Yves Avenard wrote:
> On 6 January 2011 22:26, Chris Forgeron wrote:
> > You know, these days I'm not as happy with SSD's for ZIL. I may blog about
> > some of the speed results I've been getting over the last 6mo-1yr that I've
> > been running them
On 6 January 2011 22:26, Chris Forgeron wrote:
> You know, these days I'm not as happy with SSD's for ZIL. I may blog about
> some of the speed results I've been getting over the last 6mo-1yr that I've
> been running them with ZFS. I think people should be using hardware RAM
> drives. You can g
Hi
On 7 January 2011 00:45, Daniel Kalchev wrote:
> For pure storage, that is a place you send/store files, you don't really
> need the ZIL. You also need the L2ARC only if you read over and over again
> the same dataset, which is larger than the available ARC (ZFS cache memory).
> Both will not
On 6 January 2011 14:45, Daniel Kalchev wrote:
> For pure storage, that is a place you send/store files, you don't really
> need the ZIL. You also need the L2ARC only if you read over and over again
> the same dataset, which is larger than the available ARC (ZFS cache memory).
> Both will not be s
For pure storage, that is a place you send/store files, you don't really
need the ZIL. You also need the L2ARC only if you read over and over
again the same dataset, which is larger than the available ARC (ZFS
cache memory). Both will not be significant for 'backup server'
application, because
riot [mailto:m...@my.gd]
> Sent: Thursday, January 06, 2011 5:20 AM
> To: Artem Belevich
> Cc: Chris Forgeron; freebsd-stable@freebsd.org
> Subject: Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks
>
> You both make good points, thanks for the feedback :)
>
You both make good points, thanks for the feedback :)
I am more concerned about data protection than performance, so I suppose raidz2
is the best choice I have with such a small scale setup.
Now the question that remains is whether or not to use parts of the OS's SSD
for ZIL, cache, or both?
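Carving the log and cache out of a single boot SSD might look like the sketch below (ada0, the GPT labels and partition sizes are placeholders). Two caveats: on pool version 14 a log device cannot be removed once added, and an unmirrored log device risks losing the most recent synchronous writes if the SSD dies.

```shell
gpart add -t freebsd-zfs -l zil   -s 4G  ada0
gpart add -t freebsd-zfs -l cache -s 30G ada0

zpool add tank log gpt/zil
zpool add tank cache gpt/cache
```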
-----Original Message-----
From: owner-freebsd-sta...@freebsd.org
[mailto:owner-freebsd-sta...@freebsd.org] On Behalf Of Damien Fleuriot
Sent: Wednesday, January 05, 2011 5:55 PM
To: Chris Forgeron
Cc: freebsd-stable@freebsd.org
Subject: Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks
On Wed, Jan 5, 2011 at 1:55 PM, Damien Fleuriot wrote:
> Well actually...
>
> raidz2:
> - 7x 1.5 tb = 10.5tb
> - 2 parity drives
>
> raidz1:
> - 3x 1.5 tb = 4.5 tb
> - 4x 1.5 tb = 6 tb , total 10.5tb
> - 2 parity drives, split across the two separate raidz1 arrays
>
> So really, in both cases 2 different
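The figures in the quote are raw capacity; counting usable space instead (a sketch, taking the 1.5 TB disks at face value) shows the two layouts also tie once parity is subtracted:

```shell
# Usable (not raw) capacity, in tenths of a TB so the shell's integer
# arithmetic works; disks are 1.5 TB each.
disk=15

raidz2=$(( (7 - 2) * disk ))                       # 7 disks, 2 parity
two_raidz1=$(( (3 - 1) * disk + (4 - 1) * disk ))  # 3-disk + 4-disk raidz1

echo "raidz2 usable:    $((raidz2 / 10)).$((raidz2 % 10)) TB"         # 7.5 TB
echo "2x raidz1 usable: $((two_raidz1 / 10)).$((two_raidz1 % 10)) TB" # 7.5 TB
```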
it away on a boot drive.
sd-sta...@freebsd.org] On Behalf Of Damien Fleuriot
Sent: January-05-11 5:01 AM
To: Damien Fleuriot
Cc: freebsd-stable@freebsd.org
Subject: Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks
Hi again List,
I'm not so sure about using raidz2 anymore, I'm concerned for the performance.
Basically I have 9x 1.5T sata drives.
raidz2 and 2x raidz1 will provide the same capacity.
Are there any cons against using 2x raidz1 instead of 1x raidz2 ?
I plan on using a SSD drive for the OS, 40-
On 1/3/11 2:17 PM, Ivan Voras wrote:
> On 12/30/10 12:40, Damien Fleuriot wrote:
>
>> I am concerned that in the event a drive fails, I won't be able to
>> repair the disks in time before another actually fails.
>
> An old trick to avoid that is to buy drives from different series or
> manufact
On 12/30/10 12:40, Damien Fleuriot wrote:
I am concerned that in the event a drive fails, I won't be able to
repair the disks in time before another actually fails.
An old trick to avoid that is to buy drives from different series or
manufacturers (the theory is that identical drives tend to
On 2010-Dec-30 12:40:00 +0100, Damien Fleuriot wrote:
>What are the steps for properly removing my drives from the zraid1 pool
>and inserting them in the zraid2 pool ?
I've documented my experiences in migrating from a 3-way RAIDZ1 to a
6-way RAIDZ2 at http://bugs.au.freebsd.org/dokuwiki/doku.php
On Sun, 02 Jan 2011 15:31:49 +0100, Damien Fleuriot wrote:
On 1/1/11 6:28 PM, Jean-Yves Avenard wrote:
On 2 January 2011 02:11, Damien Fleuriot wrote:
I remember getting rather average performance on v14 but Jean-Yves
reported good performance boosts from upgrading to v15.
that was v28
On 1/1/11 6:28 PM, Jean-Yves Avenard wrote:
> On 2 January 2011 02:11, Damien Fleuriot wrote:
>
>> I remember getting rather average performance on v14 but Jean-Yves
>> reported good performance boosts from upgrading to v15.
>
> that was v28 :)
>
> saw no major difference between v14 and v15.
On 2 January 2011 02:11, Damien Fleuriot wrote:
> I remember getting rather average performance on v14 but Jean-Yves
> reported good performance boosts from upgrading to v15.
that was v28 :)
saw no major difference between v14 and v15.
JY
This is a home machine so I am afraid I won't have backups in place, if
only because I just won't have another machine with as much disk space.
The data is nothing critically important anyway, movies, music mostly.
My objective here is getting more used to ZFS and seeing how it performs.
On Thu, 30 Dec 2010 12:40:00 +0100, Damien Fleuriot wrote:
Hello list,
I currently have a ZFS zraid1 with 4x 1.5TB drives.
The system is a zfs-only FreeBSD 8.1 with zfs version 14.
I am concerned that in the event a drive fails, I won't be able to
repair the disks in time before another act
Hi,
I think it's enough to have 2 parity drives with raidz2: if a drive fails,
another two have to fail before you lose data. However, keep in mind that RAID
(in any form) is not a substitute for backups.
I have a setup where an 8TB RAID5 is the main backup and serves as a file
server for unimportant things AND