On Wed, Jun 20, 2007 at 12:03:02PM -0400, Will Murnane wrote:
> Yes. 2 disks means when one fails, you've still got an extra. In
> raid 5 boxes, it's not uncommon with large arrays for one disk to die,
> and when it's replaced, the stress on the other disks causes another
> failure. Then the arr
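Will's point about a second disk dying mid-rebuild can be put into rough numbers. A back-of-envelope sketch (the 5% annual failure rate, 24-hour resilver window, and 16-disk raidz are all assumed figures for illustration, not from the thread):

```shell
# Probability that at least one of the 15 surviving disks fails while a
# 24-hour resilver is running, assuming a 5% annual failure rate per disk.
awk 'BEGIN {
  afr   = 0.05               # assumed annual failure rate per disk
  n     = 15                 # survivors in a 16-disk raidz after one loss
  hours = 24                 # assumed resilver time
  p1 = afr * hours / (365 * 24)     # per-disk failure chance in the window
  p  = 1 - (1 - p1) ^ n             # chance any survivor fails in the window
  printf "p(second failure during rebuild) ~ %.4f\n", p
}'
```

Roughly 0.2% per rebuild with these made-up inputs; the point is that the risk scales with the number of surviving disks and the resilver time, which is why wide single-parity groups draw this criticism.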
Oliver Schinagl wrote:
So basically, what you are saying is that on FreeBSD there's no performance
issue, whereas on Solaris there can be (if write caches aren't enabled)?
Solaris plays it safe by default. You can, of course, override that safety.
Whether it is a performance win seems to be the sub
mike wrote:
I would be interested in hearing if there are any other configuration
options to squeeze the most space out of the drives. I have no issue
with powering down to replace a bad drive, and I expect that, at most,
one will fail at a time.
This is what is known as "famous last words".
On Wed, Jun 20, 2007 at 09:48:08AM -0700, Eric Schrock wrote:
> On Wed, Jun 20, 2007 at 12:45:52PM +0200, Pawel Jakub Dawidek wrote:
> >
> > It would be nice not to EFI-label disks, though :) Currently there is a
> > problem with this: a zpool created on Solaris is not recognized by
> > FreeBSD, because
On Wed, 2007-06-20 at 12:45 +0200, Pawel Jakub Dawidek wrote:
> It would be nice not to EFI-label disks, though :) Currently there is a
> problem with this: a zpool created on Solaris is not recognized by
> FreeBSD, because FreeBSD claims the GPT label is corrupted.
Hmm. I'd think the right answer here is
mike wrote:
> On 6/20/07, Constantin Gonzalez <[EMAIL PROTECTED]> wrote:
>
>> One disk can be one vdev.
>> A 1+1 mirror can be a vdev, too.
>> An n+1 or n+2 RAID-Z (RAID-Z2) set can be a vdev, too.
>>
>> - Then you concatenate vdevs to create a pool. Pools can be extended by
>> adding more vdevs.
On Wed, Jun 20, 2007 at 12:45:52PM +0200, Pawel Jakub Dawidek wrote:
>
> It would be nice not to EFI-label disks, though :) Currently there is a
> problem with this: a zpool created on Solaris is not recognized by
> FreeBSD, because FreeBSD claims the GPT label is corrupted. On the other
> hand, creating ZF
Pawel Jakub Dawidek wrote:
> On Wed, Jun 20, 2007 at 01:45:29PM +0200, Oliver Schinagl wrote:
>
>> Pawel Jakub Dawidek wrote:
>>
>>> On Tue, Jun 19, 2007 at 07:52:28PM -0700, Richard Elling wrote:
>>>
>>> On that note, I have a different first question to start with. I
On Wed, Jun 20, 2007 at 01:45:29PM +0200, Oliver Schinagl wrote:
>
>
> Pawel Jakub Dawidek wrote:
> > On Tue, Jun 19, 2007 at 07:52:28PM -0700, Richard Elling wrote:
> >
> >>> On that note, I have a different first question to start with. I
> >>> personally am a Linux fanboy, and would love to see/use ZFS on Linux.
On 6/20/07, mike <[EMAIL PROTECTED]> wrote:
On 6/20/07, Paul Fisher <[EMAIL PROTECTED]> wrote:
> I would not risk raidz on that many disks. A nice compromise may be 14+2
> raidz2, which should perform nicely for your workload and be pretty reliable
> when the disks start to fail.
Would anyone on
On 20 June, 2007 - Oliver Schinagl sent me these 1,9K bytes:
> Also what about full disk vs full partition, e.g. make 1 partition to
> span the entire disk vs using the entire disk.
> Is there any significant performance penalty? (So not having a disk
> split into 2 partitions, but 1 disk, 1 parti
On 6/20/07, Paul Fisher <[EMAIL PROTECTED]> wrote:
I would not risk raidz on that many disks. A nice compromise may be 14+2
raidz2, which should perform nicely for your workload and be pretty reliable
when the disks start to fail.
Would anyone on the list not recommend this setup? I could li
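The space cost of Paul's 14+2 suggestion over Mike's preferred 15+1 is small. A quick sketch (the 500 GB drive size is my assumption, not from the thread):

```shell
# Usable capacity of a 16-disk group under single vs. double parity.
disk_gb=500
echo "15+1 raidz : $(( 15 * disk_gb )) GB usable, survives any 1 disk failure"
echo "14+2 raidz2: $(( 14 * disk_gb )) GB usable, survives any 2 disk failures"
```

One drive's worth of space buys tolerance of a second failure during the resilver window, which is exactly the scenario raised earlier in the thread.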
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of mike
> Sent: Wednesday, June 20, 2007 9:30 AM
>
> I would prefer something like 15+1 :) I want ZFS to be able to detect
> and correct errors, but I do not need to squeeze all the performance
> out of it (I'll be using it as a home
Hi Mike,
> If I was to plan for a 16 disk ZFS-based system, you would probably
> suggest I configure it as something like 5+1, 4+1, 4+1, all raid-z
> (I don't need the double parity concept)
>
> I would prefer something like 15+1 :) I want ZFS to be able to detect
> and correct errors, but I d
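For the trade-off Mike is weighing, the capacity difference between the suggested three-vdev layout and a single wide group is easy to tally (again assuming 16 equal 500 GB disks, my figure):

```shell
# Data disks: 5+4+4 = 13 in the three-vdev layout vs. 15 in one wide raidz.
disk_gb=500
echo "5+1, 4+1, 4+1 raidz: $(( (5 + 4 + 4) * disk_gb )) GB usable, 3 parity disks"
echo "15+1 raidz         : $(( 15 * disk_gb )) GB usable, 1 parity disk"
```

The wide layout recovers two disks of space, at the cost of a single parity disk protecting all 16 spindles at once.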
On 6/20/07, Constantin Gonzalez <[EMAIL PROTECTED]> wrote:
One disk can be one vdev.
A 1+1 mirror can be a vdev, too.
An n+1 or n+2 RAID-Z (RAID-Z2) set can be a vdev, too.
- Then you concatenate vdevs to create a pool. Pools can be extended by
adding more vdevs.
- Then you create ZFS file systems.
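Constantin's outline maps directly onto zpool commands. A sketch (the pool name `tank` and the cXtYdZ device names are placeholders, not from the thread; don't point these at disks with data on them):

```shell
# One disk as a vdev:
zpool create tank c0t0d0

# A 1+1 mirror as a vdev:
zpool create tank mirror c0t0d0 c0t1d0

# An n+2 RAID-Z2 set as a vdev (here 4+2):
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0

# Extend an existing pool by concatenating another vdev:
zpool add tank mirror c1t0d0 c1t1d0

# Then create ZFS file systems inside the pool:
zfs create tank/home
```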
Hi,
> How are paired mirrors more flexible?
Well, I'm talking about a small home system. If the pool gets full, the
way to expand with RAID-Z would be to add 3+ disks (typically 4-5).
With mirror only, you just add two. So in my case it's just about
the granularity of expansion.
The reasoning is
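That granularity difference shows up directly in the add commands; a sketch with placeholder device names:

```shell
# Growing a mirror-based pool takes only two new disks:
zpool add tank mirror c2t0d0 c2t1d0

# Growing a raidz-based pool takes a whole new group, e.g. 3+1:
zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0
```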
Constantin Gonzalez wrote:
> Hi,
>
>
>> I'm quite interested in ZFS, like everybody else I suppose, and am about
>> to install FBSD with ZFS.
>>
>
> welcome to ZFS!
>
>
>> Anyway, back to business :)
>> I have a whole bunch of different-sized disks/speeds. E.g. 3 300 GB disks
>> @ 40 MB/s, a 320 GB disk @ 60 MB/s, 3 120 GB disks @ 50 MB/s and so on.
Pawel Jakub Dawidek wrote:
> On Tue, Jun 19, 2007 at 07:52:28PM -0700, Richard Elling wrote:
>
>>> On that note, I have a different first question to start with. I
>>> personally am a Linux fanboy, and would love to see/use ZFS on Linux. I
>>> assume that I can use those ZFS disks later with a
On Tue, Jun 19, 2007 at 07:52:28PM -0700, Richard Elling wrote:
> >On that note, I have a different first question to start with. I
> >personally am a Linux fanboy, and would love to see/use ZFS on Linux. I
> >assume that I can use those ZFS disks later with any OS that can
> >work/recognizes ZFS c
Hi,
> I'm quite interested in ZFS, like everybody else I suppose, and am about
> to install FBSD with ZFS.
welcome to ZFS!
> Anyway, back to business :)
> I have a whole bunch of different-sized disks/speeds. E.g. 3 300 GB disks
> @ 40 MB/s, a 320 GB disk @ 60 MB/s, 3 120 GB disks @ 50 MB/s and so on.
>
Oliver Schinagl wrote:
Hello,
I'm quite interested in ZFS, like everybody else I suppose, and am about
to install FBSD with ZFS.
cool.
On that note, I have a different first question to start with. I
personally am a Linux fanboy, and would love to see/use ZFS on Linux. I
assume that I can us