Neil,
many thanks for publishing this doc - it is exactly
what I was looking for!
On 7/9/07, Neil Perrin <[EMAIL PROTECTED]> wrote:
> Er, with attachment this time.
>
>
> > So I've attached the accepted proposal. There was (as expected) not
> > much discussion of this case as it was considered a
Mike Gerdts wrote:
> Perhaps a better approach is to create a pseudo file system that looks like:
>
> /pool
>    /@@
>    /@today
>    /@yesterday
>    /fs
>       /@@
>       /@2007-06-01
>    /otherfs
>       /@@
How is this d
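For comparison, ZFS already exposes each filesystem's snapshots under
<mountpoint>/.zfs/snapshot/<name>. A minimal sketch of how a path in the
proposed @-namespace could be rewritten onto that existing directory (the
function name and details here are illustrative assumptions, not part of
the proposal):

/* Sketch only: translate "/pool/fs/@2007-06-01/some/file" into the path
 * ZFS already serves, "/pool/fs/.zfs/snapshot/2007-06-01/some/file". */
#include <stdio.h>
#include <string.h>

static int map_at_path(const char *in, char *out, size_t outlen)
{
    const char *at = strstr(in, "/@");    /* first @-component */
    if (at == NULL)
        return -1;
    int n = snprintf(out, outlen, "%.*s/.zfs/snapshot/%s",
                     (int)(at - in), in, at + 2);
    return (n < 0 || (size_t)n >= outlen) ? -1 : 0;
}

int main(void)
{
    char buf[1024];
    if (map_at_path("/pool/fs/@2007-06-01/some/file", buf, sizeof(buf)) == 0)
        printf("%s\n", buf);
    return 0;
}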
On 7/11/07, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> Mike Gerdts wrote:
> > Perhaps a better approach is to create a pseudo file system that looks like:
> >
> > /pool
> >    /@@
> >    /@today
> >    /@yesterday
> >    /fs
> >       /@@
> >
> This "restore problem" is my key worry in deploying ZFS in the area
> where I see it as most beneficial. Another solution that would deal
> with the same problem is block-level deduplication. So far my queries
> in this area have been met with silence.
I must have missed your messages on dedup
Our main problem with TSM and ZFS is currently that there seems to be
no efficient way to do a disaster restore when the backup
resides on tape - due to the large number of filesystems/TSM filespaces.
The graphical client (dsmj) does not work at all, and with dsmc one
has to start a separate restore for every filespace.
Richard's blog analyzes MTTDL as a function of N+P+S:
http://blogs.sun.com/relling/entry/raid_recommendations_space_vs_mttdl
But to understand how to best utilize an array with a fixed number of
drives, I add the following constraints:
- N+P should follow ZFS best-practice rule of N={2,4,8} and P={1,2}
Resent as HTML to avoid line-wrapping:
Richard's blog analyzes MTTDL as a function of N+P+S:
http://blogs.sun.com/relling/entry/raid_recommendations_space_vs_mttdl
But to understand how to best utilize an array with a fixed number of
drives, I add the following constraints:
- N+P should follow ZFS best-practice rule of N={2,4,8} and P={1,2}
> While it's true that RAIDZ2 is /much/ safer than RAIDZ, it seems that
> /any/ RAIDZ configuration will outlive me and so I conclude that RAIDZ2
> is unnecessary in a practical sense... This conclusion surprises me
> given the amount of attention people give to double-parity solutions -
> what
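For a rough sense of the gap being weighed here, the textbook MTTDL
approximations for single- and double-parity groups (standard formulas,
not quoted from the thread) can be compared directly:

/* MTTDL(P=1) = MTBF^2 / (G*(G-1)*MTTR)
 * MTTDL(P=2) = MTBF^3 / (G*(G-1)*(G-2)*MTTR^2)
 * where G = N+P drives per set, MTBF and MTTR in hours. */
#include <stdio.h>

#define HOURS_PER_YEAR 8760.0

int main(void)
{
    double mtbf = 4 * HOURS_PER_YEAR;   /* 4-year drive MTBF */
    double mttr = 16.0;                 /* hours to replace + resilver */

    double g1 = 5.0;                    /* 4+1 RAIDZ  */
    double g2 = 6.0;                    /* 4+2 RAIDZ2 */
    double raidz  = mtbf * mtbf / (g1 * (g1 - 1) * mttr);
    double raidz2 = mtbf * mtbf * mtbf /
                    (g2 * (g2 - 1) * (g2 - 2) * mttr * mttr);

    printf("4+1 RAIDZ : %.0f years\n", raidz  / HOURS_PER_YEAR);
    printf("4+2 RAIDZ2: %.0f years\n", raidz2 / HOURS_PER_YEAR);
    return 0;
}

This prints roughly 438 years for 4+1 RAIDZ versus about 160,000 years for
4+2 RAIDZ2 - both far past a human lifetime, which is exactly the surprise
being described.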
All,
When I reformatted to HTML, I forgot to fix the code also - here is the
correct code:
#include <stdio.h>
#include <stdlib.h>

#define NUM_BAYS 24
#define DRIVE_SIZE_GB 300
#define MTBF_YEARS 4
#define MTTR_HOURS_NO_SPARE 16
#define MTTR_HOURS_SPARE 4

int main() {
    printf("\n");
    printf("%u bays w/ %u GB drives\n", NUM_BAYS, DRIVE_SIZE_GB);
    /* ... remainder of the program (config enumeration) truncated ... */
    return 0;
}
Darren Dunham wrote:
>> While it's true that RAIDZ2 is /much/ safer than RAIDZ, it seems that
>> /any/ RAIDZ configuration will outlive me and so I conclude that RAIDZ2
>> is unnecessary in a practical sense... This conclusion surprises me
>> given the amount of attention people give to double-p
> Are Netapp using some kind of block checksumming?
They provide an option for it; I'm not sure how often it's used.
> If Netapp doesn't do something like [ZFS checksums], that would
> explain why there's frequently trouble reconstructing, and point up a
> major ZFS advantage.
Actually, the real
#define DRIVE_SIZE_GB 300
#define MTBF_YEARS 2
#define MTTR_HOURS_NO_SPARE 48
#define MTTR_HOURS_SPARE 8
#define NUM_BAYS 10
- can have 3 (2+1) w/ 1 spares providing 1800 GB with MTTDL of 243.33 years
- can have 2 (4+1) w/ 0 spares providing 2400 GB with MTTDL of 18.25 years
- can have 1
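Those two lines follow from the single-parity approximation
MTTDL = MTBF^2 / (G*(G-1)*MTTR) for one set, divided by the number of sets
striped together. A short sketch (my reconstruction of the arithmetic, not
the actual program) reproduces them:

#include <stdio.h>

#define HOURS_PER_YEAR 8760.0

/* MTTDL in years of S single-parity sets of G drives each. */
static double mttdl_years(int sets, int g, double mtbf_y, double mttr_h)
{
    double mtbf_h = mtbf_y * HOURS_PER_YEAR;
    double per_set = mtbf_h * mtbf_h / (g * (g - 1) * mttr_h);
    return per_set / sets / HOURS_PER_YEAR;
}

int main(void)
{
    /* MTBF 2 years; MTTR 8h with a spare, 48h without. */
    printf("3 (2+1) w/ 1 spare : %.2f years\n", mttdl_years(3, 3, 2.0, 8.0));
    printf("2 (4+1) w/ 0 spares: %.2f years\n", mttdl_years(2, 5, 2.0, 48.0));
    return 0;
}

It prints 243.33 and 18.25 years, matching the output above.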
> Are Netapp using some kind of block checksumming? That seems to be one
> of the big wins of ZFS compared to ordinary filesystems -- I have a
> higher confidence that data I haven't accessed recently is still good.
> If Netapp doesn't do something like that, that would explain why there's
>
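For reference, the block checksumming being asked about works by storing a
checksum at write time and recomputing it on every read, so bit-rot on idle
data is caught the next time the block is touched (or by a scrub). A
simplified Fletcher-style sketch (not actual ZFS or NetApp code) of the
verify-on-read idea:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Fletcher-style running sums over 32-bit words, as ZFS's fletcher4 does
 * (simplified; real ZFS stores the sum in the parent block pointer). */
static void fletcher4(const uint32_t *buf, size_t words, uint64_t sum[4])
{
    uint64_t a = 0, b = 0, c = 0, d = 0;
    for (size_t i = 0; i < words; i++) {
        a += buf[i]; b += a; c += b; d += c;
    }
    sum[0] = a; sum[1] = b; sum[2] = c; sum[3] = d;
}

int main(void)
{
    uint32_t block[128] = { 1, 2, 3 };  /* stand-in for a 512-byte block  */
    uint64_t written[4], on_read[4];

    fletcher4(block, 128, written);     /* computed at write time         */
    block[5] ^= 0x1;                    /* silent on-media corruption     */
    fletcher4(block, 128, on_read);     /* recomputed on every read       */

    printf("block is %s\n",
           memcmp(written, on_read, sizeof(written)) ? "corrupt" : "ok");
    return 0;
}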
cool. comments below...
Kent Watsen wrote:
Richard's blog analyzes MTTDL as a function of N+P+S:
http://blogs.sun.com/relling/entry/raid_recommendations_space_vs_mttdl
But to understand how to best utilize an array with a fixed number of
drives, I add the following constraints:
- N+P should follow ZFS best-practice rule of N={2,4,8} and P={1,2}
>> But to understand how to best utilize an array with a fixed number of
>> drives, I add the following constraints:
>> - N+P should follow ZFS best-practice rule of N={2,4,8} and P={1,2}
>> - all sets in an array should be configured similarly
>> - the MTTDL for S sets is equal to (MTTDL for one set) / S
Kent Watsen wrote:
>> #define MTTR_HOURS_NO_SPARE 16
>>
>> I think this is optimistic :-)
>>
> Not really for me as the array is in my basement - so I assume that I'll
> swap in a drive when I get home from work ;)
>
Yes, it's interesting how the parameters for home setups differ from
"p
Kent Watsen wrote:
>
>>> But to understand how to best utilize an array with a fixed number of
>>> drives, I add the following constraints:
>>> - N+P should follow ZFS best-practice rule of N={2,4,8} and P={1,2}
>>> - all sets in an array should be configured similarly
>>> - the MTTDL for S sets is equal to (MTTDL for one set) / S
David Dyer-Bennet wrote:
> Kent Watsen wrote:
>
>>> #define MTTR_HOURS_NO_SPARE 16
>>>
>>> I think this is optimistic :-)
>>>
>>>
>> Not really for me as the array is in my basement - so I assume that I'll
>> swap in a drive when I get home from work ;)
>>
>>
> Yes, it's in
Adam Leventhal wrote:
> This is a great idea. I'd like to add a couple of suggestions:
>
> It might be interesting to focus on compression algorithms which are
> optimized for particular workloads and data types, an Oracle database for
> example.
NB. Oracle 11g has built-in compression. In genera
On 11-Jul-07, at 3:16 PM, David Dyer-Bennet wrote:
> Kent Watsen wrote:
>>> #define MTTR_HOURS_NO_SPARE 16
>>>
>>> I think this is optimistic :-)
>>>
>> Not really for me as the array is in my basement - so I assume
>> that I'll
>> swap in a drive when I get home from work ;)
>>
> Yes, it's in
Hi,
I've been struggling for several weeks now to get stable ZFS replication
using Solaris 10 11/06 (with current patches) and AVS 4.0. We tried it on
VMware first and ended up in kernel panics en masse (yes, we read Jim
Dunham's blog articles :-). Now we try on the real thing, two X4500
servers. Well