thanks - :)
On 3/9/2010 1:55 PM, Matt Cowger wrote:
> That's a very good point - in this particular case, there is no option to
> change the blocksize for the application.
>
>
I have no way of guessing the effects it would have, but is there a
reason that the filesystem blocks can't be a multiple of the application's block size?
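As a rough sketch of what aligning the filesystem block size to the application
would look like (the dataset name tank/appdata and the 8K size are hypothetical;
recordsize only applies to files written after the change):

  zfs get recordsize tank/appdata       # default is 128K
  zfs set recordsize=8k tank/appdata    # match the application's I/O size
  # existing files keep their old block size until they are rewritten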
On Sat, 24 Apr 2010, Brad wrote:
Hmm, so that means read requests are hitting / being fulfilled by the ARC cache?
Am I correct in assuming that because the ARC cache is fulfilling
read requests, the zpool and L2ARC are barely touched?
That is the state of nirvana you are searching for, no?
Bob
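One generic way to confirm this on a Solaris box is the ARC kstats (a sketch,
nothing here is specific to the poster's system):

  kstat -p zfs:0:arcstats:hits
  kstat -p zfs:0:arcstats:misses
  # hit ratio = hits / (hits + misses); if it is close to 1, the pool and
  # the L2ARC will see very little read traffic, which matches the observation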
Confirmed then that the issue was with the WD10EARS.
I swapped it out with the old one and things look a lot better:
pool: datos
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for
On Apr 24, 2010, at 2:17 PM, Ragnar Sundblad wrote:
> On 24 apr 2010, at 16.43, Richard Elling wrote:
>
>> I do not recall reaching that conclusion. I think the definition of the
>> problem
>> is what you continue to miss.
>
> Me too then, I think. Can you please enlighten us about the
> definition of the problem?
On Sat, Apr 24, 2010 at 2:17 PM, Ragnar Sundblad wrote:
>
> On 24 apr 2010, at 16.43, Richard Elling wrote:
>
> On 24 apr 2010, at 09.18, Brandon High wrote:
> > To answer the question you linked to:
> > .snapshot/snapname.0/a/b/c/d.txt from the top of the filesystem
> > a/.snapshot/snapname.0/b/
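For comparison, ZFS exposes snapshots only under a hidden .zfs directory at the
root of each filesystem rather than in every subdirectory; a sketch, where
tank/fs is a made-up filesystem name and snapname.0 is taken from the path above:

  ls /tank/fs/.zfs/snapshot/snapname.0/a/b/c/d.txt
  zfs set snapdir=visible tank/fs   # make .zfs appear in directory listings (default is hidden)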
Hmm, so that means read requests are hitting / being fulfilled by the ARC cache?
Am I correct in assuming that because the ARC cache is fulfilling read
requests, the zpool and L2ARC are barely touched?
On Sat, Apr 24, 2010 at 9:21 AM, Peter Tripp wrote:
> Can someone with a stronger understanding of ZFS tell me why a degraded
> RaidZ2 (minus one disk) is less efficient than RaidZ1? (Besides the fact
> that your pools are always reported as degraded.) I guess the same would
> apply with RaidZ2 vs RaidZ3 - 1 disk.
On 24 apr 2010, at 16.43, Richard Elling wrote:
> I do not recall reaching that conclusion. I think the definition of the
> problem
> is what you continue to miss.
Me too then, I think. Can you please enlighten us about the
definition of the problem?
>> The .snapshot directories do precisely
- "Peter Tripp" wrote:
> Can someone with a stronger understanding of ZFS tell me why a
> degraded RaidZ2 (minus one disk) is less efficient than RaidZ1?
> (Besides the fact that your pools are always reported as degraded.) I
> guess the same would apply with RaidZ2 vs RaidZ3 - 1 disk.
A deg
One of my pools (backup pool) has a disk which I suspect may be going south. I
have a replacement disk of the same size. The original pool was using one of
the partitions towards the end of the disk. I want to move the partition to the
beginning of the new disk.
Does ZFS store/use p
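A sketch of one way to do the move, with made-up pool/device/slice names: ZFS
identifies the vdev by the label it writes inside the slice, not by where the
slice sits on the disk, so replacing into a slice at the start of the new disk
is just an ordinary replace:

  format                                   # lay out the new slice at the beginning of the new disk
  zpool replace backup c1t4d0s6 c1t5d0s0   # resilver from the old slice onto the new one
  zpool status backup                      # keep the old disk attached until the resilver completes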
Thanks, Roy, for your reply.
I actually waited a little more than an hour, but I'm still going to wait a
little longer, following your suggestion and a little hunch of mine. I just
found out that this new WD10EARS is one of the new 4k disks. I believed that
only the 2TB models were 4k.
See:
BEFORE
ZFS first does a scan of indices and such, which requires lots of seeks. After
that, the resilvering starts. I guess if you give it an hour, it'll be done
roy
- "Leandro Vanden Bosch" wrote:
Hello everyone,
As one of the steps of improving my ZFS home fileserver (snv_134) I wanted t
Hello everyone,
As one of the steps of improving my ZFS home fileserver (snv_134) I wanted
to replace a 1TB disk with a newer one of the same vendor/model/size because
this new one has 64MB cache vs. 16MB in the previous one.
The removed disk will be used for backups, so I thought it's better off t
On Sat, 24 Apr 2010, Brad wrote:
We're running Solaris 10 10/09 with Oracle 10g - in our previous
configs data was clearly shown in the L2ARC and ZIL, but then again
we didn't have 48GB (16GB in previous tests) and a JBOD. Thoughts?
Clearly this is a read-optimized system. Sweet! My primar
Hi all
I've been playing a little with dedup, and it seems it needs a truckload of
memory, something I don't have on my test systems. Does anyone have performance
data for large (20TB+) systems with dedup?
roy
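A back-of-the-envelope sketch, using the commonly quoted ballpark of roughly 320
bytes of in-core DDT per unique block (the exact figure varies with the build):

  20 TB / 128 KB default recordsize  ≈  160 million unique blocks
  160 million x ~320 bytes           ≈  50 GB of dedup table

  zdb -S tank    # simulate dedup on an existing pool before enabling it
  zdb -DD tank   # DDT statistics/histogram once dedup is actually in use

so on a 20TB+ pool the DDT has to fit in RAM (or at least L2ARC) for write
performance to stay reasonable.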
I'm not seeing any data being populated in the L2ARC or ZIL SSDs with a J4500
(48 x 500GB SATA drives).
# zpool iostat -v
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
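One generic thing to check (pool name below is a placeholder): without an
interval argument, zpool iostat reports averages since boot, which can hide
whether the log and cache devices are being touched at all right now; sampling
while the workload runs is more telling:

  zpool iostat -v tank 5             # first report is since boot, later ones are 5-second deltas
  kstat -p zfs:0:arcstats:l2_size    # how much data the L2ARC currently holds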
On Sat, 24 Apr 2010, aneip wrote:
What I'm trying to avoid is losing all of the data in the pool, even the
data on the healthy drives, if one of the disks fails. I'm not sure whether
I can simply pull out one drive and only lose the files located on the
faulty drive. The files which are on the other drives w
Thanks for all the answers; I'm still trying to read slowly and understand. Pardon
my English, as it is my second language.
I believe I owe some more explanation.
The system is actually FreeNAS, which is installed on a separate disk.
The 3 disks - 500GB, 1TB and 1.5TB - are for data only.
The first pool will be r
I had an idea - could someone please tell me why it's wrong? (I feel like it has
to be.)
A RaidZ-2 pool with one missing disk offers the same failure resilience as a
healthy RaidZ1 pool (no data loss when one disk fails). I had initially wanted
to do a single-parity raidz pool (5 disks), but after a
On Sat, 24 Apr 2010, devsk wrote:
This is really painful. My source was a backup of my folders which I
wanted as filesystems in the RAIDZ setup. So, I copied the source to
the new pool and wanted to be able to move those folders to
different filesystems within the RAIDZ. But it's turning out to
On Apr 24, 2010, at 5:27 AM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>>
>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>> boun...@opensolaris.org] On Behalf Of Edward
Search the archives. This dead horse gets beaten about every 6-9 months or so.
-- richard
On Apr 24, 2010, at 7:37 AM, devsk wrote:
> Is there anything anybody has to advise? Will I be better off copying each
> folder into its own FS from source pool? How about removal of the stuff
> that's no
Is there anything anybody has to advise? Will I be better off copying each
folder into its own FS from the source pool? How about removal of the stuff that's
now in this FS? How long will the removal of 770GB of data containing 6 million
files take?
Cost 1: copy folders into respective FS + remove alrea
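A minimal sketch of the copy-per-folder route, with made-up pool/path names and
assuming rsync is available (cp -rp works too). Note that a mv between datasets
is internally a copy plus a delete anyway, so there is no cheaper rename-style
shortcut:

  zfs create tank/photos                        # one new filesystem per top-level folder
  rsync -a /tank/backup/photos/ /tank/photos/   # copy the data into the new dataset
  rm -rf /tank/backup/photos                    # only after verifying the copy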
On 24/04/2010 13:51, Edward Ned Harvey wrote:
> But what you might not know: if any pool fails, the system will crash.
This actually depends on the failmode property setting on your pools.
The default is wait, but it can also be set to continue or panic - see the
zpool(1M) man page for more details.
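For reference, checking and changing it is a per-pool one-liner (the pool name
is a placeholder):

  zpool get failmode tank
  zpool set failmode=continue tank   # or wait / panic; only matters on catastrophic pool failure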
This is really painful. My source was a backup of my folders which I wanted as
filesystems in the RAIDZ setup. So, I copied the source to the new pool and
wanted to be able to move those folders to different filesystems within the
RAIDZ. But it's turning out to be a brand new copy, and since it's a
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of aneip
>
> I'm really new to ZFS and also to RAID.
>
> I have 3 hard disks: 500GB, 1TB, 1.5TB.
>
> On each HD I want to create a 150GB partition + the remaining space.
>
> I want to create a raidz for the 3x150GB
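A sketch of the 3x150GB part once a matching 150GB partition exists on each
drive (the device names are placeholders and will differ on FreeNAS/FreeBSD):

  zpool create tank raidz disk0p1 disk1p1 disk2p1
  # three 150GB slices -> roughly 300GB usable, survives the loss of any one slice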
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Haudy Kazemi
>
> Your remaining space can be configured as slices. These slices can be
> added directly to a second pool without any redundancy. If any drive
> fails, that whole non-redundant
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
> >
> > Actually, I find this very surprising:
On 21/04/2010 18:37, Ben Rockwood wrote:
You've made an excellent case for benchmarking and where it's useful,
but what I'm asking for on this thread is for folks to share the
research they've done with as much specificity as possible for research
purposes. :)
However you can also find so
On Fri, Apr 23, 2010 at 7:17 PM, Edward Ned Harvey
wrote:
> As the thread unfolds, it appears, although NetApp may sometimes have some
> problems with "mv" directories ... This is evidence that appears to be
> weakening ... Sometimes they do precisely what you would want them to do.
Richard and I