Rebuild time is not a concern for me. The concern with rebuilding was the
stress it puts on the disks for an extended period of time (increasing the
chances of another disk failure). The percentage of data used doesn't matter,
as the system will try to get the rebuild done at maximum speed, thus creating
the mentioned stress.
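If the worry is resilver running flat out, the I/O rate can in principle be
dialled back rather than left at maximum. A minimal sketch, assuming the
dsl_scan throttle tunables present in OpenSolaris builds of roughly this
vintage (the names and values below are assumptions; check your build before
relying on them):

   # Live tuning via mdb; a larger zfs_resilver_delay injects more idle time
   # between resilver I/Os when the pool is busy, at the cost of a longer resilver.
   echo "zfs_resilver_delay/W0t4" | mdb -kw
   # Lower the minimum time per txg that is dedicated to resilver I/O:
   echo "zfs_resilver_min_time_ms/W0t1000" | mdb -kw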
> 3. Should I consider using dedup if my server has only 8Gb of RAM? Or,
> will that not be enough to hold the DDT? In which case, should I add
> L2ARC / ZIL or am I better to just skip using dedup on a home file
> server?
As Cindy said, skip dedup for now. It's not stable (enough). Try to destroy a
large deduped dataset on a box with that little RAM and you'll see why.
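If you do want a feel for whether 8 GB would cope, the usual back-of-the-envelope
check is the size of the dedup table; a rough sketch with assumed numbers (the
pool name "tank" is hypothetical):

   # Rule of thumb: very roughly 320 bytes of DDT per unique block. With a
   # 128 KiB recordsize, 4 TB of unique data is ~32 million blocks, i.e. about
   # 10 GiB of dedup table, already more than 8 GB of RAM.
   # zdb can simulate dedup on existing data and print the would-be DDT histogram:
   zdb -S tank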
On 08/09/2010 00:41, Scott Meilicke wrote:
Craig,
3. I do not think you will get much dedupe on video, music and photos. I would
not bother. If you really wanted to know at some later stage, you could create
a new file system, enable dedupe, and copy your data (or a subset) into it just
to see.
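A minimal way to run that experiment, with hypothetical dataset names:

   # Throwaway filesystem with dedup enabled; copy a representative subset in:
   zfs create -o dedup=on tank/dedup-test
   cp -rp /tank/media/photos /tank/dedup-test/
   # The DEDUP column shows the ratio actually achieved:
   zpool list tank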
The 9/10 Update appears to have been released. Some of the more noticeable
ZFS stuff that made it in:
> * Triple parity RAID-Z (raidz3) In this release, a redundant RAID-Z
> configuration can now have either single-parity, double-parity, or
> triple-parity, which means that one, two, or three device failures,
> respectively, can be sustained without data loss.
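For anyone who has not tried it, creating a triple-parity vdev only differs
from raidz/raidz2 in the keyword; device names below are made up:

   # A single raidz3 vdev that survives any three simultaneous disk failures:
   zpool create tank raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0
   zpool status tank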
> From: pantz...@gmail.com [mailto:pantz...@gmail.com] On Behalf Of
> Mattias Pantzare
>
> It
> is about 1 vdev with 12 disks or 2 vdevs with 6 disks. If you have 2
> vdevs you have to read half the data compared to 1 vdev to resilver a
> disk.
Let's suppose you have 1T of data. You have a 12-disk raidz vdev in one case,
or two 6-disk raidz vdevs in the other (the two layouts are written out below).
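For concreteness, here are the two layouts being compared, with made-up device
names and an arbitrarily chosen parity level. With the same 1T of data, the
damaged vdev in the 2x6 layout holds roughly half as much, so roughly half as
much has to be read to resilver a disk, bus bottlenecks aside:

   # One wide vdev:
   zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
                            c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0
   # Two narrower vdevs in the same pool:
   zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
                     raidz2 c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0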
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of David Magda
>
> The 9/10 Update appears to have been released. Some of the more
> noticeable
> ZFS stuff that made it in:
>
> More at:
>
> http://docs.sun.com/app/docs/doc/821-1840/gijtg
Awesome. Now when is dedup going to be ready? ;-)
On 08 September, 2010 - Edward Ned Harvey sent me these 0,6K bytes:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of David Magda
> >
> > The 9/10 Update appears to have been released. Some of the more
> > noticeable
> > ZFS stuff that made it in:
On Wed, September 8, 2010 09:46, Tomas Ögren wrote:
> On 08 September, 2010 - Edward Ned Harvey sent me these 0,6K bytes:
>
>> Now when is dedup going to be ready? ;-)
>
> It's not in U9 at least:
> ...
> 16 stmf property support
> 17 Triple-parity RAID-Z
> 18 Snapshot user holds
> 19 Log device removal
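That list looks like the output of zpool upgrade -v, which is the quickest way
to check what a given release supports; deduplication corresponds to pool
version 21:

   # Prints every pool version this system supports, one feature per version:
   zpool upgrade -v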
On Wed, Sep 8, 2010 at 15:27, Edward Ned Harvey wrote:
>> From: pantz...@gmail.com [mailto:pantz...@gmail.com] On Behalf Of
>> Mattias Pantzare
>>
>> It
>> is about 1 vdev with 12 disks or 2 vdevs with 6 disks. If you have 2
>> vdevs you have to read half the data compared to 1 vdev to resilver a
>> disk.
On 9/8/10 9:32 AM -0400 Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of David Magda
The 9/10 Update appears to have been released. Some of the more
noticeable
ZFS stuff that made it in:
More at:
http://docs.sun.com/app/docs/doc/821-1840/gijtg
For those more audio-visually inclined, there's a series of short videos
on http://blogs.sun.com/video/ with George Wilson discussing what's new.
Frank Cusack wrote:
On 9/8/10 9:32 AM -0400 Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of David Magda
Hi!
I searched the web for hours, trying to solve the NFS/ZFS low performance issue
on my just setup OSOL box (snv134). The problem is discussed in many threads
but I've found no solution.
On an NFS-shared volume, I get write performance of 3.5 MB/sec (!!); read
performance is about 50 MB/sec, which is acceptable.
On Wed, Sep 08, 2010 at 01:20:58PM -0700, Dr. Martin Mundschenk wrote:
> Hi!
>
> I searched the web for hours, trying to solve the NFS/ZFS low
> performance issue on my just setup OSOL box (snv134). The problem is
> discussed in many threads but I've found no solution.
>
> On a nfs shared volume
Hello -
After waiting an hour or so for opensolaris, I had forgotten what username I
chose, so I booted into Windows to see if I could find it; no luck.
How can I figure it out?
On 09/ 9/10 11:37 AM, Rather not say wrote:
Hello -
After waiting an hour or so for opensolaris, I had forgotten what username I
chose, so I booted into Windows to see if I could find it; no luck.
How can I figure it out?
Not by asking here! The opensolaris-help list is more appropriate.
Boot failsafe or from the live CD and check the local accounts.
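If booting from the live CD, a sketch of one way to do it; the pool and
boot-environment names below are the installer defaults and may differ on your
system:

   # Import the root pool under an alternate root, mount the boot environment,
   # and read the local account names from /etc/passwd:
   zpool import -f -R /a rpool
   zfs mount rpool/ROOT/opensolaris
   awk -F: '{ print $1 }' /a/etc/passwd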
Hi all:
I'm a new guy who has only been using ZFS for half a year. We are using
Nexenta in a corporate pilot environment. These days, when I was trying to move
around 4TB of data from an old pool (4*2TB raidz) to a new pool (11*2TB raidz2),
it seemed it would never finish successfully.
1. I used cp first.
On 09/ 9/10 01:14 PM, Fei Xu wrote:
Hi all:
I'm a new guy who is just started ZFS for half a year. We are using
Nexenta in corporate pilot environment. these days, when I was trying to move
around 4TB data from an old pool(4*2TB raidz) to new pool (11*2TB raidz2), it
seems will never e
Thank you Ian. I've rebuilt the pool as 9*2TB raidz2 and started the zfs send
command. Results will come out after about 3 hours.
Thanks,
fei
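For reference, the usual shape of such a move between pools is a recursive
snapshot plus send/receive; the dataset names below are made up. Unlike cp,
this preserves properties and snapshots and restarts cleanly from a known
snapshot:

   # Snapshot everything under the old pool's data tree and replicate it:
   zfs snapshot -r oldpool/data@migrate
   zfs send -R oldpool/data@migrate | zfs receive -F newpool/data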
Now it gets extremely slow at around 400G sent.
The first iostat result was captured when the send operation started:
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
sh001a      37.6G   16.
On 09/ 9/10 02:42 PM, Fei Xu wrote:
now it gets extremly slow at around 400G sent.
first iostat result is captured when the send operation starts.
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
>
> Have you got dedup enabled? Note the read bandwidth is
> much higher.
>
> --
> Ian.
>
No, dedup is not enabled, since it's still not stable enough even for a test
environment.
Here is a JPG of the read/write indicator. The RED line is read and the GREEN
line is write.
You can see, because the destination
I dug deeper into it and might have found some useful information.
I attached an X25 SSD for the ZIL to see if it helps, but no luck.
I ran iostat -xnz for more details and got the interesting result below (maybe
too long).
Some explanation:
1. c2d0 is the SSD for the ZIL
2. c0t3d0, c0t20d0, c0t21d0, c0t22d0 are the source pool disks
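For anyone following along, the two views worth capturing side by side are the
device-level and the pool-level statistics (pool name taken from the earlier
output):

   # Per-device latencies and queue depths, 5-second samples, idle devices hidden:
   iostat -xnz 5
   # Per-vdev view of the same pool, so a single slow disk stands out:
   zpool iostat -v sh001a 5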
> Hi Craig,
> Don't use the p* devices for your storage pools. They
> represent the larger fdisk partition.
>
> Use the d* devices instead, like this example below:
Good advice, something I wondered about too.
However, aside from my having guessed right once (I think...), I have no clue
why this is.
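In short: the p* names are the fdisk (BIOS) partitions and the d* names refer
to the whole disk (or its Solaris slices), and handing ZFS the whole disk is
the usual recommendation. A small illustration with made-up device names:

   # Preferred: give ZFS the whole disks
   zpool create tank mirror c1t0d0 c1t1d0
   # rather than the fdisk partition devices:
   #   zpool create tank mirror c1t0d0p0 c1t1d0p0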
On Wed, Sep 8, 2010 at 6:27 AM, Edward Ned Harvey wrote:
> Both of the above situations resilver in equal time, unless there is a bus
> bottleneck. 21 disks in a single raidz3 will resilver just as fast as 7
> disks in a raidz1, as long as you are avoiding the bus bottleneck. But 21
> disks in a
On 09.09.2010 at 07:00, zfs-discuss-requ...@opensolaris.org wrote:
> What's the write workload like? You could try disabling the ZIL to see
> if that makes a difference. If it does, the addition of an SSD-based
> ZIL / slog device would most certainly help.
>
> Maybe you could describe the ma
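For the ZIL experiment itself, a sketch under the following assumptions: the
3.5 MB/sec write figure is the classic signature of NFS synchronous writes
landing on the pool disks, snv134 predates the per-dataset sync property, and
the old global zil_disable tunable still applies. This disables the ZIL for
every pool on the box, so use it only as a test, never on data you care about;
the dataset and device names are hypothetical:

   # Disable the ZIL (test only!), remount the shared filesystem so the change
   # takes effect, and re-run the NFS write test:
   echo "zil_disable/W0t1" | mdb -kw
   zfs umount tank/nfsshare && zfs mount tank/nfsshare
   # If writes jump, a dedicated slog device is the real fix, e.g.:
   #   zpool add tank log c3d0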