Erik Trimble wrote:
> rsync is indeed slower than star; so far as I can tell, this is due
> almost exclusively to the fact that rsync needs to build an in-memory
> table of all work being done *before* it starts to copy. After that, it
> copies at about the same rate as star (my observations).
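For anyone reproducing that comparison, the two invocations being discussed look roughly like this; the paths and the rsync flags are placeholders of mine (check star(1) for the exact copy-mode syntax):

  star -copy -p -no-fsync -C /src . /dst   # star's copy mode, skipping the per-file fsync
  rsync -aH /src/ /dst/                    # rsync scans the tree and builds its file list before copying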
Ian Collins wrote:
> >> *ufsrestore works fine on ZFS filesystems (although I haven't tried it
> >> with any POSIX ACLs on the original ufs filesystem, which would probably
> >> simply get lost).
> > star -copy -no-fsync is typically 30% faster than ufsdump | ufsrestore.
> >
> Does it preserve ACLs?
> From: Garrett D'Amore [mailto:garr...@nexenta.com]
>
> We have customers using dedup with lots of vm images... in one extreme
> case they are getting dedup ratios of over 200:1!
I assume you're talking about a situation where there is an initial VM image,
and then to clone the machine, the customers copy the VM, correct?
> From: Erik Trimble [mailto:erik.trim...@oracle.com]
>
> Using the standard c_max value of 80%, remember that this is 80% of the
> TOTAL system RAM, including that RAM normally dedicated to other
> purposes. So long as the total amount of RAM you expect to dedicate to
> ARC usage (for all ZFS us
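To see where you stand relative to that ceiling, something like the following works (the 4 GB cap below is only an illustrative value):

  # current ARC size and its maximum (c_max), in bytes
  kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max
  # to cap the ARC at 4 GB, add this line to /etc/system and reboot
  set zfs:zfs_arc_max = 0x100000000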
Hi,
On 05/ 5/11 03:02 PM, Edward Ned Harvey wrote:
From: Garrett D'Amore [mailto:garr...@nexenta.com]
We have customers using dedup with lots of vm images... in one extreme
case they are getting dedup ratios of over 200:1!
I assume you're talking about a situation where there is an initial VM image,
and then to clone the machine, the customers copy the VM, correct?
On Thu, 2011-05-05 at 09:02 -0400, Edward Ned Harvey wrote:
> > From: Garrett D'Amore [mailto:garr...@nexenta.com]
> >
> > We have customers using dedup with lots of vm images... in one extreme
> > case they are getting dedup ratios of over 200:1!
>
> I assume you're talking about a situation where there is an initial VM image,
> and then to clone the machine, the customers copy the VM, correct?
I assume you're talking about a situation where there is an initial VM image,
and then to clone the machine, the customers copy the VM, correct?
If that is correct, have you considered ZFS cloning instead?
When I said dedup wasn't good for VMs, what I'm talking about is: If there is data
in
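For reference, the clone-based approach suggested above looks something like this (dataset names are hypothetical):

  zfs snapshot tank/vm/golden@deploy
  zfs clone tank/vm/golden@deploy tank/vm/guest01   # shares all unmodified blocks with the golden image
  zfs clone tank/vm/golden@deploy tank/vm/guest02   # only changed blocks consume new space, no DDT involved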
so there's an ARC entry referencing each individual DDT entry in the L2ARC?! I
had made the assumption that DDT entries would be grouped into at least minimum
block-sized groups (8k?), which would have led to a much more reasonable ARC
requirement.
Seems like a bad design to me, which leads to
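To put rough numbers on that concern: the per-entry sizes below are assumptions of mine (a few hundred bytes each), used only to show the scale, not exact figures:

  # assume 1 TB of unique data at 128K average block size,
  # ~376 bytes per DDT entry in L2ARC, ~176 bytes of ARC header per L2ARC buffer
  entries=$(( 1024 * 1024 * 1024 / 128 ))                 # ~8.4 million DDT entries
  echo "L2ARC consumed by DDT:   $(( entries * 376 / 1024 / 1024 )) MB"
  echo "ARC headers to track it: $(( entries * 176 / 1024 / 1024 )) MB"

If every DDT entry really is its own L2ARC buffer, the ARC overhead just for tracking it runs well past a gigabyte.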
We have customers using dedup with lots of vm images... in one extreme case
they are getting dedup ratios of over 200:1!
You don't need dedup or sparse files to handle zero filling. Simple zle compression
will eliminate those zero runs for you far more efficiently and without needing massive
amounts of RAM.
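A minimal sketch of that suggestion, with a hypothetical dataset name:

  zfs set compression=zle tank/vmstore   # zle only collapses runs of zeros, so it is nearly free
  zfs get compressratio tank/vmstore     # shows how much the zero blocks were costing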
On Wed, May 4, 2011 at 9:04 PM, Brandon High wrote:
> On Wed, May 4, 2011 at 2:25 PM, Giovanni Tirloni wrote:
> > The problem we've started seeing is that a zfs send -i is taking hours to
> > send a very small amount of data (e.g. 20GB in 6 hours) while a zfs send full
> > transfer everyt
On Thu, May 5, 2011 at 2:17 PM, Giovanni Tirloni wrote:
> What I find curious is that it only happens with incrementals. Full
> sends go as fast as possible (monitored with mbuffer). I was just wondering
> if other people have seen it, whether there is a bug (b111 is quite old), etc.
I have b
On Thu, May 5, 2011 at 11:17 AM, Giovanni Tirloni wrote:
> What I find curious is that it only happens with incrementals. Full
> sends go as fast as possible (monitored with mbuffer). I was just wondering
> if other people have seen it, whether there is a bug (b111 is quite old), etc.
I missed tha
Have a failed drive on a ZFS pool (three RAIDZ2 vdevs, one hot spare).
The hot spare kicked in and all is well.
Is it possible to just make that hot spare disk -- already resilvered
into the pool -- a permanent part of the pool? We could then throw
in a new disk and mark it as a spare and avoid
On Wed, May 4, 2011 at 8:23 PM, Edward Ned Harvey wrote:
> Generally speaking, dedup doesn't work on VM images. (Same is true for ZFS
> or netapp or anything else.) Because the VM images are all going to have
> their own filesystems internally with whatever blocksize is relevant to the
> guest OS.
Thanks for the information.
I think you’re right that the spa_sync thread is blocked in zio_wait while
holding scl_lock,
which blocks all zpool-related commands (such as zpool status).
The question is why zio_wait is blocked forever? If the underlying device is
offline, could the zio service just bail out?
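If it helps to confirm where things are wedged, the kernel debugger can show the blocked ZFS threads and per-pool state (run as root; output details vary by build):

  echo "::stacks -m zfs" | mdb -k   # kernel thread stacks filtered to the zfs module
  echo "::spa -v" | mdb -k          # pool state, including vdev status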
Just detach the faulty disk, then the spare will become the "normal"
disk once it's finished resilvering:
# zpool detach
Then you need to add the new spare:
# zpool add
There seems to be a new feature in the illumos project to support a zpool
property like "spare promotion",
which would not require the
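Spelled out with placeholder device names (c0t0d0 as the failed disk, c0t10d0 as the fresh disk), the sequence would be roughly:

  zpool detach tank c0t0d0       # drop the failed disk; the in-use spare is promoted to a permanent member
  zpool add tank spare c0t10d0   # put a fresh disk back on the spares list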
On 05/ 6/11 09:53 AM, Ray Van Dolson wrote:
Have a failed drive on a ZFS pool (three RAIDZ2 vdevs, one hot spare).
The hot spare kicked in and all is well.
Is it possible to just make that hot spare disk -- already resilvered
into the pool -- a permanent part of the pool? We could then throw
Thanks again.
No, I don’t see any bio functions, but you have shed very useful light on the
issue.
My test platform is b147; the pool disks are from a storage system via a QLogic
fiber HBA.
My test case is:
1. zpool set failmode=continue pool1
2. dd if=/dev/zero of=/po
On Thu, May 05, 2011 at 03:13:06PM -0700, TianHong Zhao wrote:
> Just detach the faulty disk, then the spare will become the "normal"
> disk once it's finished resilvering.
>
> # zpool detach
>
> Then you need to add the new spare:
> # zpool add
>
> There seems to be a new feature in illumos project
On May 5, 2011, at 2:58 PM, Brandon High wrote:
> On Wed, May 4, 2011 at 8:23 PM, Edward Ned Harvey wrote:
>> Or if you're intimately familiar with both the guest & host filesystems, and
>> you choose blocksizes carefully to make them align. But that seems
>> complicated and likely to fail.
>
> Using
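One hedged example of the alignment idea (dataset name is hypothetical): if the guest filesystem uses 4K clusters, create the backing dataset with a matching recordsize before writing any images:

  zfs create -o recordsize=4K tank/vm4k   # blocks now line up with the guest's 4K clusters, so identical
                                          # guest data can actually dedup, at the cost of a far larger DDT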
On May 5, 2011, at 6:02 AM, Edward Ned Harvey wrote:
> Is this a zfs discussion list, or a nexenta sales & promotion list?
Obviously, this is a Nexenta sales & promotion list. And Oracle. And OSX.
And BSD. And Linux. And anyone who needs help or can offer help with ZFS
technology :-) This list ha
> From: Karl Wagner [mailto:k...@mouse-hole.com]
>
> so there's an ARC entry referencing each individual DDT entry in the L2ARC?!
> I had made the assumption that DDT entries would be grouped into at least
> minimum block-sized groups (8k?), which would have led to a much more
> reasonable ARC re
On May 4, 2011, at 7:56 PM, Edward Ned Harvey wrote:
> This is a summary of a much longer discussion "Dedup and L2ARC memory
> requirements (again)"
> Sorry, even this summary is long. But the results vary enormously based on
> individual usage, so any "rule of thumb" metric that has been bouncing
> From: Brandon High [mailto:bh...@freaks.com]
>
> On Wed, May 4, 2011 at 8:23 PM, Edward Ned Harvey wrote:
> > Generally speaking, dedup doesn't work on VM images. (Same is true for ZFS
> > or netapp or anything else.) Because the VM images are all going to have
> > their own filesystems i
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> If you have to use the 4k recordsize, it is likely to consume 32x more
> memory than the default 128k recordsize of ZFS. At this rate, it becomes
> increasingly difficult
On Thu, May 5, 2011 at 8:50 PM, Edward Ned Harvey wrote:
> If you have to use the 4k recordsize, it is likely to consume 32x more
> memory than the default 128k recordsize of ZFS. At this rate, it becomes
> increasingly difficult to get a justification to enable the dedup. But it's
> certainly p
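The 32x figure is just the ratio of block counts. A quick sketch, assuming 1 TB of unique data and ~320 bytes of core memory per DDT entry (both numbers are assumptions for illustration):

  for bs_kb in 128 4; do
      entries=$(( 1024 * 1024 * 1024 / bs_kb ))
      echo "${bs_kb}K records: ${entries} DDT entries, ~$(( entries * 320 / 1024 / 1024 )) MB of memory"
  done
  # 128K records: 8388608 DDT entries, ~2560 MB of memory
  # 4K records: 268435456 DDT entries, ~81920 MB of memory (32x more)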