> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> > Worse yet, your arc consumption could be so large, that
> > PROCESSES don't fit in ram anymore. In this case, your processes get
> > pushed out to swap space, which is really bad.
>
> This will not happen. The ARC will be asked to
> Controls whether deduplication is in effect for a
> dataset. The default value is off. The default checksum
> used for deduplication is sha256 (subject to change).
>
> This is from b159.
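Returning to the ARC point above: on Solaris-derived systems the usual way to bound ARC growth, if it ever does need bounding, is the zfs_arc_max tunable; a minimal sketch, with the 4 GB value purely illustrative, is a line in /etc/system followed by a reboot:

   set zfs:zfs_arc_max = 4294967296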
This was fletcher4 earlier, and still is in opensolaris/openindiana. Given a
combination with verify (which I would use anyway, since there are always
tiny chances of collisions), why would sha256 be a better choice?
On Fri, Apr 29, 2011 at 7:10 AM, Roy Sigurd Karlsbakk
wrote:
> This was fletcher4 earlier, and still is in opensolaris/openindiana. Given a
> combination with verify (which I would use anyway, since there are always
> tiny chances of collisions), why would sha256 be a better choice?
fletcher4
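For context, the checksum and verify behaviour are selectable per dataset on dedup-capable builds; a minimal sketch, with tank/fs as a placeholder dataset name:

   zfs set dedup=on tank/fs               # dedup with the build's default checksum
   zfs set dedup=sha256,verify tank/fs    # explicit checksum plus byte-for-byte verify on match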
Is anyone aware of any freeware program that can speed up copying tons
of data (2 TB) from UFS to ZFS on same server?
Thanks.
On Fri, Apr 29, 2011 at 10:53 AM, Dan Shelton wrote:
> Is anyone aware of any freeware program that can speed up copying tons of
> data (2 TB) from UFS to ZFS on same server?
rsync, with --whole-file --inplace (and other options), works well for
the initial copy.
rsync, with --no-whole-file --inplace, works well for later incremental runs.
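A minimal sketch of that initial pass, assuming the UFS filesystem is mounted at /ufs and the target dataset at /tank/data (both paths are placeholders):

   rsync -aH --whole-file --inplace /ufs/ /tank/data/

-a preserves permissions, ownership and times, -H preserves hard links; the trailing slashes copy the contents of /ufs into /tank/data.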
Dan Shelton wrote:
> Is anyone aware of any freeware program that can speed up copying tons
> of data (2 TB) from UFS to ZFS on same server?
Try star -copy
Note that due to the problems ZFS has dealing with stable states, I recommend using
-no-fsync, and it may of course help to specify a
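A sketch of such a copy with star, assuming source /ufs and target /tank/data (paths are placeholders, and the exact option spelling should be checked against star(1)):

   star -copy -p -no-fsync -C /ufs . /tank/data

-copy runs star in copy mode, -p restores permissions, and -no-fsync skips the per-file fsync that is particularly slow on ZFS.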
Is there any way, yet, to import a pool with corrupted space_map
errors, or "zio->io_type != ZIO_TYPE_WRITE" assertions?
I have a pool comprised of 4 raidz2 vdevs of 6 drives each. I have
almost 10 TB of data in the pool (3 TB actual disk space used due to
dedup and compression). While testing var
On 04/30/11 06:00 AM, Freddie Cash wrote:
> On Fri, Apr 29, 2011 at 10:53 AM, Dan Shelton wrote:
>> Is anyone aware of any freeware program that can speed up copying tons of
>> data (2 TB) from UFS to ZFS on same server?
> rsync, with --whole-file --inplace (and other options), works well for
> the initial copy.
On Fri, Apr 29, 2011 at 10:53 AM, Dan Shelton wrote:
> Is anyone aware of any freeware program that can speed up copying tons of
> data (2 TB) from UFS to ZFS on same server?
Setting 'sync=disabled' for the initial copy will help, since it will
make all writes asynchronous.
You will probably wan
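A minimal sketch of that toggle, assuming the target dataset is tank/data (a placeholder), with the property restored to its inherited default once the bulk copy is done:

   zfs set sync=disabled tank/data    # synchronous writes treated as asynchronous
   (run the bulk copy)
   zfs inherit sync tank/data         # back to the default (standard)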
On 4/29/2011 9:44 AM, Brandon High wrote:
> On Fri, Apr 29, 2011 at 7:10 AM, Roy Sigurd Karlsbakk wrote:
>> This was fletcher4 earlier, and still is in opensolaris/openindiana. Given a
>> combination with verify (which I would use anyway, since there are always tiny
>> chances of collisions), why would sha256 be a better choice?
On Apr 29, 2011, at 1:37 PM, Brandon High wrote:
> On Fri, Apr 29, 2011 at 10:53 AM, Dan Shelton wrote:
>> Is anyone aware of any freeware program that can speed up copying tons of
>> data (2 TB) from UFS to ZFS on same server?
>
> Setting 'sync=disabled' for the initial copy will help, since it will
> make all writes asynchronous.
On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash wrote:
> Is there any way, yet, to import a pool with corrupted space_map
> errors, or "zio->io_type != ZIO_TYPE_WRITE" assertions?
>
> I have a pool comprised of 4 raidz2 vdevs of 6 drives each. I have
> almost 10 TB of data in the pool (3 TB actual disk space used due to
> dedup and compression). While testing var
On Fri, 2011-04-29 at 16:21 -0700, Freddie Cash wrote:
> On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash wrote:
> > Is there any way, yet, to import a pool with corrupted space_map
> > errors, or "zio->io_type != ZIO_TYPE_WRITE" assertions?
>...
> Well, by commenting out the VERIFY line for zio->io_type
On Fri, Apr 29, 2011 at 5:00 PM, Alexander J. Maidak wrote:
> On Fri, 2011-04-29 at 16:21 -0700, Freddie Cash wrote:
>> On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash wrote:
>> > Is there any way, yet, to import a pool with corrupted space_map
>> > errors, or "zio->io_type != ZIO_TYPE_WRITE" assertions?
On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash wrote:
> Running ZFSv28 on 64-bit FreeBSD 8-STABLE.
I'd suggest trying to import the pool into snv_151a (Solaris 11
Express), which is the reference and development platform for ZFS.
-B
--
Brandon High : bh...@freaks.com
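For reference, a sketch of the import attempts usually tried first in this situation, all hedged and with "tank" as a placeholder pool name: ZFSv28's recovery import, a read-only import where the platform supports it, and on Solaris-derived systems the tunables that relax some of these assertions (set in /etc/system, then reboot):

   zpool import -f -F tank
   zpool import -f -o readonly=on tank

   set zfs:zfs_recover = 1
   set aok = 1

None of these are guaranteed to get past a corrupted space map, but they avoid patching the VERIFY out of the source.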
> From: Edward Ned Harvey
>
> I saved the core and ran again. This time it spewed "leaked space" messages
> for an hour, and completed. But the final result was physically impossible (it
> counted up 744k total blocks, which means something like 3Megs per block in
> my 2.39T used pool. I checked c
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> What does it mean / what should you do, if you run that command, and it
> starts spewing messages like this?
> leaked space: vdev 0, offset 0x3bd8096e00, size 7168
And on
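For context, the command being discussed is zdb's block traversal, which walks every block pointer in the pool and reconciles the result against the space maps; a minimal sketch of one way to run it, assuming a pool named tank:

   zdb -bb tank    # -b counts and verifies all blocks in the pool

"leaked space" here means space the space maps record as allocated but that the traversal never found a reference to.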
> From: Neil Perrin [mailto:neil.per...@oracle.com]
>
> The size of these structures will vary according to the release you're running.
> You can always find out the size for a particular system using ::sizeof within
> mdb. For example, as super user:
>
> : xvm-4200m2-02 ; echo ::sizeof ddt_entry
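A sketch of the full invocation that line appears to be building up to, with the live kernel as the target; both the exact type name (ddt_entry vs. ddt_entry_t) and the -k flag are assumptions to check on the system in question:

   echo ::sizeof ddt_entry_t | mdb -k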
On Thu, Apr 28, 2011 at 6:48 PM, Edward Ned Harvey
wrote:
> What does it mean / what should you do, if you run that command, and it
> starts spewing messages like this?
> leaked space: vdev 0, offset 0x3bd8096e00, size 7168
I'm not sure there's much you can do about it short of deleting
datasets