On Wed, 4 May 2011, Peter Jeremy wrote:
> Possibilities I can think of:
> - Do you have lots of snapshots? There's an overhead of a second or so
> for each snapshot to be sent.
> - Is the source pool heavily fragmented with lots of small files?
Nope, and I don't think so.
> Hopefully a silly
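A quick way to check the first point, assuming a hypothetical source dataset tank/home:
  # count the snapshots under the dataset (and its descendants) being sent
  zfs list -H -t snapshot -r tank/home | wc -l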
On 2011-May-04 08:39:39 +0800, Rich Teer wrote:
>Also related to this is a performance question. My initial test involved
>copying a 50 MB zfs file system to a new disk, which took 2.5 minutes
>to complete. That strikes me as being a bit high for a mere 50 MB;
>are my expectations realistic or is
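One rough way to narrow that down is to time the send half on its own and discard the output; if this part is quick, the receive side or the pipe is the bottleneck. A sketch with a hypothetical snapshot name:
  time zfs send tank/home@backup1 > /dev/null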
On Tue, May 3 at 17:39, Rich Teer wrote:
Hi all,
I'm playing around with nearline backups using zfs send | zfs recv.
A full backup made this way takes quite a lot of time, so I was
wondering: after the initial copy, would using an incremental send
(zfs send -i) make the process much quicker because only the stuff that
had changed between t
Hi all,
I'm playing around with nearline backups using zfs send | zfs recv.
A full backup made this way takes quite a lot of time, so I was
wondering: after the initial copy, would using an incremental send
(zfs send -i) make the process much quicker because only the stuff that
had changed between t
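For reference, the pattern being asked about, with hypothetical dataset names tank/home (source) and backup/home (destination); the incremental send transfers only the blocks that changed between the two snapshots:
  # initial full copy
  zfs snapshot tank/home@monday
  zfs send tank/home@monday | zfs recv backup/home
  # later: send only the changes since @monday
  zfs snapshot tank/home@tuesday
  zfs send -i tank/home@monday tank/home@tuesday | zfs recv -F backup/home
  # -F rolls back any stray changes on the target before applying the increment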
Hi,
There seem to be a few threads about zpool hangs; do we have a
workaround to resolve the hang issue without rebooting?
In my case, I have a pool with disks from external LUNs via a fiber
cable. When the cable is unplugged while there is IO in the pool,
all zpool-related commands hang.
On Tue, May 3, 2011 at 12:36 PM, Erik Trimble wrote:
> rsync is indeed slower than star; so far as I can tell, this is due almost
> exclusively to the fact that rsync needs to build an in-memory table of all
> work being done *before* it starts to copy. After that, it copies at about
rsync 3.0+ w
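For what it's worth, rsync 3.x builds its file list incrementally while copying by default; a sketch with hypothetical paths, if anyone wants to compare the two behaviours:
  # rsync >= 3.0 default: incremental file list, copying starts almost immediately
  rsync -a /ufs/src/ /tank/dst/
  # force the pre-3.0 behaviour (whole file list scanned up front)
  rsync -a --no-inc-recursive /ufs/src/ /tank/dst/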
On Tue, May 3, 2011 at 12:36 PM, Erik Trimble wrote:
> On 5/3/2011 8:55 AM, Brandon High wrote:
>>
>> On Tue, May 3, 2011 at 5:47 AM, Joerg Schilling wrote:
>>>
>>> But this is most likely slower than star and does rsync support sparse
>>> files?
>>
>> 'rsync -ASHXavP'
>>
>> -A: ACLs
>> -S: Sparse files
On 05/ 4/11 01:35 AM, Joerg Schilling wrote:
Andrew Gabriel wrote:
Dan Shelton wrote:
Is anyone aware of any freeware program that can speed up copying tons
of data (2 TB) from UFS to ZFS on the same server?
I use 'ufsdump | ufsrestore'*. I would also suggest setting
'sync=disabled' during the operation, and reverting it afterwards.
On 5/3/2011 8:55 AM, Brandon High wrote:
On Tue, May 3, 2011 at 5:47 AM, Joerg Schilling wrote:
But this is most likely slower than star and does rsync support sparse files?
'rsync -ASHXavP'
-A: ACLs
-S: Sparse files
-H: Hard links
-X: Xattrs
-a: archive mode; equals -rlptgoD (no -H,-A,-X)
On Tue, May 3, 2011 at 5:47 AM, Joerg Schilling wrote:
> But this is most likely slower than star and does rsync support sparse files?
'rsync -ASHXavP'
-A: ACLs
-S: Sparse files
-H: Hard links
-X: Xattrs
-a: archive mode; equals -rlptgoD (no -H,-A,-X)
You don't need to specify --whole-file, it's the default when both the
source and destination are local paths.
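A usage sketch of that invocation for the UFS-to-ZFS case, with hypothetical mount points (the trailing slashes copy the contents of the source directory rather than the directory itself):
  # preserve ACLs, sparse files, hard links and xattrs; -P shows per-file progress
  rsync -ASHXavP /export/home/ /tank/home/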
Andrew Gabriel wrote:
> Dan Shelton wrote:
> > Is anyone aware of any freeware program that can speed up copying tons
> > of data (2 TB) from UFS to ZFS on the same server?
>
> I use 'ufsdump | ufsrestore'*. I would also suggest setting
> 'sync=disabled' during the operation, and reverting it afterwards.
Dan Shelton wrote:
Is anyone aware of any freeware program that can speed up copying tons
of data (2 TB) from UFS to ZFS on the same server?
I use 'ufsdump | ufsrestore'*. I would also suggest setting
'sync=disabled' during the operation, and reverting it afterwards.
Certainly, fastfs (a simi
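Roughly the sequence being suggested, with hypothetical names /export/home (UFS filesystem) and tank/home (ZFS dataset mounted at /tank/home); sync=disabled should only stay in place for the duration of the copy:
  zfs set sync=disabled tank/home
  ufsdump 0f - /export/home | (cd /tank/home && ufsrestore rf -)
  zfs set sync=standard tank/home   # revert afterwards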
Freddie Cash wrote:
> On Fri, Apr 29, 2011 at 10:53 AM, Dan Shelton wrote:
> > Is anyone aware of any freeware program that can speed up copying tons of
> > data (2 TB) from UFS to ZFS on the same server?
>
> rsync, with --whole-file --inplace (and other options), works well for
> the initial copy.
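A sketch of that initial-copy variant, again with hypothetical paths /export/home and /tank/home; --whole-file skips the delta algorithm (pointless for a local copy) and --inplace writes into the destination files directly instead of via temporary copies:
  rsync -av --whole-file --inplace /export/home/ /tank/home/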
Hi, hello,
another dedup question. I just installed an SSD as L2ARC. This
is a backup server with 6 GB RAM (i.e. I don't often read the same data
again); basically it has a large number of old backups on it and they
need to be deleted. Deletion speed seems to have improved, although the
majorit
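Since slow deletes on a dedup pool usually come down to the dedup table not fitting in ARC, one way to gauge it, assuming a hypothetical pool named tank:
  zdb -DD tank                   # dedup table (DDT) statistics and histogram
  zpool list tank                # DEDUP column shows the current dedup ratio
  kstat -p zfs:0:arcstats:size   # current ARC size on Solaris-derived systems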