Hi there,
I've been comparing ZFS send/receive over SSH with simply scp'ing the
contents of a snapshot, and have found that, for me, scp is about 2x
faster.
Has anyone else noticed ZFS send/receive to be noticeably slower?
Best Regards,
Jason
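For reference, a sketch of the two transfer methods being compared (pool,
dataset, snapshot, and host names here are hypothetical, not from the
original post):

```shell
# Method 1: ZFS send/receive over SSH. The stream is serialized by
# zfs send; single-threaded ssh encryption is often the bottleneck.
zfs snapshot tank/data@xfer
zfs send tank/data@xfer | ssh desthost zfs receive tank/copy

# Method 2: scp the snapshot's contents via the .zfs/snapshot directory,
# which exposes each snapshot as a read-only tree.
scp -r /tank/data/.zfs/snapshot/xfer/ desthost:/tank/copy/
```

Note that both paths pay the ssh encryption cost, so a 2x difference
would point at the send/receive stream itself rather than the transport.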
__
Listman,
What's the average size of your files? Do you have many file
deletions/moves going on? I'm not that familiar with how Perforce
handles moving files around.
XFS is bad at small files (worse than most file systems), as SGI
optimized it for larger files (> 64K). You might see a performance
On Nov 15, 2006, at 1:09 AM, Jason J. W. Williams wrote:
>What's the average size of your files? Do you have many file
>deletions/moves going on? I'm not that familiar with how Perforce
>handles moving files around.
average size of my files seems to be around 4k, there can be thousands
of files being m
Tomas,
Apologies for delayed response...
Tomas Ögren wrote:
Interesting! So it is not the ARC which is consuming too much memory.
It is some other piece (not sure if it belongs to ZFS) which is causing
the crunch...
Or the other possibility is that the ARC ate up too much and caused a near
On 14/11/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>Actually, we have considered this. On both SPARC and x86, there will be
>a way to specify the root file system (i.e., the bootable dataset) to be
>booted,
>at either the GRUB prompt (for x86) or the OBP prompt (for SPARC).
>If no root f
On 15/11/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>I suppose it depends how 'catastrophic' the failure is, but if it's
>very low level,
>booting another root probably won't help, and if it's too high level, how will
>you detect it (i.e. you've booted the kernel, but it is buggy).
If it
Jason J. W. Williams wrote:
Hi there,
I've been comparing ZFS send/receive over SSH with simply scp'ing the
contents of a snapshot, and have found that, for me, scp is about 2x
faster.
Can you give some more details about the configuration of the two
machines involved an
Dick Davies wrote:
On 15/11/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>I suppose it depends how 'catastrophic' the failure is, but if it's
>very low level,
>booting another root probably won't help, and if it's too high level,
how will
>you detect it (i.e. you've booted the kernel, bu
>On 15/11/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>>
>> >I suppose it depends how 'catastrophic' the failure is, but if it's
>> >very low level,
>> >booting another root probably won't help, and if it's too high level, how
>> >will
>> >you detect it (i.e. you've booted the kernel, but i
>I think we first need to define what state "up" actually is. Is it the
>kernel booted ? Is it the root file system mounted ? Is it we reached
>milestone all ? Is it we reached milestone all with no services in
>maintenance ? Is it no services in maintenance that weren't on the last
> bo
Previously I wrote:
>I still don't like forcing ZFS on people, though; I've found that ZFS
>does not work on 1GB SPARC systems; I found that a rather high lower limit.
>
>(Whenever the NFS find runs over the zpool, the system hangs)
It appears that this is a regression in build 52 or 51, I filed
Hi,
Is it possible to create snapshots off ZFS clones and further clones off
those snapshots recursively?
thanks,
Prashanth
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Prashanth Radhakrishnan wrote:
Hi,
Is it possible to create snapshots off ZFS clones and further clones off
those snapshots recursively?
Yes.
--
Darren J Moffat
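For the archive, a minimal sketch of the snapshot -> clone -> snapshot
chain being asked about (all dataset names here are hypothetical):

```shell
zfs snapshot tank/fs@snap1               # snapshot of the original filesystem
zfs clone tank/fs@snap1 tank/clone1      # clone created from that snapshot
zfs snapshot tank/clone1@snap2           # snapshot taken of the clone
zfs clone tank/clone1@snap2 tank/clone2  # clone of the clone's snapshot
```

Each clone depends on its origin snapshot, so the chain can be extended
recursively but the origin snapshots cannot be destroyed while clones of
them exist (unless the clone is promoted with zfs promote).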
On 11/15/06, Frank Cusack <[EMAIL PROTECTED]> wrote:
After swapping some hardware and rebooting:
SUNW-MSG-ID: ZFS-8000-CS, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Tue Nov 14 21:37:55 PST 2006
PLATFORM: SUNW,Sun-Fire-T1000, CSN: -, HOSTNAME:
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: 60b3
On 11/15/06, Frank Cusack <[EMAIL PROTECTED]> wrote:
After swapping some hardware and rebooting:
SUNW-MSG-ID: ZFS-8000-CS, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Tue Nov 14 21:37:55 PST 2006
PLATFORM: SUNW,Sun-Fire-T1000, CSN: -, HOSTNAME:
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: 60b3
On November 16, 2006 1:18:22 AM +1100 James McPherson
<[EMAIL PROTECTED]> wrote:
On 11/15/06, Frank Cusack <[EMAIL PROTECTED]> wrote:
After swapping some hardware and rebooting:
SUNW-MSG-ID: ZFS-8000-CS, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Tue Nov 14 21:37:55 PST 2006
PLATFORM: SUN
listman <[EMAIL PROTECTED]> writes:
>> What's the average size of your files? Do you have many file
>> deletions/moves going on? I'm not that familiar with how Perforce
>> handles moving files around.
>>
>average size of my files seems to be around 4k, there can be
>thousands of files being mov
On November 15, 2006 4:40:36 PM + Mattias Engdegård
<[EMAIL PROTECTED]> wrote:
listman <[EMAIL PROTECTED]> writes:
What's the average size of your files? Do you have many file
deletions/moves going on? I'm not that familiar with how Perforce
handles moving files around.
average size of
This is likely a variation of:
6401126 FM reports 'pool data unavailable' because of timing between FM and
mounting of file systems
Basically, what's happening is that ZFS is trying to open the pool
before the underlying device backing the vdev is available. My guess is
that your new hardware i
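Assuming standard Solaris FMA and ZFS tooling, one way to confirm that
the fault was a transient open-time race rather than real data loss, and
to clear it (the pool name below is hypothetical):

```shell
fmdump -v          # review logged faults; look for the ZFS-8000-CS event
zpool status -x    # check whether the pool is actually healthy now
zpool clear tank   # clear device errors once the backing vdevs are present
```

If zpool status reports the pool as ONLINE after boot, the FM report was
almost certainly the timing artifact described above.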
Frank Cusack <[EMAIL PROTECTED]> writes:
>1. branching complexity (addressed with raw cpu power)
>2. database performance (addressed with RAM)
>3. file xfer performance ("asynchronous" wrt db updates as long as the
> db is on a different disk from the files, so doesn't affect concurrency
> but
Hi Darren,
The copy is going between these two machines:
Source:
SunFire X4100 Dual 2.2Ghz Opteron (single core) 2GB RAM - SAN Attached
(STK FLX210) - ZFS RAID-Z zpool
Destination:
SunFire T2000 8-Core 1.2GHz T1 w/ 8GB RAM - SAN Attached (STK FLX210)
- ZFS RAID-Z zpool
No compression is used i
Torrey McMahon wrote:
Richard Elling - PAE wrote:
Torrey McMahon wrote:
Robert Milkowski wrote:
Hello Torrey,
Friday, November 10, 2006, 11:31:31 PM, you wrote:
[SNIP]
Tunable in a form of pool property, with default 100%.
On the other hand maybe simple algorithm Veritas has used is good
e
Hi Tomas,
Thanks for that. Got to get myself out of the Veritas way of looking at
mirrors.
Another reason for wanting it this way is that IF the functionality of
splitting mirrors
comes into ZFS whereby we can import them into a new pool or even a new LDOM,
then we know exactly the boundary
On Wed, Nov 15, 2006 at 12:10:30PM +0100, [EMAIL PROTECTED] wrote:
>
> >I think we first need to define what state "up" actually is. Is it the
> >kernel booted ? Is it the root file system mounted ? Is it we reached
> >milestone all ? Is it we reached milestone all with no services in
> >ma
On Tue, Nov 14, 2006 at 07:32:08PM +0100, [EMAIL PROTECTED] wrote:
>
> >Actually, we have considered this. On both SPARC and x86, there will be
> >a way to specify the root file system (i.e., the bootable dataset) to be
> >booted,
> >at either the GRUB prompt (for x86) or the OBP prompt (for SPA
On Wed, Nov 15, 2006 at 11:00:01AM +, Darren J Moffat wrote:
> I think we first need to define what state "up" actually is. Is it the
> kernel booted ? Is it the root file system mounted ? Is it we reached
> milestone all ? Is it we reached milestone all with no services in
> maintenance
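Several of the proposed definitions of "up" can be checked concretely
with the SMF tools; a sketch, assuming a standard Solaris/SMF system
(the milestone FMRI shown is one common choice, not the only one):

```shell
# Did we reach the final boot milestone?
svcs -H -o state svc:/milestone/multi-user-server:default

# Are any services in maintenance? svcs -x lists them with explanations;
# empty output means none.
svcs -x
```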
On Wed, Nov 15, 2006 at 09:58:35PM +, Ceri Davies wrote:
> On Wed, Nov 15, 2006 at 12:10:30PM +0100, [EMAIL PROTECTED] wrote:
> >
> > >I think we first need to define what state "up" actually is. Is it the
> > >kernel booted ? Is it the root file system mounted ? Is it we reached
> > >mil
On Wed, Nov 15, 2006 at 04:23:18PM -0600, Nicolas Williams wrote:
> On Wed, Nov 15, 2006 at 09:58:35PM +, Ceri Davies wrote:
> > On Wed, Nov 15, 2006 at 12:10:30PM +0100, [EMAIL PROTECTED] wrote:
> > >
> > > >I think we first need to define what state "up" actually is. Is it the
> > > >kerne
On Tue, Nov 14, 2006 at 07:32:08PM +0100, [EMAIL PROTECTED] wrote:
>
> >Actually, we have considered this. On both SPARC and x86, there will be
> >a way to specify the root file system (i.e., the bootable dataset) to be
> >booted,
> >at either the GRUB prompt (for x86) or the OBP prompt (for SPA
Ceri Davies wrote:
On Tue, Nov 14, 2006 at 07:32:08PM +0100, [EMAIL PROTECTED] wrote:
Actually, we have considered this. On both SPARC and x86, there will be
a way to specify the root file system (i.e., the bootable dataset) to be
booted,
at either the GRUB prompt (for x86) or the OBP prom
Just to let you know that the first set of patches for FreeBSD is now
available:
http://lists.freebsd.org/pipermail/freebsd-fs/2006-November/002385.html
--
Pawel Jakub Dawidek http://www.wheel.pl
[EMAIL PROTECTED] http://www.FreeBSD.org
FreeBSD