did a `zpool export zfs ; zpool import zfs` and got a core.
core file = core.import -- program ``/sbin/zpool'' on platform i86pc
SIGSEGV: Segmentation Fault
$c
libzfs.so.1`zfs_prop_get+0x24(0, d, 80433f0, 400, 0, 0)
libzfs.so.1`dataset_compare+0x39(80d5fd0, 80d5fe0)
libc.so.1`qsort+0x39d(80d5fd0
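A stack like the one above is what mdb prints when you load the core and run `$c`; a minimal sketch (Solaris-only, using the core-file name from the report above):

```shell
# Load the core against the zpool binary, then print the C stack.
# 'core.import' is the core-file name from the report above.
mdb /sbin/zpool core.import <<'EOF'
$c
EOF
```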
Have a gander below:
> Agreed - it sucks - especially for small file use. Here's a 5,000 ft view
> of the performance while unzipping and extracting a tar archive. First
> the test is run on a SPARC 280R running Build 51a with dual 900MHz USIII
> CPUs and 4Gb of RAM:
>
> $ cp emacs-21.4a.tar.g
Yes, I've tried NFS and CIFS. I wouldn't call this a problem though. This is
the way it was designed to work to prevent loss of client data. If you want
faster performance, put a battery-backed RAID card in your system and turn on
write-back caching on the card so that the RAM in the RAID controlle
On 11/22/06, Chad Leigh -- Shire.Net LLC <[EMAIL PROTECTED]> wrote:
> On Nov 22, 2006, at 4:11 PM, Al Hopper wrote:
>> No problem there! ZFS rocks. NFS/ZFS is a bad combination.
> Has anyone tried sharing a ZFS fs using samba or afs or something
> else besides nfs? Do we have the same issues?
I
On Nov 22, 2006, at 4:11 PM, Al Hopper wrote:
No problem there! ZFS rocks. NFS/ZFS is a bad combination.
Has anyone tried sharing a ZFS fs using samba or afs or something
else besides nfs? Do we have the same issues?
Chad
---
Chad Leigh -- Shire.Net LLC
Your Web App and Email hosting
On Tue, 21 Nov 2006, Joe Little wrote:
> On 11/21/06, Roch - PAE <[EMAIL PROTECTED]> wrote:
> >
> > Matthew B Sweeney - Sun Microsystems Inc. writes:
> > > Hi
> > > I have an application that uses NFS between a Thumper and a 4600. The
> > > Thumper exports 2 ZFS filesystems that the 4600 uses a
Peter Eriksson wrote:
> There is nothing in the ZFS FAQ about this. I also fail to see how FMA could make any
> difference since it seems that ZFS is deadlocking somewhere in the kernel when this happens...
Some people don't see a difference between "hung" and "patiently waiting."
There are failure
Since there have been so many recent discussions about what disabling
the ZIL does (via the zil_disable tuneable), we've decided to put up a
blog that's easy to reference instead of searching through countless emails:
http://blogs.sun.com/erickustarz/entry/zil_disable
I plan another one on NFS
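For reference, the tuneable itself was set through /etc/system on builds of that era; a sketch (assumption: Nevada/Solaris 10-era ZFS where zil_disable still existed -- later releases dropped it in favor of the per-dataset sync property):

```
* /etc/system fragment -- disables the ZIL globally at next boot.
* Only valid on old builds that still have the zil_disable tuneable.
set zfs:zil_disable = 1
```

Disabling the ZIL means synchronous semantics (NFS COMMIT, fsync) are no longer honored across a crash, which is exactly the trade-off the blog discusses.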
To accelerate NFS (in particular single-threaded loads)
you need (somewhat badly) some *RAM between the server FS and
its storage; that *RAM is where NFS-committed data may be stored.
If the *RAM does not survive a server reboot, the client is
at risk of seeing corruption.
For example, UFS ove
There is nothing in the ZFS FAQ about this. I also fail to see how FMA could
make any difference since it seems that ZFS is deadlocking somewhere in the
kernel when this happens...
It works if you wrap all the physical devices inside SVM metadevices and use
those for your ZFS/zpool instead. I.e.:
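The example itself was cut off; a hedged sketch of the general shape (device names c1t1d0s0 / c1t2d0s7 are placeholders, and this assumes spare slices are available for the SVM state database):

```shell
# Placeholders throughout -- adjust controller/target/slice names
# to your hardware. Solaris with SVM only.
metadb -a -f c1t2d0s7              # create SVM state database replica (one-time)
metainit d10 1 1 c1t1d0s0          # simple 1-stripe/1-slice metadevice
zpool create tank /dev/md/dsk/d10  # build the pool on the metadevice
```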