openat() isn't really what he wants. These aren't user-level xattrs; they're
ones which affect the file system, or more specifically, a particular file. I
don't think the particular xattrs in question (or analogous ones) exist for ZFS
at this point.
Erik Trimble wrote:
> Ivan Wang wrote:
>> Hi all,
>>
>> Forgive me if this is a dumb question. Is it possible for a two-disk
>> mirrored zpool to be seamlessly enlarged by gradually replacing the
>> previous disks with larger ones?
>>
>> Say, in a constrained desktop, only space for two internal disk
Ivan Wang wrote:
> Hi all,
>
> Forgive me if this is a dumb question. Is it possible for a two-disk mirrored
> zpool to be seamlessly enlarged by gradually replacing the previous disks with
> larger ones?
>
> Say, in a constrained desktop where only space for two internal disks is
> available, could I ju
Hi all,
Forgive me if this is a dumb question. Is it possible for a two-disk mirrored
zpool to be seamlessly enlarged by gradually replacing the previous disks with
larger ones?
Say, in a constrained desktop where only space for two internal disks is available,
could I just begin with two 160G disks, the
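For reference, the usual way to do this is to replace one side of the mirror at
a time and let each resilver finish; a rough sketch, with made-up pool and
device names rather than anything from this thread:

  zpool offline tank c1t0d0    # take one half of the mirror offline
  # ...physically swap in the larger disk at the same location, then:
  zpool replace tank c1t0d0    # resilver onto the new, larger disk
  zpool status tank            # wait until the resilver completes
  # repeat for the second disk (c1t1d0); once both sides are larger, an
  # export/import (or a reboot) may be needed before the extra space appears:
  zpool export tank
  zpool import tank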
Michael Kucharski wrote:
> We have an x4500 set up as a single 4*(raidz2 9+2)+2 spare pool and have
> the file systems mounted over v5 krb5 NFS and accessed directly. The pool
> is a 20TB pool and is using . There are three filesystems: backup, test,
> and home. Test has about 20 million files and
Tim Thomas wrote:
> Hi
>
> this may be of interest:
>
> http://blogs.sun.com/timthomas/entry/samba_performance_on_sun_fire
>
> I appreciate that this is not a frightfully clever set of tests but I
> needed some throughput numbers and the easiest way to share the
> results is to blog.
It s
So the problem with the zfs send/receive approach is: what if your network
glitches out during the transfers?
We have these once a day due to some as-yet-undiagnosed switch problem, a
chop-out of 50 seconds or so, which is enough to trip all our IPMP setups and
enough to abort SSH transfers in progre
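For context, the sort of pipeline being discussed looks roughly like this (host,
pool and snapshot names are made up); if the ssh session drops mid-stream, the
receive aborts and the partially received data is discarded, so the transfer has
to be rerun from the snapshot:

  zfs send -i tank/data@yesterday tank/data@today | \
      ssh backuphost zfs receive backup/data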
Łukasz K wrote:
>>> Now space maps, intent log, spa history are compressed.
>> All normal metadata (including space maps and spa history) is always
>> compressed. The intent log is never compressed.
>
> Can you tell me where the space map is compressed?
We specify that it should be compressed in db
Jim Mauro wrote:
> Hi Neel - Thanks for pushing this out. I've been tripping over this for
> a while.
>
> You can instrument zfs_read() and zfs_write() to reliably track filenames:
>
> #!/usr/sbin/dtrace -s
>
> #pragma D option quiet
>
> zfs_read:entry,
> zfs_write:entry
> {
> printf("
So what are the failure modes to worry about?
I'm not exactly sure what the implications of this nocache option are for my
configuration.
Say, from a recent example: I have an overtemp and first one array shuts down,
then the other one.
I come in after the A/C is restored, shut down and repower everythin
roland wrote:
>> Are there any solutions of this kind out there?
> I'm not that deep into Solaris, but IIRC there isn't one for free.
> Veritas is quite popular, but you need to spend lots of bucks for it.
> Maybe SAM-QFS?
We have lots of customers using shared QFS with RAC.
QFS is on the road to o
> I was curious if I could use zfs to have it shared on those two hosts
No, that's not possible for now.
> but apparently I was unable to do it for obvious reasons.
You will corrupt your data!
> On my Linux Oracle RAC I was using OCFS, which works just as I need it
Yes, because OCFS is built for t
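To spell out the safe pattern (a sketch with made-up names): a pool may be
imported on only one host at a time, so either move it between hosts explicitly,
or keep it imported on one host and share the data out over NFS:

  zpool export tank             # on the host that currently has the pool
  zpool import tank             # on the other host, after the export completes
  zfs set sharenfs=on tank/data # or: serve it from a single host over NFS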
Paul B. Henson wrote:
> Does zfs send/receive have to be done with root
> privileges, or can RBAC or some other mechanism be used so a lower
> privileged account could be used?
You can use delegated administration ("zfs allow someone send pool/fs").
This is in snv_69. RBAC is much more coarse-gr
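A hedged sketch of that delegation (user and dataset names are made up); note
that the receiving side needs its own permissions to create and mount what it
receives:

  zfs allow backupuser send,snapshot tank/home       # on the sending host
  zfs allow backupuser receive,create,mount backup   # on the receiving host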
We've been evaluating ZFS as a possible enterprise file system for our
campus. Initially, we were considering one large cluster, but it doesn't
look like that will scale to meet our needs. So, now we are thinking about
breaking our storage across multiple servers, probably three.
However, I don't
2007/10/12, Krzys <[EMAIL PROTECTED]>:
> Hello all, sorry if somebody has already asked this. I was playing today with
> iSCSI and I was able to create a zpool, and then via iSCSI I can see it on two
> other hosts. I was curious if I could use zfs to have it shared on those two
> hosts but apar
eSX wrote:
> We are testing ZFS in OpenSolaris, writing TBs of data to ZFS, but when the
> capacity gets close to 90%, ZFS becomes slow. When we do ls, rm, or write
> something, those operations are terribly slow; for example, an ls in a
> directory which has about 4000 directories takes about 5-10s
Hello all, sorry if somebody has already asked this. I was playing today with
iSCSI and I was able to create a zpool, and then via iSCSI I can see it on two
other hosts. I was curious if I could use zfs to have it shared on those two
hosts but apparently I was unable to do it for obvious reason
> I suspect that the bad ram module might have been the root
> cause for that "freeing free segment" zfs panic,
Perhaps. I removed the two 2G SIMMs but left the two 512M
SIMMs, and also removed kernelbase, but the zpool import
still crashed the machine.
It's also registered ECC RAM; memtest86 v1.7 di
I'm not in the sd group but this bug looks similar to the bug described
in an earlier e-mail to storage-discuss/zfs-discuss titled "Possible ZFS
Bug - Causes OpenSolaris Crash". I haven't seen a core file for
either one. Going purely from the stack trace it's not clear to me how
either pan
Has there been any solution to the problem discussed above in ZFS version 8??
> how does one free segment(offset=77984 size=66560)
A few weeks ago, I wrote:
> Yesterday I tried to clone a xen dom0 zfs root
> filesystem and hit this panic (probably Bug ID 6580715):
>
>
> > ::status
> debugging crash dump vmcore.6 (64-bit) from moritz
> operating system: 5.11 wos_b73 (i86pc)
> panic message: freeing free segment (vdev=0 offse
Claus's experience leads me to ask if anyone is having success using Nevada
in a semi-production environment. I ask because NV has better hardware
support, but I fear it's not as reliable on the storage side. (I'm
considering building an iSCSI/Samba/ZFS filer at work).
Blake
On 11/10/2007, Dick Davies <[EMAIL PROTECTED]> wrote:
> No, they aren't (i.e. zoneadm clone on S10u4 doesn't use zfs snapshots).
>
> I have a workaround I'm about to blog
Here it is - hopefully it will be of some use:
http://number9.hellooperator.net/articles/2007/10/11/fast-zone-cloning-on-solaris-10
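The details are in the post, but the general idea is the usual snapshot-and-clone
pattern (dataset names here are made up, not taken from the article):

  zfs snapshot rpool/zones/goldzone@clone
  zfs clone rpool/zones/goldzone@clone rpool/zones/newzone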
Manoj Nayak wrote:
> Hi,
>
> I am using XFS_IOC_FSGETXATTR in an ioctl() call on Linux running an XFS file
> system. I want to use something similar on Solaris running a ZFS file system.
See openat(2).
--
Darren J Moffat
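For what it's worth, openat(2) with O_XATTR (or runat(1) from the shell) exposes
per-file extended attributes as files in a hidden attribute directory, which, as
noted elsewhere in the thread, is not the same thing as the XFS fsxattr flags.
An illustrative session (file name made up):

  runat /tank/somefile cp /etc/release note   # create an attribute named "note"
  runat /tank/somefile ls -l                  # list the file's extended attributes
  runat /tank/somefile cat note               # read the attribute back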
Hi,
I am using XFS_IOC_FSGETXATTR in an ioctl() call on Linux running an XFS file
system. I want to use something similar on Solaris running a ZFS file system.
struct fsxattr fsx;
ioctl(fd, XFS_IOC_FSGETXATTR, &fsx);
The above call gets additional attributes associated with files in XFS
file systems. The fi
Hi,
did you read the following?
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
> Currently, pool performance can degrade when a pool is very full and
> filesystems are updated frequently, such as on a busy mail server.
> Under these circumstances, keep pool space under 8
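As a quick illustration (pool and filesystem names made up): the CAP column of
zpool list shows how full a pool is, and quotas can keep any one filesystem from
pushing it past the recommended limit:

  zpool list tank               # check the CAP (percent used) column
  zfs set quota=500G tank/home  # cap an individual filesystem's growth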