At your suggestion I created a file locally, and its ACL entries were correct, in that
they inherited the ACL that was applied to the top level:
-rwxrwxrwx+ 1 testuid testgid 0 Jun  1 21:04 localtest
user:root:rwxpdDaARWcCos:--I:allow
everyone@:rwxpdDaARWc--s:--I
The problem is that NFS clients that connect to my Solaris 11 Express server
are not inheriting the ACLs that are set for the share. They create files that
don't have any ACL assigned to them, just the normal Unix file permissions. Can
someone please suggest some additional things to test?
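Not from your setup, but a few checks that are usually worth running in this
situation (the dataset name tank/share and the mount path are placeholders):

    # confirm which NFS version the client actually negotiated; only NFSv4
    # carries NFSv4-style ACLs between client and server
    client$ nfsstat -m

    # check how the dataset propagates inherited ACEs, and relax it if needed
    server# zfs get aclinherit tank/share
    server# zfs set aclinherit=passthrough tank/share

    # verify the directory ACEs actually carry inheritance flags (f/d/i)
    server# ls -V /tank/share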
> When will L2ARC be available in Solaris 10?
My Sun SE said L2ARC should be in S10U7. It was scheduled for S10U6 (shipping
in a few weeks), but didn't make it in time. At least S10U6 will have ZIL
offload and ZFS boot.
> We had a situation where write speeds to a ZFS
> consisting of 2 7TB RAID5 LUNs came to a crawl.
Sounds like you've hit Bug# 6596237 "Stop looking and start ganging". We ran
into the same problem on our X4500 Thumpers. Write throughput dropped to 200
KB/s. We now keep utilization under 90%.
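Not from the thread, but for anyone who wants to watch for this condition, the
pool's fill level shows up in the CAP column of zpool list; a quick threshold
check might look like this (pool name is a placeholder, and field names may
vary slightly between releases):

    # plain listing; CAP shows percent used
    zpool list tank

    # scripted check that flags any pool over 90% full
    zpool list -H -o name,capacity | awk '$2+0 > 90 {print $1 " is over 90% full"}'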
It's probably this bug:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6453407
We've been affected by the same problem on our X4500 Thumpers. Although the
bug report claims a fix was delivered in solaris_nevada(snv_70), I've yet to
see an official patch released for it (we run Sola
Any progress on a defragmentation utility? We appear to be having a severe
fragmentation problem on an X4500, vanilla S10U4, no additional patches. 500GB
disks in 4 x 11-disk RAIDZ2 vdevs. It hit 97% full and fell off a
cliff... about 50 KB/s on writes. Deleting files so the zpool is at 92%
> It seems when a zfs filesystem with reserv/quota is
> 100% full users can no
> longer even delete files to fix the situation getting
> errors like these:
>
> $ rm rh.pm6895.medial.V2.tif
> rm: cannot remove `rh.pm6895.medial.V2.tif': Disk
> quota exceeded
We've run into the same problem here.
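Not something from this thread, but the workarounds usually suggested for this
are to truncate the file in place before removing it, or to temporarily lift
the quota (dataset name and quota value below are placeholders; blocks held by
snapshots won't be freed either way):

    $ cat /dev/null > rh.pm6895.medial.V2.tif
    $ rm rh.pm6895.medial.V2.tif

    # or, as root, temporarily clear the quota, clean up, then restore it
    # zfs set quota=none tank/home/user
    # zfs set quota=10G tank/home/user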
Does anybody know if native block-level replication or block-level
“de-duplication”, as NetApp calls it, will be added?
> This gives a nice bias towards one of the following
> configurations:
>
> - 5x(7+2), 1 hot spare, 17.5TB [corrected]
> - 4x(9+2), 2 hot spares, 18.0TB
> - 6x(5+2), 4 hot spares, 15.0TB
In addition to Eric's suggestions, I would be interested in these configs for
46 disks:
5
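For what it's worth, any of the layouts above comes down to one raidz2 group
per vdev plus the spares on a single zpool create line; a sketch of the
4x(9+2) + 2 spares case (pool and device names are placeholders, not an actual
X4500 layout):

    zpool create tank \
        raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c1t0d0 c1t1d0 c1t2d0 \
        raidz2 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
        raidz2 c2t6d0 c2t7d0 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 c4t0d0 \
        raidz2 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0 c4t7d0 c5t0d0 c5t1d0 c5t2d0 c5t3d0 \
        spare  c5t4d0 c5t5d0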