under Solaris 10, but I would be
surprised if the ufsrestore code has changed since then.
--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company
r for some other tasks. Basic (Sun, not GNU) cp and mv should
> work correctly too.
Under Solaris 10 I found 'cp -pr' to be both the most reliable and the fastest way
to move data into, out of, and between ZFS datasets.
--
Paul Kraus
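A minimal sketch of that approach (the paths are hypothetical; Sun cp, not GNU):

# -p preserves owner, mode, and timestamps; -r recurses
cp -pr /tank/olddata /tank/newdata
# result: /tank/newdata/olddata with attributes intact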
I often find that going to the command line on Mac OS X is very
helpful, but since the GUI really is very well done for most (but certainly not
all) tasks, I usually don't remember to go to the command line until after I
have been beating my head against the GUI for too long :-)
I get between 40 and 50 MB/sec throughput on a Gigabit
Ethernet link. Since you have already ruled out the known sync issue with ZFS
and no SSD-based write cache, then perhaps you are running into an NFS 3 vs.
NFS 4 issue. I am not sure which NFS version Mac OS X uses by default.
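A quick way to check which version is actually in play (a sketch; the server
path is hypothetical and option spellings vary by OS X release):

nfsstat -m                                            # on the client: version negotiated per mount
mount -t nfs -o vers=3 server:/export /private/mnt    # pin NFSv3 for an A/B throughput test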
On 14 Oct 2012, at 20:56, "Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)" wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Paul van der Zwan
>>
>> What was c5t2 is now
/disk@0,0
Specify disk (enter its number): ^D
What was c5t2 is now c7t1 and what was c4t1 is now c5t2.
Everything seems to be working fine, it's just a bit confusing.
How can I 'fix' this? Delete /etc/zfs/zpool.cache and reboot?
TIA
Paul
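A gentler fix than deleting zpool.cache would be an export/import cycle, which
rewrites the cached device paths (pool name hypothetical):

zpool export tank    # releases the pool and removes it from /etc/zfs/zpool.cache
zpool import tank    # re-scans the devices and records the current ctd names
zpool status tank    # verify the new c7t1 / c5t2 paths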
writes since I did the destroy
then I *might* have a chance at this ... HELP!
--
Paul Kraus
allocation segment. Once you grow out of that, you are (usually) fragmented.
> This may be important for large objects randomly updated "inside",
> like VM disk images and iSCSI backing stores, precreated database
> table files, maybe swapfiles, etc.
--
Paul Kraus
Since the snapshots are our backups we really can't disable
them; if we do, we run into a different zpool 22 issue where the amount
of RAM needed to destroy a large snapshot will be more than we
have. This is also fixed with zpool 26.
--
Paul Kraus
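A sketch of checking and moving to the newer on-disk version (pool name
hypothetical; note the upgrade is one-way):

zpool upgrade -v         # list the zpool versions this kernel supports
zpool get version tank   # current on-disk version
zpool upgrade tank       # irreversible: upgrade to the newest supported version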
one chain of two and one chain of three; the
problem occurred on the chain of three)
Not specifically applicable here, but probably related and might be of
use to someone here.
--
Paul Kraus
d for 7x24 operation.
Thanks in advance for all of your informed opinions.
P.S. I am sending this to TWO lists, please do NOT respond to the list
you are NOT subscribed to :-)
--
Paul Kraus
On Thu, May 3, 2012 at 10:39 AM, Edward Ned Harvey
wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Paul Kraus
>>
>> If you have compression turned on (and I highly recommend turning
>> it on)
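A minimal sketch of enabling it (dataset name hypothetical; only blocks
written after the change are compressed):

zfs set compression=on tank/data
zfs get compressratio tank/data    # watch the achieved ratio over time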
market. Now, as to sharing futures and NDA
material, that _should_ only be available via direct Oracle channels
(as it was under Sun as well).
--
Paul Kraus
ed benchmark was effectively a
"worst case" number and the full ARC the "best case". The real world,
as usual, fell somewhere in between.
Finding a benchmark tool that matches _my_ workload is why I have
started kludging together my own.
--
Paul Kraus
nce level).
> Am I off on this?
>
> System is a 240x2TB (7200RPM) system in 20 Dell MD1200 JBODs. 16 vdevs of 15
> disks each -- RAIDZ3. NexentaStor 3.1.2.
--
Paul Kraus
An application that never releases the lock or closes the file
until it exits will require a restart of the application
with an automounter / NFS approach.
--
Paul Kraus
Ian Collins wrote:
On 04/26/12 10:34 AM, Paul Archer wrote:
That assumes the data set will fit on one machine, and that machine won't
be a performance bottleneck.
Aren't those general considerations when specifying a file server?
I suppose. But I meant specifically tha
2:34pm, Rich Teer wrote:
On Wed, 25 Apr 2012, Paul Archer wrote:
Simple. With a distributed FS, all nodes mount from a single DFS. With NFS,
each node would have to mount from each other node. With 16 nodes, that's
what, 240 mounts? Not to mention your data is in 16 different
m
2:20pm, Richard Elling wrote:
On Apr 25, 2012, at 12:04 PM, Paul Archer wrote:
Interesting, something more complex than NFS to avoid the
complexities of NFS? ;-)
We have data coming in on multiple nodes (with local storage) that is
needed on other multiple nodes. The only
9:08pm, Stefan Ring wrote:
Sorry for not being able to contribute any ZoL experience. I've been
pondering whether it's worth trying for a few months myself already.
Last time I checked, it didn't support the .zfs directory (for
snapshot access), which you really don't want to miss after getting
> To put it slightly differently, if I used ZoL in production, would I be likely
to experience performance or stability
problems?
I saw one team revert from ZoL (CentOS 6) back to ext on some backup servers
for an application project; the killer was
stat times (find running slow, etc.), perhaps
11:26am, Richard Elling wrote:
On Apr 25, 2012, at 10:59 AM, Paul Archer wrote:
The point of a clustered filesystem was to be able to spread our data out
among all nodes and still have access
from any node without having to run NFS. Size of the data set (once you
get past the
9:59am, Richard Elling wrote:
On Apr 25, 2012, at 5:48 AM, Paul Archer wrote:
This may fall into the realm of a religious war (I hope not!), but
recently several people on this list have
said/implied that ZFS was only acceptable for production use on FreeBSD
(or Solaris, of
This may fall into the realm of a religious war (I hope not!), but recently
several people on this list have said/implied that ZFS was only acceptable for
production use on FreeBSD (or Solaris, of course) rather than Linux with ZoL.
I'm working on a project at work involving a large(-ish) amoun
wondering if there is something about slot 20
that may be causing drives to fail.
--
Paul Kraus
mpared them
with. In the past, my need for an increase in capacity caused
replacement of hard drives. This time I'm not sure if I'll run out of
capacity before the drives reach end of practical service life and
start failing.
--
Paul Kraus
On Thu, Apr 5, 2012 at 10:04 AM, Weber, Markus
wrote:
> Paul wrote:
>> I have not seen any odd issues with the five J4400 configuration
>> since we went production.
>
> I'm not familiar with the J4400 at all, but isn't Sun/Oracle using -like
> NetAPP-
> I
due to the J4400 in the two J4400
configuration (we have had other issues with this system, but they
were not related to the J4400).
No data has been lost due to any of the failures or outages. Thank you ZFS.
--
Paul Kraus
We suspect that the incomplete snapshot was also corrupt in
some strange way, but we could never make a solid determination of that.
We think what caused the zfs send | zfs recv to be interrupted was
hitting an e1000g Ethernet device driver bug.
--
Paul Kraus
My 22 vdev zpool takes about 300 hours for
this while the 2 vdev zpool takes over 600 hours. Both have comparable
amounts of data and snapshots. The 22 vdev zpool is on a production
server with normal I/O activity, the 2 vdev case is only receiving zfs
snapshots and doing no other I/O.
--
Paul Kraus
tter performance
4 x 3 (mirror) == best performance
6 x 2 (mirror) == almost best performance
If you have the time, set up each configuration and _measure_ the
performance. If you can, load up a bunch of data (at least 33% full)
and then trigger a scrub to see how long a resilver takes.
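A sketch of that measurement (pool name hypothetical):

zpool scrub tank    # reads and verifies every allocated block, much like a resilver
zpool status tank   # the "scrub in progress ... done" lines show elapsed time and rate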
consists of two vdevs, each a RAIDZ2 of 23 drives, and two hot
spares. The performance is dreadful, but we _have_ the data in case of
a real disaster.
--
Paul Kraus
to start the move while people are
still using the files. I run a first rsync to copy all of them, then I
declare a very short outage window and do a final rsync to catch
anything that got changed. I do NOT use the --remove-source-files
option.
--
Paul Kraus
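A sketch of that two-pass move (paths hypothetical; -a preserves permissions
and times, -H preserves hard links):

rsync -aH /export/olddata/ /export/newdata/            # first pass, users still working
# ...announce the short outage window and quiesce writers...
rsync -aH --delete /export/olddata/ /export/newdata/   # quick final catch-up pass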
> what is now a simple two-disk zpool (less than 50% full) to a
> three-disk raidz configuration, starting with one unused disk? In the end I
> want the three-disk raidz to have the same name (and mount point) as the
> original zpool. There must be an easy way to do this.
--
Paul Kraus
> best guess is that only a
> receive would do this to you.
I have NOT seen issues recv'ing a zfs send while a scrub was running.
I am at zpool 22.
Always implement locking on tasks that should be single-threaded (like
zfs send | zfs recv on a given dataset).
--
Paul Kraus
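A minimal sketch of such a lock in shell (dataset and path names hypothetical;
mkdir is atomic, so it serves as a crude mutex):

LOCK=/var/tmp/replicate_tank_data.lock
if mkdir "$LOCK" 2>/dev/null ; then
    zfs send -i tank/data@prev tank/data@now | zfs recv -F backup/data
    rmdir "$LOCK"
else
    echo "replication already running, skipping this pass" >&2
fi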
products you
could try to use for this). I have been using NBU to back up NG Zones
from the Global for years. Is the version of NBU so old that it does
not support ZFS?
--
Paul Kraus
Another option soon to be available is Illumian:
http://www.illumian.org/
It's roughly the same as OpenIndiana but using Debian packaging rather
than IPS.
--
Paul B. Henson
FreeBSD with no outage (assuming the drive is in a hot swap capable
enclosure), but as I am not familiar with FreeBSD I do not know what
it is.
--
Paul Kraus
(10^-15), which
translates into one error per 10^15 bits (bytes?) transferred to /
from the drive. Note the sign change on the exponent :-)
--
Paul Kraus
not_ok=`zpool status -x | egrep -v "all pools are healthy|no pools available"`
if [ "X${not_ok}" != "X" ]
then
    fault_details="There is at least one zpool error."
    let fault_count=fault_count+1
    new_faults[${fault_count}]=${fault_details}
fi
--
Paul Kraus
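The test works because 'zpool status -x' prints only those two strings when
nothing is wrong; any other output left after the egrep means a pool needs
attention. The same check condensed to one line (a sketch):

zpool status -x | egrep -v "all pools are healthy|no pools available" && echo "zpool problem"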
temporarily and only for moving
data physically around, so the lack of ZFS redundancy is not an
issue).
There are over 2300 snapshots on the source side and we were
replicating close to 2000 of them.
--
Paul Kraus
t of address bits? Or is it something that offers functionality
> that other filesystems don't have? ;-)
The stories I have heard indicate that the name came after the TLA.
"zfs" came first and "zettabyte" later.
--
Paul Kraus
On Fri, Nov 11, 2011 at 1:39 PM, Linder, Doug
wrote:
> Paul Kraus wrote:
>
>>> My main reasons for using zfs are pretty basic compared to some here
>>
>> What are they? (the reasons for using ZFS)
>
> All technical reasons aside, I can tell you one huge reason I
much above 0 or is growing.
Keep in mind that any type of hardware RAID should report back 0
for both to the OS.
--
Paul Kraus
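A sketch of watching the per-drive counters beneath the RAID layer on Solaris:

iostat -En | grep "Errors:"   # Soft/Hard/Transport error counts per device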
Use "zfs set snapdir=hidden <dataset>" to set the parameter.
--
Paul Kraus
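A sketch (dataset name hypothetical):

zfs set snapdir=visible tank/home   # list .zfs in the dataset root
zfs get snapdir tank/home           # verify; 'hidden' keeps .zfs reachable but unlisted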
to (in fact, in the early days of Google Mail
I did just that as a backup).
--
Paul Kraus
> test server, so any ideas to try and help me understand greatly
> appreciated.
What do real benchmarks (iozone, filebench, orion) show?
--
Paul Kraus
On Mon, Oct 31, 2011 at 9:07 AM, Jim Klimov wrote:
> 2011-10-31 16:28, Paul Kraus wrote:
>> Oracle has provided a loaner system with 128 GB RAM and it took 75 GB of
>> RAM
>> to destroy the problem snapshot). I had not yet posted a summary as we
>> are still working
I had not yet posted a summary as we
are still working through the overall problem (we tripped over this on
the replica, now we are working on it on the production copy).
--
Paul Kraus
ort use only according to the documentation),
so I created RAID0 sets of 2 drives each and ZFS sees 6 x 1TB LUNs.
ZFS then provides my redundancy and data integrity.
--
Paul Kraus
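Presumably the pool on top is built something like this (device names
hypothetical); ZFS mirrors the hardware-RAID0 LUNs so it can detect and
repair corruption itself:

zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0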
On Sat, Oct 22, 2011 at 12:36 AM, Paul Kraus wrote:
> Recently someone posted to this list about that _exact_ situation; they loaded
> an OS to a pair of drives while a pair of different drives containing an OS
> were still attached. The zpool on the first pair ended up not being able
as it does not try to change the data).
This was originally reported to me as a problem with ZFS, SAMBA,
or the ACLs I had set up. It is amazing how much _changing_ of data
goes on with no knowledge by the end users.
--
Paul Kraus
> Can you elaborate #3? In what situation will it happen?
>
> Thanks.
>
> Fred
--
Paul Kraus
operation rewrote the data that had been corrupted on the failing
component. No corrupt data was ever presented to the application.
--
Paul Kraus
So far, ZFS is one of the
technologies that has not let me down. Of course, in some cases it has
taken weeks if not months to resolve or work around a "bug" in the
code, but in all cases the data was recovered.
--
Paul Kraus
I have seen too many
horror stories on this list, so I just avoid it).
--
Paul Kraus
not a substitute for a real online rebalance,
but it gets the job done (if you can take the data offline, I do it a
small chunk at a time).
--
Paul Kraus
c3t5000C5001A55F7A6d0 ONLINE 0 0 0 114K repaired
c3t5000C5001A5347FEd0 ONLINE 0 0 0
spares
c3t5000C5001A485C88d0    AVAIL
c3t5000C50026A0EC78d0    AVAIL
errors: No known data errors
--
Paul Kraus
On Wed, Oct 05, 2011 at 07:28:02PM -0700, Paul Kraus wrote:
> I have been told by Oracle Support (not first line, but someone
> from engineering in response to an escalation) that the code is done
> to put aclmode back in, and that an IDR can probably be cut against
> the 10U10
On Wed, Oct 5, 2011 at 5:56 PM, Paul B. Henson wrote:
> On Thu, Sep 29, 2011 at 07:13:40PM -0700, Paul Kraus wrote:
>
>> Another potential difference ... I have been told by Oracle Support
>> (but have not yet confirmed) that just running the latest zfs code
>> (Solaris
On Thu, Sep 29, 2011 at 07:13:40PM -0700, Paul Kraus wrote:
> Another potential difference ... I have been told by Oracle Support
> (but have not yet confirmed) that just running the latest zfs code
> (Solaris 10U10) will disable the aclmode property, even if you do not
> upgra
I have been told by Oracle Support
(but have not yet confirmed) that just running the latest zfs code
(Solaris 10U10) will disable the aclmode property, even if you do not
upgrade the zpool version beyond 22. I expect to test this next week,
as we _need_ ACLs to work for our data.
--
Paul Kraus
SB1dFB1cmw0QWNNd0RkR1ZnN0JEb2RsLXc&output=html
for the results of some of my testing.
--
Paul Kraus
which would be viable for the
backup server, if we had a spare 20+ TB of storage just sitting
around. Copying off is NOT an option for production due to outage
window _and_ lack of spare 20+ TB of storage :-(
--
Paul Kraus
run the
machine out of memory.
--
Paul Kraus
kernel structure in RAM when the system runs out of
RAM. It has not committed anything to disk.
--
Paul Kraus
, and upgrading the zpool won't
help with legacy snapshots).
--
Paul Kraus
s from trying
to poke legacy mode bits and don't use NFS, maybe you're ok.
--
Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst | hen...@csupomona.edu
California State Polytechnic University | Pomona CA 91768
On Tue, Sep 13, 2011 at 3:31 PM, Paul B. Henson wrote:
> Did update 10 sneak out under cover of darkness or what? I didn't see
> any announcements or chatter about it, google doesn't find anything, and
> the Oracle download site still only shows update 9:
It was suppose
beyond its usual "painful" all the way to
"unusable".
--
Paul B. Henson
orage/solaris/downloads/index.html
I just tried on a U9 and U10 box. On the U10 system, I did a
simple 'chmod g+s' on a directory with an ACL, and wham, the
ACL vanished. Same operation on U9, and the ACL is preserved.
Meh, bogus :(. Thanks for the confirmation.
--
Paul B. Henson
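For anyone who wants to reproduce the check, a sketch (path and user are
hypothetical):

mkdir /tank/test
chmod A+user:webadmin:rwx:allow /tank/test   # add a non-trivial ACE
ls -dv /tank/test                            # ACE is present
chmod g+s /tank/test                         # on U10 this discards the ACL
ls -dv /tank/test                            # on U9 the ACE survives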
On 9/13/2011 5:07 AM, Paul Kraus wrote:
Patch-ID# 144500-19 is the kernel update that is the kernel from 10U10.
Yep, the guy posting on sunmanagers confirmed that was the patch he
installed which broke aclmode.
Did update 10 sneak out under cover of darkness or what? I didn't se
the native zfs
tools (zfs send / recv), the native Solaris tools (cp, mv, etc.), and
Symantec NetBackup. I have not tried other commercial backup systems
as we already have NBU in house.
zfs send / recv is far, far faster than NBU.
--
Paul Kraus
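The basic replication pattern being compared (names hypothetical):

zfs snapshot tank/data@nightly
zfs send tank/data@nightly | ssh backuphost zfs recv -d backuppool
# later runs send only the delta between snapshots:
zfs send -i @lastnight tank/data@nightly | ssh backuphost zfs recv -d backuppool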
On Mon, Sep 12, 2011 at 8:50 PM, Paul B. Henson wrote:
> I recently saw a message posted to the sunmanagers list complaining
> about installing a kernel patch and suddenly having his ACL's disappear
> completely whenever a chmod occurred. I replied and asked him to check
>
whether or not aclmode is gone in
update 10? I didn't think they'd backport such a feature disabling
change to Solaris 10, seems to not line up with the "long term stability
and compatibility" that's supposed to be the benefit there...
--
Paul B. Henson
hear back on status of this new bug
(which looks like an old bug, but the old bug has been fixed in the
patches I'm now running).
On Wed, Aug 3, 2011 at 9:19 AM, Paul Kraus wrote:
> I am having a very odd problem, and so far the folks at Oracle
> Support have not provided a working s
--
Paul Kraus
Add the following to /etc/system and reboot:
set zfs:zfs_arc_max =
The value can be decimal or hex (but don't use a scale suffix like 4g). Best to
keep it a power of 2.
--
Paul Kraus
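For example, to cap the ARC at 4 GB (the value here is hypothetical;
0x100000000 bytes = 4 GB):

set zfs:zfs_arc_max = 0x100000000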
if=/dev/zero
> of=/fooPool0/86gb.tst bs=4096 count=20971520
> ^C6272342+0 records in
> 6272342+0 records out
> 25691512832 bytes (26 GB) copied, 70.4122 s, 365 MB/s
>
>
> real 1m10.433s
> user 0m3.187s
> sys 1m4.426s
beyond the three you indicate. Keep in mind
that once the kernel finds a controller it makes an entry in
/etc/path_to_inst so that the IDs remain consistent if new controllers
are added (earlier in the search path).
--
{1-2-3-4-5-6-7---
e the "fullness"
of this zpool.
On Thu, Aug 4, 2011 at 1:25 PM, Paul Kraus wrote:
> Updates to my problem:
>
> 1. The destroy operation appears to be restarting from the same point
> after the system hangs and has to be rebooted. Oracle gave me the
> following to track prog
zfs recv
... and every time I try to import this zpool I hang the system due to
lack of memory (the box has 32 GB of RAM).
Any suggestions on how to delete / destroy this incomplete snapshot
without running the system out of RAM?
On Wed, Aug 3, 2011 at 9:56 AM, Paul Kraus wrote:
> An additional dat
ly quite valuable for the functionality of the group@ entry.
So I think the implementation of both a "discard" and "deny" aclmode
would need to incorporate the ability to modify the parts of the mode
bits that are not related to the ACL.
--
Paul B. Henson
this might impact
importing this zpool ?
On Wed, Aug 3, 2011 at 9:19 AM, Paul Kraus wrote:
> I am having a very odd problem, and so far the folks at Oracle
> Support have not provided a working solution, so I am asking the crowd
> here while still pursuing it via Oracle Support.
>
>
--
Paul Kraus
greatest possible access that might be allowed by the
ACL, without any consideration of deny entries or group memberships.
Is this description different than how the mode bits are currently
derived when a ZFS ACL is set on an object?
--
Paul B. Henson
re an
update to the NFS 4 ACL spec not very probable, particularly in the
short term...
--
Paul B. Henson
rather than
a per file system level, but that would be considerably more difficult
to achieve 8-/.
If illumos would be willing to consider integrating a change like this,
I would like to discuss the technical details and determine the best
possible implementation.
Thanks...
--
Paul B. Henson
should be OK?
[Not applicable to the root zpool; will the OS installation utility do
the right thing?]
--
Paul Kraus
and its data. You may have to force the operation since you did not
detach the zone.
--
Paul Kraus