Paul B. Henson wrote:
> On Thu, 8 May 2008, Mark Shellenbaum wrote:
>
>> we already have the ability to allow users to create/destroy snapshots
>> over NFS. Look at the ZFS delegated administration model. If all you
>> want is snapshot creation/destruction then you will need to grant
>> "snapsho
On Thu, 8 May 2008, Mark Shellenbaum wrote:
> we already have the ability to allow users to create/destroy snapshots
> over NFS. Look at the ZFS delegated administration model. If all you
> want is snapshot creation/destruction then you will need to grant
> "snapshot,mount,destroy" permissions.
On Thu, 8 May 2008, eric kustarz wrote:
> Matt just sent out a code review for this today:
> 6343667 scrub/resilver has to start over when a snapshot is taken
> http://bugs.opensolaris.org/view_bug.do?bug_id=6343667
Wow, this bug was originally opened 30-OCT-2005... I guess it was really
difficult to fix.
Hi,
If I delegate a dataset to a zone, and inside the zone the zone
administrator sets an attribute on that dataset, where is that data
kept? More to the point, at what level is it kept? In the zone? Or on
the pool, with the zone having the privilege to modify that information
at the pool level?
I'm looking i
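(For context, a minimal sketch of the setup being asked about; the zone "myzone" and dataset "tank/zonedata" are made-up names:)

    # In the global zone: delegate a dataset to the zone.
    zonecfg -z myzone
        add dataset
        set name=tank/zonedata
        end
        commit
        exit

    # Inside the zone, the zone administrator can then set properties:
    zfs set compression=on tank/zonedata

As far as I understand it, property values are stored with the dataset itself, i.e. on disk in the pool; the delegated dataset is marked with the "zoned" property, which is what lets the zone administrator change its properties.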
Hi, Chris,
Good topic; I'd like to see comments from the experts as well.
First, serving ZFS over NFS carries a performance penalty, and so far
the L2ARC cache feature is the way to address it. (It is in
OpenSolaris, but not in s10u4 yet; it is targeted for the s10u6
release.)
And,
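(For reference, on a build with L2ARC support a cache device is added like this; the pool and device names are illustrative:)

    # Add an SSD as an L2ARC (read cache) device to an existing pool.
    zpool add tank cache c4t0d0

    # The cache device then shows up in the pool status output.
    zpool status tank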
On Thu, 8 May 2008, Peter Tribble wrote:
> As a regular fileserver, yes - random reads of small files on raidz isn't
> too hot...
That would pretty much be our usage scenario: home directories and
group project directories.
> I just disable NCQ and have done with it.
Doesn't that result i
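(For the archives: as far as I know, NCQ can be disabled for the Solaris sata driver with a tunable in /etc/system; verify the exact tunable for your driver before relying on it:)

    # /etc/system -- limit the sata driver to one outstanding command
    # per device, effectively disabling NCQ (requires a reboot).
    set sata:sata_max_queue_depth = 0x1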
On Wed, 7 May 2008, Bob Friesenhahn wrote:
> > It seems like kind of a waste to allocate 1TB to the operating system,
> > would there be any issue in taking a slice of those boot disks and
> > creating a zfs mirror with them to add to the pool?
>
> You don't want to go there. Keep in mind that th
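(For clarity, what the original poster is proposing would look roughly like this; the slice names are hypothetical, and per the advice above it is generally discouraged:)

    # Add a mirror built from spare slices of the two boot disks
    # to an existing data pool.
    zpool add tank mirror c0t0d0s7 c0t1d0s7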
I have a ZFS-based NFS server (Solaris 10 U4 on x86) where I am seeing
a weird performance degradation as the number of simultaneous sequential
reads increases.
Setup:
NFS client -> Solaris NFS server -> iSCSI target machine
There are 12 physical disks on the iSCSI target machine. Each
On Wed, 7 May 2008, Richard Elling wrote:
> N.B. anyone can purchase a Production Subscription for OpenSolaris which
> would get both "support" and the in-kernel CIFS server.
> http://www.sun.com/service/opensolaris/index.jsp
Wow. That's new, and very intriguing. Any idea on the potential timeline?
Mike DeMarco wrote:
>> Mike DeMarco wrote:
>>> I currently have a zpool with two 8Gbyte disks in it. I need to
>>> replace them with a single 56Gbyte disk.
>>>
>>> with veritas I would just add the disk in as a mirror and break off
>>> the other plex then destroy
> Mike DeMarco wrote:
> > I currently have a zpool with two 8Gbyte disks in it. I need to
> > replace them with a single 56Gbyte disk.
> >
> > with veritas I would just add the disk in as a mirror and break off
> > the other plex then destroy it.
> >
> > I see no way of being able to do this with zfs
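(Since the two 8Gbyte disks are presumably striped, ZFS cannot evacuate or shrink the pool in place; the usual workaround is to copy the data to a new pool. A sketch, with made-up pool and device names, assuming your build has "zfs send -R"; on older builds each filesystem would have to be sent individually:)

    # Create a new pool on the 56Gbyte disk.
    zpool create newtank c3t0d0

    # Snapshot everything recursively and replicate it over.
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs recv -Fd newtank

    # Once the copy is verified, retire the old pool.
    zpool destroy tank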
> > The disks in the SAN servers were indeed striped together with Linux LVM
> > and exported as a single volume to ZFS.
>
> That is really going to hurt. In general, you're much better off
> giving ZFS access to all the individual LUNs. The intermediate
> LVM layer kills the concurrency that's
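(In other words, export each disk as its own LUN and build the pool from those; a sketch with made-up device names:)

    # Give ZFS the individual LUNs so it can dynamically stripe and
    # issue concurrent I/O to each device, instead of funneling
    # everything through one pre-striped LVM volume.
    zpool create tank c2t0d0 c2t1d0 c2t2d0 c2t3d0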