Richard Lowe wrote:
Patrick Petit wrote:
Hi,
Some additional information. Irrespective of the SCSI error reported
earlier, I have established that Solaris dom0 hangs anyway when a domU
is booted from a disk image located on an emulated ZFS volume. Has
this also been observed by other members of the community?
On Thu, Aug 03, 2006 at 01:35:54AM -0700, Tom Simpson wrote:
> Well,
>
> You're spot on. Turns out that our datacentre boys changed the umask of root
> to 0027.
>
> :-(
Many years ago, back in the days of Solaris 2.5.1, changing root's umask
to 027 caused problems if you, say, restarted the aut
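A rough sketch of the interaction being discussed, using a hypothetical pool
and dataset; the exact mode bits depend on the build, so treat this as an
illustration rather than a definitive reproduction:

    # umask
    0027                        # restrictive root umask set by site policy
    # zfs create tank/home      # hypothetical pool/dataset
    # ls -ld /tank/home         # mountpoint can come out group/other-restricted, e.g. drwxr-x---
    # chmod 755 /tank/home      # interim workaround until 6452505 is addressed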
> Anton B. Rang wrote:
> > I'd filed 6452505 (zfs create should set permissions on underlying
> > mountpoint) so that this shouldn't cause problems in the future
>
> Err... the way you have described that seems backward to me, and violates
> expected, well-known Solaris behaviour, not to
On Thu, Aug 03, 2006 at 03:50:20PM -0700, Philip Brown wrote:
>
> Err... the way you have described that seems backward to me, and violates
> expected, well-known Solaris behaviour, not to mention the logical
> separation of filesystems.
> zfs should not go changing the permissions on the [presum
Anton B. Rang wrote:
I'd filed 6452505 (zfs create should set permissions on underlying mountpoint)
so that this shouldn't cause problems in the future
Err... the way you have described that seems backward to me, and violates
expected, well-known Solaris behaviour, not to mention lo
Any news on when Bug-Id #6273505, regarding removing disks and shrinking a pool,
might feature in a release?
Patrick Petit wrote:
Hi,
Some additional information. Irrespective of the SCSI error reported
earlier, I have established that Solaris dom0 hangs anyway when a domU
is booted from a disk image located on an emulated ZFS volume. Has this
also been observed by other members of the community? Is th
Apologies for the internal URL; I'm including the list of patches for
everyone's benefit:
sparc Patches
  * ZFS Patches
    o 118833-17 SunOS 5.10: kernel patch
    o 118925-02 SunOS 5.10: unistd header file patch
    o 119578-20 SunOS 5.10: FMA Patch
    o 119982-
Hi,
Some additional information. Irrespective of the SCSI error reported
earlier, I have established that Solaris dom0 hangs anyway when a domU
is booted from a disk image located on an emulated ZFS volume. Has this
also been observed by other members of the community? Is there a known
explanati
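For context, an emulated ZFS volume (zvol) of the kind referred to above is
typically created along these lines; the pool name, volume name and size are
placeholders, not taken from the original report:

    # zfs create -V 8g tank/domu1-disk        # carve an 8 GB emulated volume out of the pool
    # ls -lL /dev/zvol/dsk/tank/domu1-disk    # block device backing the domU's virtual disk

The domU configuration then points its virtual disk at that /dev/zvol/dsk/...
device (or at a disk image file stored on it).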
Eric Schrock wrote:
On Thu, Aug 03, 2006 at 10:24:12AM -0700, Marion Hakanson wrote:
zpool create mirror c0t2d0 c0t3d0 mirror c0t0d0s5 c0t1d0s5
Is this allowed? Is it stupid? Will performance be so bad/bizarre that
it should be avoided at all costs? Anybody tried it?
Yes, it's a
On Thu, Aug 03, 2006 at 10:24:12AM -0700, Marion Hakanson wrote:
>
> zpool create mirror c0t2d0 c0t3d0 mirror c0t0d0s5 c0t1d0s5
>
> Is this allowed? Is it stupid? Will performance be so bad/bizarre that
> it should be avoided at all costs? Anybody tried it?
>
Yes, it's allowed, but it's de
Folks,
I realize this thread has run its course, but I've got a variant of
the original question: What performance problems or anomalies might
one see when mixing both whole disks _and_ slices within the same pool?
I have in mind some Sun boxes (V440, T2000, X4200) with four internal
drives. Typi
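For reference, the mixed configuration being asked about looks roughly like the
sketch below; the pool name is made up and the device names are illustrative
(note that zpool create expects a pool name ahead of the vdev list):

    # zpool create tank mirror c0t2d0 c0t3d0 mirror c0t0d0s5 c0t1d0s5
    # zpool status tank     # one mirror built from whole disks, one from slices

ZFS stripes writes dynamically across the two mirrors, so the slower vdev
(typically the slice-based one, since its write cache stays off) tends to set
the pace for the pool as a whole.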
Ahh, interesting information. Thanks folks, I have a better
understanding of this now.
--joe
Jeff Bonwick wrote:
Is ZFS any less efficient when using just a portion of a
disk versus the entire disk?
As others mentioned, if we're given a whole disk (i.e. no slice
is specified) then we
Anton B. Rang wrote:
I'd filed 6452505 (zfs create should set permissions on underlying mountpoint)
so that this shouldn't cause problems in the future
6238072 might also be of interest.
Darren
On Aug 3, 2006, at 5:14 PM, Darren Dunham wrote:
And it's portable. If you use whole disks, you can export the
pool from one machine and import it on another. There's no way
to export just one slice and leave the others behind...
I got the impression that the export command exported the con
I'd filed 6452505 (zfs create should set permissions on underlying mountpoint)
so that this shouldn't cause problems in the future
> > And it's portable. If you use whole disks, you can export the
> > pool from one machine and import it on another. There's no way
> > to export just one slice and leave the others behind...
>
> I got the impression that the export command exported the contents
> of the pool, not the underlyin
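To illustrate the portability point: zpool export and import operate on an
entire pool, so every device that makes up the pool moves together; there is no
per-slice export. A minimal sketch with a hypothetical pool name:

    host1# zpool export tank    # quiesces and releases all of the pool's devices
    host2# zpool import         # scans attached disks and lists importable pools
    host2# zpool import tank    # brings in the whole pool, every vdev included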
Path failover is not handled by ZFS. You would use MPxIO, or other
software, to take care of path failover.
Pierre Klovsjo wrote:
Greetings all,
I have been given the task of playing around with ZFS and a StorEdge 9970 (HDS 9970) disk array. This setup will be duplicated into a production syst
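As a rough sketch of the multipathing side (this is the stmsboot/MPxIO
mechanism, not anything ZFS-specific, and the pool and device names are
hypothetical):

    # stmsboot -e                         # enable MPxIO for supported HBAs; requires a reboot
    # stmsboot -L                         # after reboot, list non-STMS to STMS device name mappings
    # zpool create tank <mpxio device>    # ZFS then sees one multipathed (scsi_vhci) device per LUN

ZFS only ever sees the single multipathed device; path failover happens in the
layers underneath it.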
James C. McPherson wrote:
Patrick Petit wrote:
Darren Reed wrote:
Patrick Petit wrote:
Using a ZFS emulated volume, I wasn't expecting to see a system [1]
hang caused by a SCSI error. What do you think? The error is not
systematic. When it happens, the Solaris/Xen dom0 console keeps
disp
Patrick Petit wrote:
Darren Reed wrote:
Patrick Petit wrote:
Using a ZFS emulated volume, I wasn't expecting to see a system [1]
hang caused by a SCSI error. What do you think? The error is not
systematic. When it happens, the Solaris/Xen dom0 console keeps
displaying the following message an
Robert Milkowski <[EMAIL PROTECTED]> writes:
> Additionally, keep in mind that the outer region of a disk is much faster.
> So if you want to put the OS there and then designate the rest of the disk
> for applications, putting ZFS on a slice beginning at cylinder 0 is
> probably best in most scenarios.
This has the a
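A quick way to confirm where a slice actually begins before handing it to ZFS;
the device names are placeholders and this is only a sketch of the idea above:

    # prtvtoc /dev/rdsk/c0t0d0s2    # print the slice table; the chosen slice should start at sector 0
    # zpool create tank c0t0d0s0    # hypothetical: pool on the slice laid out at the front of the disk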
Darren Reed wrote:
Patrick Petit wrote:
Hi,
Using a ZFS emulated volume, I wasn't expecting to see a system [1]
hang caused by a SCSI error. What do you think? The error is not
systematic. When it happens, the Solaris/Xen dom0 console keeps
displaying the following message and the system h
Patrick Petit wrote:
Hi,
Using a ZFS emulated volume, I wasn't expecting to see a system [1]
hang caused by a SCSI error. What do you think? The error is not
systematic. When it happens, the Solaris/Xen dom0 console keeps
displaying the following message and the system hangs.
Aug 3 11:11
Hi,
Using a ZFS emulated volume, I wasn't expecting to see a system [1] hang
caused by a SCSI error. What do you think? The error is not systematic.
When it happens, the Solaris/Xen dom0 console keeps displaying the
following message and the system hangs.
Aug 3 11:11:23 jesma58 scsi: WARNI
On Aug 3, 2006, at 8:17 AM, Jeff Bonwick wrote:
ZFS will try to enable the write cache if a whole disk is given.
Additionally, keep in mind that the outer region of a disk is much faster.
And it's portable. If you use whole disks, you can export the
pool from one machine and import it on another. Th
Well,
You're spot on. Turns out that our datacentre boys changed the umask of root to
0027.
:-(
Greetings all,
I have been given the task of playing around with ZFS and a StorEdge 9970 (HDS
9970) disk array. This setup will be duplicated into a production system later
with zones as well.
Since I am new to ZFS and big storage arrays such as the 9970, I have a few
thoughts/questions that
> With all of the talk about performance problems due to
> ZFS doing a sync to force the drives to commit data
> to disk, how much of a benefit is this - especially
> for NFS?
It depends. For some drives it's literally 10x.
> Also, if I was lucky enough to have a working prestoserv
> ca
Jeff Bonwick wrote:
Is ZFS any less efficient when using just a portion of a
disk versus the entire disk?
As others mentioned, if we're given a whole disk (i.e. no slice
is specified) then we can safely enable the write cache.
With all of the talk about performance problems due to
ZF
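One way to verify whether the write cache really was enabled on a whole-disk
vdev is format(1M)'s expert mode; roughly, and only on drives that expose the
cache menu:

    # format -e
    format> cache
    cache> write_cache
    write_cache> display    # reports whether the drive's write cache is currently enabled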