Hi again,
I sort of take that back; here's the release history:
Solaris 10 3/05 = Solaris 10 RR 1/05
Solaris 10 1/06 = Update 1
Solaris 10 6/06 = Update 2
Solaris 10 11/06 = Update 3
Solaris 10 8/07 = Update 4
Solaris 10 5/08 = Update 5
I did say it was a "goal" though.
-
Hi Paul,
I believe the goal is to come out with new Solaris updates every 4-6
months; they are sometimes known as quarterly updates.
Regards.
Original Message
Subject: Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08
From: Paul B. Henson <[EMAIL PROTECTED]>
To: Robin Guo <[EMAI
Paul B. Henson wrote:
> Historically I've used hardware raid1 for the boot disks on my servers.
> With the availability of ZFS root, I want to explore making the two
> underlying drives directly available to the operating system and create a
> ZFS mirror to take advantage of error detection and self-healing.
Historically I've used hardware raid1 for the boot disks on my servers.
With the availability of ZFS root, I want to explore making the two
underlying drives directly available to the operating system and create a
ZFS mirror to take advantage of error detection and self-healing.
The current openSolaris in
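For what it's worth, a minimal sketch of that setup (device names are hypothetical; with ZFS root the second disk is typically attached to the install-created pool, and on x86 the new half needs its own boot blocks):

  # attach a second disk to the root pool to form a mirror
  zpool attach rpool c0t0d0s0 c0t1d0s0
  # on x86, install boot blocks on the new half of the mirror
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
  # confirm the resilver completes before relying on the redundancy
  zpool status rpool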
Bob Friesenhahn wrote:
> On Sat, 17 May 2008, James C. McPherson wrote:
>
>> Bob Friesenhahn wrote:
>>> On Fri, 16 May 2008, James C. McPherson wrote:
> 3) I've read that it's best practice to create the RAID set utilizing
> Hardware RAID utilities vice using ZFS raidz. Any wisdom on this
On Sat, 17 May 2008, James C. McPherson wrote:
> Bob Friesenhahn wrote:
>> On Fri, 16 May 2008, James C. McPherson wrote:
3) I've read that it's best practice to create the RAID set utilizing
Hardware RAID utilities vice using ZFS raidz. Any wisdom on this?
>>> You've got a whacking gre
Bob Friesenhahn wrote:
> On Fri, 16 May 2008, James C. McPherson wrote:
>>> 3) I've read that it's best practice to create the RAID set utilizing
>>> Hardware RAID utilities vice using ZFS raidz. Any wisdom on this?
>> You've got a whacking great cache in the ST2540, so you might as
>> well make u
The fix is already in Solaris 10 U6. A patch for S10U5 will only be
available when S10U6 is released.
--
Prabahar.
Veltror wrote:
> Is there any possibility that PSARC 2007/567 can be made into a patch for
> Solaris 10 U5? We are planning to dispose of Veritas as quickly as possible
> but sin
The write throttling improvement is in build 87.
--
Prabahar.
Lori Alt wrote:
> Actually, I only meant that zfs boot was integrated
> into build 90. I don't know about the improved
> write throttling.
>
> I will check into why there was no mention of this
> on the "heads up" page.
>
> Lori
>
> A
On Fri, May 16, 2008 at 01:59:56PM -0700, Vincent Fox wrote:
> So it's pushed back to build 90 now?
Evidently, but build 90 is closed and the bits are in. The WOS images
for build 90 are not out yet; that's just a matter of time.
On Fri, May 16, 2008 at 02:19:29PM -0600, Lori Alt wrote:
> Clarifying further: the install support for zfs root file
> systems went into build 90, but because the current install
> code is closed source, the effect of that integration will not be
> seen until the build 90 SXCE is released. At th
> Anyone out there remember the -d option for share? How do you set the
> share description using the zfs set commands, or is it even possible?
Yes, it is quite hard to find. I filed a bug about this last
summer:
http://bugs.opensolaris.org/view_bug.do?bug_id=6565879
The way to set th
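Until that bug is fixed, a possible workaround (just a sketch; the dataset and path names are hypothetical) is to drop back to legacy sharing, where share(1M) still accepts -d:

  zfs set sharenfs=off tank/export
  share -F nfs -o rw -d "Project files" /tank/export

To persist across reboots, the share line can go in /etc/dfs/dfstab.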
On Fri, May 16, 2008 at 03:12:02PM -0700, Zlotnick Fred wrote:
> The issue with CIFS is not just complexity; it's the total amount
> of incompatible change in the kernel that we had to make in order
> to make the CIFS protocol a first class citizen in Solaris. This
> includes changes in the VFS l
Brian Hechinger wrote:
> On Fri, May 16, 2008 at 02:32:34PM -0600, Lori Alt wrote:
>
>> Install of a zfs root can only be done with the tty-based installer
>> or with Jumpstart. I will make sure that instructions for both
>> are made available by the time that SXDE build 90 is
>> released.
>>
The issue with CIFS is not just complexity; it's the total amount
of incompatible change in the kernel that we had to make in order
to make the CIFS protocol a first class citizen in Solaris. This
includes changes in the VFS layer which would break all S10 file
systems. So in a very real sense C
On Thu, 15 May 2008, Robin Guo wrote:
> Most features and bugfixes so far, up through Nevada build 87 (or 88?), will
> be backported into s10u6. It's about the same (I mean from an outside view,
> not inside) as OpenSolaris 05/08, but certainly some other features, such as
> CIFS, have no plan to be backported to s10u6
By my calculations that makes the possible release date for ZFS boot installer
support around the 9th June 2008. Mark that date in your diary!
Cheers
Andrew.
So it's pushed back to build 90 now?
There was an announcement the other day that build 88 was being skipped and
build 89 would be the official release with ZFS boot.
Not a big deal but someone should do an announcement about the change.
On Fri, May 16, 2008 at 02:32:34PM -0600, Lori Alt wrote:
> Install of a zfs root can only be done with the tty-based installer
> or with Jumpstart. I will make sure that instructions for both
> are made available by the time that SXDE build 90 is
> released.
Will the tty or jumpstart based insta
Install of a zfs root can only be done with the tty-based installer
or with Jumpstart. I will make sure that instructions for both
are made available by the time that SXDE build 90 is
released.
For an attractive, easy-to-use installer that is designed from
the outset to install systems with zfs r
On Fri, 16 May 2008, Lori Alt wrote:
> Clarifying further: the install support for zfs root file
> systems went into build 90, but because the current install
> code is closed source, the effect of that integration will not be
> seen until the build 90 SXCE is released. At that time,
> installs
Clarifying further: the install support for zfs root file
systems went into build 90, but because the current install
code is closed source, the effect of that integration will not be
seen until the build 90 SXCE is released. At that time,
installs will show a screen that give the user an opport
Actually, I only meant that zfs boot was integrated
into build 90. I don't know about the improved
write throttling.
I will check into why there was no mention of this
on the "heads up" page.
Lori
Andrew Pattison wrote:
> Were both of these items (ZFS boot install support and improved write
>
On Fri, 16 May 2008, Vincent Fox wrote:
> If/when ZFS acquires a method to ensure that spare#1 in chassis#1
> only gets used to replace failed disks in chassis#1 then I'll
> reconsider my position. Currently though there is no mechanism to
> ensure this so I could easily see a spare being pull
I run 3510FC and 2540 units in pairs. I build 2 5-disk RAID5 LUNs in each
array, with 2 disks as global spares. Each array has dual controllers and I'm
doing multipath.
Then from the server I have access to 2 LUNs from 2 arrays, and I build a ZFS
RAID-10 set from these 4 LUNs being sure each
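A sketch of that layout with hypothetical device names (c2* from one array, c3* from the other), pairing each mirror across chassis so a whole array can fail without data loss:

  zpool create tank \
    mirror c2t0d0 c3t0d0 \
    mirror c2t1d0 c3t1d0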
Hello all,
I'm having the same problem here; any news?
I need to use ACLs on the GNU/Linux clients. I'm using NFSv3, and on the
GNU/Linux servers that feature was working; I think we need a solution for
Solaris/OpenSolaris. Now, with the "dmm" project, how can we start a migration
process, if
Hello.
Anyone out there remember the -d option for share? How do you set the share
description using the zfs set commands, or is it even possible?
Thanks!
-B
On Fri, 16 May 2008, James C. McPherson wrote:
>
>> 3) I've read that it's best practice to create the RAID set utilizing
>> Hardware RAID utilities vice using ZFS raidz. Any wisdom on this?
>
> You've got a whacking great cache in the ST2540, so you might as
> well make use of it.
Exporting each
On Fri, 16 May 2008, Kenny wrote:
> Sun 2540 FC Disk Array w/12 1TB disk drives
It is interesting that the 2540 is available with large disks now.
> My desire is to create two 5-disk RAID 5 sets with one hot spare each.
> Then using ZFS to pool the 2 sets into one 8 TB Pool with several
> ZFS file
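For reference, from the ZFS side that plan would look roughly like the sketch below (hypothetical device names; each hardware RAID-5 set appears as a single LUN). Note that a plain stripe of two LUNs leaves ZFS able to detect corruption but not self-heal it, since all the redundancy lives below ZFS:

  zpool create tank c2t0d0 c2t1d0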
It has been integrated into Nevada build 90.
Lori
andrew wrote:
> What is the current estimated ETA on the integration of install support for
> ZFS boot/root support to Nevada?
>
> Also, do you have an idea when we can expect the improved ZFS write
> throttling to integrate?
>
> Thanks
>
> An
I have been using zfs boot with lzjb compression on since build 75; from
time to time I have had a similar problem, not sure why.
As a best practice, I snapshot the root filesystem frequently, so that
I can roll back to the last working snapshot.
Rgds,
Andre W.
Victor Latushkin wrote:
>>> this seems
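The habit Andre describes amounts to something like this sketch (the boot-environment dataset name is hypothetical):

  zfs snapshot rpool/ROOT/snv_75@known-good
  # after a change that leaves the root broken or unbootable:
  zfs rollback rpool/ROOT/snv_75@known-good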
Hi everyone,
I've been experimenting with ZFS for some time and I have one question:
Is it possible to hide a file with a ZFS ACL?
Let me explain what I would like to do:
A directory (chmod 0755) contains 3 subdirs: public, private and veryprivate
public has read access to everyone (0755)
private has
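One possible approach (a sketch with hypothetical group and path names; Solaris NFSv4 ACEs are evaluated in order, so allow entries for permitted users must precede the deny):

  # hide the contents of veryprivate from group staff
  chmod A+group:staff:list_directory/read_data:deny /export/veryprivate

Note this hides the directory's contents, not its name; hiding the name itself would mean denying list_directory on the parent, which hides all of its entries.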
| Have you tried to disable vdev caching and leave file level
| prefetching?
If you mean setting zfs_vdev_cache_bshift to 13 (per the ZFS Evil
Tuning Guide) to turn off device-level prefetching then yes, I have
tried turning off just that; it made no difference.
If there's another tunable then
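For reference, the tunables under discussion go in /etc/system (a sketch per the ZFS Evil Tuning Guide; names and defaults are release-dependent, and a reboot is required):

  * shrink vdev cache reads, effectively disabling device-level prefetch
  set zfs:zfs_vdev_cache_bshift = 13
  * disable file-level prefetching
  set zfs:zfs_prefetch_disable = 1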
Robin Guo wrote:
> Hi, Brian
>
> You mean stripe type with multiple disks, or raidz type? I'm afraid
> it's still a single disk
> or mirrors only. If OpenSolaris starts a new project for this kind of
> feature, it'll be backported
> to s10u* eventually, but that needs some time; it sounds like there's
> no possibility in U6, I think.
Hello Robert,
Friday, May 16, 2008, 3:04:48 PM, you wrote:
RM> Hello James,
>>> 2) Does anyone have experience with the 2540?
JCM>> Kinda. I worked on adding MPxIO support to the mpt driver so
JCM>> we could support the SAS version of this unit - the ST2530.
JCM>> What sort of experience are
Hi, Brian
You mean stripe type with multiple disks, or raidz type? I'm afraid
it's still a single disk
or mirrors only. If OpenSolaris starts a new project for this kind of
feature, it'll be backported
to s10u* eventually, but that needs some time; it sounds like there's
no possibility in U6, I think.
Brian H
On May 16, 2008, at 10:04 AM, Robert Milkowski wrote:
> Hello James,
>
>
>>> 2) Does anyone have experience with the 2540?
>
> JCM> Kinda. I worked on adding MPxIO support to the mpt driver so
> JCM> we could support the SAS version of this unit - the ST2530.
>
> JCM> What sort of experience are
Robert Milkowski wrote:
> Yeah, I do have several of them (both 2530 and 2540).
>
> 2530 (SAS) - cables tend to pop out sometimes when you are around
> servers... then MPxIO does not work properly if you just hot-unplug
> and hot-replug the SAS cable...
If you plug the cable back in within 2
Hello Danilo,
Friday, May 16, 2008, 2:00:42 PM, you wrote:
>
...just noticed there is a bug on that, but it seems there is no activity even though it is in state "accepted":
http://bugs.opensolaris.org/view_bug.do?bug_id=6538017
Should I send an email to request-sponsor AT opensolaris DOT org
Hello James,
>> 2) Does anyone have experience with the 2540?
JCM> Kinda. I worked on adding MPxIO support to the mpt driver so
JCM> we could support the SAS version of this unit - the ST2530.
JCM> What sort of experience are you after? I've never used one
JCM> of these boxes in production - o
>> this seems quite easy to me but I don't know how to "move around" to
>> actually implement/propose the required changes.
>>
>>
>> To make grub aware of gzip (as it already is of lzjb) the steps should be:
>>
>>
>> 1. create a new
>>
>> /onnv-gate/usr/src/grub/grub-0.95/stage2/zfs_gzip.c
Is there any possibility that PSARC 2007/567 can be made into a patch for
Solaris 10 U5? We are planning to dispose of Veritas as quickly as possible, but
since all storage on production machines is on EMC Symmetrix with back-end
mirroring, this panic is a showstopper for us. Or is it so intert
Hello Danilo,
Friday, May 16, 2008, 1:34:56 PM, you wrote:
>
Hi Victor,
this seems quite easy to me but I don't know how to "move around" to actually implement/propose the required changes.
To make grub aware of gzip (as it already is of lzjb) the steps should be:
1. create a new
...just noticed there is a bug on that, but it seems there is no activity
even though it is in state "accepted":
http://bugs.opensolaris.org/view_bug.do?bug_id=6538017
Should I send an email to request-sponsor AT opensolaris DOT org to
propose my fix?
Rgrds,
Danilo.
On 16 May 2008, at
On Fri, May 16, 2008 at 09:30:27AM +0800, Robin Guo wrote:
> Hi, Paul
>
> At least, s10u6 will contain L2ARC cache, ZFS as root filesystem, etc..
As far as root zfs goes, are there any plans to support more than just single
disks or mirrors in U6, or will that be for a later date?
-brian
--
"
Kenny wrote:
> Hi! I'm new to the list and new to zfs
>
> I have the following hardware and would like opinions on implementation.
>
> Sun Enterprise T5220, FC HBA, Brocade 200E 4 Gbit switch, Sun 2540 FC Disk
> Array w/ 12 1TB disk drives
>
> My plan is to create a small SAN fabric with the 5220 a
I have Sun Solaris 5.10 Generic_120011-14 and the zpool version is 4. I've
found references to versions 5-10 on the OpenSolaris site.
Are these versions for OpenSolaris only? I've searched the Sun site for ZFS
patches and found nothing (most likely operator headspace). Can I update ZFS
on m
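A quick sketch of checking and upgrading the on-disk version (a pool can only be taken to a version the running OS supports, so patch the OS first):

  zpool upgrade       # lists pools running older on-disk versions
  zpool upgrade -v    # shows every version this release supports
  zpool upgrade -a    # upgrades all pools; only after patching the OS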
Hi Victor,
this seems quite easy to me but I don't know how to "move around"
to actually implement/propose the required changes.
To make grub aware of gzip (as it already is of lzjb) the steps should
be:
1. create a new
/onnv-gate/usr/src/grub/grub-0.95/stage2/zfs_gzip.c
starting from
In terms of permissions, yes, Samba links to AD fine. I haven't done much
testing of file ownership, but you can set permissions from a Windows
workstation.
However, my testing earlier this year showed that Samba treats permissions very
differently from a Windows server. I had a lot of problems
Hi! I'm new to the list and new to zfs
I have the following hardware and would like opinions on implementation.
Sun Enterprise T5220
FC HBA
Brocade 200E 4 Gbit switch
Sun 2540 FC Disk Array w/12 1TB disk drives
My plan is to create a small SAN fabric with the 5220 as the initiator
(additional
> Hi,
>
> using VirtualBox I just tried to move an OpenSolaris 2008.05 boot
> environment (ZFS) onto a gzip-9 compressed dataset, but I have the
> following error from grub:
>
> Error 16: Inconsistent filesystem structure
>
> Googling around I found the same error with ZFS boot and Xen in July 2
What is the current estimated ETA on the integration of install support for ZFS
boot/root support to Nevada?
Also, do you have an idea when we can expect the improved ZFS write throttling
to integrate?
Thanks
Andrew.
Hi,
using VirtualBox I just tried to move an OpenSolaris 2008.05 boot
environment (ZFS) onto a gzip-9 compressed dataset, but I have the
following error from grub:
Error 16: Inconsistent filesystem structure
Googling around I found the same error with ZFS boot and Xen in July
2007:
http:
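For context, a sketch of the setup that triggers this (dataset name hypothetical): the boot environment sits on a gzip-9 dataset, which grub's ZFS reader could not decompress at the time, since it only knew lzjb, as discussed elsewhere in this thread:

  zfs create -o compression=gzip-9 rpool/ROOT/osol-gzip
  # moving the 2008.05 BE onto this dataset then fails at boot with
  # "Error 16: Inconsistent filesystem structure"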
Sumit Gupta wrote:
> The /dev/[r]dsk nodes implement the O_EXCL flag. If a node is opened using
> O_EXCL, subsequent open(2) calls to that node fail. But I don't think the
> same is true for /dev/zvol/[r]dsk nodes. Is that a bug (or maybe an RFE)?
Yes, that seems like a fine RFE. Or a bug, if there'
Hello Chris,
Thursday, May 15, 2008, 5:42:32 AM, you wrote:
CS> I wrote:
CS> | I have a ZFS-based NFS server (Solaris 10 U4 on x86) where I am
CS> | seeing a weird performance degradation as the number of simultaneous
CS> | sequential reads increases.
CS> To update zfs-discuss on this: after m