Hi,
I think LU 94->96 would be fine. If there are no zones on your system,
simply do:
# cd /Solaris_11/Tools/Installers
# liveupgrade20 --nodisplay
# lucreate -c BE94 -n BE96 -p newpool   (the disk used for newpool must have an SMI label)
# luupgrade -u -n BE96 -s
# luactivate BE96
# init 6
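Before the reboot it is worth verifying that the new BE is the one marked for activation; a minimal check, using only the BE names from the commands above:
# lustatus
BE96 should show "yes" in the "Active On Reboot" column before you run init 6.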
> GB of free space, and the snapshot from which
> the clone was created (rpool/ROOT/[EMAIL PROTECTED]) is 2.71 GB. It
> was not until I had more than 2.71 GB of free space that I could
> promote rpool/ROOT/2008.05.
>
> This behavior does not seem to be documented. Is it a bug in the
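(For reference, the situation described above can be inspected with standard ZFS commands; a small sketch, reusing only the dataset names quoted in the message:)
# zfs get origin rpool/ROOT/2008.05          (shows the snapshot the clone depends on)
# zfs list -t snapshot -o name,used,referenced
# zfs promote rpool/ROOT/2008.05             (per the observation above, needs free space
                                              at least equal to the origin snapshot's size)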
Hi, Hernan,
You may not use '-n' with Makefile; that will lead to a swap complaint.
Hernan Freschi wrote:
> I forgot to post arcstat.pl's output:
>
> Time read miss miss% dmis dm% pmis pm% mmis mm% arcsz c
> 22:32:37 556K 525K 94 515K 949K 98 515K 97 1G 1G
> 22:3
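(As a cross-check on the arcsz/c columns arcstat.pl reports, the raw ARC kstats can be read directly; nothing here is specific to Hernan's system:)
# kstat -p zfs:0:arcstats:size zfs:0:arcstats:c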
U6, I think.
Brian Hechinger wrote:
> On Fri, May 16, 2008 at 09:30:27AM +0800, Robin Guo wrote:
>
>> Hi, Paul
>>
>> At least, s10u6 will contain L2ARC cache, ZFS as root filesystem, etc..
>>
>
> As far as root zfs goes, are there any plans to support
> version 5 would be included but
> it was not, do you think that will be in U6?
>
> On Fri, 16 May 2008, Robin Guo wrote:
>
>> Hi, Paul
>>
>> Most of the features and bug fixes so far, up to Nevada build 87 (or 88?), will
>> be backported into s10u6.
>> It's about the
Hi, Paul
Most of the features and bug fixes so far, up to Nevada build 87 (or 88?), will
be backported into s10u6.
It's about the same (I mean from an outside viewer's perspective, not internally) as
OpenSolaris 2008.05,
but certainly some other features, such as CIFS, have no plan to be backported to
s10u6 yet, so ZFS
will be fully ready
>> Aubrey Li wrote:
>>
>>> Robin Guo wrote:
>>>
>>>> Hi, Aubrey
>>>>
>>>> Could you point out the entry you added to menu.lst? I think the issue
>>>> might be that the syntax is not correct.
>
from UFS slice.
Aubrey Li wrote:
Robin Guo wrote:
Hi, Aubrey
Could you point out the entry you added to menu.lst? I think the issue
might be that the syntax is not correct.
Here is my menu.lst:
[EMAIL PROTECTED]:~/work/cpupm-gate$ cat /rpool/boot/grub/menu.lst
splashimage /boot/grub
system dump - no dump device configured
I really appreciate any suggestions!
Thanks,
-Aubrey
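Since only the splashimage line of Aubrey's menu.lst survived above, here is a sketch of the shape a ZFS-root GRUB entry normally takes; the title, pool name, and BE dataset below are placeholders rather than his actual values, and on older builds the findroot line may instead be a plain root (hd0,0,a):
title Solaris Nevada (ZFS root)
findroot (pool_rpool,0,a)
bootfs rpool/ROOT/snv_xx
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive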
--
Regards,
Robin Guo, Xue-Bin Guo
Solaris Kernel and Data Service QE,
Sun China Engineering and Research Institute
Phone: +86 10 82618200 +82296
Email: [EMAIL PROTECTED]
Blog: http://blogs.sun.com/robinguo
l 11 only?
>
> Regards,
>
> Chris
>
And I also see performance loss when I try iSCSI from the local
machine,
but I haven't gathered accurate data yet. That might be a problem that needs
evaluation.
I'll follow this thread to see if there's any progress; thanks for bringing up the
topic.
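(If someone wants to put numbers on that, a crude way to compare raw zvol throughput with the same zvol re-imported over iSCSI from the local machine; the pool name, zvol name, and sizes below are made up for illustration:)
# zfs create -V 4g tank/testvol
# dd if=/dev/zero of=/dev/zvol/rdsk/tank/testvol bs=1024k count=1024   (baseline, straight to the zvol)
# zfs set shareiscsi=on tank/testvol
# iscsiadm add discovery-address 127.0.0.1
# iscsiadm modify discovery --sendtargets enable
Then repeat the dd against the disk device the initiator creates and compare the two rates.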
- Regards,
Robin Guo
Chris Siebenmann wrote:
spare was disconnected from
the system. The OS was upgraded to snv_79b (SXDE 1/08) and the pool
was re-imported.
I think this weekend I'll try connecting a different drive to that
controller and see if the spare will remove then.
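(For the record, a hot spare is normally dropped from a pool with 'zpool remove'; a sketch with a placeholder pool name, since the pool name isn't quoted here:)
# zpool remove <poolname> c1d0s4
# zpool status <poolname>      (the spares section should no longer list c1d0s4)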
Thanks for your help.
On 2/15/08, Robin Guo <[EMAIL PROTECTED]> wrote:
0
> c1t3d0 ONLINE 0 0 0
> spares
> c1d0s4   UNAVAIL   cannot open
>
> errors: No known data errors
> whole
> disk as presented by Lori Alt?
>
> Roman
>
>
be supported in a rootpool.
Will Murnane wrote:
On Feb 4, 2008 4:37 PM, Robin Guo <[EMAIL PROTECTED]> wrote:
If you use a whole disk for a rootpool, you must use slice notation
(e.g. c0d0s0) so that it is labeled with an SMI label.
Will ZFS recognize that it has t
Hi, Roman
You can use 'zpool attach' to attach a mirror to it, but you cannot 'zpool
add' a new slice to it.
A rootpool can be a single disk device, a device slice, or a
mirrored configuration.
If you use a whole disk for a rootpool, you must use slice notation
(e.g. c0d0s0) so that it
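A sketch of what attaching a mirror to a root pool looks like, with placeholder device names; on x86 the boot blocks must also be installed on the newly attached side by hand:
# zpool attach rpool c0d0s0 c1d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1d0s0
# zpool status rpool      (let the resilver finish before relying on the new mirror side)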
guid=3365726235666077346
> path='/dev/dsk/c3t50002AC00039040Bd0p0'
> devid='id1,[EMAIL PROTECTED]/q'
> whole_disk=0
> metaslab_array=13
> metaslab_shift=31
> ashift=9
> a
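(Label output like the above is what 'zdb -l' prints for a vdev; a sketch using the path field quoted above:)
# zdb -l /dev/dsk/c3t50002AC00039040Bd0p0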
the zpool history command.
> Can anyone help me with the problem?
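(For reference, the basic forms of the command mentioned in the fragment above; the pool name is a placeholder:)
# zpool history <poolname>        (lists the zpool/zfs commands run against the pool)
# zpool history -l <poolname>     (long format, adds user, hostname and zone)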