To All...
Problem solved. Operator error on my part. (but I did learn something!!
)
Thank you all very much!
--Kenny
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.o
Kenny wrote:
>
> How did you determine from the format output the GB vs MB amount??
>
> Where do you compute 931 GB vs 932 MB from this??
>
> 2. c6t600A0B800049F93C030A48B3EA2Cd0 /scsi_vhci/[EMAIL PROTECTED]
>
> 3. c6t600A0B800049F93C030D48B3EAB6d0
> /scsi_vhci/[EMAIL PROTECTED]
>
It's in t
Ok so I knew it had to be operator headspace...
I found my error and have fixed it in CAM. Thanks to all for helping my
education!!
However I do have a question. And pardon if it's a 101 type...
How did you determine from the format output the GB vs MB amount??
Where do you compute 931 G
Ok so I knew it had to be operator headspace...
I found my error and have fixed it in CAM. Thanks to all for helping my
education!!
However I do have a question. And pardon if it's a 101 type...
How did you determine from the format output the GB vs MB amount??
Where do you compute 931 G
On Thu, 28 Aug 2008, Kenny wrote:
> 2. c6t600A0B800049F93C030A48B3EA2Cd0
> /scsi_vhci/[EMAIL PROTECTED]
Good.
> 3. c6t600A0B800049F93C030D48B3EAB6d0
> /scsi_vhci/[EMAIL PROTECTED]
Oops! Oops! Oops!
It seems that some of your drives have the full 931.01G
On Thu, 28 Aug 2008, Kenny wrote:
> Bob, Thanks for the reply. Yes I did read your white paper and am using
> it!! Thanks again!!
>
> I used zpool iostat -v and it did't give the information as advertised...
> see below
The lack of size information seems quit odd.
Bob
=
exactly :)
On 8/28/08, Kyle McDonald <[EMAIL PROTECTED]> wrote:
> Daniel Rock wrote:
>>
>> Kenny schrieb:
>> >2. c6t600A0B800049F93C030A48B3EA2Cd0
>>
>> > /scsi_vhci/[EMAIL PROTECTED]
>> >3. c6t600A0B800049F93C030D48B3EAB6d0
>>
>> > /scsi_vhci/[EMAIL
Daniel Rock wrote:
>
> Kenny schrieb:
> >2. c6t600A0B800049F93C030A48B3EA2Cd0
>
> > /scsi_vhci/[EMAIL PROTECTED]
> >3. c6t600A0B800049F93C030D48B3EAB6d0
>
> > /scsi_vhci/[EMAIL PROTECTED]
>
> Disk 2: 931GB
> Disk 3: 931MB
>
> Do you see the difference
Kenny schrieb:
>2. c6t600A0B800049F93C030A48B3EA2Cd0
> /scsi_vhci/[EMAIL PROTECTED]
>3. c6t600A0B800049F93C030D48B3EAB6d0
> /scsi_vhci/[EMAIL PROTECTED]
Disk 2: 931GB
Disk 3: 931MB
Do you see the difference?
Daniel
_
Tim,
Per your request...
df -h
bash-3.00# df -h
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d10 98G 4.2G92G 5%/
/devices 0K 0K 0K 0%/devices
ctfs 0K 0K 0K 0%/system/contract
p
Bob, Thanks for the reply. Yes I did read your white paper and am using it!!
Thanks again!!
I used zpool iostat -v and it did't give the information as advertised... see
below
bash-3.00# zpool iostat -v
capacity
operations
On Wed, 27 Aug 2008, Kenny wrote:
>
> Thanks... Yes I did reserve one for Hot spare on the hardware
> side Guess I can change that thinking.
Disks in the 2540 are expensive. The hot spare does not need to be in
the 2540. You also use a suitably large disk (1TB) installed in your
serve
On Wed, Aug 27, 2008 at 1:51 PM, Kenny <[EMAIL PROTECTED]> wrote:
> Tcook - Sorry bout that...
>
> Solaris 10 (8/07 I think)
> ZFS version 4
>
> How can I upgrade ZFS w/o having to rebuild with Sol 10 5/08?
>
> Thanks --Kenny
>
>
Please paste the output of df, zpool status, and format so we can
On Wed, 27 Aug 2008, Kenny wrote:
> Tcook - Sorry bout that...
>
> Solaris 10 (8/07 I think)
> ZFS version 4
>
> How can I upgrade ZFS w/o having to rebuild with Sol 10 5/08?
You can use 'smpatch' to apply patches to your system so that
kernel/zfs wise it is essentially Sol 10 5/08. However, I
Kenny wrote:
> Arron,
>
> Thanks... Yes I did reserve one for Hot spare on the hardware side
> Guess I can change that thinking.
>
> Solaris 10 8/07 is my OS.
>
> This storage is to become our syslog repository for approx 20 servers. We
> have approx 3TB of data now and wanted space to g
Claus, Thanks for the sanity check... I thought I wasn't crazy Now on to
find out why my 9TB turned into 9GB...
Thanks again
--Kenny
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://m
Arron,
Thanks... Yes I did reserve one for Hot spare on the hardware side Guess
I can change that thinking.
Solaris 10 8/07 is my OS.
This storage is to become our syslog repository for approx 20 servers. We have
approx 3TB of data now and wanted space to grow and keep more online for
Claus - Thanks!! At least I know I'm not going crazy!!
Yes, I've got 11 metric 1 TB disks and would like 10TB useable (end game...)
--Kenny
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mai
Tcook - Sorry bout that...
Solaris 10 (8/07 I think)
ZFS version 4
How can I upgrade ZFS w/o having to rebuild with Sol 10 5/08?
Thanks --Kenny
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http
> Has anyone had issues with creating ZFS pools greater than 1 terabyte (TB)?
>
> I've created 11 LUNs from a Sun 2540 Disk array (approx 1 TB each). The host
> system ( SUN Enterprise 5220) reconizes the "disks" as each having 931GB
> space. So that should be 10+ TB in size total. However whe
Couple of questions,
What version of Solaris are you using? (cat /etc/release)
If you're exposing each disk individually through a LUN/2540 Volume, you
don't really gain anything by having a spare on the 2540 (which I assume
you're doing by only exposing 11 LUNs instead of 12). Your best bet is to
On Wed, Aug 27, 2008 at 1:08 PM, Kenny <[EMAIL PROTECTED]> wrote:
> Has anyone had issues with creating ZFS pools greater than 1 terabyte (TB)?
>
> I've created 11 LUNs from a Sun 2540 Disk array (approx 1 TB each). The
> host system ( SUN Enterprise 5220) reconizes the "disks" as each having
> 9
Has anyone had issues with creating ZFS pools greater than 1 terabyte (TB)?
I've created 11 LUNs from a Sun 2540 Disk array (approx 1 TB each). The host
system ( SUN Enterprise 5220) reconizes the "disks" as each having 931GB space.
So that should be 10+ TB in size total. However when I zpool
23 matches
Mail list logo