Daniel,
What is the actual size of c1d1?
>I notice that the size of the first partition is wildly inaccurate.
If format doesn't understand the disk, then ZFS won't either.
Do you have some kind of intervening software like EMC PowerPath,
or are these disks under some virtualization control?
If so, I would try removing them from this control and retry the
add operation.
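You might also sanity-check what the OS thinks the disk looks like,
for example:

# iostat -En c1d1
# prtvtoc /dev/rdsk/c1d1s2

iostat -En reports the capacity the driver sees, and prtvtoc dumps
the current label. If the label itself is garbage, relabeling from
format's expert mode (format -e, select c1d1, then "label") is
sometimes enough to clear it.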
Cindy
On 10/29/09 12:07, Daniel wrote:
Yes I am trying to create a non-redundant pool of two disks.
The output of format -> partition for c0d0
Current partition table (original):
Total disk sectors available: 976743646 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size     Last Sector
  0        usr    wm               34    465.75GB       976743646
  1 unassigned    wm                0           0               0
  2 unassigned    wm                0           0               0
  3 unassigned    wm                0           0               0
  4 unassigned    wm                0           0               0
  5 unassigned    wm                0           0               0
  6 unassigned    wm                0           0               0
  8   reserved    wm        976743647      8.00MB       976760030
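(Sanity check: 976743646 sectors x 512 bytes is about 465.76 GiB, so
this table looks right for a 500 GB disk.)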
And for c1d1
Current partition table (original):
Total disk sectors available: 18446744072344831966 + 16384 (reserved sectors)

Part      Tag    Flag          First Sector               Size            Last Sector
  0        usr    wm                     34    8589934592.00TB   18446744072344831966
  1 unassigned    wm                      0                  0                      0
  2 unassigned    wm                      0                  0                      0
  3 unassigned    wm                      0                  0                      0
  4 unassigned    wm                      0                  0                      0
  5 unassigned    wm                      0                  0                      0
  6 unassigned    wm                      0                  0                      0
  8   reserved    wm   18446744072344831967             8.00MB   18446744072344848350
I notice that the size of the first partition is wildly inaccurate.
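(For what it's worth, 18446744072344831966 is just below 2^64 =
18446744073709551616, so it looks like a negative size that wrapped
around in an unsigned 64-bit field.)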
Creating tank2 gives the same error:
# zpool create tank2 c1d1
cannot create 'tank2': invalid argument for this pool operation
Thanks for your help.
On Thu, Oct 29, 2009 at 1:54 PM, Cindy Swearingen
<cindy.swearin...@sun.com> wrote:
I might need to see the format-->partition output for both c0d0 and
c1d1.
But in the meantime, you could try this:
# zpool create tank2 c1d1
# zpool destroy tank2
# zpool add tank c1d1
Adding the c1d1 disk to the tank pool will create a non-redundant pool
of two disks. Is this what you had in mind?
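If the add succeeds, zpool status tank should then list both c0d0 and
c1d1 as top-level devices. One caveat: a disk added this way can't be
removed from the pool later, so make sure a non-redundant two-disk
pool is really what you want.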
Thanks,
Cindy
On 10/29/09 10:17, Daniel wrote:
Here is the output of zpool status and format.
# zpool status tank
pool: tank
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
c0d0 ONLINE 0 0 0
errors: No known data errors
format> current
Current Disk = c1d1
<ST315003- 6VS08NK-0001-16777215.>
/p...@0,0/pci-...@1f,2/i...@0/c...@1,0
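(I also notice the 16777215 in that inquiry string: that's 2^24 - 1,
which may itself be a bogus block count rather than real geometry.)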
On Thu, Oct 29, 2009 at 12:04 PM, Cindy Swearingen
<cindy.swearin...@sun.com> wrote:
Hi Dan,
Could you provide a bit more information, such as:
1. zpool status output for tank
2. the format entries for c0d0 and c1d1
Thanks,
Cindy
----- Original Message -----
From: Daniel <dan.lis...@gmail.com>
Date: Thursday, October 29, 2009 9:59 am
Subject: [zfs-discuss] adding new disk to pool
To: zfs-discuss@opensolaris.org
> Hi,
>
> I just installed 2 new disks in my Solaris box and would like to add
> them to my zfs pool.
> After installing the disks I run
> # zpool add -n tank c1d1
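> (the -n flag does a dry run: it prints the resulting configuration
> without modifying the pool)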
>
> and I get:
>
> would update 'tank' to the following configuration:
> tank
> c0d0
> c1d1
>
> Which is what I want; however, when I omit the -n I get the
> following error:
>
> # zpool add tank c1d1
> cannot add to 'tank': invalid argument for this pool operation
>
> I get the same message for both drives, with and without the -f
> option.
> Any help is appreciated thanks.
>
> --
> -Daniel
--
-Daniel
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss