Hi--

I guess I can't begin to understand patching.

Yes, you provided a whole disk to zpool create, but ZFS then labels the
disk itself (with an EFI label) and creates a partition 0, as you can
see in the output below.

Part      Tag    Flag     First Sector        Size        Last Sector
   0        usr    wm               256      19.99GB         41927902

Part      Tag    Flag     First Sector         Size         Last Sector
   0        usr    wm               256       99.99GB          209700062

I'm sorry you had to recreate the pool. This *is* a must-have feature,
and it works as designed in Solaris 11 and, with patch 148098-03 (or
whatever its current equivalent is), in Solaris 10 as well.
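
For reference, once the fix is in place, picking up a grown LUN should
come down to something like this (pool and device names taken from your
output below):

# zpool set autoexpand=on xxxxxxxxxxxx-oraarch

or, for a one-time expansion of a device that has already grown:

# zpool online -e xxxxxxxxxxxx-oraarch c5t60060E800570B900000070B900006547d0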

Maybe it's time for me to recheck this feature in the current Solaris 10
bits.

Thanks,

Cindy



On 07/25/12 16:14, Habony, Zsolt wrote:
Thank you for your replies.

First, sorry for the misleading info. Patch 148098-03 is indeed not included
in the recommended set, but trying to download it shows that 147440-15
obsoletes it, and 147440-19 is included in the latest recommended patch set.
So, given time, the problem solves itself through another patch.
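(To check whether the newer patch is already on a box, something like

# showrev -p | grep 147440

should show it.)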

Just for fun, my case was:

A standard LUN used as a ZFS filesystem, with no redundancy (the storage
array already provides it) and no partitioning; the disk was given directly
to zpool.
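The create command would have been roughly this (device name as shown in
the status output below):

# zpool create xxxxxxxxxxxx-oraarch c5t60060E800570B900000070B900006547d0
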
# zpool status xxxxxxxxxxxx-oraarch
   pool: xxxxxxxxxxxx-oraarch
  state: ONLINE
  scan: none requested
config:

         NAME                                     STATE     READ WRITE CKSUM
         xxxxxxxxxxxx-oraarch                     ONLINE       0     0     0
           c5t60060E800570B900000070B900006547d0  ONLINE       0     0     0

errors: No known data errors

The partition table shows this.

partition>  pr
Current partition table (original):
Total disk sectors available: 41927902 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
   0        usr    wm               256      19.99GB         41927902
   1 unassigned    wm                 0          0              0
   2 unassigned    wm                 0          0              0
   3 unassigned    wm                 0          0              0
   4 unassigned    wm                 0          0              0
   5 unassigned    wm                 0          0              0
   6 unassigned    wm                 0          0              0
   8   reserved    wm          41927903       8.00MB         41944286


As I mentioned, I did not partition it; "zpool create" did.  I had absolutely
no idea how to resize these partitions, where to get the available number of
sectors, or how many should be skipped and reserved ...
Thus I backed up the 10G of data, destroyed the zpool, created it again (the
size was fine now), and restored the data.
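
In case it helps anyone, the round trip was along these lines (the snapshot
name and the dump file path are just placeholders, and the create command is
the same as above):

# zfs snapshot -r xxxxxxxxxxxx-oraarch@move
# zfs send -R xxxxxxxxxxxx-oraarch@move > /backupvol/oraarch.zstream
# zpool destroy xxxxxxxxxxxx-oraarch
# zpool create xxxxxxxxxxxx-oraarch c5t60060E800570B900000070B900006547d0
# zfs receive -dF xxxxxxxxxxxx-oraarch < /backupvol/oraarch.zstream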

The partition table looks like this now; I do not think I could have created
it easily by hand.

partition>  pr
Current partition table (original):
Total disk sectors available: 209700062 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector         Size         Last Sector
   0        usr    wm               256       99.99GB          209700062
   1 unassigned    wm                 0           0               0
   2 unassigned    wm                 0           0               0
   3 unassigned    wm                 0           0               0
   4 unassigned    wm                 0           0               0
   5 unassigned    wm                 0           0               0
   6 unassigned    wm                 0           0               0
   8   reserved    wm         209700063        8.00MB          209716446

Thank you for your help.
Zsolt Habony


