Did you set autoexpand on? Alternatively, did you try doing a 'zpool online bigpool <disk>' for each disk after the replace completed?
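A minimal sketch of what I mean, using the pool and disk names from your output below (this assumes your build's zpool supports the autoexpand property and the -e flag to 'zpool online', which asks an already-online device to grow into its new size):

# zpool get autoexpand bigpool          (check the current setting)
# zpool set autoexpand=on bigpool       (grow automatically from now on)
# zpool online -e bigpool c6t2d0        (or request expansion disk by disk)
# zpool online -e bigpool c6t3d0
# zpool online -e bigpool c6t4d0
# zpool online -e bigpool c6t5d0
# zpool list bigpool                    (SIZE should now reflect the 1TB disks)

Note that with raidz2 the extra space only shows up once every disk in the vdev has been expanded.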
On Mon, 7 Dec 2009, Alexandru Pirvulescu wrote:
Hi,

I've read before regarding zpool size increase by replacing the vdevs.

The initial pool was a raidz2 with 4 640GB disks. I've replaced each disk with a 1TB one by taking it out, inserting the new disk, doing cfgadm -c configure on the port, and then zpool replace bigpool c6tXd0.

The problem is that the zpool size is the same (2.33TB raw), as seen below:

# zpool list bigpool
NAME      SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
bigpool  2.33T  1.41T  942G  60%  1.00x  ONLINE  -

It should be ~3.8-3.9 TB, right?

I've performed a zpool export/import, but to no avail.

I'm running OpenSolaris build 128a.

Here is the zpool status:

# zpool status bigpool
  pool: bigpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        bigpool     ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            c6t2d0  ONLINE       0     0     0
            c6t3d0  ONLINE       0     0     0
            c6t4d0  ONLINE       0     0     0
            c6t5d0  ONLINE       0     0     0

errors: No known data errors

and here are the disks:

# format </dev/null
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c6t0d0 <DEFAULT cyl 60798 alt 2 hd 255 sec 63>
          /p...@0,0/pci8086,3...@1f,2/d...@0,0
       1. c6t1d0 <DEFAULT cyl 60797 alt 2 hd 255 sec 63>
          /p...@0,0/pci8086,3...@1f,2/d...@1,0
       2. c6t2d0 <ATA-SAMSUNG HD103SJ-00E4-931.51GB>
          /p...@0,0/pci8086,3...@1f,2/d...@2,0
       3. c6t3d0 <ATA-SAMSUNG HD103SJ-00E4-931.51GB>
          /p...@0,0/pci8086,3...@1f,2/d...@3,0
       4. c6t4d0 <ATA-SAMSUNG HD103SJ-00E4-931.51GB>
          /p...@0,0/pci8086,3...@1f,2/d...@4,0
       5. c6t5d0 <ATA-SAMSUNG HD103SJ-00E4-931.51GB>
          /p...@0,0/pci8086,3...@1f,2/d...@5,0
Specify disk (enter its number):

Is there something that I am missing?
Regards,
markm