Any ideas?

On Mon, Sep 24, 2012 at 10:46 AM, LIC mesh <licm...@gmail.com> wrote:

> That's what I thought also, but since both prtvtoc and fdisk -G see the
> two disks as the same (and I have not overridden sector size), I am
> confused.
> iostat -xnE:
> c16t5000C5002AA08E4Dd0 Soft Errors: 0 Hard Errors: 323 Transport Errors:
> 489
> Vendor: ATA      Product: ST32000542AS     Revision: CC34 Serial No:
> %FAKESERIAL%
> Size: 2000.40GB <2000398934016 bytes>
> Media Error: 207 Device Not Ready: 0 No Device: 116 Recoverable: 0
> Illegal Request: 0 Predictive Failure Analysis: 0
> c16t5000C5005295F727d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
> Vendor: ATA      Product: ST2000VX000-9YW1 Revision: CV13 Serial No:
> %FAKESERIAL%
> Size: 2000.40GB <2000398934016 bytes>
> Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
> Illegal Request: 0 Predictive Failure Analysis: 0
>
> zpool status:
>   pool: rspool
>  state: ONLINE
>   scan: resilvered 719G in 65h28m with 0 errors on Fri Aug 24 04:21:44 2012
> config:
>
>         NAME                        STATE     READ WRITE CKSUM
>         rspool                      ONLINE       0     0     0
>           raidz1-0                  ONLINE       0     0     0
>             c16t5000C5002AA08E4Dd0  ONLINE       0     0     0
>             c16t5000C5002ABE78F5d0  ONLINE       0     0     0
>             c16t5000C5002AC49840d0  ONLINE       0     0     0
>             c16t50014EE057B72DD3d0  ONLINE       0     0     0
>             c16t50014EE057B69208d0  ONLINE       0     0     0
>         cache
>           c4t2d0                    ONLINE       0     0     0
>         spares
>           c16t5000C5005295F727d0    AVAIL
>
> errors: No known data errors
>
> root@nas:~# zpool replace rspool c16t5000C5002AA08E4Dd0 c16t5000C5005295F727d0
> cannot replace c16t5000C5002AA08E4Dd0 with c16t5000C5005295F727d0: devices
> have different sector alignment
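>
> If the "different sector alignment" in that error refers to the vdev's
> ashift rather than the logical sector size, that would explain why
> prtvtoc and fdisk agree while the replace still fails. A way to check (a
> sketch, assuming zdb is available; the value shown is illustrative):
> root@nas:~# zdb -C rspool | grep ashift
>             ashift: 9
> ashift 9 is 512-byte alignment; a drive reporting a 4096-byte physical
> sector would need ashift 12, which would make the spare incompatible.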
>
>
>
> On Mon, Sep 24, 2012 at 9:23 AM, Gregg Wonderly <gregg...@gmail.com> wrote:
>
>> What is the error message you are seeing on the "replace"?  This sounds
>> like a slice size/placement problem, but clearly, prtvtoc seems to think
>> that everything is the same.  Are you certain that you did prtvtoc on the
>> correct drive, and not one of the active disks by mistake?
>>
>> Gregg Wonderly
>>
>> As does fdisk -G:
>> root@nas:~# fdisk -G /dev/rdsk/c16t5000C5002AA08E4Dd0
>> * Physical geometry for device /dev/rdsk/c16t5000C5002AA08E4Dd0
>> * PCYL     NCYL     ACYL     BCYL     NHEAD NSECT SECSIZ
>>   60800    60800    0        0        255   252   512
>> root@nas:~# fdisk -G /dev/rdsk/c16t5000C5005295F727d0
>> * Physical geometry for device /dev/rdsk/c16t5000C5005295F727d0
>> * PCYL     NCYL     ACYL     BCYL     NHEAD NSECT SECSIZ
>>   60800    60800    0        0        255   252   512
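>>
>> I wonder if fdisk -G and prtvtoc only report the logical sector size
>> from the label, which would read 512 even on a 4K-physical drive. The
>> drive's own report might settle it (a sketch, assuming smartmontools is
>> installed; the exact -d option may vary with this controller):
>> root@nas:~# smartctl -i /dev/rdsk/c16t5000C5005295F727d0
>> ...
>> Sector Sizes:     512 bytes logical, 4096 bytes physical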
>>
>>
>>
>> On Mon, Sep 24, 2012 at 9:01 AM, LIC mesh <licm...@gmail.com> wrote:
>>
>>> Yet another weird thing: prtvtoc shows both drives as having the same
>>> sector size, etc.:
>>> root@nas:~# prtvtoc /dev/rdsk/c16t5000C5002AA08E4Dd0
>>> * /dev/rdsk/c16t5000C5002AA08E4Dd0 partition map
>>> *
>>> * Dimensions:
>>> *     512 bytes/sector
>>> * 3907029168 sectors
>>> * 3907029101 accessible sectors
>>> *
>>> * Flags:
>>> *   1: unmountable
>>> *  10: read-only
>>> *
>>> * Unallocated space:
>>> *       First     Sector    Last
>>> *       Sector     Count    Sector
>>> *          34       222       255
>>> *
>>> *                          First     Sector    Last
>>> * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
>>>        0      4    00        256 3907012495 3907012750
>>>        8     11    00  3907012751     16384 3907029134
>>> root@nas:~# prtvtoc /dev/rdsk/c16t5000C5005295F727d0
>>> * /dev/rdsk/c16t5000C5005295F727d0 partition map
>>> *
>>> * Dimensions:
>>> *     512 bytes/sector
>>> * 3907029168 sectors
>>> * 3907029101 accessible sectors
>>> *
>>> * Flags:
>>> *   1: unmountable
>>> *  10: read-only
>>> *
>>> * Unallocated space:
>>> *       First     Sector    Last
>>> *       Sector     Count    Sector
>>> *          34       222       255
>>> *
>>> *                          First     Sector    Last
>>> * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
>>>        0      4    00        256 3907012495 3907012750
>>>        8     11    00  3907012751     16384 3907029134
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Sep 24, 2012 at 12:20 AM, Timothy Coalson <tsc...@mst.edu> wrote:
>>>
>>>> I think you can fool a recent Illumos kernel into thinking a 4k disk is
>>>> 512 (incurring a performance hit for that disk, and therefore for the
>>>> vdev and pool, but to save a raidz1 it might be worth it):
>>>>
>>>> http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ,
>>>> see "Overriding the Physical Sector Size"
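>>>>
>>>> Roughly what that override looks like, as a sketch based on the wiki
>>>> page (untested here; the vendor field is space-padded to 8 characters,
>>>> and the entry goes in /kernel/drv/sd.conf):
>>>> sd-config-list = "ATA     ST2000VX000-9YW1", "physical-block-size:512";
>>>> Then force the sd driver to re-read its configuration:
>>>> root@nas:~# update_drv -vf sd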
>>>>
>>>> I don't know what you might have to do to coax it to do the replace
>>>> with a hot spare (zpool replace? export/import?).  Perhaps there should be
>>>> a feature in ZFS that notifies when a pool is created or imported with a
>>>> hot spare that can't be automatically used in one or more vdevs?  The whole
>>>> point of hot spares is to have them automatically swap in when you aren't
>>>> there to fiddle with things, which is a bad time to find out it won't work.
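>>>>
>>>> If the override takes effect, something like this might get the spare
>>>> accepted (untested; the export/import may be unnecessary if the new
>>>> size is picked up on reattach):
>>>> root@nas:~# zpool export rspool
>>>> root@nas:~# zpool import rspool
>>>> root@nas:~# zpool replace rspool c16t5000C5002AA08E4Dd0 c16t5000C5005295F727d0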
>>>>
>>>> Tim
>>>>
>>>> On Sun, Sep 23, 2012 at 10:52 PM, LIC mesh <licm...@gmail.com> wrote:
>>>>
>>>>> Well, this is a new one...
>>>>>
>>>>> Illumos/OpenIndiana let me add a device as a hot spare that evidently
>>>>> has a different sector alignment than all of the other drives in the
>>>>> array.
>>>>>
>>>>> So now I'm at the point that I /need/ a hot spare and it doesn't look
>>>>> like I have it.
>>>>>
>>>>> And, worse, the other spares I have are all the same model as said hot
>>>>> spare.
>>>>>
>>>>> Is there anything I can do with this or am I just going to be up the
>>>>> creek when any one of the other drives in the raidz1 fails?
>>>>>
>>>>>
>>>>
>>>
>
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
