On Mon, Nov 12, 2012 at 10:39 AM, Trond Michelsen wrote:
> On Sat, Nov 10, 2012 at 5:00 PM, Tim Cook wrote:
> > On Sat, Nov 10, 2012 at 9:48 AM, Jan Owoc wrote:
> >> On Sat, Nov 10, 2012 at 8:14 AM, Trond Michelsen
> >> wrote:
> >>> How can I replace the drive without migrating all the data to a
> >>> different pool? It is possible, I hope?
On Sat, Nov 10, 2012 at 6:19 PM, Jim Klimov wrote:
> On 2012-11-10 17:16, Jan Owoc wrote:
>> Any other ideas short of block pointer rewrite?
> A few... one is an idea of what could be the cause: AFAIK the
> ashift value is not so much per-pool as per-toplevel-vdev.
> If the pool started as a set of the 512b drives and was then
> expanded to include sets of 4k drives, the top-level vdevs can end up
> with different ashift values.
tron...@gmail.com said:
> That said, I've migrated far too many times already. I really, really
> don't want to migrate the pool again, if it can be avoided. I've already
> migrated from raidz1 to raidz2 and then from raidz2 to mirror vdevs. Then,
> even though I already had a mix of 512b a
On Sat, Nov 10, 2012 at 5:04 PM, Tim Cook wrote:
> On Sat, Nov 10, 2012 at 9:59 AM, Jan Owoc wrote:
>> Apparently the currently-suggested way (at least in OpenIndiana) is to:
>> 1) create a zpool on the 4k-native drive
>> 2) zfs send | zfs receive the data
>> 3) mirror back onto the non-4k drive
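A minimal sketch of those three steps, assuming the new 4k-native drive is
c4t5000C5004DE863F2d0 and the old 512b drive is c4t5000C5002AA2F8D6d0 (the
device names from the replace error quoted later in this thread), with
hypothetical pool and snapshot names:

# zpool create newtank c4t5000C5004DE863F2d0
# zfs snapshot -r tank@migrate
# zfs send -R tank@migrate | zfs receive -F newtank
# zpool attach newtank c4t5000C5004DE863F2d0 c4t5000C5002AA2F8D6d0

The new pool picks up ashift=12 from the 4k drive at creation time, and
attaching the 512b drive afterwards keeps that ashift, which is the point
of the exercise.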
On Sat, Nov 10, 2012 at 5:00 PM, Tim Cook wrote:
> On Sat, Nov 10, 2012 at 9:48 AM, Jan Owoc wrote:
>> On Sat, Nov 10, 2012 at 8:14 AM, Trond Michelsen
>> wrote:
>>> How can I replace the drive without migrating all the data to a
>>> different pool? It is possible, I hope?
>> I had the same problem
On 2012-11-10 17:16, Jan Owoc wrote:
Any other ideas short of block pointer rewrite?
A few... one is an idea of what could be the cause: AFAIK the
ashift value is not so much per-pool as per-toplevel-vdev.
If the pool started as a set of the 512b drives and was then
expanded to include sets of 4k drives, the top-level vdevs can end up
with different ashift values.
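If mixed ashift values are the cause, they should be visible per
top-level vdev. A quick check, assuming the pool is named tank as
elsewhere in the thread:

# zdb -C tank | grep ashift

vdevs built from 512b-sector disks report ashift: 9; 4k-sector ones
report ashift: 12.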
On Sat, Nov 10, 2012 at 9:04 AM, Tim Cook wrote:
> On Sat, Nov 10, 2012 at 9:59 AM, Jan Owoc wrote:
>> Sorry... my question was partly answered by Jim Klimov on this list:
>> http://openindiana.org/pipermail/openindiana-discuss/2012-June/008546.html
>>
>> Apparently the currently-suggested way (at least in OpenIndiana) is to:
>> 1) create a zpool on the 4k-native drive
>> 2) zfs send | zfs receive the data
>> 3) mirror back onto the non-4k drive
On Sat, Nov 10, 2012 at 9:59 AM, Jan Owoc wrote:
> On Sat, Nov 10, 2012 at 8:48 AM, Jan Owoc wrote:
> > On Sat, Nov 10, 2012 at 8:14 AM, Trond Michelsen
> wrote:
> >> When I try to replace the old drive, I get this error:
> >>
> >> # zpool replace tank c4t5000C5002AA2F8D6d0 c4t5000C5004DE863F2d0
On Sat, Nov 10, 2012 at 9:48 AM, Jan Owoc wrote:
> On Sat, Nov 10, 2012 at 8:14 AM, Trond Michelsen
> wrote:
> > When I try to replace the old drive, I get this error:
> >
> > # zpool replace tank c4t5000C5002AA2F8D6d0 c4t5000C5004DE863F2d0
> > cannot replace c4t5000C5002AA2F8D6d0 with c4t5000C5004DE863F2d0:
> > devices have different sector alignment
On Sat, Nov 10, 2012 at 8:48 AM, Jan Owoc wrote:
> On Sat, Nov 10, 2012 at 8:14 AM, Trond Michelsen wrote:
>> When I try to replace the old drive, I get this error:
>>
>> # zpool replace tank c4t5000C5002AA2F8D6d0 c4t5000C5004DE863F2d0
>> cannot replace c4t5000C5002AA2F8D6d0 with c4t5000C5004DE863F2d0:
>> devices have different sector alignment
On Sat, Nov 10, 2012 at 8:14 AM, Trond Michelsen wrote:
> When I try to replace the old drive, I get this error:
>
> # zpool replace tank c4t5000C5002AA2F8D6d0 c4t5000C5004DE863F2d0
> cannot replace c4t5000C5002AA2F8D6d0 with c4t5000C5004DE863F2d0:
> devices have different sector alignment
>
>
> How can I replace the drive without migrating all the data to a
> different pool? It is possible, I hope?
On Tue, Sep 25, 2012 at 6:42 PM, LIC mesh wrote:
> The new drive I bought correctly identifies as 4096 byte blocksize!
> So...OI doesn't like it merging with the existing pool.
So... Any solution to this yet?
I've got a 42-drive zpool (21 mirror vdevs) with 12 2TB drives that
have a 512-byte blocksize
Thank you for the link!
Turns out that, even though I bought the WD20EARS and ST32000542AS
expecting a 4096 physical blocksize, they report 512.
The new drive I bought correctly identifies as 4096 byte blocksize!
So...OI doesn't like it merging with the existing pool.
Note: ST2000VX000-9YW1 reports a 4096-byte blocksize.
I'm not sure how to definitively check the physical sector size on
Solaris/illumos, but on Linux, hdparm -I (capital i) or smartctl -i will do
it. OpenIndiana's smartctl doesn't output this information yet (and it
doesn't work on SATA disks unless they're attached via a SAS chip). The
issue is comp
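For example, on a Linux box (device name hypothetical):

# hdparm -I /dev/sdb | grep -i 'sector size'
# smartctl -i /dev/sdb | grep -i 'sector size'

An advanced-format drive should show 512 bytes logical and 4096 bytes
physical; the WD20EARS/ST32000542AS mentioned above report 512 for both.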
Any ideas?
On Mon, Sep 24, 2012 at 10:46 AM, LIC mesh wrote:
> That's what I thought also, but since both prtvtoc and fdisk -G see the
> two disks as the same (and I have not overridden sector size), I am
> confused.
> iostat -xnE:
> c16t5000C5002AA08E4Dd0 Soft Errors: 0 Hard Errors: 323 Transport Errors: 489
That's what I thought also, but since both prtvtoc and fdisk -G see the two
disks as the same (and I have not overridden sector size), I am confused.
iostat -xnE:
c16t5000C5002AA08E4Dd0 Soft Errors: 0 Hard Errors: 323 Transport Errors:
489
Vendor: ATA Product: ST32000542AS Revision:
What is the error message you see on the "replace"? This sounds like a
slice size/placement problem, but prtvtoc clearly thinks everything is the
same. Are you certain you ran prtvtoc on the correct drive, and not on one
of the active disks by mistake?
Gregg Wonderly
As does fdisk -G:
root@nas:~# fdisk -G /dev/rdsk/c16t5000C5002AA08E4Dd0
* Physical geometry for device /dev/rdsk/c16t5000C5002AA08E4Dd0
* PCYL NCYL ACYL BCYL NHEAD NSECT SECSIZ
60800  60800  0     0     255   252   512
root@nas:~# fdis
Yet another weird thing - prtvtoc shows both drives as having the same
sector size, etc:
root@nas:~# prtvtoc /dev/rdsk/c16t5000C5002AA08E4Dd0
* /dev/rdsk/c16t5000C5002AA08E4Dd0 partition map
*
* Dimensions:
* 512 bytes/sector
* 3907029168 sectors
* 3907029101 accessible sectors
*
* Flags:
*
I think you can fool a recent Illumos kernel into thinking a 4k disk is 512
(incurring a performance hit for that disk, and therefore the vdev and
pool, but to save a raidz1, it might be worth it):
http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks , see
"Overriding the Physical
Well, this is a new one.
Illumos/OpenIndiana let me add a device as a hot spare that evidently has a
different sector alignment than all of the other drives in the array.
So now I'm at the point that I /need/ a hot spare and it doesn't look like
I have it.
And, worse, the other spares I have a
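If the mismatched spare has to come out, hot spares (unlike data vdevs) can
simply be removed and a matching one added; device names here are
hypothetical:

# zpool remove tank c4t5000C500AAAAAAAAd0
# zpool add tank spare c4t5000C500BBBBBBBBd0

The replacement obviously needs to be a disk whose reported sector size
matches the rest of the pool.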