> when you say remove the device, I assume you mean simply make it unavailable
> for import (I can't remove it from the vdev).
Yes, that's what I meant.
> root@openindiana-01:/mnt# zpool import -d /dev/lofi
> pool: ZP-8T-RZ1-01
> id: 9952605666247778346
> state: FAULTED
> status: One or more
On Fri, Jun 15, 2012 at 10:54:34AM +0200, Stefan Ring wrote:
> >> Have you also mounted the broken image as /dev/lofi/2?
> >
> > Yep.
>
> Wouldn't it be better to just remove the corrupted device? This worked
> just fine in my case.
>
Hi Stefan,
when you say remove the device, I assume you mean
Sorry, if you meant distinguishing between true 512 and emulated
512/4k, I don't know; it may be vendor-specific whether drives
expose it through device commands at all.
Tim
On Fri, Jun 15, 2012 at 6:02 PM, Timothy Coalson wrote:
> On Fri, Jun 15, 2012 at 5:35 PM, Jim Klimov wrote:
>> 2012-
On Fri, Jun 15, 2012 at 5:35 PM, Jim Klimov wrote:
> 2012-06-16 0:05, John Martin wrote:
>>>
>>> It's important to know...
>>
>> ...whether the drive is really 4096p or 512e/4096p.
>
>
> BTW, is there a surefire way to learn that programmatically
> from Solaris or its derivatives
prtvtoc should sho
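(For what it's worth, a hedged sketch of how one might check, with placeholder device paths: prtvtoc reports the logical sector size, and smartmontools' smartctl, where installed, usually reports both the logical and physical sizes on SATA drives:)
# prtvtoc /dev/rdsk/c0t0d0s2 | grep 'bytes/sector'
# smartctl -i /dev/rdsk/c0t0d0 | grep -i 'sector size'
A 512e/4096p drive should still show 512 bytes/sector from prtvtoc, but something like "512 bytes logical, 4096 bytes physical" from smartctl, assuming the firmware reports it honestly.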
2012-06-16 0:05, John Martin wrote:
It's important to know...
...whether the drive is really 4096p or 512e/4096p.
BTW, is there a surefire way to learn that programmatically
from Solaris or its derivatives (i.e. from SCSI driver options,
format/scsi/inquiry, SMART or some similar way)? Or if the
On Fri, Jun 15, 2012 at 12:56 PM, Timothy Coalson wrote:
> Thanks for the suggestions. I think it would also depend on whether
> the NFS server has tried to write asynchronously to the pool in the
> meantime, which I am unsure how to test, other than making the txgs
> extremely frequent and watch
On 06/15/12 15:52, Cindy Swearingen wrote:
It's important to identify your OS release to determine if
booting from a 4k disk is supported.
In addition, whether the drive is really 4096p or 512e/4096p.
Hi Hans,
It's important to identify your OS release to determine if
booting from a 4k disk is supported.
Thanks,
Cindy
On 06/15/12 06:14, Hans J Albertsson wrote:
I've got my root pool on a mirror of two 512-byte blocksize disks.
I want to move the root pool to two 2 TB disks with 4k blocks.
T
Thanks for the suggestions. I think it would also depend on whether
the NFS server has tried to write asynchronously to the pool in the
meantime, which I am unsure how to test, other than making the txgs
extremely frequent and watching the load on the log devices. As for
the integer division givi
hi
what is the version of Solaris?
uname -a output?
regards
On 6/15/2012 10:37 AM, Hung-Sheng Tsao Ph.D. wrote:
by the way
when you format, start with cylinder 1; do not use 0
depending on the version of Solaris, you may not be able to use 2 TB as root
regards
On 6/15/2012 9:53 AM, Hung-Sheng Tsao Ph
On Jun 14, 2012, at 1:35 PM, Robert Milkowski wrote:
>> The client is using async writes, which include commits. Sync writes do
>> not need commits.
>>
>> What happens is that the ZFS transaction group commit occurs at more-
>> or-less regular intervals, likely 5 seconds for more modern ZFS
>> sys
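(For anyone wanting to watch this behaviour: on illumos-based systems the interval is the zfs_txg_timeout tunable. A hedged sketch of inspecting or shortening it for testing, with example values only:)
# echo zfs_txg_timeout/D | mdb -k          (read the current interval, in seconds)
# echo zfs_txg_timeout/W 0t1 | mdb -kw     (temporarily force a 1-second interval)
or persistently via /etc/system:
set zfs:zfs_txg_timeout = 1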
[Phil beat me to it]
Yes, the 0s are a result of integer division in DTrace/kernel.
On Jun 14, 2012, at 9:20 PM, Timothy Coalson wrote:
> Indeed they are there, shown with 1 second interval. So, it is the
> client's fault after all. I'll have to see whether it is somehow
> possible to get the s
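(A contrived one-liner, just to illustrate the truncation and the usual workaround of scaling the numerator before dividing; the numbers are made up:)
# dtrace -qn 'BEGIN { this->b = 1536; this->n = 4096;
    printf("unscaled: %d\n", this->b / this->n);
    printf("scaled x1000: %d\n", (this->b * 1000) / this->n);
    exit(0); }'
The first printf gives 0; multiplying before dividing preserves the fractional part (here 375, i.e. 0.375 x 1000).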
by the way
when you format, start with cylinder 1; do not use 0
depending on the version of Solaris, you may not be able to use 2 TB as root
regards
On 6/15/2012 9:53 AM, Hung-Sheng Tsao Ph.D. wrote:
yes
which version of Solaris or BSD are you using?
for BSD I do not know the steps to create a new BE (bo
yes
which version of Solaris or BSD are you using?
for BSD I do not know the steps to create a new BE (boot env)
for S10, OpenSolaris, and Solaris Express (and maybe other OpenSolaris
forks), you use Live Upgrade
for S11 you use beadm
regards
On 6/15/2012 9:13 AM, Hans J Albertsson wrote:
I s
On 06/15/2012 03:35 PM, Johannes Totz wrote:
> On 15/06/2012 13:22, Sašo Kiselkov wrote:
>> On 06/15/2012 02:14 PM, Hans J Albertsson wrote:
>>> I've got my root pool on a mirror of two 512-byte blocksize disks. I
>>> want to move the root pool to two 2 TB disks with 4k blocks. The
>>> server only ha
On 15/06/2012 13:22, Sašo Kiselkov wrote:
> On 06/15/2012 02:14 PM, Hans J Albertsson wrote:
>> I've got my root pool on a mirror of two 512-byte blocksize disks. I
>> want to move the root pool to two 2 TB disks with 4k blocks. The
>> server only has room for two disks. I do have an eSATA connector,
2012-06-15 17:18, Jim Klimov wrote:
7) If you're on live media, try to rename the new "rpool2" to
become "rpool", i.e.:
# zpool export rpool2
# zpool export rpool
# zpool import -N rpool rpool2
# zpool export rpool
Ooops, bad typo in third line; should be:
# zpool expo
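(For the archives, the usual rename-by-import sketch, assuming the old rpool is already exported or, as on live media, not imported at all:)
# zpool export rpool2
# zpool import -N rpool2 rpool
i.e. zpool import [-N] oldname newname brings the pool in under its new name; -N skips mounting the datasets.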
e tricks to
enforce that the new pool uses ashift=12 if that (4KB)
is your hardware native sector size. We had some info
recently on the mailing lists and carried that over to
the illumos wiki:
http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks
3) # zfs snapshot -r rpoo
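(And once the new pool exists, a quick hedged sanity check that the ashift really came out as 12, using the pool name from the steps above:)
# zdb -C rpool2 | grep ashift
which should report ashift: 12 for the vdev if the 4KB trick took effect.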
I suppose I must start by labelling the new disk properly, and give the s0
partition to zpool, so the new zpool can be booted?
Sent from my Android mobile. "Hung-Sheng Tsao Ph.D." wrote:
one possible way:
1) break the mirror
2) install the new HDD, format the HDD
3) create a new zpool on the new HDD with
one possible way:
1) break the mirror
2) install the new HDD, format the HDD
3) create a new zpool on the new HDD with 4k blocks
4) create a new BE on the new pool with the old root pool as source (not
sure which version of "Solaris" or "OpenSolaris" you are using; the
procedure may be different depending on the version)
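(Roughly, on an illumos / Solaris 11-style box, with made-up device names and BE name; Live Upgrade-based releases differ, as noted above:)
# zpool detach rpool c0t1d0s0                (1: break the mirror)
# zpool create rpool2 c0t2d0s0               (3: new pool on the 4k disk)
# beadm create -p rpool2 be-4k               (4: new BE on the new pool)
# beadm activate be-4k
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t2d0s0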
On 06/15/2012 02:14 PM, Hans J Albertsson wrote:
> I've got my root pool on a mirror of two 512-byte blocksize disks.
> I want to move the root pool to two 2 TB disks with 4k blocks.
> The server only has room for two disks. I do have an eSATA connector, though,
> and a suitable external cabinet for
I've got my root pool on a mirror of two 512-byte blocksize disks.
I want to move the root pool to two 2 TB disks with 4k blocks.
The server only has room for two disks. I do have an eSATA connector, though,
and a suitable external cabinet for connecting one extra disk.
How would I go about migrati
>> Have you also mounted the broken image as /dev/lofi/2?
>
> Yep.
Wouldn't it be better to just remove the corrupted device? This worked
just fine in my case.
Hello!
Unfortunately, one of our Areca RAID controllers has encountered a
power failure which corrupted our zpool and partitions.
We have tried to assemble some new headers but it looks like not only
the headers/uberblocks but also the MOS has been damaged.
We now have moved on from trying
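(A hedged first step when the labels/uberblocks are in doubt is simply to dump whatever labels still parse on each backing device; the lofi paths below match the import attempts elsewhere in this thread:)
# zdb -l /dev/lofi/1
# zdb -l /dev/lofi/2
Each vdev normally carries four copies of the label (two at the front, two at the back), so even a partially damaged device sometimes still has a readable one.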
On Fri, Jun 15, 2012 at 07:37:50AM +0200, Stefan Ring wrote:
> > root@solaris-01:/mnt# zpool import -d /dev/lofi
> >   pool: ZP-8T-RZ1-01
> >     id: 9952605666247778346
> >  state: FAULTED
> > status: One or more devices contains corrupted data.
> > action: The pool cannot be imported due to damaged devices or data.
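(For what it's worth, when a pool shows up FAULTED like this, the least destructive thing to try is usually a read-only, rewinding import; a sketch only, and note that -F can discard the most recent writes by rolling back to an older txg:)
# zpool import -d /dev/lofi -o readonly=on -f -F ZP-8T-RZ1-01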