> You want to install the zfs boot block, not the ufs
> bootblock.
Oh duh. I tried to correct my mistake using this:
installboot /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0
And now get this:
Boot device: disk File and args:
Can't mount root
Evaluating:
The file just loaded does not appear to be executable.
Subject: Re: [zfs-discuss] ZFS root boot failure?
To: zfs-discuss@opensolaris.org
> Followup with modified test plan:
>
> 1) Yank disk0 from V240.
> Waited for it to be marked FAULTED in zpool status -x
> 2) Inserted new disk0 scavenged from another system
> 3) Ran format to set s0 as full-disk to agree with other system
Followup with modified test plan:
1) Yank disk0 from V240.
Waited for it to be marked FAULTED in zpool status -x
2) Inserted new disk0 scavenged from another system
3) Ran format to set s0 as full-disk to agree with other system
4) Halted system
5) boot disk1
Wanted to make sure Jumpstart mirror s
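The steps above would, on a ZFS root mirror, roughly correspond to the following sketch (device names c1t0d0s0/c1t1d0s0 and the pool name "rpool" are assumptions, not from the original posts):

```shell
# Sketch: replacing the failed half of a ZFS root mirror (SPARC)
zpool status -x                    # confirm disk0 is marked FAULTED
zpool replace rpool c1t0d0s0       # resilver onto the freshly inserted disk
zpool status rpool                 # wait until the resilver completes
# zpool replace does NOT install a boot block on the new disk,
# so the zfs bootblk must be reinstalled by hand:
installboot /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0
```

Without the final installboot step, the new disk0 resilvers fine but is not independently bootable.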
On Thu, Jun 12, 2008 at 07:31:49PM +0200, Richard Elling wrote:
> Kurt Schreiner wrote:
> > On Thu, Jun 12, 2008 at 06:38:56AM +0200, Richard Elling wrote:
> >
> >> Vincent Fox wrote:
> >>
> >>> So I decided to test out failure modes of ZFS root mirrors.
> >>>
> >>> Installed on a V240 with nv90.
Kurt Schreiner wrote:
> On Thu, Jun 12, 2008 at 06:38:56AM +0200, Richard Elling wrote:
>
>> Vincent Fox wrote:
>>
>>> So I decided to test out failure modes of ZFS root mirrors.
>>>
>>> Installed on a V240 with nv90. Worked great.
>>>
>>> Pulled out disk1, then replaced it and attached again, resilvered, all good.
Vincent,
I think you are running into some existing bugs, particularly this one:
http://bugs.opensolaris.org/view_bug.do?bug_id=6668666
Please review the list of known issues here:
http://opensolaris.org/os/community/zfs/boot/
Also check out the issues described on page 77 in this section:
Bo
> On Thu, Jun 12, 2008 at 06:38:56AM +0200, Richard Elling wrote:
>
> pull disk1
> replace
> *resilver*
> pull disk0
> ...
> So the 2 disks should be in sync (due to
> resilvering)? Or is there
> another step needed to get the disks in sync?
That is an accurate summary. I thought
On Thu, Jun 12, 2008 at 07:28:23AM -0400, Brian Hechinger wrote:
> I think something else that might help is if ZFS were to boot, see that
> the volume it booted from is older than the other one, print a message
> to that effect and either halt the machine or issue a reboot pointing
> at the other
On Wed, Jun 11, 2008 at 10:43:26PM -0700, Richard Elling wrote:
>
> AFAIK, SVM will not handle this problem well. ZFS and Solaris
> Cluster can detect this because the configuration metadata knows
> the time difference (ZFS can detect this by the latest txg).
Having been through this myself with
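One way to see the txg difference Richard describes is to dump the vdev labels on each half of the mirror and compare the last-synced transaction group (device paths here are hypothetical):

```shell
# Sketch: compare the txg recorded in the vdev labels of both mirror halves
zdb -l /dev/rdsk/c1t0d0s0 | grep txg
zdb -l /dev/rdsk/c1t1d0s0 | grep txg
# The half reporting the lower txg is the stale one; booting from it
# risks coming up with an out-of-date view of the pool.
```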
On Thu, Jun 12, 2008 at 06:38:56AM +0200, Richard Elling wrote:
> Vincent Fox wrote:
> > So I decided to test out failure modes of ZFS root mirrors.
> >
> > Installed on a V240 with nv90. Worked great.
> >
> > Pulled out disk1, then replaced it and attached again, resilvered, all good.
> >
> > Now
Vincent Fox wrote:
> Ummm, could you back up a bit there?
>
> What do you mean "disk isn't sync'd so boot should fail"? I'm coming from
> UFS of course where I'd expect to be able to fix a damaged boot drive as it
> drops into a single-user root prompt.
>
> I believe I did try boot disk1 but tha
Vincent Fox wrote:
> So I decided to test out failure modes of ZFS root mirrors.
>
> Installed on a V240 with nv90. Worked great.
>
> Pulled out disk1, then replaced it and attached again, resilvered, all good.
>
> Now I pull out disk0 to simulate failure there. OS up and running fine, but
> lot
Ummm, could you back up a bit there?
What do you mean "disk isn't sync'd so boot should fail"? I'm coming from UFS
of course where I'd expect to be able to fix a damaged boot drive as it drops
into a single-user root prompt.
I believe I did try boot disk1 but that failed I think due to prior t
Sounds correct to me. The disk isn't sync'd so boot should fail. If
you pull disk0 or set disk1 as the primary boot device what does it
do? You can't expect it to resilver before booting.
On 6/11/08, Vincent Fox <[EMAIL PROTECTED]> wrote:
> So I decided to test out failure modes of ZFS root mirrors.
So I decided to test out failure modes of ZFS root mirrors.
Installed on a V240 with nv90. Worked great.
Pulled out disk1, then replaced it and attached again, resilvered, all good.
Now I pull out disk0 to simulate failure there. OS up and running fine, but
lots of error messages about SYNC CACHE
I'm going to be installing B77 in the next couple days. I've built a
server using a SiL3124 card, but the BIOS of the machine for whatever
reason doesn't get along with grub so I'm not able to actually boot
from it.
This isn't a huge concern, as I've ordered (and hopefully they will arrive
soon) a
Further to this, there's quite a trick to getting fdisk to work. It's not at
all intuitive!
From here:
http://www.opensolaris.org/jive/thread.jspa?messageID=77297
run format->fdisk, delete the current partition, quit format
rerun format, answer no if you are asked to Label it now.
select fdisk
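As a rough transcript, the two-pass sequence above looks something like this (interactive menu choices shown as comments; this is a restatement of the steps, not a verbatim log):

```shell
# Pass 1: remove the stale fdisk partition left over from the EFI-labeled disk
format            # select the affected disk
#   -> fdisk
#   -> delete the current partition
#   -> quit format

# Pass 2: recreate a Solaris fdisk partition on the now-clean disk
format            # answer "no" if asked to label the disk now
#   -> fdisk
#   -> accept the default 100% SOLARIS System partition
```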
For the archive, I think the reason this happened is because I'd used ZFS
before on these disks, and fdisk wasn't happy. See here :
As mentioned above, ZFS root boot only works with SMI labeled disks - if you've
ever given ZFS the entire disk before, it'll have put an EFI label on the disk,
so
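If the disk has ended up with an EFI label, it can be put back to an SMI (VTOC) label from format's expert mode; a sketch (this destroys the existing label and partition table):

```shell
# Sketch: replace an EFI label with an SMI label so the disk is root-bootable
format -e         # expert mode exposes the label-type choice
#   -> select the disk
#   -> label
#   -> choose "0" for SMI label (rather than "1" for EFI)
```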
I'm seeing the same problem with b75a. I ran the manual setup instructions - I
have an IDE disk I'm using as the temporary UFS drive and a pair of SATA HDD's
to use as a zfs mirror for root/boot. Did you get it working? If so, how?
reformatting the drive(s) so that s2 is the whole disk?
The argument was c1d0s0 in the first place, i.e.:
installgrub pathToStage1 pathToStage2 /dev/rdsk/c1d0s0
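With the stock stage-file locations on Solaris x86 (assuming the standard /boot/grub paths), the placeholder command above would read:

```shell
# Install GRUB stage1/stage2 into the boot area of slice 0 (x86)
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1d0s0
```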
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs
Lurie wrote:
> So I tried to debug to see what was the problem, I noticed that installgrub
> actually failed, when I tried to run it manually it says:
> cannot open/stat device /dev/rdsk/c1d0s2
>
> What am I doing wrong here ?
>
>
My first guess is that slice 2 simply doesn't exist. Did you try
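A quick way to check whether slice 2 exists, and what label the disk carries, is for example:

```shell
# If this fails with "cannot open" or a geometry error, the disk likely
# has no SMI label (on an EFI-labeled disk there is no backup slice 2)
prtvtoc /dev/rdsk/c1d0s2
```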
Hey everyone,
I have 2 disks (found as c0d0 and c1d0), one has a newly installed SXCE b68,
the other is clean.
I've tried running zfs-actual-root-install.sh (from
http://blogs.sun.com/timf/entry/zfs_bootable_datasets_happily_rumbling) as:
# ./zfs-actual-root-install.sh c1d0s0
It creates a vali