Hello Jim,
Thanks for the reply. Following is my output before setting the bootfs parameter:
# zpool get all rpool
NAME   PROPERTY  VALUE  SOURCE
rpool  size      68G    -
rpool  capacity  5%     -
rpool  altroot   -      -
On 2011-May-12 00:20:28 +0800, Edward Ned Harvey
wrote:
>Backup/restore of a bootable rpool to tape with a 3rd-party application like
>Legato etc. is kind of difficult, because if you need to do a bare metal
>restore, how are you going to do it?
This is a generic problem, not limited to ZFS. The
It turns out this was actually as simple as:
stmfadm create-lu -p guid=XXX..
I kept looking at modify-lu to change this and never thought to check the
create-lu options.
Thanks to Evaldas for the suggestion.
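For illustration, a sketch of that approach (the GUID, pool, and volume names below are hypothetical placeholders, not values from this thread):

```shell
# Delete the old logical unit mapping, then recreate the LU over the
# zvol while forcing the original GUID, so initiators see the same device.
# (GUID and zvol path are hypothetical - substitute your own values.)
stmfadm delete-lu 600144F0C73A4B0100005011223344AB
stmfadm create-lu -p guid=600144F0C73A4B0100005011223344AB \
    /dev/zvol/rdsk/tank/myvolume
```

The point is that `-p guid=` is accepted at creation time, so there is no need to hunt for a way to change the GUID on an existing LU afterwards.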
--
This message posted from opensolaris.org
___
Hello again.
As I kind of explained earlier, and as your screenshots display, your actual
root filesystem dataset is named "rpool/ROOT/zfsBE_patched". However, either
the boot loader (eeprom parameters on SPARC) or, much more likely, the "rpool"
ZFS-pool-level attribute "bootfs" contains a differe
Hi Ketan,
What steps lead up to this problem?
I believe the boot failure messages below are related to a mismatch
between the pool version and the installed OS version.
If you're using the JumpStart installation method, then the root pool is
re-created each time, I believe. Does it also instal
So why is my system not coming up? I jumpstarted the system again, but it
panics like earlier. So how should I recover it and get it up?
The system was booted from the network into single-user mode, then rpool was
imported; following is the listing:
# zpool list
NAME  SIZE  ALLOC  FREE
* Edward Ned Harvey (opensolarisisdeadlongliveopensola...@nedharvey.com) wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Arjun YK
> >
> > Trying to understand how to backup mirrored zfs boot pool 'rpool' to tape,
> > and restore it back in case the disks are lost.
On May 10, 2011, at 11:21 PM, Naveen surisetty wrote:
> Hi,
>
> Thanks for the response, Here is my problem.
>
> I have a ZFS stream backup taken on ZFS version 15. I have since upgraded
> my OS, so the new ZFS version is 22. The restore process went well from the
> old stream backup to the new ZFS pool.
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Arjun YK
>
> Trying to understand how to backup mirrored zfs boot pool 'rpool' to tape,
> and restore it back in case the disks are lost.
> Backup would be done with an enterprise tool like
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Naveen surisetty
>
> I have a ZFS stream backup taken on ZFS version 15. I have since upgraded
> my OS, so the new ZFS version is 22. The restore process went well from the
> old stream backup to new z
The press embargo on Intel Z68 chipset has been lifted and so there's a bunch
of press on it. One feature called Smart Response Technology (SRT) will sound
familiar to users of ZFS:
> Intel's SRT functions like an actual cache. Rather than caching individual
> files, Intel focuses on frequently
> It sent a series of blocks to write from the queue; newer disks wrote them
> and stay dormant, while older disks seek around to fit that piece of data...
> When old disks complete the writes, ZFS batches them a new set of tasks.
The thing is- as far as I know the OS doesn't ask the disk to
On 05/10/11 09:45 PM, Don wrote:
Is it possible to modify the GUID associated with a ZFS volume imported
into STMF?
To clarify- I have a ZFS volume I have imported into STMF and export via
iscsi. I have a number of snapshots of this volume. I need to temporarily
go back to an older snapshot wit
Hello,
Trying to understand how to backup mirrored zfs boot pool 'rpool' to tape,
and restore it back in case the disks are lost.
Backup would be done with an enterprise tool like TSM, Legato, etc.
As an example, here is the layout:
# zfs list
NAME USED AVAIL
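One commonly suggested sketch of a stream-based approach (pool, snapshot, device, and file names below are hypothetical; a bare-metal restore still requires booting from install/net media to recreate the pool first):

```shell
# Take a recursive snapshot of the root pool and dump the replication
# stream to a file that the enterprise backup tool can write to tape.
zfs snapshot -r rpool@tapebackup
zfs send -R rpool@tapebackup > /backupfs/rpool.zsend

# Bare-metal restore, from a boot CD or net image:
#   recreate the pool on the new disk, receive the stream,
#   then reinstall the ZFS boot block so the disk is bootable.
# zpool create -f rpool c0t0d0s0
# zfs receive -Fd rpool < /backupfs/rpool.zsend
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
#     /dev/rdsk/c0t0d0s0
```

The stream file is what the tape tool backs up; restoring the raw datasets file-by-file with TSM or Legato alone does not give you a bootable pool.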
I can't actually disable the STMF framework to do this, but I can try renaming
things and dumping the properties from one device to another and see if it
works; it might actually do it. I will let you know.
Keep in mind zfs_vdev_max_pending. In the latest version of S10, this is set
to 10. ZFS will not issue more than this many requests at a time to a single
LUN. Your disks may look relatively idle while ZFS has a lot of data piled up
inside, just waiting to be read or written.
I have tweake
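For reference, this tunable can be inspected and changed live with mdb, or set persistently in /etc/system (the value 4 below is purely illustrative, not a recommendation from this thread):

```shell
# Inspect the current per-LUN queue depth (decimal)
echo "zfs_vdev_max_pending/D" | mdb -k

# Change it on the fly; takes effect immediately, lost on reboot
echo "zfs_vdev_max_pending/W0t4" | mdb -kw

# Or make it persistent across reboots, in /etc/system:
#   set zfs:zfs_vdev_max_pending = 4
```

Lowering it can reduce per-request latency on slow disks at the cost of throughput; raising it does the opposite.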
Hi,
Thanks for the response. Here is my problem.
I have a ZFS stream backup taken on ZFS version 15. I have since upgraded my
OS, so the new ZFS version is 22. The restore process went well from the old
stream backup to the new ZFS pool, but on reboot I got an error: unable to
mount pool tank.
So there is i
Sorry, I did not hit this type of error...
AFAIK the pool writes during zfs receive are done by the current code (i.e.
ZFSv22 for you) based on data read from the backup stream. So unless there is
corruption on the pool that happened at the same time as you did your
restore, this procedure
> > Disks that have been in use for a longer time may have very fragmented free
> > space on one hand, and not so much of it on the other, but ZFS is still
> > trying to push bits around evenly. And while it's waiting on some disks,
> > others may be blocked as well. Something like that...
You can try a workaround - no idea if this would really work:
0) Disable stmf and iscsi/* services
1) Create your volume's clone
2) Rename the original live volume dataset to some other name
3) Rename the clone to original dataset's name
4) Promote the clone
- now for the system it SHOULD seem li
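The steps above can be sketched as follows (the pool, volume, snapshot, and iSCSI service names are hypothetical placeholders; adjust to your setup):

```shell
# 0) Stop the consumers of the volume
svcadm disable stmf
svcadm disable -t svc:/network/iscsi/target:default

# 1) Create a clone from the snapshot you want to go back to
zfs clone tank/vol@oldsnap tank/vol_new

# 2) Rename the original live volume dataset out of the way
zfs rename tank/vol tank/vol_orig

# 3) Give the clone the original dataset's name
zfs rename tank/vol_new tank/vol

# 4) Promote the clone so it takes ownership of the snapshot history
zfs promote tank/vol

# Bring the services back
svcadm enable stmf
svcadm enable svc:/network/iscsi/target:default
```

The promoted clone then carries the original name and snapshots, so to STMF it should look like the same volume rolled back in time.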
Technically "bootfs ID" is a string which names the root dataset, typically
"rpool/ROOT/solarisReleaseNameCode". This string can be passed to Solaris
kernel as a parameter manually or by bootloader, otherwise a default current
"bootfs" is read from the root pool's attributes (not dataset attribu
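Concretely, assuming the dataset name quoted earlier in this thread, checking and correcting the pool attribute looks like this:

```shell
# Show which dataset the pool will boot from by default
zpool get bootfs rpool

# Point it at the intended boot environment
zpool set bootfs=rpool/ROOT/zfsBE_patched rpool
```

If the panic is caused by a stale bootfs value left over from a previous boot environment, fixing this attribute (from failsafe or net boot, after importing rpool) is usually enough.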
On 10-05-11 06:56, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>>
>> BTW, here's how to tune it:
>>
>> echo "arc_meta_limit/Z 0x3000" | sudo mdb -kw
>>
>> echo "::arc" | sudo mdb -k | gre