Actually he likely means Boot Environments. On OpenSolaris or Solaris 11 you
would use the pkg/beadm commands. Earlier Solaris releases used Live Upgrade.
See the documentation for IPS.
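For what it's worth, the flow on OpenSolaris looks roughly like this (a minimal
sketch; the BE names are illustrative):

  pkg image-update            # upgrades into a newly cloned boot environment
  beadm list                  # the new BE should be marked active on reboot
  # if the new BE misbehaves, reactivate the previous one and reboot:
  beadm activate opensolaris-old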
--
bdha
On Nov 9, 2010, at 2:56, Tomas Ögren wrote:
> On 08 November, 2010 - Peter Taps sent me these 0,7K bytes:
On 11/9/10 01:47 AM, Peter Taps wrote:
My understanding is that there is a way to create a zfs "checkpoint" before
doing any system upgrade or installing new software. If there is a problem, one can
simply roll back to the stable checkpoint.
I am familiar with snapshots and clones. However,
>From Oracle Support we got the following info:
Bug ID: 6992124 reboot of Sol10 u9 host makes zpool FAULTED when zpool uses
iscsi LUNs
This is a duplicate of:
Bug ID: 6907687 zfs pool is not automatically fixed when disk are brought back
online or after boot
An IDR patch already exists, but no
Hi,
If I compare, a zpool is like a volume group or disk group; as an example, on AIX we
have AIX LVM.
AIX LVM provides commands like recreatevg that work by providing snapshot devices.
In the case of HP LVM or Linux LVM, we can create a new vg/lv structure, add
the snapshotted devices to it, and then we impor
I think you may be wanting the same kind of thing that NexentaStor does when
it upgrades - it takes a snapshot and marks it as a checkpoint in case the upgrade
fails - right? I think you may have to snap, then clone from that, and use
beadm, though it's something you should play with...
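Something along these lines, perhaps (a hedged sketch; the BE and snapshot
names are made up):

  zfs snapshot rpool/ROOT/mybe@checkpoint    # mark a known-good point
  # if the upgrade fails, build a BE from the checkpoint and boot it:
  beadm create -e mybe@checkpoint mybe-restore
  beadm activate mybe-restore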
---
W. A. Khushil Dep
On Mon, Nov 08, 2010 at 11:51:02PM -0800, matthew patton wrote:
> > I have this with 36 2TB drives (and 2 separate boot drives).
> >
> > http://www.colfax-intl.com/jlrid/SpotLight_more_Acc.asp?L=134&S=58&B=2267
>
> That's just a Supermicro SC847.
>
> http://www.supermicro.com/products/chassis/4U/
creating a ZFS pool out of files stored on another ZFS
pool. The main reasons that have been given for not doing this are unknown edge
and corner cases that
may lead to deadlocks, and that it creates a complex structure with potentially
undesirable and unintended performance and reliability implicatio
On 09/11/10 11:46 AM, Maurice Volaski wrote:
> ...
>
Is that horrendous mess Outlook's fault? If so, please consider not
using it.
--Toby
I think my initial response got mangled. Oops.
>creating a ZFS pool out of files stored on another ZFS pool. The main
>reasons that have been given for not doing this are unknown edge and
>corner cases that may lead to deadlocks, and that it creates a complex
>structure with potentially undesirab
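For context, the construction being warned about is something like this (a
throwaway sketch; paths and sizes are arbitrary):

  mkfile 256m /tank/vdev0 /tank/vdev1    # backing files on the outer pool
  zpool create innerpool mirror /tank/vdev0 /tank/vdev1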
>http://www.supermicro.com/products/chassis/4U/?chs=847
>
>Stay away from the 24-port expander backplanes. I've gone through several
>and they still don't work right - timeouts and dropped drives under load.
>The 12-port works just fine connected to a variety of controllers. If you
>insist on the 24-po
>On 09/11/10 11:46 AM, Maurice Volaski wrote:
>> ...
>>
>
>Is that horrendous mess Outlook's fault? If so, please consider not
>using it.
Yes, it is. :-( Outlook 2011 on the Mac, which just came out, so perhaps
I'll get lucky and they will fix it... eventually.
--
Maurice Volaski, maurice.vola...@
zfs clone is at the zfs file system level. What I am looking for here is to rebuild the file
system stack from bottom to top. Once I take the (hardware) snapshot, the
snapshot devices carry the same copy of data and metadata.
If my snapshot device is dev2, then the metadata will have smpoolsnap. If I
need to us
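If I follow, the ZFS analogue of recreatevg would be a rename-on-import from
the snapshot LUNs (a hedged sketch; the device directory and pool names are
assumptions, and whether this works with a block-level copy while the original
pool is still imported is something to test carefully):

  zpool import -d /dev/dsk/snapluns                    # list pools visible there
  zpool import -d /dev/dsk/snapluns smpool smpoolsnap  # import under a new name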
Hi,
I have downloaded and am using the OpenSolaris VirtualBox image, which shows the
versions below:
zfs version 3
zpool version 14
cat /etc/release shows
2009.06 snv_111b X86
Is this the final build available?
Can I upgrade it to a higher version of zfs/zpool?
Can I get any updated vdi image to see zfs/zpoo
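Assuming the image can be updated at all, the usual version-upgrade sequence
is something like this (a sketch; run as root, and note that pools upgraded to
a newer version can't be read by older builds):

  zpool upgrade       # show which pools are below the supported version
  zpool upgrade -a    # upgrade all pools to the supported version
  zfs upgrade -a      # upgrade all file systems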
I'm trying to roll back from a bad patch install on Solaris 10. From the
failsafe BE I tried to roll back, but zfs is asking me to provide 'allow rollback'
permissions. It's hard for me to tell exactly because the messages are
scrolling off the screen before I can read them. Any help would be appr
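To keep the messages from scrolling away, one option is to capture them (a
sketch; the dataset and snapshot names are hypothetical):

  zfs rollback -r rpool/ROOT/s10be@pre-patch 2>&1 | tee /tmp/rollback.log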
On Nov 9, 2010, at 12:24 PM, Maurice Volaski wrote:
>
>> http://www.supermicro.com/products/chassis/4U/?chs=847
>>
>> Stay away from the 24-port expander backplanes. I've gone through several
>> and they still don't work right - timeouts and dropped drives under load.
>> The 12-port works just fine c
Thank you all for your help. Looks like "beadm" is the utility I was looking
for.
When I run "beadm list," it gives me the complete list and indicates which one
is currently active. It doesn't tell me which one is the "default" boot. Can I
assume that whatever is "active" is also the "default"?
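(Not quite, as far as I can tell: on the builds I've used, the Active column
of "beadm list" distinguishes the two. N means active now, R means active on
reboot. A sketch of the output; BE names are illustrative:

  $ beadm list
  BE            Active Mountpoint Space Policy Created
  opensolaris   -      -          45.5M static 2010-10-01 10:00
  opensolaris-1 NR     /          4.50G static 2010-11-08 09:30

The BE flagged R is the one that boots by default.)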
* Peter Taps (ptr...@yahoo.com) wrote:
> Thank you all for your help. Looks like "beadm" is the utility I was
> looking for.
>
> When I run "beadm list," it gives me the complete list and indicates
> which one is currently active. It doesn't tell me which one is the
> "default" boot. Can I assume
Hi,
Currently the file system has a capacity of 50 GB. I want to reduce that
to 30 GB.
When I try to set the quota as
#zfs set quota=30G
it gives an error like "cannot set property for ; size is
less than current used or reserved space".
Please suggest the steps to resolve it.
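That error means the space already used (or reserved) in the file system
exceeds 30G, so the quota can't shrink below it. A hedged sketch of how to
check, assuming the dataset is tank/fs (name hypothetical):

  zfs get used,referenced,reservation,quota tank/fs
  # free or destroy enough data and snapshots to get under 30G, then:
  zfs set quota=30G tank/fs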
Casper Dik wrote on 2010-09-26:
> An incremental backup:
>
> zfs snapshot -r exp...@backup-2010-07-13
> zfs send -R -I exp...@backup-2010-07-12 exp...@backup-2010-07-13 |
> zfs receive -v -u -d -F portable/export
Unfortunately "zfs receive -F" does not skip existing snap
On 11/10/10 10:29 AM, bhanu prakash wrote:
Hi,
Currently the file system has a capacity of 50 GB. I want to reduce
that to 30 GB.
Quota or physical limit?
When I try to set the quota as
#zfs set quota=30G
it gives an error like "cannot set property for ; size
is less than current
Maurice Volaski wrote:
I think my initial response got mangled. Oops.
creating a ZFS pool out of files stored on another ZFS pool. The main
reasons that have been given for not doing this are unknown edge and
corner cases that may lead to deadlocks, and that it creates a complex
structure w
Folks,
I am trying to understand if there is a way to increase the capacity of a
root-vdev. After reading zpool man pages, the following is what I understand:
1. If you add a new disk by using "zpool add," this disk gets added as a new
root-vdev. The existing root-vdevs are not changed.
2. You
On 11/10/10 04:11 PM, Peter Taps wrote:
Folks,
I am trying to understand if there is a way to increase the capacity of a
root-vdev. After reading zpool man pages, the following is what I understand:
1. If you add a new disk by using "zpool add," this disk gets added as a new
root-vdev. The ex
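For reference, the usual way to grow an existing top-level vdev is to replace
each of its disks with a larger one (a hedged sketch; device names are
hypothetical, and on builds without the autoexpand property the pool grows
only after an export/import or reboot):

  zpool set autoexpand=on tank
  zpool replace tank c0t1d0 c0t5d0   # repeat for every disk in the vdev
  zpool status tank                  # wait for each resilver to complete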
After making zfs filesystems on the bunch, rebooting into OI makes format
no longer dump core - it works. Seems there might have been something on
those drives after all.
roy
- Original Message -
> also, this last test was with two 160gig drives only, the 2TB drives
> and the SSD ar