Actually he likely means Boot Environments. On OpenSolaris or Solaris 11 you
would use the pkg/beadm commands; earlier Solaris releases used Live Upgrade.
See the IPS documentation.
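For reference, a minimal beadm workflow on OpenSolaris/Solaris 11 looks roughly
like this (BE names are hypothetical, not from the original thread):

  # list existing boot environments
  beadm list
  # clone the active BE before making changes
  beadm create pre-update
  # update packages (pkg image-update on older OpenSolaris builds;
  # pkg clones and activates a new BE when needed)
  pkg update
  # or explicitly activate a BE and reboot into it
  beadm activate pre-update
  init 6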
--
bdha
On Nov 9, 2010, at 2:56, Tomas Ögren wrote:
> On 08 November, 2010 - Peter Taps sent me these 0,7K bytes:
+--
| On 2010-03-23 16:09:05, Harry Putnam wrote:
|
| Date: Tue, 23 Mar 2010 16:09:05 -0500
| From: Harry Putnam
| To: zfs-discuss@opensolaris.org
| Subject: Re: [zfs-discuss] snapshots as versioning tool
|
| Matt Cowger
+--
| On 2010-02-25 12:05:03, Ray Van Dolson wrote:
|
| Thanks Cindy. I need to stay on Solaris 10 for the time being, so I'm
| guessing I'd have to Live boot into an OpenSolaris build, fix my pool
| then hope it re-impor
+--
| On 2010-02-20 08:45:23, Charles Hedrick wrote:
|
| I hadn't considered stress testing the disks. Obviously that's a good idea.
We'll look at doing something in May, when we have the next opportunity to take
down th
+--
| On 2010-02-20 08:12:53, Charles Hedrick wrote:
|
| We recently moved a Mysql database from NFS (Netapp) to a local disk array
(J4200 with SAS disks). Shortly after moving production, the system effectively
hung. CP
Just saw this go by my twitter stream:
http://staff.science.uva.nl/~delaat/sne-2009-2010/p02/report.pdf
via @legeza
--
bda
cyberpunk is dead. long live cyberpunk.
+--
| On 2010-02-01 23:01:33, Tim Cook wrote:
|
| On Mon, Feb 1, 2010 at 10:58 PM, matthew patton wrote:
|
| > what with the home NAS conversations, what's the trick to buy a J4500
| > without any drives? SUN like every
+--
| On 2010-01-29 10:36:29, Richard Elling wrote:
|
| Nit: Solaris 10 u9 is 10/03 or 10/04 or 10/05, depending on what you read.
| Solaris 10 u8 is 11/09.
Nit: S10u8 is 10/09.
| Scrub I/O is given the lowest priority
| On Thu, Jan 21, 2010 at 01:55:53PM -0800, Michelle Knight wrote:
> Anything else I can get that would help this?
split(1)? :-)
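For example (illustrative file name and chunk size), split(1) cuts a file into
fixed-size pieces that cat can reassemble later:

  # break the file into 1 GB chunks named chunk.aa, chunk.ab, ...
  split -b 1024m bigfile.img chunk.
  # put it back together on the other side
  cat chunk.* > bigfile.img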
--
bda
cyberpunk is dead. long live cyberpunk.
+--
| On 2010-01-21 13:06:00, Michelle Knight wrote:
|
| Apologies for not explaining myself correctly, I'm copying from ext3 on to ZFS
- it appears to my amateur eyes that it is ZFS that is having the problem.
ZFS is qu
Have a simple rolling ZFS replication script:
http://dpaste.com/145790/
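(If the paste has expired: below is a rough sketch of the general
snapshot/send/prune approach, with hypothetical dataset and host names.
This is not the pasted script, just the shape of it.)

  #!/bin/sh
  # rolling replication sketch: snapshot, send incrementally, prune
  SRC=tank/data                 # hypothetical source filesystem
  DST=backup/data               # hypothetical destination filesystem
  HOST=backuphost               # hypothetical destination host
  NOW=repl-`date +%Y%m%d%H%M`
  LAST=`zfs list -H -t snapshot -o name -s creation -r $SRC | grep "^$SRC@repl-" | tail -1 | cut -d@ -f2`
  zfs snapshot $SRC@$NOW
  if [ -n "$LAST" ]; then
    # incremental send from the previous replication snapshot
    zfs send -i $SRC@$LAST $SRC@$NOW | ssh $HOST zfs receive -F $DST
  else
    # first run: full send
    zfs send $SRC@$NOW | ssh $HOST zfs receive -F $DST
  fi
  # keep only the newest replication snapshot locally
  for s in `zfs list -H -t snapshot -o name -s creation -r $SRC | grep "^$SRC@repl-" | grep -v "@$NOW"`; do
    zfs destroy $s
  done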
--
bda
cyberpunk is dead. long live cyberpunk.
+--
| On 2009-11-09 12:18:04, Ellis, Mike wrote:
|
| Maybe to create snapshots "after the fact" as a part of some larger disaster
recovery effort.
| (What did my pool/file-system look like at 10am?... Say 30-minutes befor
> Hank Ratzesberger wrote:
> Hi, I'm Hank and I'm recovering from a crash attempting to make a zfs
> pool the root/mountpoint of a zone install.
>
> I want to make the zone appear as a completely configurable zfs file system
> to the root user of the zone. Apparently that is not exactly the way
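The quoted message is cut off, but for what it's worth, a delegated dataset is
the usual way to give a zone's root user full zfs control over a filesystem.
A hedged sketch with hypothetical pool and zone names:

  # global zone: create a dataset to delegate
  zfs create tank/zonedata
  # add it to the zone configuration
  zonecfg -z myzone
    add dataset
    set name=tank/zonedata
    end
    commit
    exit
  # after (re)booting the zone, its root user can manage tank/zonedata
  # and its children with the normal zfs commands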
+--
| On 2009-10-03 18:50:58, Jeff Haferman wrote:
|
| I did an rsync of this directory structure to another filesystem
| [lustre-based, FWIW] and it took about 24 hours to complete. We have
| done rsyncs on other directo
+--
| On 2009-07-31 17:00:54, Jason A. Hoffman wrote:
|
| I have thousands and thousands and thousands of zpools. I started
| collecting such zpools back in 2005. None have been lost.
I don't have thousands and thousand
Have you set the recordsize for the filesystem to the blocksize Postgres is
using (8K)? Note this has to be done before any files are created.
Other thoughts: Disable postgres's fsync, enable filesystem compression if disk
I/O is your bottleneck as opposed to CPU. I do this with MySQL and it has
p
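Concretely (dataset names hypothetical; recordsize only affects files written
after it is set):

  # create the dataset for the Postgres data directory with an 8K recordsize
  zfs create -o recordsize=8k tank/pgdata
  # optionally trade CPU for I/O
  zfs set compression=on tank/pgdata
  # verify
  zfs get recordsize,compression tank/pgdata
  # and, only if you can accept losing recent transactions on a crash,
  # set "fsync = off" in postgresql.conf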
+--
| On 2009-07-07 01:29:11, Andre van Eyssen wrote:
|
| On Mon, 6 Jul 2009, Gary Mills wrote:
|
| >As for a business case, we just had an extended and catastrophic
| >performance degradation that was the result of two Z
| > FWIW, it looks like someone at Sun saw the complaints in this thread and/or
| > (more likely) had enough customer complaints. It appears you can buy the
| > tray independently now. Although, it's $500 (so they're apparently made
| > entirely of diamond and platinum). In Sun marketing's de
+--
| On 2009-03-18 10:14:26, Richard Elling wrote:
|
| >Just an observation, but it sort of defeats the purpose of buying sun
| >hardware with sun software if you can't even get a "this is how your
| >drives will map" o
+--
| On 2009-03-17 16:37:25, Mark J Musante wrote:
|
| >Then mirror the VTOC from the first (zfsroot) disk to the second:
| >
| ># prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
| ># zpool attach -f rpool c1
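The quoted commands are cut off above; a hedged reconstruction of the usual
sequence, with hypothetical device names, is:

  # copy the label from the existing root disk to the new one
  prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
  # attach the new slice to the root pool to form the mirror
  zpool attach -f rpool c1t0d0s0 c1t1d0s0
  # on x86, install GRUB on the new disk (installboot on SPARC)
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
  # wait for the resilver to finish
  zpool status rpool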
+--
| On 2009-03-17 16:13:27, Toby Thain wrote:
|
| Right, but what if you didn't realise on that screen that you needed
| to select both to make a mirror? The wording isn't very explicit, in
| my opinion. Yesterday I
I for one would like an "interactive" attribute for zpools and
filesystems, specifically for destroy.
The existing behavior (no prompt) could be the default, but all
filesystems would inherit the attribute from the zpool, so I'd only
need to set interactive=on for the pool itself, not for each
filesyst
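No such attribute exists today; in the meantime a trivial wrapper (purely
illustrative, names hypothetical) gives roughly the same effect:

  #!/bin/sh
  # confirm before destroying a dataset and its descendants
  target="$1"
  printf "Really destroy %s recursively? (y/n) " "$target"
  read answer
  if [ "$answer" = "y" ]; then
    zfs destroy -r "$target"
  else
    echo "not destroying $target"
  fi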
+--
| On 2009-02-02 09:46:49, casper@sun.com wrote:
|
| And think of all the money it costs to stock and distribute that
| separate part. (And our infrastructure is still expensive; too expensive
| for a $5 part)
Fa
+--
| On 2009-02-01 20:55:46, Richard Elling wrote:
|
| The astute observer will note that the bracket for the X41xx family
| works elsewhere. For example,
|
http://sunsolve.sun.com/handbook_pub/validateUser.do?target=Sys
+--
| On 2009-02-01 16:29:59, Richard Elling wrote:
|
| The drives that Sun sells will come with the correct bracket.
| Ergo, there is no reason to sell the bracket as a separate
| item unless the customer wishes to place
+--
| On 2008-12-10 16:48:37, Jonny Gerold wrote:
|
| Hello,
| I was wondering if there are any problems with cyrus and ZFS? Or have
| all the problems of yester-release been ironed out?
Yester-release?
I've been using
+--
| On 2008-08-07 03:53:04, Marc Bevand wrote:
|
| Bryan, Thomas: these hangs of 32-bit Solaris under heavy (fs, I/O) loads are
a
| well known problem. They are caused by memory contention in the kernel heap.
| Check
Good afternoon,
I have a ~600GB zpool living on older Xeons. The system has 8GB of RAM. The
pool is hanging off two LSI Logic SAS3041X-Rs (no RAID configured).
When I put a moderate amount of load on the zpool (like, say, copying many
files locally, or deleting a large number of ZFS fs), the sys
+--
| On 2008-02-12 02:40:33, Thomas Liesner wrote:
|
| Subject: Re: [zfs-discuss] Avoiding performance decrease when pool usage is
| over 80%
|
| Nobody out there who ever had problems with low diskspace?
Only in share
On Oct 16, 2007, at 4:36 PM, Jonathan Loran wrote:
>
> We use compression on almost all of our zpools. We see very little
> if any I/O slowdown because of this, and you get free disk space.
> In fact, I believe read I/O gets a boost from this, since
> decompression is cheap compared to nor
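For instance (dataset name hypothetical), compression is a per-filesystem
property and the achieved ratio is easy to check:

  # enable compression; only newly written blocks are compressed
  zfs set compression=on tank/data
  # see how much space it is saving
  zfs get compressratio tank/data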