I would like to load OpenSolaris on my file server. I previously loaded
FreeBSD, using ZFS as the storage file system. Will OpenSolaris be able to
import the pool and mount the file system created on FreeBSD, or will I have
to recreate the file system?
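For reference, a minimal sketch of the usual sequence, assuming a pool named
"tank" (hypothetical) and a pool version that the OpenSolaris release
understands:

  # On the FreeBSD box: cleanly export the pool first
  zpool export tank
  # On the OpenSolaris box: scan for importable pools, then import by name
  zpool import
  zpool import tank
  # If the pool was not exported cleanly, -f forces the import
  zpool import -f tank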
--
This message posted from opensolaris.org
Thanks for letting me know. I plan on attempting in a couple of weeks.
--
This message posted from opensolaris.org
+--
| On 2010-03-23 16:09:05, Harry Putnam wrote:
|
| Date: Tue, 23 Mar 2010 16:09:05 -0500
| From: Harry Putnam
| To: zfs-discuss@opensolaris.org
| Subject: Re: [zfs-discuss] snapshots as versioning tool
|
| Matt Cowger
+--
| On 2009-10-03 18:50:58, Jeff Haferman wrote:
|
| I did an rsync of this directory structure to another filesystem
| [lustre-based, FWIW] and it took about 24 hours to complete. We have
| done rsyncs on other directo
> Hank Ratzesberger wrote:
> Hi, I'm Hank and I'm recovering from a crash attempting to make a zfs
> pool the root/mountpoint of a zone install.
>
> I want to make the zone appear as a completely configurable zfs file system
> to the root user of the zone. Apparently that is not exactly the way
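The usual way to hand a zone a ZFS dataset its root user can fully manage is
zonecfg's "add dataset" resource; a hedged sketch, with the zone and dataset
names made up for illustration:

  # Create a dataset to delegate, then add it to the zone's configuration
  zfs create tank/zones/webzone-data
  zonecfg -z webzone
  zonecfg:webzone> add dataset
  zonecfg:webzone:dataset> set name=tank/zones/webzone-data
  zonecfg:webzone:dataset> end
  zonecfg:webzone> commit
  zonecfg:webzone> exit
  # Once the zone boots, its root user can create, snapshot, and tune
  # child filesystems under the delegated dataset.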
+--
| On 2009-11-09 12:18:04, Ellis, Mike wrote:
|
| Maybe to create snapshots "after the fact" as a part of some larger disaster
| recovery effort.
| (What did my pool/file-system look like at 10am?... Say 30 minutes befor
I had an emergency need for 400 GB of storage yesterday and spent 8 hours
looking for a way to get iSCSI working via a Qlogic QLA4010 TOE card, but was
unable to get my Windows Qlogic 4050 cTOE card to recognize the target. I do
have a NetApp iSCSI connection on the client
cat /etc/release
OK, I have found the issue; however, I do not know how to get around it.
iscsiadm list target-param
Target: iqn.1986-03.com.sun:01:0003ba08d5ae.47571faa
Alias: -
Target: iqn.2000-04.com.qlogic:qla4050c.gs10731a42094.1
Alias: -
I need to attach all iSCSI targets to
iqn.2000-04.com.qlo
Have a simple rolling ZFS replication script:
http://dpaste.com/145790/
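In case the paste expires, a minimal sketch of what such a rolling incremental
send/receive loop can look like (the dataset tank/data and the host backuphost
are hypothetical, not taken from the script above):

  #!/bin/ksh
  # Rolling ZFS replication sketch: snapshot, send incrementally, prune.
  DS=tank/data
  REMOTE=backuphost
  NEW=$(date +%Y%m%d%H%M%S)
  zfs snapshot "$DS@$NEW"
  # Find the previous snapshot of this dataset (second-newest by creation time)
  PREV=$(zfs list -H -t snapshot -o name -s creation -r "$DS" \
         | grep "^$DS@" | tail -2 | head -1 | cut -d@ -f2)
  if [ -n "$PREV" ] && [ "$PREV" != "$NEW" ]; then
      # Incremental send from the previous snapshot, then drop it locally
      zfs send -i "$DS@$PREV" "$DS@$NEW" | ssh "$REMOTE" zfs recv -F "$DS"
      zfs destroy "$DS@$PREV"
  else
      # First run: full send
      zfs send "$DS@$NEW" | ssh "$REMOTE" zfs recv -F "$DS"
  fi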
--
bda
cyberpunk is dead. long live cyberpunk.
> Message: 3
> Date: Tue, 19 Jan 2010 15:48:52 -0500
> From: Miles Nordin
> To: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] zfs send/receive as backup - reliability?
> Message-ID:
> Content-Type: text/plain; charset="us-ascii"
>
> I don't think a replacement for the ufsdump/ufsresto
On Jan 19, 2010, at 18:48, Richard Elling wrote:
>
> Many people use send/recv or AVS for disaster recovery on the inexpensive
> side. Obviously, enterprise backup systems also provide DR capabilities.
> Since ZFS has snapshots that actually work, and you can use send/receive
> or other backup
On Jan 19, 2010, at 22:54, Ian Collins wrote:
> Allen Eastwood wrote:
>> On Jan 19, 2010, at 18:48, Richard Elling wrote:
>>
>>
>>> Many people use send/recv or AVS for disaster recovery on the inexpensive
>>> side. Obviously, enterprise backup systems
+--
| On 2010-01-21 13:06:00, Michelle Knight wrote:
|
| Apologies for not explaining myself correctly; I'm copying from ext3 on to
| ZFS - it appears, to my amateur eyes, that it is ZFS that is having the
| problem. ZFS is qu
| On Thu, Jan 21, 2010 at 01:55:53PM -0800, Michelle Knight wrote:
> Anything else I can get that would help this?
split(1)? :-)
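For anyone following along, split(1) just chops a file into fixed-size pieces
that cat can reassemble; a quick illustration with made-up file names:

  # Break a large file into 1 GB pieces
  split -b 1024m bigfile.iso bigfile.iso.part.
  # Put it back together on the other side
  cat bigfile.iso.part.* > bigfile.iso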
--
bda
cyberpunk is dead. long live cyberpunk.
+--
| On 2010-01-29 10:36:29, Richard Elling wrote:
|
| Nit: Solaris 10 u9 is 10/03 or 10/04 or 10/05, depending on what you read.
| Solaris 10 u8 is 11/09.
Nit: S10u8 is 10/09.
| Scrub I/O is given the lowest priority
+--
| On 2010-02-01 23:01:33, Tim Cook wrote:
|
| On Mon, Feb 1, 2010 at 10:58 PM, matthew patton wrote:
|
| > what with the home NAS conversations, what's the trick to buy a J4500
| > without any drives? SUN like every
Just saw this go by my twitter stream:
http://staff.science.uva.nl/~delaat/sne-2009-2010/p02/report.pdf
via @legeza
--
bda
cyberpunk is dead. long live cyberpunk.
> There is of course the caveat of using raw devices with databases (it
> becomes harder to track usage, especially as the number of LUNs
> increases, slightly less visibility into their usage statistics at the
> OS level ). However perhaps now someone can implement the CR I filed
> a long time
> guy might make a mistake and
> give you LUNs already mapped elsewhere by accident -- which I have
> seen happen before). And when you're forced to do it at 3am after
> already working 12 hours that day, well, safeguards are a good
> thing.
>
>
> On Sat, Feb 13, 2010
+--
| On 2010-02-20 08:12:53, Charles Hedrick wrote:
|
| We recently moved a MySQL database from NFS (NetApp) to a local disk array
| (J4200 with SAS disks). Shortly after moving production, the system
| effectively hung. CP
+--
| On 2010-02-20 08:45:23, Charles Hedrick wrote:
|
| I hadn't considered stress testing the disks. Obviously that's a good idea.
| We'll look at doing something in May, when we have the next opportunity to
| take down th
+--
| On 2010-02-25 12:05:03, Ray Van Dolson wrote:
|
| Thanks Cindy. I need to stay on Solaris 10 for the time being, so I'm
| guessing I'd have to Live boot into an OpenSolaris build, fix my pool
| then hope it re-impor
>> Hi all,
>> I'm trying to learn how the UFS root to ZFS root Live Upgrade (liveUG) works.
>>
>> I downloaded the VirtualBox image of s10u8; it comes up as UFS root. I then:
>> added a new disk (16 GB)
>> created zpool rpool
>> ran lucreate -n zfsroot -p rpool
>> ran luactivate zfsroot
>> ran lustatus; it does show zfsroot will be active in ne
Actually, he likely means Boot Environments. On OpenSolaris or Solaris 11 you
would use the pkg and beadm commands; previous Solaris releases used Live
Upgrade. See the documentation for IPS.
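A rough side-by-side of the two workflows (the boot environment name "newbe"
is made up for illustration):

  # Solaris 10 Live Upgrade, as in the steps quoted above:
  lucreate -n zfsroot -p rpool
  luactivate zfsroot
  lustatus
  init 6

  # OpenSolaris / Solaris 11 boot environments (beadm):
  beadm create newbe
  beadm activate newbe
  beadm list
  init 6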
--
bdha
On Nov 9, 2010, at 2:56, Tomas Ögren wrote:
> On 08 November, 2010 - Peter Taps sent me these 0,7K bytes:
Good afternoon,
I have a ~600 GB zpool living on older Xeons. The system has 8 GB of RAM. The
pool is hanging off two LSI Logic SAS3041X-Rs (no RAID configured).
When I put a moderate amount of load on the zpool (like, say, copying many
files locally, or deleting a large number of ZFS filesystems), the sys
+--
| On 2008-08-07 03:53:04, Marc Bevand wrote:
|
| Bryan, Thomas: these hangs of 32-bit Solaris under heavy (fs, I/O) loads are a
| well known problem. They are caused by memory contention in the kernel heap.
| Check
+--
| On 2008-12-10 16:48:37, Jonny Gerold wrote:
|
| Hello,
| I was wondering if there are any problems with cyrus and ZFS? Or have
| all the problems of yester-release been ironed out?
Yester-release?
I've been using
+--
| On 2009-02-01 16:29:59, Richard Elling wrote:
|
| The drives that Sun sells will come with the correct bracket.
| Ergo, there is no reason to sell the bracket as a separate
| item unless the customer wishes to place
+--
| On 2009-02-01 20:55:46, Richard Elling wrote:
|
| The astute observer will note that the bracket for the X41xx family
| works elsewhere. For example,
|
| http://sunsolve.sun.com/handbook_pub/validateUser.do?target=Sys
+--
| On 2009-02-02 09:46:49, casper@sun.com wrote:
|
| And think of all the money it costs to stock and distribute that
| separate part. (And our infrastructure is still expensive; too expensive
| for a $5 part)
Fa
I for one would like an "interactive" attribute for zpools and
filesystems, specifically for destroy.
The existing behavior (no prompt) could be the default, but all
filesystems would inherit from the zpool's attribute, so I'd only
need to set interactive=on for the pool itself, not for each
filesyst
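Purely to illustrate the proposal (no such property exists today), the usage
being asked for might look like:

  # Hypothetical -- "interactive" is a proposed property, not a real one
  zfs set interactive=on tank         # set once on the pool's top-level dataset
  zfs destroy tank/home/docs          # would then prompt for confirmation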
+--
| On 2009-03-17 16:13:27, Toby Thain wrote:
|
| Right, but what if you didn't realise on that screen that you needed
| to select both to make a mirror? The wording isn't very explicit, in
| my opinion. Yesterday I
+--
| On 2009-03-17 16:37:25, Mark J Musante wrote:
|
| >Then mirror the VTOC from the first (zfsroot) disk to the second:
| >
| ># prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
| ># zpool attach -f rpool c1
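Spelled out with hypothetical disk names (c1t0d0 already holding rpool, c1t1d0
being added), the whole sequence on an x86 box is roughly:

  # Copy the partition table from the existing root disk to the new one
  prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
  # Attach the new slice so rpool becomes a two-way mirror
  zpool attach -f rpool c1t0d0s0 c1t1d0s0
  # After the resilver completes, make the second disk bootable (x86/GRUB)
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0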
+--
| On 2009-03-18 10:14:26, Richard Elling wrote:
|
| >Just an observation, but it sort of defeats the purpose of buying sun
| >hardware with sun software if you can't even get a "this is how your
| >drives will map" o
| > FWIW, it looks like someone at Sun saw the complaints in this thread and/or
| > (more likely) had enough customer complaints. It appears you can buy the
| > tray independently now. Although, it's $500 (so they're apparently made
| > entirely of diamond and platinum). In Sun marketing's de
+--
| On 2009-07-07 01:29:11, Andre van Eyssen wrote:
|
| On Mon, 6 Jul 2009, Gary Mills wrote:
|
| >As for a business case, we just had an extended and catastrophic
| >performance degradation that was the result of two Z
Have you set the recordsize for the filesystem to the blocksize Postgres is
using (8K)? Note this has to be done before any files are created.
Other thoughts: Disable postgres's fsync, enable filesystem compression if disk
I/O is your bottleneck as opposed to CPU. I do this with MySQL and it has
p
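A hedged sketch of those settings, assuming the database files live in a
dedicated dataset (the names are made up):

  # recordsize only applies to files written after it is set, so create the
  # dataset with the desired values before loading any data
  zfs create -o recordsize=8k -o compression=on tank/pgdata
  zfs get recordsize,compression tank/pgdata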
>
> Can *someone* please name a single drive+firmware or RAID
> controller+firmware that ignores FLUSH CACHE / FLUSH CACHE EXT
> commands? Or worse, responds "ok" when the flush hasn't occurred?
I think it would be a shorter list if one were to name the drives/controllers
that actually imp
> This is also (theoretically) why a drive purchased from Sun is more
> expensive than a drive purchased from your neighbourhood computer shop:
It's more significant than that. Drives aimed at the consumer market are at a
competitive disadvantage if they do handle cache flush corr
+--
| On 2009-07-31 17:00:54, Jason A. Hoffman wrote:
|
| I have thousands and thousands and thousands of zpools. I started
| collecting such zpools back in 2005. None have been lost.
I don't have thousands and thousand
Does DNLC even play a part in ZFS, or are the Docs out of date?
"Defines the number of entries in the directory name look-up cache (DNLC).
This parameter is used by UFS and NFS to cache elements of path names that
have been resolved."
No mention of ZFS. Noticed that when discussing that with a c
On Oct 16, 2007, at 4:36 PM, Jonathan Loran wrote:
>
> We use compression on almost all of our zpools. We see very little
> if any I/O slowdown because of this, and you get free disk space.
> In fact, I believe read I/O gets a boost from this, since
> decompression is cheap compared to nor
+--
| On 2008-02-12 02:40:33, Thomas Liesner wrote:
|
| Subject: Re: [zfs-discuss] Avoiding performance decrease when pool usage is
| over 80%
|
| Nobody out there who ever had problems with low diskspace?
Only in share
+--
| On 2011-11-23 13:43:10, Harry Putnam wrote:
|
| Somehow I touched some rather peculiar file names in ~. Experimenting
| with something I've now forgotten I guess.
|
| Anyway I now have 3 zero length files with name
+--
| On 2013-02-17 18:40:47, Ian Collins wrote:
|
> One of its main advantages is that it has been platform agnostic. We see
> Solaris, Illumos, BSD and more recently ZFS on Linux questions all given
> the same respect.
>
>
+--
| On 2013-02-17 01:17:58, Tim Cook wrote:
|
| While I'm sure many appreciate the offer as I do, I can tell you for me
| personally: never going to happen. Why would I spend all that time and
| energy participating in
+--
| On 2010-09-16 18:08:46, Ray Van Dolson wrote:
|
| Best practice in Solaris 10 U8 and older was to use a mirrored ZIL.
|
| With the ability to remove slog devices in Solaris 10 U9, we're
| thinking we may get more ba
I just had an SSD blow out on me, taking a v10 zpool with it. The pool
currently shows up as UNAVAIL, "missing device".
The system is currently running U9, which has `import -F`, but not `import -m`.
My understanding is the pool would need to be version >= 19 for that to work regardless.
I have copies of
+--
| On 2010-11-08 13:27:09, Peter Taps wrote:
|
| From zfs documentation, it appears that a "vdev" can be built from more
| vdevs. That is, a raidz vdev can be built across a bunch of mirrored vdevs,
| and a mirror can be
+--
| On 2010-11-15 10:21:06, Edward Ned Harvey wrote:
|
| Backups.
|
| Even if you upgrade your hardware to better stuff... with ECC and so on ...
| There is no substitute for backups. Period. If you care about your da
+--
| On 2010-11-15 08:48:55, Frank wrote:
|
| I am a newbie on Solaris.
| We recently purchased a Sun SPARC M3000 server. It comes with 2 identical
| hard drives. I want to set up a RAID 1. After searching on Google, I fou
+--
| On 2010-11-15 11:27:02, Toby Thain wrote:
|
| > Backups are not going to save you from bad memory writing corrupted data to
| > disk.
|
| It is, however, a major motive for using ZFS in the first place.
In this con
Disclaimer: Solaris 10 U8.
I had an SSD die this morning and am in the process of replacing the 1GB
partition which was part of a log mirror. The SSDs do nothing else.
The resilver has been running for ~30m, and suggests it will finish sometime
before Elvis returns from Andromeda, though perhaps
+--
| On 2010-11-23 13:28:38, Tony Schreiner wrote:
|
> am I supposed to do something with c1t3d0 now?
Presumably you want to replace the dead drive with one that works?
zpool offline the dead drive, if it isn't already,
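Roughly (the pool name "tank" is made up; c1t3d0 is the disk from the
question):

  zpool offline tank c1t3d0
  # Swap in the replacement drive at the same location, then:
  zpool replace tank c1t3d0
  zpool status tank         # watch the resilver progress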
+--
| On 2011-03-16 12:33:58, Jim Mauro wrote:
|
| With ZFS, Solaris 10 Update 9, is it possible to
| detach configured log devices from a zpool?
|
| I have a zpool with 3 F20 mirrors for the ZIL. They're
| coming up corr
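With pool version 19 or later (Solaris 10 9/10), log devices can be removed
with zpool remove; a hedged sketch using made-up names:

  zpool status tank                 # note the log vdev names (e.g. mirror-1)
  zpool remove tank mirror-1        # remove a mirrored log vdev by its name
  zpool remove tank c3t0d0          # or remove a single, unmirrored log device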