[zfs-discuss] Convert from FBSD

2009-12-15 Thread Allen
I would like to load OpenSolaris on my file server. I have previously loaded FBSD using ZFS as the storage file system. Will OpenSolaris be able to import the pool and mount the file system created on FBSD, or will I have to recreate the file system?
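For reference, the move is usually just an export on one side and an import on the other; a minimal sketch, assuming a pool named "tank" and an OpenSolaris build that supports the pool version FreeBSD created:

    # on FreeBSD, before rebooting into OpenSolaris
    zpool export tank

    # on OpenSolaris
    zpool import           # list pools visible on the attached disks
    zpool import tank      # add -f if the pool was not cleanly exported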

Re: [zfs-discuss] Convert from FBSD

2009-12-15 Thread Allen
Thanks for letting me know. I plan on attempting the import in a couple of weeks.

Re: [zfs-discuss] snapshots as versioning tool

2010-03-23 Thread Bryan Allen
+-- | On 2010-03-23 16:09:05, Harry Putnam wrote: | | Matt Cowger

Re: [zfs-discuss] million files in single directory

2009-10-03 Thread Bryan Allen
+-- | On 2009-10-03 18:50:58, Jeff Haferman wrote: | | I did an rsync of this directory structure to another filesystem | [lustre-based, FWIW] and it took about 24 hours to complete. We have | done rsyncs on other directo

Re: [zfs-discuss] pool as root of zone

2009-10-15 Thread Bryan Allen
> Hank Ratzesberger wrote: > Hi, I'm Hank and I'm recovering from a crash attempting to make a zfs > pool the root/mountpoint of a zone install. > > I want to make the zone appear as a completely configurable zfs file system > to the root user of the zone. Apparently that is not exactly the way

Re: [zfs-discuss] PSARC recover files?

2009-11-09 Thread Bryan Allen
+-- | On 2009-11-09 12:18:04, Ellis, Mike wrote: | | Maybe to create snapshots "after the fact" as a part of some larger disaster recovery effort. | (What did my pool/file-system look like at 10am?... Say 30-minutes befor

[zfs-discuss] iSCSI Qlogic 4010 TOE card

2010-01-12 Thread Allen Jasewicz
I had an emergency need for 400 GB of storage yesterday and spent 8 hours looking for a way to get iSCSI working via a QLogic QLA4010 TOE card, but was unable to get my Windows QLogic 4050C TOE card to recognize the target. I do have a NetApp iSCSI connection on the client. cat /etc/release

Re: [zfs-discuss] iSCSI Qlogic 4010 TOE card

2010-01-12 Thread Allen Jasewicz
OK, I have found the issue; however, I do not know how to get around it. iscsiadm list target-param Target: iqn.1986-03.com.sun:01:0003ba08d5ae.47571faa Alias: - Target: iqn.2000-04.com.qlogic:qla4050c.gs10731a42094.1 Alias: - I need to attach all iSCSI targets to iqn.2000-04.com.qlo

Re: [zfs-discuss] Backing up a ZFS pool

2010-01-15 Thread Bryan Allen
I have a simple rolling ZFS replication script: http://dpaste.com/145790/
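The pasted script itself isn't reproduced here; a minimal sketch of the same idea (incremental send/recv between rolling snapshots; dataset, host, and snapshot names are made up):

    zfs snapshot tank/data@2010-01-15
    zfs send -i tank/data@2010-01-14 tank/data@2010-01-15 | \
        ssh backuphost zfs receive -F backup/data
    # prune older snapshots on both ends once the receive succeeds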

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-19 Thread Allen Eastwood
> From: Miles Nordin > To: zfs-discuss@opensolaris.org > Subject: Re: [zfs-discuss] zfs send/receive as backup - reliability? > > I don't think a replacement for the ufsdump/ufsresto

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-19 Thread Allen Eastwood
On Jan 19, 2010, at 18:48 , Richard Elling wrote: > > Many people use send/recv or AVS for disaster recovery on the inexpensive > side. Obviously, enterprise backup systems also provide DR capabilities. > Since ZFS has snapshots that actually work, and you can use send/receive > or other backup

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-19 Thread Allen Eastwood
On Jan 19, 2010, at 22:54 , Ian Collins wrote: > Allen Eastwood wrote: >> On Jan 19, 2010, at 18:48 , Richard Elling wrote: >> >> >>> Many people use send/recv or AVS for disaster recovery on the inexpensive >>> side. Obviously, enterprise backup systems

Re: [zfs-discuss] 2gig file limit on ZFS?

2010-01-21 Thread Bryan Allen
+-- | On 2010-01-21 13:06:00, Michelle Knight wrote: | | Apologies for not explaining myself correctly. I'm copying from ext3 onto ZFS - it appears to my amateur eyes that it is ZFS that is having the problem. ZFS is qu

Re: [zfs-discuss] 2gig file limit on ZFS?

2010-01-21 Thread Bryan Allen
| On Thu, Jan 21, 2010 at 01:55:53PM -0800, Michelle Knight wrote: > Anything else I can get that would help this? split(1)? :-)
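In case it helps, a split(1) invocation along those lines (sizes and names are illustrative):

    split -b 1000m bigfile bigfile.part.
    cat bigfile.part.* > bigfile       # reassemble after the copy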

Re: [zfs-discuss] Large scale ZFS deployments out there (>200 disks)

2010-01-29 Thread Bryan Allen
+-- | On 2010-01-29 10:36:29, Richard Elling wrote: | | Nit: Solaris 10 u9 is 10/03 or 10/04 or 10/05, depending on what you read. | Solaris 10 u8 is 11/09. Nit: S10u8 is 10/09. | Scrub I/O is given the lowest priority

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-01 Thread Bryan Allen
+-- | On 2010-02-01 23:01:33, Tim Cook wrote: | | On Mon, Feb 1, 2010 at 10:58 PM, matthew patton wrote: | | > what with the home NAS conversations, what's the trick to buy a J4500 | > without any drives? SUN like every

[zfs-discuss] Performance metrics and benchmarking of an OpenSolaris NFS fileserver

2010-02-10 Thread Bryan Allen
Just saw this go by my Twitter stream: http://staff.science.uva.nl/~delaat/sne-2009-2010/p02/report.pdf via @legeza

Re: [zfs-discuss] Oracle Performance - ZFS vs UFS (Jason King)

2010-02-13 Thread Allen Eastwood
> There is of course the caveat of using raw devices with databases (it > becomes harder to track usage, especially as the number of LUNs > increases, slightly less visibility into their usage statistics at the > OS level ). However perhaps now someone can implement the CR I filed > a long time

Re: [zfs-discuss] Oracle Performance - ZFS vs UFS (Jason King)

2010-02-13 Thread Allen Eastwood
guy might make a mistake and > give you LUNs already mapped elsewhere by accident -- which I have > seen happen before). And when you're forced to do it at 3am after > already working 12 hours that day, well, safeguards are a good > thing. > > > On Sat, Feb 13, 2010

Re: [zfs-discuss] performance problem with Mysql

2010-02-20 Thread Bryan Allen
+-- | On 2010-02-20 08:12:53, Charles Hedrick wrote: | | We recently moved a Mysql database from NFS (Netapp) to a local disk array (J4200 with SAS disks). Shortly after moving production, the system effectively hung. CP

Re: [zfs-discuss] performance problem with Mysql

2010-02-20 Thread Bryan Allen
+-- | On 2010-02-20 08:45:23, Charles Hedrick wrote: | | I hadn't considered stress testing the disks. Obviously that's a good idea. We'll look at doing something in May, when we have the next opportunity to take down th

Re: [zfs-discuss] Whoops, accidentally created a new slog instead of mirroring

2010-02-25 Thread Bryan Allen
+-- | On 2010-02-25 12:05:03, Ray Van Dolson wrote: | | Thanks Cindy. I need to stay on Solaris 10 for the time being, so I'm | guessing I'd have to Live boot into an OpenSolaris build, fix my pool | then hope it re-impor
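For anyone hitting the same mistake, the difference between the two commands, with illustrative device names:

    zpool add tank log c1t3d0         # adds a second, independent log vdev (the mistake)
    zpool attach tank c1t2d0 c1t3d0   # attaches to the existing log device, forming a mirror
    zpool remove tank c1t3d0          # removing the stray log needs pool version 19 or later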

Re: [zfs-discuss] zfs-discuss Digest, Vol 58, Issue 117

2010-08-28 Thread Allen Eastwood
>> hi all >> Trying to learn how the UFS root to ZFS root Live Upgrade works. >> >> I downloaded the VirtualBox image of s10u8; it comes up as UFS root. >> add a new disk (16GB) >> create zpool rpool >> run lucreate -n zfsroot -p rpool >> run luactivate zfsroot >> run lustatus; it does show zfsroot will be active in ne

Re: [zfs-discuss] How to create a checkpoint?

2010-11-09 Thread Bryan Allen
Actually he likely means Boot Environments. On OpenSolaris or Solaris 11 you would use the pkg and beadm commands. Previous Solaris used Live Upgrade. See the documentation for IPS. -- bdha On Nov 9, 2010, at 2:56, Tomas Ögren wrote: > On 08 November, 2010 - Peter Taps sent me these 0,7K bytes:
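A minimal beadm sequence on OpenSolaris/Solaris 11 (the BE name is arbitrary):

    beadm create patched
    beadm activate patched
    init 6            # reboot into the new boot environment
    beadm list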

[zfs-discuss] ZFS on 32bit.

2008-08-06 Thread Bryan Allen
Good afternoon, I have a ~600GB zpool living on older Xeons. The system has 8GB of RAM. The pool is hanging off two LSI Logic SAS3041X-Rs (no RAID configured). When I put a moderate amount of load on the zpool (like, say, copying many files locally, or deleting a large number of ZFS fs), the sys

Re: [zfs-discuss] ZFS on 32bit.

2008-08-07 Thread Bryan Allen
+-- | On 2008-08-07 03:53:04, Marc Bevand wrote: | | Bryan, Thomas: these hangs of 32-bit Solaris under heavy (fs, I/O) loads are a | well known problem. They are caused by memory contention in the kernel heap. | Check
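One commonly suggested mitigation on memory-pressed 32-bit systems is capping the ARC in /etc/system; the value below is only an example, not a recommendation for this particular box:

    * /etc/system -- cap the ZFS ARC at 512 MB, then reboot
    set zfs:zfs_arc_max = 0x20000000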

Re: [zfs-discuss] ZFS with IMAP(Cyrus)

2008-12-10 Thread Bryan Allen
+-- | On 2008-12-10 16:48:37, Jonny Gerold wrote: | | Hello, | I was wondering if there are any problems with cyrus and ZFS? Or have | all the problems of yester-release been ironed out? Yester-release? I've been using

Re: [zfs-discuss] j4200 drive carriers

2009-02-01 Thread Bryan Allen
+-- | On 2009-02-01 16:29:59, Richard Elling wrote: | | The drives that Sun sells will come with the correct bracket. | Ergo, there is no reason to sell the bracket as a separate | item unless the customer wishes to place

Re: [zfs-discuss] j4200 drive carriers

2009-02-01 Thread Bryan Allen
+-- | On 2009-02-01 20:55:46, Richard Elling wrote: | | The astute observer will note that the bracket for the X41xx family | works elsewhere. For example, | http://sunsolve.sun.com/handbook_pub/validateUser.do?target=Sys

Re: [zfs-discuss] j4200 drive carriers

2009-02-02 Thread Bryan Allen
+-- | On 2009-02-02 09:46:49, casper@sun.com wrote: | | And think of all the money it costs to stock and distribute that | separate part. (And our infrastructure is still expensive; too expensive | for a $5 part) Fa

Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Bryan Allen
I for one would like an "interactive" attribute for zpools and filesystems, specifically for destroy. The existing behavior (no prompt) could be the default, but all filesystems would inherit from the zpool's attribute, so I'd only need to set interactive=on for the pool itself, not for each filesystem

Re: [zfs-discuss] How do I "mirror" zfs rpool, x4500?

2009-03-17 Thread Bryan Allen
+-- | On 2009-03-17 16:13:27, Toby Thain wrote: | | Right, but what if you didn't realise on that screen that you needed | to select both to make a mirror? The wording isn't very explicit, in | my opinion. Yesterday I

Re: [zfs-discuss] How do I "mirror" zfs rpool, x4500?

2009-03-17 Thread Bryan Allen
+-- | On 2009-03-17 16:37:25, Mark J Musante wrote: | | >Then mirror the VTOC from the first (zfsroot) disk to the second: | > | ># prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2 | ># zpool attach -f rpool c1
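Written out in full, with the device names from that thread (the installgrub step is the usual final step on x86 boxes such as the x4500):

    prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
    zpool attach -f rpool c1t0d0s0 c1t1d0s0
    # after the resilver completes:
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0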

Re: [zfs-discuss] How do I "mirror" zfs rpool, x4500?

2009-03-18 Thread Bryan Allen
+-- | On 2009-03-18 10:14:26, Richard Elling wrote: | | >Just an observation, but it sort of defeats the purpose of buying sun | >hardware with sun software if you can't even get a "this is how your | >drives will map" o

Re: [zfs-discuss] j4200 drive carriers

2009-03-30 Thread Bryan Allen
| > FWIW, it looks like someone at Sun saw the complaints in this thread and/or | > (more likely) had enough customer complaints. It appears you can buy the | > tray independently now. Although, it's $500 (so they're apparently made | > entirely of diamond and platinum). In Sun marketing's de

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-06 Thread Bryan Allen
+-- | On 2009-07-07 01:29:11, Andre van Eyssen wrote: | | On Mon, 6 Jul 2009, Gary Mills wrote: | | >As for a business case, we just had an extended and catastrophic | >performance degradation that was the result of two Z

Re: [zfs-discuss] Losts of small files vs fewer big files

2009-07-07 Thread Bryan Allen
Have you set the recordsize for the filesystem to the block size Postgres is using (8K)? Note this has to be done before any files are created. Other thoughts: disable Postgres's fsync, and enable filesystem compression if disk I/O is your bottleneck as opposed to CPU. I do this with MySQL and it has p
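Roughly what that looks like, with a made-up dataset name (recordsize only applies to files written after it is set, hence doing it before loading data):

    zfs create -o recordsize=8k -o compression=on tank/pgdata
    zfs get recordsize,compression tank/pgdata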

Re: [zfs-discuss] Another user loses his pool (10TB) in this case and 40 days work

2009-07-28 Thread Rennie Allen
> > Can *someone* please name a single drive+firmware or > RAID > controller+firmware that ignores FLUSH CACHE / FLUSH > CACHE EXT > commands? Or worse, responds "ok" when the flush > hasn't occurred? I think it would be a shorter list if one were to name the drives/controllers that actually imp

Re: [zfs-discuss] Another user loses his pool (10TB) in this case and 40 days work

2009-07-28 Thread Rennie Allen
> This is also (theoretically) why a drive purchased > from Sun is more > expensive than a drive purchased from your > neighbourhood computer > shop: It's more significant than that. Drives aimed at the consumer market are at a competitive disadvantage if they do handle cache flush correctly

Re: [zfs-discuss] Another user loses his pool (10TB) in this case and 40

2009-07-31 Thread Bryan Allen
+-- | On 2009-07-31 17:00:54, Jason A. Hoffman wrote: | | I have thousands and thousands and thousands of zpools. I started | collecting such zpools back in 2005. None have been lost. I don't have thousands and thousand

Re: [zfs-discuss] zfs-discuss Digest, Vol 46, Issue 50

2009-08-08 Thread Allen Eastwood
Does DNLC even play a part in ZFS, or are the Docs out of date? "Defines the number of entries in the directory name look-up cache (DNLC). This parameter is used by UFS and NFS to cache elements of path names that have been resolved." No mention of ZFS. Noticed that when discussing that with a c

Re: [zfs-discuss] HAMMER

2007-10-16 Thread Bryan Allen
On Oct 16, 2007, at 4:36 PM, Jonathan Loran wrote: > > We use compression on almost all of our zpools. We see very little > if any I/O slowdown because of this, and you get free disk space. > In fact, I believe read I/O gets a boost from this, since > decompression is cheap compared to nor
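Enabling it and checking the payoff is just the following (dataset name illustrative; lzjb is the default algorithm on that vintage of ZFS):

    zfs set compression=on tank/data
    zfs get compression,compressratio tank/data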

Re: [zfs-discuss] Avoiding performance decrease when pool usage is over 80%

2008-02-12 Thread Bryan Allen
+-- | On 2008-02-12 02:40:33, Thomas Liesner wrote: | | Subject: Re: [zfs-discuss] Avoiding performance decrease when pool usage is | over 80% | | Nobody out there who ever had problems with low diskspace? Only in share

Re: [zfs-discuss] grrr, How to get rid of mis-touched file named `-c'

2011-11-23 Thread Bryan Horstmann-Allen
+-- | On 2011-11-23 13:43:10, Harry Putnam wrote: | | Somehow I touched some rather peculiar file names in ~. Experimenting | with something I've now forgotten I guess. | | Anyway I now have 3 zero length files with name
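The usual trick is to stop rm from treating the name as an option:

    rm -- -c      # "--" ends option parsing
    rm ./-c       # or give a path that doesn't start with a dash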

Re: [zfs-discuss] zfs-discuss mailing list & opensolaris EOL

2013-02-16 Thread Bryan Horstmann-Allen
+-- | On 2013-02-17 18:40:47, Ian Collins wrote: | > One of its main advantages is it has been platform agnostic. We see > Solaris, Illumos, BSD and more recently ZFS on Linux questions all given the > same respect. > >

Re: [zfs-discuss] zfs-discuss mailing list & opensolaris EOL

2013-02-16 Thread Bryan Horstmann-Allen
+-- | On 2013-02-17 01:17:58, Tim Cook wrote: | | While I'm sure many appreciate the offer as I do, I can tell you for me | personally: never going to happen. Why would I spend all that time and | energy participating in

Re: [zfs-discuss] Best practice for Sol10U9 ZIL -- mirrored or not?

2010-09-16 Thread Bryan Horstmann-Allen
+-- | On 2010-09-16 18:08:46, Ray Van Dolson wrote: | | Best practice in Solaris 10 U8 and older was to use a mirrored ZIL. | | With the ability to remove slog devices in Solaris 10 U9, we're | thinking we may get more ba
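The two layouts under discussion, with made-up device names:

    zpool add tank log mirror c2t0d0 c2t1d0   # mirrored slog
    zpool add tank log c2t0d0                 # single slog, removable on U9 and newer:
    zpool remove tank c2t0d0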

[zfs-discuss] ZFS recovery tool for Solaris 10 with a dead slog?

2010-11-04 Thread Bryan Horstmann-Allen
I just had an SSD blow out on me, taking a v10 zpool with it. The pool currently shows up as UNAVAIL, "missing device". The system is currently running U9, which has `import -F` but not `import -m`. My understanding is the pool version would need to be >= 19 for that to work regardless. I have copies of

Re: [zfs-discuss] Any limit on pool hierarchy?

2010-11-08 Thread Bryan Horstmann-Allen
+-- | On 2010-11-08 13:27:09, Peter Taps wrote: | | From zfs documentation, it appears that a "vdev" can be built from more vdevs. That is, a raidz vdev can be built across a bunch of mirrored vdevs, and a mirror can be
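The question is cut off above; for context, a pool simply stripes across its top-level vdevs, so the common "mirrors under a bigger whole" layout is along these lines (devices illustrative):

    zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0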

Re: [zfs-discuss] Ideas for ghetto file server data reliability?

2010-11-15 Thread Bryan Horstmann-Allen
+-- | On 2010-11-15 10:21:06, Edward Ned Harvey wrote: | | Backups. | | Even if you upgrade your hardware to better stuff... with ECC and so on ... | There is no substitute for backups. Period. If you care about your da

Re: [zfs-discuss] New system, Help needed!

2010-11-15 Thread Bryan Horstmann-Allen
+-- | On 2010-11-15 08:48:55, Frank wrote: | | I am a newbie on Solaris. | We recently purchased a Sun SPARC M3000 server. It comes with 2 identical hard drives. I want to set up a RAID 1. After searching on Google, I fou
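With a ZFS root on SPARC, mirroring the two drives typically comes down to the following (device names are illustrative; the second disk needs a matching SMI label first):

    prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
    zpool attach rpool c0t0d0s0 c0t1d0s0
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0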

Re: [zfs-discuss] Ideas for ghetto file server data reliability?

2010-11-15 Thread Bryan Horstmann-Allen
+-- | On 2010-11-15 11:27:02, Toby Thain wrote: | | > Backups are not going to save you from bad memory writing corrupted data to | > disk. | | It is, however, a major motive for using ZFS in the first place. In this con

[zfs-discuss] Replacing log devices takes ages

2010-11-19 Thread Bryan Horstmann-Allen
Disclaimer: Solaris 10 U8. I had an SSD die this morning and am in the process of replacing the 1GB partition which was part of a log mirror. The SSDs do nothing else. The resilver has been running for ~30m, and suggests it will finish sometime before Elvis returns from Andromeda, though perhaps

Re: [zfs-discuss] drive replaced from spare

2010-11-23 Thread Bryan Horstmann-Allen
+-- | On 2010-11-23 13:28:38, Tony Schreiner wrote: | > am I supposed to do something with c1t3d0 now? Presumably you want to replace the dead drive with one that works? zpool offline the dead drive, if it isn't already,
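The usual follow-up once a spare has kicked in, assuming the pool is named "tank":

    zpool replace tank c1t3d0     # new disk in the same slot; the spare returns to AVAIL after resilver
    zpool detach tank c1t3d0      # alternatively, detach the failed disk to make the spare permanent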

Re: [zfs-discuss] detach configured log devices?

2011-03-16 Thread Bryan Horstmann-Allen
+-- | On 2011-03-16 12:33:58, Jim Mauro wrote: | | With ZFS, Solaris 10 Update 9, is it possible to | detach configured log devices from a zpool? | | I have a zpool with 3 F20 mirrors for the ZIL. They're | coming up corr