[zfs-discuss] Fileserver help.

2010-04-12 Thread Daniel
Hi all. I'm pretty new to the whole OpenSolaris thing; I've been doing a bit of research but can't find anything on what I need. I am thinking of making myself a home file server running OpenSolaris with ZFS and utilizing RAID-Z. I was wondering if there is anything I can get that will allow Wind

[zfs-discuss] adding new disk to pool

2009-10-29 Thread Daniel
ver when I omit the -n I get the following error # zpool add tank c1d1 cannot add to 'tank': invalid argument for this pool operation I get the same message for both drives with and without the -f option. Any help is appreciated, thanks. -- -Daniel ___

Re: [zfs-discuss] adding new disk to pool

2009-10-29 Thread Daniel
or c0d0 and c1d1 > > Thanks, > > Cindy > - Original Message - > From: Daniel > Date: Thursday, October 29, 2009 9:59 am > Subject: [zfs-discuss] adding new disk to pool > To: zfs-discuss@opensolaris.org > > > > Hi, > > > > I just installed 2

Re: [zfs-discuss] adding new disk to pool

2009-10-29 Thread Daniel
pool destroy tank2 > # zpool add tank c1d1 > > Adding the c1d1 disk to the tank pool will create a non-redundant pool > of two disks. Is this what you had in mind? > > Thanks, > > Cindy > > > On 10/29/09 10:17, Daniel wrote: > >> Here is the output of zpoo
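
For anyone following along, a minimal sketch of the sequence being discussed (c1d1 is the device name from the thread; the rest is illustrative):

    zpool status tank        # check the current pool layout first
    zpool add -n tank c1d1   # dry run: preview what the pool would look like
    zpool add tank c1d1      # adds c1d1 as a second top-level vdev (non-redundant stripe)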

[zfs-discuss] zpool detach on non-mirrored drive

2008-12-16 Thread Daniel
I'm using zfs not to have access to a fail-safe backed up system, but to easily manage my file system. I would like to be able, as I buy new hard drives, to just replace the old ones. I'm very environmentally conscious, so I don't want to leave old drives in there to consume power a

Re: [zfs-discuss] zpool detach on non-mirrored drive

2008-12-16 Thread Daniel
tcook, Thanks for your response. Well, I don't imagine there would be a lot of requests from enterprise customers with deep pockets. My impression has been that OS is targeting the little guy though, and as such, this would really be a welcome feature. -- This message posted from opensolaris.

Re: [zfs-discuss] zpool detach on non-mirrored drive

2008-12-17 Thread Daniel
> It is unfortunate that you ask this question after > you've installed the > new disks; now both the old and the new disks are > part of the same zpool. That's awesome, I did not know that this would work. I'm glad I made this post. I actually have not yet replaced any drive, in fact, this ve
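
A hedged sketch of the in-place replacement being described; the device names are placeholders, not from the thread:

    zpool replace tank c0t1d0 c0t5d0   # swap the old disk for the new one; resilvering starts automatically
    zpool status tank                  # watch the resilver finish before physically removing the old drive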

Re: [zfs-discuss] zpool detach on non-mirrored drive

2008-12-18 Thread Daniel
Is it possible to do a replace on the root filesystem as well? -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] zpool detach on non-mirrored drive

2008-12-18 Thread Daniel
Is it possible to do a replace on / as well? -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
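
For the root-pool case, a rough sketch assuming an x86 install booted via GRUB (disk names are placeholders):

    zpool replace rpool c0d0s0 c1d0s0                                  # root pools want a slice, not a whole disk
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1d0s0   # make the replacement disk bootable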

Re: [zfs-discuss] zpool detach on non-mirrored drive

2008-12-18 Thread Daniel
Cindy, This is helpful! Thank you very much :) -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Expand zpool capacity

2010-03-02 Thread Daniel Carosone
On Tue, Mar 02, 2010 at 02:04:52PM -0800, Erik Trimble wrote: > I don't believe that is true for VM installations like Vladimir's, > though I certainly could be wrong. I think you are :-) > Vladimir - I would say your best option is to simply back up your data > from the OpenSolaris VM, and

Re: [zfs-discuss] What's the advantage of using multiple filesystems in a pool

2010-03-04 Thread Daniel Carosone
On Tue, Mar 02, 2010 at 03:14:04PM -0800, Richard Elling wrote: > That is just a shorthand for snapshotting (snapshooting? :-) datasets. :-) > There still is no pool snapshot feature. One could pick nits about "zpool split" .. -- Dan. pgppVa56AxgBa.pgp Description: PGP signature _

Re: [zfs-discuss] What's the advantage of using multiple filesystems in a

2010-03-04 Thread Daniel Carosone
In addition to all the other good advice in the thread, I will emphasise the benefit of having smaller snapshot granularity. I have found this to be one of the most valuable and compelling reasons when I have chosen to create a separate filesystem. If there's data that changes often and I don't

Re: [zfs-discuss] zfs send and receive ... any ideas for FEC?

2010-03-11 Thread Daniel Carosone
On Wed, Mar 10, 2010 at 02:54:18PM +0100, Svein Skogen wrote: > Are there any good options for encapsulating/decapsulating a zfs send > stream inside FEC (Forward Error Correction)? This could prove very > useful both for backup purposes, and for long-haul transmissions. I used par2 for this for s
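
A sketch of the par2 approach mentioned here, assuming the send stream is first captured to a file (snapshot and file names are examples, and the 10% redundancy figure is arbitrary):

    zfs send tank/data@backup > /backup/data.zfs              # save the send stream to a file
    par2 create -r10 /backup/data.zfs.par2 /backup/data.zfs   # add ~10% recovery blocks alongside it
    par2 verify /backup/data.zfs.par2                         # later: check the stream for damage
    par2 repair /backup/data.zfs.par2                         # ...and repair it before zfs receive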

Re: [zfs-discuss] zfs send and receive ... any ideas for FEC?

2010-03-11 Thread Daniel Carosone
On Thu, Mar 11, 2010 at 07:23:43PM +1100, Daniel Carosone wrote: > You have reminded me to go back and look again, and either find that > whatever issue was at fault last time was transient and now gone, or > determine what it actually was and get it resolved. > > In case you

Re: [zfs-discuss] zfs send and receive ... any ideas for FEC?

2010-03-11 Thread Daniel Carosone
On Thu, Mar 11, 2010 at 02:00:41AM -0800, Svein Skogen wrote: > I can't help but keep wondering if not some sort of FEC wrapper > (optional of course) might solve both the "backup" and some of the > long-distance-transfer (where retransmissions really isn't wanted) > issues. Retransmissions aren

Re: [zfs-discuss] ZFS Performance on SATA Drive

2010-03-17 Thread Daniel Carosone
On Wed, Mar 17, 2010 at 10:15:53AM -0500, Bob Friesenhahn wrote: > Clearly there are many more reads per second occurring on the zfs > filesystem than the ufs filesystem. yes > Assuming that the application-level requests are really the same From the OP, the workload is a "find /". So, ZFS mak

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Daniel Carosone
On Wed, Mar 17, 2010 at 08:43:13PM -0500, David Dyer-Bennet wrote: > My own stuff is intended to be backed up by a short-cut combination -- > zfs send/receive to an external drive, which I then rotate off-site (I > have three of a suitable size). However, the only way that actually > works s

Re: [zfs-discuss] ZFS Performance on SATA Drive

2010-03-18 Thread Daniel Carosone
On Thu, Mar 18, 2010 at 03:36:22AM -0700, Kashif Mumtaz wrote: > I did another test on both machine. And write performance on ZFS > extraordinary slow. > - > In ZFS data was being write around 1037 kw/s while disk remain busy

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-18 Thread Daniel Carosone
On Thu, Mar 18, 2010 at 05:21:17AM -0700, Tonmaus wrote: > > No, because the parity itself is not verified. > > Aha. Well, my understanding was that a scrub basically means reading > all data, and compare with the parities, which means that these have > to be re-computed. Is that correct? A scru

Re: [zfs-discuss] dedupratio riddle

2010-03-18 Thread Daniel Carosone
As noted, the ratio calculation applies over the data that dedup was attempted on, not the whole pool. However, I saw a commit go by just in the last couple of days about the dedupratio calculation being misleading, though I didn't check the details. Presumably this will be reported differently from the

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-18 Thread Daniel Carosone
On Thu, Mar 18, 2010 at 09:54:28PM -0700, Tonmaus wrote: > > (and the details of how much and how low have changed a few times > > along the version trail). > > Is there any documentation about this, besides source code? There are change logs and release notes, and random blog postings along th

Re: [zfs-discuss] Q : recommendations for zpool configuration

2010-03-19 Thread Daniel Carosone
On Fri, Mar 19, 2010 at 06:34:50PM +1100, taemun wrote: > A pool with a 4-wide raidz2 is a completely nonsensical idea. No, it's not - not completely. > It has the same amount of accessible storage as two striped mirrors. And > would be slower in terms of IOPS, and be harder to upgrade in the fu

Re: [zfs-discuss] Q : recommendations for zpool configuration

2010-03-19 Thread Daniel Carosone
On Fri, Mar 19, 2010 at 12:59:39AM -0700, homerun wrote: > Thanks for comments > > So possible choices are : > > 1) 2 2-way mirrors > 2) 4 disks raidz2 > > BTW, can raidz have a spare ? so is there one possible choice more : > 3 disks raidz with 1 spare ? raidz2 is basically this, with a pre-silve
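
To answer the spare question concretely, a minimal sketch with placeholder disk names:

    zpool create tank raidz c0d0 c1d0 c2d0 spare c3d0   # 3-disk raidz plus one hot spare
    zpool status tank                                   # the spare appears under its own "spares" section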

Re: [zfs-discuss] Intel SASUC8I - worth every penny

2010-03-21 Thread Daniel Carosone
On Sat, Mar 20, 2010 at 09:50:10PM -0700, Erik Trimble wrote: > Nah, the 8x2.5"-in-2 are $220, while the 5x3.5"-in-3 are $120. And they have a sas expander inside, unlike every other variant of these I've seen so far. Cabling mess win. -- Dan. pgpNzVMcKh5yn.pgp Description: PGP signature ___

Re: [zfs-discuss] ZFS+CIFS: Volume Shadow Services, or Simple Symlink?

2010-03-21 Thread Daniel Carosone
On Sun, Mar 21, 2010 at 08:59:29PM -0400, Edward Ned Harvey wrote: > > > ln -s .zfs/snapshot snapshots > > > > > > Voila. All Windows or Mac or Linux or whatever users are able to > > > easily access snapshots. Not being a CIFS user, could you clarify/confirm for me.. is this just a "presentatio

Re: [zfs-discuss] pool use from network poor performance

2010-03-23 Thread Daniel Carosone
On Mon, Mar 22, 2010 at 10:58:05PM -0700, homerun wrote: > if i access to datapool from network , smb , nfs , ftp , sftp , jne... > i get only max 200 KB/s speeds > compared to rpool that give XX MB/S speeds to and from network it is slow. > > Any ideas what reasons might be and how try to find re

Re: [zfs-discuss] CR 6880994 and pkg fix

2010-03-23 Thread Daniel Carosone
On Tue, Mar 23, 2010 at 07:22:59PM -0400, Frank Middleton wrote: > On 03/22/10 11:50 PM, Richard Elling wrote: > >> Look again, the checksums are different. > > Whoops, you are correct, as usual. Just 6 bits out of 256 different... > > Look which bits are different - digits 24, 53-56 in both cas

Re: [zfs-discuss] ZFS on a 11TB HW RAID-5 controller

2010-03-24 Thread Daniel Carosone
On Wed, Mar 24, 2010 at 08:02:06PM +0100, Svein Skogen wrote: > Maybe someone should look at implementing the zfs code for the XScale > range of io-processors (such as the IOP333)? NetBSD runs on (many of) those. NetBSD has an (in-progress, still-some-issues) ZFS port. Hopefully they will converg

Re: [zfs-discuss] SSD As ARC

2010-03-27 Thread Daniel Carosone
On Sat, Mar 27, 2010 at 01:03:39AM -0700, Erik Trimble wrote: > You can't share a device (either as ZIL or L2ARC) between multiple pools. Discussion here some weeks ago suggested that an L2ARC device was used for all ARC evictions, regardless of the pool. I'd very much like an authorita

Re: [zfs-discuss] ZFS and 4kb sector Drives (All new western digital GREEN Drives?)

2010-03-27 Thread Daniel Carosone
On Fri, Mar 26, 2010 at 05:57:31PM -0700, Darren Mackay wrote: > not sure if 32bit BSD supports 48bit LBA Solaris is the only otherwise-modern OS with this daft limitation. -- Dan. pgpE9xlpyJDRZ.pgp Description: PGP signature ___ zfs-discuss mailing

Re: [zfs-discuss] ZFS and 4kb sector Drives (All new western digital GREEN Drives?)

2010-03-27 Thread Daniel Carosone
On Sat, Mar 27, 2010 at 08:47:26PM +1100, Daniel Carosone wrote: > On Fri, Mar 26, 2010 at 05:57:31PM -0700, Darren Mackay wrote: > > not sure if 32bit BSD supports 48bit LBA > > Solaris is the only otherwise-modern OS with this daft limitation. Ok, it's not due to LBA48, bu

[zfs-discuss] on alignment and verification

2010-03-28 Thread Daniel Carosone
There's been some talk about alignment lately, both for flash and WD disks. What's missing, at least from my perspective, is a clear and unambiguous test so users can verify that their zfs pools are aligned correctly. This should be a test that sees through all the layers of BIOS and SMI/EFI and z

Re: [zfs-discuss] on alignment and verification

2010-03-28 Thread Daniel Carosone
On Mon, Mar 29, 2010 at 12:21:39PM +1100, Daniel Carosone wrote: > #1. Use xxd (or similar) to examine the contents of the raw disk > > This relies on knowing what to look for, and how that is aligned to > the start of the partition and to metaslab addresses and offsets > tha

Re: [zfs-discuss] on alignment and verification

2010-03-28 Thread Daniel Carosone
On Sun, Mar 28, 2010 at 09:32:02PM -0700, Richard Elling wrote: > This is documented in the ZFS on disk format doc. Yep, I've been there in the meantime.. ;-) > Use prtvtoc or format to see the beginning of the slice relative to the > beginning of the partition. I dunno how you tell the start of
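
A sketch of the check Richard suggests, assuming an SMI-labelled disk (the device name is a placeholder):

    prtvtoc /dev/rdsk/c0t0d0s2   # s2 spans the Solaris partition; the table lists each slice's first sector,
                                 # so you can see whether s0 starts on the boundary you expect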

Re: [zfs-discuss] zfs diff

2010-03-29 Thread Daniel Carosone
On Mon, Mar 29, 2010 at 06:38:47PM -0400, David Magda wrote: > A new ARC case: I read this earlier this morning. Welcome news indeed! I have some concerns about the output format, having worked with similar requirements in the past. In particular: as part of the monotone VCS when reporting works

Re: [zfs-discuss] zfs diff

2010-03-29 Thread Daniel Carosone
On Tue, Mar 30, 2010 at 12:37:15PM +1100, Daniel Carosone wrote: > There will also need to be clear rules on output ordering, with > respect to renames, where multiple changes have happened to renamed > files. Separately, but relevant in particular to the above due to the potential

Re: [zfs-discuss] sharing a ssd between rpool and l2arc

2010-03-29 Thread Daniel Carosone
On Mon, Mar 29, 2010 at 01:10:22PM -0700, F. Wessels wrote: > The caiman installer allows you to control the size of the partition > on the boot disk but it doesn't allow you (at least I couldn't > figure out how) to control the size of the slices. So you end with > slice0 filling the entire partit

Re: [zfs-discuss] sharing a ssd between rpool and l2arc

2010-03-29 Thread Daniel Carosone
On Tue, Mar 30, 2010 at 03:13:45PM +1100, Daniel Carosone wrote: > You can: > - install to a partition that's the size you want rpool > - expand the partition to the full disk - expand the s2 slice to the full disk > - leave the s0 slice for rpool alone > - make another sl
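
Once the extra slice exists, attaching it as L2ARC to a data pool is one command (pool and slice names are examples):

    zpool add tank cache c4t0d0s1   # use the leftover SSD slice as an L2ARC cache device for "tank"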

Re: [zfs-discuss] bit-flipping in RAM...

2010-03-31 Thread Daniel Carosone
On Thu, Apr 01, 2010 at 12:38:29AM +0100, Robert Milkowski wrote: > So I wasn't saying that it can work or that it can work in all > circumstances but rather I was trying to say that it probably shouldn't > be dismissed on a performance argument alone as for some use cases It would be of grea

Re: [zfs-discuss] Diagnosing Permanent Errors

2010-04-05 Thread Daniel Carosone
On Sun, Apr 04, 2010 at 11:46:16PM -0700, Willard Korfhage wrote: > Looks like it was RAM. I ran memtest+ 4.00, and it found no problems. Then why do you suspect the ram? Especially with 12 disks, another likely candidate could be an overloaded power supply. While there may be problems showing u

Re: [zfs-discuss] Removing SSDs from pool

2010-04-05 Thread Daniel Carosone
On Mon, Apr 05, 2010 at 07:43:26AM -0400, Edward Ned Harvey wrote: > Is the database running locally on the machine? Or at the other end of > something like nfs? You should have better performance using your present > config than just about any other config ... By enabling the log devices, > such

Re: [zfs-discuss] ZFS: Raid and dedup

2010-04-05 Thread Daniel Carosone
On Mon, Apr 05, 2010 at 06:32:13PM -0700, Learner Study wrote: > I'm wondering what is the correct flow when both raid5 and de-dup are > enabled on a storage volume > > I think we should do de-dup first and then raid5 ... is that > understanding correct? Not really. Strictly speaking, ZFS do

Re: [zfs-discuss] ZFS: Raid and dedup

2010-04-05 Thread Daniel Carosone
On Mon, Apr 05, 2010 at 06:58:57PM -0700, Learner Study wrote: > Hi Jeff: > > I'm a bit confused...did you say "Correct" to my orig email or the > reply from Daniel... Jeff is replying to your mail, not mine. It looks like he's read your question a little differ

Re: [zfs-discuss] Diagnosing Permanent Errors

2010-04-05 Thread Daniel Carosone
On Mon, Apr 05, 2010 at 09:46:58PM -0500, Tim Cook wrote: > On Mon, Apr 5, 2010 at 9:39 PM, Willard Korfhage > wrote: > > > It certainly has symptoms that match a marginal power supply, but I > > measured the power consumption some time ago and found it comfortably within > > the power supply's c

Re: [zfs-discuss] Diagnosing Permanent Errors

2010-04-05 Thread Daniel Carosone
On Mon, Apr 05, 2010 at 09:35:21PM -0700, Willard Korfhage wrote: > By the way, I see that now one of the disks is listed as degraded - too many > errors. Is there a good way to identify exactly which of the disks it is? It's hidden in iostat -E, of all places. -- Dan. pgpB1dUBrSfPC.pgp Descrip
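
For reference, the lookup Dan mentions; the -n option just makes the device names easier to match up:

    iostat -En   # per-device error counters plus vendor, model and serial number,
                 # which helps map the degraded disk in zpool status to a physical drive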

Re: [zfs-discuss] Diagnosing Permanent Errors

2010-04-05 Thread Daniel Carosone
On Tue, Apr 06, 2010 at 12:29:35AM -0500, Tim Cook wrote: > On Tue, Apr 6, 2010 at 12:24 AM, Daniel Carosone wrote: > > > On Mon, Apr 05, 2010 at 09:35:21PM -0700, Willard Korfhage wrote: > > > By the way, I see that now one of the disks is listed as degraded - too > &

Re: [zfs-discuss] "refreservation" and ZFS Volume

2010-04-06 Thread Daniel Carosone
On Tue, Apr 06, 2010 at 01:44:20PM -0400, Tony MacDoodle wrote: > I am trying to understand how "refreservation" works with snapshots. > > If I have a 100G zfs pool > > I have 4 20G volume groups in that pool. > > refreservation = 20G on all volume groups. > > Now when I want to do a sn
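
For concreteness, a sketch of the setup being asked about (pool and volume names are examples, not from the thread):

    zfs create -V 20g tank/vol1                             # a 20G volume; refreservation is set to 20G implicitly
    zfs get refreservation,usedbyrefreservation tank/vol1   # see how much space the reservation is holding
    zfs snapshot tank/vol1@now                              # snapshot space is needed *outside* the refreservation,
                                                            # which is where the accounting questions here come from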

Re: [zfs-discuss] ZFS on-disk DDT block arrangement

2010-04-06 Thread Daniel Carosone
On Wed, Apr 07, 2010 at 01:52:23AM +1000, taemun wrote: > I was wondering if someone could explain why the DDT is seemingly > (from empirical observation) kept in a huge number of individual blocks, > randomly written across the pool, rather than just a large binary chunk > somewhere. It's not rea

Re: [zfs-discuss] "refreservation" and ZFS Volume

2010-04-06 Thread Daniel Carosone
On Wed, Apr 07, 2010 at 06:27:09AM +1000, Daniel Carosone wrote: > You have reminded me.. I wrote some patches to the zfs manpage to help > clarify this issue, while travelling, and never got around to posting > them when I got back. I'll dig them up off my netbook later

Re: [zfs-discuss] SSD sale on newegg

2010-04-06 Thread Daniel Carosone
On Tue, Apr 06, 2010 at 06:53:04PM -0700, Richard Elling wrote: > >> Disagree. Swap is a perfectly fine workload for SSDs. Under ZFS, > >> even more so. I'd really like to squash this rumour and thought we > >> were making progress on that front :-( Today, there are millions or > >> thousand

[zfs-discuss] compression property not received

2010-04-07 Thread Daniel Bakken
rties, snapshots, descendent file systems, and clones are preserved." Snapshots are preserved, but the compression property is not. Any ideas why this doesn't work as advertised? Thanks, Daniel Bakken ___ zfs-discuss mailing list zfs-discuss@opens

Re: [zfs-discuss] compression property not received

2010-04-07 Thread Daniel Bakken
Cindy, The source server is OpenSolaris build 129 (zpool version 22) and the destination is stock OpenSolaris 2009.06 (zpool version 14). Both filesystems are zfs version 3. Mystified, Daniel Bakken On Wed, Apr 7, 2010 at 10:57 AM, Cindy Swearingen < cindy.swearin...@oracle.com>

Re: [zfs-discuss] compression property not received

2010-04-07 Thread Daniel Bakken
-vFd sas And now I have gzip compression enabled locally: zfs get compression sas/archive NAME PROPERTY VALUE SOURCE sas/archive compression gzip local Not pretty, but it works. Daniel Bakken On Wed, Apr 7, 2010 at 12:51 PM, Cindy Swearingen < cindy.swea
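
For anyone hitting the same thing, the manual workaround amounts to (dataset name from the thread):

    zfs set compression=gzip sas/archive   # set the property locally on the receive side
    zfs get compression sas/archive        # SOURCE should now read "local"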

Re: [zfs-discuss] compression property not received

2010-04-07 Thread Daniel Bakken
The receive side is running build 111b (2009.06), so I'm not sure if your advice actually applies to my situation. Daniel Bakken On Tue, Apr 6, 2010 at 10:57 PM, Tom Erickson wrote: > After build 128, locally set properties override received properties, and > this would be t

Re: [zfs-discuss] compression property not received

2010-04-07 Thread Daniel Bakken
t it would destroy the progress I've made so far transferring the filesystem. Thanks, Daniel On Wed, Apr 7, 2010 at 12:52 AM, Tom Erickson wrote: > > The advice regarding received vs local properties definitely does not > apply. You could still confirm the presence of the compressio

Re: [zfs-discuss] compression property not received

2010-04-07 Thread Daniel Bakken
f zfs receive handled failures more gracefully, and attempted to set as many properties as possible. Thanks to Cindy and Tom for their help. Daniel On Wed, Apr 7, 2010 at 2:31 AM, Tom Erickson wrote: > > Now I remember that 'zfs receive' used to give up after the first property >

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-07 Thread Daniel Carosone
Go with the 2x7 raidz2. When you start to really run out of space, replace the drives with bigger ones. You will run out of space eventually regardless; this way you can replace 7 at a time, not 14 at a time. With luck, each replacement will last you long enough that the next replacement will c
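
A sketch of the 2x7 raidz2 layout being recommended, with placeholder disk names:

    zpool create tank \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0
    # to grow later: zpool replace one disk at a time within a vdev, letting each resilver complete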

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-08 Thread Daniel Carosone
On Thu, Apr 08, 2010 at 12:14:55AM -0700, Erik Trimble wrote: > Daniel Carosone wrote: >> Go with the 2x7 raidz2. When you start to really run out of space, >> replace the drives with bigger ones. > > While that's great in theory, there's getting to be a consensus

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-08 Thread Daniel Carosone
On Thu, Apr 08, 2010 at 03:48:54PM -0700, Erik Trimble wrote: > Well To be clear, I don't disagree with you; in fact for a specific part of the market (at least) and a large part of your commentary, I agree. I just think you're overstating the case for the rest. > The problem is (and this i

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-08 Thread Daniel Carosone
On Thu, Apr 08, 2010 at 08:36:43PM -0700, Richard Elling wrote: > On Apr 8, 2010, at 6:19 PM, Daniel Carosone wrote: > > > > As for error rates, this is something zfs should not be afraid > > of. Indeed, many of us would be happy to get drives with less internal > >

[zfs-discuss] zfs send hangs

2010-04-09 Thread Daniel Bakken
the destination server is version 14 (build 111b). Rsync does not have this problem and performs extremely well. However, it will not transfer snapshots. Two other send/receives (234GB and 451GB) between the same servers have worked fine without hanging. Thanks, Daniel Bak

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-09 Thread Daniel Carosone
On Fri, Apr 09, 2010 at 10:21:08AM -0700, Eric Andersen wrote: > If I could find a reasonable backup method that avoided external > enclosures altogether, I would take that route. I'm tending to like bare drives. If you have the chassis space, there are 5-in-3 bays that don't need extra driv

Re: [zfs-discuss] vPool unavailable but RaidZ1 is online

2010-04-09 Thread Daniel Carosone
On Sun, Apr 04, 2010 at 07:13:58AM -0700, Kevin wrote: > I am trying to recover a raid set, there are only three drives that > are part of the set. I attached a disk and discovered it was bad. > It was never part of the raid set. Are you able to tell us more precisely what you did with this disk

Re: [zfs-discuss] Sync Write - ZIL log performance - Feedback for ZFS developers?

2010-04-10 Thread Daniel Carosone
On Sat, Apr 10, 2010 at 11:50:05AM -0500, Bob Friesenhahn wrote: > Huge synchronous bulk writes are pretty rare since usually the > bottleneck is elsewhere, such as the ethernet. Also, large writes can go straight to the pool, and the zil only logs the intent to commit those blocks (ie, link them

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-10 Thread Daniel Carosone
On Sat, Apr 10, 2010 at 12:56:04PM -0500, Tim Cook wrote: > At that price, for the 5-in-3 at least, I'd go with supermicro. For $20 > more, you get what appears to be a far more solid enclosure. My intent with that link was only to show an example, not make a recommendation. I'm glad others have

Re: [zfs-discuss] Create 1 pool from 3 exising pools in mirror configuration

2010-04-10 Thread Daniel Carosone
On Sat, Apr 10, 2010 at 02:51:45PM -0500, Harry Putnam wrote: > [Note: This discussion started in another thread > > Subject: about backup and mirrored pools > > but the subject has been significantly changed so started a new > thread] > > Bob Friesenhahn writes: > > > Luckily, since you a

Re: [zfs-discuss] Create 1 pool from 3 exising pools in mirror configuration

2010-04-10 Thread Daniel Carosone
On Sat, Apr 10, 2010 at 06:20:54PM -0500, Bob Friesenhahn wrote: > Since he is already using mirrors, he already has enough free space > since he can move one disk from each mirror to the "main" pool (which > unfortunately, can't be the boot 'rpool' pool), send the data, and then > move the se

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-04-11 Thread Daniel Carosone
On Sun, Apr 11, 2010 at 07:03:29PM -0400, Edward Ned Harvey wrote: > Heck, even if the faulted pool spontaneously sent the server into an > ungraceful reboot, even *that* would be an improvement. Please look at the pool property "failmode". Both of the preferences you have expressed are available
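
A sketch of the property in question (the pool name is a placeholder):

    zpool get failmode tank            # the default is "wait": I/O blocks until the pool recovers
    zpool set failmode=continue tank   # return EIO to new writes instead of blocking
    zpool set failmode=panic tank      # or panic the host, matching Ned's second preference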

Re: [zfs-discuss] How to Catch ZFS error with syslog ?

2010-04-12 Thread Daniel Carosone
On Mon, Apr 12, 2010 at 09:32:50AM -0600, Tim Haley wrote: > Try explicitly enabling fmd to send to syslog in > /usr/lib/fm/fmd/plugins/syslog-msgs.conf Wow, so useful, yet so well hidden I never even knew to look for it. Please can this be on by default? Please? -- Dan. pgpDwZouV1dUr.pgp Desc

Re: [zfs-discuss] Create 1 pool from 3 exising pools in mirror configuration

2010-04-12 Thread Daniel Carosone
On Mon, Apr 12, 2010 at 06:17:47PM -0500, Harry Putnam wrote: > But, I'm too unskilled in solaris and zfs admin to be risking a total > melt down if I try that before gaining a more thorough understanding. Grab virtualbox or something similar and set yourself up a test environment. In general, an

Re: [zfs-discuss] ZFS RAID-Z1 Degraded Array won't import

2010-04-12 Thread Daniel Carosone
On Mon, Apr 12, 2010 at 08:01:27PM -0700, Peter Tripp wrote: > So I decided I would attach the disks to 2nd system (with working fans) where > I could backup the data to tape. So here's where I got dumb...I ran 'zpool > export'. Of course, I never actually ended up attaching the disks to another

Re: [zfs-discuss] dedup causing problems with NFS? (was Re: snapshots taking too much space)

2010-04-14 Thread Daniel Carosone
On Wed, Apr 14, 2010 at 08:48:42AM -0500, Paul Archer wrote: > So I turned deduplication on on my staging FS (the one that gets mounted > on the database servers) yesterday, and since then I've been seeing the > mount hang for short periods of time off and on. (It lights nagios up > like a Chris

Re: [zfs-discuss] dedup screwing up snapshot deletion

2010-04-14 Thread Daniel Carosone
On Wed, Apr 14, 2010 at 09:04:50PM -0500, Paul Archer wrote: > I realize that I did things in the wrong order. I should have removed the > oldest snapshot first, on to the newest, and then removed the data in the > FS itself. For the problem in question, this is irrelevant. As discussed in the

Re: [zfs-discuss] ZFS Perfomance

2010-04-14 Thread Daniel Carosone
On Wed, Apr 14, 2010 at 09:58:50AM -0700, Richard Elling wrote: > On Apr 14, 2010, at 8:57 AM, Yariv Graf wrote: > > > From my experience dealing with > 4TB you stop writing after 80% of zpool > > utilization > > YMMV. I have routinely completely filled zpools. There have been some > improvement

Re: [zfs-discuss] Setting up ZFS on AHCI disks

2010-04-16 Thread Daniel Carosone
On Fri, Apr 16, 2010 at 11:46:01AM -0700, Willard Korfhage wrote: > The drives are recent - 1.5TB drives I'm going to bet this is a 32-bit system, and you're getting screwed by the 1TB limit that applies there. If so, you will find clues hidden in dmesg from boot time about this, as the drives ar
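
A quick way to confirm the 32-bit suspicion, using standard Solaris commands (nothing specific to this system):

    isainfo -kv    # reports "64-bit amd64 kernel modules" or "32-bit i386 kernel modules"
    dmesg | less   # per Dan's suggestion, scan the boot messages for drive-size related warnings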

Re: [zfs-discuss] recomend sata controller 4 Home server with zfs raidz2 and 8x1tb hd

2010-04-17 Thread Daniel Carosone
On Sat, Apr 17, 2010 at 05:36:19PM -0400, Ethan wrote: > From wikipedia, PCI is > 133 MB/s (32-bit at 33 MHz) > 266 MB/s (32-bit at 66 MHz or 64-bit at 33 MHz) > 533 MB/s (64-bit at 66 MHz) > > Not quite the 3GB/s hoped for. Not quite, but somewhat closer

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Daniel Carosone
On Sun, Apr 18, 2010 at 07:02:38PM -0700, Don wrote: > If you have a pair of heads talking to shared disks with ZFS- what can you do > to ensure the second head always has a current copy of the zpool.cache file? > I'd prefer not to lose the ZIL, fail over, and then suddenly find out I can't > im

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Daniel Carosone
On Sun, Apr 18, 2010 at 10:33:36PM -0500, Bob Friesenhahn wrote: > Probably the DDRDrive is able to go faster since it should have lower > latency than a FLASH SSD drive. However, it may have some bandwidth > limits on its interface. It clearly has some. They're just as clearly well in excess

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Daniel Carosone
On Sun, Apr 18, 2010 at 07:37:10PM -0700, Don wrote: > I'm not sure to what you are referring when you say my "running BE" Running boot environment - the filesystem holding /etc/zpool.cache -- Dan. pgpbKUgqnePjv.pgp Description: PGP signature ___ zfs-d

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Daniel Carosone
On Mon, Apr 19, 2010 at 03:37:43PM +1000, Daniel Carosone wrote: > the filesystem holding /etc/zpool.cache or, indeed, /etc/zfs/zpool.cache :-) -- Dan. pgpSCBv4eR19k.pgp Description: PGP signature ___ zfs-discuss mailing list zfs-disc

Re: [zfs-discuss] Making an rpool smaller?

2010-04-20 Thread Daniel Carosone
I have certainly moved a root pool from one disk to another, with the same basic process, ie: - fuss with fdisk and SMI labels (sigh) - zpool create - snapshot, send and recv - installgrub - swap disks I looked over the "root pool recovery" section in the Best Practices guide at the time,
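
A condensed sketch of that sequence, with placeholder device names (the fdisk/SMI-label fussing is manual and omitted here):

    zpool create -f rpool2 c1t1d0s0                          # new root pool on the freshly labelled slice
    zfs snapshot -r rpool@migrate
    zfs send -R rpool@migrate | zfs receive -Fdu rpool2      # copy everything, leaving datasets unmounted
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
    # the bootfs pool property, dump/swap volumes and the boot archive also need attention before swapping disks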

Re: [zfs-discuss] Making an rpool smaller?

2010-04-20 Thread Daniel Carosone
On Tue, Apr 20, 2010 at 12:55:10PM -0600, Cindy Swearingen wrote: > You can use the OpenSolaris beadm command to migrate a ZFS BE over > to another root pool, but you will also need to perform some manual > migration steps, such as > - copy over your other rpool datasets > - recreate swap and dump

Re: [zfs-discuss] SSD best practices

2010-04-22 Thread Daniel Carosone
On Thu, Apr 22, 2010 at 09:58:12PM -0700, thomas wrote: > Assuming newer version zpools, this sounds like it could be even > safer since there is (supposedly) less of a chance of catastrophic > failure if your ramdisk setup fails. Use just one remote ramdisk or > two with battery backup.. whatever

Re: [zfs-discuss] SAS vs SATA: Same size, same speed, why SAS?

2010-04-26 Thread Daniel Carosone
On Mon, Apr 26, 2010 at 10:02:42AM -0700, Chris Du wrote: > SAS: full duplex > SATA: half duplex > > SAS: dual port > SATA: single port (some enterprise SATA has dual port) > > SAS: 2 active channel - 2 concurrent write, or 2 read, or 1 write and 1 read > SATA: 1 active channel - 1 read or 1 writ

Re: [zfs-discuss] ZFS version information changes (heads up)

2010-04-27 Thread Daniel Carosone
On Tue, Apr 27, 2010 at 11:29:04AM -0600, Cindy Swearingen wrote: > The revised ZFS Administration Guide describes the ZFS version > descriptions and the Solaris OS releases that provide the version > and feature, starting on page 293, here: > > http://hub.opensolaris.org/bin/view/Community+Group+z

Re: [zfs-discuss] SAS vs SATA: Same size, same speed, why SAS?

2010-04-27 Thread Daniel Carosone
On Tue, Apr 27, 2010 at 10:36:37AM +0200, Roy Sigurd Karlsbakk wrote: > - "Daniel Carosone" wrote: > > SAS: Full SCSI TCQ > > SATA: Lame ATA NCQ > > What's so lame about NCQ? Primarily, the meager number of outstanding requests; write cache is needed to

Re: [zfs-discuss] Mirroring USB Drive with Laptop for Backup purposes

2010-05-05 Thread Daniel Carosone
On Wed, May 05, 2010 at 04:34:13PM -0400, Edward Ned Harvey wrote: > The suggestion I would have instead, would be to make the external drive its > own separate zpool, and then you can incrementally "zfs send | zfs receive" > onto the external. I'd suggest doing both, to different destinations :)
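
A sketch of the send/receive half of that rotation (pool and snapshot names are examples):

    zfs snapshot -r tank@off1
    zfs send -R tank@off1 | zfs receive -Fdu extpool         # first full copy onto the external pool
    # on the next rotation, only ship the changes since the previous snapshot
    zfs snapshot -r tank@off2
    zfs send -R -i @off1 tank@off2 | zfs receive -Fdu extpool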

Re: [zfs-discuss] Is it safe to disable the swap partition?

2010-05-09 Thread Daniel Carosone
On Sun, May 09, 2010 at 09:24:38PM -0500, Mike Gerdts wrote: > The best thing to do with processes that can be swapped out forever is > to not run them. Agreed, however: #1 Shorter values of "forever" (like, say, "daily") may still be useful. #2 This relies on knowing in advance what these proc

Re: [zfs-discuss] Hashing files rapidly on ZFS

2010-05-17 Thread Daniel Carosone
On Tue, May 11, 2010 at 04:15:24AM -0700, Bertrand Augereau wrote: > Is there a O(nb_blocks_for_the_file) solution, then? > > I know O(nb_blocks_for_the_file) == O(nb_bytes_in_the_file), from Mr. > Landau's POV, but I'm quite interested in a good constant factor. If you were considering the hash

Re: [zfs-discuss] Announce: zfsdump

2010-07-03 Thread Daniel Carosone
On Wed, Jun 30, 2010 at 12:54:19PM -0400, Edward Ned Harvey wrote: > If you're talking about streaming to a bunch of separate tape drives (or > whatever) on a bunch of separate systems because the recipient storage is > the bottleneck instead of the network ... then "split" probably isn't the > mos

Re: [zfs-discuss] Hashing files rapidly on ZFS

2010-07-07 Thread Daniel Carosone
On Tue, Jul 06, 2010 at 05:29:54PM +0200, Arne Jansen wrote: > Daniel Carosone wrote: > > Something similar would be useful, and much more readily achievable, > > from ZFS from such an application, and many others. Rather than a way > > to compare reliably between two fil

Re: [zfs-discuss] 1068E mpt driver issue

2010-07-07 Thread Daniel Bakken
Upgrade the HBA firmware to version 1.30. We had the same problem, but upgrading solved it for us. Daniel Bakken On Wed, Jul 7, 2010 at 1:57 PM, Joeri Vanthienen wrote: > Hi, > > We're using the following components with snv134: > - 1068E HBA (supermicro) > - 3U SAS / SAT

Re: [zfs-discuss] 1068E mpt driver issue

2010-07-07 Thread Daniel Bakken
mware: http://www.lsi.com/storage_home/products_home/host_bus_adapters/sas_hbas/combo/sas3442e-r/index.html I hope it works for you. Daniel On Wed, Jul 7, 2010 at 2:48 PM, Jacob Ritorto wrote: > Well, OK, but where do I find it? > > I'd still expect some problems with FCODE - vs.

[zfs-discuss] preparing for future drive additions

2010-07-14 Thread Daniel Taylor
Hello, I'm about to build an OpenSolaris NAS system. Currently we have two drives and are planning on adding two more at a later date (2TB enterprise-level HDDs are a bit expensive!). What's the best configuration for setting up these drives, bearing in mind I want to expand in the future? I was

Re: [zfs-discuss] preparing for future drive additions

2010-07-14 Thread Daniel Taylor
think of is to export the pool, redo everything with RAIDZ and then import the data? I presume that would work? But I would lose settings like samba shares? Thanks again! - Daniel On 14 Jul 2010, at 21:59, Cindy Swearingen wrote: Yes, that is true. If you have 4 2 TB drives, you would only get

Re: [zfs-discuss] preparing for future drive additions

2010-07-14 Thread Daniel Taylor
And if I did that would I keep the snapshots? This system is going to be our backup storage NAS, so losing the snapshots is actually worse than losing the extra 2TB. Thanks, - Daniel On 14 Jul 2010, at 23:06, Cindy Swearingen wrote: You can't transition a mirrored pool to a RAIDZ pool

Re: [zfs-discuss] preparing for future drive additions

2010-07-14 Thread Daniel Taylor
Perfect. Thank you, you've been a great help; I have lots to think about (and test) now! Thanks again, nice to know this list is so responsive! - Daniel On 14 Jul 2010, at 23:34, Cindy Swearingen wrote: Yes, if you created snapshots of your file systems and stored them remotely, you

Re: [zfs-discuss] preparing for future drive additions

2010-07-15 Thread Daniel Taylor
'd only lose one drive and have 3 usable (so 6TB, which is what I was going for) I can only fit 4 drives into the server chassis and I was hoping to get 6TB out of it. Thanks, - Daniel On 14 Jul 2010, at 21:28, Cindy Swearingen wrote: Hi Daniel, No conversion from a mirrored to RA

Re: [zfs-discuss] zfs send to remote any ideas for a faster way than ssh?

2010-07-19 Thread Daniel Taylor
-disc...@opensolaris.org/msg05920.html sourcehost: zfs send | netcat $remotehost $remoteport desthost: netcat -l -p $myport | zfs receive Hope that helps, - Daniel ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinf
