Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-12-09 Thread erik.ableson
On 9 Dec. 2010, at 13:41, Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey >> >> Also, if you have a NFS datastore, which is not available at the time of > ESX >> bootup, then the NFS datastore d

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-19 Thread erik.ableson
On 19 Nov. 2010, at 15:04, Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> boun...@opensolaris.org] On Behalf Of Günther >> >> Disabling the ZIL (Don't) > > This is relative. There are indeed situations where it's acceptable to > disable ZIL.
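For reference, a sketch of the two ways the list discussed disabling the ZIL; both trade data safety for speed and suit only disposable data. The dataset name is a placeholder, and build numbers are from memory of that era:

```shell
# Old approach (pre-sync-property builds): a system-wide tunable in
# /etc/system, affecting every pool, applied at next reboot:
#   set zfs:zil_disable = 1

# On builds that have the per-dataset 'sync' property (snv_140 and later),
# the reversible per-dataset equivalent is:
zfs set sync=disabled tank/scratch   # 'tank/scratch' is a placeholder
zfs get sync tank/scratch            # verify the current setting
```

The property form is preferable because it is scoped to one dataset and can be flipped back without a reboot.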

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-19 Thread erik.ableson
On 19 Nov. 2010, at 03:53, Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> >> SAS Controller >> and all ZFS Disks/ Pools are passed-through to Nexenta to have full > ZFS-Disk >> control like on real hardware. > > This is precisely the thing I'm int

Re: [zfs-discuss] Newbie question : snapshots, replication and recovering failure of Site B

2010-11-01 Thread erik.ableson
On 26 Oct. 2010, at 16:21, Matthieu Fecteau wrote: > Hi, > > I'm planning to use the replication scripts on that page : > http://www.infrageeks.com/groups/infrageeks/wiki/8fb35/zfs_autoreplicate_script.html > > It uses the timeslider (other way possible) to take snapshots, uses zfs > send/rece
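A minimal manual version of what such a replication script automates, assuming a previous common snapshot already exists on the receiving side; pool, dataset, and host names are placeholders:

```shell
# Take today's snapshot on the source.
zfs snapshot tank/data@rep-20101026

# Send only the delta since the last replicated snapshot to site B;
# -F rolls the target back to the common snapshot before applying.
zfs send -i tank/data@rep-20101019 tank/data@rep-20101026 | \
    ssh siteB zfs receive -F backup/data
```

The script's added value is tracking which snapshot pair to use and cleaning up old snapshots on both sides.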

Re: [zfs-discuss] Performance issues with iSCSI under Linux

2010-10-15 Thread erik.ableson
On 15 Oct. 2010, at 22:19, Ian D wrote: > A little setback We found out that we also have the issue with the Dell > H800 controllers, not just the LSI 9200-16e. With the Dell it's initially > faster as we benefit from the cache, but after a little while it goes sour- > from 350MB/sec down

Re: [zfs-discuss] Performance issues with iSCSI under Linux

2010-10-14 Thread erik.ableson
On 13 Oct. 2010, at 18:37, Marty Scholes wrote: > The only thing that still stands out is that network operations (iSCSI and > NFS) to external drives are slow, correct? > > Just for completeness, what happens if you scp a file to the three different > pools? If the results are the same as NF

[zfs-discuss] Slow zfs import solved (beware iDRAC/ILO)

2010-10-13 Thread erik.ableson
Just a note to pass on in case anyone runs into the same situation. I have a DELL R510 that is running just fine, up until the day that I needed to import a pool from a USB hard drive. I plug in the disk, check it with rmformat and try to import the zpool. And it sits there for practically fore

Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-09-16 Thread erik.ableson
On 15 Sept. 2010, at 22:04, Mike Mackovitch wrote: > On Wed, Sep 15, 2010 at 12:08:20PM -0700, Nabil wrote: >> any resolution to this issue? I'm experiencing the same annoying >> lockd thing with mac osx 10.6 clients. I am at pool ver 14, fs ver >> 3. Would somehow going back to the earlier 8/

Re: [zfs-discuss] Tips for ZFS tuning for NFS store of VM images

2010-07-29 Thread erik.ableson
Hmmm, that's odd. I have a number of VMs running on NFS (hosted on ESX, rather than Xen) with no problems at all. I did add a SLOG device to get performance up to a reasonable level, but it's been running flawlessly for a few months now. Previously I was using iSCSI for most of the connections,

Re: [zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore

2010-04-30 Thread erik.ableson
On 30 Apr. 2010, at 13:47, Euan Thoms wrote: > Well I'm so impressed with zfs at the moment! I just got steps 5 and 6 (form > my last post) to work, and it works well. Not only does it send the increment > over to the backup drive, the latest increment/snapshot appears in the > mounted filesys

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-09 Thread erik.ableson
No idea about the build quality, but is this the sort of thing you're looking for? Not cheap, integrated RAID (sigh), but one cable only http://www.pc-pitstop.com/das/fit-500.asp Cheap, simple, 4 eSATA connections on one box http://www.pc-pitstop.com/sata_enclosures/scsat4eb.asp Still cheap, us

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-19 Thread erik.ableson
On 19 Mar. 2010, at 17:11, Joerg Schilling wrote: >> I'm curious, why isn't a 'zfs send' stream that is stored on a tape >> considered a backup, yet the implication is that a tar archive stored on a >> tape is ? > > You cannot get a single file out of the zfs send datastream. zfs send is a bloc
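The distinction can be shown concretely. Recovering a single file from a stored send stream means receiving the whole stream into a scratch dataset first, whereas tar can extract one member directly; all names and paths below are placeholders:

```shell
# zfs send stream: all-or-nothing. Receive the entire stream, copy the
# one file out, then throw the scratch dataset away.
zfs receive tank/restore < /backup/tank-data.zfs
cp /tank/restore/path/to/file /recovered/
zfs destroy -r tank/restore

# tar archive: a single member can be pulled straight off the tape.
tar xf /dev/rmt/0 path/to/file
```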

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread erik.ableson
On 18 Mar. 2010, at 15:51, Damon Atkins wrote: > A system with 100TB of data that is 80% full, and a user asks their local > system admin to restore a directory with large files, as it was 30 days ago, > with all Windows/CIFS ACLs and NFSv4 ACLs etc. > > If we used zfs send, we need to go back to

Re: [zfs-discuss] ZFS/OSOL/Firewire...

2010-03-18 Thread erik.ableson
On 18 Mar. 2010, at 16:58, David Dyer-Bennet wrote: > On Thu, March 18, 2010 04:50, erik.ableson wrote: > >> It would appear that the bus bandwidth is limited to about 10MB/sec >> (~80Mbps) which is well below the theoretical 400Mbps that 1394 is >> supposed to be able

[zfs-discuss] ZFS/OSOL/Firewire...

2010-03-18 Thread erik.ableson
An interesting thing I just noticed here testing out some Firewire drives with OpenSolaris. Setup : OpenSolaris 2009.06 and a dev version (snv_129) 2 500Gb Firewire 400 drives with integrated hubs for daisy-chaining (net: 4 devices on the chain) - one SATA bridge - one PATA bridge Created a zp

Re: [zfs-discuss] Can we get some documentation on iSCSI sharing after comstar took over?

2010-03-16 Thread erik.ableson
On 16 Mar. 2010, at 21:00, Marc Nicholas wrote: > On Tue, Mar 16, 2010 at 3:16 PM, Svein Skogen wrote: > > > I'll write you a Perl script :) > > I think there are ... several people that'd like a script that gave us > back some of the ease of the old shareiscsi one-off, instead of having > to s

Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread erik.ableson
I've found that the NFS host-based settings require the FQDN, and that the reverse lookup must be available in your DNS. Try "rw,root=host1.mydomain.net" Cheers, Erik On 10 Mar. 2010, at 05:47, mingli wrote: > And I update the sharenfs option with "rw,ro...@100.198.100.0/24", it works > fin
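The two working forms described in this exchange look roughly like this; the dataset name is a placeholder, and the subnet option string is reconstructed from the archive's address-mangled quote:

```shell
# Host-based export: use the FQDN, and make sure the client's reverse
# DNS lookup resolves, or access will be denied.
zfs set sharenfs=rw,root=host1.mydomain.net tank/export

# Subnet-based export using the @-prefixed CIDR syntax:
zfs set sharenfs=rw,root=@100.198.100.0/24 tank/export

zfs get sharenfs tank/export   # verify what is actually set
```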

Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-08 Thread erik.ableson
On 8 Mar. 2010, at 11:33, Svein Skogen wrote: > Let's say for a moment I should go for this solution, with the rpool tucked > away on an usb-stick in the same case as the LTO-3 tapes it "matches" > timelinewise (I'm using HP C8017A kits) as a zfs send -R to a file on the > USB stick. (If, and
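The "zfs send -R to a file on the USB stick" idea sketched above would be something like the following; snapshot name and mount path are placeholders:

```shell
# Recursive snapshot of the root pool and all its descendants.
zfs snapshot -r rpool@backup

# -R serializes the whole hierarchy (properties included) into one
# stream, here redirected to a file on removable media.
zfs send -R rpool@backup > /media/usbstick/rpool-backup.zfs
```

The usual caveat applies: the stream is only as good as your ability to `zfs receive` it later, so test a restore before trusting it.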

Re: [zfs-discuss] zfs destroy hangs machine if snapshot exists- workaround found

2010-01-27 Thread erik.ableson
On 27 Jan. 2010, at 12:10, Georg S. Duck wrote: > Hi, > I was suffering for weeks from the following problem: > a zfs dataset contained an automatic snapshot (monthly) that used 2.8 TB of > data. The dataset was deprecated, so I chose to destroy it after I had > deleted some files; eventually

Re: [zfs-discuss] 2.5" JBOD

2010-01-25 Thread erik.ableson
On 24 Jan. 2010, at 08:36, Erik Trimble wrote: > These days, I've switched to 2.5" SATA laptop drives for large-storage > requirements. > They're going to cost more $/GB than 3.5" drives, but they're still not > horrible ($100 for a 500GB/7200rpm Seagate Momentus). They're also easier to > cr

Re: [zfs-discuss] Dedup memory overhead

2010-01-22 Thread erik.ableson
On 21 Jan. 2010, at 22:55, Daniel Carosone wrote: > On Thu, Jan 21, 2010 at 05:04:51PM +0100, erik.ableson wrote: > >> What I'm trying to get a handle on is how to estimate the memory >> overhead required for dedup on that amount of storage. > > We'd a

[zfs-discuss] Dedup memory overhead

2010-01-21 Thread erik.ableson
Hi all, I'm going to be trying out some tests using b130 for dedup on a server with about 1.7 TB of useable storage (14x146 in two raidz vdevs of 7 disks). What I'm trying to get a handle on is how to estimate the memory overhead required for dedup on that amount of storage. From what I gather
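A back-of-envelope estimate for the question asked here; every number is an assumption, not a measurement: ~1.7 TB of unique data, a 64 KB average block size, and roughly 320 bytes of core memory per dedup-table (DDT) entry, a figure commonly quoted on the list at the time:

```shell
pool_bytes=1869169767219   # ~1.7 TB of unique data (assumption)
avg_block=65536            # 64 KB average block size (assumption)
entry_bytes=320            # approx. RAM per DDT entry (assumption)

blocks=$(( pool_bytes / avg_block ))
ddt_bytes=$(( blocks * entry_bytes ))
echo "$(( ddt_bytes / 1024 / 1024 / 1024 )) GiB"   # prints: 8 GiB
```

Smaller average blocks (common with VM images) multiply the entry count, so the real figure can easily be several times higher.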

Re: [zfs-discuss] Dumb idea?

2009-10-26 Thread erik.ableson
Or in OS X with smart folders where you define a set of search terms and as write operations occur on the known filesystems the folder contents will be updated to reflect the current state of the attached filesystems The structures you defined seemed to be designed around the idea of

[zfs-discuss] SSD value [was SSD over 10gbe not any faster than 10K SAS over GigE]

2009-10-13 Thread erik.ableson
On 13 Oct. 2009, at 15:24, Derek Anderson wrote: Simple answer: Man hour math. I have 150 virtual machines on these disks for shared storage. They hold no actual data so who really cares if they get lost. However 150 users of these virtual machines will save 5 minutes or so every day of

Re: [zfs-discuss] Incremental snapshot size

2009-09-30 Thread erik.ableson
Depending on the data content that you're dealing you can compress the snapshots inline with the send/receive operations by piping the data through gzip. Given that we've been talking about 500Mb text files, this seems to be a very likely solution. There was some mention in the Kernel Keyn
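The inline compression described here is just a pipe on each end; snapshot names and paths are placeholders:

```shell
# Compress an incremental stream on its way to the backup file.
zfs send -i tank/logs@mon tank/logs@tue | gzip > /backup/logs-mon-tue.zfs.gz

# Decompress on restore and feed the stream back to zfs receive.
gunzip -c /backup/logs-mon-tue.zfs.gz | zfs receive tank/logs
```

For highly repetitive data such as large text files this can shrink the stream dramatically, at the cost of CPU time on the sending host.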

Re: [zfs-discuss] Comments on home OpenSolaris/ZFS server

2009-09-30 Thread erik.ableson
Heh :-) Disk usage is directly related to available space. At home I have a 4x1Tb raidz filled to overflowing with music, photos, movies, archives, and backups for 4 other machines in the house. I'll be adding another 4 and an SSD shortly. It starts with importing CDs into iTunes or WMP, t

Re: [zfs-discuss] Pool iscsi /zfs performance in opensolaris 0906

2009-08-07 Thread erik.ableson
On 7 Aug. 09, at 02:03, Stephen Green wrote: I used a 2GB ram disk (the machine has 12GB of RAM) and this jumped the backup up to somewhere between 18-40MB/s, which means that I'm only a couple of hours away from finishing my backup. This is, as far as I can tell, magic (since I started th

Re: [zfs-discuss] Pool iscsi /zfs performance in opensolaris 0906

2009-08-04 Thread erik.ableson
You're running into the same problem I had with 2009.06 as they have "corrected" a bug where the iSCSI target prior to 2009.06 didn't completely honor SCSI sync commands issued by the initiator. Some background : Discussion: http://opensolaris.org/jive/thread.jspa?messageID=388492 "correcte
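For anyone hitting this on COMSTAR, the relevant knob is the per-LU write-cache setting; a sketch, with the LU GUID as a placeholder. Note that re-enabling write-back cache recovers the pre-2009.06 speed by accepting the pre-2009.06 risk:

```shell
# List logical units with their properties, including write-cache state.
stmfadm list-lu -v

# wcd = "write cache disabled"; setting it to false turns write-back
# caching back on for this LU (faster, but unsafe on power loss unless
# the pool has a separate log device or battery-backed cache).
stmfadm modify-lu -p wcd=false 600144F0AABBCC0000000000000001
```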

Re: [zfs-discuss] Help with setting up ZFS

2009-07-27 Thread erik.ableson
The zfs send command generates a differential file between the two selected snapshots so you can send that to anything you'd like. The catch of course is that then you have a collection of files on your Linux box that are pretty much useless since you can't mount them or read the contents
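A sketch of the round trip being described; host, dataset, and path names are placeholders. The Linux box only ever stores the stream as an opaque file:

```shell
# Ship an incremental stream to a plain file on a non-ZFS host.
zfs send -i tank/home@old tank/home@new | ssh linuxbox 'cat > /backup/home.incr'

# The file is unreadable in place; restoring means feeding it back
# to a ZFS system that still has the @old snapshot.
ssh linuxbox cat /backup/home.incr | zfs receive tank/home
```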

Re: [zfs-discuss] [nfs-discuss] NFS, ZFS & ESX

2009-07-08 Thread erik.ableson
ync mount, but I can't set this on the server side where it would be used by the servers automatically. Erik erik.ableson wrote: OK - I'm at my wit's end here as I've looked everywhere to find some means of tuning NFS performance with ESX into returning something accepta

[zfs-discuss] NFS, ZFS & ESX

2009-07-07 Thread erik.ableson
OK - I'm at my wit's end here as I've looked everywhere to find some means of tuning NFS performance with ESX into returning something acceptable using osol 2008.11. I've eliminated everything but the NFS portion of the equation and am looking for some pointers in the right direction. Co

Re: [zfs-discuss] surprisingly poor performance

2009-07-03 Thread erik.ableson
This is something that I've run into as well across various installs very similar to the one described (PE2950 backed by an MD1000). I find that overall the write performance across NFS is absolutely horrible on 2008.11 and 2009.06. Worse, I use iSCSI under 2008.11 and it's just fine with

Re: [zfs-discuss] SAS 15K drives as L2ARC

2009-05-06 Thread erik.ableson
On 7 May 09, at 04:03, Adam Leventhal wrote: After all this discussion, I am not sure if anyone adequately answered the original poster's question as to whether at 2540 with SAS 15K drives would provide substantial synchronous write throughput improvement when used as a L2ARC device. I
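The distinction underlying the question is which vdev type the fast drives join; device names are placeholders. An L2ARC device accelerates reads only, so it cannot improve synchronous write throughput; that is the job of a separate log (slog) device:

```shell
zpool add tank cache c2t0d0   # L2ARC: extends the read cache
zpool add tank log c2t1d0     # slog: absorbs synchronous writes (ZIL)
```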

[zfs-discuss] Update: sharenfs settings ignored

2009-04-20 Thread erik.ableson
ge: From: "erik.ableson" Date: 17 April 2009 13:15:21 CEST To: zfs-discuss@opensolaris.org Subject: [zfs-discuss] sharenfs settings ignored Hi there, I'm working on a new OS 2008.11 setup here and running into a few issues with the nfs integration. Notably, it appears that subnet valu

[zfs-discuss] sharenfs settings ignored

2009-04-17 Thread erik.ableson
Hi there, I'm working on a new OS 2008.11 setup here and running into a few issues with the nfs integration. Notably, it appears that subnet values attributed to sharenfs are ignored and gives back a permission denied for all connection attempts. I have another environment where permissi