Re: [zfs-discuss] Snapshots and Data Loss

2010-04-18 Thread Geoff Nordli
On Apr 13, 2010, at 5:22 AM, Tony MacDoodle wrote: >> I was wondering if any data was lost while doing a snapshot on a running system? > ZFS will not lose data during a snapshot. >> Does it flush everything to disk or would some stuff be lost? > Yes, all ZFS data will be committed to disk a
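For readers skimming the thread: taking the snapshot itself is a single atomic command. A minimal sketch, assuming a hypothetical pool/dataset named tank/data:

    # take a point-in-time snapshot of a live filesystem (names assumed)
    zfs snapshot tank/data@2010-04-18
    # list snapshots to confirm
    zfs list -t snapshot -r tank/data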

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Daniel Carosone
On Mon, Apr 19, 2010 at 03:37:43PM +1000, Daniel Carosone wrote: > the filesystem holding /etc/zpool.cache or, indeed, /etc/zfs/zpool.cache :-) -- Dan.

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Daniel Carosone
On Sun, Apr 18, 2010 at 07:37:10PM -0700, Don wrote: > I'm not sure to what you are referring when you say my "running BE" Running boot environment - the filesystem holding /etc/zpool.cache -- Dan.

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Richard Elling
On Apr 18, 2010, at 7:02 PM, Don wrote: > If you have a pair of heads talking to shared disks with ZFS- what can you do to ensure the second head always has a current copy of the zpool.cache file? By definition, the zpool.cache file is always up to date. > I'd prefer not to lose the ZIL, fail over, and then suddenly find out I can't import the pool on my second head.
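A sketch of the failover path under discussion, with a hypothetical pool named tank: even without a synchronized cache file, the second head can import by scanning the shared devices directly:

    # import via an explicitly named cache file, if one is kept in sync
    zpool import -c /etc/zfs/zpool.cache tank
    # or ignore the cache file entirely and scan the device directory
    zpool import -d /dev/dsk tank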

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Daniel Carosone
On Sun, Apr 18, 2010 at 10:33:36PM -0500, Bob Friesenhahn wrote: > Probably the DDRDrive is able to go faster since it should have lower > latency than a FLASH SSD drive. However, it may have some bandwidth > limits on its interface. It clearly has some. They're just as clearly well in excess

Re: [zfs-discuss] Fileserver help.

2010-04-18 Thread Haudy Kazemi
Any comments on NexentaStor Community/Developer Edition vs EON for NAS/small server/home server usage? It seems like Nexenta has been around longer or at least received more press attention. Are there strong reasons to recommend one over the other? (At one point usable space would have been

Re: [zfs-discuss] Can't import pool due to missing log device

2010-04-18 Thread Andrew Kener
3 - community edition Andrew On Apr 18, 2010, at 11:15 PM, Richard Elling wrote: > Nexenta version 2 or 3? > -- richard > On Apr 18, 2010, at 7:13 PM, Andrew Kener wrote: >> Hullo All: >> I'm having a problem importing a ZFS pool. When I first built my fileserver I created two VDEVs

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Bob Friesenhahn
On Sun, 18 Apr 2010, Edward Ned Harvey wrote: This seems to be the test of the day. time tar jxf gcc-4.4.3.tar.bz2 I get 22 seconds locally and about 6-1/2 minutes from an NFS client. There's no point trying to accelerate your disks if you're only going to use a single client over gigabit.
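For reference, the benchmark being traded back and forth in this thread, run once locally and once from an NFS client (mount point and tarball location are assumptions):

    # local extraction
    time tar jxf /var/tmp/gcc-4.4.3.tar.bz2
    # same extraction from an NFS client into the shared filesystem
    cd /net/server/export/scratch && time tar jxf /var/tmp/gcc-4.4.3.tar.bz2

The workload is dominated by thousands of small file creates, which over NFS become synchronous operations and are therefore bound by ZIL latency rather than disk bandwidth.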

Re: [zfs-discuss] recomend sata controller 4 Home server with zfs raidz2 and 8x1tb hd

2010-04-18 Thread Erik Trimble
Harry Putnam wrote: > Erik Trimble writes: >> Bottom line: if you can live without true hot-swap capability (i.e. shutdown the machine to change a drive), then save yourself $75 and go with 2 3114 cards. > That sounds like it would do all I need. I currently have the 3114s' little sister i

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Don
I'm not sure to what you are referring when you say my "running BE" I haven't looked at the zpool.cache file too closely but if the devices don't match between the two systems for some reason- isn't that going to cause a problem? I was really asking if there is a way to build the cache file with

[zfs-discuss] Can't import pool due to missing log device

2010-04-18 Thread Andrew Kener
Hullo All: I'm having a problem importing a ZFS pool. When I first built my fileserver I created two VDEVs and a log device as follows:

    raidz1-0   ONLINE
      c12t0d0  ONLINE
      c12t1d0  ONLINE
      c12t2d0  ONLINE
      c12t3d0  ONLINE
    raidz1-2   ONLINE
      c12t4d0  ONLINE
      c
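Reconstructing from the listing above, the pool would have been created with something like the following; the pool name and the devices after c12t4d0 (including the log device) are truncated in the archive, so those names are only placeholders:

    zpool create tank \
        raidz1 c12t0d0 c12t1d0 c12t2d0 c12t3d0 \
        raidz1 c12t4d0 c12t5d0 c12t6d0 c12t7d0 \
        log c13t0d0

A pool in this shape cannot be imported on pre-v19 zpool versions once the log device has gone missing, which is exactly the failure mode in the subject line.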

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Daniel Carosone
On Sun, Apr 18, 2010 at 07:02:38PM -0700, Don wrote: > If you have a pair of heads talking to shared disks with ZFS- what can you do to ensure the second head always has a current copy of the zpool.cache file? > I'd prefer not to lose the ZIL, fail over, and then suddenly find out I can't import the pool on my second head.

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Don
But if the X25E doesn't honor cache flushes then it really doesn't matter if they are mirrored- they both may cache the data, not write it out, and leave me screwed. I'm running 2009.06 and not one of the newer developer candidates that handle ZIL losses gracefully (or at all- at least as far a

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Don
If you have a pair of heads talking to shared disks with ZFS- what can you do to ensure the second head always has a current copy of the zpool.cache file? I'd prefer not to lose the ZIL, fail over, and then suddenly find out I can't import the pool on my second head.

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Don > > I've got 80 spindles in five 16-bay drive shelves (76 15K RPM SAS drives in nineteen 4-disk raidz sets, 2 hot spares, and 2 bays set aside for a mirrored ZIL) connected to two servers (so if

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Bob Friesenhahn > > On Sun, 18 Apr 2010, Christopher George wrote: > > In summary, the DDRdrive X1 is designed, built and tested with immense pride and an overwhelming attention to de

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Don
So if the Intel X25E is a bad device- can anyone recommend an SLC device with good firmware? (Or an MLC drive that performs as well?) I've got 80 spindles in five 16-bay drive shelves (76 15K RPM SAS drives in nineteen 4-disk raidz sets, 2 hot spares, and 2 bays set aside for a mirrored ZIL) connected to two servers (so if

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Miles Nordin
> "re" == Richard Elling writes: re> a well managed system will not lose zpool.cache or any other re> file. I would complain this was circular reasoning if it weren't such obvious chest-puffing bullshit. It's normal even to the extent of being a best practice to have no redundancy f

Re: [zfs-discuss] Large size variations - what is canonical method

2010-04-18 Thread Will Murnane
On Sun, Apr 18, 2010 at 20:08, Harry Putnam wrote: > Seems like you can get some pretty large discrepancies in sizes of pools and directories. They all answer different things, sure, but they're all things that an administrator might want to know. > zpool list "How many bytes are in use on the
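A minimal illustration of the three views, with hypothetical names; the numbers disagree because each command answers a different question:

    zpool list tank              # pool-wide allocation, counting raidz parity overhead
    zfs list -o space tank/data  # dataset accounting, split across snapshots and children
    du -sh /tank/data            # only file data reachable from the live directory tree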

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Bob Friesenhahn
On Sun, 18 Apr 2010, Christopher George wrote: In summary, the DDRdrive X1 is designed, built and tested with immense pride and an overwhelming attention to detail. Sounds great. What performance does DDRdrive X1 provide for this simple NFS write test from a single client over gigabit ethernet?

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Christopher George
There is no definitive answer (yes or no) on whether to mirror a dedicated log device, as reliability is one of many variables. This leads me to the frequently given but never satisfying "it depends". In a time when too many good questions go unanswered, let me take advantage of our less rigid "r

[zfs-discuss] Large size variations - what is canonical method

2010-04-18 Thread Harry Putnam
Seems like you can get some pretty large discrepancies in sizes of pools and directories. zpool list, zfs list, and du all have different readings for each and every pool. Then zfs and du will differ on each fs inside the pool. I don't understand why this late in the game there is not some canonical

Re: [zfs-discuss] SSD sale on newegg

2010-04-18 Thread Bob Friesenhahn
On Sun, 18 Apr 2010, Carson Gaspar wrote: Before (Mac OS 10.6.3 NFS client over GigE, local subnet, source file in RAM): carson:arthas 0 $ time tar jxf /Volumes/RamDisk/gcc-4.4.3.tar.bz2 real 92m33.698s user 0m20.291s sys 0m37.978s That's awful! carson:arthas 130 $ time tar jxf

Re: [zfs-discuss] SSD sale on newegg

2010-04-18 Thread Carson Gaspar
Carson Gaspar wrote: I just found an 8 GB SATA Zeus (Z4S28I) for £83.35 (~US$127) shipped to California. That should be more than large enough for my ZIL @home, based on zilstat. The web site says EOL, limited to current stock. http://www.dpieshop.com/stec-zeus-z4s28i-8gb-25-sata-ssd-solid-s

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Richard Elling
On Apr 18, 2010, at 5:23 AM, Edward Ned Harvey wrote: >> From: Richard Elling [mailto:richard.ell...@gmail.com] >> On Apr 17, 2010, at 11:51 AM, Edward Ned Harvey wrote: >>> For zpool < 19, which includes all present releases of Solaris 10 and Opensolaris 2009.06, it is critical to mirror your ZIL log device.

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Richard Elling
On Apr 18, 2010, at 10:48 AM, Miles Nordin wrote: >> "re" == Richard Elling writes: >>> A failed unmirrored log device would be the permanent death of the pool. > re> It has also been shown that such pools are recoverable, albeit with tedious, manual procedures required.

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Dave Vrona
> IMHO, whether a dedicated log device needs redundancy (mirrored), should be determined by the dynamics of each end-user environment (zpool version, goals/priorities, and budget). Well, I populate a chassis with dual HBAs because my _perception_ is they tend to fail more than other ca

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Miles Nordin
> "re" == Richard Elling writes: >> A failed unmirrored log device would be the >> permanent death of the pool. re> It has also been shown that such pools are recoverable, albeit re> with tedious, manual procedures required. for the 100th time, No, they're not, not if you lo

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Christopher George
IMHO, whether a dedicated log device needs redundancy (mirrored) should be determined by the dynamics of each end-user environment (zpool version, goals/priorities, and budget). If mirroring is deemed important, a key benefit of the DDRdrive X1 is the HBA / storage device integration. For examp

Re: [zfs-discuss] recomend sata controller 4 Home server with zfs raidz2 and 8x1tb hd

2010-04-18 Thread Roy Sigurd Karlsbakk
From Wikipedia, PCI is 133 MB/s (32-bit at 33 MHz), 266 MB/s (32-bit at 66 MHz or 64-bit at 33 MHz), or 533 MB/s (64-bit at 66 MHz). Not quite the 3Gb/s hoped for. But how fast do drives themselves tend to be? I rarely see above 80-100 MB/s, although my drives are just consumer-level 7200RPM
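To put numbers on that for the 8x1TB pool in the subject line: 8 drives x ~80-100 MB/s is roughly 640-800 MB/s of aggregate platter bandwidth, so even the 533 MB/s best case for PCI, never mind the common 133 MB/s slot, becomes the bottleneck whenever all spindles stream at once (scrub, resilver, large sequential reads).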

Re: [zfs-discuss] recomend sata controller 4 Home server with zfs raidz2 and 8x1tb hd

2010-04-18 Thread Harry Putnam
Erik Trimble writes: > Since we're talking about an old PCI slot here, I'd say there's really two good options: > > A SiliconImage Sil3114-based card, which is a 32-bit/66MHz card, with 4 SATA-1 ports, usually for $25 > > A Supermicro AOC-SAT2-MV8 card, which is a 64-bit/100MHz PCI-X card (

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Dave Vrona
Or, DDRdrive X1? Would the X1 need to be mirrored?

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Dave Vrona >> On 18 apr 2010, at 00.52, Dave Vrona wrote: >>> Ok, so originally I presented the X-25E as a "reasonable" approach. After reading the follow-ups, I'm second guessing my statement.

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Edward Ned Harvey
> From: Richard Elling [mailto:richard.ell...@gmail.com] > > On Apr 17, 2010, at 11:51 AM, Edward Ned Harvey wrote: > > > For zpool < 19, which includes all present releases of Solaris 10 and Opensolaris 2009.06, it is critical to mirror your ZIL log device. A failed unmirrored log device
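A sketch of the mitigation being urged here, with hypothetical pool and device names:

    # list pools below the current on-disk version (pre-19 pools cannot survive slog loss)
    zpool upgrade
    # add the slog as a mirrored pair rather than a single device
    zpool add tank log mirror c2t0d0 c2t1d0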

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Dave Vrona
The Acard device mentioned in this thread looks interesting: http://opensolaris.org/jive/thread.jspa?messageID=401719

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Dave Vrona
> On 18 apr 2010, at 00.52, Dave Vrona wrote: >> Ok, so originally I presented the X-25E as a "reasonable" approach. After reading the follow-ups, I'm second guessing my statement. >> Any decent alternatives at a reasonable price? > How much is reasonable? :-) How about $1000

Re: [zfs-discuss] Is it safe/possible to idle HD's in a ZFS Vdev to save wear/power?

2010-04-18 Thread Enrico Maria Crisostomo
Hi. I'm using two SIIG eSATA II PCIe PRO adapters on a Sun Ultra 24 workstation, too. The adapters are connected to four external eSATA drives that made up a zpool used for scheduled back-up purposes. I'm now running SXCE b129, live upgraded from b116. Before the live upgrade the external disks we

Re: [zfs-discuss] SSD best practices

2010-04-18 Thread Ragnar Sundblad
On 18 apr 2010, at 06.43, Richard Elling wrote: > On Apr 17, 2010, at 11:51 AM, Edward Ned Harvey wrote: >>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Dave Vrona >>> 1) Mirroring. Leaving cost out of it, should ZIL and/or L2ARC