[zfs-discuss] Help! OS drive lost, can I recover data?

2012-08-27 Thread Adam
process? Thanks - Adam...

Re: [zfs-discuss] Extremely Slow ZFS Performance

2011-05-04 Thread Adam Serediuk
e introspection into the zpool thread that is using cpu but not having much luck finding anything meaningful. Occasionally the cpu usage for that thread will drop, and when it does performance of the filesystem increases. > On Wed, 2011-05-04 at 15:40 -0700, Adam Serediuk wrote: >> Dedu

Re: [zfs-discuss] Extremely Slow ZFS Performance

2011-05-04 Thread Adam Serediuk
dedup enabled and the DDT no longer fits in RAM? That would create a huge performance cliff. > > -Original Message- > From: zfs-discuss-boun...@opensolaris.org on behalf of Eric D. Mudama > Sent: Wed 5/4/2011 12:55 PM > To: Adam Serediuk > Cc: zfs-discuss@opens
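
A rough way to gauge whether the DDT still fits in memory (pool name is illustrative, and the per-entry figure is an approximation, not a number from this thread):

  # zdb -DD tank

This prints DDT statistics, including the total entry count. At very roughly 300 bytes of RAM per in-core entry, 100M unique blocks cost on the order of 30 GB; once the table no longer fits, writes degrade into random reads of the on-disk DDT, consistent with the cliff described above.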

Re: [zfs-discuss] Extremely Slow ZFS Performance

2011-05-04 Thread Adam Serediuk
free for all devices On May 4, 2011, at 12:28 PM, Michael Schuster wrote: > On Wed, May 4, 2011 at 21:21, Adam Serediuk wrote: >> We have an X4540 running Solaris 11 Express snv_151a that has developed an >> issue where its write performance is absolutely abysmal. Even touching a

[zfs-discuss] Extremely Slow ZFS Performance

2011-05-04 Thread Adam Serediuk
Both iostat and zpool iostat show very little to zero load on the devices even while blocking. Any suggestions on avenues of approach for troubleshooting? Thanks, Adam

Re: [zfs-discuss] X4540 no next-gen product?

2011-04-08 Thread Adam Serediuk
ng on the progress of Illumos and others but for now things are still too uncertain to make the financial commitment. - Adam

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-12 Thread Adam Leventhal
> In my case, it gives an error that I need at least 11 disks (which I don't) but the point is that raidz parity does not seem to be limited to 3. Is this not true? RAID-Z is limited to 3 parity disks. The error message is giving you false hope and that's a bug. If you had plugged in 11 dis

Re: [zfs-discuss] ZFS compression

2010-07-25 Thread Adam Leventhal
defaults to lzjb which is fast; but gzip-9 can be twice as good. (I've just done some tests on the MacZFS port on my blog for more info) Here's a good blog comparing some ZFS compression modes in the context of the Sun Storage 7000: http://blogs.sun.com/dap/en
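
For reference, switching between the modes compared above is a per-dataset property change (dataset name is illustrative):

  # zfs set compression=gzip-9 tank/fs
  # zfs get compression,compressratio tank/fs

Only new writes use the new algorithm; previously written blocks keep the compression they were written with.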

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Adam Leventhal
Hey Robert, I've filed a bug to track this issue. We'll try to reproduce the problem and evaluate the cause. Thanks for bringing this to our attention. Adam On Jun 24, 2010, at 2:40 AM, Robert Milkowski wrote: > On 23/06/2010 18:50, Adam Leventhal wrote: >>> Does it mean

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-23 Thread Adam Leventhal
distribute parity. What is the total width of your raidz1 stripe? Adam -- Adam Leventhal, Fishworks http://blogs.sun.com/ahl

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-23 Thread Adam Leventhal
Hey Robert, How big of a file are you making? RAID-Z does not explicitly do the parity distribution that RAID-5 does. Instead, it relies on non-uniform stripe widths to distribute IOPS. Adam On Jun 18, 2010, at 7:26 AM, Robert Milkowski wrote: > Hi, > > > zpool create test
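
A sketch of what that means in practice (disk labels and write sizes are illustrative):

  5-wide raidz1, three successive writes of different sizes:
    disk:    A    B    C    D    E
    row 1:   P1   D1   D1   D1   D1    <- one 4-sector write
    row 2:   P2   D2   D2   P3   D3    <- a 2-sector write, then a 1-sector write

Each stripe is only as wide as the write it holds, and stripes pack next to one another, so parity sectors land on different disks over time without any explicit rotation.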

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-30 Thread Adam Leventhal
Hey Karsten, Very interesting data. Your test is inherently single-threaded so I'm not surprised that the benefits aren't more impressive -- the flash modules on the F20 card are optimized more for concurrent IOPS than single-threaded latency. Adam On Mar 30, 2010, at 3:30 AM, Kar

Re: [zfs-discuss] Fishworks 2010Q1 and dedup bug?

2010-03-04 Thread Adam Leventhal
s to ensure that their deployments are successful, and fixing problems as they come up. > The hardware on the other hand is incredible in terms of resilience and > performance, no doubt. Which makes me think the pretty interface becomes an > annoyance sometimes. Let's wait for

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Adam Serediuk
, etc all make a large difference when dealing with very large data sets. On 24-Feb-10, at 2:05 PM, Adam Serediuk wrote: I manage several systems with near a billion objects (largest is currently 800M) on each and also discovered slowness over time. This is on X4540 systems with average file

Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Adam Serediuk
full gigE line speed even on fully random workloads. Your mileage may vary but for now I am very happy with the systems finally (and rightfully so given their performance potential!) -- Adam Serediuk

Re: [zfs-discuss] ZFS dedup for VAX COFF data type

2010-02-21 Thread Adam Leventhal
> Hi Any idea why zfs does not dedup files with this format ? > file /opt/XXX/XXX/data > VAX COFF executable - version 7926 With dedup enabled, ZFS will identify and remove duplicates regardless of the data format. Adam -- Adam Leventhal, Fishworks http://blog

Re: [zfs-discuss] Proposed idea for enhancement - damage control

2010-02-18 Thread Adam Leventhal
Hey Bob, > My own conclusions (supported by Adam Leventhal's excellent paper) are that > > - maximum device size should be constrained based on its time to resilver. > > - devices are growing too large and it is about time to transition to the next small

Re: [zfs-discuss] Hybrid storage ... thing

2010-02-05 Thread Adam Leventhal
The notion of a hybrid drive is nothing new. As with any block-based caching, this device has no notion of the semantic meaning of a given block so there's only so much intelligence it can bring to bear on the problem. Adam -- Adam Leventhal, Fishworks

Re: [zfs-discuss] New ZFS Intent Log (ZIL) device available - Beta program now open!

2010-01-13 Thread Adam Leventhal
n with the product > designers. Congratulations! This is great news for ZFS. I'll be very interested to see the results members of the community can get with your device as part of their pool. COMSTAR iSCSI performance should be dramatically improved in particular. Adam -- Adam Leventhal,

Re: [zfs-discuss] raidz stripe size (not stripe width)

2010-01-04 Thread Adam Leventhal
| D15 | 1K per device with an additional 1K for parity. Adam On Jan 4, 2010, at 3:17 PM, Brad wrote: > If an 8K file system block is written on a 9 disk raidz vdev, how is the data distributed (written) between all devices in the vdev since a zfs write is one continuous IO
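
Spelled out for the question above, an 8K block on a 9-wide raidz1 vdev lays out like this (assuming 1K chunks):

  disk:   d0   d1   d2   d3   d4   d5   d6   d7   d8
          P    1K   1K   1K   1K   1K   1K   1K   1K

Eight 1K data chunks plus one 1K parity chunk: 9K on disk for an 8K block.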

Re: [zfs-discuss] raidz data loss stories?

2009-12-25 Thread Adam Leventhal
in other words cared about perf) why would you ever use raidz instead of throwing more drives at the problem and doing mirroring with identical parity? You're right that a mirror is a degenerate form of raidz1, for example, but mirrors allow for specific optimizations. While the redundan

Re: [zfs-discuss] raidz data loss stories?

2009-12-21 Thread Adam Leventhal
ly wrote an article for ACM Queue that examines recent trends in hard drives and makes the case for triple-parity RAID. It's at least peripherally relevant to this conversation: http://blogs.sun.com/ahl/entry/acm_triple_parity_raid Adam -- Adam Leventhal, Fishworks

Re: [zfs-discuss] ZFS Dedupe reporting incorrect savings

2009-12-17 Thread Adam Leventhal
> Thanks for the response Adam. > > Are you talking about ZFS list? > > It displays 19.6 as allocated space. > > What does ZFS treat as hole and how does it identify? ZFS will compress blocks of zeros down to nothing and treat them like sparse files. 19.6 is pretty close t

Re: [zfs-discuss] ZFS Dedupe reporting incorrect savings

2009-12-17 Thread Adam Leventhal
Hi Giridhar, The size reported by ls can include things like holes in the file. What space usage does the zfs(1M) command report for the filesystem? Adam On Dec 16, 2009, at 10:33 PM, Giridhar K R wrote: > Hi, > > Reposting as I have not gotten any response. > > Here is the i
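
To see the distinction being drawn here, compare apparent and allocated sizes on a sparse file (paths and sizes are illustrative):

  # mkfile -n 1g /tank/fs/sparse    (size is recorded, but no blocks are allocated)
  # ls -lh /tank/fs/sparse          (reports the full 1G apparent size)
  # du -h /tank/fs/sparse           (reports the few blocks actually allocated)
  # zfs list tank/fs                (USED likewise counts allocated space only)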

Re: [zfs-discuss] will deduplication know about old blocks?

2009-12-09 Thread Adam Leventhal
The host pool I assume, because > clone contents are (in this scenario) "just some new data"? The dedup property applies to all writes so the settings for the pool of origin don't matter, just those on the destination pool. Adam -- Adam Leventhal, Fishworks
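
Concretely, that means enabling the property on the receiving side (pool and dataset names are illustrative):

  # zfs set dedup=on destpool
  # zfs send srcpool/fs@snap | zfs recv destpool/fs

The received stream is just new writes to destpool, so it is deduplicated (or not) according to destpool's settings.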

Re: [zfs-discuss] will deduplication know about old blocks?

2009-12-09 Thread Adam Leventhal
t when you have the new bits. Adam On Dec 9, 2009, at 3:40 AM, Kjetil Torgrim Homme wrote: > I'm planning to try out deduplication in the near future, but started wondering if I can prepare for it on my servers. one thing which struck me was that I should change the checksum alg

Re: [zfs-discuss] mpt errors on snv 127

2009-12-01 Thread Adam Cheal
The problem occurs on all of them. - Adam

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-30 Thread Adam Cheal
all disks in the JBOD(s), not specific ones. Usually one or two disks start to time out, which snowballs into all of them when the bus resets. We have 15 of these systems running, all with the same config using 2 foot external cables...changing cables doesn't help. We have no

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-29 Thread Adam Cheal
> > I thought you had just set > > set xpv_psm:xen_support_msi = -1 > > which is different, because that sets the xen_support_msi variable which lives inside the xpv_psm module. > > Setting mptsas:* will have no effect on your system if you do not have an mptsas card installed. The mpts

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-29 Thread Adam Cheal
> Hi Adam, > thanks for this info. I've talked with my colleagues in Beijing (since I'm in Beijing this week) and we'd like you to try disabling MSI/MSI-X for your mpt instances. In /etc/system, add > > set mpt:mpt_enable_msi = 0 > > then
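
The suggested workaround as it would appear in /etc/system; this matches the tunable named above, and a reboot is required for the change to take effect:

  * Disable MSI/MSI-X for all mpt instances (per the suggestion above)
  set mpt:mpt_enable_msi = 0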

Re: [zfs-discuss] ZFS Send Priority and Performance

2009-11-20 Thread Adam Serediuk
would have been with it enabled but I wasn't about to find out. Thanks On 20-Nov-09, at 11:48 AM, Richard Elling wrote: On Nov 20, 2009, at 11:27 AM, Adam Serediuk wrote: I have several X4540 Thor systems with one large zpool that replicate data to a backup host via zfs send/recv.

[zfs-discuss] ZFS Send Priority and Performance

2009-11-20 Thread Adam Serediuk
occurring? The process is currently: zfs_send -> mbuffer -> LAN -> mbuffer -> zfs_recv -- Adam
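
A minimal sketch of that pipeline (hostnames, port, dataset names, and buffer sizes are all illustrative):

  receiver# mbuffer -I 9090 -s 128k -m 1G | zfs recv -F tank/backup
  sender#   zfs send tank/data@snap | mbuffer -s 128k -m 1G -O receiver:9090

mbuffer's role is to decouple the bursty send stream from the network so that neither side stalls the other.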

Re: [zfs-discuss] zfs code and fishworks "fork"

2009-10-26 Thread Adam Leventhal
integrated to ON as you can see from the consistent work of Eric Schrock. Adam -- Adam Leventhal, Fishworks http://blogs.sun.com/ahl

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-25 Thread Adam Cheal
So, while we are working on resolving this issue with Sun, let me approach this from another perspective: what kind of controller/drive ratio would be the minimum recommended to support a functional OpenSolaris-based archival solution? Given the following: - the vast majority of IO to the s

Re: [zfs-discuss] Checksums

2009-10-25 Thread Adam Leventhal
take a substantial hit in throughput moving from one to the other. Tim, That all really depends on your specific system and workload. As with any performance related matter experimentation is vital for making your final decision. Adam -- Adam Leventhal, Fishworks

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-24 Thread Adam Cheal
The controller connects to two disk shelves (expanders), one per port on the card. If you look back in the thread, you'll see our zpool config has one vdev per shelf. All of the disks are Western Digital (model WD1002FBYS-18A6B0) 1TB 7.2K, firmware rev. 03.00C06. Without actually matching up the

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-24 Thread Adam Cheal
The iostat I posted previously was from a system on which we had already tuned the zfs:zfs_vdev_max_pending queue depth down to 10 (as visible by the max of about 10 in actv per disk). I reset this value in /etc/system to 7, rebooted, and started a scrub. iostat output showed busier disks (%b is higher, which
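
For reference, the tunable can be set persistently or adjusted on a live kernel; the value shown matches the experiment above:

  In /etc/system (takes effect at next boot):
    set zfs:zfs_vdev_max_pending = 7

  Or live, via mdb:
    # echo zfs_vdev_max_pending/W0t7 | mdb -kw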

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Adam Cheal
Here is an example of the pool config we use:

  # zpool status
    pool: pool002
   state: ONLINE
   scrub: scrub stopped after 0h1m with 0 errors on Fri Oct 23 23:07:52 2009
  config:
          NAME        STATE     READ WRITE CKSUM
          pool002     ONLINE       0     0     0
            raidz2    ONLINE

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Adam Cheal
And therein lies the issue. The excessive load that causes the IO issues is almost always generated locally from a scrub or a local recursive "ls" used to warm up the SSD-based zpool cache with metadata. The regular network IO to the box is minimal and is very read-centric; once we load the box

Re: [zfs-discuss] Checksums

2009-10-23 Thread Adam Leventhal
all of the blocks be re-checksummed with a zfs send/receive on the receiving side? As with all property changes, new writes get the new properties. Old data is not rewritten. Adam -- Adam Leventhal, Fishworks http://blogs.sun.com/ahl

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Adam Cheal
LSI's sales literature on that card specs "128 devices" which I take with a few hearty grains of salt. I agree that with all 46 drives pumping out streamed data, the controller would be overworked BUT the drives will only deliver data as fast as the OS tells them to. Just because the speedometer

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Adam Cheal
I don't think there was any intention on Sun's part to ignore the problem...obviously their target market wants a performance-oriented box and the x4540 delivers that. Each 1068E controller chip supports 8 SAS PHY channels = 1 channel per drive = no contention for channels. The x4540 is a monste

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Adam Cheal
Just submitted the bug yesterday, on the advice of James, so I don't have a number you can refer to yet...the "change request" number is 6894775 if that helps or is directly related to the future bugid. From what I've seen/read this problem has been around for a while but only rears its ugly head

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Adam Cheal
Our config is:
- OpenSolaris snv_118 x64
- 1 x LSISAS3801E controller
- 2 x 23-disk JBOD (fully populated, 1TB 7.2k SATA drives)
Each of the two external ports on the LSI connects to a 23-disk JBOD. ZFS-wise we use 1 zpool with 2 x 22-disk raidz2 vdevs (1 vdev per JBOD). Each zpool has one ZFS filesyst
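
That layout corresponds to a pool built roughly as follows; device names are hypothetical, and each raidz2 group would list all 22 disks of its JBOD:

  # zpool create pool002 \
      raidz2 c1t0d0 c1t1d0 [...] c1t21d0 \
      raidz2 c2t0d0 c2t1d0 [...] c2t21d0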

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-22 Thread Adam Cheal
I've filed the bug, but was unable to include the "prtconf -v" output as the comments field only accepted 15000 chars total. Let me know if there is anything else I can provide/do to help figure this problem out as it is essentially preventing us from doing any kind of heavy IO to these pools,

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-22 Thread Adam Cheal
James: We are running Phase 16 on our LSISAS3801E's, and have also tried the recently released Phase 17 but it didn't help. All firmware NVRAM settings are default. Basically, when we put the disks behind this controller under load (e.g. scrubbing, recursive ls on a large ZFS filesystem) we get th

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-22 Thread Adam Cheal
Cindy: How can I view the bug report you referenced? Standard methods show me the bug number is valid (6694909) but no content or notes. We are having similar messages appear with snv_118 with a busy LSI controller, especially during scrubbing, and I'd be interested to see what they mentioned in

Re: [zfs-discuss] ZFS mirror resilver process

2009-10-18 Thread Adam Mellor
I too have seen this problem. I had done a zfs send from my main pool "terra" (6 disk raidz on seagate 1TB drives) to a mirror pair of WD Green 1TB drives. ZFS send was successful, however I noticed the pool was degraded after a while (~1 week) with one of the mirror disks constantly re-silverin

Re: [zfs-discuss] RAIDZ versus mirrroed

2009-09-17 Thread Adam Leventhal
block-interleaved parity
RAID-5  block-interleaved distributed parity
RAID-6  block-interleaved double distributed parity
raidz1 is most like RAID-5; raidz2 is most like RAID-6. There's no RAID level that covers more than two parity disks, but raidz3 is most like RAID-6, but wi

Re: [zfs-discuss] Problem with RAID-Z in builds snv_120 - snv_123

2009-09-03 Thread Adam Leventhal
errors that I should take a look at? Absolutely not. That is an unrelated issue. This problem is isolated to RAID-Z. > And good luck with the fix for build 124. Are we talking days or weeks for the fix to be available, do you think? :) -- Days or hours. Adam -- Adam Le

[zfs-discuss] Problem with RAID-Z in builds snv_120 - snv_123

2009-09-03 Thread Adam Leventhal
description of the two issues. This is for interest only and does not contain additional discussion of symptoms or prescriptive action. Adam ---8<--- 1. In situations where a block read from a RAID-Z vdev fails to checksum but there were no errors from any of the child vdevs (e.g. hard driv

Re: [zfs-discuss] 7110: Would it self upgrade the system zpool?

2009-09-02 Thread Adam Leventhal
Hi Trevor, We intentionally install the system pool with an old ZFS version and don't provide the ability to upgrade. We don't need or use (or even expose) any of the features of the newer versions so using a newer version would only create problems rolling back to earlier relea

Re: [zfs-discuss] snv_110 -> snv_121 produces checksum errors on Raid-Z pool

2009-09-02 Thread Adam Leventhal
hope to have an update to the list either later today or tomorrow. Adam -- Adam Leventhal, Fishworks http://blogs.sun.com/ahl

Re: [zfs-discuss] snv_110 -> snv_121 produces checksum errors on Raid-Z pool

2009-09-01 Thread Adam Leventhal
Hi James, After investigating this problem a bit I'd suggest avoiding deploying RAID-Z until this issue is resolved. I anticipate having it fixed in build 124. Apologies for the inconvenience. Adam On Aug 28, 2009, at 8:20 PM, James Lever wrote: On 28/08/2009, at 3:23 AM, Adam Leve

Re: [zfs-discuss] change raidz1 to raidz2 with BP rewrite?

2009-08-30 Thread Adam Leventhal
But while it might be satisfying to add another request for it, Matt is already cranking on it as fast as he can and more requests for it are likely to have the opposite of the intended effect. Adam -- Adam Leventhal, Fishworks http

Re: [zfs-discuss] change raidz1 to raidz2 with BP rewrite?

2009-08-29 Thread Adam Leventhal
Will BP rewrite allow adding a drive to raidz1 to get raidz2? And what is the status of BP rewrite? Far away? Not started yet? Planning? BP rewrite is an important component technology, but there's a bunch beyond that. It's not a high priority right now for us at Sun. Adam -- Adam

Re: [zfs-discuss] snv_110 -> snv_121 produces checksum errors on Raid-Z pool

2009-08-27 Thread Adam Leventhal
Hey Gary, There appears to be a bug in the RAID-Z code that can generate spurious checksum errors. I'm looking into it now and hope to have it fixed in build 123 or 124. Apologies for the inconvenience. Adam On Aug 25, 2009, at 5:29 AM, Gary Gendel wrote: I have a 5-500GB disk R

Re: [zfs-discuss] Using consumer drives in a zraid2

2009-08-26 Thread Adam Sherman
But the real question is whether the "enterprise" drives would have avoided your problem. A. -- Adam Sherman +1.613.797.6819 On 2009-08-26, at 11:38, Troels Nørgaard Nielsen wrote: Hi Tim Cook. If I was building my own system again, I would prefer not to go with consumer har

Re: [zfs-discuss] ZFS configuration input needed.

2009-08-21 Thread Adam Sherman
raidz2 vdevs, then you can even do better with copies=3 ;-) Maybe this is noted somewhere, but I did not realize that "copies" invoked logic that distributed the copies among vdevs? Can you please provide some pointers about this? Thanks, A. -- Adam Sherman CTO, V

Re: [zfs-discuss] Can ZFS dynamically grow pool sizes? (re: Windows Home Server)

2009-08-12 Thread Adam Sherman
I believe you will get .5 TB in this example, no? A. -- Adam Sherman +1.613.797.6819 On 2009-08-12, at 16:44, Erik Trimble wrote: Eric D. Mudama wrote: On Wed, Aug 12 at 12:11, Erik Trimble wrote: Anyways, if I have a bunch of different size disks (1.5 TB, 1.0 TB, 500 GB, etc), can I

Re: [zfs-discuss] SSD (SLC) for cache...

2009-08-12 Thread Adam Leventhal
L2ARC. Save your money. That's our assessment, but it's highly dependent on the specific characteristics of the MLC NAND itself, the SSD controller, and, of course, the workload. Adam -- Adam Leventhal, Fishworks

Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-10 Thread Adam Sherman
coming in, so space won't be an issue. I'd like to have the CF cards as read-only as possible though. By sharable, what do you mean exactly? Thanks a lot for the advice, A. -- Adam Sherman CTO, Versature Corp. Tel: +1.877.498.3772 x113

Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-07 Thread Adam Sherman
d the new-style ZFS-based "boot environments"? Is there going to be a difference for me? I plan to run OSOL, latest. A. -- Adam Sherman CTO, Versature Corp. Tel: +1.877.498.3772 x113

Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-07 Thread Adam Sherman
Thanks for everyone's input! A. -- Adam Sherman CTO, Versature Corp. Tel: +1.877.498.3772 x113

Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-06 Thread Adam Sherman
Excellent advice, thanks Ian. A. -- Adam Sherman +1.613.797.6819 On 2009-08-06, at 15:16, Ian Collins wrote: Adam Sherman wrote: On 4-Aug-09, at 16:54, Ian Collins wrote: Use a CompactFlash card (the board has a slot) for root, 8 drives in raidz2 tank, backup the root regularly If

Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-06 Thread Adam Sherman
e a cheaper one that takes only 1 CF card? I just ordered a pair of the Syba units, cheap enough to test out anyway. Now to find some reasonably priced 8GB CompactFlash cards… Thanks, A. -- Adam Sherman CTO, Versature Corp. Tel: +1.877.498.3772 x113

Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-06 Thread Adam Sherman
this. This product looks really interesting: http://www.addonics.com/products/flash_memory_reader/ad2sahdcf.asp But I can't confirm it will show both cards as separate disks… A. -- Adam Sherman CTO, Versature Corp. Tel: +1.877.498.3772 x113

Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-06 Thread Adam Sherman
don't think you can move the bulk - /usr. See: http://docs.sun.com/source/820-4893-13/compact_flash.html#50589713_78631 Good link. So I suppose I can move /var out and that would deal with most (all?) of the writes. Good plan! A. -- Adam Sherman CTO, Versature Corp. Tel: +1.877.498

Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-05 Thread Adam Sherman
Good idea. Of course, my system only has a single x16 PCI-E slot in it. :) A. -- Adam Sherman CTO, Versature Corp. Tel: +1.877.498.3772 x113

Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-05 Thread Adam Sherman
hot. I've used them on a few machines, opensolaris and freebsd. I'm a big fan of compact flash. What about USB sticks? Is there a difference in practice? Thanks for the advice, A. -- Adam Sherman CTO, Versature Corp. Tel: +1.877.498.3772 x113

Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-05 Thread Adam Sherman
500GB-7200-ST9500430SS-602367/?matched_search=ST9500430SS Which retailer is that? A. -- Adam Sherman CTO, Versature Corp. Tel: +1.877.498.3772 x113

Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-05 Thread Adam Sherman
be golden there. You are suggesting booting from a mirrored pair of CF cards? I'll have to wait until I see the system to know if I have room, but that's a good idea. I've got lots of unused SATA ports. Thanks, A. -- Adam Sherman CTO, Versature C

Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-04 Thread Adam Sherman
$350 CDN for the 500GB model, would have put this system way over budget. A. -- Adam Sherman CTO, Versature Corp. Tel: +1.877.498.3772 x113

Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-04 Thread Adam Sherman
On 4-Aug-09, at 16:08, Bob Friesenhahn wrote: On Tue, 4 Aug 2009, Adam Sherman wrote: 4. Use a CompactFlash card (the board has a slot) for root, 8 drives in raidz2 tank, backup the root regularly If booting/running from CompactFlash works, then I like this one. Backing up root should be

[zfs-discuss] Pool Layout Advice Needed

2009-08-04 Thread Adam Sherman
bootloader on the CF card in order to have root on the raidz2 tank 5.5. Figure out how to have the kernel and bootloader on the CF card in order to have 4 pairs of mirrored drives in a tank, supposing #2 doesn't work Comments, suggestions, questions, criticism? Thanks, A. -- Adam Sh

Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40

2009-07-31 Thread Adam Sherman
, Joyent Inc. I believe I have about a TB of data on at least one of Jason's pools and it seems to still be around. ;) A. -- Adam Sherman CTO, Versature Corp. Tel: +1.877.498.3772 x113

[zfs-discuss] I Still Have My Data

2009-07-31 Thread Adam Sherman
My test setup of 8 x 2G virtual disks under VirtualBox on top of Mac OS X is running nicely! I haven't lost a *single* byte of data. ;) A. -- Adam Sherman CTO, Versature Corp. Tel: +1.877.498.3772 x113

Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-27 Thread Adam Sherman
to ignore the sync/flush command. Caching is still enabled (it wasn't the problem). Thanks! As Russell points out in the last post to that thread, it doesn't seem possible to do this with virtual SATA disks? Odd. A. -- Adam Sherman CTO, Versature Cor

Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-27 Thread Adam Sherman
suspicion they all behave similarly dangerously, but actual data would be useful. Also, I think it may have already been posted, but I haven't found the option to disable VirtualBox' disk cache. Anyone have the incantation handy? Thanks, A -- Adam Sherman CTO, Versature Corp.

Re: [zfs-discuss] triple-parity: RAID-Z3

2009-07-23 Thread Adam Leventhal
> it is also much slower under other. > IIRC some builds ago there were some fixes integrated so maybe it is different now. Absolutely. I was talking more or less about optimal timing. I realize that due to the priorities within ZFS and real world loads that it can take far longer. A

Re: [zfs-discuss] SSD's and ZFS...

2009-07-23 Thread Adam Sherman
In the context of a low-volume file server, for a few users, is the low-end Intel SSD sufficient? A. -- Adam Sherman +1.613.797.6819 On 2009-07-23, at 14:09, Greg Mason wrote: I think it is a great idea, assuming the SSD has good write performance. This one claims up to 230MB/s read and

Re: [zfs-discuss] triple-parity: RAID-Z3

2009-07-22 Thread Adam Leventhal
ion; raidz2, quadratic; now raidz3 is N-cubed. There's really no way around it. Fortunately with proper scrubbing encountering data corruption in one stripe on three different drives is highly unlikely. Adam -- Adam Leventhal, Fishworks http://blogs.sun.com/ahl

Re: [zfs-discuss] triple-parity: RAID-Z3

2009-07-21 Thread Adam Leventhal
Don't hear about triple-parity RAID that often:
Author: Adam Leventhal
Repository: /hg/onnv/onnv-gate
Latest revision: 17811c723fb4f9fce50616cb740a92c8f6f97651
Total changesets: 1
Log message: 6854612 triple-parity RAID-Z
http://mail.opensolaris.org/pipermail/onnv-notify/2009-July/ 0

Re: [zfs-discuss] triple-parity: RAID-Z3

2009-07-21 Thread Adam Leventhal
form of 'optional' I/Os purely for the purpose of coalescing writes into larger chunks. I hope that's clear; if it's not, stay tuned for the aforementioned blog post. Adam -- Adam Leventhal, Fishworks http://blogs.sun.com/ahl

Re: [zfs-discuss] triple-parity: RAID-Z3

2009-07-21 Thread Adam Leventhal
make sure that the parts already developed are truly enterprise-grade. While I don't disagree that the focus for ZFS should be ensuring enterprise-class reliability and performance, let me assure you that requirements are driven by the market and not by marketing. Adam -- Adam

Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-21 Thread Adam Sherman
to me when putting a mess of them into a SAS JBOD with an expander? Thanks for everyone's great feedback, this thread has been highly educational. A. -- Adam Sherman CTO, Versature Corp. Tel: +1.877.498.3772 x113

Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-17 Thread Adam Sherman
, why is that one cheaper than: http://www.provantage.com/lsi-logic-lsi00124~7LSIG03W.htm Just newer? A. -- Adam Sherman CTO, Versature Corp. Tel: +1.877.498.3772 x113

Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Adam Sherman
s for my X4100s: http://sunsolve.sun.com/handbook_private/validateUser.do?target=Devices/SCSI/SCSI_PCIX_SAS_SATA_HBA $280 or so, looks like. Might be overkill for me though. A. -- Adam Sherman CTO, Versature Corp. Tel: +1.877.498.3772 x113

Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Adam Sherman
solutions I should have a look at to get >=12 SATA disks externally attached to my systems? Thanks! A. -- Adam Sherman CTO, Versature Corp. Tel: +1.877.498.3772 x113

Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Adam Sherman
ly interested in wrt management uses of SES? I'm really just exploring. Where can I read about how FMA is going to help with failures in my setup? Thanks, A. -- Adam Sherman CTO, Versature Corp. Tel: +1.877.498.3772 x113

Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Adam Sherman
Another thought in the same vein, I notice many of these systems support "SES-2" for management. Does this do anything useful under Solaris? Sorry for these questions, I seem to be having a tough time locating relevant information on the web. Thanks, A. -- Adam Sherman CTO,

Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Adam Sherman
are you using against it? Thanks for pointing to relevant documentation. The manual for the Supermicro cases [1, 2] does a pretty good job IMO explaining the different options. See page D-14 and on in the 826 manual, or page D-11 and on in the 846 manual. I'll read through that, thanks fo

[zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Adam Sherman
Thanks for pointing to relevant documentation. A. -- Adam Sherman CTO, Versature Corp. Tel: +1.877.498.3772 x113

Re: [zfs-discuss] 7110 questions

2009-06-18 Thread Adam Leventhal
Hey Lawrence, Make sure you're running the latest software update. Note that this forum is not the appropriate place to discuss support issues. Please contact your official Sun support channel. Adam On Thu, Jun 18, 2009 at 12:06:02PM -0700, lawrence ho wrote: > We have a 7110 on try

Re: [zfs-discuss] 7110 questions

2009-06-18 Thread Adam Leventhal
10; it has plenty of PCI slots. Ditto. > finally, one question - I presume that I need to devote a pair of disks to the OS, so I really only get 14 disks for data. Correct? That's right. We market the 7110 as either 2TB = 146GB x 14 or 4.2TB = 300GB x 14

Re: [zfs-discuss] SAS 15K drives as L2ARC

2009-05-06 Thread Adam Leventhal
Absolutely right. The L2ARC is for accelerating reads only and will not affect write performance. Adam -- Adam Leventhal, Fishworks http://blogs.sun.com/ahl
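
For context, cache and log devices both attach at the pool level, and it is the log (slog) device, not the L2ARC, that helps synchronous writes (pool and device names are illustrative):

  # zpool add tank cache c4t0d0    (L2ARC: accelerates reads)
  # zpool add tank log c4t1d0      (slog: accelerates synchronous writes)
  # zpool iostat -v tank           (lists both devices and their traffic)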

Re: [zfs-discuss] SSD - slow down with age

2009-02-16 Thread Adam Leventhal
ting our use of SSDs with ZFS as a ZIL device, an L2ARC device, and eventually as primary storage. We'll first focus on the specific SSDs we certify for use in our general purpose servers and the Sun Storage 7000 series, and help influence the industry to move to standards that we

Re: [zfs-discuss] SSD drives in Sun Fire X4540 or X4500 for dedicated ZIL device

2009-01-23 Thread Adam Leventhal
This is correct, and you can read about it here: http://blogs.sun.com/ahl/entry/fishworks_launch Adam On Fri, Jan 23, 2009 at 05:03:57PM +, Ross Smith wrote: > That's my understanding too. One (STEC?) drive as a write cache, basically a write optimised SSD. And cheaper, l

Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-19 Thread Adam Leventhal
the advantage of being far more dynamic and of only applying the space tax in situations where it actually applies. Adam -- Adam Leventhal, Fishworks http://blogs.sun.com/ahl

Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-19 Thread Adam Leventhal
id layout... Yes, I'm not saying it shouldn't be done. I'm asking what the right answer might be. Adam -- Adam Leventhal, Fishworks http://blogs.sun.com/ahl
