Re: [zfs-discuss] zfs send to older version

2012-10-24 Thread Carson Gaspar
On 10/24/12 3:59 AM, Darren J Moffat wrote: So in this case you should have a) created the pool with a version that matches the pool version of the backup server and b) made sure you created the ZFS file systems with a version that is supported by the backup server. And AI allows you to set the
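The advice above amounts to pinning versions explicitly at creation time. A minimal print-only sketch of the idea; the pool name, device names, and version numbers here are invented examples (check `zpool upgrade -v` and `zfs upgrade -v` on the backup server for the real ones):

```shell
# Dry-run sketch: print zpool/zfs creation commands pinned to versions an
# older backup server can receive. Pool name, devices, and versions are
# made-up examples -- nothing is executed against real storage here.
plan_compatible_pool() {
  pool_ver=$1
  fs_ver=$2
  printf 'zpool create -o version=%s tank mirror c0t0d0 c0t1d0\n' "$pool_ver"
  printf 'zfs create -o version=%s tank/backup\n' "$fs_ver"
}
plan_compatible_pool 22 4
```

Run the printed commands only after confirming both versions against the backup server's supported maximums.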

Re: [zfs-discuss] Recommendation for home NAS external JBOD

2012-06-18 Thread Carson Gaspar
On 6/18/12 4:07 PM, Koopmann, Jan-Peter wrote: Thanks. Just noticed that the Hitachi 3TB drives are not available. The 4TB ones are but with 512b emulated only. However I can get Barracudas 7200.14 with supposedly real 4k quite cheap. Anyone any experience with those? I might be getting one or tw

Re: [zfs-discuss] Recommendation for home NAS external JBOD

2012-06-18 Thread Carson Gaspar
On 6/18/12 12:19 AM, Koopmann, Jan-Peter wrote: Hi Carson, I have 2 Sans Digital TR8X JBOD enclosures, and they work very well. They also make a 4-bay TR4X. http://www.sansdigital.com/towerraid/tr4xb.html http://www.sansdigital.com/towerraid/tr8xb.html looks nice! The only th

Re: [zfs-discuss] Recommendation for home NAS external JBOD

2012-06-17 Thread Carson Gaspar
On 6/17/12 6:36 PM, Timothy Coalson wrote: No problem, and yes, I think that should work. One thing to keep in mind, though, is that if the internals of the enclosure simply split the multilane SAS cable into 4 connectors without an expander, and you use SATA drives, the controller will use SAT

Re: [zfs-discuss] Recommendation for home NAS external JBOD

2012-06-17 Thread Carson Gaspar
On 6/17/12 3:21 PM, Koopmann, Jan-Peter wrote: Hi Tim, you might be able to use an adapter to the SFF-8088 external 4 lane SAS connector, which may increase your options. So what you are saying is that something like this will do the trick? http://www.pc-pitstop.com/sata_enclosu

Re: [zfs-discuss] Is there an actual newsgroup for zfs-discuss?

2012-06-11 Thread Carson Gaspar
On 6/11/12 3:12 PM, Alan Hargreaves wrote: There is a ZFS Community on the Oracle Communities that was just kicked off this month - https://communities.oracle.com/portal/server.pt/community/oracle_solaris_zfs_file_system/526 Thanks for the heads up, but it's just another horrid Oracle web UI wi

Re: [zfs-discuss] Unexpected error adding a cache device to existing pool

2012-05-14 Thread Carson Gaspar
On 5/14/12 2:02 AM, Ian Collins wrote: Adding the log was OK: zpool add -f export log mirror c10t3d0s0 c10t4d0s0 But adding the cache fails: zpool add -f export cache c10t3d0s1 c10t4d0s1 invalid vdev specification the following errors must be manually repaired: /dev/dsk/c10t3d0s2 is part of ac

Re: [zfs-discuss] cluster vs nfs

2012-04-26 Thread Carson Gaspar
On 4/26/12 2:17 PM, J.P. King wrote: Shared storage is evil (in this context). Corrupt the storage, and you have no DR. Now I am confused. We're talking about storage which can be used for failover, aren't we? In which case we are talking about HA not DR. Depends on how you define DR - we h

Re: [zfs-discuss] cluster vs nfs

2012-04-26 Thread Carson Gaspar
On 4/25/12 10:10 PM, Richard Elling wrote: On Apr 25, 2012, at 8:30 PM, Carson Gaspar wrote: And applications that don't pin the mount points, and can be idled during the migration. If your migration is due to a dead server, and you have pending writes, you have no choice but to reboo

Re: [zfs-discuss] cluster vs nfs

2012-04-25 Thread Carson Gaspar
On 4/25/12 6:57 PM, Paul Kraus wrote: On Wed, Apr 25, 2012 at 9:07 PM, Nico Williams wrote: On Wed, Apr 25, 2012 at 7:37 PM, Richard Elling wrote: Nothing's changed. Automounter + data migration -> rebooting clients (or close enough to rebooting). I.e., outage. Uhhh, not if you

Re: [zfs-discuss] [vserver] hybrid zfs pools as iSCSI targets for vserver

2011-08-07 Thread Carson Gaspar
maximum memory page size and is limited to no more than 4KB. iSCSI appears to acknowledge every individual block that is sent. That means the most data one can stream without an ACK is 4KB. That means the throughput is limited by the latency of the network rather than the bandwidth. I am _far_
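The quoted claim (one 4 KB block in flight per acknowledgement) would make throughput a function of round-trip latency rather than bandwidth. A quick sanity check of that arithmetic, under the quoted assumption; as the replies in this thread point out, TCP windowing means the bound does not actually apply:

```shell
# If only `blk` bytes can be outstanding per network round trip, throughput
# is blk / rtt regardless of link bandwidth. Prints decimal MB/s.
ack_bound_mbps() {
  awk -v blk="$1" -v rtt_ms="$2" 'BEGIN { printf "%.1f", blk / (rtt_ms / 1000) / 1e6 }'
}
ack_bound_mbps 4096 0.1   # 4 KiB per 0.1 ms RTT -> 41.0 MB/s
```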

Re: [zfs-discuss] [vserver] hybrid zfs pools as iSCSI targets for vserver

2011-08-07 Thread Carson Gaspar
On 8/7/11 6:36 AM, Roy Sigurd Karlsbakk wrote: That's why, back in 1992, the sliding window protocol was created (http://tools.ietf.org/html/rfc1323), so that a peer won't wait for a TCP ACK before resuming operation. It was part of TCP _long_ before that (it was never as stupid as XMODEM ;

Re: [zfs-discuss] dual protocol on one file system?

2011-03-17 Thread Carson Gaspar
On 3/17/2011 8:11 AM, David Magda wrote: From: Paul Kraus [mailto:p...@kraus-haus.org] [...] 2. Unix / Solaris limitation of 16 / 32 group membership #2 is fixed in OpenSolaris as of snv_129: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4088757 The new limit is 1024--the

Re: [zfs-discuss] fmadm faulty not showing faulty/offline disks?

2011-02-17 Thread Carson Gaspar
On 2/16/11 9:58 PM, Krunal Desai wrote: When I try to do a SMART status read (more than just a simple identify), looks like the 1068E drops the drive for a little bit. I bought the Intel-branded LSI SAS3081E: Current active firmware version is 0120 (1.32.00) Firmware image's version is MPTFW

Re: [zfs-discuss] fmadm faulty not showing faulty/offline disks?

2011-02-02 Thread Carson Gaspar
On 2/2/11 5:47 PM, Krunal Desai wrote: Fails for me, my version does not recognize the 'sat' option. I've been using -d scsi: movax@megatron:~# smartctl -h smartctl version 5.36 [i386-pc-solaris2.8] Copyright (C) 2002-6 Bruce Allen So build the current version of smartmontools. As you should

Re: [zfs-discuss] fmadm faulty not showing faulty/offline disks?

2011-02-02 Thread Carson Gaspar
On 2/2/11 5:43 PM, Krunal Desai wrote: I updated firmware on both of my USAS-L8i (LSI1068E based), and while controller numbering has shifted around in Solaris (went from c10/c11 to c11/c12, not a big deal I think), suddenly smartctl is able to pull temperatures. Can't get a full SMART listing,

Re: [zfs-discuss] fmadm faulty not showing faulty/offline disks?

2011-02-02 Thread Carson Gaspar
On 2/1/11 5:52 PM, Krunal Desai wrote: SMART status was reported healthy as well (got smartctl kind of working), but I cannot read the SMART data of my disks behind the 1068E due to limitations of smartmontools I guess. (e.g. 'smartctl -d scsi -a /dev/rdsk/c10t0d0' gives me serial #, model, and

Re: [zfs-discuss] How to avoid striping ?

2010-10-18 Thread Carson Gaspar
eally"? Use metattach to grow a metadevice or soft partition. Use growfs to grow UFS on the grown device. He is probably referring to the fact that growfs locks the filesystem.

Re: [zfs-discuss] [mdb-discuss] mdb -k - I/O usage

2010-09-10 Thread Carson Gaspar
On 9/10/10 4:16 PM, Piotr Jasiukajtis wrote: Ok, now I know it's not related to the I/O performance, but to the ZFS itself. At some time all 3 pools were locked in that way: extended device statistics errors --- r/s w/s kr/s kw/s wait actv wsv

Re: [zfs-discuss] preparing for future drive additions

2010-07-14 Thread Carson Gaspar
Cindy Swearingen wrote: Hi Daniel, No conversion from a mirrored to RAIDZ configuration is available yet. Well... you can do it, but it's a bit byzantine, and leaves you without redundancy during the migration. 1) Add your new disks 2) Create a sparse file at least as large as your smallest
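The procedure being outlined above can be sketched end to end. A hedged, print-only sketch with invented pool and device names; the sparse file stands in for one raidz member so the new pool can be created, and the real steps depend on your disk sizes and should only be attempted with current backups:

```shell
# Print-only sketch of the mirror -> raidz migration via a sparse file.
# The file is offlined immediately so no data ever lands on it, leaving
# the new pool deliberately degraded until a real disk replaces it.
plan_mirror_to_raidz() {
  cat <<'EOF'
mkfile -n 1000g /var/tmp/fake-disk               # sparse, >= smallest disk
zpool create newpool raidz c1t0d0 c1t1d0 /var/tmp/fake-disk
zpool offline newpool /var/tmp/fake-disk         # run degraded on purpose
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -d newpool
zpool destroy oldpool                            # frees a real disk...
zpool replace newpool /var/tmp/fake-disk c0t1d0  # ...which replaces the file
EOF
}
plan_mirror_to_raidz
```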

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-06-30 Thread Carson Gaspar
Ragnar Sundblad wrote: I was referring to the case where zfs has written data to the drive but still hasn't issued a cache flush, and before the cache flush the drive is reset. If zfs finally issues a cache flush and then isn't informed that the drive has been reset, data is lost. I hope this

Re: [zfs-discuss] SSDs adequate ZIL devices?

2010-06-16 Thread Carson Gaspar
Arne Jansen wrote: David Magda wrote: On Wed, June 16, 2010 10:44, Arne Jansen wrote: David Magda wrote: I'm not sure you'd get the same latency and IOps with disk that you can with a good SSD: http://blogs.sun.com/brendan/entry/slog_screenshots [...] Please keep in mind I'm talking ab

Re: [zfs-discuss] zfs/lofi/share panic

2010-05-27 Thread Carson Gaspar
Jan Kryl wrote: the bug (6798273) has been closed as incomplete with following note: "I cannot reproduce any issue with the given testcase on b137." So you should test this with b137 or newer build. There have been some extensive changes going to treeclimb_* functions, so the bug is probably fi

Re: [zfs-discuss] Removing disks from a ZRAID config?

2010-05-24 Thread Carson Gaspar
Forrest Aldrich wrote: I've seen this product mentioned before - the problem is, we use Veritas heavily on a public network and adding yet another software dependency would be a hard sell. :( Be very certain that you need synchronous replication before you do this. For some ACID systems it re

Re: [zfs-discuss] scsi messages and mpt warning in log - harmless, or indicating a problem?

2010-05-19 Thread Carson Gaspar
Willard Korfhage wrote: This afternoon, messages like the following started appearing in /var/adm/messages: May 18 13:46:37 fs8 scsi: [ID 365881 kern.info] /p...@0,0/pci8086,2...@1/pci15d9,a...@0 (mpt0): May 18 13:46:37 fs8 Log info 0x3108 received for target 5. May 18 13:46:37 fs8

Re: [zfs-discuss] Hard drives for ZFS NAS

2010-05-12 Thread Carson Gaspar
Miles Nordin wrote: "bh" == Brandon High writes: bh> From what I've read, the Hitachi and Samsung drives both bh> support CCTL, which is in the ATA-8 spec. There's no way to bh> toggle it on from OpenSolaris (yet) and it doesn't persist bh> through reboot so it's not really ide

Re: [zfs-discuss] Mirroring USB Drive with Laptop for Backup purposes

2010-05-05 Thread Carson Gaspar
Glenn Lagasse wrote: How about ease-of-use, all you have to do is plug in the usb disk and zfs will 'do the right thing'. You don't have to remember to run zfs send | zfs receive, or bother with figuring out what to send/recv etc etc etc. It should be possible to automate that via syseventd/s

Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-21 Thread Carson Gaspar
Nicolas Williams wrote: On Wed, Apr 21, 2010 at 01:03:39PM -0500, Jason King wrote: ISTR POSIX also doesn't allow a number of features that can be turned on with zfs (even ignoring the current issues that prevent ZFS from being fully POSIX compliant today). I think an additional option for the

Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-21 Thread Carson Gaspar
Richard Elling wrote: So you are saying that the OnTap .snapshot directory is equivalent to a symlink to $FSROOT/.zfs/snapshot? That would "solve" the directory shuffle problem. Not quite. It's equivalent(ish) to: cd "$MYDIR" && mkdir .snapshot && cd .snapshot for s in "$FSROOT"/.zfs/snapsho
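A hedged completion of the loop being quoted (the snippet is cut off mid-loop): build an OnTap-style `.snapshot` directory out of symlinks into the filesystem root's `.zfs/snapshot`. The `mydir`/`fsroot` names are stand-ins from the quoted fragment, and per Carson's point the result is only equivalent-ish, since `.zfs/snapshot` exists once per filesystem rather than per directory:

```shell
# Populate "$mydir/.snapshot" with one symlink per ZFS snapshot. Each link
# points at the snapshot root, not at the matching subdirectory inside the
# snapshot -- one reason this emulation of OnTap's .snapshot is imperfect.
make_dot_snapshot() {
  mydir=$1
  fsroot=$2
  mkdir -p "$mydir/.snapshot"
  for s in "$fsroot"/.zfs/snapshot/*; do
    [ -e "$s" ] || continue                     # no snapshots: glob unexpanded
    ln -sf "$s" "$mydir/.snapshot/$(basename "$s")"
  done
}
```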

Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-20 Thread Carson Gaspar
Nicolas Williams wrote: On Tue, Apr 20, 2010 at 04:28:02PM +, A Darren Dunham wrote: On Sat, Apr 17, 2010 at 09:03:33AM -0400, Edward Ned Harvey wrote: "zfs list -t snapshot" lists in time order. Good to know. I'll keep that in mind for my "zfs send" scripts but it's not relevant for the

Re: [zfs-discuss] SSD sale on newegg

2010-04-19 Thread Carson Gaspar
Bob Friesenhahn wrote: On Sun, 18 Apr 2010, Carson Gaspar wrote: Before (Mac OS 10.6.3 NFS client over GigE, local subnet, source file in RAM): carson:arthas 0 $ time tar jxf /Volumes/RamDisk/gcc-4.4.3.tar.bz2 real 92m33.698s user 0m20.291s sys 0m37.978s That's awful! ..

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Carson Gaspar
Edward Ned Harvey wrote: I'm saying that even a single pair of disks (maybe 4 disks if you're using cheap slow disks) will outperform a 1Gb Ethernet. So if your bottleneck is the 1Gb Ethernet, you won't gain anything (significant) by accelerating the stuff that isn't the bottleneck. And you a

Re: [zfs-discuss] SSD sale on newegg

2010-04-18 Thread Carson Gaspar
Carson Gaspar wrote: I just found an 8 GB SATA Zeus (Z4S28I) for £83.35 (~US$127) shipped to California. That should be more than large enough for my ZIL @home, based on zilstat. The web site says EOL, limited to current stock. http://www.dpieshop.com/stec-zeus-z4s28i-8gb-25-sata-ssd-solid

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-04-12 Thread Carson Gaspar
Carson Gaspar wrote: Miles Nordin wrote: "re" == Richard Elling writes: How do you handle the case when a hotplug SATA drive is powered off unexpectedly with data in its write cache? Do you replay the writes, or do they go down the ZFS hotplug write hole? If zfs never got

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-04-12 Thread Carson Gaspar
Miles Nordin wrote: "re" == Richard Elling writes: How do you handle the case when a hotplug SATA drive is powered off unexpectedly with data in its write cache? Do you replay the writes, or do they go down the ZFS hotplug write hole? If zfs never got a positive response to a cache flush,

Re: [zfs-discuss] SSD sale on newegg

2010-04-06 Thread Carson Gaspar
Erik Trimble wrote: On Tue, 2010-04-06 at 08:26 -0700, Anil wrote: Seems a nice sale on Newegg for SSD devices. Talk about choices. What's the latest recommendations for a log device? http://bit.ly/aL1dne The Vertex LE models should do well as ZIL (though not as well as an X25-E or a Zeus)

Re: [zfs-discuss] Diagnosing Permanent Errors

2010-04-06 Thread Carson Gaspar
Willard Korfhage wrote: Yes, I was hoping to find the serial numbers. Unfortunately, it doesn't show any serial numbers for the disk attached to the Areca raid card. Does Areca provide any Solaris tools that will show you the drive info? If you are using the Areca in JBOD mode, smartctl will f

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Carson Gaspar
Jeroen Roodhart wrote: The thread was started to get insight in behaviour of the F20 as ZIL. _My_ particular interest would be to be able to answer why perfomance doesn't seem to scale up when adding vmod-s... My best guess would be latency. If you are latency bound, adding additional paralle

Re: [zfs-discuss] RAIDZ2 configuration

2010-04-01 Thread Carson Gaspar
Brandon High wrote: On Thu, Apr 1, 2010 at 11:46 AM, Carson Gaspar wrote: "Nonsensical" may be a bit strong, but I can see no possible use case where a 3 disk raidz2 isn't better served by a 3-way mirror. Once bp_rewrite is done, you

Re: [zfs-discuss] RAIDZ2 configuration

2010-04-01 Thread Carson Gaspar
Cindy Swearingen wrote: If someone new to ZFS wants to take 3 old (but reliable) disks and make a raidz2 configuration for testing, we would not consider this is a nonsensical idea. You can then apply what you learn about ZFS space allocation and redundancy to a new configuration. "Nonsensical

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-30 Thread Carson Gaspar
Richard Elling wrote: On Mar 30, 2010, at 3:32 PM, Jeroen Roodhart wrote: If you are going to trick the system into thinking a volatile cache is nonvolatile, you might as well disable the ZIL -- the data corruption potential is the same. I'm sorry? I believe the F20 has a supercap or the like?

Re: [zfs-discuss] RAID10

2010-03-26 Thread Carson Gaspar
Slack-Moehrle wrote: And I should mention that I have a boot drive (500gb SATA) so I dont have to consider booting from the RAID, I just want to use it for storage. - Original Message - From: "Slack-Moehrle" To: "zfs-discuss" Sent: Friday, March 26, 2010 11:39:35 AM Subject: [zfs-disc

Re: [zfs-discuss] RAIDZ2 configuration

2010-03-25 Thread Carson Gaspar
Freddie Cash wrote: So, is it just a "standard" that hardware/software RAID setups require 3 drives for a RAID5 array? And 4 drives for RAID6? It's padding on the sharp edges. See my earlier post - a 2 disk RAID5 is silly, use a mirror. A 3 disk RAID6 is silly, use a 3-way mirror. Both are

Re: [zfs-discuss] RAIDZ2 configuration

2010-03-25 Thread Carson Gaspar
Bruno Sousa wrote: What do you mean by "Using fewer than 4 disks in a raidz2 defeats the purpose of raidz2, as you will always be in a degraded mode" ? Does it mean that having 2 vdevs with 3 disks won't be redundant in the event of a drive failure? Technically a 3 disk raidz2 won't be de
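The space arithmetic behind this point: an n-disk raidz with p parity disks stores (n - p) disks' worth of data, so a 3-disk raidz2 and a 3-way mirror both yield one disk of usable space, while the mirror reads and resilvers more cheaply. A small helper to check configurations (a sketch that ignores ZFS metadata overhead):

```shell
# Usable capacity of an n-disk raidz with p parity disks, ignoring ZFS
# metadata overhead: (n - p) * per-disk size.
raidz_usable() {
  awk -v n="$1" -v p="$2" -v sz="$3" 'BEGIN { print (n - p) * sz }'
}
raidz_usable 3 2 1   # 3-disk raidz2, 1 TB disks -> 1 TB, same as a 3-way mirror
raidz_usable 4 2 1   # 4 disks is the first size where raidz2 gains capacity
```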

Re: [zfs-discuss] ZFS on a 11TB HW RAID-5 controller

2010-03-24 Thread Carson Gaspar
Fajar A. Nugraha wrote: On Thu, Mar 25, 2010 at 10:31 AM, Carson Gaspar wrote: Fajar A. Nugraha wrote: You will do best if you configure the raid controller to JBOD. Problem: HP's storage controller doesn't support that mode. It does, ish. It forces you to create a bunch of single

Re: [zfs-discuss] ZFS on a 11TB HW RAID-5 controller

2010-03-24 Thread Carson Gaspar
Carson Gaspar wrote: Fajar A. Nugraha wrote: On Thu, Mar 25, 2010 at 1:02 AM, Edward Ned Harvey wrote: I think the point is to say: ZFS software raid is both faster and more reliable than your hardware raid. Surprising though it may be for a newcomer, I have statistics to back that up

Re: [zfs-discuss] ZFS on a 11TB HW RAID-5 controller

2010-03-24 Thread Carson Gaspar
Fajar A. Nugraha wrote: On Thu, Mar 25, 2010 at 1:02 AM, Edward Ned Harvey wrote: I think the point is to say: ZFS software raid is both faster and more reliable than your hardware raid. Surprising though it may be for a newcomer, I have statistics to back that up, Can you share it? You w

Re: [zfs-discuss] ZFS/OSOL/Firewire...

2010-03-18 Thread Carson Gaspar
Bob Friesenhahn wrote: On Thu, 18 Mar 2010, erik.ableson wrote: Ditto on the Linux front. I was hoping that Solaris would be the exception, but no luck. I wonder if Apple wouldn't mind lending one of the driver engineers to OpenSolaris for a few months... Perhaps the issue is the filesyst

Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-15 Thread Carson Gaspar
Someone wrote (I haven't seen the mail, only the unattributed quote): My guess is unit conversion and rounding. Your pool has 11 base 10 TB, which is 10.2445 base 2 TiB. Likewise your fs has 9 base 10 TB, which is 8.3819 base 2 TiB. Not quite. 11 x 10^12 =~ 10.004 x (1024^4). So, th
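Carson's correction in numbers: marketing terabytes are 10^12 bytes, while most tools report tebibytes (2^40 bytes). A one-liner to reproduce the ~10.004 figure above:

```shell
# Convert base-10 terabytes to base-2 tebibytes: tb * 10^12 / 2^40.
tb_to_tib() {
  awk -v tb="$1" 'BEGIN { printf "%.4f", tb * 1e12 / 1024 ^ 4 }'
}
tb_to_tib 11   # an "11 TB" pool shows up as ~10.0044 TiB
```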

Re: [zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-15 Thread Carson Gaspar
Tonmaus wrote: I am lacking 1 TB on my pool: u...@filemeister:~$ zpool list daten NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT daten 10T 3,71T 6,29T 37% 1.00x ONLINE - u...@filemeister:~$ zpool status daten pool: daten state: ONLINE scrub: none requested config: NAME

Re: [zfs-discuss] getting drive serial number

2010-03-08 Thread Carson Gaspar
Khyron wrote: I believe Richard Elling recommended "cfgadm -v". I'd also suggest "iostat -E", with and without "-n" for good measure. So that's "iostat -E" and "iostat -En". As long as you know the physical drive specification for the drive (ctd which appears to be c9t1d0 from the other e-m

Re: [zfs-discuss] Who is using ZFS ACL's in production?

2010-03-02 Thread Carson Gaspar
Tomas Ögren wrote: On 02 March, 2010 - Carson Gaspar sent me these 0,5K bytes: I strongly suggest that folks who are thinking about this examine what NetApp does when exporting NTFS security model qtrees via NFS. It constructs a mostly bogus set of POSIX permission info based on the ACL

Re: [zfs-discuss] Who is using ZFS ACL's in production?

2010-03-02 Thread Carson Gaspar
I strongly suggest that folks who are thinking about this examine what NetApp does when exporting NTFS security model qtrees via NFS. It constructs a mostly bogus set of POSIX permission info based on the ACL. All access is enforced based on the actual ACL. Sadly for NFSv3 clients there is no w

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-15 Thread Carson Gaspar
Richard Elling wrote: ... As you can see, so much has changed, hopefully for the better, that running performance benchmarks on old software just isn't very interesting. NB. Oracle's Sun OpenStorage systems do not use Solaris 10 and if they did, they would not be competitive in the market. The n

Re: [zfs-discuss] Recovering ZFS stops after syseventconfd can't fork

2009-12-23 Thread Carson Gaspar
Paul Armstrong wrote: I'm surprised at the number as well. Running it again, I'm seeing it jump fairly high just before the fork errors: bash-4.0# ps -ef | grep zfsdle | wc -l 20930 (the next run of ps failed due to the fork error). So maybe it is running out of processes. ZFS file data fro

Re: [zfs-discuss] mpt errors on snv 127

2009-12-01 Thread Carson Gaspar
Travis Tabbal wrote: If someone from Sun will confirm that it should work to use the mpt driver from 2009.06, I'd be willing to set up a BE and try it. I still have the snapshot from my 2009.06 install, so I should be able to mount that and grab the files easily enough. I tried, it doesn't work

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-30 Thread Carson Gaspar
Carson Gaspar wrote: Mark Johnson wrote: I think there are two different bugs here... I think there is a problem with MSIs and some variant of mpt card on xVM. These seem to be showing up as timeout errors. Disabling MSIs for this adapter seems to fix this problem. For folks seeing this

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-30 Thread Carson Gaspar
Mark Johnson wrote: I think there are two different bugs here... I think there is a problem with MSIs and some variant of mpt card on xVM. These seem to be showing up as timeout errors. Disabling MSIs for this adapter seems to fix this problem. For folks seeing this problem, what HBA adapter a

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-23 Thread Carson Gaspar
Travis Tabbal wrote: I have a possible workaround. Mark Johnson has been emailing me today about this issue and he proposed the following: You can try adding the following to /etc/system, then rebooting... set xpv_psm:xen_support_msi = -1 I am also running XVM, and after modifying /etc/syste

[zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-21 Thread Carson Gaspar
For all of those suffering from mpt timeouts in snv_127, I decided to give the ancient itmpt driver a whirl. It works fine, and in my brief testing a zfs scrub that would generate about 1 timeout every 2 minutes or so now runs with no problems. The downside is that lsiutil and raidctl both fai

Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread Carson Gaspar
On 10/26/09 5:33 PM, p...@paularcher.org wrote: I can't find much on gam_server on Solaris (couldn't find too much on it at all, really), and port_create is apparently a system call. (I'm not a developer--if I can't write it in BASH, Perl, or Ruby, I can't write it.) I appreciate the suggestions,

Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread Carson Gaspar
On 10/26/09 3:31 PM, Richard Elling wrote: How about a consumer for gvfs-monitor-dir(1) or gvfs-monitor-file(1)? :-) The docs are... ummm... "skimpy" is being rather polite. The docs I can find via Google say that they will launch some random unspecified daemons via d-bus (I assume gvfsd ans

Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread Carson Gaspar
On 10/25/09 5:38 PM, Paul Archer wrote: 5:12pm, Cyril Plisko wrote: while there is no inotify for Solaris, there are similar technologies available. Check port_create(3C) and gam_server(1) I can't find much on gam_server on Solaris (couldn't find too much on it at all, really), and port_cre

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-24 Thread Carson Gaspar
On 10/24/09 9:43 AM, Richard Elling wrote: OK, here we see 4 I/Os pending outside of the host. The host has sent them on and is waiting for them to return. This means they are getting dropped either at the disk or somewhere between the disk and the controller. When this happens, the sd driver w

Re: [zfs-discuss] PSARC 2009/571: ZFS deduplication properties

2009-10-24 Thread Carson Gaspar
On 10/24/09 8:37 AM, Richard Elling wrote: At LISA09 in Baltimore next week, Darren is scheduled to give an update on the ZFS crypto project. We should grab him, take him to our secret rendition site at Inner Harbor, force him into a comfy chair, and beer-board him until he confesses. I can su

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-22 Thread Carson Gaspar
On 10/22/09 4:07 PM, James C. McPherson wrote: Adam Cheal wrote: It seems to be timing out accessing a disk, retrying, giving up and then doing a bus reset? ... ugh. New bug time - bugs.opensolaris.org, please select Solaris / kernel / driver-mpt. In addition to the error messages and descript

[zfs-discuss] ZFS saved my data success story

2009-10-08 Thread Carson Gaspar
To recap for those who don't recall my plaintive cries for help, I lost a pool due to the following sequence of events: - One drive in my raidz array becomes flaky, has frequent "stuck" I/Os due to drive error recovery, trashing performance - I take flaky drive offline (zpool offline...) - I b

Re: [zfs-discuss] strange pool disks usage pattern

2009-10-02 Thread Carson Gaspar
Maurilio Longo wrote: I did try to use smartmontools, but it cannot report SMART logs nor start SMART tests, so I don't know how to look at their internal state. Really? That's odd... You could also have a firmware bug on your disks. You might try lowering the number of tagged commands per d

Re: [zfs-discuss] strange pool disks usage pattern

2009-10-02 Thread Carson Gaspar
Maurilio Longo wrote: the strange thing is that this is happening on several disks (can it be that are all failing?) Possible, but less likely. I'd suggest running some disk I/O tests, looking at the drive error counters before/after. What is the controller bug you're talking about? I'm run

Re: [zfs-discuss] strange pool disks usage pattern

2009-10-02 Thread Carson Gaspar
Maurilio Longo wrote: Hi, I have a pc with a MARVELL AOC-SAT2-MV8 controller and a pool made up of six disks in a raid-z pool with a hot spare. ... Now, the problem is that issuing an iostat -Cmnx 10 or any other time interval, I've seen, sometimes, a complete stall of disk I/O due to a d

Re: [zfs-discuss] Help importing pool with "offline" disk

2009-10-01 Thread Carson Gaspar
Also can someone tell me if I'm too late for an uberblock rollback to help me? Diffing "zdb -l" output between c7t0 and c7t1 I see: -txg=12968048 +txg=12968082 Is that too large a txg gap to roll back, or is it still possible? Carson Gaspar wrote: Carson Gaspar wro

Re: [zfs-discuss] Help importing pool with "offline" disk

2009-10-01 Thread Carson Gaspar
Carson Gaspar wrote: I'm booted back into snv118 (booting with the damaged pool disks disconnected so the host would come up without throwing up). After hot plugging the disks, I get: bash-3.2# /usr/sbin/zdb -eud media zdb: can't open media: File exists OK, things are now

Re: [zfs-discuss] Help importing pool with "offline" disk

2009-09-30 Thread Carson Gaspar
Carson Gaspar wrote: Carson Gaspar wrote: I'll also note that the kernel is certainly doing _something_ with my pool... from "iostat -n -x 5": extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 40.5 5.4 154

Re: [zfs-discuss] Help importing pool with "offline" disk

2009-09-30 Thread Carson Gaspar
Carson Gaspar wrote: I'll also note that the kernel is certainly doing _something_ with my pool... from "iostat -n -x 5": extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 40.5 5.4 1546.4 0.0 0.0 0.3

Re: [zfs-discuss] Help importing pool with "offline" disk

2009-09-30 Thread Carson Gaspar
Carson Gaspar wrote: Carson Gaspar wrote: Victor Latushkin wrote: Carson Gaspar wrote: is zdb happy with your pool? Try e.g. zdb -eud I'm booted back into snv118 (booting with the damaged pool disks disconnected so the host would come up without throwing up). After hot pluggin

Re: [zfs-discuss] Help importing pool with "offline" disk

2009-09-30 Thread Carson Gaspar
Carson Gaspar wrote: Victor Latushkin wrote: Carson Gaspar wrote: is zdb happy with your pool? Try e.g. zdb -eud I'm booted back into snv118 (booting with the damaged pool disks disconnected so the host would come up without throwing up). After hot plugging the disks, I get:

Re: [zfs-discuss] Help importing pool with "offline" disk

2009-09-30 Thread Carson Gaspar
Victor Latushkin wrote: Carson Gaspar wrote: zpool online media c7t0d0 j...@opensolaris:~# zpool online media c7t0d0 cannot open 'media': no such pool Already tried that ;-) -- This message posted from opensolaris.org D'oh! Of course, I should have been paying attention t

Re: [zfs-discuss] Help importing pool with "offline" disk

2009-09-30 Thread Carson Gaspar
> On Wed, 30 Sep 2009 11:01:13 PDT, Carson Gaspar wrote: >> zpool online media c7t0d0 > j...@opensolaris:~# zpool online media c7t0d0 > cannot open 'media': no such pool > Already tried that ;-) Perhaps you

Re: [zfs-discuss] Help importing pool with "offline" disk

2009-09-30 Thread Carson Gaspar
> >> zpool online media c7t0d0 > > j...@opensolaris:~# zpool online media c7t0d0 > > cannot open 'media': no such pool > > Already tried that ;-) D'oh! Of course, I should have been paying attention to the fact that the pool

Re: [zfs-discuss] Help importing pool with "offline" disk

2009-09-30 Thread Carson Gaspar
> zpool online media c7t0d0 j...@opensolaris:~# zpool online media c7t0d0 cannot open 'media': no such pool Already tried that ;-)

[zfs-discuss] Help importing pool with "offline" disk

2009-09-30 Thread Carson Gaspar
One of the disks in my RAIDZ array was behaving oddly (lots of bus errors) so I took it offline to replace it. I shut down the server, put in the replacement disk, and rebooted. Only to discover that a different drive had chosen that moment to fail completely. So I replace the failing (but not y

Re: [zfs-discuss] Adding new disks and ditto block behaviour

2009-09-17 Thread Carson Gaspar
Joe Toppi wrote: I have machine that had 2x 1TB drives in it. They were in the same zpool and that entire zpool is set to "copies=2". From what I understand this will store all my data twice, and if the SPA is doing its job right it will store the copies on different disks and store the checksum

Re: [zfs-discuss] zfs send older version?

2009-09-16 Thread Carson Gaspar
Erik Trimble wrote: You are correct in that restoring a full stream creates the appropriate versioned filesystem. That's not the problem. The /much/ more likely scenario is this: (1) Let's say I have a 2008.11 server. I back up the various ZFS filesystems, with both incremental and full stre

Re: [zfs-discuss] zfs send older version?

2009-09-16 Thread Carson Gaspar
Erik Trimble wrote: > I haven't seen this specific problem, but it occurs to me thus: For the reverse of the original problem, where (say) I back up a 'zfs send' stream to tape, then later on, after upgrading my system, I want to get that stream back. Does 'zfs receive' support reading a ver

Re: [zfs-discuss] Intel X25-E SSD in x4500 followup

2009-09-12 Thread Carson Gaspar
Carson Gaspar wrote: Except you replied to me, not to the person who has SSDs. I have dead standard hard disks, and the mpt driver is just not happy. After applying 141737-04 to my Sol 10 system, things improved greatly, and the constant bus resets went away. After upgrading to OpenSolaris 6

Re: [zfs-discuss] Intel X25-E SSD in x4500 followup

2009-09-12 Thread Carson Gaspar
James C. McPherson wrote: On Thu, 10 Sep 2009 12:31:11 -0700 Carson Gaspar wrote: Alex Li wrote: We finally resolved this issue by changing the LSI driver. For details, please refer to here http://enginesmith.wordpress.com/2009/08/28/ssd-faults-finally-resolved/ Anyone from Sun have any knowledge

Re: [zfs-discuss] Intel X25-E SSD in x4500 followup

2009-09-10 Thread Carson Gaspar
Alex Li wrote: We finally resolved this issue by changing the LSI driver. For details, please refer to http://enginesmith.wordpress.com/2009/08/28/ssd-faults-finally-resolved/ Anyone from Sun have any knowledge of when the open source mpt driver will be less broken? Things improved greatly for

Re: [zfs-discuss] Can ZFS dynamically grow pool sizes? (re: Windows Home Server)

2009-08-12 Thread Carson Gaspar
Erik Trimble wrote: Yes, if you stick (say) a 1.5TB, 1TB, and .5TB drive together in a RAIDZ, you will get only 1TB of usable space. Of course, there is always the ability to use partitions instead of the whole disk, but I'm not going to go into that. Suffice to say, RAIDZ (and practically
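The capacity arithmetic behind that claim is simple: a RAIDZ of N drives yields roughly (N-1) times the size of its smallest member. A quick sketch with the sizes from the example above:

```shell
# RAIDZ usable space ≈ (N - 1) * smallest member.
# Sizes in GB, taken from the example: 1.5TB, 1TB, 0.5TB.
SIZES="1500 1000 500"
MIN=9999999
N=0
for s in $SIZES; do
  N=$((N + 1))
  if [ "$s" -lt "$MIN" ]; then MIN=$s; fi
done
USABLE=$(( (N - 1) * MIN ))
echo "${USABLE} GB usable"    # → 1000 GB usable
```

With the 0.5TB drive as the smallest member, (3-1) x 0.5TB = 1TB usable, matching the figure quoted above; the extra capacity on the larger drives is simply wasted.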

Re: [zfs-discuss] Pool iscsi /zfs performance in opensolaris 0906

2009-08-04 Thread Carson Gaspar
Ross Walker wrote: On Aug 4, 2009, at 8:36 PM, Carson Gaspar wrote: Isn't the PERC 6/e just a re-branded LSI? LSI added SSD support recently. Yes, but the LSI support of SSDs is on later controllers. Please cite your source for that statement. The PERC 6/e is an LSI 1078. The LS

Re: [zfs-discuss] Pool iscsi /zfs performance in opensolaris 0906

2009-08-04 Thread Carson Gaspar
Ross Walker wrote: I get pretty good NFS write speeds with NVRAM (40MB/s 4k sequential write). It's a Dell PERC 6/e with 512MB onboard. ... there, dedicated slog device with NVRAM speed. It would be even better to have a pair of SSDs behind the NVRAM, but it's hard to find compatible SSDs for
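The "dedicated slog device with NVRAM speed" setup described above amounts to exporting a small NVRAM-backed LUN from the RAID controller and handing it to ZFS as a separate log device. A hedged sketch (pool and device names are hypothetical):

```shell
# Carve a small write-back-cached LUN on the PERC/LSI controller, then add
# it as a separate ZIL (slog) device so synchronous NFS writes land in NVRAM
# instead of waiting on the main pool disks.
zpool add tank log c2t0d0     # the NVRAM-backed LUN
zpool status tank             # the device now appears under a "logs" section
```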

Re: [zfs-discuss] Another user loses his pool (10TB) in this case and 40 days work

2009-07-25 Thread Carson Gaspar
Frank Middleton wrote: Finally, a number of posters blamed VB for ignoring a flush, but according to the evil tuning guide, without any application syncs, ZFS may wait up to 5 seconds before issuing a synch, and there must be all kinds of failure modes even on bare hardware where it never gets a

Re: [zfs-discuss] recover data after zpool create

2009-07-08 Thread Carson Gaspar
stephen bond wrote: can you provide an example of how to read from dd cylinder by cylinder? What's a cylinder? That's a meaningless term these days. You dd byte ranges. Pick whatever byte range you want. If you want mythical cylinders, fetch the cylinder size from "format" and use that as yo
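If you do want to mimic "cylinder by cylinder" reads, the recipe above is to turn the cylinder size reported by format(1m) into a byte range for dd. A sketch, with hypothetical geometry (12288 blocks of 512 bytes per cylinder; the actual numbers come from format on your disk):

```shell
# "Cylinders" are a fiction on modern disks; dd only cares about byte ranges.
# Turn a format(1m)-reported cylinder size into a dd block size and offset.
CYL_BLOCKS=12288                            # blocks/cylinder (hypothetical)
BLOCK_SIZE=512                              # bytes per block
CYL_BYTES=$((CYL_BLOCKS * BLOCK_SIZE))
echo "$CYL_BYTES bytes per cylinder"        # → 6291456 bytes per cylinder
# Read "cylinder" 10 from the raw device (commented out: needs a real disk;
# iseek= is the Solaris dd(1m) input-seek operand):
# dd if=/dev/rdsk/c0t0d0s2 of=cyl10.bin bs=$CYL_BYTES iseek=10 count=1
```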

Re: [zfs-discuss] Sans Digital Tower Raid TR8M

2009-07-03 Thread Carson Gaspar
Martin Englund wrote: I'm wondering if someone has tried using Sans Digital's Tower Raid TR8M[1] with ZFS (I'm especially curious about the bundled 2-port eSATA PCIe Host Bus Adapter) The port multiplier issue will probably prevent this from working right now, as someone else has already menti

Re: [zfs-discuss] SPARC SATA, please.

2009-06-25 Thread Carson Gaspar
Miles Nordin wrote: There's also been talk of two tools, MegaCli and lsiutil, which are both binary only and exist for both Linux and Solaris, and I think are used only with the 1078 cards but maybe not. lsiutil works with LSI chips that use the Fusion-MPT interface (SCSI, SAS, and FC), inclu

Re: [zfs-discuss] BugID formally known as 6746456

2009-06-24 Thread Carson Gaspar
Rob Healey wrote: Does anyone know if related problems to the panics dismissed as "duplicate of 6746456" ever resulted in Solaris 10 patches? It sounds like they were actually solved in OpenSolaris but S10 is still panicking predictably when Linux NFS clients try to change a nobody UID/GID on a Z

Re: [zfs-discuss] Best controller card for 8 SATA drives ?

2009-06-22 Thread Carson Gaspar
James C. McPherson wrote: Use raidctl(1m). For fwflash(1m), this is on the "future project" list purely because we've got much higher priority projects on the boil - if we couldn't use raidctl(1m) this would be higher up the list. Nice to see that raidctl can do that. Although I don't see a wa

Re: [zfs-discuss] Best controller card for 8 SATA drives ?

2009-06-22 Thread Carson Gaspar
James C. McPherson wrote: On Sun, 21 Jun 2009 19:01:31 -0700 As a member of the team which works on mpt(7d), I'm disappointed that you believe you need to use lsiutil to "fully access all the functionality" of the board. What gaps have you found in mpt(7d) and the standard OpenSolaris tools th

Re: [zfs-discuss] Best controller card for 8 SATA drives ?

2009-06-21 Thread Carson Gaspar
I'll chime in as a happy owner of the LSI SAS 3081E-R PCI-E board. It works just fine. You need to get "lsiutil" from the LSI web site to fully access all the functionality, and they cleverly hide the download link only under their FC HBAs on their support site, even though it works for everyth

Re: [zfs-discuss] zfs on 32 bit?

2009-06-14 Thread Carson Gaspar
Daniel Carosone wrote: This sounds like FUD. There's a comprehensive test suite, and it apparently passes. It's not exactly FUD. If you search the list archives, you'll find messages about multiple bugs in the 32-bit code. I strongly suspect that these have been fixed in the interim, but it

Re: [zfs-discuss] multiple devs for rpool

2009-06-10 Thread Carson Gaspar
Lori Alt wrote: A root pool is composed of one top-level vdev, which can be a mirror (i.e. 2 or more disks). A raidz vdev is not supported for the root pool yet. It might be supported in the future, but the timeframe is unknown at this time. The original poster was asking about a zpool of m
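Since the root pool must be a single top-level vdev, redundancy comes from mirroring rather than RAIDZ. A hedged sketch for SPARC Solaris of that era (slice and device names are hypothetical; root pools required SMI-labelled slices):

```shell
# Create a mirrored root pool at install time...
zpool create rpool mirror c0t0d0s0 c0t1d0s0
# ...or attach a second disk to an existing single-disk root pool,
# then make the new disk bootable (SPARC example):
zpool attach rpool c0t0d0s0 c0t1d0s0
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
```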
