Re: [zfs-discuss] extremely slow writes (with good reads)

2009-09-26 Thread paul
> This controller card, you have turned off any raid functionality, yes? ZFS > has total control of all discs, by itself? No hw raid intervening? > -- > This message posted from opensolaris.org > > yes, it's an LSI 150-6, with the BIOS turned off, which turns it into

Re: [zfs-discuss] Comments on home OpenSolaris/ZFS server

2009-09-29 Thread paul
e SC846 I got has a single backplane for the SAS/SATA drives, and one connector to the LSI card. Of course, for what I'm doing, that's fine. Paul Oh, I think the SC846 I got was about $1100. http://www.cdw.com/shop/search/results.aspx?key=sc846&searchscope=All&sr=1&a

Re: [zfs-discuss] Desire simple but complete copy - How?

2009-09-30 Thread paul
> they mean. > Have you ruled out using 'zfs send' / 'zfs receive' for some reason? And have you looked at rsync? I generally find rsync to be the easiest and most reliable tool for replicating directory structures. You may want to look at the GNU v
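For reference, a minimal sketch of both approaches mentioned above, using hypothetical pool, dataset, and path names (tank/home, backup/home):
# zfs snapshot tank/home@copy1
# zfs send tank/home@copy1 | zfs receive backup/home
# rsync -aH --delete /tank/home/ /backup/home/
The send/receive pair preserves ZFS properties and snapshots; rsync is filesystem-agnostic and easy to re-run incrementally.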

Re: [zfs-discuss] Would ZFS work for a high-bandwidth video SAN?

2009-09-30 Thread paul
> FWIW, most enclosures like the ones we have been discussing lately have an internal bay for a boot/OS drive--so you'll probably have all 12 hot-swap bays available for data drives. Paul ___ zfs-discuss mailing list zfs-discuss@opensolari

Re: [zfs-discuss] Help importing pool with "offline" disk

2009-09-30 Thread paul
c7t2d0 ONLINE > c7t4d0 ONLINE > c7t3d0 ONLINE > c7t0d0 OFFLINE > c7t7d0 ONLINE > c7t1d0 ONLINE > c7t6d0 ONLINE > -- > This message posted from opensolaris.org > > zpool online media c7t0d0 Paul ___

Re: [zfs-discuss] Help importing pool with "offline" disk

2009-09-30 Thread paul
the fact that the pool wasn't imported. My guess is that if you move /etc/zfs/zpool.cache out of the way, then reboot, ZFS will have to figure out what disks are out there again, find your disk, and realize it is online. Paul ___ zfs-
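A hedged sketch of that suggestion, assuming the cache file lives at /etc/zfs/zpool.cache and the pool is the "media" pool from earlier in the thread:
# mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak
# reboot
After the reboot, import the pool from a fresh device scan:
# zpool import media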

Re: [zfs-discuss] "Hot Space" vs. hot spares

2009-10-01 Thread paul
to help if I can. If nothing else, my wife makes a mean chocolate chip cookie! Think a batch of those would help? Paul Archer ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread paul
x, portability is not a primary goal at this stage but if you have portability patches they are welcome." Unfortunately, I'm trying for a Solaris solution. I already had a Linux solution (the 'inotify' I started out with). Paul ___ zfs

[zfs-discuss] ZFS GUI - where is it?

2009-11-19 Thread Paul
Hi there, my first post (yay). I have done much googling and everywhere I look I see people saying "just browse to https://localhost:6789 and it is there". Well, it's not. I am running 2009.06 (snv_111b), the current latest stable release I believe? This is my first major foray into the world o
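A quick way to check whether the web console behind that URL is present and running at all. This is a hedged sketch assuming the Solaris 10-style Java Web Console service name; the console (and the ZFS admin GUI) may simply not be packaged on 2009.06:
# svcs -a | grep webconsole
# svcadm enable svc:/system/webconsole:console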

[zfs-discuss] integrated failure recovery thoughts

2008-08-11 Thread paul
As most of the zfs recovery problems seem to stem from zfs’s own strict insistence that data be ideally consistent with its corresponding checksum, which of course is good when correspondingly consistent data may be recovered from somewhere, but catastrophic otherwise; it seems clear that zfs must s

Re: [zfs-discuss] integrated failure recovery thoughts (single-bit

2008-08-12 Thread paul
Although I don't know for sure that most such errors are in fact single-bit in nature, I can only surmise that statistically they most likely are, absent detection otherwise; as with the exception of error-corrected memory systems and/or check-summed communication channels, each transition of data betw

Re: [zfs-discuss] integrated failure recovery thoughts (single-bit

2008-08-13 Thread paul
Given that the checksum algorithms utilized in zfs are already fairly CPU intensive, I can't help but wonder, if it's verified that a majority of checksum inconsistency failures are single-bit, whether it may be advantageous to utilize some computationally simpler hybrid form of a checksum/ha

Re: [zfs-discuss] integrated failure recovery thoughts (single-bit

2008-08-13 Thread paul
Bob wrote: > ... Given the many hardware safeguards against single (and several) bit > errors, > the most common data error will be large. For example, the disk drive may > return data from the wrong sector. - actually data integrity check bits as may exist within memory systems and/or communi

Re: [zfs-discuss] integrated failure recovery thoughts (single-bit

2008-08-14 Thread paul
bob wrote: > On Wed, 13 Aug 2008, paul wrote: > >> Shy extremely noisy hardware and/or literal hard failure, most >> errors will most likely always be expressed as 1 bit out of some >> very large N number of bits. > > This claim ignores the fact that most compute

Re: [zfs-discuss] integrated failure recovery thoughts (single-bit

2008-08-14 Thread paul
Yes, Thank you. This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] integrated failure recovery thoughts

2008-08-14 Thread paul
I apologize for in effect suggesting that which was previously suggested in an earlier thread: http://mail.opensolaris.org/pipermail/zfs-discuss/2008-March/046234.html And discovering that the feature to attempt worst case single bit recovery had apparently already been present in some form in

Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread paul
Kyle wrote: > ... If I recall, the low priority was based on the perceived low demand > for the feature in enterprise organizations. As I understood it shrinking a > pool is perceived as being a feature most desired by home/hobby/development > users, and that enterprises mainly only grow their po

[zfs-discuss] zvol snapshot at size 100G

2008-11-12 Thread Paul
Hi, Can ZFS snapshot be performed at zvol size of 100GB ? I have no problem with the zvol snapshot at size of 1GB or 10GB. Thanks, Paul -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http

Re: [zfs-discuss] zvol snapshot at size 100G

2008-11-13 Thread Paul
I apologize for the lack of info regarding the previous post.
# zpool list
NAME         SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
gwvm_zpool  3.35T  3.16T   190G   94%  ONLINE  -
rpool        135G  27.5G   107G   20%  ONLINE  -
...
# zfs list
...
gwvm_zpool/gwpo19stby

Re: [zfs-discuss] zvol snapshot at size 100G

2008-11-13 Thread Paul
First, I would like to thank everyone for the responses. Second, here is the output for clarification:
# zfs list
...
NAME                    USED  AVAIL  REFER  MOUNTPOINT
gwvm_zpool/gwpo19stby   100G  2.49G    18K
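For what it's worth, a non-sparse zvol normally carries a reservation (or refreservation, on newer zfs versions) equal to its volsize, and taking a snapshot of such a zvol requires the pool to be able to guarantee roughly that much space again, which the 2.49G shown above cannot cover. A hedged way to confirm, using the dataset name from the listing:
# zfs get volsize,reservation,refreservation gwvm_zpool/gwpo19stby
# zpool list gwvm_zpool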

[zfs-discuss] solaris pivot-root (Re: ZFS Mountroot and Bootroot Comparison)

2007-10-13 Thread Paul
d are already there. I was trapped by this some time ago, some libs were on /usr :/ Now I'm fine with UFS root on SVM mirror and /var on ZFS RAID 0+1 (mountpoint=legacy). FYI I'm on SPARC. Cheers, Paul This message posted from opensolaris.org

Re: [zfs-discuss] Due to 128KB limit in ZFS it can't saturate disks

2007-10-24 Thread Paul
e="ssd" parent="scsi_vhci" sd_max_xfer_size=0x80; (I have FC drives) Where can I teach myself about the disadvantages? I searched for an article or paper about "Why 128k blocksize is enough" written by the ZFS designer, but could not find it... Thx in adv

[zfs-discuss] Different Sized Disks Recommendation

2007-10-29 Thread Paul
Hi, I was first attracted to ZFS (and therefore OpenSolaris) because I thought that ZFS allowed the use of different-sized disks in raidz pools without wasted disk space. Further research has confirmed that this isn't possible--by default. I have seen a little bit of documentation around using

Re: [zfs-discuss] zfs boot on Sparc?

2007-12-21 Thread Paul
ut beeing limited to a non-striped mirror (i.e. vdev mirror a b mirror c d mirror e f)? Merry Xmas and Happy New Year, Paul This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/

[zfs-discuss] RFE: File revisions on ZFS

2008-01-15 Thread Paul
the FS compression
rev:max_revisions=integer|"none" (default)|"unlimited"
rev:min_revisions=integer|"none" (default)
rev:min_free=integer[specifier]   # spec can be the usual b,k,M,G,T, % or "none"
and for convenience a few file attributes alike:
rev:max

Re: [zfs-discuss] ZFS shared /home between zones

2008-01-20 Thread Paul
a loopback mount, not a dataset, does what you want. In zonecfg, do:
> add fs
> set special=/export/home
> set dir=/home
> set type=lofs
> add options rw,nodevices,noexec,nosetuid
> end
> verify
# man zonecfg
Make sure the local zones have the same userids as the global zone, best would be to use

[zfs-discuss] zfs send/receive locks system threads (Bug?)

2008-06-26 Thread Paul
([EMAIL PROTECTED],min-10,min-20,...} and every hour ([EMAIL PROTECTED],hourly-01,...}, delete these snapshots prior to the send/receive operation. Thanks in advance, Paul This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss

[zfs-discuss] zpool reporting corrupt metadata

2010-03-11 Thread Paul Tetley
re in two AIC JBODs connected via SAS. - HBA is an LSI 3801E - Server is 1RU SuperMicro Intel. Any advice appreciated! :-) Paul Tetley NearMap Pty Ltd ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] zpool reporting corrupt metadata

2010-03-14 Thread Paul Tetley
would advise giving up on my zpool on what was apparently a transient error. Regards, Paul Tetley On Fri, Mar 12, 2010 at 4:12 PM, Richard Elling wrote: > On Mar 11, 2010, at 11:28 PM, Paul Tetley wrote: > > Hi, > > My zpool is reporting unrecoverable errors with the metadat

[zfs-discuss] lazy zfs destroy

2010-03-17 Thread Chris Paul
OK I have a very large zfs snapshot I want to destroy. When I do this, the system nearly freezes during the zfs destroy. This is a Sun Fire X4600 with 128GB of memory. Now this may be more of a function of the IO device, but let's say I don't care that this zfs destroy finishes quickly. I actual

[zfs-discuss] snapshots taking too much space

2010-04-12 Thread Paul Archer
USED  AVAIL  REFER  MOUNTPOINT
bpool/backups/oracle_bac...@20100411-023130   479G  -  681G  -
bpool/backups/oracle_bac...@20100411-104428   515G  -  721G  -
bpool/backups/oracle_bac...@20100412-144700      0  -  734G  -
Thanks for any help, Paul _
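A quick way to see where that space is actually being charged, assuming a reasonably recent zfs (the -o space shortcut may not exist on older releases):
# zfs list -o space -r bpool/backups
# zfs list -t snapshot -r -o name,used,referenced bpool/backups
The USED column of a snapshot only counts blocks unique to that snapshot, so space shared between snapshots shows up in the dataset's usedbysnapshots total rather than on any single one.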

Re: [zfs-discuss] snapshots taking too much space

2010-04-13 Thread Paul Archer
Yesterday, Arne Jansen wrote: Paul Archer wrote: Because it's easier to change what I'm doing than what my DBA does, I decided that I would put rsync back in place, but locally. So I changed things so that the backups go to a staging FS, and then are rsync'ed over to another

[zfs-discuss] dedup causing problems with NFS?(was Re: snapshots taking too much space)

2010-04-14 Thread Paul Archer
I haven't turned dedup off again yet, because I'd like to figure out how to get past this problem. Can anyone give me an idea of why the mounts might be hanging, or where to look for clues? And has anyone had this problem with dedup and NFS before? FWIW, the clients are a mix of Solar

[zfs-discuss] dedup screwing up snapshot deletion

2010-04-14 Thread Paul Archer
this point, but I'd have to destroy the snapshot first, so I'm in the same boat, yes? TIA, Paul ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] dedup screwing up snapshot deletion

2010-04-14 Thread Paul Archer
n try adding more ram to the system. -- Thanks for the info. Unfortunately, I'm not sure I'll be able to add more RAM any time soon. But I'm certainly going to try, as this is the primary backup server for our Oracle databases. Thanks again, Paul PS It's got 8GB right now. Y
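For sizing RAM against dedup, a hedged way to see how large the dedup table has grown (pool name assumed to be the bpool from the earlier listing; each in-core DDT entry costs on the order of a few hundred bytes):
# zdb -DD bpool
Multiplying the total number of DDT entries reported by a few hundred bytes gives a rough lower bound on the RAM or L2ARC needed to keep the table resident.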

Re: [zfs-discuss] dedup causing problems with NFS?(was Re: snapshots taking too much space)

2010-04-15 Thread Paul Archer
3:08pm, Daniel Carosone wrote: On Wed, Apr 14, 2010 at 08:48:42AM -0500, Paul Archer wrote: So I turned deduplication on on my staging FS (the one that gets mounted on the database servers) yesterday, and since then I've been seeing the mount hang for short periods of time off and on

Re: [zfs-discuss] dedup causing problems with NFS?(was Re: snapshots taking too much space)

2010-04-15 Thread Paul Archer
Yesterday, Erik Trimble wrote: Daniel Carosone wrote: On Wed, Apr 14, 2010 at 08:48:42AM -0500, Paul Archer wrote: So I turned deduplication on on my staging FS (the one that gets mounted on the database servers) yesterday, and since then I've been seeing the mount hang for short perio

Re: [zfs-discuss] dedup screwing up snapshot deletion

2010-04-15 Thread Paul Archer
3:26pm, Daniel Carosone wrote: On Wed, Apr 14, 2010 at 09:04:50PM -0500, Paul Archer wrote: I realize that I did things in the wrong order. I should have removed the oldest snapshot first, on to the newest, and then removed the data in the FS itself. For the problem in question, this is

Re: [zfs-discuss] rpool on ssd. endurance question.

2010-04-26 Thread Paul Gress
On 04/26/10 11:54 PM, Yuri Vorobyev wrote: Hello. If anybody uses SSD for rpool more than half-year, can you post SMART information about HostWrites attribute? I want to see how SSD wear for system disk purposes. I'd be happy to, exactly what commands shall I run?
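A hedged sketch of the commands that usually produce the numbers being asked for here; smartctl comes from the smartmontools package rather than the base OS, and the device path below is hypothetical:
# iostat -En
# smartctl -a -d sat /dev/rdsk/c5t0d0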

Re: [zfs-discuss] rpool on ssd. endurance question.

2010-04-27 Thread Paul Gress
Revision: 1.01 Serial No: Size: 0.00GB <0 bytes> Media Error: 0 Device Not Ready: 10 No Device: 0 Recoverable: 0 Illegal Request: 0 Predictive Failure Analysis: 0 bash-4.0$ If you can come up with a way I can get you more info, post a response. Paul __

Re: [zfs-discuss] rpool on ssd. endurance question.

2010-04-27 Thread Paul Gress
000   000   000   Old_age   Always   -   0
# Is all this data what you're looking for? Paul ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] Is dedupe ready for prime time?

2010-05-18 Thread Paul Choi
dozen VMs suddenly losing their datastore? I'd love to hear from your experience. Thanks, -Paul Choi ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Is dedupe ready for prime time?

2010-05-18 Thread Paul Choi
Roy, Thanks for the info. Yeah, the bug you mentioned is pretty critical. In terms of SSDs, I have Intel X25-M for L2ARC and X25-E for ZIL. And the host has 24G RAM. I'm just waiting for that "2010.03" release or whatever we want to call it when it's released... -Paul

[zfs-discuss] Performance Testing

2010-08-11 Thread Paul Kraus
example). -- {1-2-3-4-5-6-7-----} Paul Kraus -> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ ) -> Sound Coordinator, Schenectady Light Opera Company ( http://www.sloctheater.org/ ) -> Technical Adv

[zfs-discuss] ZFS and VMware

2010-08-11 Thread Paul Kraus
looking for general recommendations and experiences. Thanks. -- {1-2-3-4-5-6-7-} Paul Kraus -> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ ) -> Sound Coordinator, Schenectady Light Opera Company (

Re: [zfs-discuss] Problems with big ZFS send/receive in b134

2010-08-11 Thread Paul Kraus
hould be much faster) ? The first full might run afoul of the 2 hour snapshots (and deletions), but I would not expect the incremental to. I am syncing about 20 TB of data between sites this way every 4 hours over a 100 Mb link. I put the snapshot management and the site to site replication in the
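A minimal sketch of that kind of scheduled replication, with hypothetical pool, dataset, snapshot, and host names:
# zfs snapshot -r tank/data@2010-08-11-1200
# zfs send -R -I tank/data@2010-08-11-0800 tank/data@2010-08-11-1200 | ssh backuphost zfs receive -duF backuppool
The -R/-I pair sends everything between the two recursive snapshots; -F on the receive side rolls the target back to the last common snapshot first, and -u keeps the received datasets unmounted.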

Re: [zfs-discuss] ZFS and VMware

2010-08-12 Thread Paul Kraus
iSCSI and are looking to learn from other's experience as well as our own. For example, is anyone using NFS with Oracle Cluster for HA storage for VMs or are sites trusting to a single NFS server ? -- {1-2-3-4-5-6-7---

Re: [zfs-discuss] extremely slow writes (with good reads)

2009-09-27 Thread Paul Archer
Something to do with the fact that this is a very old SATA card (LSI 150-6)? This is driving me crazy. I finally got my zpool working under Solaris so I'd have some stability, and I've got no performance. Paul Archer Friday, Paul Archer wrote: Since I got my zfs pool working under

Re: [zfs-discuss] extremely slow writes (with good reads)

2009-09-27 Thread Paul Archer
.0  0.3  0.3  3.3  3.1   9  14  c11d0
0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0   0   0  c12t0d0
Paul Archer
___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] extremely slow writes (with good reads)

2009-09-27 Thread Paul Archer
0d0 Try using 'format -e' on the drives, go into 'cache' then 'write-cache' and display the current state. You can try to manually enable it from there. I tried this, but the 'cache' menu item didn't show up.

Re: [zfs-discuss] extremely slow writes (with good reads)

2009-09-27 Thread Paul Archer
d that I hadn't used before because it's PCI-X, and won't fit on my current motherboard.) I'll report back what I get with it tomorrow or the next day, depending on the timing on the resilver. Paul Archer ___ zfs-discuss maili

Re: [zfs-discuss] extremely slow writes (with good reads)

2009-09-28 Thread Paul Archer
Yesterday, Paul Archer wrote: I estimate another 10-15 hours before this disk is finished resilvering and the zpool is OK again. At that time, I'm going to switch some hardware out (I've got a newer and higher-end LSI card that I hadn't used before because it's PCI-X,

Re: [zfs-discuss] extremely slow writes (with good reads)

2009-09-28 Thread Paul Archer
8:30am, Paul Archer wrote: And the hits just keep coming... The resilver finished last night, so rebooted the box as I had just upgraded to the latest Dev build. Not only did the upgrade fail (love that instant rollback!), but now the zpool won't come online: r...@shebop:~# zpool i

Re: [zfs-discuss] extremely slow writes (with good reads)

2009-09-28 Thread Paul Archer
ors
* 2930277101 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                          First       Sector      Last
* Partition  Tag  Flags    Sector      Count       Sector      Mount Directory
       0     1700          34          2930277101  2930277134
Thanks for the help!
Paul Archer
___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] extremely slow writes (with good reads)

2009-09-28 Thread Paul Archer
In light of all the trouble I've been having with this zpool, I bought a 2TB drive, and I'm going to move all my data over to it, then destroy the pool and start over. Before I do that, what is the best way on an x86 system to format/label the disks? Tha
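The usual answer, hedged and with hypothetical device names: give zpool whole disks and let it write EFI labels itself, rather than labeling slices by hand; format -e is only needed if you want to label or relabel manually.
# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0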

Re: [zfs-discuss] extremely slow writes (with good reads)

2009-09-28 Thread Paul Archer
Cool. FWIW, there appears to be an issue with the LSI 150-6 card I was using. I grabbed an old server m/b from work, and put a newer PCI-X LSI card in it, and I'm getting write speeds of about 60-70MB/sec, which is about 40x the write speed I was seeing with the old card. Paul Tom

Re: [zfs-discuss] extremely slow writes (with good reads)

2009-09-28 Thread Paul Archer
11:04pm, Paul Archer wrote: Cool. FWIW, there appears to be an issue with the LSI 150-6 card I was using. I grabbed an old server m/b from work, and put a newer PCI-X LSI card in it, and I'm getting write speeds of about 60-70MB/sec, which is about 40x the write speed I was seeing wit

Re: [zfs-discuss] Comments on home OpenSolaris/ZFS server

2009-09-29 Thread Paul Archer
a fair bit of noise--but I think if you had it in a closet with some soundproofing, it wouldn't be bad. And if you went with a smaller enclosure (12 drives, for instance) that would help. Paul ___ zfs-discuss mailing list zfs-dis

Re: [zfs-discuss] Comments on home OpenSolaris/ZFS server

2009-09-29 Thread Paul Archer
connector to the LSI card. Of course, for what I'm doing, that's fine. Paul Oh, I think the SC846 I got was about $1100. http://www.cdw.com/shop/search/results.aspx?key=sc846&searchscope=All&sr=1&Find+it.x=0&Find+it.y=0 One thing I forgot to mention: there is a wart w

[zfs-discuss] dedup video

2009-10-13 Thread Paul Archer
Someone posted this link: https://slx.sun.com/1179275620 for a video on ZFS deduplication. But the site isn't responding (which is typical of Sun, since I've been dealing with them for the last 12 years). Does anyone know of a mirror site, or if the video is on YouT

Re: [zfs-discuss] How to resize ZFS partion or add a new one?

2009-10-14 Thread Paul Gress
would like to merge 2# and 3# to get more disk space in OpenSolaris. Is it possible to eliminate the NTFS partition and add it to the ZFS partition? Thanks in advance and regards, Julio Why don't you just format partition 2 to zfs, then add it to pool Solaris2 or rpool, whatever it'

[zfs-discuss] zfs inotify?

2009-10-25 Thread Paul Archer
/data/images/incoming, and a /data/images/incoming/100canon directory gets created, then the files under that directory will automatically be monitored as well. Thanks, Paul Archer ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

Re: [zfs-discuss] zfs inotify?

2009-10-25 Thread Paul Archer
s out to be the best way to go). I was hoping that there'd be a script out there already, but I haven't turned up anything yet. Paul ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] zpool import single user mode incompatible version

2009-10-27 Thread Paul Lyons
e the "miniroot" from the install media is only version 10. This is not good. Any advice? I am already thinking about installing U7 on my test box to demonstrate. Glad I haven't rolled out u8 into production. Thanks, Paul -- This message posted from opensolaris.org ___

Re: [zfs-discuss] zpool import single user mode incompatible version

2009-10-31 Thread Paul Lyons
;> >>> >>> >>> On Tue, Oct 27, 2009 at 4:25 PM, Paul Lyons >> paulrly...@gmail.com>> wrote: >>> >>>When I boot off Solaris 10 U8 I get the error that pool is >>>formatted using an incompatible version. >>> >>&g

[zfs-discuss] ZFS Random Read Performance

2009-11-24 Thread Paul Kraus
tself ? Is there another benchmark I should be using ? P.S. I posted an OpenOffice.org spreadsheet of my test results here: http://www.ilk.org/~ppk/Geek/throughput-summary.ods -- {1-2-3-4-5-6-7-----} Paul Kraus -> Senior Systems A

Re: [zfs-discuss] ZFS Random Read Performance

2009-11-24 Thread Paul Kraus
rall. A big SAMBA file server. -- {1-2-3-4-5-6-7-} Paul Kraus -> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ ) -> Sound Coordinator, Schenectady Light Opera Company ( http://www.sloctheater.org/ ) -&g

Re: [zfs-discuss] ZFS Random Read Performance

2009-11-25 Thread Paul Kraus
Richard, First, thank you for the detailed reply ... (comments in line below) On Tue, Nov 24, 2009 at 6:31 PM, Richard Elling wrote: > more below... > > On Nov 24, 2009, at 9:29 AM, Paul Kraus wrote: > >> On Tue, Nov 24, 2009 at 11:03 AM, Richard Elling >> wro

Re: [zfs-discuss] ZFS Random Read Performance

2009-11-25 Thread Paul Kraus
-5-6-7-} Paul Kraus -> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ ) -> Sound Coordinator, Schenectady Light Opera Company ( http://www.sloctheater.org/ ) -> Technical Advisor, Lunacon 2010 (http://www.lunacon.org/) ->

[zfs-discuss] ZFS - how to determine which physical drive to replace

2009-12-12 Thread Paul Bruce
Hi, I'm just about to build a ZFS system as a home file server in raidz, but I have one question - pre-empting the need to replace one of the drives if it ever fails. How on earth do you determine the actual physical drive that has failed ? I've got the whole zpool status thing worked out, but h
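One common low-tech approach (device name hypothetical): pull the serial numbers for each cXtYdZ device and match them against the labels on the drives, or run read traffic against the disks that are still healthy and watch which activity LED stays dark.
# iostat -En
# dd if=/dev/rdsk/c1t3d0p0 of=/dev/null bs=1024k count=10000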

[zfs-discuss] Recovering ZFS stops after syseventconfd can't fork

2009-12-22 Thread Paul Armstrong
c1t50060E8010037135d41  ONLINE
c1t50060E8010037135d45  ONLINE
c1t50060E8010037135d49  ONLINE
c1t50060E8010037135d53  ONLINE
c1t50060E8010037135d57  ONLINE
Thanks, Paul
-- This message posted from opensolaris.org __

Re: [zfs-discuss] Recovering ZFS stops after syseventconfd can't fork

2009-12-22 Thread Paul Armstrong
bash-4.0# ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 10
stack size              (kbytes, -s) 10240
cpu time

Re: [zfs-discuss] Recovering ZFS stops after syseventconfd can't fork

2009-12-22 Thread Paul Armstrong
I'm surprised at the number as well. Running it again, I'm seeing it jump fairly high just before the fork errors: bash-4.0# ps -ef | grep zfsdle | wc -l 20930 (the next run of ps failed due to the fork error). So maybe it is running out of processes. ZFS file data from ::memstat just went do

Re: [zfs-discuss] Recovering ZFS stops after syseventconfd can't fork

2009-12-28 Thread Paul Armstrong
he disks: LABEL 3 failed to unpack label 3 Thanks, Paul -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.

Re: [zfs-discuss] best way to configure raidz groups

2009-12-31 Thread Paul Armstrong
Rather than hacking something like that, he could use a Disk on Module (http://en.wikipedia.org/wiki/Disk_on_module) or something like http://www.tomshardware.com/news/nanoSSD-Drive-Elecom-Japan-SATA,8538.html (which I suspect may be a DOM but I've not poked around sufficiently to see).

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Paul Gress
(because they eat 80% of disk space) it seems to be quite challenging. I've been following this thread. Would it be faster to do the reverse: copy the 20% of the disk, then format, then move the 20% back? Paul ___ zfs-discuss mailing list zfs-di

Re: [zfs-discuss] Drive Identification

2010-01-24 Thread Paul Gress
On 01/24/10 04:10 AM, Lutz Schumann wrote: Is there a way (besides format and causing heavy I/O on the device in question) how to identify a drive. Is there some kind of SES (enclosure service) for this ?? (e.g. "and now let the red led blink") Try /usr/bin/iostat -En ___

Re: [zfs-discuss] S10 version question

2011-09-29 Thread Paul Kraus
. I have been told by Oracle Support (but have not yet confirmed) that just running the latest zfs code (Solaris 10U10) will disable the aclmode property, even if you do not upgrade the zpool version beyond 22. I expect to test this next week, as we _need_ ACLs to work for our data. -- {---
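A hedged way to check what you actually have before and after patching (pool and filesystem names hypothetical):
# zpool get version tank
# zfs get version,aclmode,aclinherit tank/data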

Re: [zfs-discuss] S10 version question

2011-10-05 Thread Paul Kraus
On Wed, Oct 5, 2011 at 5:56 PM, Paul B. Henson wrote: > On Thu, Sep 29, 2011 at 07:13:40PM -0700, Paul Kraus wrote: > >> Another potential difference ... I have been told by Oracle Support >> (but have not yet confirmed) that just running the latest zfs code >> (Solaris

Re: [zfs-discuss] ZFS issue on read performance

2011-10-11 Thread Paul Kraus
c3t5000C5001A55F7A6d0  ONLINE  0  0  0  114K repaired
c3t5000C5001A5347FEd0  ONLINE  0  0  0
spares
  c3t5000C5001A485C88d0  AVAIL
  c3t5000C50026A0EC78d0  AVAIL
errors: No known data errors
-- {----1-2-3-4--

Re: [zfs-discuss] about btrfs and zfs

2011-10-17 Thread Paul Kraus
-6-7-} Paul Kraus -> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ ) -> Sound Coordinator, Schenectady Light Opera Company ( http://www.sloctheater.org/ ) -> Technical Advisor, RPI Players ___ zfs-discuss

Re: [zfs-discuss] about btrfs and zfs

2011-10-18 Thread Paul Kraus
not a substitute for a real online rebalance, but it gets the job done (if you can take the data offline, I do it a small chunk at a time). -- {1-2-3-4-5-6-7-} Paul Kraus -> Senior Systems Architect, Garnet River ( http://www.garn

[zfs-discuss] FS Reliability WAS: about btrfs and zfs

2011-10-18 Thread Paul Kraus
I have seen too many horror stories on this list that I just avoid it). -- {1-2-3-4-----5-6-7-} Paul Kraus -> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ ) -> Sound Coordinator, Schenectady Light Opera Com

Re: [zfs-discuss] FS Reliability WAS: about btrfs and zfs

2011-10-18 Thread Paul Kraus
ss. So far, ZFS is one of the technologies that has not let me down. Of course, in some cases it has taken weeks if not months to resolve or work around a "bug" in the code, but in all cases the data was recovered. -- {1-2-3-4-----5-6-7

Re: [zfs-discuss] about btrfs and zfs

2011-10-18 Thread Paul Kraus
operation rewrote the data that had been corrupted on the failing component. No corrupt data was ever presented to the application. -- {1-2-3-4-5-6-7-} Paul Kraus -> Senior Systems Architect, Garnet River ( http://www.garnetrive

Re: [zfs-discuss] about btrfs and zfs

2011-10-19 Thread Paul Kraus
@opensolaris.org > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss > > -- {1-2-3-4-5-6-7-} Paul Kraus -> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )

Re: [zfs-discuss] FS Reliability WAS: about btrfs and zfs

2011-10-21 Thread Paul Kraus
t; > > Can you elaborate #3? In what situation will it happen? > > > Thanks. > > Fred > -- {1-2-3-4-5-6-7-} Paul Kraus -> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ ) ->

Re: [zfs-discuss] File contents changed with no ZFS error

2011-10-24 Thread Paul Kraus
e as it does not try to change the data). This was originally reported to me as a problem with ZFS, SAMBA, or the ACLs I had set up. It is amazing how much _changing_ of data goes on with no knowledge by the end users. -- {1-2-3-4-----5-6-7

Re: [zfs-discuss] FS Reliability WAS: about btrfs and zfs

2011-10-24 Thread Paul Kraus
On Sat, Oct 22, 2011 at 12:36 AM, Paul Kraus wrote: > Recently someone posted to this list of that _exact_ situation, they loaded > an OS to a pair of drives while a pair of different drives containing an OS > were still attached. The zpool on the first pair ended up not being abl

Re: [zfs-discuss] FS Reliability WAS: about btrfs and zfs

2011-10-25 Thread Paul Kraus
--5-6-7-} Paul Kraus -> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ ) -> Sound Coordinator, Schenectady Light Opera Company ( http://www.sloctheater.org/ ) -> Technical Advisor, RPI Players ___ zf

Re: [zfs-discuss] ZFS in front of MD3000i

2011-10-25 Thread Paul Kraus
ort use only according to the documentation), so I created RAID0 sets of 2 drives each and ZFS sees 6 x 1TB LUNs. ZFS then provides my redundancy and data integrity. -- {1-2-3-4-5-----6-7-} Paul Kraus -> Senior Systems Architect, Garnet Ri

Re: [zfs-discuss] zfs destroy snapshot runs out of memory bug

2011-10-31 Thread Paul Kraus
I had not yet posted a summary as we are still working through the overall problem (we tripped over this on the replica, now we are working on it on the production copy). -- {1-2-3-4-5-6-7-} Paul Kraus -> Senior Systems Archite

Re: [zfs-discuss] zfs destroy snapshot runs out of memory bug

2011-10-31 Thread Paul Kraus
On Mon, Oct 31, 2011 at 9:07 AM, Jim Klimov wrote: > 2011-10-31 16:28, Paul Kraus wrote: >> Oracle has provided a loaner system with 128 GB RAM and it took 75 GB of >> RAM >> to destroy the problem snapshot). I had not yet posted a summary as we >> are still working

Re: [zfs-discuss] Poor relative performance of SAS over SATA drives

2011-10-31 Thread Paul Kraus
test server, so any ideas to try and help me understand greatly > appreciated. What do real benchmarks (iozone, filebench, orion) show ? -- {1-2-3-4-5-6-7-} Paul Kraus -> Senior Systems Archi

Re: [zfs-discuss] (Incremental) ZFS SEND at sub-snapshot level

2011-10-31 Thread Paul Kraus
---5-----6-7-} Paul Kraus -> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ ) -> Sound Coordinator, Schenectady Light Opera Company ( http://www.sloctheater.org/ ) -> Technical Advisor, RPI Players ___

Re: [zfs-discuss] (OT) forums and email

2011-11-02 Thread Paul Kraus
to (in fact, in the early days of Google Mail I did just that as a backup). -- {1-2-3-4-5-6-7-} Paul Kraus -> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ ) -> Sound Coordinator, Schenectady Light Opera Co

Re: [zfs-discuss] Remove corrupt files from snapshot

2011-11-03 Thread Paul Kraus
apdir=hidden " to set the parameter. -- {1-2-3-4-5-6-7-} Paul Kraus -> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ ) -> Sound Coordinator, Schenectady Light Opera Company ( http://www.slocth

Re: [zfs-discuss] zpool scrub bad block list

2011-11-08 Thread Paul Kraus
uch above 0 or is growing. Keep in mind that any type of hardware RAID should report back 0 for both to the OS. -- {1-2-3-4-5-6-7-} Paul Kraus -> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ ) ->
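The counters being referred to (presumably the per-device soft/hard error counts and the per-vdev read/write/checksum columns) can be pulled with the standard tools; nothing beyond the stock options is assumed here:
# iostat -En
# zpool status -v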

Re: [zfs-discuss] about btrfs and zfs

2011-11-11 Thread Paul Kraus
On Fri, Nov 11, 2011 at 1:39 PM, Linder, Doug wrote: > Paul Kraus wrote: > >>> My main reasons for using zfs are pretty basic compared to some here >> >> What are they ? (the reasons for using ZFS) > > All technical reasons aside, I can tell you one huge reason I

Re: [zfs-discuss] about btrfs and zfs

2011-11-14 Thread Paul Kraus
t of address bits?  Or is it something that offers functionality > that other filesystems don't have?     ;-) The stories I have heard indicate that the name came after the TLA. "zfs" came first and "zettabyte" later. -- {1-2-3-4---
