[zfs-discuss] Couple Questions about replacing a drive in a zpool

2010-03-08 Thread Jonathan
First a little background: I'm running b130, and I have a zpool with two raidz1 vdevs (each 4 drives, all WD RE4-GPs). They're in a Norco-4220 case ("home" server), which just consists of SAS backplanes (AOC-USAS-L8i -> 8087 -> backplane -> SATA drives). A couple of the drives are showing a

Re: [zfs-discuss] Couple Questions about replacing a drive in a zpool

2010-03-08 Thread Jonathan
> First a little background: I'm running b130, and I have a zpool with two raidz1 vdevs (each 4 drives, all WD RE4-GPs). They're in a Norco-4220 case ("home" server), which just consists of SAS backplanes (AOC-USAS-L8i -> 8087 -> backplane -> SATA drives). A couple of the drives are sh

[zfs-discuss] Replaced drive in zpool, was fine, now degraded - ohno

2010-04-14 Thread Jonathan
I just started replacing drives in this zpool (to increase storage). I pulled the first drive, and replaced it with a new drive and all was well. It resilvered with 0 errors. This was 5 days ago. Just today I was looking around and noticed that my pool was degraded (I see now that this occurred

Re: [zfs-discuss] Replaced drive in zpool, was fine, now degraded - ohno

2010-04-14 Thread Jonathan
I just ran 'iostat -En'. This is what was reported for the drive in question (all other drives showed 0 errors across the board). All drives indicated the "illegal request... predictive failure analysis" message -- c7t1d0

Re: [zfs-discuss] Replaced drive in zpool, was fine, now degraded - ohno

2010-04-14 Thread Jonathan
Yeah -- $ smartctl -d sat,12 -i /dev/rdsk/c5t0d0 smartctl 5.39.1 2010-01-28 r3054 [i386-pc-solaris2.11] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net Smartctl: Device Read Identity Failed (not an ATA/ATAPI device)

Re: [zfs-discuss] Replaced drive in zpool, was fine, now degraded - ohno

2010-04-14 Thread Jonathan
> Do worry about media errors. Though this is the most common HDD error, it is also the cause of data loss. Fortunately, ZFS detected this and repaired it for you. Right, I assume you do recommend swapping the faulted drive out though? > Other file systems may not be so gracious.

[zfs-discuss] Migrating ZFS/data pool to new pool on the same system

2010-05-04 Thread Jonathan
Can anyone confirm my action plan is the proper way to do this? The reason I'm doing this is I want to create 2xraidz2 pools instead of expanding my current 2xraidz1 pool. So I'll create a 1xraidz2 vdev, migrate my current 2xraidz1 pool over, destroy that pool and then add it as a 1xraidz2 vde
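The plan described above maps onto a command sequence roughly like this (pool, snapshot, and device names are hypothetical; verify the copy before destroying the source pool):

```shell
# 1. Create the new pool as a single raidz2 vdev on the spare disks.
zpool create tank2 raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0
# 2. Snapshot the old pool recursively and replicate everything.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -Fdu tank2
# 3. Only after checking the data on tank2: destroy the old pool and
#    add its freed disks back as a second raidz2 vdev.
zpool destroy tank
zpool add tank2 raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0
```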

Re: [zfs-discuss] zfs resilvering

2008-09-26 Thread jonathan
asis in reality until it's about 1% done or so. I think there is some bookkeeping or something ZFS does at the start of a scrub or resilver that throws off the time estimate for a while. That's just my experience with it, but it's been like that pretty consistently for me. Jonathan

Re: [zfs-discuss] Drive Checksum error

2008-12-16 Thread Jonathan
u start seeing hundreds of errors be sure to check things like the cable. I had a SATA cable come loose on a home ZFS fileserver and scrub was throwing 100's of errors even though the drive itself was fine, I don't want to think about what could have happened with UFS... H

Re: [zfs-discuss] Can I create ZPOOL with missing disks?

2009-01-15 Thread Jonathan
's easier just to spend the money on enough hardware to do it properly without the chance of data loss and the extended down time. "Doesn't invest the time in" may be a better phrase than "avoids" though. I doubt Sun actually goes out of their way to make things harder for people. Hope that helps, Jonathan ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Can this be done?

2009-03-28 Thread Jonathan
Michael Shadle wrote: > On Sat, Mar 28, 2009 at 1:37 AM, Peter Tribble wrote: > >> zpool add tank raidz1 disk_1 disk_2 disk_3 ... >> >> (The syntax is just like creating a pool, only with add instead of create.) > > so I can add individual disks to the existing tank zpool anytime i want? Using th
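The add-a-vdev syntax quoted above, with hypothetical pool and device names:

```shell
# Stripes a second raidz1 top-level vdev into the existing pool "tank".
# Note: once added, a top-level vdev cannot be removed from the pool.
zpool add tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0
zpool status tank   # shows both raidz1 vdevs; new writes spread across them
```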

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-09 Thread Jonathan
blocks will be allocated for the new files. that's because rsync will write an entirely new file and rename it over the old one. ZFS will allocate new blocks either way; check here http://all-unix.blogspot.com/2007/03/zfs-cow-and-relate-features.html for more information about how

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-09 Thread Jonathan
Daniel Rock wrote: > Jonathan schrieb: >> OpenSolaris Forums wrote: >>> if you have a snapshot of your files and rsync the same files again, >>> you need to use "--inplace" rsync option , otherwise completely new >>> blocks will be allocated for the

[zfs-discuss] different/high atch/s, pflt/s, vflt/s on two systems

2007-09-27 Thread jonathan
. Thoughts on what I should be looking at? much thanks, Jonathan.

[zfs-discuss] modified mdb and zdb

2010-07-28 Thread Jonathan Cifuentes
Hi, I would really appreciate it if any of you could help me get the modified mdb and zdb (in any version of OpenSolaris) for digital forensics research purposes. Thank you. Jonathan Cifuentes

[zfs-discuss] Using multiple logs on single SSD devices

2010-08-02 Thread Jonathan Loran
ill the GUID for each pool get found by the system from the partitioned log drives? Please give me your sage advice. Really appreciate it. Jon (Jonathan Loran)

Re: [zfs-discuss] Using multiple logs on single SSD devices

2010-08-03 Thread Jonathan Loran
On Aug 2, 2010, at 8:18 PM, Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> boun...@opensolaris.org] On Behalf Of Jonathan Loran >> > Because you're at pool v15, it does not matter if the log device fails while > you&

Re: [zfs-discuss] zfsdump

2009-11-04 Thread Jonathan Adams
The real problem for us is down to the fact that with ufsdump and ufsrestore they handled tape spanning and zfs send does not. we looked into having a wrapper to "zfs send" to a file and running gtar (which does support tape spanning), or cpio ... then we looked at the amount we started storing

Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-25 Thread Jonathan Borden
/work with the LSI-SAS expander in the supermicro chassis. Using an 1068e based HBA works fine and works well with osol. Jonathan -- This message posted from opensolaris.org

[zfs-discuss] ZFS receive checksum mismatch

2011-06-09 Thread Jonathan Walker
Hey all, New to ZFS, I made a critical error when migrating data and configuring zpools according to needs - I stored a snapshot stream to a file using "zfs send -R [filesystem]@[snapshot] >[stream_file]". When I attempted to receive the stream onto to the newly configured pool, I ended up with a

[zfs-discuss] ZFS receive checksum mismatch

2011-06-09 Thread Jonathan Walker
>> New to ZFS, I made a critical error when migrating data and configuring zpools according to needs - I stored a snapshot stream to a file using "zfs send -R [filesystem]@[snapshot] >[stream_file]". > Why is this a critical error? I thought you were supposed to be able to save the outp
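Since a stored stream file is all-or-nothing at receive time, one hedged safeguard is to record a checksum when the stream is written and verify it before the source pool goes away. The zfs commands below are commented out because they need a real pool; the checksum step itself is plain coreutils, and all paths are hypothetical:

```shell
# zfs send -R tank@migrate > /tmp/tank.zfs      # needs a real pool
echo "pretend-stream-data" > /tmp/tank.zfs      # stand-in for the stream file
sha256sum /tmp/tank.zfs > /tmp/tank.zfs.sha256  # record checksum at creation time
# later, verify the file before relying on it:
sha256sum -c /tmp/tank.zfs.sha256 && echo STREAM-OK
# zfs receive -F newtank < /tmp/tank.zfs        # only after verification
```

(On Solaris-era systems, `digest -a sha256` plays the role of sha256sum.)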

[zfs-discuss] Raidz2 slow read speed (under 5MB/s)

2011-07-21 Thread Jonathan Chang
Hello all, I'm building a file server (or just a storage that I intend to access by Workgroup from primarily Windows machines) using zfs raidz2 and openindiana 148. I will be using this to stream blu-ray movies and other media, so I will be happy if I get just 20MB/s reads, which seems like a pr

Re: [zfs-discuss] Raidz2 slow read speed (under 5MB/s)

2011-07-21 Thread Jonathan Chang
Do you mean that OI148 might have a bug that Solaris 11 Express might solve? I will download the Solaris 11 Express LiveUSB and give it a shot.

Re: [zfs-discuss] Raidz2 slow read speed (under 5MB/s)

2011-07-22 Thread Jonathan Chang
Nevermind this, I destroyed the raid volume, then checked each hard drive one by one, and when I put it back together, the problem fixed itself. I'm now getting 30-60MB/s read and write, which is still slow as heck, but works well for my application.

Re: [zfs-discuss] Supermicro AOC-SAT2-MV8 hang when drive removed

2008-07-29 Thread Jonathan Loran
it be possible to have a number of possible places to store this > log? What I'm thinking is that if the system drive is unavailable, > ZFS could try each pool in turn and attempt to store the log there. > > In fact e-mail alerts or external error logging would be a great > addition to ZFS. Surely it makes sense that filesy

Re: [zfs-discuss] Supermicro AOC-SAT2-MV8 hang when drive removed

2008-07-30 Thread Jonathan Loran
e best position to monitor the device. > The primary goal of ZFS is to be able to correctly read data which was successfully committed to disk. There are programming interfaces (e.g. fsync(), msync()) which may be used to en

Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-31 Thread Jonathan Loran
Miles Nordin wrote: > "s" == Steve <[EMAIL PROTECTED]> writes: > s> http://www.newegg.com/Product/Product.aspx?Item=N82E16813128354 > no ECC: > http://en.wikipedia.org/wiki/List_of_Intel_chipsets#Core_2_Chipsets This MB will take these: http://www.inte

[zfs-discuss] corrupt zfs stream? checksum mismatch

2008-08-10 Thread Jonathan Wheeler
it's not so!), why can't I at least have the 20GB of data that it can restore before it bombs out with that checksum error? Thanks for any help with this! Jonathan

Re: [zfs-discuss] x4500 dead HDD, hung server, unable to boot.

2008-08-10 Thread Jonathan Loran
Jorgen Lundman wrote: > # /usr/X11/bin/scanpci | /usr/sfw/bin/ggrep -A1 "vendor 0x11ab device > 0x6081" > pci bus 0x0001 cardnum 0x01 function 0x00: vendor 0x11ab device 0x6081 > Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X Controller > > But it claims resolved for our version:

Re: [zfs-discuss] corrupt zfs stream? "checksum mismatch"

2008-08-12 Thread Jonathan Wheeler
other helpful chap pointed out, if tar encounters an error in the bitstream it just moves on until it finds usable data again. Can zfs not do something similar? I'll take whatever I can get! Jonathan

Re: [zfs-discuss] corrupt zfs stream? checksum mismatch

2008-08-13 Thread Jonathan Wheeler
over the /home fs from the pre-zfsroot.zfs dump? Since there seems to be a problem with the first fs (faith/virtualmachines), I need to find a way to skip restoring that fs so it can focus on faith/home. How can this be achieved with zfs receive? Jonathan

Re: [zfs-discuss] corrupt zfs stream? checksum mismatch

2008-08-13 Thread Jonathan Wheeler
ID=220125 It's way over my head, but if anyone can tell me the mdb commands I'm happy to try them, even if they do kill my cat. I don't really have anything to lose with a copy of the data, and I'll do it all in a VM anyway. Thanks, Jonathan

Re: [zfs-discuss] corrupt zfs stream? checksum mismatch

2008-08-15 Thread Jonathan Wheeler
e a chance of being recovered. If it stops half way, it has _no_ chance of recovering that data, so I favor my odds of letting it go on to at least try :) Or is that an entirely new CR itself? Jonathan

Re: [zfs-discuss] pulling disks was: ZFS hangs/freezes after disk failure,

2008-08-28 Thread Jonathan Loran
value of a failure in one year: Fe = 46% failures/month * 12 months = 5.52 failures. Jon (Jonathan Loran, IT Manager, Space Sciences Laboratory)

Re: [zfs-discuss] zfs-auto-snapshot default schedules

2008-09-25 Thread Jonathan Hogg
s requires me to a) type more; and b) remember where the top of the filesystem is in order to split the path. This is obviously more of a pain if the path is 7 items deep, and the split means you can't just use $PWD. [My choice of .snapshot/nightly.0 is a deliberate nod to the

Re: [zfs-discuss] zfs-auto-snapshot default schedules

2008-09-25 Thread Jonathan Hogg
On 25 Sep 2008, at 17:14, Darren J Moffat wrote: > Chris Gerhard has a zfs_versions script that might help: > http://blogs.sun.com/chrisg/entry/that_there_is Ah. Cool. I will have to try this out. Jonathan ___ zfs-discuss mailing list zfs-d

Re: [zfs-discuss] ZFS poor performance on Areca 1231ML

2008-09-26 Thread Jonathan Loran
two vdevs out of two raidz to see if you get twice the throughput, more or less. I'll bet the answer is yes. Jon (Jonathan Loran, IT Manager)

[zfs-discuss] [Fwd: Another ZFS question]

2008-09-27 Thread jonathan sai
Hi, please see the query below. Appreciate any help. Rgds, jonathan Original Message Would you mind helping me ask your tech guy whether there will be repercussions when I try to run this command in view of the situation below: # zpool add -f zhome raidz

Re: [zfs-discuss] [storage-discuss] ZFS Success Stories

2008-10-20 Thread Jonathan Loran
tools, resilience of the platform, etc.). > Of course though, I guess a lot of people who may have never had a problem wouldn't even be signed up on this list! :-) > Thanks!

Re: [zfs-discuss] ZFS on Fit-PC Slim?

2008-11-06 Thread Jonathan Hogg
y, give it a go and see what happens. I'm sure I can still dimly recall a time when 500MHz/512MB was a kick-ass system... Jonathan (*) This machine can sustain 110MB/s off of the 4-disk RAIDZ1 set, which is substantially more than I can get over my 100Mb network.

Re: [zfs-discuss] Inexpensive ZFS home server

2008-11-12 Thread Jonathan Loran
the system board for this machine would make use of ECC memory either, which is not good from a ZFS perspective. How many SATA plugs are there on the MB in this guy? Jon (Jonathan Loran)

Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-22 Thread Jonathan Edwards
not quite .. it's 16KB at the front and 8MB at the back of the disk (16384 sectors) for the Solaris EFI - so you need to zero out both of these. Of course, since these drives are <1TB, I find it's easier to format to SMI (VTOC) with format -e (choose SMI, label, save, validate - then choose EFI
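Zeroing both label areas can be sketched with dd (the device name and total sector count below are hypothetical; this destroys the label and any data it overwrites):

```shell
DISK=/dev/rdsk/c1t2d0      # hypothetical target disk
# front of the disk (old label area, ~16KB)
dd if=/dev/zero of=$DISK bs=512 count=32
# backup EFI label: the last 16384 sectors (8MB); TOTAL is the disk's
# sector count as reported by format/prtvtoc (value below is made up)
TOTAL=1953525168
dd if=/dev/zero of=$DISK bs=512 seek=$((TOTAL - 16384)) count=16384
```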

Re: [zfs-discuss] ZFS and SNDR..., now I'm confused.

2009-03-06 Thread Jonathan Edwards
On Mar 6, 2009, at 8:58 AM, Andrew Gabriel wrote: Jim Dunham wrote: ZFS the filesystem is always on disk consistent, and ZFS does maintain filesystem consistency through coordination between the ZPL (ZFS POSIX Layer) and the ZIL (ZFS Intent Log). Unfortunately for SNDR, ZFS caches a lot o

[zfs-discuss] Does zpool clear delete corrupted files

2009-06-01 Thread Jonathan Loran
es in tact? I'm going to perform a full backup of this guy (not so easy on my budget), and I would rather only get the good files. Thanks, Jon - _/ _/ / - Jonathan Loran - - -/ / /

Re: [zfs-discuss] Does zpool clear delete corrupted files

2009-06-01 Thread Jonathan Loran
on On Jun 1, 2009, at 2:41 PM, Paul Choi wrote: "zpool clear" just clears the list of errors (and # of checksum errors) from its stats. It does not modify the filesystem in any manner. You run "zpool clear" to make the zpool forget that it ever had any issues. -Paul Jonat

Re: [zfs-discuss] Does zpool clear delete corrupted files

2009-06-01 Thread Jonathan Loran
he zfs layer, and also do backups. Unfortunately for me, penny pinching has precluded both for us until now. Jon On Jun 1, 2009, at 4:19 PM, A Darren Dunham wrote: On Mon, Jun 01, 2009 at 03:19:59PM -0700, Jonathan Loran wrote: Kinda scary then. Better make sure we delete all the bad fil

Re: [zfs-discuss] cannot mount '/tank/home': directory is not empty

2009-06-10 Thread Jonathan Edwards
i've seen a problem where periodically a 'zfs mount -a' and sometimes a 'zpool import ' can create what appears to be a race condition on nested mounts .. that is .. let's say that i have: FS mountpoint pool/export pool/fs1

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Jonathan Edwards
On Jul 4, 2009, at 12:03 AM, Bob Friesenhahn wrote: % ./diskqual.sh c1t0d0 130 MB/sec c1t1d0 130 MB/sec c2t202400A0B83A8A0Bd31 13422 MB/sec c3t202500A0B83A8A0Bd31 13422 MB/sec c4t600A0B80003A8A0B096A47B4559Ed0 191 MB/sec c4t600A0B80003A8A0B096E47B456DAd0 192 MB/sec c4t600A0B80003A8A0B00

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Jonathan Edwards
On Jul 4, 2009, at 11:57 AM, Bob Friesenhahn wrote: This brings me to the absurd conclusion that the system must be rebooted immediately prior to each use. see Phil's later email .. an export/import of the pool or a remount of the filesystem should clear the page cache - with mmap'd files

Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Jonathan Borden
> > > We have a SC846E1 at work; it's the 24-disk, 4u > version of the 826e1. > > It's working quite nicely as a SATA JBOD enclosure. > We'll probably be > buying another in the coming year to have more > capacity. > Good to hear. What HBA(s) are you using against it? > I've got one too and it

Re: [zfs-discuss] Books on File Systems and File System Programming

2009-08-15 Thread Jonathan Edwards
On Aug 14, 2009, at 11:14 AM, Peter Schow wrote: On Thu, Aug 13, 2009 at 05:02:46PM -0600, Louis-Fr?d?ric Feuillette wrote: I saw this question on another mailing list, and I too would like to know. And I have a couple questions of my own. == Paraphrased from other list == Does anyone have a

Re: [zfs-discuss] This is the scrub that never ends...

2009-09-10 Thread Jonathan Edwards
On Sep 9, 2009, at 9:29 PM, Bill Sommerfeld wrote: On Wed, 2009-09-09 at 21:30 +, Will Murnane wrote: Some hours later, here I am again: scrub: scrub in progress for 18h24m, 100.00% done, 0h0m to go Any suggestions? Let it run for another day. A pool on a build server I manage takes ab

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-23 Thread Jonathan Edwards
Roch I've been chewing on this for a little while and had some thoughts On Jan 15, 2007, at 12:02, Roch - PAE wrote: Jonathan Edwards writes: On Jan 5, 2007, at 11:10, Anton B. Rang wrote: DIRECT IO is a set of performance optimisations to circumvent shortcomings of a given files

Re: [zfs-discuss] Re: Thumper Origins Q

2007-01-24 Thread Jonathan Edwards
On Jan 24, 2007, at 09:25, Peter Eriksson wrote: too much of our future roadmap, suffice it to say that one should expect much, much more from Sun in this vein: innovative software and innovative hardware working together to deliver world-beating systems with undeniable economics. Yes p

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-24 Thread Jonathan Edwards
On Jan 24, 2007, at 06:54, Roch - PAE wrote: [EMAIL PROTECTED] writes: Note also that for most applications, the size of their IO operations would often not match the current page size of the buffer, causing additional performance and scalability issues. Thanks for mentioning this, I forgo

Re: [zfs-discuss] Thumper Origins Q

2007-01-24 Thread Jonathan Edwards
On Jan 24, 2007, at 12:41, Bryan Cantrill wrote: well, "Thumper" is actually a reference to Bambi You'd have to ask Fowler, but certainly when he coined it, "Bambi" was the last thing on anyone's mind. I believe Fowler's intention was "one that thumps" (or, in the unique parlance of a

Re: [zfs-discuss] Re: Thumper Origins Q

2007-01-25 Thread Jonathan Edwards
On Jan 25, 2007, at 10:16, Torrey McMahon wrote: Albert Chin wrote: On Wed, Jan 24, 2007 at 10:19:29AM -0800, Frank Cusack wrote: On January 24, 2007 10:04:04 AM -0800 Bryan Cantrill <[EMAIL PROTECTED]> wrote: On Wed, Jan 24, 2007 at 09:46:11AM -0800, Moazam Raja wrote: Well, he did sa

Re: [zfs-discuss] Re: Thumper Origins Q

2007-01-25 Thread Jonathan Edwards
On Jan 25, 2007, at 14:34, Bill Sommerfeld wrote: On Thu, 2007-01-25 at 10:16 -0500, Torrey McMahon wrote: So there's no way to treat a 6140 as JBOD? If you wanted to use a 6140 with ZFS, and really wanted JBOD, your only choice would be a RAID 0 config on the 6140? Why would you want to

Re: [zfs-discuss] Re: Thumper Origins Q

2007-01-25 Thread Jonathan Edwards
On Jan 25, 2007, at 17:30, Albert Chin wrote: On Thu, Jan 25, 2007 at 02:24:47PM -0600, Al Hopper wrote: On Thu, 25 Jan 2007, Bill Sommerfeld wrote: On Thu, 2007-01-25 at 10:16 -0500, Torrey McMahon wrote: So there's no way to treat a 6140 as JBOD? If you wanted to use a 6140 with ZFS, an

Re: [zfs-discuss] multihosted ZFS

2007-01-26 Thread Jonathan Edwards
On Jan 26, 2007, at 13:52, Marion Hakanson wrote: [EMAIL PROTECTED] said: . . . realize that the pool is now in use by the other host. That leads to two systems using the same zpool which is not nice. Is there any solution to this problem, or do I have to get Sun Cluster 3.2 if I want to

Re: [zfs-discuss] ZFS or UFS - what to do?

2007-01-29 Thread Jonathan Edwards
On Jan 26, 2007, at 09:16, Jeffery Malloch wrote: Hi Folks, I am currently in the midst of setting up a completely new file server using a pretty well loaded Sun T2000 (8x1GHz, 16GB RAM) connected to an Engenio 6994 product (I work for LSI Logic so Engenio is a no brainer). I have config

Re: [zfs-discuss] Re: ZFS or UFS - what to do?

2007-01-29 Thread Jonathan Edwards
On Jan 29, 2007, at 14:17, Jeffery Malloch wrote: Hi Guys, SO... From what I can tell from this thread ZFS if VERY fussy about managing writes,reads and failures. It wants to be bit perfect. So if you use the hardware that comes with a given solution (in my case an Engenio 6994) to ma

Re: [zfs-discuss] Project Proposal: Availability Suite

2007-02-02 Thread Jonathan Edwards
On Feb 2, 2007, at 15:35, Nicolas Williams wrote: Unlike traditional journalling replication, a continuous ZFS send/recv scheme could deal with resource constraints by taking a snapshot and throttling replication until resources become available again. Replication throttling would mean losing s

Re: [zfs-discuss] Which label a ZFS/ZPOOL device has ? VTOC or EFI ?

2007-02-04 Thread Jonathan Edwards
On Feb 3, 2007, at 02:31, dudekula mastan wrote: After creating the ZFS file system on a VTOC labeled disk, I am seeing the following warning messages. Feb 3 07:47:00 scoobyb Corrupt label; wrong magic number Feb 3 07:47:00 scoobyb scsi: [ID 107833 kern.warning] WARNING: / scsi_vhci/[

Re: [zfs-discuss] se3510 and ZFS

2007-02-06 Thread Jonathan Edwards
On Feb 6, 2007, at 06:55, Robert Milkowski wrote: Hello zfs-discuss, It looks like when zfs issues write cache flush commands se3510 actually honors it. I do not have right now spare se3510 to be 100% sure but comparing nfs/zfs server with se3510 to another nfs/ufs server with se3510 w

Re: Re[2]: [zfs-discuss] se3510 and ZFS

2007-02-06 Thread Jonathan Edwards
On Feb 6, 2007, at 11:46, Robert Milkowski wrote: Does anybody know how to tell se3510 not to honor write cache flush commands? JE> I don't think you can .. DKIOCFLUSHWRITECACHE *should* tell the array JE> to flush the cache. Gauging from the amount of calls that zfs makes to JE>

Re: [zfs-discuss] Re: Perforce on ZFS

2007-02-20 Thread Jonathan Edwards
Roch what's the minimum allocation size for a file in zfs? I get 1024B by my calculation (1 x 512B block allocation (minimum) + 1 x 512B inode/ znode allocation) since we never pack file data in the inode/znode. Is this a problem? Only if you're trying to pack a lot files small byte fil

Re: [zfs-discuss] Re: Perforce on ZFS

2007-02-20 Thread Jonathan Edwards
On Feb 20, 2007, at 15:05, Krister Johansen wrote: what's the minimum allocation size for a file in zfs? I get 1024B by my calculation (1 x 512B block allocation (minimum) + 1 x 512B inode/ znode allocation) since we never pack file data in the inode/znode. Is this a problem? Only if you're t

Re: [zfs-discuss] 6410 expansion shelf

2007-03-27 Thread Jonathan Edwards
right on for optimizing throughput on solaris .. a couple of notes though (also mentioned in the QFS manuals): - on x86/x64 you're just going to have an sd.conf so just increase the max_xfer_size for all with a line at the bottom like: sd_max_xfer_size=0x80; (note: if you look at
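The sd.conf change mentioned would look something like the fragment below; the 8MB value is an assumption, since the value in the post appears truncated:

```
# /kernel/drv/sd.conf (x86/x64): raise the maximum transfer size for
# all sd instances. 0x800000 (8 MB) is an assumed example value.
sd_max_xfer_size=0x800000;
```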

[zfs-discuss] Move data from the zpool (root) to a zfs file system

2007-04-13 Thread Jonathan Loran
be very much appreciated. Thanks, Jon (Jonathan Loran, IT Manager, Space Sciences Laboratory, UC Berkeley)

Re: [zfs-discuss] Issue with adding existing EFI disks to a zpool

2007-05-05 Thread Jonathan Edwards
You know you've got an empty label if you get stderr entries at the top of the format output, or syslog messages around "corrupt label - bad magic number". Jonathan

Re: [zfs-discuss] Re: Re: Lots of overhead with ZFS - what am I doing wrong?

2007-05-15 Thread Jonathan Edwards
On May 15, 2007, at 13:13, Jürgen Keil wrote: Would you mind also doing: ptime dd if=/dev/dsk/c2t1d0 of=/dev/null bs=128k count=1 to see the raw performance of underlying hardware. This dd command is reading from the block device, which might cache dataand probably splits requests into

Re: [zfs-discuss] Re: shareiscsi is cool, but what about sharefc or sharescsi?

2007-06-01 Thread Jonathan Edwards
On Jun 1, 2007, at 18:37, Richard L. Hamilton wrote: Can one use a spare SCSI or FC controller as if it were a target? we'd need an FC or SCSI target mode driver in Solaris .. let's just say we used to have one, and leave it mysteriously there. smart idea though! --- .je

Re: [zfs-discuss] ZFS raid is very slow???

2007-07-07 Thread Jonathan Edwards
B file write of zeros .. or use a better opensource tool like iozone to get a better fix on single thread vs multi-thread, read/write mix, and block size differences for your given filesystem and storage layout jonathan

Re: [zfs-discuss] Samba with ZFS ACL

2007-09-04 Thread Jonathan Edwards
;ll need the following in the smb.conf [public] section: vfs objects = zfsacl nfs4: mode = special and for other issues around samba and the zfs_acl patch you should really watch jurasek's blog: http://blogs.sun.com/jurasek/ jonathan

Re: [zfs-discuss] ZFS/WAFL lawsuit

2007-09-06 Thread Jonathan Edwards
On Sep 6, 2007, at 14:48, Nicolas Williams wrote: >> Exactly the article's point -- rulings have consequences outside of the original case. The intent may have been to store logs for web server access (logical and prudent request) but the ruling states that RAM, albeit working m

Re: [zfs-discuss] compression=on and zpool attach

2007-09-11 Thread Jonathan Adams
ill I see the benefit of compression on the blocks that are copied by the mirror being resilvered? No; resilvering just re-copies the existing blocks, in whatever compression state they are in. You need to re-write the files *at the filesystem layer* to get the blocks compressed. Cheer
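Rewriting the files after enabling compression can be sketched like this (the dataset path is hypothetical; a copy-then-rename rewrite is not atomic and briefly doubles each file's space usage):

```shell
# Assumption: tank/data is an existing dataset that was just switched to
# compression=on; only newly written blocks get compressed, so each file
# is copied and renamed back to force allocation of new blocks.
zfs set compression=on tank/data
find /tank/data -type f | while read -r f; do
  cp -p "$f" "$f.recompress" && mv "$f.recompress" "$f"
done
```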

Re: [zfs-discuss] hardware sizing for a zfs-based system?

2007-09-13 Thread Jonathan Loran
ks! > Kent

Re: [zfs-discuss] hardware sizing for a zfs-based system?

2007-09-14 Thread Jonathan Loran
C-SAT2-MV8.cfm) > for about $100 each > Good luck, > Getting there - can anybody clue me into how much CPU/Mem ZFS needs? I have an old 1.2GHz with 1GB of mem laying around - would it be sufficient? > Thanks! > Kent

Re: [zfs-discuss] The ZFS-Man.

2007-09-21 Thread Jonathan Edwards
On Sep 21, 2007, at 14:57, eric kustarz wrote: > Hi. > I gave a talk about ZFS during EuroBSDCon 2007, and because it won the best talk award and some find it funny, here it is: > http://youtube.com/watch?v=o3TGM0T1CvE > a bit better version is here: > http:

Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-22 Thread Jonathan Loran
roblem of worrying about where a user's files are when they want to access them :(. (Jonathan Loran, IT Manager)

Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-24 Thread Jonathan Loran
Paul B. Henson wrote: On Sat, 22 Sep 2007, Jonathan Loran wrote: My gut tells me that you won't have much trouble mounting 50K file systems with ZFS. But who knows until you try. My questions for you is can you lab this out? Yeah, after this research phase has been comp

Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Jonathan Edwards
On Sep 25, 2007, at 19:57, Bryan Cantrill wrote: > On Tue, Sep 25, 2007 at 04:47:48PM -0700, Vincent Fox wrote: >> It seems like ZIL is a separate issue. > It is very much the issue: the separate log device work was done exactly to make better use of this kind of non-volatile memory.

Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Jonathan Edwards
On Sep 26, 2007, at 14:10, Torrey McMahon wrote: > You probably don't have to create a LUN the size of the NVRAM either. As long as it's dedicated to one LUN then it should be pretty quick. The 3510 cache, last I checked, doesn't do any per-LUN segmentation or sizing. It's a simple front

Re: [zfs-discuss] Sun 6120 array again

2007-10-01 Thread Jonathan Edwards
SCSI based, but solid and cheap enclosures if you don't care about support: http://search.ebay.com/search/search.dll?satitle=Sun+D1000 On Oct 1, 2007, at 12:15, Andy Lubel wrote: > I gave up. > > The 6120 I just ended up not doing zfs. And for our 6130 since we > don't > have santricity or t

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-04 Thread Jonathan Loran
rites enough to make a difference? Possibly not. Anton

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-05 Thread Jonathan Loran
Nicolas Williams wrote: On Thu, Oct 04, 2007 at 10:26:24PM -0700, Jonathan Loran wrote: I can envision a highly optimized, pipelined system, where writes and reads pass through checksum, compression, encryption ASICs, that also locate data properly on disk. ... I've argued b

Re: [zfs-discuss] HAMMER

2007-10-16 Thread Jonathan Loran

Re: [zfs-discuss] HAMMER

2007-10-17 Thread Jonathan Loran
http://milek.blogspot.com

Re: [zfs-discuss] HAMMER

2007-10-17 Thread Jonathan Loran
Richard Elling wrote: > Jonathan Loran wrote: ... > Do not assume that a compressed file system will send compressed. > IIRC, it > does not. Let's say, if it were possible to detect the remote compression support, couldn't we send it compressed? With higher compression

Re: [zfs-discuss] df command in ZFS?

2007-10-18 Thread Jonathan Edwards
On Oct 18, 2007, at 11:57, Richard Elling wrote: > David Runyon wrote: >> I was presenting to a customer at the EBC yesterday, and one of the >> people at the meeting said using df in ZFS really drives him crazy >> (no, >> that's all the detail I have). Any ideas/suggestions? > > Filter it. T

Re: [zfs-discuss] df command in ZFS?

2007-10-18 Thread Jonathan Edwards
On Oct 18, 2007, at 13:26, Richard Elling wrote: > > Yes. It is true that ZFS redefines the meaning of available space. > But > most people like compression, snapshots, clones, and the pooling > concept. > It may just be that you want zfs list instead, df is old-school :-) exactly - i'm not

Re: [zfs-discuss] Distribued ZFS

2007-10-21 Thread Jonathan Edwards
On Oct 20, 2007, at 20:23, Vincent Fox wrote: > To my mind ZFS has a serious deficiency for JBOD usage in a high- > availability clustered environment. > > Namely, inability to tie spare drives to a particular storage group. > > Example in clustering HA setups you would would want 2 SAS JBOD >

Re: [zfs-discuss] Count objects/inodes

2007-11-10 Thread Jonathan Edwards
Hey Bill: what's an object here? or do we have a mapping between "objects" and block pointers? for example a zdb -bb might show: th37 # zdb -bb rz-7 Traversing all blocks to verify nothing leaked ... No leaks (block sum matches space maps exactly) bp count: 47

Re: [zfs-discuss] Modify fsid/guid of dataset for NFS failover

2007-11-12 Thread Jonathan Edwards
On Nov 10, 2007, at 23:16, Carson Gaspar wrote: > Mattias Pantzare wrote: > >> As the fsid is created when the file system is created it will be the >> same when you mount it on a different NFS server. Why change it? >> >> Or are you trying to match two different file systems? Then you also >> ha

Re: [zfs-discuss] mdb ::memstat including zfs buffer details?

2007-11-12 Thread Jonathan Adams
think it should be too bad (for ::memstat), given that (at least in Nevada), all of the ZFS caching data belongs to the "zvp" vnode, instead of "kvp". The work that made that change was: 4894692 caching data in heap inflates crash dump Of course, this so-called "fr

Re: [zfs-discuss] mdb ::memstat including zfs buffer details?

2007-11-12 Thread Jonathan Adams
ata buffers are attached to zvp; however, we still keep metadata in > the crashdump. At least right now, this means that cached ZFS metadata > has kvp as its vnode. > Still, it's better than what you get currently. Cheers, - jonathan

Re: [zfs-discuss] Yager on ZFS

2007-11-13 Thread Jonathan Stewart
where 1-1.5MB jpegs and the errors moved around so I could have just copied a file repeatedly until I got a good copy but that would have been a lot of work. Jonathan

Re: [zfs-discuss] I screwed up my zpool

2007-12-03 Thread jonathan soons
revised indentation: mirror2 / # zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
--

[zfs-discuss] I screwed up my zpool

2007-12-03 Thread jonathan soons
ith c4t0d0 plus some more disks since there are more than the recommended number of disks in tank already. jonathan soons
