Re: [zfs-discuss] /bin/cp vs /usr/gnu/bin/cp

2010-06-26 Thread Dennis Clarke
> Hello, > > I came across this blog post: > > http://kevinclosson.wordpress.com/2007/03/15/copying-files-on-solaris-slow-or-fast-its-your-choice/ > > and would like to hear from you performance gurus how this 2007 > article relates to the 2010 ZFS implementation? What should I use and > why?

Re: [zfs-discuss] zfs iostat - which unit bit vs. byte

2010-06-17 Thread Dennis Clarke
> Hi-- > > ZFS command operations involving disk space take input and display using > numeric values specified as exact values, or in a human-readable form > with a suffix of B, K, M, G, T, P, E, Z for bytes, kilobytes, megabytes, > gigabytes, terabytes, petabytes, exabytes, or zettabytes. > Let'
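For reference, a minimal sketch (pool and dataset names below are hypothetical) of seeing the same figure both ways; zpool iostat and zfs list report bytes, not bits:
# zfs list -o name,used tank/home     # human-readable, e.g. 1.21G
# zfs get -Hp used tank/home          # the exact byte count, in parsable form
So a zpool iostat bandwidth column reading 120M means roughly 120 megabytes per second, not megabits.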

Re: [zfs-discuss] Dedup... still in beta status

2010-06-16 Thread Dennis Clarke
6120 fibre arrays ( in HA config ) and I could not get it to become a warm brick like you describe. How many processors does your machine have ? -- Dennis Clarke dcla...@opensolaris.ca <- Email related to the open source Solaris dcla...@blastwave.org <- Email related to open source for

Re: [zfs-discuss] Dedup performance hit

2010-06-14 Thread Dennis Clarke
n is to either buy more RAM, or find > something you can use as an L2ARC cache device for your pool. Ideally, > it would be an SSD. However, in this case, a plain hard drive would do > OK (NOT one already in a pool).To add such a device, you would do: > 'zpool add tank my
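A minimal sketch of that suggestion, assuming a pool called tank and a free disk c4t2d0 (both hypothetical):
# zpool add tank cache c4t2d0     # the disk becomes an L2ARC cache device for tank
# zpool iostat -v tank 5          # the cache device gets its own section in the output
Unlike a normal top-level vdev, a cache device can be taken back out later with 'zpool remove tank c4t2d0'.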

Re: [zfs-discuss] swap - where is it coming from?

2010-06-10 Thread Dennis Clarke
> Re-read the section on"Swap Space and Virtual Memory" for particulars on > how Solaris does virtual memory mapping, and the concept of Virtual Swap > Space, which is what 'swap -s' is really reporting on. The Solaris Internals book is awesome for this sort of thing. A bit over the top in detail
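For the curious, the two views side by side, a hedged sketch that assumes nothing about the machine:
# swap -s     # virtual swap accounting: allocated, reserved, used, available
# swap -l     # the physical swap devices/files and their free blocks
The gap between the two is exactly what that chapter explains: swap -s counts physical memory that can back anonymous reservations, not just on-disk swap.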

Re: [zfs-discuss] one more time: pool size changes

2010-06-03 Thread Dennis Clarke
0 c4t2004CF9B63D0d0s0 ONLINE 0 0 0 So the manner in which any given IO transaction gets to the zfs filesystem just gets ever more complicated and convoluted and it makes me wonder if I am tossing away performance to get higher levels of safety. -- Dennis Clarke dcla...@

Re: [zfs-discuss] zfs/lofi/share panic

2010-05-27 Thread Dennis Clarke
happens. r...@aequitas:/# unshare /mnt r...@aequitas:/# share -F nfs -o nosub,nosuid,sec=sys,ro,anon=0 /mnt r...@aequitas:/# unshare /mnt r...@aequitas:/# Guess I must now try this with a ZFS fs under that iso file. -- Dennis Clarke dcla...@opensolaris.ca <- Email related to the open sourc

Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-17 Thread Dennis Clarke
>On 05-17-10, Thomas Burgess wrote:  >psrinfo -pv shows: > >The physical processor has 8 virtual processors (0-7) >    x86  (AuthenticAMD 100F91 family 16 model 9 step 1 clock 200 MHz) >               AMD Opteron(tm) Processor 6128   [  Socket: G34 ] > That's odd. Please try this : # kstat -m
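The kstat line is cut off in the archive; a hedged guess at the sort of query intended (the statistic names are real cpu_info kstats, the grep pattern is only illustrative):
# kstat -m cpu_info | egrep 'brand|clock_MHz|chip_id|core_id'
That prints the marketing name, clock speed and socket/core identifiers for every virtual processor, enough to see whether all of the Opteron 6100's cores are visible.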

Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-15 Thread Dennis Clarke
- Original Message - From: Thomas Burgess Date: Saturday, May 15, 2010 8:09 pm Subject: Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris? To: Orvar Korvar Cc: zfs-discuss@opensolaris.org > Well i just wanted to let everyone know that preliminary results are good. > The liv

Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-12 Thread Dennis Clarke
ossible.[1] If you want you can ssh in to the blastwave server farm and jump on that also ... I'm always game to play with such things. -- Dennis Clarke dcla...@opensolaris.ca <- Email related to the open source Solaris dcla...@blastwave.org <- Email related to open source for Solar

Re: [zfs-discuss] why both dedup and compression?

2010-05-07 Thread Dennis Clarke
> On 06/05/2010 21:07, Erik Trimble wrote: >> VM images contain large quantities of executable files, most of which >> compress poorly, if at all. > > What data are you basing that generalisation on ? note : I can't believe someone said that. warning : I just detected a fast rise time on my peda

Re: [zfs-discuss] ZFS kstat Stats

2010-04-08 Thread Dennis Clarke
> Do the following ZFS stats look ok? > >> ::memstat > Page Summary Pages MB %Tot > > Kernel 106619 832 28% > ZFS File Data 79817 623 21% > Anon 28553 223 7% > Exec and libs 3055 23 1% > Page cache 18024 140 5% > Free (cachelist) 2880 22 1% > Fre

Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-10 Thread Dennis Clarke
>>>>>> "ea" == erik ableson writes: >>>>>> "dc" == Dennis Clarke writes: > > >> "rw,ro...@100.198.100.0/24", it works fine, and the NFS client > >> can do the write without error. > >

Re: [zfs-discuss] sharenfs option rw,root=host1 don't take effect

2010-03-09 Thread Dennis Clarke
> Hi All, > I had create a ZFS filesystem test and shared it with "zfs set > sharenfs=root=host1 test", and I checked the sharenfs option and it > already update to "root=host1": Try to use a backslash to escape those special chars like so : zfs set sharenfs=nosub\,nosuid\,rw\=hostname1\:hostna
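A slightly fuller sketch of the two ways to get the commas and colons past the shell and into the property (hostnames and dataset are hypothetical):
# zfs set sharenfs=rw\,root\=host1\:host2 tank/test    # backslash-escape the separators, as above
# zfs set sharenfs='rw,root=host1:host2' tank/test     # or simply quote the whole value
# zfs get sharenfs tank/test                           # confirm what was actually stored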

Re: [zfs-discuss] getting drive serial number

2010-03-07 Thread Dennis Clarke
> build from source: > http://smartmontools.sourceforge.net/ > You can find it at http://blastwave.network.com/csw/unstable/ Just install it with pkgadd or use pkgtrans to extract it and then run the binary. -- Dennis Clarke dcla...@opensolaris.ca <- Email related to the open sour
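A hedged sketch of the install; the package file name below is made up, the real one under that URL will differ:
# pkgadd -d ./smartmontools-5.xx-SunOS5.10-sparc-CSW.pkg             # hypothetical file name
# pkgtrans ./smartmontools-5.xx-SunOS5.10-sparc-CSW.pkg /tmp all     # alternative: just unpack it under /tmp
Either way you end up with smartctl on disk; run it against the raw device to read the drive's serial number.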

Re: [zfs-discuss] false DEGRADED status based on "cannot open" device at boot.

2010-02-17 Thread Dennis Clarke
> On Wed, 17 Feb 2010, Dennis Clarke wrote: >> >>NAME STATE READ WRITE CKSUM >>mercury_rpool ONLINE 0 0 0 >> mirror ONLINE 0 0 0 >>c3t0d0s0 ONLINE 0 0 0 >>

[zfs-discuss] false DEGRADED status based on "cannot open" device at boot.

2010-02-17 Thread Dennis Clarke
boot with init S or 3 or 6. -- Dennis Clarke dcla...@opensolaris.ca <- Email related to the open source Solaris dcla...@blastwave.org <- Email related to open source for Solaris ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http

Re: [zfs-discuss] Detach ZFS Mirror

2010-02-11 Thread Dennis Clarke
otable. You probably have to go through the installboot procedure for that. -- Dennis Clarke dcla...@opensolaris.ca <- Email related to the open source Solaris dcla...@blastwave.org <- Email related to open source for Solaris ___ zfs-discuss mail
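The installboot step being referred to, sketched for SPARC with a hypothetical disk name (on x86 the equivalent tool is installgrub):
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c3t1d0s0
That writes the ZFS boot block onto the other half of the mirror so either disk can actually boot the system.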

Re: [zfs-discuss] possible to remove a mirror pair from a zpool?

2010-01-10 Thread Dennis Clarke
> No, sorry Dennis, this functionality doesn't exist yet, but > is being worked, > but will take a while, lots of corner cases to handle. > > James Dickens > uadmin.blogspot.com 1 ) dammit 2 ) looks like I need to do a full offline backup and then restore to shrink a zpool. As usual, Thanks

[zfs-discuss] possible to remove a mirror pair from a zpool?

2010-01-10 Thread Dennis Clarke
Suppose the requirements for storage shrink ( it can happen ) is it possible to remove a mirror set from a zpool? Given this : # zpool status array03 pool: array03 state: ONLINE status: The pool is formatted using an older on-disk format. The pool can still be used, but some features

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Dennis Clarke
an also use star which may speed things up, safely. star -copy -p -acl -sparse -dump -xdir -xdot -fs=96m -fifostats -time \ -C source_dir . destination_dir that will buffer the transport of the data from source to dest via memory and work to keep that buffer full as data is written on the out

Re: [zfs-discuss] invalid mountpoint 'mountpoint=legacy' ?

2009-12-22 Thread Dennis Clarke
I hate it when I do that .. 30 secs later I see -m mountpoint which is a Property but not specified as -o foo=bar format. erk # ptime zpool create -f -o autoreplace=on -o version=10 \ > -m legacy \ > fibre01 mirror c2t0d0 c3t16d0 \ > mirror c2t1d0 c3t17d0 \ > mirror c2t2d0 c3t18d0 \ > mirror c2t

[zfs-discuss] invalid mountpoint 'mountpoint=legacy' ?

2009-12-22 Thread Dennis Clarke
none' real 14.884950400 user 0.998020300 sys 3.334027400 -- Dennis Clarke dcla...@opensolaris.ca <- Email related to the open source Solaris dcla...@blastwave.org <- Email related to open source for Solaris ___ zfs-discuss mailing

Re: [zfs-discuss] quotas on zfs at solaris 10 update 9 (10/09)

2009-12-10 Thread Dennis Clarke
t reports a long list of names under /export/home or similar. Then you can easily see the used space per filesystem. Allocating user quotas and then asking the simple questions seems mysterious to me also. I am looking into this for my own reasons and will stay in touch. -- Dennis Clarke dcl
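For the per-user case, a short sketch with hypothetical dataset and user names (the userquota/userused properties exist from roughly snv_114 / Solaris 10 10/09 onward):
# zfs userspace rpool/export/home                    # space consumed per user on one filesystem
# zfs set userquota@dclarke=10G rpool/export/home    # cap a single user
# zfs get userused@dclarke rpool/export/home         # how much that user has used so far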

Re: [zfs-discuss] b128a available w/deduplication

2009-12-03 Thread Dennis Clarke
> Dennis Clarke wrote: >>> FYI, >>> OpenSolaris b128a is available for download or image-update from the >>> dev repository. Enjoy. >> >> I thought that dedupe has been out for weeks now ? > > The source has, yes. But what Richard was referring

Re: [zfs-discuss] b128a available w/deduplication

2009-12-03 Thread Dennis Clarke
> FYI, > OpenSolaris b128a is available for download or image-update from the > dev repository. Enjoy. I thought that dedupe has been out for weeks now ? Dennis ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mai

Re: [zfs-discuss] dedupe question

2009-11-08 Thread Dennis Clarke
7.85G 7.85G dedup = 1.96, compress = 1.51, copies = 1.00, dedup * compress / copies = 2.95 # I have no idea what any of that means, yet :-) -- Dennis Clarke dcla...@opensolaris.ca <- Email related to the open source Solaris dcla...@blastwave.org &l

Re: [zfs-discuss] dedupe question

2009-11-08 Thread Dennis Clarke
> On Sat, 7 Nov 2009, Dennis Clarke wrote: >> >> Now the first test I did was to write 26^2 files [a-z][a-z].dat in 26^2 >> directories named [a-z][a-z] where each file is 64K of random >> non-compressible data and then some english text. > > What method did you

Re: [zfs-discuss] dedupe question

2009-11-07 Thread Dennis Clarke
> On Sat, 2009-11-07 at 17:41 -0500, Dennis Clarke wrote: >> Does the dedupe functionality happen at the file level or a lower block >> level? > > it occurs at the block allocation level. > >> I am writing a large number of files that have the fol structure : &g

[zfs-discuss] dedupe question

2009-11-07 Thread Dennis Clarke
Does the dedupe functionality happen at the file level or a lower block level? I am writing a large number of files that have the fol structure : -- file begins 1024 lines of random ASCII chars 64 chars long some tilde chars .. about 1000 of then some text ( english ) for 2K more text ( engl

Re: [zfs-discuss] Quick dedup question

2009-11-07 Thread Dennis Clarke
e12.5G- neptune_rpool allocated 21.3G- I'm currently running tests with this : http://www.blastwave.org/dclarke/crucible_source.txt -- Dennis Clarke dcla...@opensolaris.ca <- Email related to the open source Solaris dcla...@blastwave.org <- Email related t

Re: [zfs-discuss] SunOS neptune 5.11 snv_127 sun4u sparc SUNW, Sun-Fire-880

2009-11-03 Thread Dennis Clarke
> Dennis Clarke wrote: >> I just went through a BFU update to snv_127 on a V880 : >> >> neptune console login: root >> Password: >> Nov 3 08:19:12 neptune login: ROOT LOGIN /dev/console >> Last login: Mon Nov 2 16:40:36 on console >> Sun Microsystems

[zfs-discuss] SunOS neptune 5.11 snv_127 sun4u sparc SUNW, Sun-Fire-880

2009-11-03 Thread Dennis Clarke
re at all or shall I just wait for the putback to hit the mercurial repo ? Yes .. this is sort of begging .. but I call it "enthusiasm" :-) -- Dennis Clarke dcla...@opensolaris.ca <- Email related to the open source Solaris dcla...@blastwave.org <- Email related to op

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Dennis Clarke
a SHA512 based de-dupe implementation would be possible and even realistic. That would solve the hash collision concern I would think. Merely thinking out loud here ... -- Dennis Clarke dcla...@opensolaris.ca <- Email related to the open source Solaris dcla...@blastwave.org &l
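What is available today for the collision-averse, as a sketch with a hypothetical pool name; sha256 is the only dedup checksum currently wired in, so the SHA512 idea above would be new work:
# zfs set dedup=sha256,verify tank    # on a hash match, do a full byte-for-byte compare before sharing the block
With verify set, a collision costs an extra read and compare instead of silently merging two different blocks.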

[zfs-discuss] root pool can not have multiple vdevs ?

2009-10-27 Thread Dennis Clarke
This seems like a bit of a restriction ... is this intended ? # cat /etc/release Solaris Express Community Edition snv_125 SPARC Copyright 2009 Sun Microsystems, Inc. All Rights Reserved. Use is subject to license terms.

Re: [zfs-discuss] You really do need ECC RAM

2009-10-10 Thread Dennis Clarke
ore comparable, ranging from 3351–4530 correctable errors per year." B. Schroeder, E. Pinheiro, W.-D. Weber. "DRAM errors in the wild: A Large-Scale Field Study." Sigmetrics/Performance 2009 see http://www.cs.toronto.edu/~bianca/ -- Dennis Clarke dcla...@opensolaris.ca

Re: [zfs-discuss] True in U4? "Tar and cpio...save and restore ZFS File attributes and ACLs"

2009-10-01 Thread Dennis Clarke
h those strange ACL's there. $ cd /home/dclarke/test $ rm -rf destination I'll do some more testing with star 1.5a89 and let you know what I see. -- Dennis Clarke dcla...@opensolaris.ca <- Email related to the open source Solaris dcla...@blastwave.org <- Email related to open sourc

Re: [zfs-discuss] *Almost* empty ZFS filesystem - 14GB?

2009-08-16 Thread Dennis Clarke
html that was fast . Cyril, long time no hear. :-( Hows life the universe and risc processors for you these days ? -- Dennis Clarke dcla...@opensolaris.ca <- Email related to the open source Solaris dcla...@blastwave.org <- Email related to open source for Solaris ps: I have been busy po

Re: [zfs-discuss] *Almost* empty ZFS filesystem - 14GB?

2009-08-16 Thread Dennis Clarke
It wasn't : # zfs get refquota,refreservation,quota,reservation fibre0 NAME PROPERTY VALUE SOURCE fibre0 refquota none default fibre0 refreservation none default fibre0 quota none default fibre0 reservation none default what the

[zfs-discuss] zpool iostat reports seem odd. bug ?

2009-08-10 Thread Dennis Clarke
like the write traffic to the new device is being ignored in the non-verbose output data. -- Dennis Clarke ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
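A hedged way to check whether the traffic is really missing or just being summarized oddly (pool name hypothetical):
# zpool iostat tank 5        # pool-wide totals, one sample every 5 seconds
# zpool iostat -v tank 5     # the same, broken out per top-level vdev and per disk
If the new device shows writes under -v while the pool total stays flat, it is the summary line that deserves the bug report.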

Re: [zfs-discuss] ZFS Zpool lazy mirror ?

2009-07-19 Thread Dennis Clarke
self replies are so degrading ( pun intended ) I see this patch : Document Audience: PUBLIC Document ID: 139555-08 Title: SunOS 5.10: Kernel Patch Copyright Notice: Copyright © 2009 Sun Microsystems, Inc. All Rights Reserved Update Date: Fri Jul 10 04:29:40 MDT 2009 I have a

[zfs-discuss] ZFS Zpool lazy mirror ?

2009-07-19 Thread Dennis Clarke
Pardon me but I had to change subject lines just to get out of that other thread. In that other thread .. you were saying : >> dick hoogendijk uttered: >> true. Furthermore, much so-called consumer hardware is very good these >> days. My guess is ZFS should work quite reliably on that hardware.

Re: [zfs-discuss] The zfs performance decrease when enable the MPxIO round-robin

2009-07-19 Thread Dennis Clarke
> > To enable mpxio, you need to have > > mpxio-disable="no"; > > in your fp.conf file. You should run /usr/sbin/stmsboot -e to make > this happen. If you *must* edit that file by hand, always run > /usr/sbin/stmsboot -u afterwards to ensure that your system's MPxIO > config is correctly updated.
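The advice above condensed into the usual sequence; nothing here is system-specific:
# /usr/sbin/stmsboot -e    # enable MPxIO; it edits fp.conf for you and offers a reboot
# /usr/sbin/stmsboot -L    # after the reboot, list the old-name to new-name device mapping
# /usr/sbin/stmsboot -u    # only needed when fp.conf was edited by hand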

[zfs-discuss] Thank you.

2009-07-15 Thread Dennis Clarke
e needs to buy the ZFS guys some keg(s) of whatever beer they want. Or maybe new Porsche Cayman S toys. That would be gratitude as something more than just words. Thank you. -- Dennis Clarke ps: the one funny thing is that I had to get a few things swapped out and I guess that resets th

Re: [zfs-discuss] first use send/receive... somewhat confused.

2009-07-13 Thread Dennis Clarke
> Dennis Clarke writes: > >> This will probably get me bombed with napalm but I often just >> use star from Jörg Schilling because its dead easy : >> >> star -copy -p -acl -sparse -dump -C old_dir . new_dir >> >> and you're done.[1] >> &g

Re: [zfs-discuss] first use send/receive... somewhat confused.

2009-07-13 Thread Dennis Clarke
> Richard Elling writes: > >> You can only send/receive snapshots. However, on the receiving end, >> there will also be a dataset of the name you choose. Since you didn't >> share what commands you used, it is pretty impossible for us to >> speculate what you might have tried. > > I thought I m
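A minimal sketch of what "you can only send/receive snapshots" looks like in practice (pool and dataset names hypothetical):
# zfs snapshot tank/data@backup1
# zfs send tank/data@backup1 | zfs receive backup/data       # creates backup/data and backup/data@backup1
# zfs snapshot tank/data@backup2                             # some time later...
# zfs send -i tank/data@backup1 tank/data@backup2 | zfs receive backup/data   # send only the delta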

Re: [zfs-discuss] zfs on 32 bit?

2009-06-16 Thread Dennis Clarke
> On Tue, 16 Jun 2009, roland wrote: > >> so, we have a 128bit fs, but only support for 1tb on 32bit? >> >> i`d call that a bug, isn`t it ? is there a bugid for this? ;) > > I'd say the bug in this instance is using a 32-bit platform in 2009! :-) Rich, a lot of embedded industrial solutions are

Re: [zfs-discuss] compression at zfs filesystem creation

2009-06-15 Thread Dennis Clarke
o be enabled. I agree that "Compression is a choice" and would add : Compression is a choice and it is the default. Just my feelings on the issue. Dennis Clarke ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Quick adding devices question

2009-05-29 Thread Dennis Clarke
rror, and so on. In either case, new_device begins to resilver immediately. so yeah, you have it. Want to go for bonus points? Try to read into that man page to figure out how to add a hot spare *after* you are all mirrored up. -- Dennis Clarke _
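The bonus-points answer, sketched with hypothetical device names:
# zpool add tank spare c4t3d0    # add a hot spare to the already-mirrored pool
# zpool status tank              # the disk appears in its own spares section, marked AVAIL until needed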

[zfs-discuss] using zdb -e -bbcsL to debug that hung thread issue

2009-05-10 Thread Dennis Clarke
.@blastwave.org ------ Dennis Clarke wrote: > # w > 3:14pm up 11:24, 3 users, load average: 0.46, 0.29, 0.23 > User tty login@ idle JCPU PCPU what > dclarke console 1:22pm 1:52 2:02 1:31 /usr/lib/nwam-manager > dclarke pts/4 1:44pm 1:10

Re: [zfs-discuss] is zpool import unSIGKILLable ?

2009-05-10 Thread Dennis Clarke
>> CTRL+C does nothing and kill -9 pid does nothing to this command. >> >> feels like a bug to me > > Yes, it is: > > http://bugs.opensolaris.org/view_bug.do?bug_id=6758902 > Now I recall why I had to reboot. Seems as if a lot of commands hang now. Things like : df -ak zfs list zpool list t

Re: [zfs-discuss] is zpool import unSIGKILLable ?

2009-05-10 Thread Dennis Clarke
> Dennis Clarke wrote: >>> Dennis Clarke wrote: >>>>>>> It may be because it is blocked in kernel. >>>>>>> Can you do something like this: >>>>>>> echo "0t::pid2proc|::walk thread|::findstack >>>>>

Re: [zfs-discuss] is zpool import unSIGKILLable ?

2009-05-10 Thread Dennis Clarke
> Dennis Clarke wrote: >>>>> It may be because it is blocked in kernel. >>>>> Can you do something like this: >>>>> echo "0t::pid2proc|::walk thread|::findstack -v" >>> So we see that it cannot complete import here and is waitin
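The full form of that mdb incantation, with a made-up PID standing in for the stuck zpool import:
# echo "0t1234::pid2proc | ::walk thread | ::findstack -v" | mdb -k
0t1234 is the decimal PID; ::pid2proc turns it into a proc pointer, ::walk thread walks that process's threads, and ::findstack -v prints each kernel stack, which is what shows where the import is blocked.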

Re: [zfs-discuss] is zpool import unSIGKILLable ?

2009-05-10 Thread Dennis Clarke
ONLINE c0d0p0ONLINE # please see ALL the details at : http://www.blastwave.org/dclarke/blog/files/kernel_thread_stuck.README also see output from fmdump -eV http://www.blastwave.org/dclarke/blog/files/fmdump_e.log Please let me know what else you may need. -- Dennis Clarke __

Re: [zfs-discuss] is zpool import unSIGKILLable ?

2009-05-09 Thread Dennis Clarke
> Dennis Clarke wrote: >> I tried to import a zpool and the process just hung there, doing nothing. >> It has been ten minutes now so I tries to hit CTRL-C. That did nothing. > > It may be because it is blocked in kernel. > > Can you do something like this: > >

Re: [zfs-discuss] is zpool import unSIGKILLable ?

2009-05-09 Thread Dennis Clarke
> Dennis Clarke wrote: >> I tried to import a zpool and the process just hung there, doing >> nothing. >> It has been ten minutes now so I tries to hit CTRL-C. That did nothing. >> > > This symptom is consistent with a process blocked waiting on disk I/O.

Re: [zfs-discuss] is zpool import unSIGKILLable ?

2009-05-09 Thread Dennis Clarke
> Dennis Clarke wrote: >> I tried to import a zpool and the process just hung there, doing >> nothing. >> It has been ten minutes now so I tries to hit CTRL-C. That did nothing. >> > > This symptom is consistent with a process blocked waiting on disk I/O. >

[zfs-discuss] is zpool import unSIGKILLable ?

2009-05-09 Thread Dennis Clarke
I tried to import a zpool and the process just hung there, doing nothing. It has been ten minutes now so I tried to hit CTRL-C. That did nothing. So then I tried : Sun Microsystems Inc. SunOS 5.11 snv_110 November 2008 r...@opensolaris:~# ps -efl F S UID PID PPID C PRI NI

Re: [zfs-discuss] [on-discuss] Reliability at power failure?

2009-04-19 Thread Dennis Clarke
>> And after some 4 days without any CKSUM error, how can yanking the >> power cord mess boot-stuff? > > Maybe because on the fifth day some hardware failure occurred? ;-) ha ha ! sorry .. that was pretty funny. -- Dennis ___ zfs-discuss mailing lis

Re: [zfs-discuss] ZFS Honesty after a power failure

2009-03-24 Thread Dennis Clarke
> Hey, Dennis - > > I can't help but wonder if the failure is a result of zfs itself finding > some problems post restart... Yes, yes, this is what I am feeling also, but I need to find the data also and then I can sleep at night. I am certain that ZFS does not just toss out faults on a whim bec

Re: [zfs-discuss] ZFS Honesty after a power failure

2009-03-24 Thread Dennis Clarke
> On Tue, 24 Mar 2009, Dennis Clarke wrote: >> >> You would think so eh? >> But a transient problem that only occurs after a power failure? > > Transient problems are most common after a power failure or during > initialization. Well the issue here is that power w

Re: [zfs-discuss] ZFS Honesty after a power failure

2009-03-24 Thread Dennis Clarke
> On Tue, 24 Mar 2009, Dennis Clarke wrote: >> >> However, I have repeatedly run into problems when I need to boot after a >> power failure. I see vdevs being marked as FAULTED regardless if there >> are >> actually any hard errors reported by the on disk SMART Fi

[zfs-discuss] ZFS Honesty after a power failure

2009-03-24 Thread Dennis Clarke
c1t1d0s7 ONLINE 0 0 0 errors: No known data errors # fmadm faulty -afg # I do TOTALLY trust that last line that says "No known data errors" which makes me wonder if the Severe FAULTs are for unknown data errors :-) -- Dennis Clarke sig du jour : "An app

[zfs-discuss] Question about zpool create parameter "version"

2009-03-04 Thread Dennis Clarke
mirror c8t2004CFAC0E97d0 c8t202037F859F1d0 \ > mirror c8t2004CFB53F97d0 c8t202037F84044d0 \ > mirror c8t2004CFA3C3F2d0 c8t2004CF2FCE99d0 \ > mirror c8t2004CF9645A8d0 c8t2004CFA3F328d0 \ > mirror c8t202037F812EAd0 c8t2004CF96FF00d0 \ > mir

Re: [zfs-discuss] zpool import minor bug in snv_64a

2007-06-25 Thread Dennis Clarke
> You've tripped over a variant of: > > 6335095 Double-slash on /. pool mount points > > - Eric > oh well .. no points for originality then I guess :-) Thanks ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman

Re: [zfs-discuss] zpool import minor bug in snv_64a

2007-06-25 Thread Dennis Clarke
> On Mon, Jun 25, 2007 at 02:34:21AM -0400, Dennis Clarke wrote: note that it was well after 2 AM for me .. half blind asleep that's my excuse .. I'm sticking to it. :-) >> >> > in /usr/src/cmd/zpool/zpool_main.c : >> > >> >> at line 680

Re: [zfs-discuss] zpool import minor bug in snv_64a

2007-06-24 Thread Dennis Clarke
> in /usr/src/cmd/zpool/zpool_main.c : > at line 680 forwards we can probably check for this scenario : if ( ( altroot != NULL ) && ( altroot[0] != '/') ) { (void) fprintf(stderr, gettext("invalid alternate root '%s': " "must be an absolute path\n"), altroot); nvlist_free(nvroot);

[zfs-discuss] zpool import minor bug in snv_64a

2007-06-24 Thread Dennis Clarke
Not sure if this has been reported or not. This is fairly minor but slightly annoying. After fresh install of snv_64a I run zpool import to find this : # zpool import pool: zfs0 id: 13628474126490956011 state: ONLINE status: The pool is formatted using an older on-disk version. action: T

Re: [zfs-discuss] FYI: X4500 (aka thumper) sale

2007-04-27 Thread Dennis Clarke
On 4/23/07, Richard Elling <[EMAIL PROTECTED]> wrote: FYI, Sun is having a big, 25th Anniversary sale. X4500s are half price -- 24 TBytes for $24k. ZFS runs really well on a X4500. http://www.sun.com/emrkt/25sale/index.jsp?intcmp=tfa5101 I appologize for those not in the US or UK and ca

Re: [zfs-discuss] Re: Re: ZFS disables nfs/server on a host

2007-04-27 Thread Dennis Clarke
On 4/27/07, Ben Miller <[EMAIL PROTECTED]> wrote: I just threw in a truss in the SMF script and rebooted the test system and it failed again. The truss output is at http://www.eecis.udel.edu/~bmiller/zfs.truss-Apr27-2007 324:read(7, 0x000CA00C, 5120) = 0 324:llsee

Re: [zfs-discuss] HowTo: UPS + ZFS & NFS + no fsync

2007-04-26 Thread Dennis Clarke
On 4/26/07, Roch - PAE <[EMAIL PROTECTED]> wrote: You might set zil_disable to 1 (_then_ mount the fs to be shared). But you're still exposed to OS crashes; those would still corrupt your nfs clients. For the love of God do NOT do stuff like that. Just create ZFS on a pile of disks the way t

[zfs-discuss] *** High Praise for ZFS and NFS services ***

2007-04-24 Thread Dennis Clarke
Dear ZFS and OpenSolaris people : I recently upgraded a large NFS server upwards from Solaris 8. This is a production manufacturing facility with football field sized factory floors and 25 tonne steel products. Many on-site engineers on AIX and CATIA as well as Solaris users and Windows and ev

Re: [zfs-discuss] Preferred backup mechanism for ZFS?

2007-04-18 Thread Dennis Clarke
On 4/18/07, J.P. King <[EMAIL PROTECTED]> wrote: > Can we discuss this with a few objectives ? Like define "backup" and > then describe mechanisms that may achieve one? Or a really big > question that I guess I have to ask, do we even care anymore? Personally I think you would benefit from s

Re: [zfs-discuss] Preferred backup mechanism for ZFS?

2007-04-18 Thread Dennis Clarke
On 4/18/07, Nicolas Williams <[EMAIL PROTECTED]> wrote: On Wed, Apr 18, 2007 at 03:47:55PM -0400, Dennis Clarke wrote: > Maybe with a definition of what a "backup" is and then some way to > achieve it. As far as I know the only real backup is one that can be > tossed int

Re: [zfs-discuss] Preferred backup mechanism for ZFS?

2007-04-18 Thread Dennis Clarke
On 4/18/07, Bill Sprouse <[EMAIL PROTECTED]> wrote: It seems that neither Legato nor NetBackup seem to lend themselves well to the notion of lots of file systems within storage pools from an administration perspective. Is there a preferred methodology for doing traditional backups to tape fr

[zfs-discuss] zpool iostat : This command can be tricky ...

2007-04-15 Thread Dennis Clarke
I really need to take a longer look here. /* * zpool iostat [-v] [pool] ... [interval [count]] * * -v Display statistics for individual vdevs * * This command can be tricky because we want to be able to deal with pool . . . I think I may need to deal with a raw option here ?

[zfs-discuss] modify zpool_main.c for raw iostat data

2007-04-15 Thread Dennis Clarke
bort(); /* NOTREACHED */ } The iostat_cbdata struct would need a new int element also : typedef struct iostat_cbdata { zpool_list_t *cb_list; /* * The cb_raw int is added here by Dennis Clarke */ int cb_raw; int cb_verbose; int cb_iterat

Re: [zfs-discuss] Re: Re: update on zfs boot support

2007-03-11 Thread Dennis Clarke
> Robert Milkowski wrote: >> Hello Ivan, >> Sunday, March 11, 2007, 12:01:28 PM, you wrote: >> >> IW> Got it, thanks, and a more general question, in a single disk >> IW> root pool scenario, what advantage zfs will provide over ufs w/ >> IW> logging? And when zfs boot integrated in neveda, will l

Re: [zfs-discuss] Why number of NFS threads jumps to the max value?

2007-02-27 Thread Dennis Clarke
> > You don't honestly, really, reasonably, expect someone, anyone, to look > at the stack well of course he does :-) and I looked at it .. all of it and I can tell exactly what the problem is but I'm not gonna say because its a trick question. so there. Dennis

[zfs-discuss] Re: [osol-help] How to recover from "rm *"?

2007-02-18 Thread Dennis Clarke
> On Sun, 18 Feb 2007, Calvin Liu wrote: > >> I want to run command "rm Dis*" in a folder but mis-typed a space in it >> so it became "rm Dis *". Unfortunately I had pressed the return button >> before I noticed the mistake. So you all know what happened... :( :( :( > > Ouch! > >> How can I get th
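If snapshots of the filesystem exist, the gentlest recovery is from the hidden .zfs directory; a sketch with hypothetical dataset and snapshot names:
$ ls /export/home/calvin/.zfs/snapshot/                         # list the available snapshots
$ cp -p /export/home/calvin/.zfs/snapshot/daily-0218/Dis* .     # copy the deleted files back
Without a snapshot taken before the rm there is nothing in ZFS itself to roll back to.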

Re: Re[2]: [zfs-discuss] 118855-36 & ZFS

2007-02-05 Thread Dennis Clarke
d and you can install them and run them in a very stable fashion long term. Once you add a single patch to that system you have wandered out of "this is shipped on media" to somewhere else. -- Dennis Clarke ___ zfs-discuss m

[zfs-discuss] impressive

2007-02-01 Thread Dennis Clarke
boldly plowing forwards I request a few disks/vdevs to be mirrored all at the same time : bash-3.2# zpool status zfs0 pool: zfs0 state: ONLINE scrub: resilver completed with 0 errors on Thu Feb 1 04:17:58 2007 config: NAME STATE READ WRITE CKSUM zfs0 ONLI

Re: [zfs-discuss] panic with zfs

2007-01-24 Thread Dennis Clarke
> Am 24.1.2007 14:59 Uhr, Dennis Clarke schrieb: > >>> Jan 23 17:25:26 newponit genunix: [ID 408822 kern.info] NOTICE: glm0: >>> fault detected in device; service still available >>> Jan 23 17:25:26 newponit genunix: [ID 611667 kern.info] NOTICE: glm0: >>&

Re: [zfs-discuss] panic with zfs

2007-01-24 Thread Dennis Clarke
> Ihsan Dogan wrote: > >>>I think you hit a major bug in ZFS personally. >> >> For me it also looks like a bug. > > I think we don't have enough information to judge. If you have a supported > version of Solaris, open a case and supply all the data (crash dump!) you > have. I agree we need da

Re: [zfs-discuss] panic with zfs

2007-01-24 Thread Dennis Clarke
> Hello Michael, > > Am 24.1.2007 14:36 Uhr, Michael Schuster schrieb: > >>> -- >>> [EMAIL PROTECTED] # zpool status >>> pool: pool0 >>> state: ONLINE >>> scrub: none requested >>> config: >> >> [...] >> >>> Jan 23 18:51:38 newponit ^

Re: [zfs-discuss] panic with zfs

2007-01-24 Thread Dennis Clarke
> Hello, > > We're setting up a new mailserver infrastructure and decided, to run it > on zfs. On a E220R with a D1000, I've setup a storage pool with four > mirrors: Good morning Ihsan ... I see that you have everything mirrored here, that's excellent. When you pulled a disk, was it a

Re: [zfs-discuss] Re: Heavy writes freezing system

2007-01-17 Thread Dennis Clarke
>> What do you mean by UFS wasn't an option due to >> number of files? > > Exactly that. UFS has a 1 million file limit under Solaris. Each Oracle > Financials environment well exceeds this limitation. > what ? $ uname -a SunOS core 5.10 Generic_118833-17 sun4u sparc SUNW,UltraSPARC-IIi-cEngine

Re: [zfs-discuss] NFS and ZFS, a fine combination

2007-01-08 Thread Dennis Clarke
> Roch - PAE wrote: >> >> Just posted: >> >>http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine > > Nice article. Now what about when we do this with more than one disk > and compare UFS/SVM or VxFS/VxVM with ZFS as the back end - all with > JBOD storage ? > > How then does ZFS compare as

Re: [zfs-discuss] NFS and ZFS, a fine combination

2007-01-08 Thread Dennis Clarke
> On Mon, Jan 08, 2007 at 03:47:31PM +0100, Peter Schuller wrote: >> > http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine >> >> So just to confirm; disabling the zil *ONLY* breaks the semantics of >> fsync() >> and synchronous writes from the application perspective; it will do >> *NOTHING* >

Re: [zfs-discuss] NFS and ZFS, a fine combination

2007-01-08 Thread Dennis Clarke
2. A severe test, as of patience or belief; a trial. [ Dennis Clarke [EMAIL PROTECTED] ] * TEST 1 ) file write. Building file structure at /export/nfs/local_test/ This test will create 62^3 = 238328 files o

Re: [zfs-discuss] HOWTO make a mirror after the fact

2007-01-07 Thread Dennis Clarke
>> Note that "attach" has no option for -n which would just show me the >> damage I am about to do :-( > > In general, ZFS does a lot of checking before committing a change to the > configuration. We make sure that you don't do things like use disks > that are already in use, partitions aren't ove

[zfs-discuss] HOWTO make a mirror after the fact

2007-01-07 Thread Dennis Clarke
zpool other than tar to a DLT. The last thing I want to do is destroy my data when I am trying to add redundency. Any thoughts ? -- Dennis Clarke ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] ZFS over NFS extra slow?

2007-01-02 Thread Dennis Clarke
> Another thing to keep an eye out for is disk caching. With ZFS, > whenever the NFS server tells us to make sure something is on disk, we > actually make sure it's on disk by asking the drive to flush dirty data > in its write cache out to the media. Needless to say, this takes a > while. > > W

Re: [zfs-discuss] Re: Re[2]: ZFS in a SAN environment

2006-12-20 Thread Dennis Clarke
ware bugs. but it does imply that the software is way better than the hardware eh ? -- Dennis Clarke ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Re: Re[2]: ZFS in a SAN environment

2006-12-19 Thread Dennis Clarke
> Anton B. Rang wrote: >>> "INFORMATION: If a member of this striped zpool becomes unavailable or >>> develops corruption, Solaris will kernel panic and reboot to protect your >>> data." >>> >> >> OK, I'm puzzled. >> >> Am I the only one on this list who believes that a kernel panic, instead >> of

Re: [zfs-discuss] Re: bare metal ZFS ? How To ?

2006-11-24 Thread Dennis Clarke
easily with any built in tools in the SXCR these days. There is already an RFE filed on that but I think its low priority. You can recover a zpool easily enough with zpool import but if you ever lose a few disks or some disaster hits then you had better have Veritas NetBackup or similar in place.
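A hedged sketch of that recovery path, booted from install media, with a hypothetical pool name:
# zpool import                  # scan the attached disks for importable pools
# zpool import -f -R /a zfs0    # force-import under an alternate root so nothing mounts over the miniroot
# zfs list -r zfs0              # from here the filesystems can be backed up with tar, star, etc.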

Re: [zfs-discuss] Re: bare metal ZFS ? How To ?

2006-11-24 Thread Dennis Clarke
> Dennis, > i'm not sure if this will help you, but i had something similar happen and > was able to get my zpool back. > > i decided to install (not upgrade) Nevada snv-51 which was the current build > at the time. I had (and thankfully still have) a zpool which i'd created > under snv-37 on a se

Re: [zfs-discuss] bare metal ZFS ? How To ?

2006-11-23 Thread Dennis Clarke
> On 11/23/06, James Dickens <[EMAIL PROTECTED]> wrote: >> On 11/23/06, Dennis Clarke <[EMAIL PROTECTED]> wrote: >> > >> > assume worst case >> > >> > someone walks up to you and drops an array on you. >> They say "its

[zfs-discuss] bare metal ZFS ? How To ?

2006-11-23 Thread Dennis Clarke
s "noot boot" shell? Is there any way to backup those ZFS filesystems while booted from CDROM/DVD or boot net ? Essentially, if I had nothing but bare metal here and a tape drive can I access the zpool that resides on six 36GB disks on controller 2 or am

Re: Re: [zfs-discuss] poor NFS/ZFS performance

2006-11-22 Thread Dennis Clarke
Have a gander below : > Agreed - it sucks - especially for small file use. Here's a 5,000 ft view > of the performance while unzipping and extracting a tar archive. First > the test is run on a SPARC 280R running Build 51a with dual 900MHz USIII > CPUs and 4Gb of RAM: > > $ cp emacs-21.4a.tar.g
