[zfs-discuss] iSCSI target zfs to SATA zfs

2009-11-20 Thread Michael
Hey guys, I have a zpool built from iSCSI targets from several machines at present. I'm considering buying a 16-port SATA controller and putting all the drives into one machine. If I remove all the drives from the machines offering the iSCSI targets and place them into the one machine, connected via

[zfs-discuss] deduplication requirements

2011-02-07 Thread Michael
Hi guys, I'm currently running 2 zpools, each in a raidz1 configuration, totaling around 16TB of usable data. I'm running it all on an OpenSolaris-based box with 2GB of memory and an old Athlon 64 3700 CPU. I understand this is very poor and underpowered for deduplication, so I'm looking at building a new
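For context, a rough back-of-envelope DDT sizing, assuming the default 128K recordsize and the often-quoted ballpark of a few hundred bytes of RAM per dedup-table entry (both figures are assumptions, not measurements):

   16 TB / 128 KB              ~= 130 million unique blocks
   130 million * ~320 bytes    ~= 40 GB of dedup table

With 2GB of RAM the DDT would live almost entirely on disk, so every deduped write turns into extra random reads; much more RAM plus an L2ARC SSD is the usual recommendation.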

Re: [zfs-discuss] zfs incremental send stream size

2009-08-18 Thread michael
Is there perhaps a workaround for this? A way to condense the free blocks information? If not, any idea when an improvement might be implemented? We are currently suffering from incremental snapshots that refer to zero new blocks, but where incremental snapshots required over a gigabyte even

Re: [zfs-discuss] New zfs pr0n server :)))

2007-08-29 Thread michael
do either of you know the current story about this card? i can't get it to work at all in solaris 10, but i'm very new to the OS. thanks! This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.

Re: [zfs-discuss] New zfs pr0n server :)))

2007-09-01 Thread michael
to clarify: i mean the promise sata300 tx4. This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] ZFS 60 second pause times to read 1K

2007-10-09 Thread Michael
Excellent. Oct 9 13:36:01 zeta1 scsi: [ID 107833 kern.warning] WARNING: /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (sd13): Oct 9 13:36:01 zeta1 Error for Command: read  Error Level: Retryable Scrubbing now. Big thanks gg

[zfs-discuss] zpool status backwards scrub progress on when using iostat

2007-10-09 Thread Michael
I am using an x4500 with a single "4*(raidz2 9+2) + 2 spare" pool. I have some bad blocks on one of the disks. Oct 9 13:36:01 zeta1 scsi: [ID 107833 kern.warning] WARNING: /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (sd13): Oct 9 13:36:01 zeta1 Error f

Re: [zfs-discuss] ZFS 60 second pause times to read 1K

2007-10-10 Thread Michael
Thanks. Looks like I have this bug. Is it a hardware problem combined with a software problem? Oct 9 09:35:43 zeta1 sata: [ID 801593 kern.notice] NOTICE: /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]: Oct 9 09:35:43 zeta1 port 3: device reset Oct 9 09:35:43 zeta1 s

Re: [zfs-discuss] ZFS 60 second pause times to read 1K

2007-10-13 Thread Michael
I've got the box on eval, and I'm just putting it through its paces. Ideally I would be replicating to another x4500, but I don't have another one and didn't want to use 22 disks for another pool. This message posted from opensolaris.org ___ zfs-discuss ma

Re: [zfs-discuss] X4500 device disconnect problem persists

2007-10-28 Thread Michael
I got it too. It's a brand new x4500 (my 2nd eval box, after the other one used to freeze up). I got this while running a java program that reads a 128G file while writing a 100G file, in 2 threads with 128K blocks. Oct 29 00:56:28 zeta1 marvell88sx: [ID 670675 kern.info] NOTICE: marvell88

Re: [zfs-discuss] Hardware for high-end ZFS NAS file server - 2010 March edition

2010-03-04 Thread Michael Shadle
On Thu, Mar 4, 2010 at 4:12 AM, Thomas Burgess wrote: > I got a norco 4020 (the 4220 is good too) > > Both of those cost around 300-350 dollars.  That is a 4u case with 20 hot > swap bays. Typically rackmounts are not designed for quiet. He said quietness is #2 in his priorities... Or does the N

Re: [zfs-discuss] Hardware for high-end ZFS NAS file server - 2010 March edition

2010-03-04 Thread Michael Shadle
> It's very nice. > > > On Thu, Mar 4, 2010 at 3:03 PM, Michael Shadle wrote: >> >> On Thu, Mar 4, 2010 at 4:12 AM, Thomas Burgess wrote: >> >> > I got a norco 4020 (the 4220 is good too) >> > >> > Both of those cost around 300-350 dol

Re: [zfs-discuss] ZFS for my home RAID? Or Linux Software RAID?

2010-03-07 Thread Michael Shadle
On Sun, Mar 7, 2010 at 6:09 PM, Slack-Moehrle wrote: > OpenSolaris or FreeBSD with ZFS? zfs for sure. it's nice having something bitrot-resistant. it was designed with data integrity in mind. ___ zfs-discuss mailing list zfs-discuss@opensolaris.org ht

[zfs-discuss] Possible newbie question about space between zpool and zfs file systems

2010-03-15 Thread Michael Hassey
Sorry if this is too basic - so I have a single zpool in addition to the rpool, called xpool. NAME SIZE USED AVAIL CAP HEALTH ALTROOT rpool 136G 109G 27.5G 79% ONLINE - xpool 408G 171G 237G 42% ONLINE - I have 408 GB in the pool and am using 171 GB, leaving me 237 GB. The

Re: [zfs-discuss] Possible newbie question about space between zpool and zf

2010-03-15 Thread Michael Hassey
That solved it. Thank you Cindy. Zpool list NOT reporting raidz overhead is what threw me... Thanks again. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listin
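For anyone hitting the same confusion: zpool list reports raw vdev capacity (raidz parity included), while zfs list reports usable space after parity. A quick way to see both, using the pool name from the post above:

   zpool list xpool                      # raw capacity, parity counted
   zfs list -o name,used,avail xpool     # usable space, parity excluded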

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Michael DeMan
By the way, I would like to chip in about how informative this thread has been, at least for me, despite (and actually because of) the strong opinions on some of the posts about the issues involved. From what I gather, there is still an interesting failure possibility with ZFS, although prob

[zfs-discuss] newbie, WAS: Re: SSD best practices

2010-04-19 Thread Michael DeMan
Also, pardon my typos, and my lack of re-titling my subject to note that it is a fork from the original topic. Corrections in text that I noticed after finally sorting out getting on the mailing list are below... On Apr 19, 2010, at 3:26 AM, Michael DeMan wrote: > By the way, > >

Re: [zfs-discuss] newbie, WAS: Re: SSD best practices

2010-04-19 Thread Michael DeMan
In all honesty, I haven't done much at sysadmin level with Solaris since it was SunOS 5.2. I found ZFS after becoming concerned with reliability of traditional RAID5 and RAID6 systems once drives exceeded 500GB. I have a few months running ZFS on FreeBSD lately on a test/augmentation basis wit

Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-19 Thread Michael Schuster
whereas answers regarding questions like the one you're floating come from the marketing/management side of the house. The best chance for you to find out about this is to talk to your Oracle sales rep. Michael -- michael.schus...@oracle.com Recursion, n.: see 'Recursion' __

Re: [zfs-discuss] Making ZFS better: rm files/directories from snapshots

2010-04-20 Thread Michael Bosch
ndependent clones and sharing / moving between filesystems? Michael Bosch -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-23 Thread Michael Sullivan
, Mike --- Michael Sullivan michael.p.sulli...@me.com http://www.kamiogi.net/ Japan Mobile: +81-80-3202-2599 US Phone: +1-561-283-2034 On 23 Apr 2010, at 10:22 , BM wrote: > On Tue, Apr 20, 2010 at 2:18 PM, Ken Gunderson wrote: >> Greetings All: >> >> G

Re: [zfs-discuss] Mapping inode numbers to file names

2010-04-28 Thread Michael Schuster
- consider hard links. (and sorry for not answering sooner, this obvious one didn't occur to me earlier). Michael -- michael.schus...@oracle.com http://blogs.sun.com/recursion Recursion, n.: see 'Recursion' ___ zfs-discuss mailing li

[zfs-discuss] Exporting iSCSI - it's still getting all the ZFS protection, right?

2010-05-03 Thread Michael Shadle
Quick sanity check here. I created a zvol and exported it via iSCSI to a Windows machine so Windows could use it as a block device. Windows formats it as NTFS, thinks it's a local disk, yadda yadda. Is ZFS doing its magic checksumming and whatnot on this share, even though it is seeing junk data
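For reference, a minimal sketch of the kind of setup being described (dataset name and size are hypothetical; older builds used the shareiscsi property, newer builds use COMSTAR). Either way the zvol's blocks are ordinary ZFS blocks, so checksums, redundancy and scrubs all apply to whatever NTFS writes into it:

   zfs create -V 200G tank/winvol              # hypothetical zvol backing the Windows LUN
   zfs set shareiscsi=on tank/winvol           # legacy iSCSI target daemon
   # or, with COMSTAR:
   sbdadm create-lu /dev/zvol/rdsk/tank/winvol
   stmfadm add-view <lu-guid>
   itadm create-target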

[zfs-discuss] b134 pool borked!

2010-05-04 Thread Michael Mattsson
might be of use. I suspect it has something to do with the DDT table. Best Regards Michael zdb output: rpool: version: 22 name: 'rpool' state: 0 txg: 10643295 pool_guid: 16751367988873007995 hostid: 13336047 hostname: '' vdev_children: 1 vdev_

[zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-04 Thread Michael Sullivan
m to find an answer. Mike --- Michael Sullivan michael.p.sulli...@me.com http://www.kamiogi.net/ Japan Mobile: +81-80-3202-2599 US Phone: +1-561-283-2034 ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.ope

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-04 Thread Michael Sullivan
Ok, thanks. So, if I understand correctly, it will just remove the device from the VDEV and continue to use the good ones in the stripe. Mike --- Michael Sullivan michael.p.sulli...@me.com http://www.kamiogi.net/ Japan Mobile: +81-80-3202-2599 US Phone: +1-561-283-2034 On 5

Re: [zfs-discuss] b134 pool borked!

2010-05-04 Thread Michael Mattsson
90 reads and not a single comment? Not the slightest hint of what's going on? -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] b134 pool borked!

2010-05-05 Thread Michael Mattsson
This is what my zpool import output looks like; attached you'll find the output of zdb -l for each device. pool: tank id: 10904371515657913150 state: ONLINE action: The pool can be imported using its name or numeric identifier. config: tank ONLINE raidz1-0 ONLIN

Re: [zfs-discuss] b134 pool borked!

2010-05-05 Thread Michael Mattsson
Thanks for your reply! I ran memtest86 and it did not report any errors. The disk controller I've not replaced yet. The server is up in multi-user mode with the broken pool in an un-imported state. Format now works and properly lists all my devices without panicking. zpool import panics the b

Re: [zfs-discuss] b134 pool borked!

2010-05-05 Thread Michael Mattsson
I got a suggestion to check what fmdump -eV gave to look for PCI errors if the controller might be broken. Attached you'll find the last panic's fmdump -eV. It indicates that ZFS can't open the drives. That might suggest a broken controller, but my slog is on the motherboard's internal controll
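In case it helps anyone searching later, the usual triage for an import that panics on b128 and later looks roughly like this (device name is hypothetical; -F rewinds the pool to an earlier txg and can discard the last few seconds of writes):

   zdb -l /dev/rdsk/c7t0d0s0        # check all four labels on each member device
   zpool import -nF tank            # dry run: report whether a rewind would succeed
   zpool import -fF tank            # actually attempt the rewind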

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-05 Thread Michael Sullivan
Hi Ed, Thanks for your answers. Seems to make sense, sort of… On 6 May 2010, at 12:21 , Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> boun...@opensolaris.org] On Behalf Of Michael Sullivan >> >> I have a question I canno

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-05 Thread Michael Sullivan
On 6 May 2010, at 13:18 , Edward Ned Harvey wrote: >> From: Michael Sullivan [mailto:michael.p.sulli...@mac.com] >> >> While it explains how to implement these, there is no information >> regarding failure of a device in a striped L2ARC set of SSD's. I have > &

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Michael Sullivan
anything to come close in its approach to disk data management. Let's just hope it keeps moving forward, it is truly a unique way to view disk storage. Anyway, sorry for the ramble, but to everyone, thanks again for the answers. Mike --- Michael Sullivan michael.p.sulli...@m

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Michael Sullivan
ead a block from more devices simultaneously, it will cut the latency of the overall read. On 7 May 2010, at 02:57 , Marc Nicholas wrote: > Hi Michael, > > What makes you think striping the SSDs would be faster than round-robin? > > -marc > > On Thu, May 6, 2010 at 1:09 PM,

Re: [zfs-discuss] why both dedup and compression?

2010-05-06 Thread Michael Sullivan
rks really well. > > -- > -Peter Tribble > http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/ > ___ > zfs-discuss mailing list > zfs-discuss@opensolaris.org > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss Mi

Re: [zfs-discuss] osol monitoring question

2010-05-10 Thread Michael Schuster
standard monitoring tools? If not, what other tools exist that can do the same? "zpool iostat" for one. Michael -- michael.schus...@oracle.com http://blogs.sun.com/recursion Recursion, n.: see 'Recursion' ___ zfs-discuss mail

Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-11 Thread Michael DeMan
I agree on the motherboard and peripheral chipset issue. This, and the last generation AMD quad/six core motherboards all seem to use the AMD SP56x0/SP5100 chipset, which I can't find much information about support on for either OpenSolaris or FreeBSD. Another issue is the LSI SAS2008 chipset f

[zfs-discuss] zpool replace lockup / replace process now stalled, how to fix?

2010-05-17 Thread Michael Donaghy
er a proper replace of the failed partitions? Many thanks, Michael ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] zfs mount -a kernel panic

2010-05-19 Thread Michael Schuster
On 19.05.10 17:53, John Andrunas wrote: Not to my knowledge, how would I go about getting one? (CC'ing discuss) man savecore and dumpadm. Michael On Wed, May 19, 2010 at 8:46 AM, Mark J Musante wrote: Do you have a coredump? Or a stack trace of the panic? On Wed, 19 May 2010,
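For completeness, the usual sequence for getting a kernel crash dump to look at (paths shown are the defaults, not taken from the post):

   dumpadm                  # confirm a dump device is configured and savecore is enabled
   savecore -L              # or capture a live dump of the running kernel without a panic
   # after an actual panic and reboot, savecore writes unix.N/vmcore.N under /var/crash/<hostname>
   mdb unix.0 vmcore.0      # then ::status and $C give the panic string and stack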

Re: [zfs-discuss] zpool replace lockup / replace process now stalled, how to fix?

2010-05-21 Thread Michael Donaghy
For the record, in case anyone else experiences this behaviour: I tried various things which failed, and finally as a last ditch effort, upgraded my freebsd, giving me zpool v14 rather than v13 - and now it's resilvering as it should. Michael On Monday 17 May 2010 09:26:23 Michael Do

Re: [zfs-discuss] Native ZFS for Linux

2010-06-11 Thread Michael Shadle
On Fri, Jun 11, 2010 at 2:50 AM, Alex Blewitt wrote: > You are sadly mistaken. > > From GNU.org on license compatibilities: > > http://www.gnu.org/licenses/license-list.html > >        Common Development and Distribution License (CDDL), version 1.0 >        This is a free software license. It has

Re: [zfs-discuss] b134 pool borked!

2010-06-30 Thread Michael Mattsson
Just in case any stray searches finds it way here, this is what happened to my pool: http://phrenetic.to/zfs -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinf

[zfs-discuss] Consequences of resilvering failure

2010-07-06 Thread Michael Johnson
detected in the middle of resilvering.) I will of course have a backup of the pool, but I may opt for additional backup if the entire pool could be lost due to data corruption (as opposed to just a few files potentially being lost). Thanks, Michael [1] http://dlc.sun.com/osol/docs/co

[zfs-discuss] Encryption?

2010-07-10 Thread Michael Johnson
storing backups of my personal files on this), so if there's a chance that ZFS wouldn't handle errors well when on top of encryption, I'll just go without it. Thanks, Michael ___ zfs-discuss mailing list zfs-discuss@opensolar

Re: [zfs-discuss] Encryption?

2010-07-11 Thread Michael Johnson
on 11/07/2010 15:54 Andriy Gapon said the following: >on 11/07/2010 14:21 Roy Sigurd Karlsbakk said the following: >> >> I'm planning on running FreeBSD in VirtualBox (with a Linux host) >> and giving it raw disk access to four drives, which I plan to >> configure as a raidz2 volume.

Re: [zfs-discuss] Encryption?

2010-07-12 Thread Michael Johnson
Nikola M wrote: >Freddie Cash wrote: >> You definitely want to do the ZFS bits from within FreeBSD. >Why not using ZFS in OpenSolaris? At least it has most stable/tested >implementation and also the newest one if needed? I'd love to use OpenSolaris for exactly those reasons, but I'm wary of using

Re: [zfs-discuss] Encryption?

2010-07-12 Thread Michael Johnson
saying that you employ enough kernel hackers to keep up even without Oracle? (I am admittedly ignorant about the OpenSolaris developer community; this is all based on others' statements and opinions that I've read.) Michael ___

[zfs-discuss] Recommended RAM for ZFS on various platforms

2010-07-16 Thread Michael Johnson
I just don't need more than 1 TB of available storage right now, or for the next several years.) This is on an AMD64 system, and the OS in question will be running inside of VirtualBox, with raw access to the drives. Thanks, Michael

Re: [zfs-discuss] Recommended RAM for ZFS on various platforms

2010-07-16 Thread Michael Johnson
Garrett D'Amore wrote: >On Fri, 2010-07-16 at 10:24 -0700, Michael Johnson wrote: >> I'm currently planning on running FreeBSD with ZFS, but I wanted to >>double-check >> how much memory I'd need for it to be stable. The ZFS wiki currently says >you >

Re: [zfs-discuss] Help identify failed drive

2010-07-19 Thread Michael Shadle
On Mon, Jul 19, 2010 at 3:11 PM, Haudy Kazemi wrote: > ' iostat -Eni ' indeed outputs Device ID on some of the drives,but I still > can't understand how it helps me to identify model of specific drive. Curious: [r...@nas01 ~]# zpool status -x pool: tank state: DEGRADED status: One or more de

Re: [zfs-discuss] Help identify failed drive

2010-07-19 Thread Michael Shadle
On Mon, Jul 19, 2010 at 4:16 PM, Marty Scholes wrote: > Start a scrub or do an obscure find, e.g. "find /tank_mountpoint -name core" > and watch the drive activity lights.  The drive in the pool which isn't > blinking like crazy is a faulted/offlined drive. > > Ugly and oh-so-hackerish, but it

Re: [zfs-discuss] Help identify failed drive

2010-07-19 Thread Michael Shadle
On Mon, Jul 19, 2010 at 4:16 PM, Marty Scholes wrote: > Start a scrub or do an obscure find, e.g. "find /tank_mountpoint -name core" > and watch the drive activity lights.  The drive in the pool which isn't > blinking like crazy is a faulted/offlined drive. Actually I guess my real question is

Re: [zfs-discuss] Help identify failed drive

2010-07-19 Thread Michael Shadle
On Mon, Jul 19, 2010 at 4:26 PM, Richard Elling wrote: > Aren't you assuming the I/O error comes from the drive? > fmdump -eV okay - I guess I am. Is this just telling me "hey stupid, a checksum failed"? In which case, why did this never resolve itself and the specific device get marked as degra

Re: [zfs-discuss] Help identify failed drive

2010-07-19 Thread Michael Shadle
On Mon, Jul 19, 2010 at 4:35 PM, Richard Elling wrote: > It depends on whether the problem was fixed or not.  What says >        zpool status -xv > >  -- richard [r...@nas01 ~]# zpool status -xv pool: tank state: DEGRADED status: One or more devices has experienced an unrecoverable error. An
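For anyone else chasing a flaky disk, the commands that came up in this thread plus the usual extras (the device name in the dd line is hypothetical):

   zpool status -xv                 # which vdev is degraded, and whether the errors are read/write/cksum
   fmdump -eV | less                # underlying FMA ereports: transport/driver faults vs checksum errors
   iostat -En                       # per-device error counters plus vendor, model and serial number
   dd if=/dev/rdsk/c5t3d0s0 of=/dev/null bs=1M &   # light up one drive's LED to find it physically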

Re: [zfs-discuss] core dumps eating space in snapshots

2010-07-27 Thread Michael Schuster
them is to destroy snapshots. Or have I still misunderstood the question? yes, I think so. Here's how I read it: the snapshots contain lots more than the core files, and OP wants to remove only the core files (I'm assuming they weren't discovered before the snapshot

Re: [zfs-discuss] ZFS p[erformance drop with new Xeon 55xx and 56xx cpus

2010-08-11 Thread michael schuster
- provide measurements (lockstat, iostat, maybe some DTrace) before and during test, add some timestamps so people can correlate data to events. - anything else you can think of that might be relevant. HTH Michael ___ zfs-discuss mailing list z

[zfs-discuss] Degraded Pool, Spontaneous Reboots

2010-08-12 Thread Michael Anderson
Hello, I've been getting warnings that my zfs pool is degraded. At first it was complaining about a few corrupt files, which were listed as hex numbers instead of filenames, i.e. VOL1:<0x0> After a scrub, a couple of the filenames appeared - turns out they were in snapshots I don't really nee

Re: [zfs-discuss] Comments on home OpenSolaris/ZFS server

2009-09-28 Thread Michael Shadle
This seems like you're doing an awful lot of planning for only 8 SATA + 4 SAS bays? I agree - SOHO usage of ZFS is still a scary "will this work?" deal. I found a working setup and I cloned it. It gives me 16x SATA + 2x SATA for mirrored boot, 4GB ECC RAM and a quad core processor - total cost wit

Re: [zfs-discuss] Comments on home OpenSolaris/ZFS server

2009-09-28 Thread Michael Shadle
Yeah - give me a bit to rope together the parts list and double check it, and I will post it on my blog. On Mon, Sep 28, 2009 at 2:34 PM, Ware Adams wrote: > On Sep 28, 2009, at 4:20 PM, Michael Shadle wrote: > >> I agree - SOHO usage of ZFS is still a scary "will this work?"

Re: [zfs-discuss] Comments on home OpenSolaris/ZFS server

2009-09-28 Thread Michael Shadle
rackmount chassis aren't usually designed with acoustics in mind :) however i might be getting my closet fitted so i can put half a rack in. might switch up my configuration to rack stuff soon. On Mon, Sep 28, 2009 at 3:04 PM, Thomas Burgess wrote: > personally i like this case: > > > http://www

Re: [zfs-discuss] Comments on home OpenSolaris/ZFS server

2009-09-28 Thread Michael Shadle
it's got 4 fans but they are > really big and don't make nearly as much noise as you'd think.  honestly, > it's not bad at all.  I know someone who sits it vertically as well, > honestly, it's a good case for the money > > > On Mon, Sep 28, 2009 at 6:06 PM, Micha

Re: [zfs-discuss] Comments on home OpenSolaris/ZFS server

2009-09-30 Thread Michael Shadle
i looked at possibly doing one of those too - but only 5 disks was too small for me. and i was too nervous about compatibility with mini-itx stuff. On Wed, Sep 30, 2009 at 6:22 PM, Jorgen Lundman wrote: > > I too went with a 5in3 case for HDDs, in a nice portable Mini-ITX case, with > Intel Atom.

Re: [zfs-discuss] mounting rootpool

2009-10-01 Thread Michael Schuster
uld be the first step for you. HTH -- Michael Schuster http://blogs.sun.com/recursion Recursion, n.: see 'Recursion' ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] mounting rootpool

2009-10-01 Thread Michael Schuster
On 01.10.09 08:25, camps support wrote: I did zpool import -R /tmp/z rootpool It only mounted /export and /rootpool only had /boot and /platform. I need to be able to get /etc and /var? zfs set mountpoint ... zfs mount -- Michael Schuster http://blogs.sun.com/recursion Recursion, n

Re: [zfs-discuss] million files in single directory

2009-10-03 Thread michael schuster
, but check the man-page). "echo * | wc" is also a way to find out what's in a directory, but you'll miss "."files, and the shell you're using may have an influence .. HTH Michael -- Michael Schuster http://blogs.sun.com/recursion Recursion, n.: see &#x
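A couple of concrete ways to count (or at least enumerate) a huge directory without the sort that makes plain ls crawl; both are generic suggestions, not from the post:

   ls -f | wc -l                                                     # -f skips the sort
   perl -e 'opendir(D,"."); $n++ while readdir(D); print "$n\n"'     # readdir directly, no sort, no stat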

Re: [zfs-discuss] compressed fs taking up more space than uncompressed equivalent

2009-10-22 Thread michael schuster
). How's that possible ? just a few thoughts: - how do you measure how much space your data consumes? - how do you copy? - is the other FS also ZFS? Michael -- Michael Schuster http://blogs.sun.com/recursion Recursion, n.: see 'Recursion'

Re: [zfs-discuss] compressed fs taking up more space than uncompressed equivalent

2009-10-22 Thread michael schuster
Stathis Kamperis wrote: 2009/10/23 michael schuster : Stathis Kamperis wrote: Salute. I have a filesystem where I store various source repositories (cvs + git). I have compression enabled on and zfs get compressratio reports 1.46x. When I copy all the stuff to another filesystem without

[zfs-discuss] can't delete a zpool

2009-11-09 Thread Michael Barrett
OpenSolaris 2009.06 I have a ST2540 Fiber Array directly attached to a X4150. There is a zpool on the fiber device. The zpool went into a faulted state, but I can't seem to get it back via scrub or even delete it? Do I have to re-install the entire OS if I want to use that device again? T

[zfs-discuss] upgrading to the latest zfs version

2009-11-18 Thread Michael Armstrong
Hi guys, after reading the mailings yesterday I noticed someone was after upgrading to zfs v21 (deduplication). I'm after the same; I installed osol-dev-127 earlier, which comes with v19, and then followed the instructions on http://pkg.opensolaris.org/dev/en/index.shtml to bring my system up to da
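The usual sequence once the dev repository is configured (standard commands, but note that upgrading pools is one-way, so only do it once the new bits boot cleanly):

   pkg image-update        # pull a build whose zfs module supports the newer versions
   # reboot into the new boot environment, then:
   zpool upgrade -v        # list pool versions the running kernel supports
   zpool upgrade -a        # upgrade every pool (irreversible)
   zfs upgrade -a          # upgrade filesystem versions too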

Re: [zfs-discuss] Dedup question

2009-11-23 Thread Michael Schuster
is this about a different FS *on top of* zpools/zvols? If so, I'll have to defer to Team ZFS. HTH Michael -- Michael Schuster http://blogs.sun.com/recursion Recursion, n.: see 'Recursion' ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] ZFS dedup clarification

2009-11-27 Thread Michael Schuster
} && zfs get used rpool/export/home && cp /testfile /export/home/d${i}; done as far as I understood it, the dedup works during writing, and won't deduplicate already written data (this is planned for a later release). isn't he doing just that (writing, that
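Since dedup only applies to blocks written after it is turned on, the way to dedupe existing data is to rewrite it, e.g. with a local send/receive (dataset names below are hypothetical):

   zfs set dedup=on tank/data
   zfs snapshot tank/data@rewrite
   zfs send tank/data@rewrite | zfs recv tank/data-dd   # rewritten blocks go through the DDT
   zpool get dedupratio tank                            # check the resulting ratio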

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Michael Schuster
have been compressed using compress. It is the equivalent of uncompress -c. Input files are not affected. :-) cheers Michael -- Michael Schuster http://blogs.sun.com/recursion Recursion, n.: see 'Recursion' ___ zfs-dis

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread michael schuster
all it 'ln' ;-) and that even works on ufs. Michael -- Michael Schuster http://blogs.sun.com/recursion Recursion, n.: see 'Recursion' ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread michael schuster
ntics of ZFS. I actually was thinking of creating a hard link (without the -s option), but your point is valid for hard and soft links. cheers Michael -- Michael Schuster http://blogs.sun.com/recursion Recursion, n.: see 'Recursion'

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Michael Schuster
files to end up in one file; the traditional concat operation will cause all the data to be read and written back, at which point dedup will kick in, and so most of the processing has already been spent. (Per, please correct/comment) Michael -- Michael Schuster http://blogs.sun.com/recur

[zfs-discuss] Zpool problems

2009-12-07 Thread Michael Armstrong
Hi, I'm using zfs version 6 on Mac OS X 10.5 using the old macosforge pkg. When I'm writing files to the fs they appear as 1kb files, and if I do zpool status or scrub or anything the command just hangs. However I can still read the zpool OK; just writes are having problems, and any

Re: [zfs-discuss] Deduplication - deleting the original

2009-12-08 Thread Michael Schuster
, there is no "original" and "copy"; rather, every directory entry points to "the data" (the inode, in ufs-speak), and if one directory entry of several is deleted, only the reference count changes. It's probably a little more complicated with dedup, but I think

Re: [zfs-discuss] ZFS pool unusable after attempting to destroy a dataset with dedup enabled

2009-12-08 Thread Michael Herf
Am in the same boat, exactly. Destroyed a large set and rebooted, with a scrub running on the same pool. My reboot stuck on "Reading ZFS Config: *" for several hours (disks were active). I cleared the zpool.cache from single-user and am doing an import (can boot again). I wasn't able to boot m

Re: [zfs-discuss] ZFS pool unusable after attempting to destroy a dataset with dedup enabled

2009-12-09 Thread Michael Herf
zpool import done! Back online. Total downtime for 4TB pool was about 8 hours, don't know how much of this was completing the destroy transaction. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2009-12-12 Thread Michael Herf
Most manufacturers have a utility available that sets this behavior. For WD drives, it's called WDTLER.EXE. You have to make a bootable USB stick to run the app, but it is simple to change the setting to the enterprise behavior. -- This message posted from opensolaris.org ___

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2009-12-13 Thread Michael Herf
Note you don't get the better vibration control and other improvements the enterprise drives have. So it's not exactly that easy. :) -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensol

Re: [zfs-discuss] zpool fragmentation issues?

2009-12-15 Thread Michael Herf
I have also had slow scrubbing on filesystems with lots of files, and I agree that it does seem to degrade badly. For me, it seemed to go from 24 hours to 72 hours in a matter of a few weeks. I did these things on a pool in-place, which helped a lot (no rebuilding): 1. reduced number of snapshots
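The in-place tweaks mentioned are all one-liners; a hedged sketch with hypothetical pool and snapshot names:

   zfs set atime=off tank                             # no metadata update on every file read
   zfs list -t snapshot -o name -s creation -r tank   # find the old snapshots worth pruning
   zfs destroy tank/fs@2009-06-01                     # hypothetical snapshot to drop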

Re: [zfs-discuss] zfs send is very slow

2009-12-16 Thread Michael Herf
Mine is similar (4-disk RAIDZ1) - send/recv with dedup on: <4MB/sec - send/recv with dedup off: ~80M/sec - send > /dev/null: ~200MB/sec. I know dedup can save some disk bandwidth on write, but it shouldn't save much read bandwidth (so I think these numbers are right). There's a warning in a Je

Re: [zfs-discuss] zfs send is very slow

2009-12-17 Thread Michael Herf
I have observed the opposite, and I believe that all writes are slow to my dedup'd pool. I used local rsync (no ssh) for one of my migrations (so it was restartable, as it took *4 days*), and the writes were slow just like zfs recv. I have not seen fast writes of real data to the deduped volume,

Re: [zfs-discuss] zfs send is very slow

2009-12-17 Thread Michael Herf
My ARC is ~3GB. I'm doing a test that copies 10GB of data to a volume where the blocks should dedupe 100% with existing data. The first time, the test runs at <5MB/sec and seems to average a 10-30% ARC *miss* rate, at <400 ARC reads/sec. When things are working at disk bandwidth, I'm getting 3-5% ARC misse

Re: [zfs-discuss] raidz data loss stories?

2009-12-21 Thread Michael Herf
Anyone who's lost data this way: were you doing weekly scrubs, or did you find out about the simultaneous failures after not touching the bits for months? mike ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/l

Re: [zfs-discuss] Troubleshooting dedup performance

2009-12-23 Thread Michael Herf
For me, arcstat.pl is a slam-dunk predictor of dedup throughput. If my "miss%" is in the single digits, dedup write speeds are reasonable. When the arc misses go way up, dedup writes get very slow. So my guess is that this issue depends entirely on whether or not the DDT is in RAM. I don't h
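Two easy ways to watch this while a dedup write or recv is running (arcstat.pl is the script mentioned in the post; kstat is the raw source it reads):

   ./arcstat.pl 5                                   # miss% column, sampled every 5 seconds
   kstat -n arcstats | egrep 'hits|misses|size'     # raw ARC counters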

Re: [zfs-discuss] Troubleshooting dedup performance

2009-12-24 Thread Michael Herf
FWIW, I just disabled prefetch, and my dedup + zfs recv seems to be running visibly faster (somewhere around 3-5x faster). echo zfs_prefetch_disable/W0t1 | mdb -kw Anyone else see a result like this? I'm using the "read" bandwidth from the sending pool from "zpool iostat -x 5" to estimate transf
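For anyone trying the same thing, the runtime toggle, its reverse, and the persistent form (the /etc/system line is the standard way to set ZFS tunables, not something specific to this post):

   echo zfs_prefetch_disable/W0t1 | mdb -kw    # disable prefetch now
   echo zfs_prefetch_disable/W0t0 | mdb -kw    # re-enable
   # in /etc/system, to survive reboots:
   set zfs:zfs_prefetch_disable = 1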

[zfs-discuss] adding extra drives without creating a second parity set

2009-12-27 Thread Michael Armstrong
ddie Cash) 2. Re: Benchmarks results for ZFS + NFS, using SSD's as slog devices (ZIL) (Richard Elling) 3. Re: Troubleshooting dedup performance (Michael Herf) 4. ZFS write bursts cause short app stalls (Saso Kiselkov) -

[zfs-discuss] Scrub slow (again) after dedupe

2009-12-29 Thread Michael Herf
I have a 4-disk RAIDZ, and I reduced the time to scrub it from 80 hours to about 14 by reducing the number of snapshots, adding RAM, turning off atime, compression, and some other tweaks. This week (after replaying a large volume with dedup=on) it's back up, way up. I replayed a 700G filesystem to

[zfs-discuss] $100 SSD = >5x faster dedupe

2009-12-31 Thread Michael Herf
I've written about my slow-to-dedupe RAIDZ. After a week of waiting, I finally bought a little $100 30G OCZ Vertex and plugged it in as a cache. After <2 hours of warmup, my zfs send/receive rate on the pool is >16MB/sec (reading and writing each at 16MB as measured by zpool iostat). That's
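Adding and watching an L2ARC device is just (device name hypothetical):

   zpool add tank cache c8t1d0        # attach the SSD as a cache vdev
   zpool iostat -v tank 5             # the cache device gets its own bandwidth line
   kstat -n arcstats | grep l2_       # l2_hits/l2_misses/l2_size show the warmup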

Re: [zfs-discuss] $100 SSD = >5x faster dedupe

2009-12-31 Thread Michael Herf
Make that 25MB/sec, and rising... So it's 8x faster now. mike -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] $100 SSD = >5x faster dedupe

2010-01-03 Thread Michael Herf
Just l2arc. Guess I can always repartition later. mike On Sun, Jan 3, 2010 at 11:39 AM, Jack Kielsmeier wrote: > Are you using the SSD for l2arc or zil or both? > -- > This message posted from opensolaris.org > ___ > zfs-discuss mailing list > zfs-dis

[zfs-discuss] send/recv, apparent data loss

2010-01-05 Thread Michael Herf
I replayed a bunch of filesystems in order to get dedupe benefits. The only thing is, a couple of them are rolled back to November or so (and I didn't notice before destroying the old copy). I used something like: zfs snapshot pool/f...@dd zfs send -Rp pool/f...@dd | zfs recv -d pool/fs2 (after done.
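For anyone repeating this kind of replay, the usual shape is below (names hypothetical); the step that matters, given what happened here, is verifying the copy is current before destroying the source:

   zfs snapshot -r tank/fs@dd
   zfs send -Rp tank/fs@dd | zfs recv -d tank/fs2
   zfs list -r -t all tank/fs2        # compare snapshots and sizes against tank/fs first
   # only after checking:  zfs destroy -r tank/fs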

Re: [zfs-discuss] send/recv, apparent data loss

2010-01-05 Thread Michael Herf
left it on. Anything possible there? The only other thing is that I did "zfs rollback" for a totally unrelated filesystem in the pool, but I have no idea if this could have affected it. (I've verified that I got the right one with "zpool history".) mike On Tue, Jan 5, 2

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Michael Schuster
ptionally slow partially because it will sort the output. that's what '-f' was supposed to avoid, I'd guess. Michael -- Michael Schuster http://blogs.sun.com/recursion Recursion, n.: see 'Recursion' ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Michael Schuster
e is the directory pointer is shuffled around. This is not the case with ZFS data sets, even though they're on the same pool? no - mv doesn't know about zpools, only about posix filesystems. -- Michael Schuster http://blogs.sun.com/recursion Recursion, n.: see 'Recursion&

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Michael Schuster
to get rid of them (because they eat 80% of disk space) it seems to be quite challenging. I've been following this thread. Would it be faster to do the reverse: copy the remaining 20% off the disk, then format, then move the 20% back? I'm not sure the OS installation would survive that

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-05 Thread Michael Herf
Many large-scale photo hosts start with netapp as the default "good enough" way to handle multiple-TB storage. With a 1-5% cache on top, the workload is truly random-read over many TBs. But these workloads almost assume a frontend cache to take care of hot traffic, so L2ARC is just a nice implement

Re: [zfs-discuss] zpool fragmentation issues? (dovecot)

2010-01-14 Thread Michael Keller
> The best Mail Box to use under Dovecot for ZFS is > MailDir; each email is stored as an individual file. Cannot agree on that. dbox is about 10x faster - at least if you have > 1 messages in one mailbox folder. That's not because of ZFS, but dovecot just handles dbox files (one for each messa
