Re: [zfs-discuss] resilver = defrag?

2010-09-13 Thread Richard Elling
On Sep 13, 2010, at 9:41 PM, Haudy Kazemi wrote: > Richard Elling wrote: >> On Sep 13, 2010, at 5:14 AM, Edward Ned Harvey wrote: From: Richard Elling [mailto:rich...@nexenta.com] This operational definition of "fragmentation" comes from the single-user, single-task

Re: [zfs-discuss] resilver = defrag?

2010-09-13 Thread Haudy Kazemi
Richard Elling wrote: On Sep 13, 2010, at 5:14 AM, Edward Ned Harvey wrote: From: Richard Elling [mailto:rich...@nexenta.com] This operational definition of "fragmentation" comes from the single-user, single-tasking world (PeeCees). In that world, only one thread writes files from one appl

Re: [zfs-discuss] file recovery on lost RAIDZ array

2010-09-13 Thread Richard Elling
On Sep 12, 2010, at 7:49 PM, Michael Eskowitz wrote: > I recently lost all of the data on my single parity raid z array. Each of > the drives was encrypted with the zfs array built within the encrypted > volumes. > > I am not exactly sure what happened. Murphy strikes again! > The files we

Re: [zfs-discuss] file recovery on lost RAIDZ array

2010-09-13 Thread Michael Eskowitz
Oh and yes, raidz1.

Re: [zfs-discuss] file recovery on lost RAIDZ array

2010-09-13 Thread Michael Eskowitz
I don't know what happened. I was in the process of copying files onto my new file server when the copy process from the other machine failed. I turned on the monitor for the fileserver and found that it had rebooted by itself at some point (machine fault maybe?) and when I remounted the drive

Re: [zfs-discuss] ZFS online device management

2010-09-13 Thread Richard Elling
On Sep 13, 2010, at 5:51 PM, Chris Mosetick wrote: > So are there now any methods to achieve the scenario I described to shrink a > pool's size with existing ZFS tools? I don't see a definitive way listed on > the old shrinking thread. Today, there is no way to accomplish what you want without

Re: [zfs-discuss] ZFS online device management

2010-09-13 Thread Chris Mosetick
So are there now any methods to achieve the scenario I described to shrink a pool's size with existing ZFS tools? I don't see a definitive way listed on the old shrinking thread. Thank you, -Chris On Mon, Sep 13, 2010 at 4:55 PM, Richa

Re: [zfs-discuss] ZFS online device management

2010-09-13 Thread Richard Elling
On Sep 13, 2010, at 4:40 PM, Chris Mosetick wrote: > Can anyone elaborate on the "zpool split" command? I have not seen any > examples in use and I am very curious about it. Say I have 12 disks in a pool > named tank. 6 in a RAIDZ2 + another 6 in a RAIDZ2. All is well, and I'm not > even clos

Re: [zfs-discuss] ZFS online device management

2010-09-13 Thread Chris Mosetick
Can anyone elaborate on the "zpool split" command? I have not seen any examples in use and I am very curious about it. Say I have 12 disks in a pool named tank. 6 in a RAIDZ2 + another 6 in a RAIDZ2. All is well, and I'm not even close to maximum capacity in the pool. Say I want to swap out 6 of
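
For reference, a minimal sketch of how zpool split is used; it applies only to pools made of mirrored vdevs (not raidz), and the pool names here are illustrative:

    # detach one side of each mirror in "tank" into a new, exported pool "tank2"
    zpool split tank tank2
    # the new pool must then be imported before use
    zpool import tank2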

Re: [zfs-discuss] zfs compression with Oracle - anyone implemented?

2010-09-13 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Brad > > Hi! I'd been scouring the forums and web for admins/users who deployed > zfs with compression enabled on Oracle backed by storage array luns. > Any problems with cpu/memory overhead?

[zfs-discuss] zfs compression with Oracle - anyone implemented?

2010-09-13 Thread Brad
Hi! I'd been scouring the forums and web for admins/users who deployed zfs with compression enabled on Oracle backed by storage array luns. Any problems with cpu/memory overhead?
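
A hedged sketch of the kind of setup being asked about; the dataset name and the choice of lzjb are illustrative only:

    # enable lightweight compression on the dataset holding the Oracle data files
    zfs set compression=lzjb tank/oradata
    # later, see how much space compression actually saves
    zfs get compressratio tank/oradata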

Re: [zfs-discuss] Configuration questions for Home File Server (CPU cores, dedup, checksum)?

2010-09-13 Thread David Dyer-Bennet
On Tue, September 7, 2010 15:58, Craig Stevenson wrote: > 3. Should I consider using dedup if my server has only 8 GB of RAM? Or, > will that not be enough to hold the DDT? In which case, should I add > L2ARC / ZIL or am I better to just skip using dedup on a home file server? I would not cons
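
A rough back-of-the-envelope check of the DDT question, assuming the commonly quoted figure of roughly 320 bytes of ARC per unique block (the real per-entry cost varies with block size and pool contents):

    # e.g. 4 TB of unique data at an average 64 KB block size
    echo '4 * 1024^4 / (64 * 1024)' | bc          # ~67 million unique blocks
    echo '4 * 1024^4 / (64 * 1024) * 320' | bc    # ~21 GB of table, far beyond 8 GB of RAM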

Re: [zfs-discuss] Intermittent ZFS hang

2010-09-13 Thread Charles J. Knipe
> > At first we blamed de-dupe, but we've disabled that. Next we > suspected > > the SSD log disks, but we've seen the problem with those removed, as > > well. > > Did you have dedup enabled and then disabled it? If so, data can (or > will) be deduplicated on the drives. Currently the only way of

Re: [zfs-discuss] Proper procedure when device names have changed

2010-09-13 Thread Robert Mustacchi
Or you can go into udev's persistent rules and set it up such that the drives always get the correct names. I'd guess you'll probably find them somewhere under /etc/udev/rules.d or something similar. It will likely save you trouble in the long run, as they likely are getting shuffled with eithe
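
A sketch of the sort of persistent rule being suggested; the file name and serial number below are made up, and the exact match keys vary by distribution:

    # /etc/udev/rules.d/99-zfs-disks.rules  (hypothetical)
    # pin a stable symlink to a particular drive, matched by its serial number
    KERNEL=="sd*", SUBSYSTEM=="block", ENV{ID_SERIAL}=="WDC_WD10EARS_WD-WCAV1234", SYMLINK+="zfsdisk0"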

[zfs-discuss] What append to ZFS bp rewrite?

2010-09-13 Thread Steeve Roy
I am currently preparing a big SAN deployment using ZFS. As I will start with 60 TB of data growing at 25% per year, I need some online defrag, data redistribution across drives as the storage pool increases, etc... When can we expect to get the bp rewrite feature into ZFS? Thanks! S

Re: [zfs-discuss] [osol-code] What append to ZFS bp rewrite?

2010-09-13 Thread Will Fiveash
On Fri, Sep 10, 2010 at 08:36:13AM -0700, Steeve Roy wrote: > I am currently preparing a big SAN deployment using ZFS. As I will start with > 60 TB of data growing at 25% per year, I need some online defrag, > data redistribution across drives as the storage pool increases, etc... > > When

[zfs-discuss] zpool upgrade and zfs upgrade behavior on b145

2010-09-13 Thread Chris Mosetick
Not sure what the best list to send this to is right now, so I have selected a few; apologies in advance. A couple of questions. First, I have a physical host (call him bob) that was just installed with b134 a few days ago. I upgraded to b145 using the instructions on the Illumos wiki yesterday. Th
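
For reference, the usual command sequence after moving to a newer build (pool name illustrative; note that upgrading rpool beyond what the installed boot blocks understand can leave the system unbootable):

    # list the pool and filesystem versions supported by the running bits
    zpool upgrade -v
    zfs upgrade -v
    # upgrade a pool, then its datasets recursively
    zpool upgrade tank
    zfs upgrade -r tank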

Re: [zfs-discuss] Suggested RaidZ configuration...

2010-09-13 Thread Hatish Narotam
Hi, *The PCIe 8x port gives me 4 GB/s, which is 32 Gb/s. No problem there. Each eSATA port guarantees 3 Gb/s, therefore a 12 Gb/s limit on the controller.* I was simply listing the bandwidth available at the different stages of the data cycle. The PCIe port gives me 32 Gb/s. The SATA card gives me a pos

Re: [zfs-discuss] NetApp/Oracle-Sun lawsuit done

2010-09-13 Thread Craig Cory
Run away! Run fast, little NetApp. Don't anger the sleeping giant - Oracle! David Magda wrote: > Seems that things have been cleared up: > >> NetApp (NASDAQ: NTAP) today announced that both parties have agreed to >> dismiss their pending patent litigation, which began in 2007 between Sun >> Micro

Re: [zfs-discuss] Suggested RaidZ configuration...

2010-09-13 Thread Hatish Narotam
Ah, I see. But I think your math is a bit out: 62.5e6 I/Os @ 100 IOPS = 625,000 seconds ≈ 10,416 min ≈ 173 h ≈ 7 d 6 h. So 7 days and 6 hours. That's long, but I can live with it. This isn't for an enterprise environment. While the length of time is a worry in terms of increasing the chance another drive wi
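
A quick sanity check of the conversion above, assuming the ~100 IOPS per disk figure used in the thread:

    echo '62500000 / 100' | bc     # 625000 seconds
    echo '625000 / 3600' | bc      # ~173 hours
    echo '173 / 24' | bc           # ~7 days, with roughly 6 hours left over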

Re: [zfs-discuss] Suggested RaidZ configuration...

2010-09-13 Thread Hatish Narotam
Mattias, what you say makes a lot of sense. When I saw *Both of the above situations resilver in equal time*, I was like "no way!" But like you said, assuming no bus bottlenecks. This is my exact breakdown (cheap disks on cheap bus :P): PCIe 8x 4-port eSATA RAID controller. 4 x eSATA to 5 SATA P

Re: [zfs-discuss] [mdb-discuss] onnv_142 - vfs_mountroot: cannot mount root

2010-09-13 Thread Gavin Maltby
On 09/07/10 23:26, Piotr Jasiukajtis wrote: Hi, After upgrading from snv_138 to snv_142 or snv_145 I'm unable to boot the system. Here is what I get. Any idea why it's not able to import rpool? I saw this issue also on older builds on different machines. This sounds (based on the presence of

Re: [zfs-discuss] Suggested RaidZ configuration...

2010-09-13 Thread Hatish Narotam
Makes sense. My understanding is not good enough to confidently make my own decisions, and I'm learning as I'm going. The BPG says: - The recommended number of disks per group is between 3 and 9. If you have more disks, use multiple groups. If there was a reason leading up to this statement,

Re: [zfs-discuss] [mdb-discuss] mdb -k - I/O usage

2010-09-13 Thread Piotr Jasiukajtis
This is snv_128 x86. > ::arc hits = 39811943 misses = 630634 demand_data_hits = 29398113 demand_data_misses = 490754 demand_metadata_hits = 10413660 demand_metadata_misses = 133461 prefetch_data_hits = 0 pre
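
For reference, a sketch of how ARC statistics like these are typically gathered on a live system:

    # ARC statistics via the kernel debugger
    echo ::arc | mdb -k
    # much the same counters via kstat, without mdb
    kstat -m zfs -n arcstats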

Re: [zfs-discuss] resilver = defrag?

2010-09-13 Thread Richard Elling
On Sep 13, 2010, at 10:54 AM, Orvar Korvar wrote: > To summarize, > > A) resilver does not defrag. > > B) zfs send receive to a new zpool means it will be defragged Define "fragmentation"? If you follow the wikipedia definition of "defragmentation" then the answer is no, zfs send/receive doe

Re: [zfs-discuss] Intermittent ZFS hang

2010-09-13 Thread Roy Sigurd Karlsbakk
> At first we blamed de-dupe, but we've disabled that. Next we suspected > the SSD log disks, but we've seen the problem with those removed, as > well. Did you have dedup enabled and then disabled it? If so, data can (or will) be deduplicated on the drives. Currently the only way of de-deduping t
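
A sketch of the usual way to rewrite already-deduplicated blocks once dedup is off; dataset names are hypothetical, and a second copy must fit in the pool while both exist:

    # with dedup disabled, the received copy is written out un-deduplicated
    zfs snapshot tank/data@rewrite
    zfs send tank/data@rewrite | zfs receive tank/data-rewritten
    # after verifying the copy, destroy the original and rename the new dataset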

Re: [zfs-discuss] Proper procedure when device names have changed

2010-09-13 Thread Brian
That seems to have done the trick. I was worried because in the past I've had problems importing faulted file systems.

Re: [zfs-discuss] resilver = defrag?

2010-09-13 Thread Orvar Korvar
To summarize: A) resilver does not defrag. B) zfs send/receive to a new zpool means it will be defragged. Correctly understood?

Re: [zfs-discuss] Proper procedure when device names have changed

2010-09-13 Thread LaoTsao 老曹
Try exporting and importing the zpool. On 9/13/2010 1:26 PM, Brian wrote: I am running zfs-fuse on an Ubuntu 10.04 box. I have a dual mirrored pool: mirror sdd sde, mirror sdf sdg. Recently the device names shifted on my box and the devices are now sdc sdd sde and sdf. The pool is of course very unh
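
A minimal sketch of that suggestion; the pool name is hypothetical, and on Linux/zfs-fuse importing by the persistent /dev/disk/by-id paths keeps later device-name shuffles from breaking the pool again:

    # export the confused pool, then re-import it by its persistent device names
    zpool export mypool
    zpool import -d /dev/disk/by-id mypool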

[zfs-discuss] Proper procedure when device names have changed

2010-09-13 Thread Brian
I am running zfs-fuse on an Ubuntu 10.04 box. I have a dual mirrored pool: mirror sdd sde, mirror sdf sdg. Recently the device names shifted on my box and the devices are now sdc sdd sde and sdf. The pool is of course very unhappy because the mirrors are no longer matched up and one device is "mis

Re: [zfs-discuss] resilver = defrag?

2010-09-13 Thread David Dyer-Bennet
On Mon, September 13, 2010 07:14, Edward Ned Harvey wrote: >> From: Richard Elling [mailto:rich...@nexenta.com] >> >> This operational definition of "fragmentation" comes from the single-user, single-tasking world (PeeCees). In that world, only one thread writes >> files >> from one applica

Re: [zfs-discuss] Intermittent ZFS hang

2010-09-13 Thread Charles J. Knipe
> > > Charles, > > Just like UNIX, there are several ways to drill down > on the problem. I > would probably start with a live crash dump (savecore -L) when you see > the problem. Another method would be to grab multiple "stats" commands > during the problem to see where you can drill down
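
A sketch of the data gathering being described (standard Solaris commands; the one-second intervals are arbitrary):

    # take a live crash dump for later analysis
    savecore -L
    # a few "stats" views worth capturing while the hang is in progress
    iostat -xn 1
    mpstat 1
    fsstat zfs 1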

Re: [zfs-discuss] ZFS archive image

2010-09-13 Thread Lori Alt
On 09/13/10 09:40 AM, Buck Huffman wrote: I have a flash archive that is stored in a ZFS snapshot stream. Is there a way to mount this image so I can read files from it? No, but you can use the "flar split" command to split the flash archive into its constituent parts, one of which will be
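
Roughly how that works; the archive name is hypothetical, and each section is written out as a separate file that can then be examined:

    # split the flash archive into its sections (cookie, identification,
    # archive, ...) in the current directory
    flar split my_image.flar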

[zfs-discuss] ZFS archive image

2010-09-13 Thread Buck Huffman
I have a flash archive that is stored in a ZFS snapshot stream. Is there a way to mount this image so I can read files from it?

Re: [zfs-discuss] resilver = defrag?

2010-09-13 Thread Edward Ned Harvey
> From: Richard Elling [mailto:rich...@nexenta.com] > > > > Regardless of multithreading, multiprocessing, it's absolutely > possible to > > have contiguous files, and/or file fragmentation. That's not a > > characteristic which depends on the threading model. > > Possible, yes. Probable, no. C

Re: [zfs-discuss] resilver = defrag?

2010-09-13 Thread Richard Elling
On Sep 13, 2010, at 5:14 AM, Edward Ned Harvey wrote: >> From: Richard Elling [mailto:rich...@nexenta.com] >> >> This operational definition of "fragmentation" comes from the single-user, single-tasking world (PeeCees). In that world, only one thread writes >> files >> from one application

Re: [zfs-discuss] resilver = defrag?

2010-09-13 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Orvar Korvar > > I was thinking to delete all zfs snapshots before zfs send receive to > another new zpool. Then everything would be defragmented, I thought. You don't need to delete snaps bef
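
A sketch of the point being made, that snapshots can simply come along for the ride (names hypothetical):

    # replicate a dataset together with all of its snapshots to a new pool
    zfs snapshot -r tank/data@migrate
    zfs send -R tank/data@migrate | zfs receive -d newtank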

Re: [zfs-discuss] resilver = defrag?

2010-09-13 Thread Orvar Korvar
I was thinking to delete all zfs snapshots before zfs send receive to another new zpool. Then everything would be defragmented, I thought. (I assume snapshots work this way: I snapshot once and do some changes, say delete file "A" and edit file "B". When I delete the snapshot, the file "A" is

Re: [zfs-discuss] Hang on zpool import (dedup related)

2010-09-13 Thread Pawel Jakub Dawidek
On Sun, Sep 12, 2010 at 11:24:06AM -0700, Chris Murray wrote: > Absolutely spot on George. The import with -N took seconds. > > Working on the assumption that esx_prod is the one with the problem, I bumped > that to the bottom of the list. Each mount was done in a second: > > # zfs mount zp > #
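
For reference, the pattern described in the post (pool name as given; the dataset path is an assumption):

    # import the pool without mounting any of its datasets
    zpool import -N zp
    # then mount datasets one at a time, leaving the suspect one for last
    zfs mount zp
    zfs mount zp/esx_prod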

Re: [zfs-discuss] file recovery on lost RAIDZ array

2010-09-13 Thread Orvar Korvar
That sounds strange. What happened? You used raidz1? You can mount your zpool into an earlier snapshot. Have you tried that? Or, you can mount your pool within the last 30 seconds or so, I think.

Re: [zfs-discuss] resilver = defrag?

2010-09-13 Thread Edward Ned Harvey
> From: Richard Elling [mailto:rich...@nexenta.com] > > This operational definition of "fragmentation" comes from the single-user, single-tasking world (PeeCees). In that world, only one thread writes > files > from one application at one time. In those cases, there is a reasonable > expectat

Re: [zfs-discuss] How to migrate to 4KB sector drives?

2010-09-13 Thread Casper . Dik
>On Sun, Sep 12, 2010 at 10:07 AM, Orvar Korvar > wrote: >> No replies. Does this mean that you should avoid large drives with 4KB >> sectors, that is, new drives? ZFS does not handle new drives? > >Solaris 10u9 handles 4k sectors, so it might be in a post-b134 release of osol. > Build 118 add