Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread Toby Thain
On 24-Jul-09, at 6:41 PM, Frank Middleton wrote: On 07/24/09 04:35 PM, Bob Friesenhahn wrote: Regardless, it [VirtualBox] has committed a crime. But ZFS is a journalled file system! Any hardware can lose a flush; No, the problematic default in VirtualBox is flushes being *ignored*, which…

Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread David Magda
On Jul 24, 2009, at 22:17, Bob Friesenhahn wrote: A journaling filesystem uses a journal (transaction log) to roll back (replace with previous data) the unordered writes in an incomplete transaction. In the case of ZFS, it is only necessary to go back to the most recent checkpoint and any…

Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread Bob Friesenhahn
On Fri, 24 Jul 2009, Frank Middleton wrote: On 07/24/09 04:35 PM, Bob Friesenhahn wrote: Regardless, it [VirtualBox] has committed a crime. But ZFS is a journalled file system! Any hardware can lose a flush; From my understanding, ZFS is not a journalled file system. ZFS relies on ordered…

Re: [zfs-discuss] OpenSolaris 2009.06 - ZFS Install Issue

2009-07-24 Thread Ian Collins
Kyle wrote: If I run `zpool create -f tank raidz1 c3d0 c3d1 c6d0 c6d1` it causes the OS not to boot, saying "Cannot find active partition". If I leave c3d1 out, i.e. `zpool create -f tank raidz1 c3d0 c6d0 c6d1`, and reboot, everything is fine. This makes no sense to me since c4d0 is showing up…

[zfs-discuss] OpenSolaris 2009.06 - ZFS Install Issue

2009-07-24 Thread Kyle
I've installed OpenSolaris 2009.06 on a machine with 5 identical 1TB WD Green drives to create a ZFS NAS. The intended install is one drive dedicated to the OS and the remaining 4 drives in a raidz1 configuration. The install is working fine, but creating the raidz1 pool and rebooting is causing…
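
A quick way to check which disk actually carries the active (bootable) fdisk partition, sketched below; the device names follow the thread (the OS-disk name is an assumption) and the commands are stock Solaris fdisk(1M)/zpool(1M):

  # print each suspect disk's fdisk table; the boot disk should show
  # a partition with the Active flag set
  fdisk -W - /dev/rdsk/c3d1p0
  fdisk -W - /dev/rdsk/c4d0p0

  # confirm which device the root pool actually lives on
  zpool status rpool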

Re: [zfs-discuss] ZFS Mirror cloning

2009-07-24 Thread Jorgen Lundman
That is because you had only one other choice: a filesystem-level copy. With ZFS I believe you will find that snapshots will allow you to have better control over this. The send/receive process is very, very similar to a mirror resilver, so you are only carrying your previous process forward into…
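
A minimal sketch of the send/receive approach described above; the pool, snapshot, and host names are hypothetical:

  # take a recursive snapshot of the live server's pool
  zfs snapshot -r rpool@clone

  # replicate the whole dataset tree to the new machine
  zfs send -R rpool@clone | ssh newhost zfs receive -Fd rpool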

Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread David Magda
On Jul 24, 2009, at 16:00, Miles Nordin wrote: Is there a correct way to configure it, or will any component of the overall system other than ZFS always get blamed when ZFS loses a pool? By default VB does not respect the 'disk sync' command that a guest OS could send--it's just ignored.

Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread Ian Collins
Frank Middleton wrote: On 07/24/09 04:35 PM, Bob Friesenhahn wrote: Regardless, it [VirtualBox] has committed a crime. But ZFS is a journalled file system! Even a journalled file system has to trust the journal. If the storage says the journal is committed and it isn't, all bets are off.

[zfs-discuss] Metadata size

2009-07-24 Thread John
Hi, I am trying to understand in detail how much metadata is being cached in the ARC and L2ARC for my workload. Looking at 'kstat -n arcstats', I see:

  ARC Current Size:  19217 MB (size=19,644,754,928)
  ARC Metadata Size:   112 MB (hdr_size=117,896,760)

I am also trying to understand what l2_hdr_size means…
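
The individual counters can also be pulled directly with kstat(1M); a sketch, using statistic names as they appear in the arcstats kstat:

  kstat -p zfs:0:arcstats:size         # total ARC size
  kstat -p zfs:0:arcstats:hdr_size     # ARC buffer-header (metadata) size
  kstat -p zfs:0:arcstats:l2_hdr_size  # space the ARC spends on headers for L2ARC-resident buffers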

Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread Frank Middleton
On 07/24/09 04:35 PM, Bob Friesenhahn wrote: Regardless, it [VirtualBox] has committed a crime. But ZFS is a journalled file system! Any hardware can lose a flush; it's just more likely in a VM, especially when anything Microsoft is involved, and the whole point of journalling is to prevent th…

Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread Ian Collins
Rob Logan wrote: > The post I read said OpenSolaris guest crashed, and the guy clicked > the ``power off guest'' button on the virtual machine. I seem to recall "guest hung". 99% of Solaris hangs (without a crash dump) are "hardware" in nature (my experience, backed by an uptime of 1116 days), so…

Re: [zfs-discuss] slog writing patterns vs SSD tech.

2009-07-24 Thread Richard Elling
On Jul 24, 2009, at 2:33 PM, Bob Friesenhahn wrote: On Fri, 24 Jul 2009, Kyle McDonald wrote: http://www.anandtech.com/storage/showdoc.aspx?i=3531&p=8 This is an interesting test report. Something quite interesting for zfs is that if the write rate is continually high, then the write performance…

Re: [zfs-discuss] slog writing patterns vs SSD tech.

2009-07-24 Thread Bob Friesenhahn
On Fri, 24 Jul 2009, Kyle McDonald wrote: http://www.anandtech.com/storage/showdoc.aspx?i=3531&p=8 This is an interesting test report. Something quite interesting for zfs is that if the write rate is continually high, then the write performance will be limited by the FLASH erase performance, regardless…
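
A back-of-the-envelope illustration of the erase-limited case; the flash geometry numbers are hypothetical, not taken from the report:

  # hypothetical geometry: 512 KiB erase block, 2 ms erase time
  # sustained write ceiling = 512 KiB / 2 ms = 256 MiB/s
  # once the drive runs out of pre-erased blocks, sustained writes
  # cannot exceed this rate, however fast bursts can be programmed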

[zfs-discuss] zilstat updated

2009-07-24 Thread Richard Elling
Have you ever wondered if adding a separate log device can improve your performance? zilstat is a DTrace script which helps answer that question. I have updated zilstat to offer the option of tracking ZIL activity on a per-txg commit basis. By default, ZIL activity is tracked chronologically at f…
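
A usage sketch, assuming the script is saved as zilstat.ksh; the per-txg flag is inferred from the description above and may differ in the actual script:

  # sample ZIL activity once per second, ten samples
  ./zilstat.ksh 1 10

  # track activity per txg commit instead (flag assumed)
  ./zilstat.ksh -t txg 1 10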

Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread Blake
On Fri, Jul 24, 2009 at 4:35 PM, Bob Friesenhahn wrote: > On Fri, 24 Jul 2009, Miles Nordin wrote: >> >> The post I read said OpenSolaris guest crashed, and the guy clicked >> the ``power off guest'' button on the virtual machine. The host never >> crashed. so whether the IDE cache flush parameter…

Re: [zfs-discuss] slog writing patterns vs SSD tech.

2009-07-24 Thread Kyle McDonald
Miles Nordin wrote: "km" == Kyle McDonald writes: km> These drives do seem to do a great job at random writes, most km> of the promise shows at sequential writes, so does the slog km> attempt to write sequentially through the space given to it? NO! Everyone who is u…

Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread Bob Friesenhahn
On Fri, 24 Jul 2009, Miles Nordin wrote: The post I read said OpenSolaris guest crashed, and the guy clicked the ``power off guest'' button on the virtual machine. The host never crashed. so whether the IDE cache flush parameter was set or not, Clicking ``power off guest'' is the same as walk…

Re: [zfs-discuss] slog writing patterns vs SSD tech. (was SSD's and ZFS...)

2009-07-24 Thread Richard Elling
On Jul 24, 2009, at 10:46 AM, Kyle McDonald wrote: Bob Friesenhahn wrote: Of course, it is my understanding that the zfs slog is written sequentially so perhaps this applies instead: Actually, reading up on these drives I've started to wonder about the slog writing pattern. While these…

Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread Rob Logan
> The post I read said OpenSolaris guest crashed, and the guy clicked > the ``power off guest'' button on the virtual machine. I seem to recall "guest hung". 99% of Solaris hangs (without a crash dump) are "hardware" in nature (my experience, backed by an uptime of 1116 days), so the finger is still…

Re: [zfs-discuss] slog writing patterns vs SSD tech.

2009-07-24 Thread Miles Nordin
> "km" == Kyle McDonald writes: km> These drives do seem to do a great job at random writes, most km> of the promise shows at sequential writes, so does the slog km> attempt to write sequentially through the space given to it? When writing to the slog, some user-visible application…

Re: [zfs-discuss] Soon out of space (after upgrade to 2009.06)

2009-07-24 Thread Axelle Apvrille
Ok -- thanks for your reply. I just wonder what's in those 7 GB, if a 3 GB pool is enough…? Why has it grown so much? I think I do not understand exactly the relationship between BEs and ZFS pools: if I destroy the BE, that doesn't destroy the data, does it? It puts back the content of rpool…

Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread Miles Nordin
> "re" == Richard Elling writes: re> The root cause of this thread's woes has absolutely nothing re> to do with ECC RAM. It has everything to do with VirtualBox re> configuration. What part of VirtualBox configuration? The post I read said OpenSolaris guest crashed, and the guy…

Re: [zfs-discuss] Motherboard for home zfs/solaris file server

2009-07-24 Thread Miles Nordin
> "c" == chris writes: > "hk" == Haudy Kazemi writes: c> why would anyone use something called basic? But there must be c> a catch if they provided several ECC support modes. They are just Taiwanese. They have no clue wtf they are doing and do not care about quality since t…

Re: [zfs-discuss] ZFS Root Pool Recovery (from the FAQ)

2009-07-24 Thread dick hoogendijk
On Fri, 24 Jul 2009 19:36:52 +0200 dick hoogendijk wrote: > Thank you for your support 'till now. One final question… Alas, it's not a final question. It still does not work. I have no idea what else I could have forgotten. This is what I have on arwen (local) and westmark (remote): r...@westm…

Re: [zfs-discuss] slog writing patterns vs SSD tech. (was SSD's and ZFS...)

2009-07-24 Thread Kyle McDonald
Bob Friesenhahn wrote: Of course, it is my understanding that the zfs slog is written sequentially so perhaps this applies instead: Actually, reading up on these drives I've started to wonder about the slog writing pattern. While these drives do seem to do a great job at random writes, most…

[zfs-discuss] When writing to SLOG at full speed all disk IO is blocked

2009-07-24 Thread Marcelo Leal
Hello all... I'm seeing this behaviour in an old build (89), and I just want to hear from you if there is some known bug about it. I'm aware of the "picket fencing" problem, and that ZFS is not choosing correctly whether a write to the slog is better or not (i.e., whether we would get better throughput from the disks)…

Re: [zfs-discuss] SSD's and ZFS...

2009-07-24 Thread Bob Friesenhahn
Ok, I re-tested my rotating rust with these iozone options (note that -o requests synchronous writes):

  iozone -t 6 -k 8 -i 0 -i 2 -O -r 8K -o -s 1G

and obtained these results:

  Children see throughput for 6 random writers = 5700.49 ops/sec
  Parent sees throughput for 6 ran…

Re: [zfs-discuss] ZFS Root Pool Recovery (from the FAQ)

2009-07-24 Thread dick hoogendijk
On Fri, 24 Jul 2009 10:00:30 -0600 cindy.swearin...@sun.com wrote: > Reproducing this will be difficult in my environment since > our domain info is automatically set up... Hey, no sweat ;-) I only asked because I don't want to do the "send blah" again. But then again, computers don't get tired.

Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread Nicolas Williams
On Fri, Jul 24, 2009 at 05:01:15PM +0200, dick hoogendijk wrote: > On Fri, 24 Jul 2009 10:44:36 -0400 > Kyle McDonald wrote: > > ... then it seems like a shame (or a waste?) not to equally > > protect the data both before it's given to ZFS for writing, and after > > ZFS reads it back and returns…

Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread Robert Milkowski
dick hoogendijk wrote: On Fri, 24 Jul 2009 10:44:36 -0400 Kyle McDonald wrote: ... then it seems like a shame (or a waste?) not to equally protect the data both before it's given to ZFS for writing, and after ZFS reads it back and returns it to you. But that was not the question. The…

Re: [zfs-discuss] SSD's and ZFS...

2009-07-24 Thread Bob Friesenhahn
On Fri, 24 Jul 2009, Bob Friesenhahn wrote: This seems like rather low random write performance. My 12-drive array of rotating rust obtains 3708.89 ops/sec. In order to be effective, it seems that a synchronous write log should perform considerably better than the backing store. Actually…

Re: [zfs-discuss] ZFS Mirror cloning

2009-07-24 Thread Richard Elling
On Jul 14, 2009, at 10:45 PM, Jorgen Lundman wrote: Hello list, Before we started changing to ZFS bootfs, we used DiskSuite-mirrored UFS boot. Very often, if we needed to grow a cluster by another machine or two, we would simply clone a running live server. Generally the procedure for…

Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread Richard Elling
On Jul 24, 2009, at 3:18 AM, Michael McCandless wrote: I've read in numerous threads that it's important to use ECC RAM in a ZFS file server. It is important to use ECC RAM. The embedded market and server market demand ECC RAM. It is only the el-cheapo PC market that does not. Going back to s…

Re: [zfs-discuss] SSD's and ZFS...

2009-07-24 Thread Bob Friesenhahn
On Fri, 24 Jul 2009, Tristan Ball wrote: I've used 8K IO sizes for all the stage one tests - I know I might get it to go faster with a larger size, but I like to know how well systems will do when I treat them badly! The Stage_1_Ops_thru_run is interesting: 2000+ ops/sec on random writes, 5000…

Re: [zfs-discuss] Soon out of space (after upgrade to 2009.06)

2009-07-24 Thread Lori Alt
In general, questions about beadm and related tools should be sent, or at least cross-posted, to install-disc...@opensolaris.org. Lori On 07/24/09 07:04, Jean-Noël Mattern wrote: Axelle, You can safely run "beadm destroy opensolaris" if everything's all right with your new opensolaris-1 boot env…

Re: [zfs-discuss] ZFS Root Pool Recovery (from the FAQ)

2009-07-24 Thread Cindy . Swearingen
Hi Dick, I haven't seen this problem when I've tested these steps. And it's been a while since I've seen the nobody:nobody problem, but it sounds like NFSMAPID didn't get set correctly. I think this question is asked during installation and generally is set to the default DNS domain name. The domain…
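
A sketch of checking and resetting the NFSv4 mapping domain on both hosts; the domain value is a placeholder:

  # see what the ID-mapping domain is currently set to
  grep NFSMAPID_DOMAIN /etc/default/nfs

  # set the same value on both hosts, e.g. NFSMAPID_DOMAIN=example.com,
  # then restart the mapping daemon
  svcadm restart svc:/network/nfs/mapid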

Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread dick hoogendijk
On Fri, 24 Jul 2009 10:44:36 -0400 Kyle McDonald wrote: > ... then it seems like a shame (or a waste?) not to equally > protect the data both before it's given to ZFS for writing, and after > ZFS reads it back and returns it to you. But that was not the question. The question was: [quote] "My question…

Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread dick hoogendijk
On Fri, 24 Jul 2009 07:19:40 -0700 (PDT) Rich Teer wrote: > Given that data integrity is presumably important in every non-gaming > computing use, I don't understand why people even consider not using > ECC RAM all the time. The hardware cost delta is a red herring: I live in Holland and it is…

Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread Kyle McDonald
Michael McCandless wrote: I've read in numerous threads that it's important to use ECC RAM in a ZFS file server. My question is: is there any technical reason, in ZFS's design, that makes it particularly important for ZFS to require ECC RAM? I think, basically, the idea is that if you're going…

Re: [zfs-discuss] why is zpool import still hanging in opensolaris 2009.06 ??? no fix yet ???

2009-07-24 Thread Blake
This sounds like a bug I hit - if you have zvols on your pool and automatic snapshots enabled, the thousands of resultant snapshots have to be polled by devfsadm during boot, which takes a long time - several seconds per zvol. I removed the auto-snapshot property from my zvols and the slow boot stopped…
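
A sketch of switching the property off for a zvol; the pool/volume names are hypothetical:

  # stop the auto-snapshot service from snapshotting this zvol
  zfs set com.sun:auto-snapshot=false tank/myvol

  # verify the setting
  zfs get com.sun:auto-snapshot tank/myvol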

Re: [zfs-discuss] SSD's and ZFS...

2009-07-24 Thread Kyle McDonald
Tristan Ball wrote: It just so happens I have one of the 128G and two of the 32G versions in my drawer, waiting to go into our "DR" disk array when it arrives. Hi Tristan, just so I can be clear, what model/brand are the drives you were testing? -Kyle I dropped the 128G into a spare De…

Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread Rich Teer
On Fri, 24 Jul 2009, Michael McCandless wrote: > I've read in numerous threads that it's important to use ECC RAM in a > ZFS file server. > > My question is: is there any technical reason, in ZFS's design, that > makes it particularly important for ZFS to require ECC RAM? [...] > Some of the po…

Re: [zfs-discuss] ZFS Root Pool Recovery (from the FAQ)

2009-07-24 Thread dick hoogendijk
On Fri, 24 Jul 2009 15:55:02 +0200 dick hoogendijk wrote: > [share to local system] > westmark# zfs set sharenfs=on store/snaps I left out the options and changed the /store/snaps directory permissions to 777. Now the snapshot can be sent from the host but it gets u:g permissions like nobody:nobody…
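
For reference, a share line with explicit root access is a common alternative to the 777 workaround; the hostnames are from the thread, the dataset and snapshot names are placeholders:

  # on westmark: give arwen root access to the share
  zfs set sharenfs='rw=arwen,root=arwen' store/snaps

  # on arwen: send the snapshot into the share
  zfs send rpool/ROOT/be@backup > /net/westmark/store/snaps/be.snap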

[zfs-discuss] ZFS Root Pool Recovery (from the FAQ)

2009-07-24 Thread dick hoogendijk
Hi, I followed the FAQ on this, but get errors I can't understand. As I do want to make backups, I really hope someone can tell me what's wrong. == [ what I did ] [my remote system]

  westmark# zfs create store/snaps
  westmark# zfs list
  NAME   USED  AVAIL  REFER  MOUNTPOINT
  store  108…

Re: [zfs-discuss] Soon out of space (after upgrade to 2009.06)

2009-07-24 Thread Jean-Noël Mattern
Axelle, You can safely run "beadm destroy opensolaris" if everything's all right with your new opensolaris-1 boot env. You will get back your space (something around 7.18 GB). There's something strange with the mountpoint of rpool, which should be /rpool and not /a/rpool; maybe you'll have to f…
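
A sketch of the cleanup described above; the BE name is from the thread, and the mountpoint reset is an assumption about the cause:

  # confirm which BE is active, then drop the old one
  beadm list
  beadm destroy opensolaris

  # check the reclaimed space
  zpool list rpool

  # if the pool dataset is still mounted under /a from the upgrade:
  zfs set mountpoint=/rpool rpool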

[zfs-discuss] Soon out of space (after upgrade to 2009.06)

2009-07-24 Thread Axelle Apvrille
Hi, I have upgraded from 2008.11 to 2009.06. The upgrade process created a new boot environment (named opensolaris-1 in my case), but I am now running out of space in my ZFS pool. So, can I safely erase the old boot environment, and if so, will that get me back the disk space I need? BE…

[zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread Michael McCandless
I've read in numerous threads that it's important to use ECC RAM in a ZFS file server. My question is: is there any technical reason, in ZFS's design, that makes it particularly important for ZFS to require ECC RAM? Is ZFS especially vulnerable, more so than other filesystems, to bit errors in RAM…

Re: [zfs-discuss] ZFS Mirror cloning

2009-07-24 Thread Andrew Gabriel
Darren J Moffat wrote: Jorgen Lundman wrote: Jorgen Lundman wrote: However, "zpool detach" appears to mark the disk as blank, so nothing will find any pools (import, import -D etc). zdb -l will show labels, For kicks, I tried to demonstrate this does indeed happen, so I dd'ed the first 1024 1k blocks…

Re: [zfs-discuss] No files but pool is full?

2009-07-24 Thread Markus Kovero
Hi, thanks for pointing out the issue, we haven't run updates on the server yet. Yours, Markus Kovero

Re: [zfs-discuss] ZFS Mirror cloning

2009-07-24 Thread Jorgen Lundman
Darren J Moffat wrote: Maybe the 2-disk mirror is a special enough case that this could be worth allowing without having to deal with all the other cases as well. The only reason I think it is a special enough case is because it is the config we use for the root/boot pool. See 6849185 and…

Re: [zfs-discuss] No files but pool is full?

2009-07-24 Thread Henrik Johansson
On 24 Jul 2009, at 09.33, Markus Kovero wrote: During our tests we noticed very disturbing behavior; what would be causing this? The system is running the latest stable OpenSolaris. Any other means to remove ghost files rather than destroying the pool and restoring from backups? This looks like bug…

Re: [zfs-discuss] ZFS Mirror cloning

2009-07-24 Thread Darren J Moffat
Jorgen Lundman wrote: Jorgen Lundman wrote: However, "zpool detach" appears to mark the disk as blank, so nothing will find any pools (import, import -D etc). zdb -l will show labels, For kicks, I tried to demonstrate this does indeed happen, so I dd'ed the first 1024 1k blocks from the disk…
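
A sketch of the label check being discussed; the device name is hypothetical:

  # copy the front of the detached mirror half, as in the test above
  dd if=/dev/rdsk/c0t1d0s0 of=/var/tmp/front.img bs=1k count=1024

  # ZFS keeps four label copies, two at the start and two at the end
  # of the device, so zdb -l may still find the back pair
  zdb -l /dev/rdsk/c0t1d0s0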

Re: [zfs-discuss] No files but pool is full?

2009-07-24 Thread Markus Kovero
Yes, the server has been rebooted several times and there is no available space. Is it possible to somehow delete the ghosts that zdb sees? How can this happen? Yours, Markus Kovero

Re: [zfs-discuss] No files but pool is full?

2009-07-24 Thread Mattias Pantzare
On Fri, Jul 24, 2009 at 09:57, Markus Kovero wrote:

  > r...@~# zfs list -t snapshot
  > NAME                            USED  AVAIL  REFER  MOUNTPOINT
  > rpool/ROOT/opensola...@install  146M      -  2.82G  -
  > r...@~#

Then it is probably some process that has a deleted file open. You can find those…
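
A sketch of hunting those down with stock Solaris tools; the mount point and PID are placeholders:

  # list PIDs with files open on the full filesystem
  fuser -c /

  # inspect a candidate process's open files and their sizes
  pfiles <pid>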

Re: [zfs-discuss] No files but pool is full?

2009-07-24 Thread Markus Kovero
r...@~# zfs list -t snapshot
NAME                            USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/opensola...@install  146M      -  2.82G  -
r...@~#

Re: [zfs-discuss] No files but pool is full?

2009-07-24 Thread Mattias Pantzare
On Fri, Jul 24, 2009 at 09:33, Markus Kovero wrote: > During our tests we noticed very disturbing behavior, what would be causing > this? > > System is running latest stable opensolaris. > > Any other means to remove ghost files rather than destroying pool and > restoring from backups? You may have…

Re: [zfs-discuss] triple-parity: RAID-Z3

2009-07-24 Thread Ross
Interesting, so the more drive failures you have, the slower the array gets? Would I be right in assuming that the slowdown is only up to the point where FMA / ZFS marks the drive as faulted?

[zfs-discuss] No files but pool is full?

2009-07-24 Thread Markus Kovero
During our tests we noticed very disturbing behavior; what would be causing this? The system is running the latest stable OpenSolaris. Any other means to remove ghost files rather than destroying the pool and restoring from backups?

  r...@~# zpool status testpool
    pool: testpool
   state: ONLINE
   scrub: scrub…
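
A sketch of the usual accounting checks for a "full but empty" pool; the pool name is from the post:

  # compare pool-level and dataset-level usage
  zpool list testpool
  zfs list -r -t all testpool

  # if the datasets show far less than the pool, suspect snapshots,
  # clones, or deleted-but-still-open files (see the replies above)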