Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-15 Thread Joerg Schilling
Richard Elling  wrote:

> I think a picture is emerging that if you have enough RAM, the
> ARC is working very well. Which means that the ARC management
> is suspect.
>
> I propose the hypothesis that ARC misses are not prefetched.  The
> first time through, prefetching works.  For the second pass, ARC
> misses are not prefetched, so sequential reads go slower. 

You may be right: it could be that the cache is not being filled with new,
important data because it is already 100% full of unimportant data.


Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] Can't offline a RAID-Z2 device: "no valid replica"

2009-07-15 Thread Thomas Liesner
You can't replace it because this disk is still a valid member of the pool,
although it is marked faulty.
Put in a replacement disk, add it to the pool, and replace the faulty one with
the new disk.
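
In zpool terms that's roughly the following (pool and device names here are
hypothetical - substitute your own):

# replace the faulted disk with the freshly inserted one; resilvering
# starts automatically
zpool replace tank c1t2d0 c1t3d0
# watch the resilver progress
zpool status tank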

Regards,
Tom
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-15 Thread Ross
Yes, that makes sense.  For the first run, the pool has only just been mounted, 
so the ARC will be empty, with plenty of space for prefetching.

On the second run however, the ARC is already full of the data that we just 
read, and I'm guessing that the prefetch code is less aggressive when there is 
already data in the ARC.  Which for normal use may be what you want - it's 
trying to keep things in the ARC in case they are needed.

However that does mean that ZFS prefetch is always going to suffer performance 
degradation on a live system, although early signs are that this might not be 
so severe in snv_117.

I wonder if there is any tuning that can be done to counteract this?  Is there 
any way to tell ZFS to bias towards prefetching rather than preserving data in 
the ARC?  That may provide better performance for scripts like this, or for 
random access workloads.
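
One thing that can at least be observed today is how the balance currently
falls - this is instrumentation rather than a tuning knob, and assumes the
standard zfs kstats are present:

# prefetch (zfetch) activity: hits vs. misses
kstat -p zfs:0:zfetchstats:hits zfs:0:zfetchstats:misses
# current ARC size and target size
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c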

Also, could there be any generic algorithm improvements that would help?  Why
should ZFS keep data in the ARC if it hasn't been re-read?  This script uses
8MB files, and the ARC should have at least 1GB of RAM to work with.  That's a
minimum of 128 files in memory, none of which will have been read more than
once.  If we're reading a new file now, prefetching should be able to displace
any old object in the ARC that hasn't been re-used - in this case all 127
previous files should be candidates for replacement.

I wonder how that would interact with an L2ARC.  If that were fast enough I'd
certainly want to allocate more of the ARC to prefetching.

Finally, would it make sense for the ARC to always allow a certain percentage 
for prefetching, possibly with that percentage being tunable, allowing us to 
balance the needs of the two systems according to the expected usage?

Ross
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-15 Thread My D. Truong
> It would be good to see results from a few OpenSolaris users running a
> recent 64-bit kernel, and with fast storage to see if this is an
> OpenSolaris issue as well.

Bob,

Here's an example from an OpenSolaris machine: 2008.11 upgraded to the snv_117
devel release, an X4540 with 32GB RAM.  The file count was bumped up to 9000 so
the total data set is a little over double the RAM.

r...@deviant:~# ./zfs-cache-test.ksh gauss
System Configuration: Sun Microsystems Sun Fire X4540
System architecture: i386
System release level: 5.11 snv_117
CPU ISA list: amd64 pentium_pro+mmx pentium_pro pentium+mmx pentium i486 i386 
i86

Pool configuration:
  pool: gauss
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        gauss       ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c4t1d0  ONLINE       0     0     0
            c5t1d0  ONLINE       0     0     0
            c6t1d0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
            c8t1d0  ONLINE       0     0     0
            c9t1d0  ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c4t2d0  ONLINE       0     0     0
            c5t2d0  ONLINE       0     0     0
            c6t2d0  ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
            c8t2d0  ONLINE       0     0     0
            c9t2d0  ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c4t3d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0
            c6t3d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
            c8t3d0  ONLINE       0     0     0
            c9t3d0  ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c4t4d0  ONLINE       0     0     0
            c5t4d0  ONLINE       0     0     0
            c6t4d0  ONLINE       0     0     0
            c7t4d0  ONLINE       0     0     0
            c8t4d0  ONLINE       0     0     0
            c9t4d0  ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c4t5d0  ONLINE       0     0     0
            c5t5d0  ONLINE       0     0     0
            c6t5d0  ONLINE       0     0     0
            c7t5d0  ONLINE       0     0     0
            c8t5d0  ONLINE       0     0     0
            c9t5d0  ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c4t6d0  ONLINE       0     0     0
            c5t6d0  ONLINE       0     0     0
            c6t6d0  ONLINE       0     0     0
            c7t6d0  ONLINE       0     0     0
            c8t6d0  ONLINE       0     0     0
            c9t6d0  ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c4t7d0  ONLINE       0     0     0
            c5t7d0  ONLINE       0     0     0
            c6t7d0  ONLINE       0     0     0
            c7t7d0  ONLINE       0     0     0
            c8t7d0  ONLINE       0     0     0
            c9t7d0  ONLINE       0     0     0

errors: No known data errors

zfs create gauss/zfscachetest
Creating data file set (9000 files of 8192000 bytes) under /gauss/zfscachetest 
...
Done!
zfs unmount gauss/zfscachetest
zfs mount gauss/zfscachetest

Doing initial (unmount/mount) 'cpio -C 131072 -o > /dev/null'
144000768 blocks

real     9m15.87s
user     0m5.16s
sys      1m29.32s

Doing second 'cpio -C 131072 -o > /dev/null'
144000768 blocks

real    28m57.54s
user     0m5.47s
sys      1m50.32s

Feel free to clean up with 'zfs destroy gauss/zfscachetest'.
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] Can't offline a RAID-Z2 device: "no valid replica"

2009-07-15 Thread Laurent Blume
I don't have a replacement, but I don't want the disk to be used by the pool
right now: how do I do that?
That is exactly the point of the offline command as explained in the
documentation: disabling unreliable hardware, or removing it temporarily.
So is this a huge bug in the documentation?

What's the point of the command if its stated purpose doesn't work? I'm really
puzzled now.

Laurent
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-15 Thread Bob Friesenhahn

On Wed, 15 Jul 2009, Ross wrote:

Yes, that makes sense.  For the first run, the pool has only just 
been mounted, so the ARC will be empty, with plenty of space for 
prefetching.


I don't think that this hypothesis is quite correct.  If you use 
'zpool iostat' to monitor the read rate while reading a large 
collection of files with total size far larger than the ARC, you will 
see that there is no fall-off in read performance once the ARC becomes 
full.  The performance problem occurs when there is still metadata 
cached for a file but the file data has since been expunged from the 
cache.  The implication here is that zfs speculates that the file data 
will be in the cache if the metadata is cached, and this results in a 
cache miss as well as disabling the file read-ahead algorithm.  You 
would not want to do read-ahead on data that you already have in a 
cache.


Recent OpenSolaris seems to take a 2X performance hit rather than the
4X hit that Solaris 10 takes.  This may be due to optimizations of the
existing algorithms rather than a change in design.


I wonder if there is any tuning that can be done to counteract this? 
Is there any way to tell ZFS to bias towards prefetching rather than 
preserving data in the ARC?  That may provide better performance for 
scripts like this, or for random access workloads.


Recent zfs development has focused on keeping prefetch from hurting
applications like databases, where prefetch causes more data to be read
than is needed.  Since OpenSolaris now apparently includes an option
setting which blocks file data caching and prefetch, this seems to open
the door to more aggressive prefetch in the normal mode.
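
If that option is the per-dataset primarycache property (my assumption - the
setting isn't named above), the usage would look roughly like this:

# cache only metadata for this dataset, so file data is neither cached
# nor prefetched (the dataset name is hypothetical)
zfs set primarycache=metadata tank/somefs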


In summary, I agree with Richard Elling's hypothesis (which is the 
same as my own).


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-15 Thread Bob Friesenhahn

On Wed, 15 Jul 2009, My D. Truong wrote:


Here's an example of an OpenSolaris machine, 2008.11 upgraded to the 
117 devel release.  X4540, 32GB RAM.  The file count was bumped up 
to 9000 to be a little over double the RAM.


Your timings show a 3.1X hit, so it appears that the OpenSolaris
improvement is not as large as was assumed.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


Re: [zfs-discuss] Can't offline a RAID-Z2 device: "no valid replica"

2009-07-15 Thread Thomas Liesner
You could offline the disk if *this* disk (not the pool) had a replica.
Nothing wrong with the documentation - hmm, maybe it is a little misleading
here.  I walked into the same "trap".

The pool is not using the disk anymore anyway, so (from the zfs point of view)
there is no need to offline it.  If you want to stop the I/O system from trying
to access the disk, pull it out or wait until it gives up...
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-15 Thread Richard Elling



Bob Friesenhahn wrote:

On Wed, 15 Jul 2009, Ross wrote:

Yes, that makes sense.  For the first run, the pool has only just 
been mounted, so the ARC will be empty, with plenty of space for 
prefetching.


I don't think that this hypothesis is quite correct.  If you use 
'zpool iostat' to monitor the read rate while reading a large 
collection of files with total size far larger than the ARC, you will 
see that there is no fall-off in read performance once the ARC becomes 
full.


Unfortunately, "zpool iostat" doesn't really tell you anything about
performance.  All it shows is bandwidth. Latency is what you need
to understand performance, so use iostat.
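
For example, something along these lines shows the per-device service times
and queue depths while the test runs:

# extended per-device statistics, descriptive names, skip idle devices,
# 10-second samples
iostat -xnz 10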

The performance problem occurs when there is still metadata cached for 
a file but the file data has since been expunged from the cache.  The 
implication here is that zfs speculates that the file data will be in 
the cache if the metadata is cached, and this results in a cache miss 
as well as disabling the file read-ahead algorithm.  You would not 
want to do read-ahead on data that you already have in a cache.


I realized this morning that what I posted last night might be
misleading to the casual reader. Clearly the first time through
the data is prefetched and misses the cache.  On the second
pass, it should also miss the cache, if it were a simple cache.
But the ARC tries to be more clever and has ghosts -- where
the data is no longer in cache, but the metadata is.  I suspect
the prefetching is not being used for the ghosts.  The arcstats
will show this. As benr blogs,
   "These Ghosts lists are magic. If you get a lot of hits to the
   ghost lists, it means that ARC is WAY too small and that
   you desperately need either more RAM or an L2 ARC
   device (likely, SSD). Please note, if you are considering
   investing in L2 ARC, check this FIRST."
http://www.cuddletech.com/blog/pivot/entry.php?id=979
This is the explicit case presented by your test. This also
explains why the entry from the system with an L2ARC
did not have the performance "problem."

Also, another test would be to have two large files.  Read from
one, then the other, then from the first again.  Capture arcstats
from between the reads and see if the haunting stops ;-)
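
A minimal way to capture that between runs (assuming the stock arcstats
kstat; look at the deltas, not the absolute numbers):

# ghost-list hits - large jumps between passes suggest the ARC is too small
kstat -p zfs:0:arcstats:mru_ghost_hits zfs:0:arcstats:mfu_ghost_hits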
-- richard



Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-15 Thread Bob Friesenhahn

On Wed, 15 Jul 2009, Richard Elling wrote:


Unfortunately, "zpool iostat" doesn't really tell you anything about
performance.  All it shows is bandwidth. Latency is what you need
to understand performance, so use iostat.


You are still thinking about this as if it was a hardware-related 
problem when it is clearly not. Iostat is useful for analyzing 
hardware-related problems in the case where the workload is too much 
for the hardware, or the hardware is non-responsive. Anyone who runs 
this crude benchmark will discover that iostat shows hardly any disk 
utilization at all, latencies are low, and read I/O rates are low 
enough that they could be satisfied by a portable USB drive.  You can 
even observe the blinking lights on the front of the drive array and 
see that it is lightly loaded.  This explains why a two disk mirror is 
almost able to keep up with a system with 40 fast SAS drives.


This is the opposite situation from the zfs writes which periodically 
push the hardware to its limits.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-15 Thread Richard Elling



Bob Friesenhahn wrote:

On Wed, 15 Jul 2009, Richard Elling wrote:


Unfortunately, "zpool iostat" doesn't really tell you anything about
performance.  All it shows is bandwidth. Latency is what you need
to understand performance, so use iostat.


You are still thinking about this as if it was a hardware-related 
problem when it is clearly not. Iostat is useful for analyzing 
hardware-related problems in the case where the workload is too much 
for the hardware, or the hardware is non-responsive. Anyone who runs 
this crude benchmark will discover that iostat shows hardly any disk 
utilization at all, latencies are low, and read I/O rates are low 
enough that they could be satisfied by a portable USB drive.  You can 
even observe the blinking lights on the front of the drive array and 
see that it is lightly loaded.  This explains why a two disk mirror is 
almost able to keep up with a system with 40 fast SAS drives.


heh. What you would be looking for is evidence of prefetching.  If there
is a lot of prefetching, the actv will tend to be high and latencies
relatively low.  If there is no prefetching, actv will be low and latencies
may be higher.  This also implies that if you use IDE disks, which cannot
handle multiple outstanding I/Os, the performance will look similar for both
runs.

Or, you could get more sophisticated and use a dtrace script to look at
the I/O behavior and determine the latency between consecutive I/O
requests.  Something like iopattern is a good start; it doesn't measure
the time between requests, but that would be easy to add.
http://www.richardelling.com/Home/scripts-and-programs-1/iopattern
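
A minimal D sketch of the first half of that (per-I/O completion latency
only; the gap between successive requests would still need to be added):

# distribution of physical I/O latency, in nanoseconds
dtrace -n 'io:::start { ts[arg0] = timestamp; }
  io:::done /ts[arg0]/ { @["latency (ns)"] = quantize(timestamp - ts[arg0]); ts[arg0] = 0; }'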
-- richard



Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-15 Thread Bob Friesenhahn

On Wed, 15 Jul 2009, Richard Elling wrote:


heh. What you would be looking for is evidence of prefetching.  If 
there is a lot of prefetching, the actv will tend to be high and 
latencies relatively low.  If there is no prefetching, actv will be 
low and latencies may be higher. This also implies that if you use 
IDE disks, which cannot handle multiple outstanding I/Os, the 
performance will look similar for both runs.


Ok, here are some stats for the "poor" (initial "USB" rates) and 
"terrible" (sub-"USB" rates) cases.


"poor" (29% busy) iostat:

                    extended device statistics
    r/s    w/s    kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0     0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t0d0
    0.0    0.0     0.0    0.0  0.0  0.0    0.0    0.0   0   0 c1t0d0
    0.0    1.2     0.0   11.4  0.0  0.0    0.0    4.5   0   0 c1t1d0
   91.2    0.0 11654.7    0.0  0.0  0.8    0.0    9.2   0  27 c4t600A0B80003A8A0B096147B451BEd0
   95.0    0.0 12160.3    0.0  0.0  0.9    0.0    9.9   0  29 c4t600A0B800039C9B50A9C47B4522Dd0
   96.4    0.0 12333.1    0.0  0.0  0.9    0.0    9.5   0  29 c4t600A0B800039C9B50AA047B4529Bd0
   96.8    0.0 12377.9    0.0  0.0  0.9    0.0    9.5   0  30 c4t600A0B80003A8A0B096647B453CEd0
  100.4    0.0 12845.1    0.0  0.0  1.0    0.0    9.5   0  29 c4t600A0B800039C9B50AA447B4544Fd0
   93.4    0.0 11949.1    0.0  0.0  0.8    0.0    9.0   0  28 c4t600A0B80003A8A0B096A47B4559Ed0
   91.5    0.0 11705.9    0.0  0.0  0.9    0.0    9.7   0  28 c4t600A0B800039C9B50AA847B45605d0
   91.4    0.0 11680.3    0.0  0.0  0.9    0.0   10.1   0  29 c4t600A0B80003A8A0B096E47B456DAd0
   88.9    0.0 11366.7    0.0  0.0  0.9    0.0    9.7   0  27 c4t600A0B800039C9B50AAC47B45739d0
   94.3    0.0 12045.5    0.0  0.0  0.9    0.0    9.9   0  29 c4t600A0B800039C9B50AB047B457ADd0
   96.5    0.0 12339.5    0.0  0.0  0.9    0.0    9.3   0  28 c4t600A0B80003A8A0B097347B457D4d0
   87.9    0.0 11232.7    0.0  0.0  0.9    0.0   10.4   0  29 c4t600A0B800039C9B50AB447B4595Fd0
    0.0    0.0     0.0    0.0  0.0  0.0    0.0    0.0   0   0 c5t0d0
    0.0    0.0     0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t0d0
    0.0    0.0     0.0    0.0  0.0  0.0    0.0    0.0   0   0 c2t202400A0B83A8A0Bd31
    0.0    0.0     0.0    0.0  0.0  0.0    0.0    0.0   0   0 c3t202500A0B83A8A0Bd31
    0.0    0.0     0.0    0.0  0.0  0.0    0.0    0.0   0   0 freddy:vold(pid508)

"terrible" (8% busy) iostat:

                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c1t0d0
    0.0    1.8    0.0    1.0  0.0  0.0    0.0   26.6   0   1 c1t1d0
   26.8    0.0 3430.4    0.0  0.0  0.1    0.0    2.9   0   8 c4t600A0B80003A8A0B096147B451BEd0
   21.0    0.0 2688.0    0.0  0.0  0.1    0.0    3.9   0   8 c4t600A0B800039C9B50A9C47B4522Dd0
   24.0    0.0 3059.6    0.0  0.0  0.1    0.0    3.4   0   8 c4t600A0B800039C9B50AA047B4529Bd0
   27.6    0.0 3532.8    0.0  0.0  0.1    0.0    3.2   0   9 c4t600A0B80003A8A0B096647B453CEd0
   20.8    0.0 2662.4    0.0  0.0  0.1    0.0    3.1   0   6 c4t600A0B800039C9B50AA447B4544Fd0
   26.5    0.0 3392.0    0.0  0.0  0.1    0.0    2.6   0   7 c4t600A0B80003A8A0B096A47B4559Ed0
   20.6    0.0 2636.8    0.0  0.0  0.1    0.0    3.0   0   6 c4t600A0B800039C9B50AA847B45605d0
   22.9    0.0 2931.2    0.0  0.0  0.1    0.0    3.8   0   9 c4t600A0B80003A8A0B096E47B456DAd0
   21.4    0.0 2739.2    0.0  0.0  0.1    0.0    3.5   0   7 c4t600A0B800039C9B50AAC47B45739d0
   23.1    0.0 2944.4    0.0  0.0  0.1    0.0    3.7   0   9 c4t600A0B800039C9B50AB047B457ADd0
   24.9    0.0 3187.2    0.0  0.0  0.1    0.0    3.4   0   8 c4t600A0B80003A8A0B097347B457D4d0
   28.3    0.0 3622.4    0.0  0.0  0.1    0.0    2.8   0   8 c4t600A0B800039C9B50AB447B4595Fd0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c5t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c2t202400A0B83A8A0Bd31
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c3t202500A0B83A8A0Bd31
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 freddy:vold(pid508)

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-15 Thread Ross
Aaah, ok, I think I understand now.  Thanks Richard.

I'll grab the updated test and have a look at the ARC ghost results when I get 
back to work tomorrow.
-- 
This message posted from opensolaris.org


[zfs-discuss] Thank you.

2009-07-15 Thread Dennis Clarke

I want to express my thanks. My gratitude. I am not easily impressed
by technology anymore and ZFS impressed me this morning.

Sometime late last night a primary server of mine had a critical
fault. One of the PCI cards in a V480 was the cause, and for whatever
reason it destroyed the DC-DC power converters that powered the
primary internal disks. It also took down the whole machine and 12
zones.

I feared the worst and made the call for service at about midnight
last night. A Sun service tech said he could be there in 2 hours
or so but he asked me to check this and check that. The people at
the datacenter were happy to tell me there was a wrench light on
but other than that, they knew nothing.

This machine, like all critical systems I have, uses mirrored disks
in ZPools with multiple links of fibre to arrays.  I dreaded what
would happen when we tried to boot this box after all the dust was
blown out and hardware swapped.

Early this morning ... I watched the detailed diags run and finally
a nice clean ok prompt.

<*>
Hardware Power On

@(#)OBP 4.22.34 2007/07/23 13:01 Sun Fire 4XX
System is initializing with diag-switch? overrides.
Online: CPU0 CPU1 CPU2 CPU3*
Validating JTAG integrity...Done
.
.
.
CPU0: System POST Completed
Pass/Fail Status  = ...
ESB Overall Status  = ...

<*>
POST Reset
.
.
.

{3} ok show-post-results
System POST Results
Component:Results

CPU/Memory:Passed
IO-Bridge8:Passed
IO-Bridge9:Passed
GPTwo Slots:   Passed
Onboard FCAL:  Passed
Onboard Net1:  Passed
Onboard Net0:  Passed
Onboard IDE:   Passed
PCI Slots: Passed
BBC0:  Passed
RIO:   Passed
USB:   Passed
RSC:   Passed
POST Message:  POST PASS
{3} ok boot -s

Eventually I saw my login prompt. There were no warnings about data
corruption. No data loss. No noise at all in fact.   :-O

# zpool list
NAME      SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
fibre0    680G   654G  25.8G   96%  ONLINE  -
z0       40.2G   103K  40.2G    0%  ONLINE  -

# zpool status fibre0
  pool: fibre0
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on older software versions.
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
fibre0   ONLINE   0 0 0
  mirror ONLINE   0 0 0
c2t16d0  ONLINE   0 0 0
c5t0d0   ONLINE   0 0 0
  mirror ONLINE   0 0 0
c5t1d0   ONLINE   0 0 0
c2t17d0  ONLINE   0 0 0
  mirror ONLINE   0 0 0
c5t2d0   ONLINE   0 0 0
c2t18d0  ONLINE   0 0 0
  mirror ONLINE   0 0 0
c2t20d0  ONLINE   0 0 0
c5t4d0   ONLINE   0 0 0
  mirror ONLINE   0 0 0
c2t21d0  ONLINE   0 0 0
c5t6d0   ONLINE   0 0 0
spares
  c2t22d0AVAIL

errors: No known data errors
#

Not one error. No message about resilver this or inode that.

Everything booted flawlessly and I was able to see all my zones :

# bin/lz
-----------------------------------------------------------------------------
NAME   ID  STATUS      PATH          HOSTNAME   BRAND     IP
-----------------------------------------------------------------------------
z_001  4   running     /zone/z_001   pluto      solaris8  excl
z_002  -   installed   /zone/z_002   ldap01     native    shared
z_003  -   installed   /zone/z_003   openfor    solaris9  shared
z_004  6   running     /zone/z_004   gaspra     native    shared
z_005  5   running     /zone/z_005   ibisprd    native    shared
z_006  7   running     /zone/z_006   io         native    shared
z_007  1   running     /zone/z_007   nis        native    shared
z_008  3   running     /zone/z_008   callistoz  native    shared
z_009  2   running     /zone/z_009   loginz     native    shared
z_010  -   installed   /zone/z_010   venus      solaris8  shared
z_011  -   installed   /zone/z_011   adbs       solaris9  shared
z_012  -   installed   /zone/z_012   auroraux   native    shared
z_013  8   running     /zone/z_013   osiris     native    excl
z_014  -   installed   /zone/z_014   jira       native    shared

People love to complain. I see it all the time.

I downloaded this OS for free and I run it in production.
I have support and I am fine with paying for support contracts.
But someone somewhere needs to buy the ZFS guys some keg(s) of
whatever beer they want. Or maybe new Porsche Cayman S toys.

That would be gratitude as something more than just words.

Thank you.

-- 
Dennis Clarke

ps: the one funny thing is th

[zfs-discuss] Two disk issue

2009-07-15 Thread Keith Calvelli
I recently installed opensolaris with the intention of creating a home 
fileserver.  The machine I installed on has two 1TB drives, and I wanted to 
create a raidz config.  Unfortunately, I am very, very new to solaris and 
installed the OS on a single 100GB partition on the first disk, with the 
assumption that I would be able to create a second partition on that disk and 
use it in a zpool with the other disk.  I should have done some research before 
installing, as this does not seem to be a viable solution.

I understand that zfs likes whole disks, but do not want to install a third HD 
in the machine.  Is there a way I can accomplish what I am trying to do without 
having to install another HD?

Thanks
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] Two disk issue

2009-07-15 Thread Keith Calvelli
I found a guide that explains how to accomplish what I was looking to do:

http://www.kamiogi.net/Kamiogi/Frame_Dragging/Entries/2009/5/10_OpenSolaris_Disk_Partitioning_and_the_Free_Hog.html
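
For anyone else landing on this thread, the end state of that approach looks
roughly like the following (slice and disk names are hypothetical, and with
only two devices a mirror rather than raidz is the usual choice):

# pair the spare slice carved out on the boot disk with the second whole disk
zpool create tank mirror c0t0d0s7 c0t1d0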
-- 
This message posted from opensolaris.org


[zfs-discuss] An amusing scrub

2009-07-15 Thread Rich
Today, I ran a scrub on my rootFS pool.

I received the following lovely output:
# zpool status larger_root
  pool: larger_root
 state: ONLINE
 scrub: scrub completed after 307445734561825856h29m with 0 errors on
Wed Jul 15 21:49:02 2009
config:

        NAME           STATE     READ WRITE CKSUM
        larger_root    ONLINE       0     0     0
          c4t1d0s0     ONLINE       0     0     0

errors: No known data errors

For reference, assuming the universe is 14 billion years old (the
largest number I found)

(307 445 734 561 825 856 hours 29 minutes) / (14 billion years) =
2505.23371 lifetimes of the universe

So ZFS really is the Last (and First) word in filesystems... :)

- Rich

(Footnote: I ran ntpdate between starting the scrub and it finishing,
and time rolled backwards. Nothing more exciting.)


Re: [zfs-discuss] An amusing scrub

2009-07-15 Thread Mike Gerdts
On Wed, Jul 15, 2009 at 9:19 PM, Rich wrote:
> Today, I ran a scrub on my rootFS pool.
>
> I received the following lovely output:
> # zpool status larger_root
>   pool: larger_root
>  state: ONLINE
>  scrub: scrub completed after 307445734561825856h29m with 0 errors on
> Wed Jul 15 21:49:02 2009
> config:
>
>     NAME    STATE READ WRITE CKSUM
>     larger_root  ONLINE   0 0 0
>   c4t1d0s0  ONLINE   0 0 0
>
> errors: No known data errors
>
> For reference, assuming the universe is 14 billion years old (the
> largest number I found)
>
> (307 445 734 561 825 856 hours 29 minutes) / (14 billion years) =
> 2505.23371 lifetimes of the universe
>
> So ZFS really is the Last (and First) word in filesystems... :)

If you had a nickel and a half for every hour that it took, you would
have enough to pay this credit card bill.

http://www.cnn.com/2009/US/07/15/quadrillion.dollar.glitch/index.html
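
(307,445,734,561,825,856 hours x $0.075 per hour works out to roughly $23
quadrillion, which is about the size of the charge in that story.)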

> - Rich
>
> (Footnote: I ran ntpdate between starting the scrub and it finishing,
> and time rolled backwards. Nothing more exciting.)

And Visa is willing to waive the $15 over-the-limit fee associated with
the errant charge...

-- 
Mike Gerdts
http://mgerdts.blogspot.com/