[zfs-discuss] zfs hanging during reads

2009-12-16 Thread Tim
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   26.0    0.0 1791.6    0.0  0.1  0.0    3.6    0.6   1   2 c3t0d0
    1.0    0.0   85.5    0.0 26.9  1.0 26908.0 1000.2 100 100 c3t1d0
   26.0    0.0 1791.6    0.0  0.1  0.0    4.4    0.7   2   2 c3t2d0
   25.0    0.0 1791.6    0.0  0.1  0.0    5.5    0.8   2   2 c3t3d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0 27.0  1.0    0.0    0.0 100 100 c3t1d0

and then you get the odd bursts (but not from c3t1d0):

                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    1.0    0.0  128.0    0.0 34.0  1.0 33999.6 1000.0 100 100 c3t1d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    1.0    0.0   85.5    0.0 34.0  1.0 34003.1 1000.1 100 100 c3t1d0
 1562.9    2.0 87312.9    0.0  0.0  0.7    0.0    0.5   0  75 c3t2d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    1.0    0.0   85.5    0.0 34.0  1.0 33997.3  999.9 100 100 c3t1d0
  728.1    0.0 40771.2    0.0  0.0  0.3    0.0    0.4   0  33 c3t2d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    1.0    0.0   42.5    0.0 34.0  1.0 33999.6 1000.0 100 100 c3t1d0


During dd tests, the speed is great from all 4 devices (the 4th one is a fraction 
slower, though; I think it has a different firmware revision):

t...@opensolaris:/dev/dsk$ pfexec dd if=/dev/dsk/c3t0d0 of=/dev/null bs=128k count=10000
10000+0 records in
10000+0 records out
1310720000 bytes (1.3 GB) copied, 11.2845 s, 116 MB/s

t...@opensolaris:/dev/dsk$ pfexec dd if=/dev/dsk/c3t1d0 of=/dev/null bs=128k count=10000
10000+0 records in
10000+0 records out
1310720000 bytes (1.3 GB) copied, 10.9697 s, 119 MB/s

t...@opensolaris:/dev/dsk$ pfexec dd if=/dev/dsk/c3t2d0 of=/dev/null bs=128k count=10000
10000+0 records in
10000+0 records out
1310720000 bytes (1.3 GB) copied, 11.6693 s, 112 MB/s

t...@opensolaris:/dev/dsk$ pfexec dd if=/dev/dsk/c3t3d0 of=/dev/null bs=128k count=10000
10000+0 records in
10000+0 records out
1310720000 bytes (1.3 GB) copied, 14.2785 s, 91.8 MB/s


    r/s    w/s     kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
 2130.9    0.0 119329.2    0.0  0.0  0.9    0.0    0.4   1  92 c3t1d0
 2005.1    0.0 112287.7    0.0  0.0  0.9    0.0    0.4   1  88 c3t2d0
 2082.9    0.0 116642.0    0.0  0.0  0.9    0.0    0.4   1  93 c3t0d0
 1628.0    0.0  91168.3    0.0  0.0  0.9    0.0    0.6   0  94 c3t3d0

I've searched and read many OpenSolaris threads, bugs, etc., but I just can't get 
to the bottom of this.  To me it seems like a ZFS issue, but then it also looks 
like a hardware issue, since it's always the same drive that seems to hang things 
up (even though dd read tests from it are fine).
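
(For reference, the kind of quick per-drive checks used later in this thread; a 
sketch only, the commands are standard but the grep patterns are just illustrative:)

   iostat -En | egrep 'Errors|Size'    # per-device soft/hard/transport error counters
   pfexec fmdump -eV | head -40        # recent FMA error telemetry, if any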

I'd really appreciate any ideas or things to try.  My read throughput is around 
300 KB/sec, which is next to nothing.  I've got all this data in the pool and I 
can't access it.

Cheers
Tim


Re: [zfs-discuss] zfs hanging during reads

2009-12-16 Thread Tim
The closest bug I can find is this one: 6772082 (ahci: ZFS hangs when IO happens).
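
(A quick way to check whether the disks are actually attached via the ahci driver 
or via legacy pci-ide, which is what that bug hinges on; a sketch, the grep pattern 
is just illustrative:)

   prtconf -D | egrep -i 'ahci|pci-ide'    # shows which driver each device node is bound to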


Re: [zfs-discuss] zfs hanging during reads

2009-12-16 Thread Tim
Bob, that was my initial thought as well when I saw the problem stay with the 
drive after moving it to a different SATA port, but then it doesn't explain why 
a dd test runs fine.  I guess I could try a longer dd test; my earlier one could 
have just been lucky and hit an OK part of the disk.
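
(For what it's worth, a whole-disk read pass would look something like the sketch 
below, assuming the suspect disk is still c3t1d0; reading the raw p0 device covers 
the entire drive rather than just a slice:)

   pfexec dd if=/dev/rdsk/c3t1d0p0 of=/dev/null bs=1024k
   # and in another terminal, watch whether asvc_t/%b spike partway through:
   iostat -xnz 1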

Would a scrub help, or would that just complicate things if the drive is bad and 
the scrub then runs into problems?

I haven't swapped out the disk yet as I don't have a spare, but I'm thinking 
I'll have to go buy another one in order to do the swap (and send the potentially 
faulty one back).  The drives have only had a few days of use.

Tim


Re: [zfs-discuss] zfs hanging during reads

2009-12-16 Thread Tim
I'll dd the whole disk tonight.  I was thinking it was bad spots, given that I can 
copy some files (admittedly small ones) better than others, but then again, seeing 
the throughput sit at 349 KB/sec on many different files is rather odd.  And even 
for the files that do manage to copy OK, the throughput is still stop/start all the 
way: the speed comes in bursts, then stops, then starts.  Suspecting bad sectors is 
why I ran the Samsung diagnostics on the drive (non-destructive), and it scanned 
the entire surface with no problems.


Re: [zfs-discuss] zfs hanging during reads

2009-12-16 Thread Tim
I never formatted these drives when I built the box; I just added them to ZFS.
I can try format > analyze > read as well.
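
(For anyone following along, the interactive sequence is roughly the sketch below; 
the menus are from format(1M), and the disk you pick is whichever one format lists 
for the suspect drive:)

   pfexec format
     # select the suspect disk from the menu, then:
     format> analyze
     analyze> read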


Re: [zfs-discuss] zfs hanging during reads

2009-12-16 Thread Tim
Hmm, interesting...  I haven't tried dd yet; I've just been running a read test via 
format > analyze, and it's showing up the slowdown.

It starts off reading fast.  The next time I looked at it, it was reading slowly and 
was up to 5210.  I started another read test on one of the other drives and within a 
few minutes it was reading past the suspect one.  I restarted the read test on the 
suspect drive because I wanted to see at exactly what count it starts to read slowly: 
it got to count 5120 and then it slowed right down and became erratic.  I haven't let 
it run much longer than that.  Interesting, though... I dare say there are no bad 
sectors, but something is amiss with the scanning.  Has anyone ever seen a drive 
behave like this before?  I thought the count being 512x was a little odd too.
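
(One thing that might help pin it down: leave iostat running in a second terminal 
while the analyze read passes through that region, and see whether asvc_t on the 
suspect drive jumps at the same count every time.  A sketch:)

   iostat -xnz 1 | egrep 'device|c3t1d0'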

Going to do some more tests.

Tim


Re: [zfs-discuss] zfs hanging during reads

2009-12-16 Thread Tim
It slows down at exactly the same spot?  I've just run the test a number of times, 
and without fail, at exactly the same spot the read just crawls along erratically.  
It's at approx 51256xxx.


Re: [zfs-discuss] zfs hanging during reads

2009-12-16 Thread Tim
I'm just doing a surface scan via the Samsung utility to see if I see the same 
slowdown...


Re: [zfs-discuss] zfs hanging during reads

2009-12-16 Thread Tim
Hmm, I'm not seeing the same slowdown when I boot from the Samsung ESTool CD and 
run a diagnostic that performs a surface scan...
Could this still be a hardware issue, or possibly something to do with the Solaris 
data format on the disk?


Re: [zfs-discuss] zfs hanging during reads

2009-12-17 Thread Tim
fmdump shows errors on a different drive, and none on the one that has this 
slow read problem:

Nov 27 2009 20:58:28.670057389 ereport.io.scsi.cmd.disk.recovered
nvlist version: 0
class = ereport.io.scsi.cmd.disk.recovered
ena = 0xbeb7f4dd531
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = dev
device-path = /p...@0,0/pci1043,8...@9/d...@2,0
devid = id1,s...@sata_samsung_hd753lj___s1pwj1cq801987
(end detector)

driver-assessment = recovered
op-code = 0x28
cdb = 0x28 0x0 0x4 0x80 0x32 0x80 0x0 0x0 0x80 0x0
pkt-reason = 0x0
pkt-state = 0x1f
pkt-stats = 0x50
__ttl = 0x1
__tod = 0x4b0fa2c4 0x27f043ad


The serial number of the suspect drive is S1PWJ1CQ801987.

iostat -En shows:

c0d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0 
Model: ST360021A   Revision:  Serial No: 3HR2AG72  Size: 60.02GB 
<60020932608 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0 
Illegal Request: 0 
c6d1 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0 
Model: SAMSUNG HD154UI Revision:  Serial No: S1Y6J1KS720622  Size: 1500.30GB 
<1500295200768 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0 
Illegal Request: 0 
c3t0d0   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0 
Vendor: ATA  Product: SAMSUNG HD753LJ  Revision: 1113 Serial No:  
Size: 750.16GB <750156374016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0 
Illegal Request: 28 Predictive Failure Analysis: 0 
c3t1d0   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0 
Vendor: ATA  Product: SAMSUNG HD753LJ  Revision: 1113 Serial No:  
Size: 750.16GB <750156374016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0 
Illegal Request: 49 Predictive Failure Analysis: 0 
c3t2d0   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0 
Vendor: ATA  Product: SAMSUNG HD753LJ  Revision: 1113 Serial No:  
Size: 750.16GB <750156374016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0 
Illegal Request: 28 Predictive Failure Analysis: 0 
c3t3d0   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0 
Vendor: ATA  Product: SAMSUNG HD753LJ  Revision: 1110 Serial No:  
Size: 750.16GB <750156374016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0 
Illegal Request: 28 Predictive Failure Analysis: 0 
c0t1d0   Soft Errors: 0 Hard Errors: 30 Transport Errors: 0 
Vendor: ATAPIProduct: CD-RW 52X24  Revision: F.JZ Serial No:  
Size: 0.00GB <0 bytes>
Media Error: 0 Device Not Ready: 30 No Device: 0 Recoverable: 0 
Illegal Request: 0 Predictive Failure Analysis: 0


Re: [zfs-discuss] zfs hanging during reads

2009-12-18 Thread Tim
I tried to buy another drive today (750GB or 1TB) to swap out c3t1d0 (750GB), but 
could not find one quickly.  So as a temporary measure I was thinking of using my 
1.5TB disk instead, as it can be repurposed at the moment (it's currently attached 
to a sil3114 controller as c6d1p0).

Would it be ok to do a zpool replace with this 1.5TB disk attached to the 
sil3114 controller (it's only a SATA1 controller, whereas motherboard ports are 
SATAII):

"zpool replace storage c3t1d0 c6d1p0"

or

should I just physically disconnect the suspect 750GB drive (c3t1d0), plug the 
1.5TB disk into that port, and then run:

"zpool replace storage c3t1d0"

What's the best approach?  And can you please confirm the correct steps to take.  
If it's the latter option, do I need to do anything about taking the pool offline 
at all, or is it just a matter of shutting the box down, swapping the cables over, 
starting it up, and, when it comes up degraded (assuming it will), running the 
zpool replace command?
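
(For reference, a minimal sketch of the second option, using the pool and device 
names above; the exact sequence here is my assumption of the usual procedure, not 
a confirmed recipe:)

   pfexec zpool offline storage c3t1d0     # stop the pool from using the suspect disk
   # shut down, swap the 1.5TB disk onto that SATA port, boot back up, then:
   pfexec zpool replace storage c3t1d0     # resilver onto whatever now answers at c3t1d0
   pfexec zpool status storage             # watch the resilver progress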

I'm feeling a little uneasy about the whole thing, as I have no backup at the 
moment other than this array being RAIDZ... my 1.5TB disk was to be a second 
copy/backup of most of the RAIDZ data (not the best backup, but at least 
something), but so far I can't read well enough from the RAIDZ pool to make that 
backup... and it's a few years of home video.  I think I'd just fall to pieces if 
I lost it.

I'd greatly appreciate the advice.
Tim


Re: [zfs-discuss] zfs hanging during reads

2009-12-18 Thread Tim
Hi Cindy, 

I had similar concerns; however, I wasn't aware of that bug.  Before I bought this 
controller I had read a number of people saying they had problems with the sil3114, 
and others saying they didn't.  I was originally after a sil3124 (SATA II), but 
given that my future drives didn't need the extra speed, I settled on the cheaper 
sil3114; the sil3124 was 5 times the cost.  A friend was running a sil3112 (2-port 
SATA I card) and that appeared to be fine, so I bought the sil3114.

As far as I've seen, the card is fine.  I have already connected the 1.5TB disk to 
the sil3114 and created a pool on it using snv_111.  From memory, I think I even 
booted back to snv_101 and it still recognised it.  It's after I started copying 
files to this new pool that I found my read problem on the main 'storage' pool.  
That was my plan for a backup device: just a single drive to start with, in its own 
pool off the sil3114, which I could then add to as needed; hence the sil3114 and 
the 1.5TB disk.

I'm fairly sure, though, that when I created the new pool 'backup1' I used the 
device c6d1p0, not p1.  I'll try c6d1p1 today to make sure that is OK.  Is there a 
problem with using c6d1p0?

Do you or anyone else know, when a disk is replaced via:

zpool replace pool_name old_disk new_disk

when the old_disk actually gets removed from the pool?  Is it before the new_disk 
starts its resilver, or after the new_disk has been resilvered?

Once I get the drives swapped in the array, I was going to reformat the suspect 
750GB drive and give it another workout to see if the read slowdown persists, 
before sending it back to Samsung.  The drive is 14 months old, but it's probably 
had only 2 weeks of total use.

Cindy, thanks for the reply, I really appreciate it.
Tim


Re: [zfs-discuss] zfs hanging during reads

2009-12-18 Thread Tim
There's actually no c6d1 device in /dev/dsk, only:

t...@opensolaris:/dev/dsk$ ls -l c6d1*
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1p0 -> 
../../devices/p...@0,0/pci10de,5...@8/pci-...@6/i...@0/c...@1,0:q
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1p1 -> 
../../devices/p...@0,0/pci10de,5...@8/pci-...@6/i...@0/c...@1,0:r
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1p2 -> 
../../devices/p...@0,0/pci10de,5...@8/pci-...@6/i...@0/c...@1,0:s
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1p3 -> 
../../devices/p...@0,0/pci10de,5...@8/pci-...@6/i...@0/c...@1,0:t
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1p4 -> 
../../devices/p...@0,0/pci10de,5...@8/pci-...@6/i...@0/c...@1,0:u
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1s0 -> 
../../devices/p...@0,0/pci10de,5...@8/pci-...@6/i...@0/c...@1,0:a
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1s1 -> 
../../devices/p...@0,0/pci10de,5...@8/pci-...@6/i...@0/c...@1,0:b
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1s10 -> 
../../devices/p...@0,0/pci10de,5...@8/pci-...@6/i...@0/c...@1,0:k
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1s11 -> 
../../devices/p...@0,0/pci10de,5...@8/pci-...@6/i...@0/c...@1,0:l
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1s12 -> 
../../devices/p...@0,0/pci10de,5...@8/pci-...@6/i...@0/c...@1,0:m
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1s13 -> 
../../devices/p...@0,0/pci10de,5...@8/pci-...@6/i...@0/c...@1,0:n
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1s14 -> 
../../devices/p...@0,0/pci10de,5...@8/pci-...@6/i...@0/c...@1,0:o
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1s15 -> 
../../devices/p...@0,0/pci10de,5...@8/pci-...@6/i...@0/c...@1,0:p
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1s2 -> 
../../devices/p...@0,0/pci10de,5...@8/pci-...@6/i...@0/c...@1,0:c
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1s3 -> 
../../devices/p...@0,0/pci10de,5...@8/pci-...@6/i...@0/c...@1,0:d
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1s4 -> 
../../devices/p...@0,0/pci10de,5...@8/pci-...@6/i...@0/c...@1,0:e
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1s5 -> 
../../devices/p...@0,0/pci10de,5...@8/pci-...@6/i...@0/c...@1,0:f
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1s6 -> 
../../devices/p...@0,0/pci10de,5...@8/pci-...@6/i...@0/c...@1,0:g
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1s7 -> 
../../devices/p...@0,0/pci10de,5...@8/pci-...@6/i...@0/c...@1,0:h
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1s8 -> 
../../devices/p...@0,0/pci10de,5...@8/pci-...@6/i...@0/c...@1,0:i
lrwxrwxrwx 1 root root 62 2009-10-27 18:03 c6d1s9 -> 
../../devices/p...@0,0/pci10de,5...@8/pci-...@6/i...@0/c...@1,0:j


Re: [zfs-discuss] zfs hanging during reads

2009-12-18 Thread Tim
Should I use slice 2 instead of p0:

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0          (0/0/0)            0
  1 unassigned    wm       0               0          (0/0/0)            0
  2     backup    wu       0 - 60796       1.36TB     (60797/0/0) 2930111415
  3 unassigned    wm       0               0          (0/0/0)            0
  4 unassigned    wm       0               0          (0/0/0)            0
  5 unassigned    wm       0               0          (0/0/0)            0
  6 unassigned    wm       0               0          (0/0/0)            0
  7 unassigned    wm       0               0          (0/0/0)            0
  8       boot    wu       0 - 0           23.53MB    (1/0/0)        48195
  9 alternates    wm       1 - 2           47.07MB    (2/0/0)        96390

Or should I create a proper c6d1 device?


Re: [zfs-discuss] zfs hanging during reads

2009-12-18 Thread Tim
I had referred to this blog entry:
http://blogs.sun.com/observatory/entry/which_disk_devices_to_use


Re: [zfs-discuss] zfs hanging during reads

2009-12-18 Thread Tim
Hmm, OK, the replace with the existing drive still in place wasn't the best 
option... it's replacing, but very slowly, as it's reading from that suspect disk:

  pool: storage
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h3m, 0.00% done, 2895h58m to go
config:

        NAME           STATE     READ WRITE CKSUM
        storage        ONLINE       0     0     0
          raidz1       ONLINE       0     0     0
            c3t0d0     ONLINE       0     0     0
            replacing  ONLINE       0     0     0
              c3t1d0   ONLINE       0     0     0
              c6d1p0   ONLINE       0     0     0  13.3M resilvered
            c3t2d0     ONLINE       0     0     0
            c3t3d0     ONLINE       0     0     0


t...@opensolaris:~$ iostat -xnz 1
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   12.2    5.4  845.4  105.5  0.8  0.3   45.7   14.4   9  15 c0d0
    0.2    8.9    1.8   36.2  0.0  0.0    4.4    1.1   0   1 c6d1
    1.5    1.1   89.7    3.4  0.1  0.0   24.0    4.7   1   1 c3t0d0
    1.5    1.1   89.6    3.4  5.2  0.5 1986.6  196.2  51  52 c3t1d0
    1.6    1.3   92.1    3.2  0.1  0.0   19.0    3.8   1   1 c3t2d0
    1.5    1.2   89.0    2.8  0.1  0.0   21.3    4.4   1   1 c3t3d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0 13.0  1.0    0.0    0.0 100 100 c3t1d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    2.0    0.0   85.5  0.0  0.0    0.0    1.1   0   0 c6d1
    1.0    0.0   85.5    0.0 12.8  1.0 12838.7  999.7 100 100 c3t1d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0 12.0  1.0    0.0    0.0 100 100 c3t1d0


Can I stop the resilver, or maybe shut the box down to disconnect the dead disk?
Is it OK to reboot the box while it's doing a resilver, i.e. will it continue when 
it's back up?
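
(For what it's worth, a replace that is still in progress can normally be cancelled 
by detaching the new device from the temporary "replacing" vdev; a sketch using the 
names from the status output above:)

   pfexec zpool detach storage c6d1p0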


Re: [zfs-discuss] zfs hanging during reads

2009-12-18 Thread Tim
slow and steady wins the race ?

I ended up doing a zpool remove of c6d1p0.  This stopped the replace, removed 
c6d1p0, and left the array doing a scrub, which by my rough calculations was going 
to take around 12 months, and climbing!

So I shut the box down, disconnected the SATA cable from c3t1d0, and restarted the 
box... it then hung on startup, wouldn't mount any ZFS volumes, and showed no disk 
activity.

So I shut the box down again, reconnected the SATA cable to c3t1d0 and 
restarted the box.  It came up (phew). 
 
zpool status showed the scrub was running again:

scrub: scrub in progress for 0h3m, 0.00% done, 8769h2m to go
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c3t0d0  ONLINE       0     0     0
            c3t1d0  ONLINE       0     0     0
            c3t2d0  ONLINE       0     0     0
            c3t3d0  ONLINE       0     0     0

So this time I offlined c3t1d0:

        NAME        STATE     READ WRITE CKSUM
        storage     DEGRADED     0     0     0
          raidz1    DEGRADED     0     0     0
            c3t0d0  ONLINE       0     0     0
            c3t1d0  OFFLINE      0     0     0
            c3t2d0  ONLINE       0     0     0
            c3t3d0  ONLINE       0     0     0

Then I ran the zpool replace, and now it's resilvering:

 scrub: resilver in progress for 0h15m, 4.10% done, 6h7m to go
config:

        NAME           STATE     READ WRITE CKSUM
        storage        DEGRADED     0     0     0
          raidz1       DEGRADED     0     0     0
            c3t0d0     ONLINE       0     0     0
            replacing  DEGRADED     0     0  570K
              c3t1d0   OFFLINE      0     0     0
              c6d1p0   ONLINE       0     0     0  23.7G resilvered
            c3t2d0     ONLINE       0     0     0
            c3t3d0     ONLINE       0     0     0
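
(So, recapping the sequence that finally worked, as a sketch using the device names 
from the output above:)

   pfexec zpool offline storage c3t1d0
   pfexec zpool replace storage c3t1d0 c6d1p0
   pfexec zpool status storage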


I don't care if it takes a day, as long as it works :o)
Thanks so far for the advice, I'll let you know how it goes.


Re: [zfs-discuss] We have a driver for the MM-5425CN

2008-07-18 Thread Tim
On Thu, Jul 10, 2008 at 5:51 AM, Ross <[EMAIL PROTECTED]> wrote:

> Hey everybody,
>
> Well, my pestering paid off.  I have a Solaris driver which you're welcom
> to download, but please be aware that it comes with NO SUPPORT WHATSOEVER.
>
> I'm very grateful to the chap who provided this driver, please don't abuse
> his generosity by calling Micro Memory or Vmetro if you have any problems.
>
> I've no idea which version of Solaris this was developed for, how many
> other cards it works with, or if it even works in the current version of
> solaris.  Use at your own risk.
> http://www.averysilly.com/Micro_Memory_MM-5425CN.zip
>
> Ross
>


Ross,

Have you gotten a card and had a chance to test this yet?  Does it work with
the new(er) SNV builds?

--Tim


Re: [zfs-discuss] Interesting view on flash

2008-07-18 Thread Tim
Except the article was redacted.  The reason the battery life decreased was that 
the throughput increased so much that it drove the CPU usage up, thus bringing 
down battery life.  It just goes to show how SEVERELY I/O bound we currently are.  
The flash itself was using LESS power.

--tim



On 7/18/08, Bob Friesenhahn <[EMAIL PROTECTED]> wrote:
> On Fri, 18 Jul 2008, Al Hopper wrote:
>
>> If you look at the overall I/O throughput in Mb/Sec over the years and
>> compare it with the advances in server memory size or SPEC-int rates
>> over the years, the I/O throughput curve looks *almost* flat - as the
>> delta between the other two curves continues to widen.  What is even
>> worse, is that to improve the I/O curve, especially in terms of I/O
>> Ops/Sec (IOPS) is very, very expensive.   Flash drives - or even a
>
> While I certainly agree with you regarding IOPS, I would not agree
> that I/O performance gains have been flat.  Sequential I/O performance
> has been increasing over the years similar to memory performance.
> 220MB/second is a lot faster than 3MB/second.  We are now in a
> situation where per-core CPU performance has stalled (actually
> retreated by 30%), RAM memory performance has almost stalled, and disk
> seek times have completely stalled.
>
> Solid-state devices will definitely help dramatically with IOPS (by
> eliminating the seek) but it seems that the vast majority of solid
> state devices will still be much slower at sequential write I/O than
> traditional hard drives.  It is fully conceivable that in a couple of
> years, hard drives will be supporting 400MB/second with 4TB of storage
> each.
>
> As far as saving power goes, a recent study found that FLASH drives
> were consuming considerably more power than the hard drives they
> replaced.  This study likely applies to laptop/desktop applications
> rather than server applications.
>
> Bob
> ==
> Bob Friesenhahn
> [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
>


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-23 Thread Tim
On Wed, Jul 23, 2008 at 2:37 PM, Steve <[EMAIL PROTECTED]> wrote:

> I'm a fan of ZFS since I've read about it last year.
>
> Now I'm on the way to build a home fileserver and I'm thinking to go with
> Opensolaris and eventually ZFS!!
>
> Apart from the other components, the main problem is to choose the
> motherboard. The offer is incredibly high and I'm lost.
>
> Minimum requisites should be:
> - working well with Open Solaris ;-)
> - micro ATX (I would put in a little case)
> - low power consumption but more important reliable (!)
> - with Gigabit ethernet
> - 4+ (even better 6+) sata 3gb controller
>
> Also: what type of RAM to select toghether? (I would chose if good ECC, but
> the rest?)
>
> Does it make sense? What are the possibilities?
>


Just wondering what case you're going to put a micro-atx motherboard in
that's going to support 6+ drives without overheating.


Re: [zfs-discuss] Supermicro AOC-USAS-L8i

2008-08-04 Thread Tim
On Mon, Aug 4, 2008 at 8:02 AM, Ross <[EMAIL PROTECTED]> wrote:

> Did anybody ever get this card working?  SuperMicro only have Windows and
> Linux drivers listed on their site.  Do Sun's generic drivers work with this
> card?
>
>

Still waiting to buy a set.  I've already got the Supermicro Marvell-based cards, 
as well as a motherboard with 2x PCI-X 133MHz slots and no PCI-E.  It'll be a 
motherboard plus two of the cards when I do buy them.  I haven't really had the 
motivation or the cash to do so yet.  I've been keeping my eye out for a board that 
supports the Opteron 165 and the wider-lane dual PCI-E slots that isn't strictly a 
*gaming* board.  I'm starting to think the combination doesn't exist.


Re: [zfs-discuss] Supermicro AOC-USAS-L8i

2008-08-04 Thread Tim
Thanks for the link.  I'll consider those, but it still means a new CPU, and
it appears it does not support any of the opteron line-up.



On Mon, Aug 4, 2008 at 3:58 PM, Brandon High <[EMAIL PROTECTED]> wrote:

> On Mon, Aug 4, 2008 at 6:49 AM, Tim <[EMAIL PROTECTED]> wrote:
> > really had the motivation or the cash to do so yet.  I've been keeping my
> > eye out for a board that supports the opteron 165 and the wider lane dual
> > pci-E slots that isn't stricly a *gaming* board.  I'm starting to think
> the
> > combination doesn't exist.
>
> The AMD 790GX boards are starting to show up:
> http://www.newegg.com/Product/Product.aspx?Item=N82E16813128352
>
> Dual 8x PCIe slots, integrated video and 6 AHCI SATA ports.
>
> -B
>
> --
> Brandon High [EMAIL PROTECTED]
> "The good is the enemy of the best." - Nietzsche
>


Re: [zfs-discuss] FW: Supermicro AOC-SAT2-MV8 hang when drive removed

2008-08-14 Thread Tim
I don't have any extra cards lying around and can't really take my server
down, so my immediate question would be:
Is there any sort of PCI bridge chip on the card?  I know in my experience
I've seen all sorts of headaches with less than stellar bridge chips.
Specifically some of the IBM bridge chips.

Food for thought.

--Tim





On Thu, Aug 14, 2008 at 5:24 AM, Ross <[EMAIL PROTECTED]> wrote:

> This is the problem when you try to write up a good summary of what you
> found.  I've got pages and pages of notes of all the tests I did here, far
> more than I could include in that PDF.
>
> What makes me think it's driver is that I've done much of what you
> suggested.  I've replicated the exact same behaviour on two different cards,
> individually and with both cards attached to the server.  It's also
> consistent across many different brands and types of drive, and occurs even
> if I have just 4 drives connected out of 8 on a single controller.
>
> I did wonder whether it could be hardware related, so I tested plugging and
> unplugging drives while the computer was booting.  While doing that and
> hot-plugging drives in the BIOS, at no point did I see any hanging of the
> system, which tends to confirm my thought that it's driver related.
>
> I was also able to power on the system with all drives connected, wait for
> the controllers to finish scanning the drives, then remove a few at the GRUB
> boot screen.  From there when I continue to boot Solaris, the correct state
> is detected every time for all drives.
>
> Based on that, it appears that it's purely a problem with detection of the
> insertion / removal event after Solaris has loaded its drivers.  Initial
> detection is fine, it's purely hot swap detection on ports 0-5 that fails.
>  I know it sounds weird, but trust me I checked this pretty carefully, and
> experience has taught me never to assume computers won't behave in odd ways.
>
> I do appreciate my diagnosis may be wrong as I have very limited knowledge
> of Solaris' internals, but that is my best guess right now.
>
> Ross
>


Re: [zfs-discuss] FW: Supermicro AOC-SAT2-MV8 hang when drive removed

2008-08-15 Thread Tim
You could always try FreeBSD :)

--Tim

On Fri, Aug 15, 2008 at 9:44 AM, Ross <[EMAIL PROTECTED]> wrote:

> Haven't a clue, but I've just gotten around to installing windows on this
> box to test and I can confirm that hot plug works just fine in windows.
>
> Drives appear and dissappear in device manager the second I unplug the
> hardware.  Any drive, either controller.  So far I've done a couple of dozen
> removals, pulling individual drives, or as many as half a dozen at once.
>  I've even gone as far as to immediately pull a drive I only just connected.
>  Windows has no problems at all.
>
> Unfortunately for me, Windows doesn't support ZFS...  right now it's
> looking a whole load more stable.
>
> Ross
>
>
>


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-18 Thread Tim
Long story short,

There isn't a project, there are no plans to start a project, and don't
expect to see it in Solaris10 in this lifetime without some serious pushback
from large Sun customers.  Even then, it's unlikely to happen anytime soon
due to the technical complications of doing so reliably.

--Tim




On Mon, Aug 18, 2008 at 6:06 AM, Bernhard Holzer <[EMAIL PROTECTED]>wrote:

> Hi,
>
> I am searching for a roadmap for shrinking a pool. Is there some
> project, where can I find informations, when will it be implemented in
> Solars10
>
> Thanks
> Regards
> Bernhard
>
> --
> Bernhard Holzer
> Sun Microsystems Ges.m.b.H.
> Wienerbergstraße 3/7
> A-1100 Vienna, Austria
> Phone x60983/+43 1 60563 11983
> Mobile +43 664 60563 11983
> Fax +43 1 60563  11920
> Email [EMAIL PROTECTED]
> Handelsgericht Wien, Firmenbuch-Nr. FN 186250 y
>
>


Re: [zfs-discuss] SSD update

2008-08-20 Thread Tim
On Wed, Aug 20, 2008 at 5:17 PM, Bob Friesenhahn <
[EMAIL PROTECTED]> wrote:

> On Wed, 20 Aug 2008, Al Hopper wrote:
>
> > It looks like Intel has a huge hit (product) on its hands with the
> > latest SSD product announcements.  No pricing yet ... but the specs
> > will push computer system IO bandwidth performance to numbers only
> > possible today with extremely expensive RAM based disk subsystems.
> >
> > SSDs + ZFS - a marriage made in (computer) heaven!
>
> Where's the beef?
>
> I sense a lot of smoke and mirrors here, similar to Intel's recent CPU
> "announcements" which don't even reveal the number of cores.  No
> prices and funny numbers that the writers of technical articles can't
> seem to get straight.
>
> Obviously these are a significant improvement for laptop drives but
> how many laptop users have a need for 11,000 IOPs and 170MB/s?  It
> seems to me that most laptops suffer from insufficent RAM and
> low-power components which don't deliver much performance.  The CPUs
> which come in laptops are not going to be able to process 170MB/s.
>
> What about the dual-ported SAS models for enterprise use?
>
> Bob
> ==
> Bob Friesenhahn
> [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
>



I don't know about that.  I just went from an SSD back to a SATA drive because the 
SSD started failing in less than a month (I'm having trouble believing this great 
wear-leveling they talk about is working properly...), and the SATA drive is dog 
slow in comparison.  The biggest issue is seek times.  Opening apps/directories, 
there is a VERY noticeable difference between the SSD and this drive.

The user experience is drastically improved with the SSD imho.  Of course,
the fact that it started giving me i/o errors after just 3 weeks means it's
going to be RMA'd and won't find a home back in my laptop anytime soon.

This was one of the 64GB OCZ Core drives for reference.


--Tim


Re: [zfs-discuss] FW: Supermicro AOC-SAT2-MV8 hang when drive removed

2008-08-20 Thread Tim
I don't think it's just b94; I recall this behavior for as long as I've had the 
card.  I'd also be interested to know if the Sun driver team has ever even tested 
with this card.  I realize it's probably not a top priority, but it sure would be 
nice to have it working properly.






On 8/20/08, Ross Smith <[EMAIL PROTECTED]> wrote:
>
>> > Without fail, cfgadm changes the status from "disk" to "sata-port" when
>> > I
>> > unplug a device attached to port 6 or 7, but most of the time unplugging
>> > disks 0-5 results in no change in cfgadm, until I also attach disk 6 or
>> > 7.
>>
>> That does seem inconsistent, or at least, it's not what I'd expect.
>
> Yup, was an absolute nightmare to diagnose on top of everything else.
> Definitely doesn't happen in windows too.  I really want somebody to try
> snv_94 on a Thumper to see if you get the same behaviour there, or whether
> it's unique to Supermicro's Marvell card.
>
>> > Often the system hung completely when you pulled one of the disks 0-5,
>> > and wouldn't respond again until you re-inserted it.
>> >
>> > I'm 99.99% sure this is a driver issue for this controller.
>>
>> Have you logged a bug on it yet?
>
> Yup, 6735931.  Added the information about it working in Windows today too.
>
> Ross
>


Re: [zfs-discuss] Best layout for 15 disks?

2008-08-22 Thread Tim
14+2 or 7+1
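
(To illustrate the point about multiple vdevs in one pool from the reply quoted 
below, a 15-disk "7+1"-style layout would look something like this sketch; the 
pool name and device names are made up:)

   pfexec zpool create tank \
       raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
       raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0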



On 8/22/08, Miles Nordin <[EMAIL PROTECTED]> wrote:
>> "m" == mike  <[EMAIL PROTECTED]> writes:
>
>  m> can you combine two zpools together?
>
> no.  You can have many vdevs in one pool.  for example you can have a
> mirror vdev and a raidz2 vdev in the same pool.  You can also destroy
> pool B, and add its (now empty) devices to pool A.  but once two
> separate pools are created you can't later smush them together.
>
> but...since you bring it up, that is exactly what I would do with the
> 16 disks: make two pools.  I'd make one of the pools compressed, make
> backups onto it with zfs send/recv, and leave it exported most of the
> time.  Every week or so I'd spin up the disks, import the pool, write
> another incremental backup onto it, scrub it, export it, and spin the
> disks back down.
>


Re: [zfs-discuss] ZFS hangs/freezes after disk failure, resumes when disk is replaced

2008-08-23 Thread Tim
On Sat, Aug 23, 2008 at 11:06 PM, Todd H. Poole <[EMAIL PROTECTED]>wrote:

> Howdy yall,
>
> Earlier this month I downloaded and installed the latest copy of
> OpenSolaris (2008.05) so that I could test out some of the newer features
> I've heard so much about, primarily ZFS.
>
> My goal was to replace our aging linux-based (SuSE 10.1) file and media
> server with a new machine running Sun's OpenSolaris and ZFS. Our old server
> ran your typical RAID5 setup with 4 500GB disks (3 data, 1 parity), used
> lvm, mdadm, and xfs to help keep things in order, and relied on NFS to
> export users' shares. It was solid, stable, and worked wonderfully well.
>
> I would like to replicate this experience using the tools OpenSolaris has
> to offer, taking advantages of ZFS. However, there are enough differences
> between the two OSes - especially with respect to the filesystems and (for
> lack of a better phrase) "RAID managers" - to cause me to consult (on
> numerous occasions) the likes of Google, these forums, and other places for
> help.
>
> I've been successful in troubleshooting all problems up until now.
>
> On our old media server (the SuSE 10.1 one), when a disk failed, the
> machine would send out an e-mail detailing the type of failure, and
> gracefully fall into a degraded state, but would otherwise continue to
> operate using the remaining 3 disks in the system. After the faulty disk was
> replaced, all of the data from the old disk would be replicated onto the new
> one (I think the term is "resilvered" around here?), and after a few hours,
> the RAID5 array would be seamlessly promoted from "degraded" back up to a
> healthy "clean" (or "online") state.
>
> Throughout the entire process, there would be no interruptions to the end
> user: all NFS shares still remained mounted, there were no noticeable drops
> in I/O, files, directories, and any other user-created data still remained
> available, and if everything went smoothly, no one would notice a failure
> had even occurred.
>
> I've tried my best to recreate something similar in OpenSolaris, but I'm
> stuck on making it all happen seamlessly.
>
> For example, I have a standard beige box machine running OS 2008.05 with a
> zpool that contains 4 disks, similar to what the old SuSE 10.1 server had.
> However, whenever I unplug the SATA cable from one of the drives (to
> simulate a catastrophic drive failure) while doing moderate reading from the
> zpool (such as streaming HD video), not only does the video hang on the
> remote machine (which is accessing the zpool via NFS), but the server
> running OpenSolaris seems to either hang, or become incredibly unresponsive.
>
> And when I write unresponsive, I mean that when I type the command "zpool
> status" to see what's going on, the command hangs, followed by a frozen
> Terminal a few seconds later. After just a few more seconds, the entire GUI
> - mouse included - locks up or freezes, and all NFS shares become
> unavailable from the perspective of the remote machines. The whole machine
> locks up hard.
>
> The machine then stays in this frozen state until I plug the hard disk back
> in, at which point everything, quite literally, pops back into existence all
> at once: the output of the "zpool status" command flies by (with all disks
> listed as "ONLINE" and all "READ," "WRITE," and "CKSUM," fields listed as
> "0"), the mouse jumps to a different part of the screen, the NFS share
> becomes available again, and the movie resumes right where it had left off.
>
> While such a quick resume is encouraging, I'd like to avoid the freeze in
> the first place.
>
> How can I keep any hardware failures like the above transparent to my
> users?
>
> -Todd
>
> PS: I've done some researching, and while my problem is similar to the
> following:
>
> http://opensolaris.org/jive/thread.jspa?messageID=151719
> http://opensolaris.org/jive/thread.jspa?messageID=240481
>
> most of these posts are quite old, and do not offer any solutions.
>
> PSS: I know I haven't provided any details on hardware, but I feel like
> this is more likely a higher-level issue (like some sort of configuration
> file or setting is needed) rather than a lower-level one (like faulty
> hardware). However, if someone were to give me a command to run, I'd gladly
> do it... I'm just not sure which ones would be helpful, or if I even know
> which ones to run. It took me half an hour of searching just to find out how
> to list the disks installed in this system (it's "format") so that I could
> build my zpool in the first place. It's not quite as simple as writing out
> /dev/hda, /dev/hdb, /dev/hdc, /dev/hdd. ;)
>



It's a lower-level one.  What hardware are you running?

Re: [zfs-discuss] ZFS hangs/freezes after disk failure,

2008-08-23 Thread Tim
On Sat, Aug 23, 2008 at 11:41 PM, Todd H. Poole <[EMAIL PROTECTED]>wrote:

> Hmm... I'm leaning away a bit from the hardware, but just in case you've
> got an idea, the machine is as follows:
>
> CPU: AMD Athlon X2 4850e 2.5GHz Socket AM2 45W Dual-Core Processor Model
> ADH4850DOBOX (
> http://www.newegg.com/Product/Product.aspx?Item=N82E16819103255)
>
> Motherboard: GIGABYTE GA-MA770-DS3 AM2+/AM2 AMD 770 ATX All Solid Capacitor
> AMD Motherboard (
> http://www.newegg.com/Product/Product.aspx?Item=N82E16813128081)
>
> RAM: G.SKILL 4GB (2 x 2GB) 240-Pin DDR2 SDRAM DDR2 800 (PC2 6400) Dual
> Channel Kit Desktop Memory Model F2-6400CL5D-4GBPQ (
> http://www.newegg.com/Product/Product.aspx?Item=N82E16820231122)
>
> HDD (x4): Western Digital Caviar GP WD10EACS 1TB 5400 to 7200 RPM SATA
> 3.0Gb/s Hard Drive (
> http://www.newegg.com/Product/Product.aspx?Item=N82E16822136151)
>
> The reason why I don't think there's a hardware issue is because before I
> got OpenSolaris up and running, I had a fully functional install of openSuSE
> 11.0 running (with everything similar to the original server) to make sure
> that none of the components were damaged during shipping from Newegg.
> Everything worked as expected.
>
> Furthermore, before making my purchases, I made sure to check the HCL and
> my processor and motherboard combination are supported:
> http://www.sun.com/bigadmin/hcl/data/systems/details/3079.html
>
> But, like I said earlier, I'm new here, so you might be on to something
> that never occurred to me.
>
> Any ideas?
>


What are you using to connect the HD's to the system?  The onboard ports?
What driver is being used?  AHCI, or IDE compatibility mode?

I'm not saying the hardware is bad; I'm saying the hardware is most likely the 
cause, by way of its driver.  There really isn't any *setting* in Solaris I'm 
aware of that says "hey, freeze my system when a drive dies".  That just sounds 
like hot-swap isn't working as it should be.
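
(One quick check, for what it's worth: if the controller really is running under 
the SATA framework rather than legacy pci-ide, the ports show up as sata attachment 
points in cfgadm; if nothing sata-related is listed, the system is almost certainly 
in IDE compatibility mode.  A sketch:)

   cfgadm -al | grep sata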

--Tim


Re: [zfs-discuss] ZFS hangs/freezes after disk failure,

2008-08-24 Thread Tim
I'm pretty sure pci-ide doesn't support hot-swap.  I believe you need ahci.





On 8/24/08, Todd H. Poole <[EMAIL PROTECTED]> wrote:
> Ah, yes - all four hard drives are connected to the motherboard's onboard
> SATA II ports. There is one additional drive I have neglected to mention
> thus far (the boot drive) but that is connected via the motherboard's IDE
> channel, and has remained untouched since the install... I don't really
> consider it part of the problem, but I thought I should mention it just in
> case... you never know...
>
> As for the drivers... well, I'm not sure of the command to determine that
> directly, but going under System > Administration > Device Driver Utility
> yields the following information under the "Storage" entry:
>
> Components: "ATI Technologies Inc. SB600 IDE"
> Driver: pci-ide
> --Driver Information--
> Driver: pci-ide
> Instance: 1
> Attach Status: Attached
> --Hardware Information--
> Vendor ID: 0x1002
> Device ID: 0x438c
> Class Code: 0001018a
> DevPath: /[EMAIL PROTECTED],0/[EMAIL PROTECTED],1
>
> and
>
> Components: "ATI Technologies Inc. SB600 Non-Raid-5 SATA"
> Driver: pci-ide
> --Driver Information--
> Driver: pci-ide
> Instance: 0
> Attach Status: Attached
> --Hardware Information--
> Vendor ID: 0x1002
> Device ID: 0x4380
> Class Code: 0001018f
> DevPath: /[EMAIL PROTECTED],0/[EMAIL PROTECTED]
>
> Furthermore, there is one Driver Problem detected but the error is under the
> "USB" entry. There are seven items listed:
>
> Components: ATI Technologies Inc. SB600 USB Controller (EHCI)
> Driver: ehci
>
> Components: ATI Technologies Inc. SB600 USB (OHCI4)
> Driver: ohci
>
> Components: ATI Technologies Inc. SB600 USB (OHCI3)
> Driver: ohci
>
> Components: ATI Technologies Inc. SB600 USB (OHCI2)
> Driver: ohci
>
> Components: ATI Technologies Inc. SB600 USB (OHCI1)
> Driver: ohci (Driver Misconfigured)
>
> Components: ATI Technologies Inc. SB600 USB (OHCI0)
> Driver: ohci
>
> Components: Microsoft Corp. Wheel Mouse Optical
> Driver: hid
>
> As you can tell, the OHCI1 device isn't properly configured, but I don't
> know how to configure it (there's only a "Help" "Submit...", and "Close"
> button to click, no "Install Driver"). And, to tell you the truth, I'm not
> even sure it's worth mentioning because I don't have anything but my mouse
> plugged into USB, and even so... it's a mouse... plugged into USB... hardly
> something that is going to bring my machine to a grinding halt every time a
> SATA II disk gets yanked from a RAID-Z array (at least, I should hope the
> two don't have anything in common!).
>
> And... wait... you mean to tell me that I can't just untick the checkbox
> that says "Hey, freeze my system when a drive dies" to solve this problem?
> Ugh. And here I was hoping for a quick fix... ;)
>
> Anyway, how does the above sound? What else can I give you?
>
> -Todd
>
> PS: Thanks, by the way, for the support - I'm not sure where else to turn to
> for this kind of stuff!
>


Re: [zfs-discuss] pulling disks was: ZFS hangs/freezes after disk failure,

2008-08-27 Thread Tim
>
>
> By the way: Is there a way to pull up a text-only interface from the log in
> screen (or during the boot process?) without having to log in (or just sit
> there reading about "SunOS Release 5.11 Version snv_86 64-bit")? It would be
> nice if I could see a bit more information during boot, or if I didn't have
> to use gnome if I just wanted to get at the CLI anyways... On some OSes, if
> you want to access TTY1 through 6, you only need to press ESC during boot,
> or CTRL + ALT + F1 through F6 (or something similar) during the login screen
> to gain access to other non-GUI login screens...
>

On SXDE/Solaris, there's a dropdown menu that lets you select what type of logon 
you'd like to use.  I haven't touched 2008.11, so I have no idea whether it has 
something similar.


>
> Anyway, after changing the setting back to [Native IDE], the machine boots
> fine. And this time, the freeze-on-login bug didn't get me. Now, I know for
> a fact this motherboard supports SATA II (see link to manufacturer's website
> in earlier post), and that all 4 of these disks are _definitely_ SATA II
> disks (see hardware specifications listed in one of my earliest posts), and
> that I'm using all the right cables and everything... so, I don't know how
> to explore this any further...
>
> Could it be that when I installed OpenSolaris, I was using the pci-ide (or
> [Native IDE]) setting on my BIOS, and thus if I were to change it,
> OpenSolaris might not know hot to handle that, and might refuse to boot? Or
> that maybe OpenSolaris only installed the drivers it thought it would need,
> and the stat-ahci one wasn't one of them?
>

Did you do a reconfiguration reboot?  "reboot -- -r" or "init 6"?
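
(Another way to force a reconfiguration boot, as a sketch:)

   pfexec touch /reconfigure && pfexec init 6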


Re: [zfs-discuss] ZFS hangs/freezes after disk failure,

2008-08-27 Thread Tim
On Wed, Aug 27, 2008 at 1:18 AM, MC <[EMAIL PROTECTED]> wrote:

> Okay, so your ACHI hardware is not using an ACHI driver in solaris.  A
> crash when pulling a cable is still not great, but it is understandable
> because that driver is old and bad and doesn't support hot swapping at all.
>

His AHCI hardware is not using AHCI because he's set it not to.  If Linux is 
somehow ignoring the BIOS configuration and attempting to load an AHCI driver for 
the hardware anyway, that's *BROKEN* behavior.  I've yet to see WHAT driver Linux 
was using, because he was too busy having a pissing match to get that USEFUL 
information back to the list.

--Tim


Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-27 Thread Tim
On Wed, Aug 27, 2008 at 1:08 PM, Kenny <[EMAIL PROTECTED]> wrote:

> Has anyone had issues with creating ZFS pools greater than 1 terabyte (TB)?
>
> I've created 11 LUNs from a Sun 2540 Disk array (approx 1 TB each).  The
> host system ( SUN Enterprise 5220) reconizes the "disks" as each having
> 931GB space.  So that should be 10+ TB in size total.  However when I zpool
> them together (using raidz) the zpool status reports 9GB instead of 9TB.
>
> Does ZFS have problem reporting TB and defaults to GB instead??  Is my pool
> really TB in size??
>
> I've read in the best practice wiki that splitting them into smaller pools.
>  Any recommendation on this??  I'm desperate in keepingas much space useable
> as possible.
>



OS version and zfs version would be helpful.


Re: [zfs-discuss] Can a iSCS-targetI and CIFIS-share co-exist?

2008-08-27 Thread Tim
On Wed, Aug 27, 2008 at 12:50 PM, Steven <[EMAIL PROTECTED]>wrote:

> On openSolaris I have a ZFS zpool vol that is a CIFS share on my network.
> Can I make this same shared-data an iSCSI target also?
>
> The reason I want to do this is that my VirtualBox VM's are about 10x
> slower in accessing the network-share than any other machine on the network.
> This prevents me from say using Visual Studio under VBox to compile a
> project on the share.
>
> I heard VBox's iSCSI would be faster than it's networking. But other
> computers still need to access that data via CIFS. Any ideas?
>
>

You'd have to share out an iSCSI LUN to *insert destination*, then share out the 
LUN from the host it's presented to via CIFS/NFS/whatever.  You can't magically 
make an iSCSI LUN out of the CIFS data currently sitting on the share on the 
Solaris host.  It doesn't work that way.
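
(What you can do is carve a zvol out of the pool and export that as an iSCSI 
target; a minimal sketch for the OpenSolaris builds of that era, assuming a pool 
named tank and the legacy shareiscsi property rather than COMSTAR:)

   pfexec zfs create -V 50g tank/vboxdisk
   pfexec zfs set shareiscsi=on tank/vboxdisk
   pfexec iscsitadm list target -v    # confirm the target was created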

--Tim


Re: [zfs-discuss] pulling disks was: ZFS hangs/freezes after disk failure,

2008-08-27 Thread Tim
On Wed, Aug 27, 2008 at 1:31 PM, Ross <[EMAIL PROTECTED]> wrote:

> Forgive me for being a bit wooly with this explanation (I've only recently
> moved over from Windows), but changing disk mode from IDE to SATA may well
> not work without a re-install, or at the very least messing around with boot
> settings.  I've seen many systems which list SATA disks in front of IDE
> ones, so you changing the drives to SATA may now mean that instead of your
> OS being installed on drive 0, and your data on drive 1, you now have the
> data on drive 0 and the OS on drive 1.
>


Solaris does not do this.  This is one of the many annoyances I have with
linux.  The way they handle /dev is ridiculous.  Did you add a new drive?
Let's renumber everything!

--Tim


Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-27 Thread Tim
On Wed, Aug 27, 2008 at 1:51 PM, Kenny <[EMAIL PROTECTED]> wrote:

> Tcook - Sorry bout that...
>
> Solaris 10 (8/07 I think)
> ZFS version 4
>
> How can I upgrade ZFS w/o having to rebuild with Sol 10 5/08?
>
> Thanks   --Kenny
>
>
Please paste the output of df, zpool status, and format so we can verify
what you're seeing. :)

--Tim


Re: [zfs-discuss] Subversion repository on ZFS

2008-08-27 Thread Tim
On Wed, Aug 27, 2008 at 3:29 PM, Ian Collins <[EMAIL PROTECTED]> wrote:

>
> Does anyone have any tuning tips for a Subversion repository on ZFS?  The
> repository will mainly be storing binary (MS Office documents).
>
> It looks like a vanilla, uncompressed file system is the best bet.
>
> Ian


I believe this is called SharePoint :D  I think the biggest deal/question will be: 
how many files?  How big is the pool?  Do they all need to be on the same pool, or 
can you split it up?  The biggest performance issue you'll see with ZFS is from 
millions of tiny files.  That would also determine tuning.

I guess the other question would be how much RAM?  What sort of backend storage?  
FC/SATA/SAS?  Internal, or coming off a SAN?  I'm assuming the files won't get hit 
very hard if they're just office documents, but you may have a special use case :)
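
(If it helps, the starting point is pretty plain; a sketch, assuming a pool named 
tank and that atime isn't needed for the repository:)

   pfexec zfs create tank/svnrepo
   pfexec zfs set atime=off tank/svnrepo    # low-risk tweak; leave compression and recordsize at defaults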

--Tim


Re: [zfs-discuss] pulling disks was: ZFS hangs/freezes after disk failure,

2008-08-27 Thread Tim
On Wed, Aug 27, 2008 at 5:33 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:

> >>>>> "t" == Tim  <[EMAIL PROTECTED]> writes:
>
> t> Solaris does not do this.
>
> yeah but the locators for local disks are still based on
> pci/controller/channel not devid, so the disk will move to a different
> device name if he changes BIOS from pci-ide to AHCI because it changes
> the driver attachment.  This may be the problem preventing his bootup,
> rather than the known AHCI bug.
>

Except he was, and is, referring to a non-root disk.  If I'm using raw devices and 
I unplug my root disk and move it somewhere else, I would expect to have to update 
my boot loader.



> Linux device names don't move as much if you use LVM2, as some of the
> distros do by default even for single-device systems.  Device names
> are then based on labels written onto the drive, which is a little
> scary and adds a lot of confusion, but I think helps with this
> moving-device problem and is analagous to what it sounds like ZFS
> might do on the latest SXCE's that don't put zpool.cache in the boot
> archive.
>

LVM hardly changes the way devices move around in Linux, or its horrendous 
handling of /dev.  You are correct in that it is a step towards masking the 
ugliness.  I, however, do not consider it a fix.  Unfortunately it's not
used in the majority of the sites I am involved in, and as such isn't any
sort of help.  The administration overhead it adds is not worth the hassle
for the majority of my customers.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] pulling disks was: ZFS hangs/freezes after disk failure,

2008-08-27 Thread Tim
On Wed, Aug 27, 2008 at 5:39 PM, Toby Thain <[EMAIL PROTECTED]>wrote:

>
>
> Let's not be too quick to assign blame, or to think that perfecting
> the behaviour is straightforward or even possible.
>
> Traditionally, systems bearing 'enterprisey' expectations were/are
> integrated hardware and software from one vendor (e.g. Sun) which
> could be certified as a unit.
>

PSSSHHH, Sun should be certifying every piece of hardware that is, or will
ever be released.  Community putback shmamunnity putback.


>
> Start introducing 'random $20 components' and you begin to dilute the
> quality and predictability of the composite system's behaviour.
>

But this NEVER happens on linux *grin*.


>
> If hard drive firmware is as cr*ppy as anecdotes indicate, what can
> we really expect from a $20 USB pendrive?
>
> --Toby
>
>
Perfection?

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Will there be a GUI for ZFS ?

2008-08-28 Thread Tim
On Thu, Aug 28, 2008 at 3:47 AM, Klaus Bergius <[EMAIL PROTECTED]>wrote:

> I'll second the original questions, but would like to know specifically
> when we will see (or how to install) the ZFS admin gui for OpenSolaris
> 2008.05.
> I installed 2008.05, then updated the system, so it is now snv_95.
> There are no smc* commands, there is no service 'webconsole' to be seen in
> svcs -a,
> because: there is no SUNWzfsg package installed.
> However, the SUNWzfsg package is also not in the 
> pkg.opensolaris.org repository.
>
> Any hint where to find the package? I would really love to have the zfs
> admin gui on my system.
>
> -Klaus
>
>
My personal conspiracy theory is it's part of "project fishworks" that is
still under wraps.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-28 Thread Tim
exactly :)



On 8/28/08, Kyle McDonald <[EMAIL PROTECTED]> wrote:
> Daniel Rock wrote:
>>
>> Kenny schrieb:
>> >2. c6t600A0B800049F93C030A48B3EA2Cd0
>> 
>> >   /scsi_vhci/[EMAIL PROTECTED]
>> >3. c6t600A0B800049F93C030D48B3EAB6d0
>> 
>> >   /scsi_vhci/[EMAIL PROTECTED]
>>
>> Disk 2: 931GB
>> Disk 3: 931MB
>>
>> Do you see the difference?
>>
> Not just disk 3:
>
>> AVAILABLE DISK SELECTIONS:
>>3. c6t600A0B800049F93C030D48B3EAB6d0
>> 
>>   /scsi_vhci/[EMAIL PROTECTED]
>>4. c6t600A0B800049F93C031C48B3EC76d0
>> 
>>   /scsi_vhci/[EMAIL PROTECTED]
>>8. c6t600A0B800049F93C031048B3EB44d0
>> 
>>   /scsi_vhci/[EMAIL PROTECTED]
>>
> This all makes sense now, since a RAIDZ (or RAIDZ2) vdev can only be as
> big as it's *smallest* component device.
>
>-Kyle
>
>>
>>
>> Daniel
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] eWeek: corrupt file brought down FAA's antiquated IT system

2008-08-28 Thread Tim
On Thu, Aug 28, 2008 at 12:38 PM, Bob Friesenhahn <
[EMAIL PROTECTED]> wrote:

> On Thu, 28 Aug 2008, Toby Thain wrote:
>
> > What goes unremarked here is how the original system has coped
> > reliably for decades of (one guesses) geometrically growing load.
>
> Fantastic engineering from a company which went defunct shortly after
> delivering the system.



And let this be a lesson to all of you not to write code that is too good.
If you can't sell an "update" (patch) every 6 months, you'll be out of
business as well :D

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] EMC - top of the table for efficiency, how well would ZFS do?

2008-08-31 Thread Tim
NetApp does NOT recommend 100 percent.  Perhaps you should talk to
NetApp or one of their partners who know their tech instead of their
competitors next time.

ZFS, the way it's currently implemented, will require roughly the same
as NetApp... which still isn't 100 percent.



On 8/30/08, Ross <[EMAIL PROTECTED]> wrote:
> Just saw this blog post linked from the register, it's EMC pointing out that
> their array wastes less disk space than either HP or NetApp.  I'm loving the
> 10% of space they have to reserve for snapshots, and you can't add more o_0.
>
> HP similarly recommend 20% of reserved space for snapshots, and NetApp
> recommend a whopping 100% (that was one reason we didn't buy NetApp
> actually).
>
> Could anybody say how ZFS would match up to these figures?  I'd have thought
> a 14+2 raid-z2 scheme similar to NFS' would probably be fairest.
>
> http://chucksblog.typepad.com/chucks_blog/2008/08/your-storage-mi.html
>
> Ross
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Proposed 2540 and ZFS configuration

2008-08-31 Thread Tim
With the restriping: wouldn't it be as simple as creating a new
folder/dataset/whatever on the same pool and doing an rsync to the
new location on the same pool?  This would obviously cause a short downtime
to switch over and delete the old dataset, but it seems like it should
work fine.  If you're doubling the pool size, space shouldn't be an
issue.
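
Roughly this, I mean (dataset names are made up, and you'd obviously want to
verify the copy before destroying anything):

  # zfs create tank/data_new
  # rsync -a /tank/data/ /tank/data_new/
  # zfs destroy -r tank/data
  # zfs rename tank/data_new tank/data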




On 8/31/08, Ross <[EMAIL PROTECTED]> wrote:
> Personally I'd go for an 11 disk raid-z2, with one hot spare.  You loose
> some capacity, but you've got more than enough for your current needs, and
> with 1TB disks single parity raid means a lot of time with your data
> unprotected when one fails.
>
> You could split this into two raid-z2 sets if you wanted, that would have a
> bit better performance, but if you can cope with the speed of a single pool
> for now I'd be tempted to start with that.  It's likely that by Christmas
> you'll be able to buy flash devices to use as read or write cache with ZFS,
> at which point the speed of the disks becomes academic for many cases.
>
> Adding a further 12 disks sounds fine, just as you suggest.  You can add
> another 11 disk raid-z2 set to your pool very easily.  ZFS can't yet
> restripe your existing data across the new disks, so you'll have some data
> on the old 12 disk array, some striped across all 24, and some on the new
> array.
>
> ZFS probably does add some overhead compared to hardware raid, but unless
> you have a lot of load on that box I wouldn't expect it to be a problem.  I
> don't know the T5220 servers though, so you might want to double check that.
>
> I do agree that you don't want to use the hardware raid though, ZFS has
> plenty of advantages and it's best to let it manage the whole lot.  Could
> you do me a favour though and see how ZFS copes on that array if you just
> pull a disk while the ZFS pool is running?  I've had some problems on a home
> built box after pulling disks, I suspect a proper raid array will cope fine
> but haven't been able to get that tested yet.
>
> thanks,
>
> Ross
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] EMC - top of the table for efficiency, how well would ZFS do?

2008-08-31 Thread Tim
On Sun, Aug 31, 2008 at 10:39 AM, Ross Smith <[EMAIL PROTECTED]> wrote:

>  Hey Tim,
>
> I'll admit I just quoted the blog without checking, I seem to remember the
> sales rep I spoke to recommending putting aside 20-50% of my disk for
> snapshots.  Compared to ZFS where I don't need to reserve any space it feels
> very old fashioned.  With ZFS, snapshots just take up as much space as I
> want them to.
>

Your sales rep was an idiot then.  Snapshot reserve isn't required at all,
and it isn't needed in order to take snapshots.  It's simply a portion of a
volume's space that can only be used for snapshots; live data cannot enter
this space.  Snapshots, however, can exist on a volume with no snapshot
reserve, and they are in no way limited to the "snapshot reserve" you've set.
Snapshot reserve is a guaranteed minimum amount of space out of a volume.
You can set it to 90% as you mention below, and it will work just fine.

ZFS is no different than NetApp when it comes to snapshots.  I suggest until
you have a basic understanding of how NetApp software works, not making ANY
definitive statements about them.  You're sounding like a fool and/or
someone working for one of their competitors.


>
>
> The problem though for our usage with NetApp was that we actually couldn't
> reserve enough space for snapshots.  50% of the pool was their maximum, and
> we're interested in running ten years worth of snapshots here, which could
> see us with a pool with just 10% of live data and 90% of the space taken up
> by snapshots.  The NetApp approach was just too restrictive.
>
> Ross
>

There is not, and never has been a "50% of the pool maximum".  That's also a
lie.  If you want snapshots to take up 90% of the pool, ONTAP will GLADLY do
so.  I've got a filer sitting in my lab and would be MORE than happy to post
the df output of a volume that has snapshots taking up 90% of the volume.


--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Max vol size and number of files in production

2008-09-04 Thread Tim
The issue is EXANET isn't really holding "300 million files" in one dataset
like you're talking about doing with zfs.  It's a clustered approach with a
single namespace.

Reality is you can do what the customer wants to do, but you'd be leveraging
something like pNFS, which I don't think is quite production-ready yet.

I'm sure there are others on this list much better versed in pNFS than
I am who can speak to that solution.

--Tim



On Thu, Sep 4, 2008 at 12:59 PM, Jean Luc Berrier <[EMAIL PROTECTED]> wrote:

> Hi,
>
> My problem is one of my customer wants to change his Exanet Systems to ZFS,
> but SUN told him that there is real limitation with ZFS :
> Customer env :
> Incoming Data
> FS SIZE : 50 TB, with at least 100 Thousand files write  per day, around 20
> Millions files.
> Consulting Data
> FS Size : 200 TB with at least 300 Millions Files
>
> Sun told him over 10 millions files ZFS should not be the right solution.
> Do you have POC or explanation to this limitations regarding the spec of
> ZFS.
> Best regards
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZIL & NVRAM partitioning?

2008-09-06 Thread Tim
On Sat, Sep 6, 2008 at 5:55 PM, Albert Chin <
[EMAIL PROTECTED]> wrote:

> On Sat, Sep 06, 2008 at 11:16:15AM -0700, Kaya Bekiroğlu wrote:
> > > The big problem appears to be getting your hands on these cards.
> > > Although I have the drivers now my first supplier let me down, and
> > > while the second insists they have shipped the cards it's been three
> > > weeks now and there's no sign of them.
> >
> > Thanks to Google Shopping I was able to order two of these cards from:
> >
> http://www.printsavings.com/01390371OP-discount-MICRO+MEMORY-MM5425--512MB-NVRAM-battery.aspx
> >
> > They appear to be in good working order, but unfortunately I am unable
> > to verify the driver. "pkgadd -d umem_Sol_Drv_Cust_i386_v01_11.pkg"
> > hangs on "## Installing  part 1 of 3." on snv_95.  I do not have other
> > Solaris versions to  experiment with; this is really just a hobby for
> > me.
>
> Does the card come with any programming specs to help debug the driver?
>
> --
> albert chin ([EMAIL PROTECTED])
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>


The driver is binary-only with no support, passed on behind the scenes as a
favor.  I don't know what debugging is going to be possible.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Which is better for root ZFS: mlc or slc SSD?

2008-09-24 Thread Tim
On Wed, Sep 24, 2008 at 1:41 PM, Erik Trimble <[EMAIL PROTECTED]> wrote:

> I was under the impression that MLC is the preferred type of SSD, but I
> want to prevent myself from having a think-o.
>
>
> I'm looking to get (2) SSD to use as my boot drive. It looks like I can
> get 32GB SSDs composed of either SLC or MLC for roughly equal pricing.
> Which would be the better technology?  (I'll worry about rated access
> times/etc of the drives, I'm just wondering about general tech for an OS
> boot drive usage...)
>
>
Depends on the manufacturer.  The new Intel MLCs have proven to be as fast as,
if not faster than, the SLCs, but they also cost just as much.  If they brought
the price down, I'd say MLC all the way.  All other things being equal, though,
SLC.


--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] working closed blob driver

2008-09-25 Thread Tim
>
>
>But to be honest I don't wish for a driver for every chip---I'm
>not trying to ``convert'' machines, I buy them specifically for
>the task.  I just want an open driver that works well for some
>fairly-priced card I can actually buy.  I'm willing to fight the
>OEM problem:
>
> http://www.openbsd.org/papers/brhard2007/mgp00022.html
>
>by purchasing systems in a complicated way, with lots of add-in
>cards, at higher cost.  I will buy whatever card I'm told.  But so
>far the track record is not good.  I still have one of those bunk
>Supermicro Marvell cards sitting on the shelf.
>


So what's wrong with this card?
http://www.supermicro.com/products/accessories/addon/AOC-USASLP-L8i.cfm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] working closed blob driver

2008-09-26 Thread Tim
On Fri, Sep 26, 2008 at 1:02 PM, Will Murnane <[EMAIL PROTECTED]>wrote:

> On Thu, Sep 25, 2008 at 18:51, Tim <[EMAIL PROTECTED]> wrote:
> > So what's wrong with this card?
> > http://www.supermicro.com/products/accessories/addon/AOC-USASLP-L8i.cfm
> If you have a UIO slot (many recent Supermicro boards do) then it's a
> fine choice.  But if you have a non-Supermicro board, you may be in
> for a shock when you get it---it's swapped left for right, compare it
> to a regular pci-e card.  It won't fit in a standard case.  AIUI it is
> a standard pci express slot, just shifted over a bit so the backwards
> slot cover fits into normal cases, so perhaps you could try fastening
> a normal slot cover to it and using it in a normal pci-e slot... but
> that doesn't sound particularly elegant, and would take up the slot on
> the other side of it as well.
>
> Will
>


This is not a UIO card.  It's a standard PCI-E card.  What the description
is telling you is that you can combine it with a UIO card to add RAID
functionality, as there is none built in.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] working closed blob driver

2008-09-26 Thread Tim
On Fri, Sep 26, 2008 at 12:29 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:

> >>>>> "t" == Tim  <[EMAIL PROTECTED]> writes:
>
> t>
> http://www.supermicro.com/products/accessories/addon/AOC-USASLP-L8i.cfm
>
> I'm not sure.  A different thing is wrong with it depending on what
> driver attaches to it.  I can't tell for sure because this page:
>
>  http://linuxmafia.com/faq/Hardware/sas.html
>
> says the LSI SAS 3800 series uses a 1068E chip, and James says (1)
> 1068E is supported by mpt, (2) LSI SAS 3800 uses mega_sas.  so, I
> don't know which for that card, which means I don't know which for
> this card.
>
> If it's mpt:
>
>  * does not come with source according to:
>
>   http://www.openbsd.org/papers/opencon06-drivers/mgp00024.html
>   http://www.opensolaris.org/os/about/no_source/
>
> If it's mega_sas:
>
>  * does not come with source
>
>  * driver is new and unproven.  We believed the Marvell driver was
>   good for the first few months too, the same amount of experience we
>   have with mega_sas.
>
>  * not sure if it's available in stable solaris.
>

Someone's already gotten it working; if they're watching, I'm sure they'll
pipe up on what driver it uses.


>
>
> In either case:
>
>  * may require expensive cables
>

Nope, the cables are standardized.  I'm not sure what your definition of
"expensive" is, but I believe they were roughly $15 for a SAS-to-four-SATA
fan-out cable.


>
> Uncertain problems:
>
>  * might not support hotplug
>
>  * might not support NCQ
>
>  * probably doesn't support port multipliers
>
>  * probably doesn't support smartctl
>
>  * none of these features can be fixed by the community without
>   source.  all are available with cheaper cards on Linux, and on
>   Linux both mptsas and megaraid_sas come with source as far as I can
>   tell maintained by dell and lsi, though might not support the above
>   features.
>
>
> HTH, HAND.
>

I know it supports hotplug and NCQ.  Can't say smartctl was ever on my list
of important features so I haven't bothered to research if it does.  I'm
also not sure what good port multipliers are going to do you in this
instance... the cables it uses already support 4 SATA drives per physical
card port.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] working closed blob driver

2008-09-26 Thread Tim
On Fri, Sep 26, 2008 at 5:07 PM, Will Murnane <[EMAIL PROTECTED]>wrote:

> On Fri, Sep 26, 2008 at 21:51, Tim <[EMAIL PROTECTED]> wrote:
> > This is not a UIO card.  It's a standard PCI-E card.  What the
> description
> > is telling you is that you can combine it with a UIO card to add raid
> > functionality as there is none built-in.
> Not so.  The description [1] mentions that this is UIO, and says only
> that it negotiates pci-e link speeds, not that it fits in a pci
> express slot.  UIO is pci express, but the slots are positioned
> differently from pci-e ones.
>
> Compare this to the picture of an equivalent LSI card [2].  The
> pictures are similar, but compare the position of the bracket.  The
> components are mounted on the wrong sides.  Take a look at a UIO board
> [3]: the PCI-X slot is shared with the blue UIO slot on the left side,
> like PCI and ISA slots used to be shared.  This is why the components
> are backwards.
>
> Will
>
> [1]:
> http://www.supermicro.com/products/accessories/addon/AOC-USASLP-L8i.cfm
> [2]:
> http://www.lsi.com/storage_home/products_home/internal_raid/megaraid_sas/megaraid_sas_8208elp/index.html
> [3]:
> http://www.supermicro.com/products/motherboard/Xeon1333/5400/X7DWE.cfm
>


Well, there are people who have it working in a PCI-E slot, so I don't know
what to tell you.

http://www.opensolaris.org/jive/thread.jspa?messageID=272283
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS poor performance on Areca 1231ML

2008-09-26 Thread Tim
On Fri, Sep 26, 2008 at 5:46 PM, Ross Becker <[EMAIL PROTECTED]>wrote:

> Okay, after doing some testing, it appears that the issue is on the ZFS
> side.  I fiddled around a while with options on the areca card, and never
> got any better performance results than my first test. So, my best out of
> the raidz2 is 42 mb/s write and 43 mb/s read.  I also tried turning off
> crc's (not how I'd run production, but for testing), and got no performance
> gain.
>
> After fiddling with options, I destroyed my zfs & zpool, and tried some
> single-drive bits.   I simply used newfs to create filesystems on single
> drives, mounted them, and ran some single-drive bonnie++ tests.  On a single
> drive, I got 50 mb/sec write & 70 mb/sec read.   I also tested two
> benchmarks on two drives simultaneously, and on each of the tests, the
> result dropped by about 2mb/sec, so I got a combined 96 mb/sec write & 136
> mb/sec read with two separate UFS filesystems on two separate disks.
>
> So next steps?
>
> --ross
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>


Did you try disabling the card cache as others advised?

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-09-30 Thread Tim
On Tue, Sep 30, 2008 at 2:10 PM, Ahmed Kamal <
[EMAIL PROTECTED]> wrote:

>
> * Now that I'm using ECC RAM, and enterprisey disks, Does this put this
> solution in par with low end netapp 2020 for example ?
>
>
*sort of*.  What are you going to be using it for?  Half the beauty of
NetApp is all the add-on applications you run server-side, like the
SnapManager products.

If you're just using it for basic single head file serving, I'd say you're
pretty much on par.  IMO, NetApp's clustering is still far superior (yes
folks, from a fileserver perspective, not an application clustering
perspective) to anything Solaris has to offer right now, and also much,
much, MUCH easier to configure/manage.  Let me know when I can plug an
infiniband cable between two Solaris boxes and type "cf enable" and we'll
talk :)

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS poor performance on Areca 1231ML

2008-09-30 Thread Tim
>
> No apology necessary and I'm glad you figured it out - I was just
> reading this thread and thinking "I'm missing something here - this
> can't be right".
>
> If you have the budget to run a few more "experiments", try this
> SuperMicro card:
> http://www.springsource.com/repository/app/faq
> that others have had success with.
>
> Regards,
>
> --
> Al Hopper  Logical Approach Inc,Plano,TX [EMAIL PROTECTED]
>   Voice: 972.379.2133 Timezone: US CDT
> OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
> http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
>

Wrong link?

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-09-30 Thread Tim
On Tue, Sep 30, 2008 at 4:26 PM, Toby Thain <[EMAIL PROTECTED]>wrote:

>
> On 30-Sep-08, at 6:58 AM, Ahmed Kamal wrote:
>
> > Thanks for all the answers .. Please find more questions below :)
> >
> > - Good to know EMC filers do not have end2end checksums! What about
> > netapp ?
>
> Blunty - no remote storage can have it by definition. The checksum
> needs to be computed as close as possible to the application. What's
> why ZFS can do this and hardware solutions can't (being several
> unreliable subsystems away from the data).
>
> --Toby
>
> > ...
>

So how is a Server running Solaris with a QLogic HBA connected to an FC JBOD
any different than a NetApp filer, running ONTAP with a QLogic HBA directly
connected to an FC JBOD?  How is it "several unreliable subsystems away from
the data"?

That's a great talking point but it's far from accurate.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-09-30 Thread Tim
On Tue, Sep 30, 2008 at 5:19 PM, Erik Trimble <[EMAIL PROTECTED]> wrote:

>
> To make Will's argument more succinct (), with a NetApp, undetectable
> (by the NetApp) errors can be introduced at the HBA and transport layer (FC
> Switch, slightly damage cable) levels.   ZFS will detect such errors, and
> fix them (if properly configured). NetApp has no such ability.
>
> Also, I'm not sure that a NetApp (or EMC) has the ability to find bit-rot.
>  That is, they can determine if a block is written correctly, but I don't
> know if they keep the block checksum around permanently, and, how redundant
> that stored block checksum is.  If they don't permanently write the block
> checksum somewhere, then the NetApp has no way to determine if a READ block
> is OK, and hasn't suffered from bit-rot (aka disk block failure).  And, if
> it's not either multiply stored, then they have the potential to lose the
> ability to do READ verification.  Neither are problems of ZFS.
>
>
> In many of my production environments, I've got at least 2 different FC
> switches between my hosts and disks.  And, with longer cables, comes more of
> the chance that something gets bent a bit too much. Finally, HBAs are not
> the most reliable things I've seen (sadly).
>
>
>
* NetApp's block-appended checksum approach appears similar but is in fact
much stronger. Like many arrays, NetApp formats its drives with 520-byte
sectors. It then groups them into 8-sector blocks: 4K of data (the WAFL
filesystem blocksize) and 64 bytes of checksum. When WAFL reads a block it
compares the checksum to the data just like an array would, but there's a
key difference: it does this comparison after the data has made it through
the I/O path, so it validates that the block made the journey from platter
to memory without damage in transit. *
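To spell out the block math in that quote: 8 sectors x 520 bytes = 4,160 bytes
on disk per block, which is the 4,096-byte (4K) WAFL block plus 64 bytes of
checksum.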
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS poor performance on Areca 1231ML

2008-09-30 Thread Tim
On Tue, Sep 30, 2008 at 5:04 PM, Ross Becker <[EMAIL PROTECTED]>wrote:

> At this point, ZFS is performing admirably with the Areca card.  Also, that
> card is only 8-port, and the Areca controllers I have are 12-port.  My
> chassis has 24 SATA bays, so being able to cover all the drives with 2
> controllers is preferable.
>
> Also, the driver for the Areca controllers is being integrated into
> OpenSolaris as we discuss, so the next spin of Opensolaris won't even
> require me to add the driver for it.
>
>
> --Ross
> --
>

All very valid points... if you don't mind spending 8x as much for the cards
:)

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-09-30 Thread Tim
On Tue, Sep 30, 2008 at 6:03 PM, Ahmed Kamal <
[EMAIL PROTECTED]> wrote:

>
>
>>
> Hmm ... well, there is a considerable price difference, so unless someone
> says I'm horribly mistaken, I now want to go back to Barracuda ES 1TB 7200
> drives. By the way, how many of those would saturate a single (non trunked)
> Gig ethernet link ? Workload NFS sharing of software and homes. I think 4
> disks should be about enough to saturate it ?
>

SAS has far greater performance, and if your workload is extremely random,
will have a longer MTBF.  SATA drives suffer badly on random workloads.


>
>
> BTW, for everyone saying zfs is more reliable because it's closer to the
> application than a netapp, well at least in my case it isn't. The solaris
> box will be NFS sharing and the apps will be running on remote Linux boxes.
> So, I guess this makes them equal. How about a new "reliable NFS" protocol,
> that computes the hashes on the client side, sends it over the wire to be
> written remotely on the zfs storage node ?!
>

Won't be happening anytime soon.


--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-09-30 Thread Tim
On Tue, Sep 30, 2008 at 7:15 PM, David Magda <[EMAIL PROTECTED]> wrote:

> On Sep 30, 2008, at 19:09, Tim wrote:
>
>  SAS has far greater performance, and if your workload is extremely random,
>> will have a longer MTBF.  SATA drives suffer badly on random workloads.
>>
>
> Well, if you can probably afford more SATA drives for the purchase price,
> you can put them in a striped-mirror set up, and that may help things. If
> your disks are cheap you can afford to buy more of them (space, heat, and
> power not withstanding).
>
>
More disks will not solve SATA's problem.  I run into this on a daily basis
working on enterprise storage.  If it's just for archive/storage, or even
sequential streaming, it shouldn't be a big deal.  If it's a random workload,
there's pretty much nothing you can do to get around it short of more
front-end cache and intelligence, which is simply a band-aid, not a fix.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Fwd: Quantifying ZFS reliability

2008-09-30 Thread Tim
On Tue, Sep 30, 2008 at 7:30 PM, Ahmed Kamal <
[EMAIL PROTECTED]> wrote:

>
>
>> Well, if you can probably afford more SATA drives for the purchase
>> price, you can put them in a striped-mirror set up, and that may help
>> things. If your disks are cheap you can afford to buy more of them
>> (space, heat, and power not withstanding).
>>
>
> Hmm, that's actually cool !
> If I configure the system with
>
> 10 x 400G 10k rpm disk == cost ==> 13k$
> 10 x 1TB SATA 7200 == cost ==> 9k$
>
> Always assuming 2 spare disks, and Using the sata disks, I would configure
> them in raid1 mirror (raid6 for the 400G), Besides being cheaper, I would
> get more useable space (4TB vs 2.4TB), Better performance of raid1 (right?),
> and better data reliability ?? (don't really know about that one) ?
>
> Is this a recommended setup ? It looks too good to be true ?
>


I *HIGHLY* doubt you'll see better performance out of the SATA, but it is
possible.  You don't need 2 spares with SAS; 1 is more than enough with that
few disks.  I'd suggest doing RAID-Z (like RAID-5) as well if you've only got 9
data disks.  8+1 is more than acceptable with SAS drives.
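
Back-of-the-envelope usable space for the two layouts being discussed,
ignoring ZFS metadata overhead:

  10 x 400G SAS,  1 hot spare, one 8+1 raidz   : 8 x 400G = ~3.2 TB usable
  10 x 1TB SATA,  2 hot spares, 4 mirror pairs : 4 x 1TB  = ~4.0 TB usable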

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-09-30 Thread Tim
On Tue, Sep 30, 2008 at 8:13 PM, Ahmed Kamal <
[EMAIL PROTECTED]> wrote:

>
>> So, performance aside, does SAS have other benefits ? Data integrity ? How
>> would a 8 raid1 sata compare vs another 8 smaller SAS disks in raidz(2) ?
>> Like apples and pomegranates.  Both should be able to saturate a GbE link.
>>
>
> You're the expert, but isn't the 100M/s for streaming not random
> read/write. For that, I suppose the disk drops to around 25M/s which is why
> I was mentioning 4 sata disks.
>
> When I was asking for comparing the 2 raids, It's was aside from
> performance, basically sata is obviously cheaper, it will saturate the gig
> link, so performance yes too, so the question becomes which has better data
> protection ( 8 sata raid1 or 8 sas raidz2)
>

SAS's main benefits are seek time and max IOPS.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-09-30 Thread Tim
On Tue, Sep 30, 2008 at 8:50 PM, Toby Thain <[EMAIL PROTECTED]>wrote:

>
>
> * NetApp's block-appended checksum approach appears similar but is in fact
> much stronger. Like many arrays, NetApp formats its drives with 520-byte
> sectors. It then groups them into 8-sector blocks: 4K of data (the WAFL
> filesystem blocksize) and 64 bytes of checksum. When WAFL reads a block it
> compares the checksum to the data just like an array would, but there's a
> key difference: it does this comparison after the data has made it through
> the I/O path, so it validates that the block made the journey from platter
> to memory without damage in transit.*
>
>
> This is not end to end protection; they are merely saying the data arrived
> in the storage subsystem's memory verifiably intact. The data still has a
> long way to go before it reaches the application.
>
> --Toby
>
>
As it does in ANY fileserver scenario, INCLUDING zfs.  He is building a
FILESERVER.  This is not an APPLICATION server.  You seem to be stuck on
this idea that everyone is using ZFS on the server they're running the
application.  That does a GREAT job of creating disparate storage islands,
something EVERY enterprise is trying to get rid of.  Not create more of.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-09-30 Thread Tim
On Tue, Sep 30, 2008 at 10:44 PM, Toby Thain <[EMAIL PROTECTED]>wrote:

>
>
> ZFS allows the architectural option of separate storage without losing end
> to end protection, so the distinction is still important. Of course this
> means ZFS itself runs on the application server, but so what?
>
> --Toby
>
>
The "so what" would be that the application has to run on Solaris, and
requires a LUN to function.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-09-30 Thread Tim
On Wed, Oct 1, 2008 at 12:24 AM, Ian Collins <[EMAIL PROTECTED]> wrote:

> Tim wrote:
> >
> > As it does in ANY fileserver scenario, INCLUDING zfs.  He is building
> > a FILESERVER.  This is not an APPLICATION server.  You seem to be
> > stuck on this idea that everyone is using ZFS on the server they're
> > running the application.  That does a GREAT job of creating disparate
> > storage islands, something EVERY enterprise is trying to get rid of.
> > Not create more of.
>
> I think you'd be surprised how large an organisation can migrate most,
> if not all of their application servers to zones one or two Thumpers.
>
> Isn't that the reason for buying in "server appliances"?
>
> Ian
>
>
I think you'd be surprised how quickly they'd be fired for putting that much
risk into their enterprise.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-09-30 Thread Tim
On Tue, Sep 30, 2008 at 11:58 PM, Nicolas Williams <[EMAIL PROTECTED]
> wrote:

> On Tue, Sep 30, 2008 at 08:54:50PM -0500, Tim wrote:
> > As it does in ANY fileserver scenario, INCLUDING zfs.  He is building a
> > FILESERVER.  This is not an APPLICATION server.  You seem to be stuck on
> > this idea that everyone is using ZFS on the server they're running the
> > application.  That does a GREAT job of creating disparate storage
> islands,
> > something EVERY enterprise is trying to get rid of.  Not create more of.
>
> First off there's an issue of design.  Wherever possible end-to-end
> protection is better (and easier to implement and deploy) than
> hop-by-hop protection.
>
> Hop-by-hop protection implies a lot of trust.  Yes, in a NAS you're
> going to have at least one hop: from the client to the server.  But how
> does the necessity of one hop mean that N hops is fine?  One hop is
> manageable.  N hops is a disaster waiting to happen.


Who's talking about N hops?  WAFL gives you the exact same number of hops as
ZFS.


>
>
> Second, NAS is not the only way to access remote storage.  There's also
> SAN (e.g., iSCSI).  So you might host a DB on a ZFS pool backed by iSCSI
> targets.  If you do that with a random iSCSI target implementation then
> you get end-to-end integrity protection regardless of what else the
> vendor does for you in terms of hop-by-hop integrity protection.  And
> you can even host the target on a ZFS pool, in which case there's two
> layers of integrity protection, and so some waste of disk space, but you
> get the benefit of very flexible volume management on both, the
> initiator and the target.
>

I don't recall saying it was.  The original poster is talking about a
FILESERVER, not iSCSI targets.  As off topic as it is, the current iSCSI
target is hardly fully baked or production ready.


>
> Third, who's to say that end-to-end integrity protection can't possibly
> be had in a NAS environment?  Sure, with today's protocols you can't
> have it -- you can get hop-by-hop protection with at least one hop (see
> above) -- but having end-to-end integrity protection built-in to the
> filesystem may enable new NAS protocols that do provide end-to-end
> protection.  (This is a variant of the first point above: good design
> decisions pay off.)
>
>
Which would apply to WAFL as well as ZFS.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-10-01 Thread Tim
On Wed, Oct 1, 2008 at 9:18 AM, Joerg Schilling <
[EMAIL PROTECTED]> wrote:

> David Magda <[EMAIL PROTECTED]> wrote:
>
> > On Sep 30, 2008, at 19:09, Tim wrote:
> >
> > > SAS has far greater performance, and if your workload is extremely
> > > random,
> > > will have a longer MTBF.  SATA drives suffer badly on random
> > > workloads.
> >
> > Well, if you can probably afford more SATA drives for the purchase
> > price, you can put them in a striped-mirror set up, and that may help
> > things. If your disks are cheap you can afford to buy more of them
> > (space, heat, and power not withstanding).
>
> SATA and SAS disks usually base on the same drive mechanism. The seek times
> are most likely identical.
>
> Some SATA disks support tagged command queueing and others do not.
> I would asume that there is no speed difference between SATA with command
> queueing and SAS.
> Jörg
>
>

Ummm, no.  SATA and SAS seek times are not even in the same universe.  They
most definitely do not use the same mechanics inside.  Whoever told you that
rubbish is an outright liar.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-10-01 Thread Tim
On Wed, Oct 1, 2008 at 10:28 AM, Bob Friesenhahn <
[EMAIL PROTECTED]> wrote:

> On Wed, 1 Oct 2008, Tim wrote:
>
>>
>>> I think you'd be surprised how large an organisation can migrate most,
>>> if not all of their application servers to zones one or two Thumpers.
>>>
>>> Isn't that the reason for buying in "server appliances"?
>>>
>>>  I think you'd be surprised how quickly they'd be fired for putting that
>> much
>> risk into their enterprise.
>>
>
> There is the old saying that "No one gets fired for buying IBM".  If one
> buys an IBM system which runs 30 isolated instances of Linux, all of which
> are used for mission critical applications, is this a similar risk to
> consolidating storage on a Thumper since we are really talking about just
> one big system?
>
> In what way is consolidating on Sun/Thumper more or less risky to an
> enterprise than consolidating on a big IBM server with many subordinate OS
> instances?
>
> Bob
> ==
> Bob Friesenhahn
> [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
>
>
Are you honestly trying to compare a Thumper's reliability to an IBM
mainframe?  Please tell me that's a joke...  We can start at redundant,
hot-swappable components and go from there.  The Thumper can't even hold a
candle to Sun's own older SPARC platforms.  It's not even in the same game
as the IBM mainframes.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-10-01 Thread Tim
On Wed, Oct 1, 2008 at 11:20 AM, <[EMAIL PROTECTED]> wrote:

>
>
> >Ummm, no.  SATA and SAS seek times are not even in the same universe.  They
> >most definitely do not use the same mechanics inside.  Whoever told you that
> >rubbish is an outright liar.
>
>
> Which particular disks are you guys talking about?
>
> I;m thinking you guys are talking about the same 3.5" w/ the same RPM,
> right?  We're not comparing 10K/2.5 SAS drives agains 7.2K/3.5 SATA
> devices, are we?
>
> Casper
>
>
I'm talking about 10k and 15k SAS drives, which is what the OP was talking
about from the get-go.  Apparently this is yet another case of subsequent
posters completely ignoring the topic and taking us off on tangents that
have nothing to do with the OP's problem.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-10-01 Thread Tim
On Wed, Oct 1, 2008 at 11:53 AM, Ahmed Kamal <
[EMAIL PROTECTED]> wrote:

> Thanks for all the opinions everyone, my current impression is:
> - I do need as much RAM as I can afford (16GB look good enough for me)
>

Depends on both the workload, and the amount of storage behind it.  From
your descriptions though, I think you'll be ok.


> - SAS disks offers better iops & better MTBF than SATA. But Sata offers
> enough performance for me (to saturate a gig link), and its MTBF is around
> 100 years, which is I guess good enough for me too. If I wrap 5 or 6 SATA
> disks in a raidz2 that should give me "enough" protection and performance.
> It seems I will go with sata then for now. I hope for all practical purposes
> the raidz2 array of say 6 sata drives are "very well protected" for say the
> next 10 years! (If not please tell me)
>

***Only if you have a sequential workload.  It's not a blanket "SATA is fast
enough".



> - This will mainly be used for NFS sharing. Everyone is saying it will have
> "bad" performance. My question is, how "bad" is bad ? Is it worse than a
> plain Linux server sharing NFS over 4 sata disks, using a crappy 3ware raid
> card with caching disabled ? coz that's what I currently have. Is it say
> worse that a Linux box sharing over soft raid ?
>

Whoever is saying that is being dishonest.  NFS is plenty fast for most
workloads.  There are very, VERY few workloads in the enterprise that are
throughput (I/O bandwidth) bound; they are almost all IOPS bound.


> - If I will be using 6 sata disks in raidz2, I understand to improve
> performance I can add a 15k SAS drive as a Zil device, is this correct ? Is
> the zil device per pool. Do I loose any flexibility by using it ? Does it
> become a SPOF say ? Typically how much percentage improvement should I
> expect to get from such a zil device ?
>

ZILs come with their own fun.  Isn't there still the issue of losing the
entire pool if you lose the ZIL?  And you can't get it back without
extensive, ugly work?
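
For what it's worth, adding a separate log device is a one-liner (pool and
device names are made up):

  # zpool add tank log c2t0d0

but as far as I know you can't remove it again afterwards, which is exactly
the recovery concern above.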
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-10-01 Thread Tim
On Wed, Oct 1, 2008 at 12:51 PM, Joerg Schilling <
[EMAIL PROTECTED]> wrote:

>
> Did you recently look at spec files from drive manufacturers?
>
> If you look at drives in the same category, the difference between a SATA
> and a
> SAS disk is only the firmware and the way the drive mechanism has been
> selected.
> Another difference is that SAS drives may have two SAS interfaces instead
> of the
> single SATA interface found in the SATA drives.
>
> IOPS/s depend on seek times, latency times and probably on disk cache size.
>
> If you have a drive with 1 ms seek time, the seek time is not really
> important.
> What's important is the latency time which is 4ms for a 7200 rpm drive and
> only
> 2 ms for 15000 rpm drive.
>
> People who talk about SAS usually forget that they try to compare 15000 rpm
> SAS drives with 7200 rpm SATA drives. There are faster SATA drives but
> these
> drives consume more power.
>

That's because the faster SATA drives cost just as much money as their SAS
counterparts for less performance and none of the advantages SAS brings, such
as dual ports.  Not to mention none of them can be dual-sourced, making them a
non-starter in the enterprise.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-10-02 Thread Tim
On Thu, Oct 2, 2008 at 11:43 AM, Joerg Schilling <
[EMAIL PROTECTED]> wrote:

>
>
> You seem to missunderstand drive physics.
>
> With modern drives, seek times are not a dominating factor. It is the
> latency
> time that is rather important and this is indeed 1/rotanilnal-speed.
>
> On the other side you did missunderstand another important fact in drive
> physics:
>
> The sustained transfer speed of a drive is proportional to the linear data
> density on the medium.
>
> The third mistake you make is to see that confuse the effects from the
> drive interface type with the effects from different drive geometry. The
> only
> coincidence here is that the drive geometry is typically updated more
> frequently for SATA drives than it is for SAS drives. This way, you benefit
> from the higher data density of a recent SATA drive and get a higher
> sustained
> data rate.
>
> BTW: I am not saying it makes no sense to buy SAS drives but it makes sense
> to
> look at _all_ important parameters. Power consumption is a really important
> issue here and the reduced MTBF from using more disks is another one.
>
> Jörg
>
> --
>

Please, give me a list of enterprises currently using SATA drives for their
database workloads, VMware workloads... hell, any workload besides email
archiving, lightly used CIFS shares, or streaming sequential transfers of
large files.  I'm glad you can sit there with a spec sheet and tell me how
you think things are going to work.  I can tell you from real-life
experience you're not even remotely correct in your assumptions.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-10-02 Thread Tim
On Thu, Oct 2, 2008 at 10:56 AM, Bob Friesenhahn <
[EMAIL PROTECTED]> wrote:

>
>
> I doubt that anyone will successfully argue that SAS drives offer the
> best IOPS/$ value as long as space, power, and reliability factors may
> be ignored.  However, these sort of enterprise devices exist in order
> to allow enterprises to meet critical business demands which are
> otherwise not possible to be met.
>
> There are situations where space, power, and cooling are limited and
> the cost of the equipment is less of an issue since the space, power,
> and cooling cost more than the equipment.
>
> Bob
>

It's called USABLE IOPS/$.  You can throw 500 drives at a workload; if
you're attempting to access lots of small files in random ways, it won't
make a lick of difference.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Comments on green-bytes

2008-10-06 Thread Tim
On Mon, Oct 6, 2008 at 3:00 PM, "C. Bergström" <[EMAIL PROTECTED]>wrote:

> Matt Aitkenhead wrote:
> > I see that you have wasted no time. I'm still determining if you have a
> sincere interest in working with us or alternatively have an axe to grind.
> The latter is shining through.
> >
> > Regards,
> > Matt
> >
> Hi Matt,
>
> I'd like to make our correspondence in public if you don't mind so my
> intention isn't mistaken.  My point wasn't at all to grind an axe.
>
> 1) That's no way to encourage a company which is already scared of open
> source to even think about releasing patches. (Sun's marketing isn't
> stupid.. they did this because it's good for them)
> 2) I am sincerely interested in your product (as others seem to be as well)
>
> Code review, increased testing and viral marketing are all typically
> good things.  Anyway, hope this clears things up.
>
> Cheers,
>
> ./C
>

ZFS is licensed under the CDDL, and as far as I know does not require
derivative works to be open source.  It's truly free like the BSD license in
that companies can take CDDL code, modify it, and keep the content closed.
They are not forced to share their code.  That's why there are "closed"
patches that go into mainline Solaris, but are not part of OpenSolaris.

While you may not like it, this isn't the GPL.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Strategies to avoid single point of failure w/ X45x0 Servers?

2008-10-09 Thread Tim
On Thu, Oct 9, 2008 at 3:09 PM, Solaris <[EMAIL PROTECTED]> wrote:

> I have been leading the charge in my IT department to evaluate the Sun
> Fire X45x0 as a commodity storage platform, in order to leverage
> capacity and cost against our current NAS solution which is backed by
> EMC Fiberchannel SAN.  For our corporate environments, it would seem
> like a single machine would supply more than triple our current usable
> capacity on our NAS, and the cost is significantly less per GB.  I am
> also working to prove the multi-protocol shared storage capabilities
> of the Thumper significantly out perform those of our current solution
> (which is notoriously bad from the end user perspective).
>
> The EMC solution is completely redundant with no single point of
> failure.  What are some good strategies for providing a Thumper
> solution with no single point of failure?
>
> The storage folks are poo-poo'ing this concept because of the chances
> for an Operating System failure... I'd like to come up with some
> reasonable methods to put them in their place :)
>

Unless you're talking about buying multiple Thumpers and mirroring them,
there are none.  The motherboard is a single point of failure.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RAID-Z True Volume Size?

2008-10-20 Thread Tim
On Mon, Oct 20, 2008 at 11:32 AM, William Saadi <[EMAIL PROTECTED]>wrote:

> Hi all,
>
> I have a little question.
> WIth RAID-Z rules, what is the true usable disks space?
> Is there a calcul like any RAID (ex. RAID5 = nb of disks - 1 for parity)?
>
>
# of disks - 1 for parity
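
e.g. with 6 x 1TB drives in a single raidz you get roughly (6 - 1) x 1TB = 5TB
usable (a little less after metadata overhead); raidz2 is # of disks - 2, so
the same 6 drives would give about 4TB.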
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is there a baby thumper?

2008-11-04 Thread Tim
Well, what's the end goal?  What are you testing for that you need from the
thumper?

I/O interfaces?  CPU?  Chipset?  If you need *everything* you don't have any
other choice.

--Tim

On Tue, Nov 4, 2008 at 5:11 PM, Gary Mills <[EMAIL PROTECTED]> wrote:

> On Tue, Nov 04, 2008 at 03:31:16PM -0700, Carl Wimmi wrote:
> >
> > There isn't a de-populated version.
> >
> > Would X4540 with 250 or 500 GB drives meet your needs?
>
> That might be our only choice.
>
> --
> -Gary Mills--Unix Support--U of M Academic Computing and
> Networking-
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Sun Storage 7000

2008-11-10 Thread Tim
Just got an email about this today.  Fishworks finally unveiled?

http://www.sun.com/launch/2008-1110/index.jsp
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sun Storage 7000

2008-11-10 Thread Tim
On Mon, Nov 10, 2008 at 3:07 PM, Andy Lubel <[EMAIL PROTECTED]> wrote:

> LOL, I guess Sun forgot that they had xvm!  I wonder if you could use a
> converter (vmware converter) to make it work on vbox etc?
>
> I would also like to see this available as an upgrade to our 4500's..
> Webconsole/zfs just stinks because it only paints a tiny fraction of the
> overall need for a web driven GUI.
>
> Anyone know if something like that is in the works?  It looks like a
> nice appliance for file shares in a corp network.
>
> -Andy
>
>

I doubt you'll ever see it on the 4500.  The 4540 perhaps.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OpenStorage GUI

2008-11-12 Thread Tim
On Wed, Nov 12, 2008 at 2:38 PM, Andy Lubel <[EMAIL PROTECTED]> wrote:

>
>
> I too would like to see how this happens, checked with some Sun people
> and they didn't know of a way to "upgrade" a 4500 other than trading it
> in.  Im assuming the Motherboard/CPU/Memory get swapped out, and from
> the chasis layout, looks fairly involved. We don't want to "upgrade"
> something that we just bought so we can take advantage of this software
> which appears to finally complete the Sun NAS picture with zfs!
>
> -Andy
>


Couldn't you just swap out the hard drives?

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Storage 7000

2008-11-17 Thread Tim
I'm not sure if this is the right place for the question or not, but I'll
throw it out there anyway.  Does anyone know, if you create your pool(s)
with a system running Fishworks, can that pool later be imported by a
standard Solaris system?  I.e., if for some reason the head running Fishworks
were to go away, could I attach the JBOD/disks to a system running
snv/mainline Solaris/whatever, and import the pool to get at the data?  Or
is the ZFS underneath Fishworks proprietary as well?
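
In plain-ZFS terms, I'm basically asking whether something like this would
work on a generic box (pool name made up; -f only if it wasn't cleanly
exported):

  # zpool import
  # zpool import -f mypool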

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Storage 7000

2008-11-17 Thread Tim
On Mon, Nov 17, 2008 at 12:48 PM, Eric Schrock <[EMAIL PROTECTED]> wrote:

> Yes, the on-disk format is compatible.  You cannot, however, do the
> reverse.  Importing arbitrary Solaris pools (or former Fishworks pools)
> into the Fishworks environment is not supported.  While the on-disk
> format is the same, the dataset hierarchy is specific to the appliance,
> and properties much be configured in an implementation-defined way for
> the software to function correctly.
>
> - Eric
>
> --
> Eric Schrock, Fishworks
> http://blogs.sun.com/eschrock
>

So that leads me to my second question then:  If I buy a 7410 with a single
JBOD attached, can I easily attach a second JBOD and grow the pool?  It
would seem the logical answer is "yes", since growing the pool would just
require adding one or more additional vdevs.  Some of the literature seems to
suggest otherwise though.  I can totally understand the issue with adding
one, or just a few additional disks to the pool, but if you were to double
the number of disks, in theory, that should be fairly seamless.
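
On a plain ZFS box, doubling up like that would just be adding another raidz2
vdev to the pool, something like (pool and device names made up):

  # zpool add mypool raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0

so the question is really whether the appliance software exposes the
equivalent.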

Thanks,

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Storage 7000

2008-11-17 Thread Tim
On Mon, Nov 17, 2008 at 1:14 PM, Eric Schrock <[EMAIL PROTECTED]> wrote:

>
>
> Yes, we support adding whole or half JBODs.  We do not support adding
> individual disks or arbitrarily populated JBODs.  If you want the
> ability to survive JBOD failure ("NSPF" in our storage config terms),
> you must add JBODs in groups (i.e. two at a time in a mirrored config)
> so that we can mirror/RAID across them.  Even without NSPF, we have
> redundant cables, HBAs, power supplies, and controllers, so this is only
> required if you are worried about disk backplane failure (a very rare
> failure mode).
>
> Can you point to the literature that suggests this is not possible?
>
> - Eric
>
> --
> Eric Schrock, Fishworks
> http://blogs.sun.com/eschrock
>


Actually I think it was your blog :D

Specifically this post:
http://blogs.sun.com/eschrock/entry/fishworks_storage_configuration


And this passage:
"If there is a broken or missing disk, we don't let you proceed without
explicit confirmation. The reason we do this is that once the storage pool
is configured, there is no way to add those disks to the pool without
changing the RAS and performance characteristics you specified during
configuration."

The post as a whole makes it unclear whether you are able to add more disks in
the future without changing your RAS/performance characteristics.  That
specific example, after re-reading, appears to refer to the case of broken or
missing disks.  I guess everything I've read about Fishworks talks about
"making your choice" at first configuration, and doesn't really make it clear
what the process or requirements are for adding disks in the future.  Knowing
ZFS, I made the assumption that it was possible via means like you've described
above.  If I had no exposure to ZFS though, it might be a bit less clear, and I
guess in this case, I wasn't 100% positive even with the background I have.

If I've missed something, it wouldn't be the first time ADD has gotten the
best of me, so I apologize.  Feel free to correct as you see fit.


Thanks for the help,

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Storage 7000

2008-11-17 Thread Tim
On Mon, Nov 17, 2008 at 2:36 PM, Eric Schrock <[EMAIL PROTECTED]> wrote:

> On Mon, Nov 17, 2008 at 01:38:29PM -0600, Tim wrote:
> >
> > And this passage:
> > "If there is a broken or missing disk, we don't let you proceed without
> > explicit confirmation. The reason we do this is that once the storage
> pool
> > is configured, there is no way to add those disks to the pool without
> > changing the RAS and performance characteristics you specified during
> > configuration."
>
> This is what I meant by "individual disks or arbitrarily populated
> JBODs" in my last message.  We don't support addiing JBODs with broken
> or missing disks, but full (or half) populated JBODs can be added at
> will.
>
> > The post as a whole makes it unclear if you are able to add more disk in
> the
> > future or not without changing your RAS/performance.  That specific
> example,
> > after re-reading, appears to be referring to the "one broken disk(s)".  I
> > guess everything I've read about fishworks talks about "making your
> choice"
> > at first configuration, and doesn't really make it clear what the process
> or
> > requirements are for adding disk in the future.  Knowing zfs, I made the
> > assumption that it was possible via means like you've described above.
>  If I
> > had no exposure to zfs though, it might be a bit less clear, and I guess
> in
> > this case, I wasn't 100% positive even with the background I have.
> >
> > If I've missed something, it wouldn't be the first time ADD has gotten
> the
> > best of me, so I apologize.  Feel free to correct as you see fit.
>
> I can update the blog entry if it's misleading.  I assumed that it was
> implicit that the absence of the above (missing or broken disks) meant
> supported, but I admit that I did not state that explicitly, and not in
> the context of adding storage.
>
> Thanks,
>
> - Eric
>
> --
> Eric Schrock, Fishworks
> http://blogs.sun.com/eschrock
>


I follow exactly what you mean after the clarification in this thread.  I
think it would be helpful for other readers if you put a blurb of some sort
like: "when adding storage, in order to maintain RAS/performance, we require
you to add either a half or full JBOD at a time".  Just my .02 :)

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS space efficiency when copying files from another source

2008-11-17 Thread Tim
On Mon, Nov 17, 2008 at 3:33 PM, Will Murnane <[EMAIL PROTECTED]> wrote:

> On Mon, Nov 17, 2008 at 20:54, BJ Quinn <[EMAIL PROTECTED]> wrote:
> > 1.  Dedup is what I really want, but it's not implemented yet.
> Yes, as I read it.  greenBytes [1] claims to have dedup on their
> system; you might investigate them if you decide rsync won't work for
> your application.
>
> > 2.  The only other way to accomplish this sort of thing is rsync (in
> other words, don't overwrite the block in the first place if it's not
> different), and if I'm on Windows, I'll just have to go ahead and install
> rsync on my Windows boxes if I want it to work correctly.
> I believe so, yes.  Other programs may have the same capability, but
> rsync by any other name would smell as sweet.
>
> > Wmurnane, you mentioned there was a Windows-based rsync daemon.  Did you
> mean one other than the cygwin-based version?  I didn't know of any native
> Windows rsync software.
> The link I gave ([2]) contains a version of rsync which is
> ``self-contained''---it does use Cygwin libraries, but it includes its
> own copies of the ones it needs.  It's also nicely integrated with the
> Windows management tools, in that it uses a Windows service and
> Windows scheduled tasks to do its job rather than re-inventing
> circular rolling things everywhere.
>
>

Rsync:
http://www.nexenta.com/corp/index.php?option=com_content&task=view&id=64&Itemid=85
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Upgrading from a single disk.

2008-11-19 Thread Tim
On Wed, Nov 19, 2008 at 8:39 AM, Ross <[EMAIL PROTECTED]> wrote:

> Sorry to ask a dumb question, but is that before or after you attach the
> disk to the mirror?
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>


For me I had to wait until after I rebooted after installing opensolaris.  I
attached the disk right away on first boot after install, and for some
reason it wouldn't let me install grub, claiming that s0 was an invalid
location.  After a reboot though, all was well.
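
For reference, the sequence that finally worked for me (after that reboot)
was roughly this -- device names are just examples from my box:

  # attach a second disk to turn the single-disk rpool into a mirror
  pfexec zpool attach rpool c0t0d0s0 c0t1d0s0

  # once resilvering finishes, put grub on the new disk so it can boot too
  pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0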

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Upgrading from a single disk.

2008-11-20 Thread Tim
On Thu, Nov 20, 2008 at 4:41 AM, <[EMAIL PROTECTED]> wrote:

>
> >If you need to install bootblocks, it doesn't matter when you do it.
> >
> >But I found the answers I want, I think what I need to do is:
> >
> >
> >   zpool attach export disk#0 disk#1
> >
> >(this makes the single pool into a mirror)
> >
> >And if resilvering is done, I can use:
> >
> >   zpool replace export disk#0 disk#2
> >
>
> It was indeed very simple; you can even run both commands at the same time
> and the result is that the first disk is read only once.
>
> The only thing that didn't work is making the zpool bigger.
>
> That seems to have required a reboot.  Before the reboot it just kept
> the old (first disk) size.
>
> Is there a specific way to make zfs reevaluate the size without requiring
> a reboot (or export/import)
>
> Casper


It's a known bug; I don't have the ID offhand, but I'm sure someone can get
it to you ;)
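
For anyone hitting this on a later build: there are a couple of knobs that
are supposed to address exactly this, so treat the following as a pointer
rather than something I've verified on your build:

  # later builds grow the pool automatically once every device in a vdev
  # has been replaced with a bigger one
  zpool set autoexpand=on export

  # or ask ZFS to re-read a single device's size in place
  zpool online -e export disk#2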

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs hardware failure questions

2008-11-20 Thread Tim
On Thu, Nov 20, 2008 at 9:54 AM, Kam <[EMAIL PROTECTED]> wrote:

> I was asked a few interesting questions by a fellow co-worker regarding ZFS
> and after much google-bombing, still can't find answers. I've seen several
> people try to ask these questions, but only to have been answered
> indirectly.
>
> If I have a pool that consists of a raidz-1 w/ three 500gb disks and I go
> to the store and buy a fourth 500gb disk and add it to the pool as the
> second vdev, what happens when that fourth disk has a hardware failure?


Your pool is toast.  If you set copies=2, and get lucky enough that a copy
of every block on the standalone disk also lands on the raidz vdev, you might
be able to survive, but the odds are pretty slim.

Long story short, don't ever do it.  Not only will you have messed up
performance, you've basically wasted the drive used for parity in the raidz
vdev.  The pool isn't any better than a raid-0 at this point.


>
>
> The second question is lets say I have two disks and I create a non-parity
> pool [2 vdevs creating /tank] with a single child filesystem
> [/tank/fscopies2/] in the pool with the copies=2 attribute. If I lose one of
> these disks, will I still have access to my files? If you were to add a
> third disk to this filesystem as a third vdev at a future point in time,
> would there be any scenario where a hardware failure would cause the rest of
> the pool to be unreadable?
>

Same as above, not necessarily.  There's nothing guaranteeing where the two
copies will exist, just that there will be two.  They may both be on one
disk, or they may not.  This is more to protect against corrupt blocks if
you only have a single drive than against losing an entire drive.

Moral of the story continues to be: If you want protection against a failed
disk, use a raid algorithm that provides it.
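
To make that concrete, here's roughly what the choices look like from the
command line (pool, dataset, and device names made up):

  # the dangerous route -- zpool even warns you before letting an
  # unprotected disk into a raidz pool, so you have to force it
  zpool add tank c1t4d0
  #   -> "mismatched replication level: ... use '-f' to override"

  # safer uses for that fourth disk until you have enough drives for
  # another redundant vdev
  zpool add tank spare c1t4d0        # hot spare for the raidz
  zfs set copies=2 tank/important    # extra block copies, but NOT a
                                     # substitute for real redundancy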

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs hardware failure questions

2008-11-20 Thread Tim
On Thu, Nov 20, 2008 at 12:35 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:

> >>>>> "t" == Tim  <[EMAIL PROTECTED]> writes:
>
> >> a fourth 500gb disk and add
> >> it to the pool as the second vdev, what happens when that
> >> fourth disk has a hardware failure?
>
> t> If you set copies=2, and get lucky enough
> t> that copies of every block on the standalone are copied to the
> t> raidz vdev, you might be able to survive,
>
> no, of course you won't survive.  Just try it with file vdev's before
> pontificating about it.
>
> -8<
> bash-3.00# mkfile 64m t0
> bash-3.00# mkfile 64m t1
> bash-3.00# mkfile 64m t2
> bash-3.00# mkfile 64m t00
> bash-3.00# pwd -P
> /usr/export
> bash-3.00# zpool create foolpool raidz1 /usr/export/t{0..2}
> bash-3.00# zpool add foolpool /usr/export/t00
> invalid vdev specification
> use '-f' to override the following errors:
> mismatched replication level: pool uses raidz and new vdev is file
> bash-3.00# zpool add -f foolpool /usr/export/t00
> bash-3.00# zpool status -v foolpool
>  pool: foolpool
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME                STATE     READ WRITE CKSUM
>         foolpool            ONLINE       0     0     0
>           raidz1            ONLINE       0     0     0
>             /usr/export/t0  ONLINE       0     0     0
>             /usr/export/t1  ONLINE       0     0     0
>             /usr/export/t2  ONLINE       0     0     0
>           /usr/export/t00   ONLINE       0     0     0
>
> errors: No known data errors
> bash-3.00# zfs set copies=2 foolpool
> bash-3.00# cd /
> bash-3.00# pax -rwpe sbin foolpool/
> bash-3.00# > /usr/export/t00
> bash-3.00# pax -w foolpool/ > /dev/null
> bash-3.00# zpool status -v foolpool
>  pool: foolpool
>  state: DEGRADED
> status: One or more devices has experienced an unrecoverable error.  An
>attempt was made to correct the error.  Applications are unaffected.
> action: Determine if the device needs to be replaced, and clear the errors
>using 'zpool clear' or replace the device with 'zpool replace'.
>   see: http://www.sun.com/msg/ZFS-8000-9P
>  scrub: none requested
> config:
>
>         NAME                STATE     READ WRITE CKSUM
>         foolpool            DEGRADED     4     0    21
>           raidz1            ONLINE       0     0     0
>             /usr/export/t0  ONLINE       0     0     0
>             /usr/export/t1  ONLINE       0     0     0
>             /usr/export/t2  ONLINE       0     0     0
>           /usr/export/t00   DEGRADED     4     0    21  too many errors
>
> errors: No known data errors
> bash-3.00# zpool offline foolpool /usr/export/t00
> cannot offline /usr/export/t00: no valid replicas
> bash-3.00# zpool export foolpool
> panic[cpu0]/thread=2a1016b7ca0: assertion failed: vdev_config_sync(rvd,
> txg) == 0, file: ../../common/fs/zfs/spa.c, line: 3125
>
> 02a1016b7850 genunix:assfail+78 (7b72c668, 7b72b680, c35, 183d800,
> 1285c00, 0)
>  %l0-3: 0422 0081 03001df5e580 70170880
>  %l4-7: 060016076c88  01887800 
> 02a1016b7900 zfs:spa_sync+244 (3001df5e580, 42, 30043434e30, 7b72c400,
> 7b72b400, 4)
>  %l0-3:  03001df5e6b0 03001df5e670 0600155cce80
>  %l4-7: 030056703040 060013659200 03001df5e708 018c2e98
> 02a1016b79c0 zfs:txg_sync_thread+120 (60013659200, 42, 2a1016b7a70,
> 60013659320, 60013659312, 60013659310)
>  %l0-3:  0600136592d0 0600136592d8 060013659316
>  %l4-7: 060013659314 0600136592c8 0043 0042
>
> syncing file systems... done
> [...first reboot...]
> WARNING: ZFS replay transaction error 30, dataset boot/usr, seq 0x134c,
> txtype 9
>
> NOTICE: iscsi: SendTarget discovery failed (11) [``patiently
> waits'' forever]
>
> ~#Type  'go' to resume
> {0} ok boot -m milestone=none
> Resetting ...
> [...second reboot...]
>
> # /sbin/mount -o remount,rw /
> # /sbin/mount /usr
> # iscsiadm remove discovery-address 10.100.100.135
> # iscsiadm remove discovery-address 10.100.100.138
> # cd /usr/export
> # mkdir hide
> # mv t0 t1 t2 t00 hide
> mv: cannot access t00   [haha ZFS.]
> # sync
> # reboot
> syncing file systems... done
> [...third reboot...]
> SunOS Release 5.11 Version snv_71 64-bit
> Copyright 1983-2007 Sun Microsystems, Inc.  All rights reserved.
> Use is subject to license terms.
> NOTICE: mddb: unable to ge

Re: [zfs-discuss] zfs hardware failure questions

2008-11-20 Thread Tim
On Thu, Nov 20, 2008 at 5:02 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:

> >>>>> "t" == Tim  <[EMAIL PROTECTED]> writes:
>
> t> Pretty sure ALL of the above are settings that can be changed.
>
> nope.  But feel free to be more specific.  Or repeat the test
> yourself---it only takes like fifteen minutes.  It'd take five if not
> for the rebooting.


Uhh, yes.  There's more than one post here describing how to set what the
system does when the pool is in a faulted state/you lose a drive.  Wait,
panic, proceed.  If we had any sort of decent search function I'd already
have the response.  As such, it'll take a bit of time for me to dig it up.



>
>
> t> Not to mention that *backline*, from what I've seen, can still
> t> get the data if you have valid copies of the blocks.
>
> Can you elaborate?  There have been a lot of posts here about pools in
> FAULTED state that won't import---I'm sure those posters would be even
> more interested than I to know how they can recover their data using
> *backline*.
>
>
I'd have to search the list, but I've read of more than one person here that
has worked with a Sun Engineer to manually re-create the metadata.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs hardware failure questions

2008-11-20 Thread Tim
On Thu, Nov 20, 2008 at 5:02 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:

> >>>>> "t" == Tim  <[EMAIL PROTECTED]> writes:
>
> t> Pretty sure ALL of the above are settings that can be changed.
>
> nope.  But feel free to be more specific.  Or repeat the test
> yourself---it only takes like fifteen minutes.  It'd take five if not
> for the rebooting.
>

Here's this one:

PSARC 2007/567 zpool failmode property
http://prefetch.net/blog/index.php/2008/03/01/configuring-zfs-to-gracefully-deal-with-failures/
http://mail.opensolaris.org/pipermail/onnv-notify/2007-October/012782.html
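
The property it adds is set per pool, something like this (pool name made
up):

  zpool get failmode tank              # default is 'wait'
  zpool set failmode=continue tank     # return EIO instead of blocking
  # valid values: wait | continue | panic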
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs hardware failure questions

2008-11-20 Thread Tim
On Thu, Nov 20, 2008 at 5:37 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:

> >>>>> "t" == Tim  <[EMAIL PROTECTED]> writes:
>
>  t> Uhh, yes.  There's more than one post here describing how to
> t> set what the system does when the pool is in a faulted
> t> state/you lose a drive.  Wait, panic, proceed.
>
> (a) search for it, try it, then report your results.  Don't counter a
>reasonably-designed albeit quick test with stardust and fantasy
>based on vague innuendos.
>
> (b) ``ALL of the above are settings that can be changed.''  Please
>tell me how to fix ``ALL'' my problems, which you claim are all
>fixable without enumerating them.  I will:
>
>1. panic on exporting pool
>
>2. 'zpool import foolpool' coredumps
>
>3. pools backed by unavailable iSCSI targets and stored in
>   zpool.cache prevent bootup
>
>4. WARNING: ZFS replay transaction error 30, dataset boot/usr, seq
> 0x134c, txtype 9
>
>5. disappearing, reappearing /usr/export/t00 (the backing for the
>   file vdev was another ZFS pool, so this is also a ZFS problem)
>
> Yeah, I'm pressing the issue.  But I'm annoyed because I gave the
> question an honest go and got a clear result, and instead of repeating
> my test as you easily could, you dismiss the result without even
> reading it carefully.  I think it's possible to give much better
> advice by trying the the thing out than by extrapolating marketing
> claims.


I gave you the PSARC.



>
> t> I've read of more than one person here that has worked with a
> t> Sun Engineer to manually re-create the metadata.
>
> (a) not in this situation, they didn't.  The loss was much less
>catastrophic than a whole vdev---in fact it wasn't ever proven
>that anything was lost at all by the underlying storage.  I think
>the pool may also have been simpler structure, like a
>single-device vdev exported as a LUN from a SAN.
>
> (b) that's not the same thing as *backline*, whatever that is.
>
> (c) that's pretty far short of ``if you get lucky enough ... you might
>be able to survive.''  My claim: there is zero chance of importing
>the pool in the normal, supported way after this failure scenario.
>
> (d) I don't think the OP had in mind counting on support from Sun
>engineering when you responded by suggesting an unredundant pool
>with copies=2 might suit his needs.
>

Which is why I told him not to do it, and that his chances were slim...
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] `zfs list` doesn't show my snapshot

2008-11-21 Thread Tim
On Fri, Nov 21, 2008 at 9:38 PM, Jens Elkner
<[EMAIL PROTECTED]> wrote:

>
>
> The uncomplete one - where is the '-t all' option? It's really annoying,
> error prone, time consuming to type stories on the command line ...
> Does anybody remember the "keep it small and simple" thing?
>
> Regards,
> jel.
> --
> Otto-von-Guericke University http://www.cs.uni-magdeburg.de/
> Department of Computer Science   Geb. 29 R 027, Universitaetsplatz 2
> 39106 Magdeburg, Germany Tel: +49 391 67 12768
> __



How is defaulting to output that makes the command unusable to the majority
of their customers keeping it simple?  Their choice of implementation does
leave something to be desired though... I would think it would make more
sense to have something like "zfs list snapshots", and if you wanted to
limit that to a specific pool "zfs list snapshots poolname".
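
For what it's worth, the syntax that does exist isn't far off from that --
roughly (pool name made up):

  zfs list -t snapshot                 # snapshots only
  zfs list -r -t snapshot poolname     # snapshots under one pool
  zfs list -r -t all poolname          # the '-t all' jel is asking for,
                                       # on builds that have it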

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RC1 Zfs writes a lot slower when running X

2008-11-21 Thread Tim
On Fri, Nov 21, 2008 at 11:33 PM, zerk <[EMAIL PROTECTED]> wrote:

> Hi,
>
> I have OpenSolaris on an Amd64 Asus-A8NE with 2gig of Rams and 4x320 gig
> sata drives in raidz1.
>
> With dd, I can write at quasi disk maximum speed of 80meg each for a total
> of 250meg/s if I have no Xsession at all (only console tty).
>
> But as soon as I have an Xsession running, the write speed drops to about
> 120MB/s.
> Its even worse if I have a VBoxHeadless running with an idle win2k3 inside.
> It drops to 30 MB/s.
>
> The CPU is at 0% in both cases and nothing is using the array either. I
> tried to investigate with DTrace without success...
>
> Anyone have a clue of what could be going on?
>
> Thanks
>
> Zerk
> --
>

Ya, you're using gobs of RAM that was normally being used by ZFS for
caching.  I would venture to guess that if you stuck another 2GB of RAM in
there you'd see far less of a *hit* from X or a VM.
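
If more RAM isn't an option, the usual workaround is to cap the ARC so ZFS
stops competing with X and the VM for memory -- a sketch only, and the size
below is just a guess for a 2GB box:

  # /etc/system -- limit the ARC to 512MB, then reboot
  set zfs:zfs_arc_max = 0x20000000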

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Supermicro AOC-USAS-L8i

2008-11-23 Thread Tim
On Sun, Nov 23, 2008 at 4:55 PM, James C. McPherson <[EMAIL PROTECTED]
> wrote:

> On Sun, 23 Nov 2008 06:13:51 -0800 (PST)
> Ross <[EMAIL PROTECTED]> wrote:
>
> > I'd also like to know how easy it is to identify drives when you use
> > this card?  Is it easy to know which is which after you've had a few
> > failure & swapped drives around?
>
> Hi Ross,
> in general, it's just as easy to identify drives attached to
> this card as it is to identify drives attached to any other
> card.
>
> The output from 'iostat -En' will help, along with careful
> observation of the drive's serial number and devid. You can
> find the devid (which, last I checked, is guaranteed to be
> unique) by looking for the "devid" property associated with
> your specific device.
>
>
>  :name  pci1000,3150
> sd (block, de0463)
>  :compatible   (de0482TBL)
>   | scsiclass,00.vATA.pST3320620AS.rD |
>   | scsiclass,00.vATA.pST3320620AS |
>   | scsa,00.bmpt |
>   | scsiclass,00 |
>   | scsiclass |
>  :sas-mpt
>  :pm-capable0x1
>  :inquiry-vendor-id ST3320620AS
>  :inquiry-revision-id   3.AAD
>  :inquiry-device-type   0
>  :inquiry-product-idST3320620AS
>  :devid
>
>  id1,[EMAIL PROTECTED]
>  :class scsi
>  :target0
>  :lun   0
>  :device-blksize0x200
>  :fm-ereport-capable
>  :ddi-kernel-ioctl
>  :ddi-failfast-supported
>  :pm-hardware-state needs-suspend-resume
>  :pm-components(de046eTBL)
>   | NAME=spindle-motor |
>   | 0=off |
>   | 1=on |
>  :lba-access-ok
>  :inquiry-serial-no 3QF0EAFP
>  :devfs-path/[EMAIL PROTECTED],0/pci10de,[EMAIL 
> PROTECTED]
> /pci1000,[EMAIL PROTECTED]/[EMAIL PROTECTED],0
>  :driver-name   sd
>  :binding-name  scsiclass,00
>  :bus-addr  0,0
>  :instance  3
>  :_classblock
>  :name  sd
>
>
>
> hth,
> James
> --
>

I don't know that that necessarily makes it *EASY* to find the drive,
especially if they're in a hot-swap bay.  Something like an "led_on" type
command would be helpful, whether that's done by sending IO to the drive to
light up an activity LED or through other means.
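
In the meantime the poor man's version is to map serial numbers and then
hammer the suspect disk with reads so its activity LED blinks -- the device
name below is just an example:

  iostat -En        # lists each device along with its serial number

  # make that drive's activity LED blink for ~30 seconds
  pfexec dd if=/dev/rdsk/c3t2d0s0 of=/dev/null bs=1024k count=3000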

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] So close to better, faster, cheaper....

2008-11-24 Thread Tim
On Mon, Nov 24, 2008 at 11:41 AM, marko b <[EMAIL PROTECTED]> wrote:

> At this point, this IS an academic exercize. I've tried to outline the
> motivations/justifications for wanting this particular functionality.
>
> I believe my architectural "why not?" and "is it possible?" question is
> sufficiently valid.
>
> Its not about disk cost. Its about being able to grow the pool easily, and
> without having to replace all the drives within the pool at the same time.
>
> Its about having the flexibility of not having to pre-determine what amount
> of data I want protected against failure at the time I build the pool.
>
> This theoretical NAS doesn't exist yet. But I don't forsee being able to
> build it out of a machine with more than 8 SATA bays. At this point, its
> likely to be a cheap dell server with an external 4 bay enclosure.
>
> -Marko
> --
>
> <http://mail.opensolaris.org/mailman/listinfo/zfs-discuss>

It's about what you want in a home device, not what Sun's target enterprise
market uses.  I suggest looking into Windows Home Server; it meets your
requirements today.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] So close to better, faster, cheaper....

2008-11-24 Thread Tim
On Mon, Nov 24, 2008 at 4:04 PM, marko b <[EMAIL PROTECTED]> wrote:

> Darren,
>
> Perhaps I misspoke when I said that it wasn't about cost. It is _partially_
> about cost.
>
> Monetary cost of drives isn't a major concern. At about $110-150 each.
> Loss of efficiency (mirroring 50%), zraid1 (25%), is a concern.
> Expense of sata bays, either in a single chassis or an external chassis is
> a concern.
> Expense of replacing entire pool sets of disks to grow is a concern.
>
> At this point zraid seems to be my only choice if I want 3TB of actual
> space. I may gain some benefit by using compression.
>

> Windows Home Server doesn't help me because I want xvm, iscsi and nfs. But
> the 'features' of WHS do approximate this fantasy setup I outlined.
>


Windows home server will eventually have hyper-v, and currently has
virtualbox or vmware workstation/server.

iSCSI?  There's an iSCSI initiator built in.  If you're talking iSCSI
targets, let me tell you right now you're fricking nuts if you think you're
going to run an iSCSI target backed by a non-redundant SATA disk.  If you
insist, that's already been created as well:
http://www.rocketdivision.com/wind.html

NFS can be had with Services for UNIX, or Hummingbird software if you have
something against free NFS.  It's included in Vista.



>
> Mirroring:
>
> Let me see if I'm understanding your suggestion. A stripe of mirrored
> pairs. I can grow by resizing an existing mirrored pair, or just attaching
> another mirrored pair to the stripe?
>
> This gives me growth capability at the cost of 50% disk capacity and
> requiring a large amount of sata ports/bays.
>
> I'm still holding out for an answer to my original question. :)
>



Your original question has been answered: Sun's market has absolutely no use
for those features.  They aren't building ZFS features for home users that
make them no money.  They're a business; their goal is to make money.  You
have the source, feel free to code it if you want it ;)  Just don't expect
to ever see it from Sun.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ESX integration

2008-11-24 Thread Tim
On Mon, Nov 24, 2008 at 2:22 PM, Ahmed Kamal <
[EMAIL PROTECTED]> wrote:

> Hi,
> Not sure if this is the best place to ask, but do Sun's new Amber road
> storage boxes have any kind of integration with ESX? Most importantly,
> quiescing the VMs, before snapshotting the zvols, and/or some level of
> management integration thru either the web UI or ESX's console ? If there's
> nothing official, did anyone hack any scripts for that?
>
> Regards
>

It can be scripted, but no, there's no integration.  If there were, it would
have to be a client-side / ESX service console application.
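
A rough outline of what people tend to script up -- the names and paths
below are completely made up, and the storage side would really go through
whatever CLI the appliance exposes:

  # 1. quiesce the guest from the ESX 3.x service console
  vmware-cmd /vmfs/volumes/datastore1/vm1/vm1.vmx createsnapshot pre-zfs "" 1 0

  # 2. snapshot the backing dataset on the storage box
  ssh root@nas zfs snapshot tank/esx_vol@$(date +%Y%m%d-%H%M)

  # 3. throw away the ESX-side snapshot once the ZFS one exists
  vmware-cmd /vmfs/volumes/datastore1/vm1/vm1.vmx removesnapshots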

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] New 2 ZFS: How assign user permission to read, write, execute a new filesys?

2008-11-24 Thread Tim
On Mon, Nov 24, 2008 at 7:54 PM, Richard Catlin
<[EMAIL PROTECTED]> wrote:

> I am new to OpenSolaris and ZFS.
>
> I created a new filesystem under an existing filesystem for a user
> Exists: /rpool/export/home/user01
> zfs create rpool/export/home/user01/fs1
>
> As root, I can add a file to fs1, but as user01, I don't have the
> permission.
>
> How do I give user01 permission?  Can I limit this to only user01?
>
> Thanks.
> Richard
>

Is this a user on the solaris system?  If so, do:

chown -R user01 /rpool/export/home/user01/fs1
chmod -R 700 /rpool/export/home/user01/fs1

This will give user01 full privileges for fs1, and no permissions for
everyone else.  You should really do some reading though; that's Unix 101:
http://www.perlfect.com/articles/chmod.shtml

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

